Conversation

Say we discover (somehow) that it's actually impossible to simulate a human-level conscious mind on inorganic substrate. What did we discover? (e.g. we're running in a simulation, and a resolution limit prematurely halts Moore's Law; etc)
It's a good question! I'm not quite sure, so I'll pose a weak frame: suppose we discover (positively) that there is some reason why an inorganic computational system could never "seem" (to all humans) to behave with human-like conscious agency. What must be true of such a world?
Any and all—the most expansive and permissive possible combination of answers to these and related questions. People simply don't regard these entities as conscious, no matter what.
But you have a well-defined set of observables there (the position of the ball). You can say an object levitates iff its position with respect to the ground, as observed by some instrument, doesn't change over time.
There are lots of human and non-human organic beings that don't pass Turing tests. So what? Inorganic beings will be conscious when people decide to treat them as such. Keep in mind that it does not have to be a universal belief in order to motivate people to act as if they are.
This analogy does not convince me, because you CAN make a ball *appear* to levitate. So, if you can make a process *appear* to be conscious, what's the big deal if it actually isn't (according to your hypothesis)? What's the concrete consequence?