Say we discover (somehow) that it's actually impossible to simulate a human-level conscious mind on inorganic substrate. What did we discover?
(e.g. we're running in a simulation, and a resolution limit prematurely halts Moore's Law; etc)
the word "simulate" is doing a lot of work here. simulate wrt what observables?
Turing-test-type things say more about the communication channel with the AI than they say about the AI
It's a good question! I'm not quite sure, so I'll pose a weak frame: suppose we discover (positively) that there is some reason why an inorganic computational system could never "seem" (to all humans) to behave with human-like conscious agency. What must be true of such a world?
"seem" through what medium? in what social context? with what priming? what observations do people do the "seeming" with?
Any and all—the most expansive and permissive possible combination of answers to these and related questions. People simply don't regard these entities as conscious, no matter what.
What must be true of such a world is that human consciousness is not a product of a Turing machine. That would beg for a mechanism (Orch-OR?) and would strongly hint at either dualism or panpsychism.