Say we discover (somehow) that it's actually impossible to simulate a human-level conscious mind on an inorganic substrate. What did we discover?
(e.g., we're running in a simulation, and a resolution limit prematurely halts Moore's Law; etc.)
the word "simulate" is doing a lot of work here. simulate wrt what observables?
turing test type things say more about the communication channel with the AI than they say about the AI
It's a good question! I'm not quite sure, so I'll pose a weak frame: suppose we discover (positively) that there is some reason why an inorganic computational system could never "seem" (to all humans) to behave with human-like conscious agency. What must be true of such a world?
"seem" through what medium? in what social context? with what priming? what observations do people do the "seeming" with?
Any and all—the most expansive and permissive possible combination of answers to these and related questions. People simply don't regard these entities as conscious, no matter what.
then you've baked these entities not being conscious into the definition of consciousness and so it doesn't tell you anything
I don't think that's really true. Say that no matter how much you practiced juggling, you could never make a ball appear to levitate in the air without support. I've baked this inability into non-levitatability, but it can be explained by physical law (i.e. gravity).