It increasingly feels like AIs will never consistently pass any philosophically reasonable test of intelligence. But as we make the tests increasingly subtle, neither will most humans. This effect needs a name. It's like a progressive decertification of human intelligence.
I think the problem is trying to attach definitions of intelligence to agents rather than to individual behaviors. A particular move or action may be intelligent in a given circumstance. The idea that an agent can be consistently intelligent all the time is basically wrong.
I don't think you can even reasonably put an upper or lower bound on it. There is no floor of basic intelligence that any agent can always stay above, and no ceiling of intelligence that it can never break through.
Agreed, “philosophically reasonable” sounds like a vague cop-out standard, but it's actually fairly clear to me: it's any standard I find interesting enough to engage with, and can explore without exhausting.
This is not moving the goalposts for AIs. This is discovering through AIs that the goalposts were in the wrong place for HIs all along. AI is arguably a field partly devoted to the debunking of bad theories of human intelligence.
Unpredictable inconsistency across many domains might be a good standard. You know a washing machine will be consistently good at washing clothes, but consistently bad at chess. A human will be unpredictably inconsistent at both.