
It increasingly feels like AIs will never consistently pass any philosophically reasonable test of intelligence. But as we make the tests increasingly subtle, neither will most humans. This effect needs a name. It's like a progressive decertification of human intelligence.
I think the problem is trying to attach definitions of intelligence to agents rather than to individual behaviors. A particular move or action may be intelligent in a given circumstance. The idea that an agent can be consistently intelligent is basically wrong.
I don't think you can even reasonably upper- or lower-bound an agent's intelligence. There is no floor of basic intelligence that any agent can always stay above. There is no ceiling of intelligence that it can never break through.
Agreed, “philosophically reasonable” seems like a vague cop-out standard, but it’s actually fairly clear to me: it’s any standard I at least find interesting enough to engage with, and can explore without exhausting.
I.e., an infinite-game standard of intelligence. Which will always seem like “moving goalposts” to intelligence finite-gamers, who think intelligence is a game you “win” rather than “keep playing.”
This is not moving goalposts for AIs. This is discovering, through AIs, that the goalposts were in the wrong place for HIs all along. AI is arguably a field partly devoted to the debunking of bad theories of human intelligence.
Unpredictable inconsistency across many domains might be a good standard. You know a washing machine will be consistently good at washing clothes, but consistently bad at chess. A human will be unpredictably inconsistent at both.
The idea that philosophical reasonableness is something one "can explore without exhausting" is probably the best/most novel test for intelligence I've heard. One problem I can see with it, though, is that, again, most humans would fail it.