Conversation

It increasingly feels like AIs will never consistently pass any philosophically reasonable test of intelligence. But as we make the tests more subtle, neither will most humans. This effect needs a name. It's like a progressive decertification of human intelligence.
I think the problem is trying to attach definitions of intelligence to agents rather than to individual behaviors. A particular move or action may be intelligent in a given circumstance. The idea that an agent can be consistently intelligent all the time is basically wrong.
Agreed. “Philosophically reasonable” might seem like a vague cop-out standard, but it’s actually fairly clear to me: it’s any standard I, at least, find interesting enough to engage with and can explore without exhausting.
I.e., an infinite-game standard of intelligence, which will always look like “moving goalposts” to finite-gamers, who think intelligence is a game you “win” rather than one you “keep playing.”
This is not moving goalposts for AIs. This is discovering, through AIs, that the goalposts were in the wrong place for HIs all along. AI is arguably a field partly devoted to debunking bad theories of human intelligence.
Unpredictable inconsistency across many domains might be a good standard. You know a washing machine will be consistently good at washing clothes, but consistently bad at chess. A human will be unpredictably inconsistent at both.
I think the history of AI is a gradual deconstruction of the notion of intelligence. The word “intelligence” might be about as meaningful as “phlogiston” at some point in the not-so-distant future.