I think lots of simple problems (including chess) can serve as appropriate AI benchmarks, as long as we constrain them to measure *generalization power* rather than *skill*.
It isn't very interesting that an algorithm can beat any human at chess. It would be significant, though, to see an algorithm infer the rules of chess after watching a few dozen games, then go on to develop a human-level chess player using just a few hundred games.
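As a minimal sketch of what such a benchmark could score, the function below counts how many example games a learning agent needs before its play reaches a target rating. The `agent.observe(game)` and `agent.play_rating()` interface is purely hypothetical; the point is only to illustrate measuring the experience consumed rather than the final skill.

```python
# Minimal sketch of a sample-efficiency score, assuming a hypothetical agent
# exposing observe(game) and play_rating() (illustrative names, not a real API).

def games_to_reach_rating(agent, game_stream, target_rating, max_games=10_000):
    """Count how many observed games the agent needs before its play reaches
    target_rating. Lower is better: the benchmark rewards turning little
    experience into skill, not the final skill level itself."""
    for n, game in enumerate(game_stream, start=1):
        agent.observe(game)                      # learn from one more example game
        if agent.play_rating() >= target_rating:
            return n                             # experience consumed to reach the bar
        if n >= max_games:
            break
    return None                                  # target never reached within the budget
```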
Similarly, I don't want to see a deep learning model beat humans at a MOBA (which of course it can, given enough data). I want to see a model trained on a few thousand plays that, when given a *completely new* character, does just as well as a human in the same situation.
To make progress on AI, we must measure abstraction strength and generalization power, not plain skill measured while ignoring *how* that skill was obtained. Focusing on skill alone is a bit like looking only at a vehicle's speed while ignoring its energy consumption.
Intelligence is about efficiently turning experience into generalization power. Skill at a task, reached with unlimited resources, has no correlation with intelligence.
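One rough way to make that statement measurable (an illustrative formalization, not one given in the thread) is to score a system by the skill it attains on held-out tasks relative to the experience it consumed, with prior knowledge held fixed across the systems being compared:

```latex
% Illustrative sketch only: generalization efficiency as skill attained per
% unit of experience consumed, with prior knowledge held fixed.
\text{efficiency} \;=\;
  \frac{\text{skill attained on held-out tasks}}
       {\text{experience consumed during training}}
```

Under a score like this, two systems that reach the same final skill are separated by how little experience they needed, which is exactly the distinction drawn in the chess and MOBA examples above.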