"Human performance" on Atari games isn't how well a human can play with their own 10 fingers. It's how well the abstract mental models of the game inferred by the average human can perform when coded up into a simple program. That means perfect scores for most games.
-
Don't let them tell you that deep learning has achieved "superhuman" performance at any of these games -- any random programmer can come up with a better solution program for a given game in an afternoon. That's what human-level means.
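To make the claim concrete, here is a minimal toy sketch of what "a mental model coded up into a simple program" could look like for Breakout. This is plain Python, not the actual ALE/Gym environment; the screen size, paddle speed, and toy physics are all assumptions made up for illustration. The entire "mental model" is one rule: keep the paddle under the ball.

import random

# Toy constants -- assumptions for illustration only,
# loosely echoing the 84x84 Atari preprocessing resolution.
WIDTH, HEIGHT = 84.0, 84.0
PADDLE_SPEED = 2.0
PADDLE_HALF_WIDTH = 4.0

def heuristic_policy(ball_x, paddle_x):
    # The whole "mental model": move the paddle toward the ball.
    if paddle_x < ball_x - 1:
        return 1    # move right
    if paddle_x > ball_x + 1:
        return -1   # move left
    return 0        # stay

def play_episode(steps=5000, seed=0):
    # Crude bouncing-ball simulation; count catches vs. misses at the paddle row.
    rng = random.Random(seed)
    ball_x, ball_y = rng.uniform(0, WIDTH), 0.0
    vx, vy = rng.choice([-1.5, 1.5]), 1.0
    paddle_x = WIDTH / 2
    hits = misses = 0
    for _ in range(steps):
        paddle_x += PADDLE_SPEED * heuristic_policy(ball_x, paddle_x)
        paddle_x = min(max(paddle_x, 0.0), WIDTH)
        ball_x += vx
        ball_y += vy
        if ball_x <= 0 or ball_x >= WIDTH:
            vx = -vx                      # bounce off the side walls
        if ball_y >= HEIGHT:              # ball reaches the paddle row
            if abs(ball_x - paddle_x) <= PADDLE_HALF_WIDTH:
                hits += 1
            else:
                misses += 1
            ball_x, ball_y, vy = rng.uniform(0, WIDTH), 0.0, 1.0  # respawn at the top
    return hits, misses

if __name__ == "__main__":
    print(play_episode())   # prints (hits, misses) for the toy episode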
-
If you remove human visual affordances, then the human's performance sinks like a rock. https://openreview.net/pdf?id=Hk91SGWR- …
-
Sample complexity is a serious issue, but I'm not sure your remark is fair. What counts as "data" for a human? Gaming happens only after years of cognitive development. AI is far from being able to replicate that development, but for the same reason the data argument itself is tenuous.
-
In most cases, we are training a "newborn bot" from scratch. To be fair to the "bot", you should also make the human unlearn every game they have ever played, and probably also any visual concepts that support understanding of the game logic...
-
Or you have to count the 24 images/sec during waking time as "pre-training" data for the human ;-)
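A back-of-the-envelope version of that remark. All numbers are rough assumptions rather than measurements: the 24 images/sec comes from the tweet above; the waking hours, years, and deep-RL frame budget are illustrative guesses.

FRAMES_PER_SEC = 24            # rate quoted in the tweet
WAKING_HOURS_PER_DAY = 16      # assumption
YEARS = 10                     # assumption: rough age of a young game player

human_frames = FRAMES_PER_SEC * 3600 * WAKING_HOURS_PER_DAY * 365 * YEARS
agent_frames = 200_000_000     # assumption: order of magnitude often quoted for Atari agents

print(f"Human 'pre-training' frames over {YEARS} years: {human_frames:,}")   # ~5 billion
print(f"Assumed deep-RL frames per game: {agent_frames:,}")
print(f"Ratio: roughly {human_frames // agent_frames}x")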