"When the game is played with inhuman speed and accuracy, abusing superior control is very likely to be the best and most effective ... way to play the game." Looks like a clear confounder here in DeepMind's latest game-playing algorithm:https://medium.com/@aleksipietikinen/an-analysis-on-how-deepminds-starcraft-2-ai-s-superhuman-speed-could-be-a-band-aid-fix-for-the-1702fb8344d6 …
To me it appears that beating a video game with deep RL only shows that, with enough examples, you can largely reverse-engineer the code running it, like a sort of "messy" inductive programming. I don't see what that teaches us about reality or human intelligence.
As @dileeplearning's DQN demos show, this is actually too generous: the contingencies for a given level have been hacked, but the reverse-engineering is so fragile that it breaks if you change the details of the game. What's been induced is a long way from a general program.
New conversation
This Tweet is unavailable.
Indeed. Like "superhuman in a computational task should imply superhuman in intelligence in general." Right: try playing Go in a four-dimensional space, and any system is better than humans.
End of conversation
Of course, they're not advancing AGI. In fact, they are retarding progress in AGI, because they continue to hog research funds that could be used on more promising projects. But then again, you, Marcus, have no alternative other than "let's bring back GOFAI." You're no better.
You don't have to have the solution in hand to critique existing work and ask questions. If we asked more powerful questions of the state of the art, we'd have more innovation around the scientific problems of NLU and image recognition.