It’s also what annoys me the most. AlphaStar is incredibly impressive on many levels, but why make obviously false claims such as “it is restricted to human-like actions” just to get more PR and mislead people unfamiliar with StarCraft or machine learning?
-
Sometimes a little bit of honesty and humility along the lines of “this works quite well but we’re not there yet and have a long way to go” instead of “Look we beat the pros!!!11” can go a long way.
End of conversation
New conversation
Do you know what DeepMind’s mission is/was other than PR?
-
I’d like to believe that most researchers at DeepMind want to do honest science and advance the state of AI. But the reporting errors, experimental mistakes, and PR stunts are so obvious that any knowledgeable person notices. It baffles me who makes these decisions.
-
They also pull the ubiquitous trick of burying, in the supplement, logistic regression results that are about as good as the deep-learning results. Here are the results in the paper: pic.twitter.com/KxEaCQslVR
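The baseline check the tweet alludes to is easy to run in practice: before trusting a deep model's numbers, fit a plain logistic regression on the same data and compare. A minimal sketch, on synthetic data only (nothing here reproduces any DeepMind paper):

```python
# Fit a simple logistic regression baseline on a synthetic dataset.
# If a deep model barely beats this, the headline result deserves scrutiny.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; real comparisons would use the paper's dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"logistic regression accuracy: {baseline.score(X_te, y_te):.2f}")
```

The point is not that logistic regression wins, but that its score belongs in the main results table, not the supplement.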
End of conversation
New conversation
Human psychology prefers positive, jubilant messages over negative reality checks. Hence, posts doing a critical review of AlphaStar will never get the same exposure as trumped-up articles on superintelligence.
-
It's honestly very unfortunate, considering the reproducibility problems science is already having, as well as the need to tackle the current limitations of DL/RL and push towards actual scientific, model-based approaches. PR stunts are nice, but we have a very long way to go.
-
Agree on honest reproducible science. Disagree that PR stunts are nice.
@DeepMindAI will not convince me that this is robust science in #AI until they release their source, open themselves up to scrutiny, and make strong efforts to curb the #AIhype.
I can definitely agree on all those points. It's just that nobody, especially a company like DeepMind, wants to do that, because it would mean severely curbing public enthusiasm. I'm not saying that to excuse it in any way; it's just a sad reality.
End of conversation
New conversation
Heretofore we've trained AIs to play games designed for humans by humans. I expect people will design games (by hand or AI) for AIs to play and humans to watch, bypassing the "too many clicks" broken mechanisms. Games *only* winnable by computers but fun for us to spectate.
-
Sounds a lot like General Game Playing, cf. http://ggp.stanford.edu/notes/overview.html, no?
End of conversation
New conversation
First off,
@AleksiPietikin1, love your post, but it also makes me think of this so much.
pic.twitter.com/j6mbVV8MXR
-
Anyhow, AlphaStar's peak APM is in the same order of magnitude as human capability, while its mean APM is lower. Technically it could be way higher. For self-driving cars, Nvidia is demonstrating 2000 processed images/sec with lower than 5 ms latency; that is potentially 120,000 APM.
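The 120,000 figure is just back-of-the-envelope arithmetic: a per-second processing rate scaled to a per-minute action rate, under the (hypothetical) assumption that every processed frame could trigger one action. A minimal sketch:

```python
# Convert a per-second processing rate into an APM (actions per minute)
# upper bound, assuming one action per processed frame. This is an
# illustrative ceiling, not a claim about any real agent.

def apm_equivalent(frames_per_second: float) -> float:
    """Upper-bound APM if every processed frame produced one action."""
    return frames_per_second * 60

# The 2000 images/sec figure from the tweet:
print(apm_equivalent(2000))  # 120000.0
```

Real agents act far below this ceiling, since perception throughput is not the same thing as decision rate.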
-
The peak APM of 1500 is not human. Remember that AlphaStar's actions are mostly EAPM (effective APM); humans can't get past 400 EAPM. TLO is wheel-spamming in that pic.
-
Are you actually sure that all of AlphaStar’s actions are effective? It’s not remotely beyond the realm of possibility that it learned to triple-click for some reason. Was there actually a penalty applied to the reward signal for number of actions?
-
This is doubleplus true if they used human input as a teacher signal at any point.
-
Look at this clip. Still spam-clicking, but the micro is clearly superior: https://www.youtube.com/watch?v=H3MCb4W7-kM&t=39m30s
End of conversation