"What works for Go may not work for the challenging problems that DeepMind aspires to solve with AI, like cancer and clean energy. IBM learned this the hard way" Picks the only large scale symbolic AI project to illustrate the potential shortcomings of DRL... https://twitter.com/GaryMarcus/status/1161690752524550144 …pic.twitter.com/3lbLcRQ0kz
Both can be & are true. What's really telling to me is that DRL on its own worked for Atari games but not for Go, and that DM's spin on Go really downplayed the hybrid aspect that was essential to its success. (It was also apparently necessary to build in the rules for Go, unlike Atari.)
So we build hybrid systems but are also too reliant on DRL? How can both of those be true?
It's a question of emphasis, in part, but if I were running your ship I would spend more time exploring principled ways of building hybrids, more kinds of hybrids, and more time on open-ended problems.
New conversation
First line from the approach section of the blog post: "We created AlphaGo, a computer program that combines advanced search tree with deep neural networks." They don't refer to it as "symbolic" b/c that's less precise than model-based RL, which is the subfield where MCTS lives.
No, but even some of their most visible twitterers need to be reminded that the stuff is hybrid, in some of the very threads we are in.
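To make the hybrid architecture concrete, here is a minimal sketch, in Python, of how an AlphaGo-style system interleaves tree search (MCTS with PUCT selection) with a learned policy/value network: the network supplies move priors and position evaluations, the hand-coded rules supply legal moves and transitions, and the search combines the two. The functions policy_value_net, legal_moves, and apply_move are hypothetical stand-ins, not DeepMind's code; real systems also handle terminal positions and flip the value sign between players.

```python
import math
import random

def legal_moves(state):
    """Hypothetical stand-in for the hand-coded rules: moves allowed here."""
    return [0, 1, 2]

def apply_move(state, move):
    """Hypothetical stand-in for the rules' transition function (not learned)."""
    return state + (move,)

def policy_value_net(state):
    """Hypothetical stand-in for the deep network: move priors + value estimate."""
    moves = legal_moves(state)
    priors = {m: 1.0 / len(moves) for m in moves}  # uniform priors as a placeholder
    value = random.uniform(-1.0, 1.0)              # random evaluation as a placeholder
    return priors, value

class Node:
    def __init__(self, prior):
        self.prior = prior        # prior probability from the network
        self.visits = 0           # visit count
        self.value_sum = 0.0      # accumulated backed-up value
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT: exploit the backed-up value, explore in proportion to the prior."""
    def score(move, child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=lambda mc: score(*mc))

def run_mcts(root_state, simulations=200):
    """Hybrid loop: symbolic tree search, guided and evaluated by the network."""
    root = Node(prior=1.0)
    priors, _ = policy_value_net(root_state)
    root.children = {m: Node(p) for m, p in priors.items()}

    for _ in range(simulations):
        node, state, path = root, root_state, [root]
        # 1. Descend the tree with PUCT until reaching an unexpanded leaf.
        while node.children:
            move, node = select_child(node)
            state = apply_move(state, move)
            path.append(node)
        # 2. Expand the leaf and evaluate it with the network (no random rollout).
        priors, value = policy_value_net(state)
        node.children = {m: Node(p) for m, p in priors.items()}
        # 3. Back the value up the path (player-to-move sign flips omitted here).
        for n in path:
            n.visits += 1
            n.value_sum += value

    # The move actually played is the most-visited child of the root.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

if __name__ == "__main__":
    print(run_mcts(root_state=()))
```

The point of the sketch is that the learned component never picks a move on its own; every move is chosen by the search, which uses the network only as a guide.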
New conversation
You can't play a game if you don't know the rules! Of course the rules of Go are an input to this program. Are you saying that AlphaZero also needs to formulate the rules of Go?
What I am really saying is that it is very instructive to compare the two systems, DQN vs Alpha*.
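Since the claim is that comparing the two systems is instructive, here is one way to make the contrast concrete; this is only a sketch, and the interface names are mine, not DeepMind's. A DQN-style agent needs nothing but a black-box environment it can step and observe (which is why it can learn Atari from pixels without being told the rules), whereas an AlphaZero-style agent additionally needs the rules as an explicit, queryable model, because MCTS must simulate moves internally before committing to one on the board.

```python
from typing import Any, Optional, Protocol, Sequence, Tuple

class BlackBoxEnv(Protocol):
    """Everything a model-free, DQN-style agent needs: act, observe, get reward.
    Atari from pixels fits this interface; the rules stay hidden inside step()."""
    def reset(self) -> Any: ...
    def step(self, action: int) -> Tuple[Any, float, bool]: ...  # obs, reward, done

class GameRules(Protocol):
    """What an AlphaZero-style agent needs on top of that: a perfect, hand-coded
    model of the game, so the search can try moves it never plays for real."""
    def legal_moves(self, state: Any) -> Sequence[int]: ...
    def apply(self, state: Any, move: int) -> Any: ...
    def terminal_value(self, state: Any) -> Optional[float]: ...  # None if not over
```

DQN learns an action-value function purely from the (observation, reward) stream, while the Alpha* systems plan over the GameRules model with MCTS at every move; that extra requirement is exactly the "building in the rules" point above.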