Watson. The only commonality between Watson and AlphaGo is the AI moniker, so it feels like an odd comparison
Replying to @Zergylord @GaryMarcus
This is incorrect in several ways though. First, Watson seems to have been an everything-and-the-kitchen-sink approach: they used logistic regression, rule-based systems, lexical databases, information retrieval, grammar-based parsers, SVM-based relation extractors, etc.
Part of the "etc." is their heavy use of simulations to fit parameters for their strategy modules. I'll have to recheck, but I recall the use of Bayesian methods, reinforcement learning, neural networks, and Monte Carlo search in the game-strategies paper: https://ieeexplore.ieee.org/document/6177733
Replying to @sir_deenicus @GaryMarcus
I'm sure they did lots of things to make it work. It's still the most symbolic and least DRL-based system to come out of any modern research lab. It's quite a bit closer to Gary's cognitive hybrid-systems approach than anything out of DeepMind.
Replying to @Zergylord @sir_deenicus
Hype aside, AlphaGo is a hybrid, with a Monte Carlo tree search backbone that traverses trees straight out of symbolic CS 101, alongside the DRL.
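The "tree search backbone" under discussion can be sketched as a minimal MCTS/UCT loop. This is a toy illustration on a made-up subtraction game, not AlphaGo's actual implementation (AlphaGo replaces the random rollouts here with learned policy and value networks); all function and class names are our invention.

```python
import math
import random

# Toy game: players alternately remove 1 or 2 stones; whoever takes
# the last stone wins. Positions that are multiples of 3 are losses
# for the player to move.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining (game state)
        self.parent = parent
        self.move = move          # move that led here (1 or 2)
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins from the perspective of the mover at the PARENT

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def rollout(stones):
    """Random playout; True if the player to move at `stones` wins."""
    to_move_wins = True
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return to_move_wins
        to_move_wins = not to_move_wins

def mcts_best_move(stones, iterations=2000, c=1.4):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children, key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child if the game isn't over.
        if node.stones > 0:
            tried = {ch.move for ch in node.children}
            move = next(m for m in legal_moves(node.stones) if m not in tried)
            node = Node(node.stones - move, parent=node, move=move)
            node.parent.children.append(node)
        # 3. Simulation: a terminal node means the parent's mover just won;
        #    otherwise a random rollout decides it.
        if node.stones == 0:
            mover_at_parent_won = True
        else:
            mover_at_parent_won = not rollout(node.stones)
        # 4. Backpropagation: alternate the win credit up the tree.
        win = mover_at_parent_won
        while node.parent is not None:
            node.visits += 1
            node.wins += 1.0 if win else 0.0
            win = not win
            node = node.parent
        root.visits += 1
    # Recommend the most-visited move, as in standard MCTS practice.
    return max(root.children, key=lambda ch: ch.visits).move
```

From a pile of 5 the winning move is to take 2 (leaving the losing position 3), which the search converges on with a few thousand iterations.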
Replying to @GaryMarcus @sir_deenicus
That's an odd argument to make since you also claim DeepMind relies too much on DRL.
Replying to @Zergylord @sir_deenicus
Both can be & are true. What's really telling to me is that DRL on its own worked for Atari games but not Go, and that DM's spin on Go really downplayed the hybrid aspect that was essential to its success. (It was also apparently necessary to build in the rules of Go, unlike Atari.)
Replying to @GaryMarcus @sir_deenicus
First line from the approach section of the blog post: "We created AlphaGo, a computer program that combines advanced search tree with deep neural networks." They don't refer to it as "symbolic" b/c that's less precise than model-based RL, which is the subfield where MCTS lives.
Replying to @Zergylord @sir_deenicus
No, but even some of their most visible twitterers need to be reminded that the stuff is hybrid, in some of the very threads we are in.
Yes, AlphaZero employs a hybrid algorithm. The MCTS component would invalidate it as a biologically plausible model of Go play. However, this does not imply that a DL-derived approximation of MCTS cannot be trained.
Sure, but nobody has ever shown this. Convolution is a great example of something that COULD be learned but really ought to be innate, & that nobody in practice tries to learn, because it would be vastly less efficient & not well enough attested in many samples. MCTS for games is the same.
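The convolution point can be made concrete: a 1-D convolution is exactly a dense linear layer whose weight matrix is constrained to a banded, weight-shared (Toeplitz) form. A dense layer would have to rediscover that structure from data; building convolution in gets it for free as an innate prior. A toy sketch in plain Python (all names are ours):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (really cross-correlation, as in deep nets)."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def toeplitz_matrix(kernel, n):
    """Dense weight matrix encoding the same linear map: each row is the
    kernel shifted one position right, zeros elsewhere. The repetition down
    the rows is the weight sharing made explicit."""
    k = len(kernel)
    return [[kernel[j - i] if 0 <= j - i < k else 0.0 for j in range(n)]
            for i in range(n - k + 1)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# An edge-detector kernel applied both ways gives identical outputs.
signal = [1.0, 2.0, 3.0, 4.0, 5.0]
kernel = [1.0, 0.0, -1.0]
print(conv1d(signal, kernel))  # → [-2.0, -2.0, -2.0]
```

The unconstrained dense layer has n * (n - k + 1) free weights; the convolutional prior collapses them to just k, which is the efficiency argument for building the structure in rather than learning it.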