Will this happen in chess? Imagine AlphaZero becoming much stronger! A scary/fascinating thought. https://twitter.com/GaryMarcus/status/1168596718922813440
Replying to @PHChess
Thoughts: (1) there is some "knowledge" (parameters, neural network design, etc.) injected into #AlphaZero (see the burdens of replicating the original effort with @LeelaChessZero); (2) maybe the success of AZ is precisely due to the absence of bias and human chess knowledge ;)
Replying to @acherm @LeelaChessZero
2) Agreed. But let's say we add tablebases. Those are "facts", not just unverified human knowledge. Surely that can only help? AlphaZero is by far the most amazing thing I have ever seen, but experimenting with improving it sounds like a cool next step.
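Concretely, "adding tablebases" would mean probing proven endgame facts during play or training. A minimal sketch of mine (not from the thread), assuming python-chess is installed and that a local directory holds the Syzygy files; the `./syzygy` path is a placeholder:

```python
# Hypothetical sketch: probe a Syzygy endgame tablebase for the proven
# game-theoretic value of a position. Assumes python-chess and that
# ./syzygy (placeholder path) contains the 3-piece Syzygy files.
import chess
import chess.syzygy

with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    # KRvK, white to move: a known tablebase win.
    board = chess.Board("8/8/8/8/8/8/R7/K1k5 w - - 0 1")
    wdl = tablebase.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss (side to move)
    dtz = tablebase.probe_dtz(board)  # distance to the next zeroing move (50-move rule)
    print(f"WDL: {wdl}, DTZ: {dtz}")
```

Engines such as Stockfish and Lc0 can already probe such tables during search; the suggestion here is to feed those verified facts into the learning loop as well.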
In the long run I believe that knowledge will help a lot, though our current technology is not that good at incorporating such knowledge. Monte Carlo tree search is itself a powerful prior that does help a lot, already incorporated into Alpha*.
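To make the "MCTS is itself a powerful prior" point concrete, here is a minimal self-contained UCT sketch of mine (not AlphaZero's implementation), playing Nim instead of chess to keep it short. Players alternately take 1-3 stones; whoever takes the last stone wins. The search spends its budget where results so far look promising, which is exactly the prior being described:

```python
import math
import random

def moves(stones):
    return list(range(1, min(3, stones) + 1))

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player   # player is about to move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0  # wins for the player who just moved

    def uct_child(self, c=1.4):
        # UCB1: exploit observed win rate, explore rarely visited children.
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    while stones > 0:
        stones -= random.choice(moves(stones))
        player = -player
    return -player  # whoever took the last stone (just moved) wins

def mcts(root_stones, iterations=2000):
    root = Node(root_stones, player=1)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.stones)):
            node = node.uct_child()
        # 2. Expansion: add one untried child, if non-terminal.
        if node.stones > 0:
            tried = {ch.move for ch in node.children}
            m = random.choice([m for m in moves(node.stones) if m not in tried])
            node = Node(node.stones - m, -node.player, parent=node, move=m)
            node.parent.children.append(node)
        # 3. Simulation: random playout to a terminal state.
        winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node from its mover's perspective.
        while node:
            node.visits += 1
            node.wins += 1.0 if winner == -node.player else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print("Best first move from 10 stones:", mcts(10))  # optimal play takes 2 (10 % 4 == 2)
```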
Replying to @GaryMarcus @PHChess and others
The thing about chess (that does not carry over to the real world) is that the rules of chess encode complete knowledge of the game. The AlphaGo family of methods is just applying computation to make that knowledge explicit in the form of training data compressed into a network.
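A toy illustration of that point, assuming python-chess (my sketch, not DeepMind's pipeline): the rules alone are enough to manufacture labelled training data via self-play. Random moves stand in for the policy here, and games cut off early are labelled as draws:

```python
# Toy self-play data generator: the rules guarantee every position is legal
# and every finished game yields a value label, with no human annotation.
import random
import chess

def self_play_examples(max_plies=200):
    board = chess.Board()
    fens = []
    while not board.is_game_over() and len(fens) < max_plies:
        fens.append(board.fen())
        board.push(random.choice(list(board.legal_moves)))
    result = board.result(claim_draw=True)          # "1-0", "0-1", "1/2-1/2", or "*"
    z = {"1-0": 1.0, "0-1": -1.0}.get(result, 0.0)  # value label from white's view
    return [(fen, z) for fen in fens]

examples = self_play_examples()
print(len(examples), "labelled positions; first:", examples[0])
```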
Replying to @tdietterich @GaryMarcus and others
So despite @GaryMarcus's valid points about prior knowledge in real-world tasks, in chess the only prior knowledge you need is that this is a two-player zero-sum game of perfect information played according to the given rules.
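In principle that prior knowledge is indeed sufficient: a plain negamax over the rules determines the value of any position, limited only by computation, not by chess understanding. A toy sketch of mine assuming python-chess, demonstrated on a back-rank mate in one; the depth cap is a compute budget, not injected knowledge:

```python
import chess

def negamax(board, depth):
    # Values are from the perspective of the side to move (zero-sum).
    if board.is_checkmate():
        return -1.0                 # the side to move has been mated
    if board.is_game_over() or depth == 0:
        return 0.0                  # draw, or search horizon reached
    best = -float("inf")
    for move in list(board.legal_moves):
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

# White to move; Ra8 is a back-rank mate. The value follows from rules alone.
board = chess.Board("6k1/5ppp/8/8/8/8/5PPP/R5K1 w - - 0 1")
print(negamax(board, depth=2))  # 1.0: a forced win for the side to move
```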
Replying to @tdietterich @GaryMarcus and others
It's not a pure constraint satisfaction problem (SAT/CSP/SMT), and declaratively specifying the rules is not enough: you need some human knowledge to drive the computation (parameter tuning, design of the NN architecture): https://blog.lczero.org/2018/12/alphazero-paper-and-lc0-v0191.html
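The kind of human design decisions the linked Lc0 post describes can be made explicit. Illustrative only: the values below are placeholders in the spirit of the published AlphaZero settings, not the actual ones, and none of them follow from the rules of chess:

```python
# Hypothetical config sketch: every entry is a choice made by people,
# discovered by experimentation rather than derived from the game itself.
config = {
    # Network architecture.
    "residual_blocks": 20,
    "filters_per_block": 256,
    # Search.
    "cpuct": 1.25,                 # exploration constant, tuned by hand
    "simulations_per_move": 800,   # self-play search budget
    # Training.
    "batch_size": 4096,
    "learning_rate_schedule": [(0, 2e-1), (100_000, 2e-2), (300_000, 2e-3)],
    "dirichlet_alpha": 0.3,        # root noise, scaled per game by hand
}
```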
Replying to @acherm @GaryMarcus and others
I challenge you to relate parameter tuning and network architecture to any domain knowledge in Go or chess (aside from, e.g., the dimensions of the board). This is in fact a big problem for deep learning: there is a huge gap between human-level knowledge and algorithm settings.
DeepMind offered no real account of how they selected the specific architectural parameters, nor of how robust the system might be across different parameters. But I presume that @acherm is correct that human knowledge was involved, contra the framing of the 2017 Nature paper.