Yes indeed. The Monte Carlo tree search is probably the biggest factor behind AlphaGo Zero's performance. Hardly starting from scratch.
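For a sense of how much hand-built machinery "Monte Carlo tree search" already implies, here is a minimal UCT-style sketch on a toy single-pile Nim game (the game, the iteration budget, and the exploration constant are assumptions chosen only for illustration; this is not DeepMind's code, and AlphaGo Zero uses a PUCT variant guided by a neural network rather than random rollouts). Every phase of the loop, selection, expansion, simulation, and backpropagation, is designed by a human before any learning takes place.

```python
import math
import random

# Toy game: single-pile Nim. Players alternately take 1-3 stones; whoever
# takes the last stone wins. This game is an assumption for the sketch only.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones          # stones left in the pile
        self.player = player          # player to move at this node: +1 or -1
        self.parent = parent
        self.move = move              # the move that led to this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0               # wins for the player who moved INTO this node

    def ucb1(self, c=1.4):
        # Hand-designed exploration/exploitation trade-off (UCB1).
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_stones, root_player=+1, iterations=2000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the tree via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one previously untried child.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.stones - move, -node.player,
                                      parent=node, move=move))
            node = node.children[-1]
        # 3. Simulation: random rollout to the end of the game.
        stones, player = node.stones, node.player
        while stones > 0:
            stones -= random.choice(legal_moves(stones))
            player = -player
        winner = -player  # the player who took the last stone
        # 4. Backpropagation: credit each node from its mover's perspective.
        while node is not None:
            node.visits += 1
            if winner == -node.player:
                node.wins += 1.0
            node = node.parent
    # Recommend the most-visited root move.
    return max(root.children, key=lambda n: n.visits).move

if __name__ == "__main__":
    # Game-theoretic optimum from a pile of 10: take 2, leaving a multiple of 4.
    print("MCTS suggests taking", mcts(10), "stones")
```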
Although there are some real fuzzy gray areas there when you look at the dynamics of creating algorithms, tweaking parameters, adding a tinge of nativism, etc.
AlphaGo trained from recorded game play, so it's learning from examples. AlphaGo Zero trained from just knowing the rules of the game. Not completely from scratch, but pretty much from nothing. Just because nobody has trained a DL to do MCTS doesn't mean it can't be done.
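To make "trained from just knowing the rules" concrete, here is a heavily simplified, hypothetical sketch rather than AlphaGo Zero's actual method: a tabular self-play learner for the same toy Nim game. It never sees a recorded game; it improves only by playing against itself, with the rules as its sole input. AlphaGo Zero does the analogous thing at vastly larger scale, substituting a deep network plus MCTS for the lookup table; the pile size, learning rate, and exploration rate below are arbitrary choices.

```python
import random
from collections import defaultdict

PILE = 10
MOVES = (1, 2, 3)

# value[s]: estimated probability that the player to move wins with s stones left.
value = defaultdict(lambda: 0.5)
value[0] = 0.0             # no stones left: the player to move has already lost
ALPHA, EPSILON = 0.1, 0.2  # learning rate and exploration rate (arbitrary)

def choose(stones, greedy=False):
    """Pick the move that leaves the opponent in the worst position."""
    moves = [m for m in MOVES if m <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: value[stones - m])

for _ in range(20000):                 # self-play episodes; no example games used
    stones, trajectory = PILE, []
    while stones > 0:
        trajectory.append(stones)
        stones -= choose(stones)
    # The player who made the last move took the last stone and won.
    # Walk the visited states backwards, alternating win/loss targets.
    outcome = 1.0
    for s in reversed(trajectory):
        value[s] += ALPHA * (outcome - value[s])
        outcome = 1.0 - outcome

# Print the greedy policy learned purely from self-play.
for s in range(1, PILE + 1):
    print(f"{s} stones left -> take {choose(s, greedy=True)}")
```

Run long enough, the greedy policy typically rediscovers the optimal strategy of leaving the opponent a multiple of four stones, despite never having been shown an example game.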
The argument here is that the amount of prior knowledge needed to build a workable cognitive system keeps on shrinking. You cannot make the argument that systems with very few prior assumptions cannot be grown from scratch.
New conversation
"starting from scratch" serially does/must allow stringing limited approaches together, to eventually achieve the disparate, parallel and conflicting goals of this simple-complex system. Similar to how tough math proofs dodge around technique-wise. Catch-as-catch-can; messy.