Referring to deep learning as just being good for "perception" really strains credulity. Chess has historically been a prototypical exercise in reasoning, but since variables weren't involved in its mastery, I guess it's perception now? https://twitter.com/GaryMarcus/status/1068897657530138629
a challenge for you, if you wanted to pursue an anti-symbolic position, would be to capture the power of MCTS using a neural net that didn't just map onto code like this: https://jeffbradberry.com/posts/2015/09/intro-to-monte-carlo-tree-search/
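For reference, the linked post describes the vanilla UCT-style family of MCTS. A minimal sketch of that kind of algorithm (toy game, made-up names, not the post's actual code) looks like this:

```python
import math, random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

# Toy game: players alternately add 1 or 2 to a running total;
# whoever reaches exactly 10 wins.  state = (total, player_to_move)
def legal_moves(state):
    total, _ = state
    return [m for m in (1, 2) if total + m <= 10]

def apply_move(state, move):
    total, player = state
    return (total + move, 1 - player)

def winner(state):
    total, player = state
    return (1 - player) if total >= 10 else None   # the mover who hit 10 won

def uct_select(node, c=1.4):
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, n_iter=2000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend while the current node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = uct_select(node)
        # 2. Expansion: add one untried move, if any remain.
        tried = {ch.move for ch in node.children}
        untried = [m for m in legal_moves(node.state) if m not in tried]
        if untried:
            move = random.choice(untried)
            child = Node(apply_move(node.state, move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while winner(state) is None:
            state = apply_move(state, random.choice(legal_moves(state)))
        won_by = winner(state)
        # 4. Backpropagation: credit the player who moved into each node.
        while node is not None:
            node.visits += 1
            if node.parent is not None and node.parent.state[1] == won_by:
                node.wins += 1.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts((0, 0)))   # suggested first move for player 0 in the toy game
```

The challenge above is to get this kind of search behavior out of a learned network rather than out of a loop that is transparently this algorithm.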
-
Perhaps part of the confusion is that AlphaGo exploits two related things: 1) a perfect forward model and 2) MCTS on that model. 1) is an active area of research where you could compare symbolic vs. NN approaches. 2) is a planning procedure, and planning procedures are fully embraced by DRL.
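A rough illustration of the split between 1) and 2): a planner only needs a step function it can query, and nothing in the planning loop cares whether that function is an exact simulator or a learned network. A minimal sketch with a hypothetical interface (not AlphaGo's actual code; random-shooting planning stands in for MCTS):

```python
import random

def plan(state, step, actions, horizon=3, n_rollouts=200):
    """Pick the first action of the best random action sequence under `step`.

    `step(state, action) -> (next_state, reward, done)` is the forward model;
    the planner never cares whether it is an exact simulator or a learned net.
    """
    best_return, best_first = float("-inf"), None
    for _ in range(n_rollouts):
        s, total, first = state, 0.0, None
        for _ in range(horizon):
            a = random.choice(actions)
            if first is None:
                first = a
            s, r, done = step(s, a)
            total += r
            if done:
                break
        if total > best_return:
            best_return, best_first = total, first
    return best_first

# Toy exact forward model: a walk on the integers; reward for reaching +3.
def exact_step(s, a):
    s2 = s + a
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

print(plan(0, exact_step, actions=[-1, +1]))  # almost always prints 1
```

Swapping `exact_step` for a learned model changes 1) without touching 2).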
-
To give an example, this paper uses planning procedures not defined by NNs, but the work meshes with the rest of the work in the DRL community, so calling it a symbolic hybrid feels as unnecessary as it would for convolutions. http://papers.nips.cc/paper/8256-fast-deep-reinforcement-learning-using-online-adjustments-from-the-past
End of conversation
New conversation
-
in the case of recognizing images, that's easy to do (viz. building a perceptron that is not a transparent implementation of an obvious symbolic algorithm); in the case of MCTS it's harder. good research problem.
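A minimal sketch of the "easy" case: a perceptron trained on a toy pixel task, whose learned weights end up as a vector of numbers rather than a readable symbolic rule (data, dimensions, and threshold are invented for illustration):

```python
import random

def train_perceptron(data, dim, epochs=20, lr=0.1):
    """Classic perceptron learning rule; labels y are +1 or -1."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                      # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy "images": 4 pixels, labeled +1 when the left half is brighter.
random.seed(0)
data = []
for _ in range(200):
    x = [random.random() for _ in range(4)]
    y = 1 if x[0] + x[1] > x[2] + x[3] else -1
    data.append((x, y))

w, b = train_perceptron(data, dim=4)
print(w, b)   # a weight vector, not a human-readable symbolic program
```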