I agree that symbolic reasoning and analytical operators are crucial for intelligence! But I suspect that they can form naturally, i.e. as the best representation a system learns of a problem domain, and that they will often lie on a continuum with distributed operations.
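One way to picture that continuum is a temperature-controlled selection: the same learned operation can behave as a soft, distributed weighting or as a near-discrete, symbol-like choice. This is an illustrative sketch of that idea only (the function and values are mine, not from the thread):

```python
# Illustrative sketch: a temperature-controlled softmax showing how one learned
# operation can sit anywhere between a distributed (graded) representation and a
# nearly discrete, symbol-like selection.
import numpy as np

def soft_select(scores, temperature):
    """Softmax over candidate 'symbols'; low temperature approaches a hard one-hot pick."""
    z = scores / temperature
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

scores = np.array([2.0, 1.0, 0.1])
print(soft_select(scores, 5.0))   # high temperature: distributed, graded weighting
print(soft_select(scores, 0.05))  # low temperature: nearly one-hot, symbol-like
```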
-
Possible, with the right learning objectives and "hints". My guess, though, is that both the brain and whichever AI systems first crack natural language understanding will use dedicated memory structures that support symbol manipulation, like Ken Hayworth's DPAANN. Bottou says: pic.twitter.com/D81AHFCLwW
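A minimal sketch of what a "dedicated memory structure that supports symbol manipulation" could look like, assuming a toy NumPy key-value store (this is an illustration of the general idea, not a description of DPAANN):

```python
# Toy content-addressable key-value memory: bind a symbol-like key to a value
# vector, then retrieve the value by similarity to a query key.
import numpy as np

rng = np.random.default_rng(0)

class KeyValueMemory:
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def write(self, key, value):
        # Append one (key, value) binding to the store.
        self.keys = np.vstack([self.keys, key])
        self.values = np.vstack([self.values, value])

    def read(self, query):
        # Soft attention over stored keys; behaves like exact lookup as keys separate.
        sims = self.keys @ query
        w = np.exp(sims - sims.max())
        w /= w.sum()
        return w @ self.values

dim = 16
cat_key, dog_key = rng.standard_normal(dim), rng.standard_normal(dim)
mem = KeyValueMemory(dim)
mem.write(cat_key, rng.standard_normal(dim))  # bind symbol-like key -> filler value
mem.write(dog_key, rng.standard_normal(dim))
recalled = mem.read(cat_key)                  # retrieve the filler bound to the "cat" key
```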
- 1 more reply
New conversation -
-
I think if you expanded your suggestion to include giving networks better internal categorical representations, and providing for more flexible, dynamic "thinking-like" processes, everyone would agree with you. And since that is where we are headed anyway, there is no wall.
-
Flexible, dynamic "thinking-like" processes might well exhibit sufficient properties of symbolic computation. At least if one buys into some sort of computational theory of mind and an associated version of the PSSH (physical symbol system hypothesis); without those, the entire AI project would seem to lack foundations.
- 6 more replies
New conversation -
-
I don't get why your tweet is clearer than your paper. I mean, why do you have to "implicitly say no" when you can "explicitly say so"? In short, you are hedging. Your argument about "symbol-manipulating" primitives is of course wrong.
-
An Intuition Machine does not need "symbol-manipulating primitives". Did AlphaZero have a "symbol-manipulating primitive" when it demolished the hand-crafted Stockfish logic engine? https://medium.com/intuitionmachine/alphazero-how-intuition-demolished-logic-66a4841e6810
#deeplearning #ai - 16 more replies
New conversation -