It's also technically possible to train deep learning models to perform specific symbolic tasks (say, arithmetic, sorting...) using *few* data points, if you hard-code that task in the architecture space (replacing learning with a prior)...
Neither "train on a dense sampling of the data" nor "practically hard-code a solution template and then tune a few parameters using SGD" is a particularly good option.
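A minimal sketch of the second option, using a hypothetical toy task (not from the thread): for two-number addition, the "architecture" is frozen to the template y = w * (a + b), so learning collapses to fitting a single scalar w with SGD from just three examples.

```python
import numpy as np

# Hypothetical illustration of "hard-code a solution template, then tune a
# few parameters using SGD". The structure of the task (summing the inputs)
# is built into the model; only the scale w is learned.

X = np.array([[1.0, 2.0], [3.0, 5.0], [10.0, 7.0]])  # only three examples
y = X.sum(axis=1)                                     # ground-truth sums

w = 0.1    # the single free parameter
lr = 1e-3  # step size for plain gradient descent

for _ in range(500):
    s = X.sum(axis=1)                    # the hard-coded structural prior
    pred = w * s                         # template output
    grad = 2 * np.mean((pred - y) * s)   # d(MSE)/dw
    w -= lr * grad

print(round(w, 3))  # w converges to 1.0, recovering exact addition
```

Because the template already encodes the task's structure, three samples suffice where an unconstrained network would need dense sampling of the input space; that is the prior-versus-data trade-off being described.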
New conversation
"In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them." — https://blog.keras.io/the-limitations-of-deep-learning.html
New conversation
To pattern or not to pattern, that is the question.
In what terms, then, can we say that #prolog did not succeed in understanding and reasoning, given that the knowledge was built a priori and all the patterns were potentially known?
This Tweet is unavailable.

This Tweet is unavailable.
From a practical perspective, that's correct. However, calling what happens inside a neural net "reasoning" is an anthropomorphization. Reasoning, by definition, requires logic and sensible thinking.
And what is the specific cell in your brain that performs these?