Deep learning looks more and more like a scientific revolution in the sense of Thomas Kuhn. As exciting as seeing the engineering successes is experiencing first-hand how a research field goes through a "paradigm shift".
I'm happy that you do; for one thing, your work helps make the point that it is not obvious that neural networks can learn such things, and this helps explain the relevance of some of our results. But I disagree that NNs are fundamentally unable to learn rules, variables, 1/n
tree structure, or compositionality, at least under reasonable definitions. Not reasonable: picking your favorite symbolic system, checking whether a NN learns exactly that, and interpreting failure as evidence that the whole class cannot be learned.