If you see the reasoning engine as *separate* (and qualitatively different) from the deep learning system that provides it with inputs, then we disagree. Unless this reasoning "system" is a pen and paper to do math/logic. And much of human intelligence functions without it.
Replying to @ylecun
That may distill a second point of disagreement; I see nothing wrong with (for some purposes) having separate systems for (eg) image classification vs reasoning. Certainly it is possible in principle to engineer systems that way; what’s your objection? Efficiencies of training?
Replying to @GaryMarcus @ylecun
I wonder if we are mixing different methodological questions here. If I need to build a robust and reliable system today, I would mix deep learning, probabilistic programming, and symbolic methods. 1/
But at the same time, I think it is critically important to see if we can push DL methods to provide a unified solution to perception, reasoning, and action (with robustness and safety). 2/
Indeed. I think @ylecun's position is that to reach human-level AI/AGI, pipeline systems (such as @GaryMarcus's GQ3) will never suffice. In other words, training DL-like systems to use logic/symbols/reasoning via gradient descent is likely to be more robust than a two-stage system.
What’s the argument that it would actually be robust? No fair overrelying on data from image recognition to explain how such techniques would solve fundamentally different problems (language, common sense, everyday reasoning) that so far haven’t yielded to the same approaches.
The ML community is working hard on the robustness question. We know that stochastic gradient descent sometimes confers robustness and other times fails miserably. We are exploring many avenues: causal models, adversarial training, improved models, etc.
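One of the avenues mentioned above, adversarial training, can be sketched on a toy one-parameter linear model (an illustration of mine, not code from the thread): perturb each input in the direction that increases the loss, then take gradient steps on the perturbed inputs.

```python
import math

# Toy sketch of adversarial training on a 1-D linear model y = w * x.
# Illustrative only; real adversarial training uses the same idea on
# deep networks and high-dimensional inputs.

def fgsm_example(w, x, y, eps=0.1):
    # Perturb the input in the direction that increases the squared
    # loss (fast-gradient-sign idea): dL/dx = 2 * w * (w*x - y).
    g = 2 * w * (w * x - y)
    return x + eps * math.copysign(1.0, g) if g != 0 else x

def adv_train(data, w=0.0, lr=0.05, eps=0.1, steps=200):
    # Gradient descent on the loss evaluated at perturbed inputs.
    for _ in range(steps):
        g = 0.0
        for x, y in data:
            xa = fgsm_example(w, x, y, eps)      # attack the input
            g += 2 * xa * (w * xa - y)           # dL/dw at the perturbed point
        w -= lr * g / len(data)
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x
w = adv_train(data)
```

The learned `w` hovers near the true value 2.0 but chatters slightly, since the perturbation flips direction as `w` crosses the optimum; that trade between clean accuracy and robustness is part of what the thread calls the open "robustness question."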
Note of course the difference between deep learning and ML; IMHO deep learning is useful, but just a subset of the final ML toolbox. Ultimately ML will need to encompass many techniques (some not yet invented) and integrate them all to achieve robustness.
DL = { models are modular & non-linear, minimizes an objective function, computes gradient "analytically", uses gradients to minimize }. (This applies to all learning paradigms: supervised, unsupervised,...) Which of these 4 pillars do you propose to replace, and by what?
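The four pillars listed above can be made concrete with a toy sketch (my illustration, not from the thread): a one-parameter model, an objective function, an analytically computed gradient, and gradient steps to minimize it.

```python
# Minimal sketch of the four "pillars" of DL on a toy 1-D model y = w * x.

def predict(w, x):
    # pillar 1: a parameterized (here trivially modular) model
    return w * x

def loss(w, data):
    # pillar 2: an objective function to minimize (mean squared error)
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # pillar 3: the gradient computed analytically, dL/dw = 2*x*(w*x - y)
    return sum(2 * x * (predict(w, x) - y) for x, y in data) / len(data)

def train(data, w=0.0, lr=0.1, steps=100):
    # pillar 4: use the gradient to minimize the objective
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x
w = train(data)
print(round(w, 3))  # prints 2.0
```

The symbolic supplement advocated downthread would sit outside this loop: the pillars say nothing about discrete rule application, which is exactly where the disagreement lies.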
@AvilaGarcez said what I would have said: not replace but supplement with tools for symbolic operations
Also, I would add that differentiability has many uses, but it's hard to see how to get it to work well with complex compositional sentences and ideas.
Replying to @GaryMarcus @ylecun
This model of space isn't compatible with compositionality: pic.twitter.com/ejAiuOcdf2
Replying to @zarzuelazen @GaryMarcus
This model of space is compatible with compositionality: pic.twitter.com/7ar75y8IVt