...continued 5. The argument that "DL sucks" is simply wrong. DL is gradient-based optimization of a multi-module system. That is not going away. 6. Supervised and reinforcement learning as they exist today are insufficient. ...
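To make "gradient-based optimization of a multi-module system" concrete, here is a minimal sketch, not from the thread; every name, shape, and number is illustrative. Two composed modules are trained end to end by backpropagating a single loss through both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "modules" composed into one differentiable system.
W1 = rng.normal(size=(4, 8)) * 0.1   # module 1: input -> hidden
W2 = rng.normal(size=(8, 1)) * 0.1   # module 2: hidden -> output

x = rng.normal(size=(32, 4))         # toy inputs
y = x @ rng.normal(size=(4, 1))      # toy targets

lr = 0.1
for step in range(200):
    h = np.tanh(x @ W1)              # module 1 forward
    pred = h @ W2                    # module 2 forward
    err = pred - y
    loss = (err ** 2).mean()

    # The chain rule threads one gradient signal through every module;
    # that is the whole gradient-based recipe being defended here.
    g_pred = 2 * err / len(x)
    g_W2 = h.T @ g_pred
    g_h = g_pred @ W2.T
    g_W1 = x.T @ (g_h * (1 - h ** 2))   # tanh'(a) = 1 - tanh(a)^2

    W2 -= lr * g_W2
    W1 -= lr * g_W1

print(f"final loss: {loss:.4f}")
```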
Replying to @ylecun @GaryMarcus
... 7. Something like self-supervised learning is necessary. Now...
Replying to @ylecun @GaryMarcus
The real questions are Q1. Exactly how do we get DL systems to learn to reason? Q2. How do we use self-supervised learning to get machines to learn abstract representations of the world (call them symbols if you wish, but really patterns of activity of neural nets, aka vectors)?
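A hedged sketch of the self-supervised idea behind Q2 (illustrative only; nothing below is from the thread): the training signal is carved out of the data itself by masking part of each input and learning a vector representation that predicts the masked part. No labels are involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data lying near a 2-d subspace of R^5, plus a little noise.
basis = rng.normal(size=(2, 5))
data = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 5))

# Self-supervision: hide the last coordinate, predict it from the rest.
x_vis, x_hid = data[:, :4], data[:, 4:]

enc = rng.normal(size=(4, 2)) * 0.1   # visible input -> 2-d representation
dec = rng.normal(size=(2, 1)) * 0.1   # representation -> masked coordinate

lr = 0.05
for step in range(500):
    z = x_vis @ enc                   # the learned "abstract" vector
    pred = z @ dec
    err = pred - x_hid
    g_pred = 2 * err / len(data)
    g_dec = z.T @ g_pred
    g_enc = x_vis.T @ (g_pred @ dec.T)
    dec -= lr * g_dec
    enc -= lr * g_enc

print(f"reconstruction loss: {(err ** 2).mean():.5f}")
```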
Replying to @ylecun @GaryMarcus
Now, whether we actually agree or disagree depends entirely on the details of the answers to these Qs. Hence the pointlessness of the discussion and the necessity to work on answers. I've listed these Qs as the most important ones in AI in all my talks over the last 5 years...
Replying to @ylecun @GaryMarcus
...But they have existed for a very long time: since the early 90s for Q1 and since the early 80s for Q2. Now that the DL machinery works and so many people are working on both Qs, we have a shot at making real progress.
Replying to @ylecun @GaryMarcus
I guess the remaining questions for your position are: GQ1. Will DL be part of the solution? (you said yes) GQ2. Do you agree with "vectors, not symbols; diff functions, not hard logic"? GQ3. If not, how do you propose we make reasoning compatible with DL?
Replying to @ylecun
GQ1. DL is part of the solution. GQ2. "Vectors, not symbols" is a false dichotomy. Diff functions: yes, in part. Operations à la logic we do need (contra your view). GQ3. Outputs of deep learning may serve as input to reasoning; symbolic techniques are needed for some inferences.
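As a hedged illustration of this GQ3 answer (a sketch under invented assumptions, not either participant's actual system): a stubbed perceive() stands in for a trained deep net, and the inference step is ordinary symbolic logic over its outputs.

```python
# "DL output feeds a reasoner": perceive() is a hypothetical stand-in
# for a neural detector; reason() is plain symbolic inference.

def perceive(image):
    """Stand-in for a deep net; returns (symbol, confidence) detections."""
    # Hypothetical output, for illustration only.
    return [("cat", 0.92), ("sofa", 0.88)]

# Symbolic background knowledge: simple Horn-style rules
# mapping premise tuples to a conclusion.
RULES = {
    ("cat", "sofa"): "cat_is_indoors",
}

def reason(detections, threshold=0.5):
    facts = {sym for sym, conf in detections if conf >= threshold}
    inferred = {concl for premises, concl in RULES.items()
                if set(premises) <= facts}
    return facts | inferred

print(reason(perceive(None)))
# -> {'cat', 'sofa', 'cat_is_indoors'}
```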
Replying to @GaryMarcus
Most humans don't actually do much that resembles your answer to GQ3, except a small number of humans using pen and paper, and only in the last couple of millennia. Right now, we need to get machines to the level of a house cat. Never mind symbolic mathematics and formal logic.
Replying to @ylecun
I don’t argue that people can’t use formalisms that involve derivatives just because most people can’t explicitly explain what a derivative is. I think you are confusing the formal, conscious use of a certain kind of machinery with what the brain does unconsciously.
Replying to @GaryMarcus
If you see the reasoning engine as *separate* (and qualitatively different) from the deep learning system that provides it with inputs, then we disagree. Unless this reasoning "system" is a pen and paper to do math/logic. And much of human intelligence functions without it.
That may distill a second point of disagreement; I see nothing wrong with (for some purposes) having separate systems for (e.g.) image classification vs. reasoning. Certainly it is possible in principle to engineer systems that way; what’s your objection? Efficiencies of training?
Replying to @GaryMarcus @ylecun
I wonder if we are mixing different methodological questions here. If I needed to build a robust and reliable system today, I would mix deep learning, probabilistic programming, and symbolic methods. 1/
But at the same time, I think it is critically important to see if we can push DL methods to provide a unified solution to perception, reasoning, and action (with robustness and safety). 2/
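One hedged reading of that three-way "mix" (purely illustrative; every function, name, and number below is invented): a stubbed neural detector supplies a likelihood, Bayes' rule turns it into a calibrated posterior, and a hard symbolic constraint can veto the decision outright.

```python
# Toy hybrid: (stub) deep learning + Bayesian update + symbolic constraint.

def neural_score(sensor_reading):
    """Stand-in for a trained net: P(reading | obstacle)."""
    return 0.9 if sensor_reading > 0.5 else 0.1

def posterior(likelihood, prior=0.2):
    """Bayes: P(obstacle | reading).

    Toy assumption: P(reading | no obstacle) = 1 - P(reading | obstacle).
    """
    evidence = likelihood * prior + (1 - likelihood) * (1 - prior)
    return likelihood * prior / evidence

def decide(sensor_reading, door_is_closed):
    p = posterior(neural_score(sensor_reading))
    # Symbolic constraint: a closed door blocks the path no matter
    # what the perception stack believes.
    if door_is_closed:
        return "stop"
    return "stop" if p > 0.5 else "go"

print(decide(0.8, door_is_closed=False))  # decided by the posterior
print(decide(0.1, door_is_closed=True))   # "stop": the symbolic veto wins
```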