Open letter to @ylecun: I have been explicit that I believe that symbol-manipulation is part of the solution to AGI; Hinton has ridiculed that idea. Where do you fit in? With me? With Hinton? If in between, where? The field would benefit from a clear statement of your view. https://twitter.com/tabithagold/status/1070736319901519876
Replying to @GaryMarcus
I have expressed my position on this many, many times (including in the recent book "Architects of Intelligence"). But you still seem to misunderstand it every time and insist that we disagree, when we don't actually disagree that much. I'm tired of wasting my time... 1/2
Replying to @ylecun @GaryMarcus
... but here we go again: 1. Whatever we do, DL is part of the solution. 2. Hence reasoning will need to be compatible with DL. 3. That means using vectors instead of symbols, and differentiable functions instead of logic. 4. This will require new architectural concepts. 2/2
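To make point 3 concrete, here is a minimal sketch (my own toy Python example for illustration, not code LeCun has endorsed) of what "differentiable functions instead of logic" can mean: a hard boolean AND gives no gradient signal, while a product of soft truth values in [0, 1] does.

```python
def hard_and(a: bool, b: bool) -> bool:
    # Symbolic, hard logic: the output jumps between True and False,
    # so there is no useful gradient for learning.
    return a and b

def soft_and(a: float, b: float) -> float:
    # Differentiable surrogate: truth values live in [0, 1] and the product
    # is smooth, so gradient-based learning can adjust its inputs.
    return a * b

print(hard_and(True, False))  # False
print(soft_and(0.9, 0.4))     # 0.36; d(soft_and)/da = b = 0.4
```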
Replying to @ylecun @GaryMarcus
...continued 5. The argument that "DL sucks" is simply wrong. DL is gradient-based optimization of a multi-module system. That is not going away. 6. Supervised and reinforcement learning as they exist today are insufficient...
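Point 5's phrase "gradient-based optimization of a multi-module system" can be illustrated with a toy PyTorch sketch (module sizes and data are my own assumptions, purely illustrative): two separately defined modules are composed, and a single optimizer tunes both end to end.

```python
import torch
import torch.nn as nn

# Two separately defined modules; hypothetical sizes chosen for illustration.
encoder = nn.Sequential(nn.Linear(8, 16), nn.Tanh())
head = nn.Linear(16, 1)

# One gradient-based optimizer tunes the whole multi-module system jointly.
opt = torch.optim.SGD(list(encoder.parameters()) + list(head.parameters()), lr=0.1)

x = torch.randn(32, 8)   # toy inputs
y = torch.randn(32, 1)   # toy targets

for _ in range(100):
    loss = nn.functional.mse_loss(head(encoder(x)), y)
    opt.zero_grad()
    loss.backward()       # gradients flow through both modules
    opt.step()
```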
Replying to @ylecun @GaryMarcus
... 7. Something like self-supervised learning is necessary. Now.....
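As a rough illustration of "something like self-supervised learning" (a generic toy setup of my own, not a specific method from this thread): the training target is carved out of the unlabeled data itself, so no human annotation is involved.

```python
import torch
import torch.nn as nn

# Unlabeled data; the "label" is part of the input itself (here, the last
# feature), so the supervisory signal comes from the data, not from humans.
data = torch.randn(256, 8)
inputs, targets = data[:, :7], data[:, 7:]

model = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(200):
    pred = model(inputs)                        # predict the held-out feature
    loss = nn.functional.mse_loss(pred, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```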
Replying to @ylecun @GaryMarcus
The real questions are: Q1. Exactly how do we get DL systems to learn to reason? Q2. How do we use self-supervised learning to get machines to learn abstract representations of the world (call them symbols if you wish, but really patterns of activity of neural nets, aka vectors)?
Replying to @ylecun @GaryMarcus
Now, whether we actually agree or disagree depends entirely on the details of the answers to these Qs. Hence the pointlessness of the discussion and the necessity to work on answers. I've listed these Qs as the most important ones in AI in all my talks of the last 5 years....
Replying to @ylecun @GaryMarcus
...But they have existed for a very long time: since the early 90s for Q1 and since the early 80s for Q2. Now that the DL machinery works, and that so many people are working on both Qs, we have a shot at making real progress.
Replying to @ylecun @GaryMarcus
I guess the remaining questions for your position are: GQ1. Will DL be part of the solution? (You said yes.) GQ2. Do you agree with "vectors, not symbols; diff functions, not hard logic"? GQ3. If not, how do you propose we make reasoning compatible with DL?
Replying to @ylecun @GaryMarcus
(Twitter really sucks for such exchanges).
That’s why I wrote my Medium post last week; I may write another, because I think we ultimately managed a productive exchange that could be further clarified in longer form.
Replying to @GaryMarcus
Medium is for one-sided position statements. It is not a platform for such discussions. Facebook is much better for that.
Geoffrey Hinton explains the difference between symbolic AI and deep learning to great applause from the …