Open letter to @ylecun: I have been explicit that I believe that symbol-manipulation is part of the solution to AGI; Hinton has ridiculed that idea. Where do you fit in? With me? With Hinton? If in between, where? The field would benefit from a clear statement of your view. https://twitter.com/tabithagold/status/1070736319901519876 …
Replying to @GaryMarcus
I have expressed my position on this many, many times (including in the recent book "Architects of Intelligence"). But you still seem to misunderstand it every time and insist that we disagree, when we don't actually disagree that much. I'm tired of wasting my time... 1/2
Replying to @ylecun @GaryMarcus
... but here we go again:
1. Whatever we do, DL is part of the solution.
2. Hence reasoning will need to be compatible with DL.
3. That means using vectors instead of symbols, and differentiable functions instead of logic.
4. This will require new architectural concepts. 2/2
Replying to @ylecun @GaryMarcus
@ylecun why is it so important to have differentiable functions?
Replying to @stenichele @GaryMarcus
Without that you can't do gradient-based learning. You could give up differentiability, but then you would have to use gradient-free (zeroth order) optimization for learning, which is horribly inefficient.
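A minimal sketch of that contrast (not from the thread; the toy model, data, and learning rate are all illustrative): fitting one parameter with the exact analytic gradient versus a finite-difference (zeroth-order) estimate, which needs extra loss evaluations per parameter and is what makes gradient-free learning inefficient at scale.

```python
# Illustrative sketch: why differentiability helps.
# Fit w so that f(x) = w * x matches targets, comparing an analytic
# gradient step with a zeroth-order (finite-difference) estimate.

def loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad_analytic(w, xs, ys):
    # Exact gradient, available because the loss is differentiable in w.
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def grad_zeroth_order(w, xs, ys, eps=1e-4):
    # Gradient-free estimate: probe the loss at nearby points.
    # With n parameters this needs O(n) extra evaluations per step.
    return (loss(w + eps, xs, ys) - loss(w - eps, xs, ys)) / (2 * eps)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]    # true w is 2.0
w = 0.0
for _ in range(100):
    w -= 0.05 * grad_analytic(w, xs, ys)      # or grad_zeroth_order(...)
print(round(w, 3))                            # ~2.0
```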
Replying to @ylecun @stenichele
Serious question: when a duckling imprints on its mother or an alternative stimulus, is it doing gradient descent? Or setting the value of a variable? Both? Neither? When you type your name into a web browser's name field and it learns your name, is that gradient descent?
Replying to @GaryMarcus @stenichele
One-shot learning can be done very simply by storing a template in the weights of a group of neurons. Our hippocampus does this all the time. Call this "setting a variable" if that makes you happy.
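A minimal sketch of that idea, storing a stimulus as the weights of a unit in a single exposure and recognizing it later by similarity; the class name, dimensions, and stimuli are invented for illustration, not anyone's actual model:

```python
import numpy as np

# Illustrative sketch: one-shot learning by copying a stimulus into the
# weights of a group of "neurons", then recognizing it by cosine similarity.

class TemplateUnit:
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def imprint(self, stimulus):
        # One exposure is enough: the weights become the template.
        self.w = np.asarray(stimulus, dtype=float)
        self.w /= np.linalg.norm(self.w) + 1e-12

    def match(self, stimulus):
        # Similarity between the stored template and a new input.
        s = np.asarray(stimulus, dtype=float)
        return float(self.w @ s / (np.linalg.norm(s) + 1e-12))

unit = TemplateUnit(dim=4)
unit.imprint([1.0, 0.0, 1.0, 0.0])        # the imprinted stimulus
print(unit.match([1.0, 0.0, 1.0, 0.0]))   # ~1.0, strong match
print(unit.match([0.0, 1.0, 0.0, 1.0]))   # ~0.0, weak match
```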
It does make me happy! :) Add some operations that allow you to store, retrieve, compare, concatenate, and otherwise operate over the values of those variables, and I will be really happy!
Replying to @GaryMarcus @stenichele
Pretty much what a key-value memory network does. It's a kind of differentiable memory. https://arxiv.org/abs/1606.03126
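A toy sketch of the differentiable lookup at the heart of a key-value memory: a softmax over key similarities replaces exact symbolic addressing, so retrieval stays differentiable and gradients can flow through it. This is a simplification for illustration, not the implementation from the linked paper:

```python
import numpy as np

# Toy sketch of a differentiable key-value memory read.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_read(query, keys, values):
    scores = keys @ query              # similarity of the query to each key
    weights = softmax(scores)          # soft, differentiable "addressing"
    return weights @ values            # weighted blend of stored values

keys = np.eye(3)                       # three stored slots
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
query = np.array([5.0, 0.0, 0.0])      # strongly matches slot 0
print(kv_read(query, keys, values))    # ~[1.0, 0.0], mostly slot 0's value
```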
Geoffrey Hinton explains the difference between symbolic AI and deep learning to great applause from the