Can deep neural networks learn everything and reproduce any cognitive ability? No matter your opinion on the question, @GaryMarcus's counter-argument is thought-provoking. The 2001 book is a great (but longer) read. https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695
Replying to @apeyrache
1/ I'll preface this small reply thread with the following: I'm responding to the linked Medium post, not @GaryMarcus's 2001 book, which I haven't had a chance to read yet (though it's on my continuously expanding list).
2/ What is deep learning? Is Yoshua calling for hybrid systems that utilize non-deep, traditional symbol-processing mechanisms? No, he's not. Many people, including @GaryMarcus, have a tendency to associate deep learning with its most prevalent form, but it is a far more general approach.
3/ If you read the early papers (e.g. from @ylecun and Yoshua http://www.iro.umontreal.ca/~lisa/bib/pub_subject/language/pointeurs/bengio+lecun-chapter2007.pdf), you see DL is a research *program* with two key elements: 1) learning is better, avoid built-in assumptions if you can; 2) use hierarchical, distributed representations trained end-to-end.
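(For concreteness, here's a minimal sketch, in PyTorch, of what those two elements look like in code. It's my own toy illustration, not anything from the linked chapter; the layer sizes, data, and training settings are all made up.)

```python
# Toy sketch of the two elements above: hierarchical, distributed
# representations trained end-to-end, with minimal built-in assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(              # a hierarchy of layers
    nn.Linear(32, 64), nn.ReLU(),   # layer 1: distributed features
    nn.Linear(64, 64), nn.ReLU(),   # layer 2: features composed from layer 1
    nn.Linear(64, 10),              # task readout
)

x = torch.randn(256, 32)            # toy inputs
y = torch.randint(0, 10, (256,))    # toy labels

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)     # end-to-end: one error signal shapes
    loss.backward()                 # every layer's representation; nothing
    opt.step()                      # task-specific is hand-built in
```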
4/ Supervised training of NNs with backprop fits into that program, but does not define it. What Yoshua has been saying, as in the interview @GaryMarcus quoted, is that we want things like causal reasoning, but we want to *learn* it using distributed, hierarchical systems.
5/ My reaction to @GaryMarcus's call for hybrid systems in that essay is that he doesn't sufficiently recognize that good hybrid systems will (1) represent P and Q in a distributed fashion, and (2) *learn* P and Q and the form of their relationship. That's surely how brains do it.
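(To make (1) and (2) concrete, here's a hypothetical sketch of what such a learned relational component could look like. This is purely my illustration, not a system anyone in the thread has proposed; the entity count, embedding dimension, and relation network are arbitrary.)

```python
import torch
import torch.nn as nn

n_entities, emb_dim = 100, 16       # arbitrary sizes for illustration

class LearnedRelation(nn.Module):
    def __init__(self):
        super().__init__()
        # (1) P and Q get learned, distributed representations
        self.embed = nn.Embedding(n_entities, emb_dim)
        # (2) the form of their relationship is also a learned function
        self.relation = nn.Sequential(
            nn.Linear(2 * emb_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, p_idx, q_idx):
        p, q = self.embed(p_idx), self.embed(q_idx)
        return self.relation(torch.cat([p, q], dim=-1))  # score: "P relates to Q"

model = LearnedRelation()
score = model(torch.tensor([3]), torch.tensor([7]))  # toy query on entities 3 and 7
```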
“Surely” seems a bit strong here; what's your logic? I did allow in the Medium essay for embeddings (distributed codes) as a way of representing instances of variables, and said that was more modern than my 2001 proposal.
I think “surely” is appropriate here, actually. On (1): that the brain uses distributed codes is well established empirically and not really a question anymore. On (2): most relational reasoning is learned. There may be some relations or concepts that are innate, but they're limited, IMO.
2. Distributed codes for motor, yes; but does the brain exclusively use distributed codes? Why the Oprah neurons?
Quite. In fact, the data explicitly disproves any kind of grandmother cells. Consider for a moment the “Jennifer Aniston cells” that were reported in the media several years ago: