“I’m not saying I want to forget deep learning... But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world.” - Yoshua Bengio, not unlike what I have been saying since 2012 in The New Yorker. https://www.technologyreview.com/s/612434/one-of-the-fathers-of-ai-is-worried-about-its-future/
Replying to @GaryMarcus
There are a couple of problems with this whole line of attack. 1) Saying it louder ≠ saying it first. You can't claim credit for differentiating between reasoning and pattern recognition. 2) Saying X doesn't solve Y is pretty easy. But where are your concrete solutions for Y?
Replying to @zacharylipton
But I did say this stuff first, in 2001, 2012, etc.? It's not about louder. And no, I don't know how to solve the problems, but I have pointed to specific directions that are finally getting some air (explicit operations over variables, in particular) that for a long time were dismissed.
Replying to @GaryMarcus
There's nothing new added to this conversation on account of deep vs. not deep. Basic questions about the limits of mining associations (vs. reasoning) have been plumbed far earlier and far deeper by Rubin, Robins, @yudapearl, no?
Replying to @zacharylipton @yudapearl
The basic question IMHO is symbol manipulation - do we need it or not? Two entirely different classes of problems. No real causal reasoning without it, but people like Hinton and LeCun dismiss it, and even ridicule it (e.g. https://sites.google.com/site/krr2015/home/schedule)
Do you think non-human animals do "real" causal reasoning?
This Tweet is unavailable.
Nothing in those findings seems to require anything like a classical GOFAI symbolic language of thought, though...
This Tweet is unavailable.
-
I don't need any persuading about animal cognition, just about the claim that it is accomplished with anything approximating classical symbolic AI.
If you can explain how a bee extrapolates the solar azimuth function to lighting conditions it hasn't seen before nonsymbolically, that would be a start. Generally, see page 149 of The Algebraic Mind and references there, plus chapter 3.