your "billions of $ later" comment is misleading: most DL researchers weren't working on language or compositionality in language (admittedly hard problems). there have been lots of success stories in DL (e.g., speech recognition), where the money has been well spent
so you think none of that was relevant to language after all? certainly not portrayed that way
- 2 more replies
New conversation
With deepest respect, I am not sure such an "unhealthy" debate is useful for our community. We are all interested in developing learning rules that generalize better. So why have these unhealthy discussions? Why Bengio vs. Marcus? It could be Bengio AND Marcus.
The scientific community relies on peer feedback for progress. So why not use this opportunity to make progress TOGETHER? After all, we are both interested in solving similar problems.
- 2 more replies
New conversation
Do you think there are potential NN solutions lurking in the historic (but neglected) literature, in addition to analyses of backprop-type problems? I was always fond of Shastri & Ajjanagadde's (1993) model of compositional inference. It was localist, of course, which may help.
On a related topic, I’m fairly sure that BP/DL has not properly coped with catastrophic interference yet, a problem identified at least 40 years ago (as Grossberg’s stability-plasticity dilemma). The two problems (interference and compositionality) are very likely related.
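To make the interference point concrete, here is a minimal sketch (my own illustration, not from the thread; the toy tasks, model, and hyperparameters are assumptions) showing the effect with a small scikit-learn MLP: a network trained with backprop on task A and then trained further only on task B typically loses most of its task-A accuracy.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_task(angle):
    # Toy binary task: label points by which side of a rotated line they fall on.
    X = rng.normal(size=(2000, 2))
    w = np.array([np.cos(angle), np.sin(angle)])
    y = (X @ w > 0).astype(int)
    return X, y

X_a, y_a = make_task(0.0)         # task A: split on the first coordinate
X_b, y_b = make_task(np.pi / 2)   # task B: split on the second coordinate

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X_a, y_a)
print("task A accuracy after training on A:", clf.score(X_a, y_a))

# Keep training the same weights on task B only, with no task-A rehearsal.
for _ in range(200):
    clf.partial_fit(X_b, y_b)

print("task A accuracy after training on B:", clf.score(X_a, y_a))  # typically near chance
print("task B accuracy:", clf.score(X_b, y_b))

The usual mitigations (rehearsal of old data, penalties that keep weights near their task-A values, or architectural constraints) have to be added on top; plain backprop has no built-in answer to the stability-plasticity trade-off.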
- 3 more replies
New conversation
I have a slightly different view of the future. The future is AGI developed from first principles, which will use deep learning whenever IT NEEDS IT. So DL becomes a tool for (rather than an augmentation of) the AGI. That's the vision with which I started on Vicki, and it only became certain recently.
As I always say - we can say we have achieved true AGI not when shop-floor workers start losing their jobs but when AI/data scientists fear losing theirs.
End of conversation
New conversation
Is the answer hiding in plain sight? David Marr gave a framework for vision 40 years ago. True, he didn't have the representation problem solved, and he missed backpropagation in his one-directional pipeline. But the model provided a plausible mechanism for higher levels of abstraction.
And, just recently out, there's now an end-to-end trainable framework for what Marr hypothesised that outperforms all else :-) http://marrnet.csail.mit.edu/
End of conversation