This point has always been known. But again, it is a limitation of supervised learning, not of the architecture (deep or not). Geoff Hinton's focus on unsupervised learning for the last 40 years (and mine for the last 20) stems from this.
Replying to @ylecun @GaryMarcus
I agree it's a limitation of supervised learning *as conceived today in ML*. But schools teach wetware pupils every day, and that's supervised learning, which implies to me that what matters is the choice of algorithms and how you put them together (architecture).
Replying to @titudeadjust @GaryMarcus
School learning is only partially supervised. Schooling teaches a tiny amount of high-level knowledge. The vast majority of our knowledge is acquired early and in a self-supervised manner. Without this background knowledge, schooling would be ineffective. See The Cake.
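The self-supervised learning invoked above can be sketched in miniature: there are no labels, because the "target" is a held-out part of the input itself. The toy data and dimensions below are illustrative assumptions, not anything from the thread — a linear least-squares predictor stands in for the learner.

```python
import numpy as np

# Self-supervised learning in miniature: predict a masked feature of
# each input vector from its remaining features. No external labels.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))      # hidden structure in the "world"
mix = rng.normal(size=(2, 5))
x = latent @ mix                        # observed 5-D inputs, correlated

visible, masked = x[:, :4], x[:, 4]     # mask the last feature
w, *_ = np.linalg.lstsq(visible, masked, rcond=None)
pred = visible @ w

# Because the features share latent structure, the masked part is
# recoverable from the rest -- the signal self-supervision exploits.
err = np.mean((pred - masked) ** 2)
print(f"reconstruction MSE: {err:.6f}")
```

Since the observed features all derive from the same low-dimensional latent structure, the masked feature is (here, exactly) predictable from the visible ones; background knowledge acquired this way is what the later, supervised "schooling" builds on.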
Replying to @ylecun @GaryMarcus
I think you just made the point @GaryMarcus is making. And mine. To me, the architecture of wetware is set by Evolution and that early learning. Semi-supervised and supervised learning thereafter is tuning. Much ML pre-supposes the algorithms in the former and jumps to the latter.
Those of us interested in the fundamental questions view Evolution also as a learning process. Appealing to innateness is appealing to a prior learning process. That process involves interactions with the world and with other people, virtually all unsupervised.
Those who care about nature and nurture and the relationship between the two bridle at collapsing them as if they were identical.
They aren't identical, but similar principles should apply to both. Importantly, it is very difficult for us to study our evolutionary past whereas it is much easier to study the results (what is "innate" in a child) and how learning happens in the individual.
Replying to @tdietterich @GaryMarcus and
An advantage of "pure" computational methods is that we can study both. As we've discussed before, I agree that understanding what is innate and what is learned in the individual are central research questions.
Replying to @tdietterich @GaryMarcus and
Innateness should filter for essential properties of the environment, applicable to the vast majority of tasks. Spatiotemporal consistency is a reasonable innateness criterion in the real world. Unfortunately, that one is not taken into account in current DNN architectures.
Replying to @Abel_TorresM @tdietterich and
Spatiotemporal consistency (or correlation) is precisely the property that is exploited by ConvNets. Not sure why you say it's "not taken into account in current DNN architectures". Also, lots of papers on exploiting temporal consistency in vision.
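The claim that ConvNets exploit spatial consistency can be made concrete with weight sharing: the same small filter slides over the whole input, so a local pattern produces the same response wherever it occurs. Below is a minimal 1-D sketch (the filter and signals are illustrative assumptions) showing the resulting translation equivariance.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid cross-correlation, as in a conv layer's forward pass:
    one shared kernel applied at every position."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

kernel = np.array([1.0, -1.0])    # a tiny edge detector
a = np.zeros(10); a[2] = 1.0      # spike at position 2
b = np.zeros(10); b[6] = 1.0      # the same spike, shifted by 4

ra, rb = conv1d(a, kernel), conv1d(b, kernel)
# Translation equivariance: shifting the input shifts the output.
print(np.allclose(ra[:len(ra) - 4], rb[4:]))  # -> True
```

Because the kernel's weights are shared across positions, the network does not need to relearn the same feature at every location — exactly the spatial-consistency prior under discussion, baked into the architecture.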
@ylecun the chapter on common sense in Rebooting AI is extremely relevant here; it shows many types of examples that go beyond the spatial, temporal, and causal reasoning capacities not only of CNNs but also of all other extant architectures.