Learning ≠ adaptivity, though. Better priors, if known, can indeed confer better performance. At an extreme, you hand-code the solution directly and have zero free parameters. But the more priors/structure a model has, the less it can be applied to different tasks (adaptability).
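Not from the thread itself, but a minimal sketch of the tradeoff being debated: a model with a correct prior (a linear hypothesis class, 2 free parameters) versus a more "blank slate" model (a degree-9 polynomial, 10 free parameters), both fit to the same 10 noisy samples of a linear function. The stronger prior typically extrapolates far better; the data, seed, and noise scale are all my own choices for illustration.

```python
import numpy as np

# Illustrative toy setup (not from the thread): 10 noisy samples of y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 10)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.shape)

linear_fit = np.polyfit(x, y, deg=1)    # strong prior: 2 free parameters
flexible_fit = np.polyfit(x, y, deg=9)  # weak prior: 10 free parameters

# Evaluate just outside the training range, where priors matter most.
x_test = np.array([-1.2, 1.2])
y_test = 2 * x_test + 1
err_linear = np.abs(np.polyval(linear_fit, x_test) - y_test).max()
err_flexible = np.abs(np.polyval(flexible_fit, x_test) - y_test).max()

print(err_linear < err_flexible)  # stronger prior generalizes better here
```

The flip side, which the original tweet is pointing at, is that the linear model cannot represent anything but lines: if the true function changes, the "blank slate" model can adapt while the strong-prior model cannot.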
Again, where is the evidence for this? If you have a prior that tells you that objects exist and persist in space-time vs. a pure blank slate, I strongly believe you will be better off. Without MCTS, AlphaGo wouldn't "adapt" to Go nearly as well.
- 3 more replies
New conversation
I agree there is a tradeoff. But what would constitute evidence? How would you quantify "number of innate priors"? BTW I think even in primates almost everything is innate. Maybe 99% in goats and 98% in primates, a 2x higher learned share, but still low. (Arbitrary units.) pic.twitter.com/DZtVnVgB30
Peter Marler, Steve Pinker, Noam Chomsky, Elizabeth Spelke and I have all argued that the tradeoff goes in the opposite direction: better priors -> better learning. Just like in @ylecun's 1989 tech report, which showed better results as the amount of built-in info increased: pic.twitter.com/a49Kr1LLZj
- 5 more replies
New conversation
It's tricky to comment on innate priors unless you know what neural process makes a prior innate. That process might be deeply involved with intelligence, or it might be completely unrelated.
- 2 more replies
New conversation
Also, his statement is incorrect on its face. As the # of specific behaviors increases, so does the diversity of situations that can be accommodated by that set of behaviors.
For reference, here is a link to one of the papers discussed in "Episode 49: How Important is Learning?", from which I clipped Fig. 2. No data supporting the tradeoff, but there is at least an evolutionary argument about why one would expect this. https://www.nature.com/articles/s41467-019-11786-6
More innate priors can't mean faster learning. We all behave reliably stupidly in the middle of town, leaping back automatically at a 'snake-like' twig on the pavement. Zero faster learning here. Just the contrary. Faster learning and less adaptivity? How's that?
Convolution is a perfect example of a prior that demonstrably improves learning in many circumstances; @ylecun's own first working paper on the topic showed this decisively. [1/2]
- 1 more reply
New conversation