Been thinking about this a lot. Possibilities:
1. Bio-plausible learning is very architecture-dependent?
2. Similarly, too much is hardwired by evolution?
3. We have the wrong learning rules.
4. We're missing large chunks of the biological learning algorithm; STDP is the tail of the elephant.
https://twitter.com/butterflyarson/status/970693942218981376
-
Replying to @neuroecology
I mean... we don't even understand when and why good-old backprop "works". In fact it often doesn't until you add batchnorm and other tricks.
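For concreteness, a minimal NumPy sketch of the kind of "batchnorm trick" referred to above: pre-activations are standardized over the batch and rescaled with a learned gain and shift, which keeps the nonlinearity out of its saturated regime. Only the forward pass is shown, and the layer size, toy data scale, and tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

def batchnorm_forward(z, gamma, beta, eps=1e-5):
    """Standardize z (features x batch) over the batch, then rescale.

    gamma and beta are learned per-feature scale and shift parameters."""
    mu = z.mean(axis=1, keepdims=True)
    var = z.var(axis=1, keepdims=True)
    z_hat = (z - mu) / np.sqrt(var + eps)
    return gamma * z_hat + beta

rng = np.random.default_rng(0)
z = rng.normal(3.0, 10.0, size=(64, 32))           # badly scaled pre-activations (toy example)
gamma, beta = np.ones((64, 1)), np.zeros((64, 1))
h = np.tanh(batchnorm_forward(z, gamma, beta))     # tanh no longer saturates
```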
-
Replying to @ItsNeuronal
But it does work sometimes! I think Tim's point was that their biological rules, uh, basically don't.
-
Replying to @neuroecology @ItsNeuronal
1/ If we're talking about Tim Lillicrap's presentation here, I can clarify: Tim and I (and a few other people) got excited recently about Bengio's target propagation proposals, because they seem to address some of the biologically infeasible aspects of backprop while still allowing gradient descent.
-
2/ Long story short: targetprop ain't gonna cut it in its current form. It is too restrictive, and it doesn't follow the true gradients across a batch. We need something else... But it was a reasonable place to explore!
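For readers who haven't seen it, here is a minimal NumPy sketch of the difference-target-propagation idea being discussed (after Bengio's proposals): each hidden layer gets a target activation computed through a learned approximate inverse of the layer above, and then trains on a purely local squared error instead of a gradient carried through transposed weights. The layer sizes, tanh nonlinearity, step sizes, and toy regression data are illustrative assumptions, not the setup from the work Tim presented.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [30, 64, 32, 10]                                  # input, two hidden layers, output
lr_f, lr_g, lr_t = 0.05, 0.05, 0.1

# Forward weights f_l and learned approximate-inverse weights g_l.
Wf = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
Wg = [rng.normal(0, 0.1, (n, m)) for n, m in zip(sizes[1:-1], sizes[2:])]

def f(l, h): return np.tanh(Wf[l] @ h)                    # forward mapping of layer l
def g(l, h): return np.tanh(Wg[l] @ h)                    # approximate inverse of layer l + 1

x = rng.normal(size=(sizes[0], 16))                       # toy batch
y = rng.normal(size=(sizes[-1], 16))                      # toy regression targets

for step in range(200):
    # Forward pass, keeping every layer's activation.
    h = [x]
    for l in range(3):
        h.append(f(l, h[l]))

    # Top target: nudge the output toward lower squared-error loss.
    t = [None, None, None, h[3] - lr_t * (h[3] - y)]

    # Propagate targets downward with the "difference" correction.
    for l in (2, 1):
        t[l] = h[l] + g(l - 1, t[l + 1]) - g(l - 1, h[l + 1])

    # Local updates: each layer regresses onto its own target (no global gradient).
    for l in (2, 1, 0):
        a = f(l, h[l])
        err = (a - t[l + 1]) * (1.0 - a ** 2)
        Wf[l] -= lr_f * (err @ h[l].T) / x.shape[1]

    # Train each inverse to undo the corresponding forward layer (with input noise).
    for l in (0, 1):                                      # g_l should invert f_{l+1}
        inp = h[l + 1] + 0.1 * rng.normal(size=h[l + 1].shape)
        out = f(l + 1, inp)
        rec = np.tanh(Wg[l] @ out)
        gerr = (rec - inp) * (1.0 - rec ** 2)
        Wg[l] -= lr_g * (gerr @ out.T) / x.shape[1]

h_out = x
for l in range(3):
    h_out = f(l, h_out)
print("final output MSE:", float(np.mean((h_out - y) ** 2)))
```

The point in 2/ is visible here: every weight update is driven by a layer-local target rather than the true loss gradient over the batch.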
-
I hope y'all are trying all the others too... From the recent paper it wasn't clear whether feedback alignment, direct feedback alignment, equilibrium propagation, local representation alignment, synthetic gradients, or others would fare equally well, better, or worse on ImageNet...
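As an illustration of the first alternative in that list, a minimal feedback-alignment sketch: the backward pass uses a fixed random matrix B in place of the transposed forward weights, which removes the weight-transport requirement that makes exact backprop biologically awkward. Network size, toy data, and learning rate are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 20, 64, 5, 0.05

W1 = rng.normal(0, 0.1, (n_hid, n_in))       # forward weights, layer 1
W2 = rng.normal(0, 0.1, (n_out, n_hid))      # forward weights, layer 2
B  = rng.normal(0, 0.1, (n_hid, n_out))      # fixed random feedback weights (never trained)

x = rng.normal(size=(n_in, 32))              # toy batch
y = rng.normal(size=(n_out, 32))             # toy regression targets

for step in range(500):
    # Forward pass.
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                            # output error under squared-error loss

    # Backward pass: B replaces W2.T -- the only change relative to backprop.
    dh = (B @ e) * (1.0 - h ** 2)

    W2 -= lr * (e @ h.T) / x.shape[1]
    W1 -= lr * (dh @ x.T) / x.shape[1]

print("final MSE:", float(np.mean((W2 @ np.tanh(W1 @ x) - y) ** 2)))
```

Direct feedback alignment differs only in that the output error is projected straight to each hidden layer through its own fixed random matrix.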
-
Yes, indeed, we're working on that. Only difference target prop was examined in the work Tim presented at Cosyne. More to come in the future.
-
That's great! In that case, I'm far from saying biologically plausible backprop won't work on hard tasks. And anyway, ImageNet is obviously the wrong task, although I understand why you chose it. I want to see it on an unsupervised prediction method like https://arxiv.org/abs/1605.08104
-
Yes, totally: saying biologically realistic backprop doesn't scale up would be a mischaracterization of the findings to date. Difference target prop doesn't scale up, period. I take your point on tasks, but I think ImageNet should be solvable by a good algorithm...