Direct feedback alignment for training neural networks: https://arxiv.org/abs/1609.01596 - can train NNs with random projections of the error signal.
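The core trick in the linked paper is easy to sketch: instead of backpropagating the output error through the transpose of the forward weights, each hidden layer receives the error through a fixed random matrix. Below is a minimal, illustrative numpy sketch on a toy regression task (the network size, learning rate, and variable names here are my own assumptions, not taken from the paper):

```python
import numpy as np

# Minimal sketch of direct feedback alignment (DFA): the hidden layer's
# teaching signal is the output error projected through a FIXED random
# matrix B, rather than through W2's transpose as in backprop.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 16, 1
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))   # random feedback weights, never trained

X = rng.normal(size=(128, n_in))
y = np.sin(X.sum(axis=1, keepdims=True))    # arbitrary smooth toy target

def forward(X):
    h = np.tanh(X @ W1.T)
    return h, h @ W2.T

_, y0 = forward(X)
loss_init = float(np.mean((y0 - y) ** 2))

lr = 0.05
for _ in range(500):
    h, y_hat = forward(X)
    e = y_hat - y                   # output error, shape (batch, n_out)
    dh = (e @ B.T) * (1.0 - h**2)   # DFA step: random projection of e, times tanh'
    W2 -= lr * (e.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)

_, y1 = forward(X)
loss_final = float(np.mean((y1 - y) ** 2))
```

The surprising part, and the point of the thread below, is that this learns at all: during training the forward weights tend to align with the fixed feedback matrix, so the random projection becomes a usable teaching signal.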
Replying to @fchollet
As measured against absolutely atrocious backprop baselines in most cases.
Replying to @dwf
That it works at all is interesting. How well it works is secondary, since this is not meant to be practical...
Replying to @fchollet
Is it that interesting? Geoff pointed out > a year ago that disjoint forward/backward paths work just fine.
Replying to @dwf
And we all know that implementing your backward pass with a bunch of bugs doesn't prevent learning. Still...
9:52 AM - 14 Sep 2016