Direct feedback alignment for training neural networks: https://arxiv.org/abs/1609.01596 - can train NNs with random projections of the error signal.
That it works at all is interesting. How well it works is secondary, since this is not meant to be practical...
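To make the idea concrete, here is a minimal sketch of direct feedback alignment on a toy regression task (learning y = sum(x) with one hidden layer). All names and hyperparameters are illustrative assumptions, not the paper's setup: the only DFA-specific step is that the hidden-layer error signal is a fixed random projection `B @ e` of the output error, in place of the transposed forward weights `W2.T @ e` that backprop would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn y = sum(x) with one tanh hidden layer.
# Shapes and hyperparameters are illustrative, not from the paper.
n_in, n_hid, n_out = 8, 32, 1
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback matrix

def d_tanh(a):
    return 1.0 - np.tanh(a) ** 2

def mse(W1, W2, X, T):
    return float(((np.tanh(X @ W1.T) @ W2.T - T) ** 2).mean())

X_eval = rng.normal(size=(200, n_in))
T_eval = X_eval.sum(axis=1, keepdims=True)
loss_before = mse(W1, W2, X_eval, T_eval)

lr = 0.02
for step in range(5000):
    x = rng.normal(size=(n_in,))
    t = np.array([x.sum()])
    a1 = W1 @ x
    h = np.tanh(a1)
    y = W2 @ h
    e = y - t                      # output error
    # DFA: project the output error through fixed random B,
    # instead of backpropagating through W2.T.
    dh = (B @ e) * d_tanh(a1)
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

loss_after = mse(W1, W2, X_eval, T_eval)
```

On this toy problem the evaluation loss drops well below its initial value, which is the "it works at all" observation: the forward weights adapt to align with the fixed random feedback.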
-
Is it that interesting? Geoff pointed out > a year ago that disjoint forward/backward paths work just fine.
-
And we all know that implementing your backward pass with a bunch of bugs doesn't prevent learning. Still...
End of conversation
New conversation -
Given that feedback alignment works, it isn't surprising that this works too. No mention of ResNets or "deeply supervised" nets, both quite close.
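The distinction between feedback alignment and the direct variant only shows up with two or more hidden layers. A sketch, with assumed shapes and stand-in activations, of just the two backward passes: FA chains fixed random matrices layer by layer, while DFA projects the output error straight to every hidden layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Smallest case where FA and DFA differ: two hidden layers.
# All shapes and values here are stand-ins for illustration.
n_h1, n_h2, n_out = 5, 6, 3
e = rng.normal(size=(n_out,))   # output error, assumed given
a1 = rng.normal(size=(n_h1,))   # pre-activations, assumed given
a2 = rng.normal(size=(n_h2,))

def d_tanh(a):
    return 1.0 - np.tanh(a) ** 2

# Feedback alignment: error flows layer by layer through fixed
# random matrices (in place of the transposed forward weights).
B2 = rng.normal(size=(n_h2, n_out))
B1 = rng.normal(size=(n_h1, n_h2))
dh2_fa = (B2 @ e) * d_tanh(a2)
dh1_fa = (B1 @ dh2_fa) * d_tanh(a1)

# Direct feedback alignment: each hidden layer receives its own
# random projection of the *output* error, with no layer-to-layer chain.
D2 = rng.normal(size=(n_h2, n_out))
D1 = rng.normal(size=(n_h1, n_out))
dh2_dfa = (D2 @ e) * d_tanh(a2)
dh1_dfa = (D1 @ e) * d_tanh(a1)
```

The point of the comparison: in FA the signal reaching layer 1 still depends on layer 2's error, whereas in DFA each layer's feedback path is independent, which is what makes the result less surprising once FA is known to work.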