Our new paper is up @BadreLab https://twitter.com/biorxiv_neursci/status/852642700650217472
-
Nice one! We did work with deep nets speculating that regions further downstream are harder to decode: http://dx.doi.org/10.1101/071076 and http://dx.doi.org/10.7554/eLife.21397
-
Thanks. Was not aware of the work. Will take a look. Do you show why this is so?
-
Yes, we showed that as you move deeper, representations become increasingly orthogonal; see the paper for more: https://elifesciences.org/content/6/e21397
-
Minor differences in representations get magnified by every quasi-linear transformation in a network. Traversing one layer pushes every item slightly toward an arbitrary corner of a high-dimensional weight space, which is fine on its own, but traversing many layers compounds the effect, making every item orthogonal to every other. Training in deep learning networks (e.g., for object recognition tasks) does not ameliorate the problem; it creates equivalence classes for items sharing the same label, such that tigers will be similar to one another, but no more similar to lions than to mopeds.
I hope that helps! Twitter is not the best for long discussions! 
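[Editor's illustration] A minimal simulation of the compounding effect described above. This is a sketch under assumptions, not code from the paper: random Gaussian weights and a tanh squash with gain above 1 stand in for the quasi-linear transformations, and dim, depth, and gain are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def cos_sim(a, b):
        # Cosine similarity between two representation vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    dim, depth, gain = 512, 30, 2.5  # width, layers, weight gain (illustrative choices)

    # Two items whose representations start out nearly identical.
    x = rng.standard_normal(dim)
    y = x + 0.2 * rng.standard_normal(dim)

    print(f"layer  0: cosine = {cos_sim(x, y):.3f}")
    for layer in range(1, depth + 1):
        # One quasi-linear transformation: random weights, then a squashing
        # nonlinearity that pushes activity toward a corner of the hypercube.
        W = rng.standard_normal((dim, dim)) * gain / np.sqrt(dim)
        x, y = np.tanh(W @ x), np.tanh(W @ y)
        if layer % 5 == 0:
            print(f"layer {layer:2d}: cosine = {cos_sim(x, y):.3f}")

The cosine similarity between the two items starts near 1 and decays toward 0 with depth. With the gain at or below 1, the same loop instead makes the items more similar; the per-layer magnification of small differences is what depth compounds.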
-
Thanks! More once I've read your paper!