3/ If you read the early papers (e.g. from @ylecun and Yoshua Bengio, http://www.iro.umontreal.ca/~lisa/bib/pub_subject/language/pointeurs/bengio+lecun-chapter2007.pdf), you see that DL is a research *program* with two key elements: 1) learning is better, so avoid built-in assumptions if you can; 2) use hierarchical, distributed representations trained end-to-end.
Exactly what you would expect in a neural network with winner-take-all localist output units...
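[Editor's note: a minimal sketch of the architecture being debated, assuming a toy two-layer network with made-up sizes. The hidden layer carries the hierarchical, distributed code trained end-to-end; the winner-take-all readout is the localist output the reply refers to. All names and numbers are illustrative, not from the thread.]

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes -- assumptions for illustration only.
n_in, n_hidden, n_out = 10, 32, 5

# Two-layer network: the hidden layer learns a *distributed*
# representation (activity spread over many units), trained end-to-end.
W1 = rng.normal(0, 0.5, (n_hidden, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hidden))

x = rng.normal(size=n_in)

h = np.tanh(W1 @ x)   # distributed hidden code: many units active at once
logits = W2 @ h

# "Winner-take-all localist output units": only the single most active
# output unit switches on, and each output unit stands for one category.
winner = np.argmax(logits)
y = np.zeros(n_out)
y[winner] = 1.0

print("fraction of hidden units substantially active:",
      np.mean(np.abs(h) > 0.1))
print("localist output (one-hot):", y)

The same network thus contains both kinds of code at once: distributed in the hidden layers, localist at the readout, which is why the two sides can point at the same model.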
-
No, I disagree. Given the sampling issues I discussed (a small % of stimuli and cells), it's actually a guarantee of a distributed code.
-
But how would the data look different? Given the sampling we can currently do, there is no pattern of data that you would accept as evidence against a distributed code.
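[Editor's note: a hypothetical back-of-the-envelope simulation of the sampling argument, with illustrative numbers that are assumptions, not from the thread. Under a strictly localist ground truth, recording a tiny fraction of cells and testing a tiny fraction of stimuli should almost never turn up a responsive cell, so routinely finding stimulus-selective cells is the pattern a distributed code predicts.]

import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers only (assumptions, not from the thread).
n_cells, n_stimuli = 1_000_000, 100_000  # population size, stimulus space
n_rec, n_test = 100, 100                 # cells recorded, stimuli shown

# Strictly localist ground truth: each cell fires for exactly one stimulus.
preferred = rng.integers(0, n_stimuli, size=n_cells)

trials, hits = 1000, 0
for _ in range(trials):
    rec = rng.integers(0, n_cells, size=n_rec)       # ~0.01% of cells
    shown = rng.integers(0, n_stimuli, size=n_test)  # ~0.1% of stimuli
    # A "hit" = a recorded cell whose preferred stimulus happened to be shown.
    hits += np.isin(preferred[rec], shown).sum()

# Analytic expectation: n_rec * n_test / n_stimuli = 0.1 hits per experiment.
print("mean responsive cells per simulated experiment:", hits / trials)

Under these assumptions a localist code predicts essentially zero responsive cells per experiment, which is one way to read the "guarantee" claim above, and also illustrates the falsifiability worry in the reply: at these sampling rates both codes are hard to tell apart from the data alone.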