Exactly what you would expect in a neural network with winner-take-all localist output units...
-
-
Replying to @GaryMarcus @apeyrache and
No, I disagree. Given the sampling issues I discussed (small % of stimuli and cells) it’s actually a guarantee of a distributed code.
-
Replying to @tyrell_turing @apeyrache and
but how would the data look different? given the sampling that we can currently do, there is no pattern of data that you would accept
-
Replying to @GaryMarcus @apeyrache and
If it was hard to find neurons in higher-order areas that responded to stimuli, and when they did they responded to only one (note: cells in these studies usually responded to multiple stimuli), that would at least not falsify a localist account. The current data does.
-
Replying to @tyrell_turing @apeyrache and
there are many connectionist models with localist output schemes (e.g. a node for cat, a node for dog, etc.) with variable activity levels that are thresholded by winner-take-all that behave exactly like this. localism is about what nodes stand for, not whether they have firing rates or real #s
-
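The scheme Marcus describes can be sketched in a few lines of NumPy (the labels and activation values are hypothetical, chosen only for illustration): each output node stands for one concept, every node carries a graded, real-valued activation, and a winner-take-all readout zeroes out everything but the maximum.

```python
import numpy as np

labels = ["cat", "dog", "bird"]          # each node stands for exactly one concept
activations = np.array([0.7, 0.9, 0.2])  # graded activity: nodes respond to degrees

def winner_take_all(acts):
    """Threshold a vector of graded activations so only the max node fires."""
    out = np.zeros_like(acts)
    out[np.argmax(acts)] = 1.0
    return out

out = winner_take_all(activations)
print(labels[int(np.argmax(out))])  # prints: dog
```

The point of the sketch: the code is still localist (one node per concept) even though every node has a real-valued firing level before thresholding.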
Replying to @GaryMarcus @apeyrache and
Those localist output schemes are kludgy, and NN modellers know that. A human’s output for ‘cat’ is very high D! And I never said it’s about firing rates: it’s the fact that cells respond to multiple distinct stimuli that disproves localism.
-
Replying to @tyrell_turing @apeyrache and
every node in alexnet etc is localist and responds to different degrees to multiple inputs. you are disregarding the whole field.
-
Replying to @GaryMarcus @apeyrache and
?? I'm sorry, I feel like we're speaking a different language now... AlexNet is not localist, per the definition I know. From Geoff's intro lectures (http://www.cs.toronto.edu/~bonner/courses/2014s/csc321/lectures/lec5.pdf): "Localist architectures dedicate one neuron to each thing". Neither AlexNet nor the brain do that.
-
Replying to @tyrell_turing @apeyrache and
not saying whole net is, but output nodes are exactly a localist code.
-
Replying to @GaryMarcus @apeyrache and
Right, we agree on that. But, (1) AlexNet (and other architectures) do that because it's an easy way to do classification, not because it's principled/advantageous, (2) the real brain doesn't do that, it uses a distributed code for categorical concepts.
-
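The distinction the two sides converge on can be made concrete in NumPy (hypothetical layer sizes and random weights, not AlexNet's actual parameters): the hidden layer is a distributed code, where any input activates many units and any unit responds to many inputs, while the classification head dedicates one output unit per class, which is the localist readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden layer: a distributed code -- many units are active for any input,
# and no single unit is dedicated to one concept.
hidden = np.maximum(rng.normal(size=256), 0.0)  # ReLU-style activations

# Classification head: one output unit per class. Each unit *is* dedicated
# to one thing -- this is the localist part of the architecture.
num_classes = 1000
W = rng.normal(size=(num_classes, 256)) * 0.01
logits = W @ hidden

# Softmax readout: the predicted class is whichever dedicated unit wins.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_class = int(np.argmax(probs))
```

The design choice Richards flags is visible here: the one-unit-per-class head is used because it makes supervised classification easy to train with a one-hot target, not because a localist readout is claimed to be how cortex represents categories.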
well maybe that is sheer conjecture
-
-
Replying to @GaryMarcus @tyrell_turing and
I prefer "well that's just like your opinion, man"
-