The researchers were recording from a *very* small subset of all neurons in the temporal lobes, and presenting people with a *very, very* small set of stimuli from the set of all possible stimuli. Yet, they found cells that responded to their stimuli. What does that tell us?
Replying to @tyrell_turing @neuroecology
Well, Rodrigo Quian Quiroga explained it very clearly. The Jennifer Aniston cell was actually a “Rachel in Friends” cell that also fired, at a lower rate, for Monica and Phoebe.
Replying to @apeyrache @tyrell_turing
Exactly what you would expect in a neural network with winner-take-all localist output units...
Replying to @GaryMarcus @apeyrache
No, I disagree. Given the sampling issues I discussed (small % of stimuli and cells) it’s actually a guarantee of a distributed code.
Replying to @tyrell_turing @apeyrache
but how would the data look different? given the sampling that we can currently do, there is no pattern of data that you would accept
Replying to @GaryMarcus @apeyrache
If it were hard to find neurons in higher-order areas that responded to stimuli, and when they did they responded to only one (note: cells in these studies usually responded to multiple stimuli), that would at least not falsify a localist account. The current data does.
Replying to @tyrell_turing @apeyrache
there are many connectionist models with localist output schemes (e.g. a node for cat, a node for dog, etc.) w variable activity levels that are thresholded by winner-take-all that behave exactly like this. localism is about what nodes stand for, not whether they have firing rates or real #s
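The scheme described above can be sketched in a few lines. This is a toy illustration only (the labels, activation values, and threshold are made up for the example, not drawn from any model in the thread): each output node stands for one label, nodes show graded activity to multiple inputs, but a winner-take-all readout keeps only the single most active node above threshold.

```python
# Hypothetical localist output layer: index i stands for one label.
labels = ["cat", "dog", "bird"]

def winner_take_all(activations, threshold=0.5):
    """Keep only the single most active node, if it clears the threshold."""
    winner = max(range(len(activations)), key=lambda i: activations[i])
    out = [0.0] * len(activations)
    if activations[winner] >= threshold:
        out[winner] = 1.0
        return out, labels[winner]
    return out, None

# Graded activity: the "cat" node fires hardest, but the "dog" and
# "bird" nodes fire too -- analogous to a cell that responds most to
# one stimulus while still responding, more weakly, to others.
out, label = winner_take_all([0.9, 0.4, 0.2])
```

Under this readout the graded responses are invisible downstream, which is why one could argue such data are compatible with a localist interpretation.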
Replying to @GaryMarcus @apeyrache
Those localist output schemes are kludgy, and NN modellers know that. A human’s output for ‘cat’ is very high D! And I never said it’s about firing rates: it’s the fact that cells respond to multiple distinct stimuli that disproves localism.
Replying to @tyrell_turing @apeyrache
every node in alexnet etc is localist and responds to different degrees to multiple inputs. you are disregarding the whole field.
Replying to @GaryMarcus @apeyrache
?? I'm sorry, I feel like we're speaking a different language now... AlexNet is not localist, per the definition I know. From Geoff's intro lectures (http://www.cs.toronto.edu/~bonner/courses/2014s/csc321/lectures/lec5.pdf): "Localist architectures dedicate one neuron to each thing". Neither AlexNet nor the brain does that.
“AlexNet...solves...image classification where the input is an image of one of 1000...classes (e.g. cats, dogs etc.) and the output is a vector of 1000 numbers. The ith element of the output vector is interpreted as the probability that the input image belongs to the ith class.”
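The quoted output head can be sketched concretely. This is a toy 5-class version (the logit values are invented for illustration, not AlexNet's real 1000-way output): the final layer produces one score per class, and a softmax turns the scores into a probability vector whose ith entry is read as the probability that the input belongs to class i.

```python
import math

def softmax(logits):
    """Map raw class scores to a probability vector that sums to 1."""
    m = max(logits)                           # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for 5 classes (AlexNet would emit 1000 of these).
logits = [2.0, 0.5, -1.0, 0.1, 1.2]
probs = softmax(logits)
predicted = probs.index(max(probs))           # argmax gives the predicted class
```

Note that every entry of the vector is nonzero: the network expresses graded support for all classes at once, which is the sense in which its output is distributed rather than one-neuron-per-thing.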