Deep neural networks have seen a surge in popularity in neuroscience and psychology, where they are used as a modelling framework to understand (visual) information processing in the brain. 2/7 pic.twitter.com/NQzLwIt50p
A computationally convenient (and therefore common) approach is to rely on single pre-trained computer vision models (AlexNet, VGG, etc.). But do DNNs, just like brains, exhibit individual representational differences that need to be accounted for? 3/7
Here we test this by training multiple identical network instances while varying only the random seed during weight initialisation. We compare the learned representations using a technique from systems neuroscience: representational similarity analysis (RSA). 4/7 pic.twitter.com/mb9fCxzrQP
Simply changing the random seed leads to considerable individual differences (shared variance in distance estimates can be as low as 44% across networks). The size of the effect is comparable to training networks with completely different image sets. 5/7 pic.twitter.com/GRaCm6grir
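For readers unfamiliar with the method, the comparison above can be sketched in a few lines. This is a minimal illustration with made-up random activations (the shapes, the Pearson-based dissimilarity, and the second-level correlation are my assumptions; the paper may use different stimulus counts, layers, and measures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations: responses of two network instances (trained
# from different random seeds) to the same 50 stimuli, 200 units each.
# In the real analysis these would be read out from a chosen DNN layer.
acts_a = rng.normal(size=(50, 200))
acts_b = rng.normal(size=(50, 200))

def rdm(acts):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(acts)

# Second-level RSA: correlate the off-diagonal entries of the two RDMs.
iu = np.triu_indices(50, k=1)
r = np.corrcoef(rdm(acts_a)[iu], rdm(acts_b)[iu])[0, 1]
shared_variance = r ** 2  # cf. "shared variance in distance estimates"
print(f"RSA correlation: {r:.2f}, shared variance: {shared_variance:.2f}")
```

With real networks the shared variance would be well above chance but, per the thread, can drop to around 44% across seeds.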
What are the origins of this? We argue that the categorization objective does not sufficiently constrain the arrangement of category clusters and exemplars. In addition, the interplay of ReLUs and properties of certain distance measures contributes to the differences. 6/7 pic.twitter.com/0OwYKT89tf
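The ReLU point can be given a toy illustration (my own construction under strong simplifying assumptions, not the paper's analysis): suppose two instances carry the same underlying signal, but different random initialisations leave different random subsets of units active after the ReLU. Correlation between the two codes then falls well below 1 even though the signal is identical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 1000
signal = rng.normal(size=n_units)  # shared underlying response

def relu_code(mask_seed):
    # Toy assumption: each network instance's ReLU gates silence an
    # independent random half of the units; the signal itself is shared.
    mask = np.random.default_rng(mask_seed).random(n_units) < 0.5
    return np.where(mask, np.maximum(signal, 0.0), 0.0)

a, b = relu_code(1), relu_code(2)
r = np.corrcoef(a, b)[0, 1]
print(f"correlation between the two instances' codes: {r:.2f}")
```

Despite identical signal, the which-units-are-zero pattern alone pulls the correlation (and hence correlation-based distance estimates) substantially away from 1.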
Dropout can help, but considerable differences remain. This calls into question the practice of using single network instances to derive neuroscientific insight. Going forward, multiple DNNs may need to be analysed (similar to experimental participants). /fin pic.twitter.com/KJWbuPSGb9
Interesting! So for me the question is: similarly, all 'real' brains are different. But what are the 'conserved' representations?
Conserved across human individuals, or between DNNs and brains?
Any thoughts on which constrains individual differences more: randomized initialization, or randomized training order?
A good question to which I have no definite answer. We have compared differences that emerge from different random seeds (smallest intervention), differences due to different image sets (same categories), and differences due to different categories (Figure 5 in the paper).