New preprint from the lab: "Individual differences among deep neural network models."
https://www.biorxiv.org/content/10.1101/2020.01.08.898288v1
Work with @KriegeskorteLab, @HannesMehrer, and Courtney Spoerer. #tweeprint below. 1/7
Here we test this by training multiple identical network instances while varying only the random seed during weight initialisation. We compare the learned representations using a technique from systems neuroscience: representational similarity analysis (RSA). 4/7 pic.twitter.com/mb9fCxzrQP
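The RSA comparison described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the activations are random stand-ins for layer responses to a shared stimulus set, and the dissimilarity and comparison measures (correlation distance, Spearman correlation) are common RSA choices rather than the study's exact settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical activations: 50 stimuli x 128 features from two network
# instances that differ only in their initialisation seed.
acts_a = rng.normal(size=(50, 128))
acts_b = rng.normal(size=(50, 128))

# First level: build each network's representational dissimilarity
# matrix (RDM) as pairwise distances between stimulus representations.
rdm_a = pdist(acts_a, metric="correlation")  # upper triangle, 1225 entries
rdm_b = pdist(acts_b, metric="correlation")

# Second level: compare the two RDMs with a rank correlation.
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"RDM similarity (Spearman rho): {rho:.3f}")
```

Two instances with highly similar representational geometry would give a rho near 1; independent geometries give a rho near 0.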
Simply changing the random seed leads to considerable individual differences (shared variance in distance estimates can be as low as 44% across networks). The size of the effect is comparable to training networks with completely different image sets. 5/7 pic.twitter.com/GRaCm6grir
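"Shared variance" here can be read as the squared Pearson correlation between the two networks' flattened RDMs. A minimal sketch, using synthetic RDM vectors (the mixing weights below are arbitrary, chosen only to make the two vectors partially correlated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flattened RDMs (upper triangles) from two seeds,
# constructed so they partially agree.
rdm_a = rng.random(1225)
rdm_b = 0.7 * rdm_a + 0.3 * rng.random(1225)

# Shared variance = squared Pearson correlation of distance estimates.
r = np.corrcoef(rdm_a, rdm_b)[0, 1]
shared_variance = r ** 2
print(f"shared variance: {shared_variance:.2f}")
```

A shared variance of 0.44, as reported in the thread, would mean less than half the variance in one network's pairwise distance estimates is predictable from the other's.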
What are the origins of this? We argue that the categorization objective does not sufficiently constrain the arrangement of category clusters and exemplars. In addition, the interplay of ReLUs and properties of certain distance measures contributes to the differences. 6/7 pic.twitter.com/0OwYKT89tf
Dropout can help, but considerable differences remain. This calls into question the practice of using single network instances to derive neuroscientific insight. Going forward, multiple DNNs may need to be analysed (similar to experimental participants). /fin pic.twitter.com/KJWbuPSGb9