New preprint from the lab: "Individual differences among deep neural network models."
https://www.biorxiv.org/content/10.1101/2020.01.08.898288v1
Work with @KriegeskorteLab, @HannesMehrer, and Courtney Spoerer. #tweeprint below. 1/7
Simply changing the random seed leads to considerable individual differences (shared variance in distance estimates can be as low as 44% across networks). The size of the effect is comparable to training networks with completely different image sets. 5/7 pic.twitter.com/GRaCm6grir
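A minimal sketch (not the paper's code) of how shared variance between two instances' RDMs could be quantified. Random features stand in for network activations here; `rdm` and `shared_variance` are hypothetical helper names:

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: pairwise (1 - Pearson r)
    between stimulus feature vectors (rows = stimuli, columns = units)."""
    return 1.0 - np.corrcoef(features)

def shared_variance(rdm_a, rdm_b):
    """Squared Pearson correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    r = np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
    return r ** 2

# Hypothetical stand-ins: 20 stimuli x 64 units for two instances of one architecture
feats_a = np.random.default_rng(0).standard_normal((20, 64))  # "seed 0" network
feats_b = np.random.default_rng(1).standard_normal((20, 64))  # same architecture, different seed
print(shared_variance(rdm(feats_a), rdm(feats_b)))
```

With real activations in place of the random matrices, a low value of `shared_variance` is the kind of seed-driven divergence the tweet describes.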
What are the origins of these differences? We argue that the categorization objective does not sufficiently constrain the arrangement of category clusters and exemplars. In addition, the interplay of ReLUs and properties of certain distance measures contributes to differences. 6/7 pic.twitter.com/0OwYKT89tf
Dropout can help, but considerable differences remain. This calls into question the practice of using single network instances to derive neuroscientific insight. Going forward, multiple DNNs may need to be analysed (similar to experimental participants). /fin pic.twitter.com/KJWbuPSGb9
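Treating network instances like experimental participants might look like this in practice. A hedged sketch, assuming RDMs are available as numpy arrays; the function names are hypothetical, not from the paper:

```python
import numpy as np

def mean_rdm(instance_rdms):
    """Element-wise mean RDM across network instances: the 'group average'
    analogue of averaging across experimental participants."""
    return np.mean(np.stack(instance_rdms), axis=0)

def between_instance_spread(instance_rdms):
    """Element-wise standard deviation across instances: a simple estimate
    of how much a single-instance conclusion could vary."""
    return np.std(np.stack(instance_rdms), axis=0)
```

Reporting both the mean and the spread makes explicit how much an analysis depends on which instance happened to be trained.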
I'm curious: an RDM changes after one linearly transforms the representation space. Could it be that the representations are still similar up to some linear transform?
Good point. We show that the category centroids are quite well aligned, whereas category exemplars are not. This suggests to me that additional linear transforms won't make it go away.
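One way to probe the "similar up to a linear transform" question (a hypothetical sketch, not the paper's analysis): fit a least-squares linear map from one instance's feature matrix to the other's and inspect the explained variance, separately for category centroids and for exemplars:

```python
import numpy as np

def linear_fit_r2(A, B):
    """R^2 of the best least-squares linear map W with A @ W ~= B
    (rows = stimuli, columns = units)."""
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    resid = B - A @ W
    return 1.0 - resid.var() / B.var()
```

If exemplar-level R^2 stays low even under the best linear map while centroid-level R^2 is high, the residual individual differences are not just a rotation or scaling of a shared space, which is what the centroid/exemplar result above suggests.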
Were the results replicated on a larger dataset like ImageNet? I suspect the differences won't be as drastic as in the CIFAR-10 case that this paper shows.