New preprint from the lab: "Individual differences among deep neural network models."
https://www.biorxiv.org/content/10.1101/2020.01.08.898288v1
Work with @KriegeskorteLab, @HannesMehrer, and Courtney Spoerer. #tweeprint below. 1/7
Any thoughts on which constrains individual differences more: randomized initialization or randomized training order?
-
-
A good question, to which I have no definite answer. We compared differences that emerge from different random seeds (the smallest intervention), differences due to different image sets (same categories), and differences due to different categories (Figure 5 in the paper).
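For concreteness, one common way to quantify representational consistency between two network instances is to correlate their representational dissimilarity matrices (RSA-style). A minimal sketch, assuming correlation-distance RDMs compared with Spearman correlation; the paper's exact measure and pipeline may differ, and the activations below are random placeholders:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_stimuli, n_features) responses from one layer.
    # Condensed correlation-distance RDM, as commonly used in RSA.
    return pdist(activations, metric="correlation")

def representational_consistency(act_a, act_b):
    # Spearman correlation between the two networks' RDMs,
    # computed over the same stimulus set.
    rho, _ = spearmanr(rdm(act_a), rdm(act_b))
    return rho

# Placeholder activations for two networks trained from different seeds:
acts_seed0 = np.random.rand(100, 512)
acts_seed1 = np.random.rand(100, 512)
print(representational_consistency(acts_seed0, acts_seed1))
```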
-
Oh yes, Figure 5 is definitely insightful on this question (though it addresses a slightly different one). So different images within the same categories and different random seeds affect representational consistency roughly similarly. Will take a deeper dive soon, thanks.
-
-
We looked at this question in our new paper /Deep Ensembles: A Loss Landscape Perspective/ (https://arxiv.org/abs/1912.02757) w/ @balajiln & Huiyi Hu and ran an ablation study measuring the effect of a) random inits, b) random batches, c) GPU/TPU noise, and d) learning rate.
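For anyone wanting to run that kind of ablation, a minimal PyTorch sketch, assuming you give initialization and batch order separate seeds so each randomness source can be varied in isolation; the architecture and data here are placeholders, not the paper's setup:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_model(init_seed: int) -> torch.nn.Module:
    # Control initialization randomness with its own seed.
    torch.manual_seed(init_seed)
    return torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(28 * 28, 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, 10),
    )

def make_loader(order_seed: int) -> DataLoader:
    # Control batch-order randomness independently of the init seed.
    x = torch.randn(1024, 1, 28, 28)        # placeholder images
    y = torch.randint(0, 10, (1024,))       # placeholder labels
    g = torch.Generator().manual_seed(order_seed)
    return DataLoader(TensorDataset(x, y), batch_size=64,
                      shuffle=True, generator=g)

# Four ablation cells: vary one randomness source, hold the other fixed.
# (Fully isolating GPU/TPU noise would additionally require deterministic
# kernels, e.g. torch.use_deterministic_algorithms(True).)
for init_seed, order_seed in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    model = make_model(init_seed)
    loader = make_loader(order_seed)
    # ... train `model` on `loader`, then compare the resulting
    # representations/predictions across cells ...
```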
-
-
-
I've run an experiment toward this end before (see Section 5 of https://arxiv.org/pdf/1909.01838.pdf) and found that initialization randomness had a larger impact than training order, at least for the metric I was looking at.
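One simple way to operationalize "impact" in a comparison like that is prediction disagreement between trained runs; a hedged sketch below (the metric in the linked paper may differ):

```python
import numpy as np

def disagreement(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    # Fraction of test items on which two trained models predict
    # different labels: one simple measure of run-to-run variability.
    return float(np.mean(preds_a != preds_b))

# Compare pairs that differ in only one randomness source, e.g.:
#   disagreement(preds_init0_order0, preds_init1_order0)  # init varied
#   disagreement(preds_init0_order0, preds_init0_order1)  # order varied
```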
-