Example predictions on the robustness benchmarks ImageNet-A, C, and P. Black text marks correct predictions by our model; red text marks incorrect predictions by our baseline model. pic.twitter.com/eem6tlfyPX
Full comparison against state-of-the-art on ImageNet. Noisy Student is our method. Noisy Student + EfficientNet is 11% better than your favorite ResNet-50
pic.twitter.com/BhwgJvSOYK
Method is also super simple: 1) Train a classifier on ImageNet 2) Infer labels on a much larger unlabeled dataset 3) Train a larger classifier on the combined set 4) Iterate the process, adding noise
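For concreteness, here is a minimal sketch of that four-step loop using toy stand-ins rather than the paper's setup: an sklearn MLP in place of EfficientNet, a synthetic dataset in place of ImageNet, Gaussian input jitter in place of RandAugment/dropout noise, and hard pseudo-labels. All sizes and the `noise_std` parameter below are illustrative assumptions.

```python
# Minimal sketch of the iterative self-training loop described in the tweet.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# "Labeled ImageNet" and a much larger unlabeled pool from the same task.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_labeled, y_labeled = X[:1000], y[:1000]
X_unlabeled = X[1000:]                      # labels deliberately discarded

def add_noise(X, noise_std=0.3):
    """Input noise standing in for RandAugment-style data augmentation."""
    return X + rng.normal(scale=noise_std, size=X.shape)

# 1) Train a teacher classifier on the labeled set.
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
teacher.fit(X_labeled, y_labeled)

# 4) Iterate: each round the student becomes the next teacher and grows larger.
for size in [(128,), (256,)]:
    # 2) Infer pseudo-labels on the unlabeled pool with the current teacher.
    pseudo_labels = teacher.predict(X_unlabeled)

    # 3) Train a larger, noised student on labeled + pseudo-labeled data.
    X_combined = np.vstack([X_labeled, X_unlabeled])
    y_combined = np.concatenate([y_labeled, pseudo_labels])
    student = MLPClassifier(hidden_layer_sizes=size, max_iter=500, random_state=0)
    student.fit(add_noise(X_combined), y_combined)

    teacher = student  # the student serves as the teacher for the next round
```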
I also highly recommend this nice video that explains the paper very well: https://www.youtube.com/watch?v=Y8YaU9mv_us
Thanks for sharing! I would like to share that a similar method was also concurrently discovered by the top winners of one of this year's NeurIPS competitions, on cell image classification. I guess this method works in general: https://www.kaggle.com/c/recursion-cellular-image-classification/discussion/110543#latest-648051
Interesting! Thanks for sharing.
What factors in your method might have contributed to the test accuracy improvement on ImageNet-A for NoisyStudent (L2) as compared to EfficientNet-L2? (As you point out, your models are not deliberately optimized for ImageNet-A!)
Great question. We don't fully know yet. Our hypothesis is that ImageNet-A is a difficult dataset that requires strong generalization. Our method can be viewed as data augmentation on both unlabeled and labeled data, so it generalizes better.
Cool! FWIW, we applied a similar trick in video: https://arxiv.org/abs/1812.03626, where we train a student network on unsupervised frames from the video itself and then use that same student to reclassify the video, gaining higher accuracy. I think there's a lot of fun in this area.
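A toy sketch of that per-video trick, under loose assumptions: a logistic-regression stand-in for the networks, random vectors for frames, noised pseudo-labeled frames for student training, and a simple mean vote for the video-level prediction. None of these details come from the linked paper.

```python
# Hypothetical per-video self-training sketch: pseudo-label a video's own
# frames with a pretrained base classifier, train a student on those (noised)
# frames, then let the student re-score the whole video.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Stand-in for a classifier pretrained on a labeled frame dataset.
X_train = rng.normal(size=(500, 32))
y_train = (X_train[:, 0] > 0).astype(int)
base_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Unlabeled frames from one new video (random vectors here).
video_frames = rng.normal(loc=0.2, size=(200, 32))

# Pseudo-label the video's own frames, then fit a student on noised copies.
pseudo = base_clf.predict(video_frames)
student = LogisticRegression(max_iter=1000).fit(
    video_frames + rng.normal(scale=0.3, size=video_frames.shape), pseudo)

# Reclassify the video by averaging the student's frame-level predictions.
video_label = int(student.predict(video_frames).mean() > 0.5)
print("video-level prediction:", video_label)
```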
The lead author on our paper, Giulio, will be a student researcher at Google in a couple of days (and interned in Brain last summer). Might be fun to chat with him.