1/ Higher Top-1 accuracy in image classification actually isn't a great indicator that a new vision model is going to work out well in practice in an image-to-image task (like DeOldify). I've learned this the hard way after getting excited about a shiny new model many times!
2/ EfficientNets come to mind - I just have not been seeing benefits commensurate with their classification performance. But this really applies generally, even within a set of architectures that have been trained the same way (Facebook's WSL models, BiT-M, etc.).
3/ In those cases, you just don't really know until you actually try. You may be surprised, I can tell you that much! So don't just pick the one with the highest Top-1 accuracy on ImageNet...
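A minimal sketch of what "just trying" a backbone can look like, assuming a timm-style encoder swapped beneath a shared U-Net decoder. The model names are illustrative timm identifiers, and the decoder/training step is only indicated in comments; this is not the actual DeOldify setup:

```python
import timm
import torch

# Candidate encoders, deliberately ignoring their ImageNet Top-1 ranking.
# These are illustrative timm model identifiers, not a recommendation.
candidates = ["resnet50", "efficientnet_b3", "resnest50d"]

x = torch.randn(1, 3, 256, 256)  # dummy batch for a shape check
for name in candidates:
    # features_only=True exposes the intermediate feature maps that a
    # U-Net-style decoder (as in DeOldify) would attach skip connections to.
    encoder = timm.create_model(name, pretrained=False, features_only=True)
    with torch.no_grad():
        feats = encoder(x)
    print(name, [tuple(f.shape) for f in feats])
    # From here: plug `encoder` into the *same* decoder and training loop,
    # train each variant identically, and rank candidates on the
    # image-to-image metric you actually care about (e.g. PSNR/LPIPS on a
    # held-out set), not on their classification scores.
```

Keeping the decoder and training recipe fixed means the encoder is the only variable, so any gap in the downstream metric is attributable to it rather than to tuning differences.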
Replying to @citnaj
Sai Prasanna @sai_prasanna
I don't know whether this is tested for CV, but worth a try. https://twitter.com/sai_prasanna/status/1265730581586866182?s=19

Quoted tweet (Sai Prasanna @sai_prasanna, replying to @dennybritz):
This paper has been in my partially-read list. Statistically sound method for comparing model performance, might be relevant: https://www.aclweb.org/anthology/P19-1266/ The authors had written the delightfully named "The Hitchhiker's Guide to Testing Statistical Significance in NLP" before this one.
Sounds like a great lead: "Proper DNN comparison hence requires a comparison between their empirical score distributions on unseen data, rather than between single evaluation scores as is standard for more simple, convex models. "
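A hedged sketch of that idea: compare the empirical score distributions from repeated runs rather than two single evaluation scores. The per-seed scores below are made-up placeholders, and the Mann-Whitney U test is just a simple nonparametric stand-in; the linked paper (Dror et al., ACL 2019) argues for a stronger "almost stochastic order" criterion:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical per-seed validation scores for two models (e.g. PSNR over
# 10 training runs with different random seeds); placeholder numbers,
# not real results.
scores_a = rng.normal(28.1, 0.6, size=10)
scores_b = rng.normal(27.6, 0.9, size=10)

# Test whether model A's score distribution tends to dominate model B's,
# instead of comparing one checkpoint's score against another's.
stat, p = mannwhitneyu(scores_a, scores_b, alternative="greater")
print(f"U = {stat:.1f}, p = {p:.3f}")
```

With single scores, a lucky seed can flip the ranking; testing over the distribution of runs guards against exactly that.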