I am -loving- this combined Rectified Adam (RAdam) + LookAhead optimizer. It's freaky stable and powerful, especially for my purposes. Only weird thing to keep in mind: I've had to multiply my learning rates by over 100x to take full advantage. https://medium.com/@lessw/new-deep-learning-optimizer-ranger-synergistic-combination-of-radam-lookahead-for-the-best-of-2dc83f79a48d
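For context, here's a rough sketch of what the combination does; this is not the implementation from the linked article. Lookahead keeps a slow, trailing copy of the weights and, every k "fast" steps of an inner optimizer (torch.optim.RAdam, available in PyTorch 1.10+), blends the slow copy toward the fast weights and restarts the fast weights from it. The k, alpha, and learning-rate values below are illustrative, not tuned.

```python
import torch


class Lookahead:
    """Minimal Lookahead wrapper: run k fast steps of the inner optimizer,
    then pull a slow, trailing copy of the weights toward the fast weights
    and restart the fast weights from the slow copy."""

    def __init__(self, base_optimizer, k=6, alpha=0.5):
        self.base, self.k, self.alpha = base_optimizer, k, alpha
        self.step_count = 0
        # Slow weights start as a detached copy of the current parameters.
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in base_optimizer.param_groups]

    def step(self):
        loss = self.base.step()            # one fast step (RAdam here)
        self.step_count += 1
        if self.step_count % self.k == 0:  # time to sync slow and fast weights
            with torch.no_grad():
                for group, slow_group in zip(self.base.param_groups, self.slow):
                    for p, s in zip(group["params"], slow_group):
                        s.add_(p - s, alpha=self.alpha)  # slow += alpha * (fast - slow)
                        p.copy_(s)                       # fast restarts from slow
        return loss

    def zero_grad(self, set_to_none=True):
        self.base.zero_grad(set_to_none=set_to_none)


# Toy usage: RAdam as the inner optimizer gives the "Ranger" combination.
# Note the learning rate sits well above a typical Adam default of 1e-4/1e-3,
# in line with the observation above about scaling LRs up.
model = torch.nn.Linear(10, 2)
opt = Lookahead(torch.optim.RAdam(model.parameters(), lr=3e-2), k=6, alpha=0.5)

x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
opt.zero_grad()
```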
Replying to @citnaj
Just started playing with it last night, super promising!
Replying to @citnaj
Stanford Cars + ResNet152 + Mixup. 20-epoch one-run accuracy (only vaguely indicative, but hey...) is +1.4% with Ranger. Agree on the increase in LR: 2e-3 vs 3e-2 (using lr_find). Playing with it + EfficientNets now, but they're new to me, so I don't have as good a feel for them as I do for ResNets.
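Roughly the workflow being described, sketched with current fastai (v2) names rather than the exact code used here: vision_learner, the built-in ranger opt_func (RAdam + Lookahead), the MixUp callback, and lr_find. The data path and train/valid folder layout are assumptions.

```python
from fastai.vision.all import *

# Assumed Stanford Cars layout: path/train/<class>/*.jpg and path/valid/<class>/*.jpg
path = Path('data/stanford-cars')
dls = ImageDataLoaders.from_folder(path, train='train', valid='valid',
                                   item_tfms=Resize(224), bs=32)

# ResNet152 + Mixup, with fastai's built-in ranger (RAdam + Lookahead) as opt_func
learn = vision_learner(dls, resnet152, opt_func=ranger, metrics=accuracy, cbs=MixUp())

learn.lr_find()               # tends to suggest a much larger LR than plain Adam gets
learn.fit_flat_cos(20, 3e-2)  # 20 epochs at the higher LR, as in the numbers above
```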
Replying to @mcgenergy
I can tell you this much: I had a super hard time fine-tuning EfficientNet, and others have reported the same. This optimizer might change that...
Replying to @citnaj
haha thanks, good to know. Hopefully this is serendipitous timing; otherwise I might have to just commit to a “ResNets4Lyfe” tattoo
LOL I could honestly do the same....