1/ "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks" is so packed with great insights. Of particular note: They pretrained their generator on L1 loss, and report that it actually improves quality. https://arxiv.org/abs/1809.00219
-
2/ It turns out I've been experimenting with pretraining both the generator and the critic with non-GAN losses for DeOldify, because I suspected it would lead not only to faster training but to better results as well. It's still early, but I'll just say it looks promising!
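To make the critic side concrete, here is a rough sketch of one way to pretrain it outside the adversarial game, assuming a PyTorch setup; critic, generator, and train_loader are placeholders and the exact loss is my illustration, not DeOldify's code: fit the critic as a plain binary classifier on real images vs. outputs of the already-pretrained, frozen generator before any GAN updates begin.

import torch
import torch.nn as nn

def pretrain_critic(critic, generator, train_loader, epochs=2, lr=1e-4, device="cuda"):
    critic.to(device).train()
    generator.to(device).eval()  # generator was already pretrained; keep it frozen
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for lr_img, hr_img in train_loader:
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            with torch.no_grad():
                fake = generator(lr_img)  # outputs of the frozen pretrained generator
            logits_real = critic(hr_img)
            logits_fake = critic(fake)
            # Plain real-vs-fake classification, no generator updates here.
            loss = bce(logits_real, torch.ones_like(logits_real)) + \
                   bce(logits_fake, torch.zeros_like(logits_fake))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return critic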
-
3/ The core concept definitely works, I can tell you that much, both for colorization in DeOldify and for de-artifacting/super-resolution. This was part of lesson 7 in http://fast.ai V3 part 1, which will be released soon.
-
4/ For pretraining: the generator's loss is currently a VGG-based perceptual loss + an L2-Wasserstein distance for "style", as described here: https://github.com/VinceMarron/style_transfer
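A sketch of what that combined loss can look like, assuming PyTorch/torchvision; the VGG layer index, the style weight, and the single-image batch are illustrative assumptions, and the style term uses the closed-form squared 2-Wasserstein distance between Gaussians fitted to the VGG feature maps, which is the idea in the linked repo.

import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG feature extractor shared by the perceptual and style terms.
vgg_features = vgg16(pretrained=True).features.eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def feats(x, layer=16):
    # Activations from an intermediate VGG layer (layer index is an assumption).
    for i, m in enumerate(vgg_features):
        x = m(x)
        if i == layer:
            break
    return x

def _sqrtm(m):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = torch.linalg.eigh(m)
    return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.t()

def gaussian_w2(f_a, f_b, eps=1e-5):
    # Closed-form squared 2-Wasserstein distance between Gaussians fitted to
    # the two feature maps (assumes a batch of one image for simplicity):
    # ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2 * (Cb^1/2 Ca Cb^1/2)^1/2)
    a = f_a.flatten(2).squeeze(0)  # (channels, pixels)
    b = f_b.flatten(2).squeeze(0)
    mu_a, mu_b = a.mean(dim=1), b.mean(dim=1)
    a_c, b_c = a - mu_a[:, None], b - mu_b[:, None]
    eye = torch.eye(a.shape[0], device=a.device)
    ca = a_c @ a_c.t() / a.shape[1] + eps * eye
    cb = b_c @ b_c.t() / b.shape[1] + eps * eye
    sqrt_cb = _sqrtm(cb)
    cross = _sqrtm(sqrt_cb @ ca @ sqrt_cb)
    return (mu_a - mu_b).pow(2).sum() + torch.trace(ca + cb - 2 * cross)

def pretrain_loss(pred, target, style_weight=1e-3):
    fp, ft = feats(pred), feats(target)
    perceptual = F.l1_loss(fp, ft)   # VGG-based perceptual (content) term
    style = gaussian_w2(fp, ft)      # 2-Wasserstein "style" term
    return perceptual + style_weight * style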