1/ "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks" is so packed with great insights. Of particular note: they pretrain their generator on L1 loss, and report that it actually improves quality. https://arxiv.org/abs/1809.00219
3/ The core concept definitely works functionally, I can tell you that much. Both for colorization in DeOldify and for de-artifacting/super-resolution. This was part of lesson 7 in http://fast.ai V3 part 1, which will be released soon.
4/ The way I think of pretraining a GAN is this: a lot of time is wasted going back and forth, with the generator and critic not knowing what they're doing. Why not teach them directly with a faster/simpler loss first to get most of the way there, -then- let them go GAN?
5/ Not only is it (much) faster to do it this way, but I've suspected it could lead to a better outcome because of less "blind leading the blind" going on and more productive training. You're benefiting from the strengths of both more straightforward training and GANs.
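The pretraining step described above can be sketched roughly like this (an illustrative toy, not the actual DeOldify/ESRGAN code; the network, data shapes, and hyperparameters are all placeholder assumptions):

```python
# Minimal sketch: train the generator alone on a simple pixel-wise L1 loss
# before any adversarial training. A real setup would use a proper super-res
# network and real (degraded, clean) image pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "generator": one conv layer standing in for a real network.
generator = nn.Conv2d(3, 3, kernel_size=3, padding=1)
opt = torch.optim.Adam(generator.parameters(), lr=1e-2)
l1 = nn.L1Loss()

# Random tensors standing in for (degraded input, clean target) crops.
x = torch.rand(4, 3, 16, 16)
y = torch.rand(4, 3, 16, 16)

loss_before = l1(generator(x), y).item()
for step in range(200):
    opt.zero_grad()
    loss = l1(generator(x), y)  # simple, stable supervised objective
    loss.backward()
    opt.step()
loss_after = l1(generator(x), y).item()
```

Only after this cheap supervised phase converges would the generator be handed over to the adversarial game.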
6/ I think I may be able to demonstrate that with DeOldify soon, but that's what they're reporting for ESRGAN. Exciting!
7/ I mentioned this in the upcoming fastai course, but I want to make sure credit is given where it's due:
@jeremyphoward came up with the idea of actually using a simpler loss (MSE, etc.) for pretraining the generator, and doing binary classification for pretraining the critic.
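The critic half of that idea can be sketched like so (again an illustrative toy with placeholder names and shapes, not fastai's actual code): pretrain the critic as an ordinary binary classifier on real vs. generated images before the adversarial phase begins.

```python
# Minimal sketch: pretrain the critic with plain binary cross-entropy,
# labeling real images 1 and generator outputs 0.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "critic": a linear probe standing in for a real conv critic.
critic = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 3, 16, 16)        # stands in for real images
fake = torch.rand(8, 3, 16, 16) * 0.5  # stands in for generator output

for step in range(200):
    opt.zero_grad()
    logits = critic(torch.cat([real, fake]))
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    loss = bce(logits, labels)  # ordinary binary classification loss
    loss.backward()
    opt.step()
```

Once the critic can already tell real from generated, the adversarial phase starts from a much more informative signal.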