1/ "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks" is so packed with great insights. Of particular note: They pretrained their generator on L1 loss, and report that it actually improves quality. https://arxiv.org/abs/1809.00219
5/ Not only is it (much) faster to do it this way, but I've suspected it could lead to a better outcome: less "blind leading the blind" and more productive training. You get the strengths of both straightforward supervised training and GANs.
6/ I think I may be able to demonstrate that with DeOldify soon, but that's what they're reporting for ESRGAN. Exciting!
7/ I mentioned this in the upcoming fastai course, but I want to make sure credit is given where it's due: @jeremyphoward came up with the idea of using a simpler loss (MSE, etc.) for pretraining the generator, and doing binary classification for pretraining the critic.
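For the curious, here's roughly what that two-stage recipe looks like as a minimal sketch in plain PyTorch. This is not DeOldify's or ESRGAN's actual code: the toy Generator/Critic modules and the (lr_img, hr_img) dataloader are placeholders I made up (ESRGAN's real generator is an RRDB network and its full loss also includes perceptual terms). The point is the loss structure: pixel-wise L1 for the generator, binary cross-entropy for the critic, each trained on its own before any adversarial training begins.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Placeholder upscaler; a real model would be ESRGAN's RRDB network.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

class Critic(nn.Module):
    # Placeholder discriminator; emits one real/fake logit per image.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.body(x)

gen, critic = Generator(), Critic()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def pretrain_generator(loader):
    # Stage 1: straightforward supervised training on L1 loss alone.
    l1 = nn.L1Loss()
    for lr_img, hr_img in loader:
        g_opt.zero_grad()
        loss = l1(gen(lr_img), hr_img)
        loss.backward()
        g_opt.step()

def pretrain_critic(loader):
    # Stage 2: binary classification of real HR images vs. the frozen
    # pretrained generator's outputs, before adversarial training starts.
    bce = nn.BCEWithLogitsLoss()
    for lr_img, hr_img in loader:
        with torch.no_grad():
            fake = gen(lr_img)
        c_opt.zero_grad()
        loss = (bce(critic(hr_img), torch.ones(hr_img.size(0), 1))
                + bce(critic(fake), torch.zeros(fake.size(0), 1)))
        loss.backward()
        c_opt.step()
```

After both stages, the usual adversarial training loop takes over, with both networks starting from sensible weights instead of from scratch.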