Here's something a lot of people still don't know: the latest DeOldify doesn't use GANs anymore. And I'm not being cute with terminology: NoGAN isn't used either. We needed something more production-worthy and controllable, and it just wasn't cutting it. 1/
-
I'm trying to straddle the line between maintaining a competitive advantage and giving back to the community that has served us so well. It's tough to make the call sometimes. 4/
-
What kind of perceptual loss did you end up using?
-
That's the part where I have to refrain from details (don't want to give it all away). I just want to at least point people in what I consider a very productive direction: take seriously the idea that you can achieve great results without GANs!
-
-
If you use perceptual loss together with GANs, does it still not get better results than the method you're using now?
-
That's what I was originally doing, actually: perceptual loss along with GAN loss. GANs tend to go haywire with glitches and introduce undesirable constraints. It's a net negative even when paired with the latest training we're doing now (I've tried).
-
-
Are we talking about normalizing flows? What approach do you think is the best at the moment?
-
For people looking to dive in here, check out "Perceptual Losses for Real-Time Style Transfer and Super-Resolution" by @jcjohnss, @AlexAlahi, and @drfeifei: https://arxiv.org/abs/1603.08155
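The core idea of that paper can be sketched in a few lines: instead of comparing images pixel by pixel, compare their activations in a feature extractor. This is a toy illustration only; in the paper the features come from a pretrained VGG network, and nothing here reflects DeOldify's actual (undisclosed) loss. The fixed random projection standing in for the feature network is my own simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor (VGG activations in the
# Johnson et al. paper). A fixed random projection plus ReLU is enough
# to illustrate the mechanics; it is NOT a real perceptual feature net.
W = rng.standard_normal((16, 64))

def features(x):
    # x: a flattened 64-value "image" -> 16 feature activations
    return np.maximum(W @ x, 0.0)

def perceptual_loss(pred, target):
    # Mean squared error in feature space rather than pixel space.
    fp, ft = features(pred), features(target)
    return float(np.mean((fp - ft) ** 2))

target = rng.standard_normal(64)
print(perceptual_loss(target, target))   # → 0.0 for identical inputs
print(perceptual_loss(target + 0.1, target))  # positive for a perturbed input
```

The point of the feature-space comparison is that two images can differ at the pixel level (slight shifts, plausible but different colors) while still producing similar high-level activations, so the loss rewards perceptual similarity rather than exact pixel agreement.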
-
A GAN generates details in its own way. I feel like the result with a GAN looks better but scores worse quantitatively. I usually consider perceptual loss first rather than a GAN. Since plausible colors can differ from the ground truth, how did you evaluate your results?
-
So we have two metrics: FID (super slow) and another one that's very fast. It isn't quite as good, but it gives us fast feedback across resolutions so we know how well the model generalizes there. Unfortunately neither of these is 100% reliable, so visual inspection is a must. Lots of it.
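For context on why FID is slow: the metric fits Gaussians to Inception-v3 activations of real and generated images and takes the Fréchet distance between them. The closed-form distance itself is cheap (sketched below); the expensive part is running thousands of images through Inception to get the statistics. This is a generic sketch of the standard formula, not the thread author's pipeline.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID applies this to the mean/covariance of Inception activations for
    real vs. generated images: d^2 = |mu1-mu2|^2 + Tr(C1 + C2 - 2*sqrt(C1 C2)).
    """
    diff = mu1 - mu2
    # Matrix square root of the product of the covariances.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Identical statistics give a distance of (numerically) zero.
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))
```

Shifting one mean to all-ones with identity covariances gives a distance of 4.0, matching the squared mean difference term.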
-
Is VQ-VAE-2 on the right track?