Here's something a lot of people still don't know: the latest DeOldify doesn't use GANs anymore. And I'm not being cute with terminology: NoGAN isn't used either. We needed something more production-worthy and controllable, and GAN training just wasn't cutting it. 1/
-
It's been a year since we last used GANs. You may have been led to believe that GANs are -the- way to get realistic results, but believe me, there are actually better ways IMHO. I can't tell you exactly what we're doing now, but I can tell you this: 2/
-
It was a good thing I got past my ego (after some time) and listened to @jeremyphoward and @fastdotai when they said they were getting "better than GAN" results using perceptual loss in super resolution :) 3/
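For context, "perceptual loss" here means comparing outputs to targets in the feature space of a pretrained network instead of in pixel space. Below is a minimal PyTorch sketch of a VGG-based feature loss in the spirit of what fastai described; the tapped ReLU layers and the L1 distance are illustrative assumptions, not DeOldify's actual setup.

```python
# Minimal sketch of a VGG-based perceptual (feature) loss.
# Layer taps and L1 distance are illustrative assumptions.
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    def __init__(self, layer_ids=(3, 8, 15, 22)):  # relu1_2..relu4_3 taps (assumption)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # the feature extractor is never trained
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def _features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, pred, target):
        # Match activations at several depths: deeper layers encode texture
        # and structure, so the loss rewards perceptual similarity rather
        # than pixel-perfect agreement.
        loss = 0.0
        for fp, ft in zip(self._features(pred), self._features(target)):
            loss = loss + nn.functional.l1_loss(fp, ft)
        return loss
```

The design point is that matching features rather than pixels rewards plausible texture and structure, which is the basis of the "better than GAN" claim in this tweet.

-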
If you use perceptual loss with GANs, does it still not get better results than the method you're using now?
-
That's what I was originally doing, actually: perceptual loss along with GAN loss. GANs tend to go haywire with glitches and introduce undesirable constraints. It's a net negative even when paired with the latest training we're doing now (I've tried).
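For illustration, here is a rough sketch of the perceptual-plus-adversarial combination being described; the loss weighting, the hinge-style generator term, and all names are assumptions, not the actual DeOldify code.

```python
# Sketch of combining a perceptual loss with a GAN (adversarial) loss,
# the setup described above as a net negative. The weighting and the
# hinge-style generator term are illustrative assumptions.
def generator_loss(pred, target, discriminator, perceptual_loss, adv_weight=0.01):
    # Perceptual term: match pretrained-network features (see sketch above).
    p_loss = perceptual_loss(pred, target)
    # Adversarial term: push outputs toward whatever the discriminator
    # currently scores as real; this is the part prone to glitches.
    adv_loss = -discriminator(pred).mean()
    return p_loss + adv_weight * adv_loss
```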
-
I have been using DeOldify as a reference for my own project (not related to colorization). I might want to reconsider the use of GANs. But I wonder whether it is just perceptual loss that leads to such amazing results. If I were to implement it now, there's no way it would be that good.
-
There's more to it. I'd start by referencing fastai's work on this. There are details I haven't covered here that will be a helpful start.