Reducing the impact of human bias and the potential for error is key to getting an image model to actually work in practice. When you're down to manual inspection, you have to be really, really skeptical of your perceptions and of the way you're going about it. 1/ https://twitter.com/citnaj/status/1273008694792339457
Seriously, be equally skeptical when you're feeling great during evaluation as when you are feeling pessimistic. Keep as many things constant as possible. Do side by side comparisons. Sleep on it. Get others to help, and listen when they say you're out of your mind. 2/
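(One concrete way to "keep as many things constant as possible" for those side-by-side comparisons is to reuse the exact same latent batch every time you render a grid. The sketch below is only illustrative and assumes a PyTorch-style generator that maps latent vectors to images; the latent size, checkpoint names, and file names are placeholders, not anything from this thread.)

```python
import torch
from torchvision.utils import save_image

torch.manual_seed(0)              # fix the seed so every run draws the same latents
z_fixed = torch.randn(16, 128)    # one reusable latent batch (placeholder latent size)

def snapshot(generator, label):
    """Render the same fixed latents through a checkpoint, for honest side-by-side grids."""
    generator.eval()
    with torch.no_grad():
        imgs = generator(z_fixed)
    save_image(imgs, f"grid_{label}.png", nrow=4, normalize=True)

# snapshot(generator_run_a, "run_a_step_50k")   # placeholder checkpoints
# snapshot(generator_run_b, "run_b_step_50k")
```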
Separate benchmarks such as FID (Fréchet Inception Distance) are great for sanity checking, but they are definitely not a replacement for visual inspection. Notably, I haven't found a benchmark that suitably weighs the negative impact of glitches. 3/
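(For reference, computing FID as a sanity check is cheap to wire up. This is a minimal sketch assuming torchmetrics, with its torch-fidelity backend, is installed; the random stand-in images, tiny batch sizes, and 64-dim Inception features are only to keep the toy example light, whereas standard FID uses 2048-dim features over thousands of real and generated samples.)

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID compares InceptionV3 feature statistics of real vs. generated images.
# Lower is better, but it won't flag localized glitches the way your eyes will.
fid = FrechetInceptionDistance(feature=64)  # 2048 is the standard setting

# Stand-in batches; in practice these would be real photos and generator samples,
# passed as uint8 tensors shaped (N, 3, H, W).
real = torch.randint(0, 256, (64, 3, 64, 64), dtype=torch.uint8)
fake = torch.randint(0, 256, (64, 3, 64, 64), dtype=torch.uint8)

fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())  # single scalar; use it alongside, not instead of, visual inspection
```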
Back to GANs: because of the extra manual labor noted above, they severely reduce your ability to iterate, honestly compare experiments, and narrow down cause and effect. E.g., is it really an improvement, or did you just pick a particularly good stopping point? 4/
So really, a big part of getting a good model up is setting up the conditions that let you iterate through the vast knob-tuning landscape quickly and accurately. The choice of approach (GAN versus no GAN) impacts this greatly! 5/
GANs have instability issues unlike any other option I can think of. That's what I'm trying to point out here.