Reducing the impact of human bias and potential for error is key to getting an image model to actually work practically. When you're reduced to manual inspection, you have to be really, really skeptical of your perceptions and the way you're going about it. 1/https://twitter.com/citnaj/status/1273008694792339457 …
Separate benchmarks such as FID (Fréchet Inception Distance) are great for sanity checking but are definitely not a replacement for visual inspection. Notably, I haven't found a benchmark that adequately accounts for the negative impact of glitches. 3/
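As a concrete illustration of the "sanity check" role FID plays, here is a minimal sketch of computing it, assuming the torchmetrics package (with its image extras) is installed; the random tensors and tiny batch sizes are placeholders only.

```python
# Minimal sketch: FID as a sanity check alongside visual inspection.
# Assumes torchmetrics (with torch-fidelity) is installed; the image tensors
# below are random placeholders, not real samples.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# By default the metric expects uint8 images in [0, 255], shaped (N, 3, H, W).
# Real evaluations need thousands of samples per side; 32 is just for the sketch.
real_batch = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)
fake_batch = torch.randint(0, 256, (32, 3, 299, 299), dtype=torch.uint8)

fid.update(real_batch, real=True)
fid.update(fake_batch, real=False)

# Lower is better, but the score says nothing specific about localized glitches,
# which is why it complements rather than replaces looking at the images.
print(f"FID: {fid.compute().item():.2f}")
```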
Back to GANs: because of the extra manual labor noted above, they severely reduce your ability to iterate, compare experiments honestly, and narrow down cause and effect. E.g., is it really an improvement, or did you just pick a particularly good stopping point? 4/
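One way to guard against the "good stopping point" trap is to score every run on the same fixed checkpoint schedule and compare whole curves rather than best single checkpoints. A sketch follows; the checkpoint naming and the evaluate_fid callable are hypothetical stand-ins for whatever a given training setup provides.

```python
# Sketch: score runs on a shared checkpoint cadence instead of hand-picking
# a flattering stopping point. `evaluate_fid` and the "ckpt_<step>.pt" layout
# are assumptions, not a specific framework's API.
from pathlib import Path

def compare_runs(run_dirs, evaluate_fid, every_n=5000):
    """Return the FID curve for each run, sampled on the same step schedule."""
    results = {}
    for run in run_dirs:
        scores = []
        for ckpt in sorted(Path(run).glob("ckpt_*.pt")):
            step = int(ckpt.stem.split("_")[1])
            if step % every_n == 0:          # identical cadence for every run
                scores.append((step, evaluate_fid(ckpt)))
        results[run] = scores
    return results

# A run only "wins" if its whole curve is better, not just its single best checkpoint.
```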
So really, a big part of getting a good model up is setting up the conditions to iterate through the vast knob-tuning landscape quickly and accurately. Choice of approach (GAN versus no GAN) impacts this greatly! 5/
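In practice, "setting up the conditions to iterate" can be as simple as a small sweep harness that gives every knob setting the same budget and the same metric, so comparisons stay honest. A sketch under those assumptions; train_and_score and the particular knobs listed are hypothetical.

```python
# Sketch: a tiny grid-sweep harness so every configuration is trained and
# scored the same way. `train_and_score` is a hypothetical stand-in for a
# train-then-evaluate (e.g. FID) loop.
import csv
import itertools

def sweep(train_and_score, out_path="sweep_results.csv"):
    grid = {
        "lr": [1e-4, 3e-4],
        "batch_size": [16, 32],
        "augment": [False, True],
    }
    keys = list(grid)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(keys + ["fid"])
        for values in itertools.product(*(grid[k] for k in keys)):
            cfg = dict(zip(keys, values))
            # Every config gets the same budget and the same metric.
            writer.writerow(list(values) + [train_and_score(**cfg)])
```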