There are 7 "Open Questions" on the list (though I wanted to put way more than 7!) and each question includes some discussion of background and some speculation about possible ways to address the question. 2/5 pic.twitter.com/e6t7J6CT5u
I feel like writing this article really clarified my thinking about GANs as a research topic. I would strongly encourage other people to write similar articles about their sub-fields, both because of that clarifying effect and because I'd like to read more such articles. 3/5
This article covers a lot of ground, so even though I tried hard not to, I may have gotten some things wrong about existing work; please correct me if so! 4/5
Finally, I would like to thank @colinraffel, @poolio, @ericjang11, @dustinvtran, Alex Kurakin, @D_Berthelot_ML , Aurko Roy, @goodfellow_ian, and Matt Hoffman for helpful discussions,
and I'd especially like to thank @ch402, who helped me a ton with the text. 5/5
Maybe this is one of the reasons why hobbyists and professional artists prefer GANs over more compute-heavy flow-based models. pic.twitter.com/KTw2Rk7GTw
Is the ultimate GAN a GAN that writes papers about GANs (in particular about open questions on GANs)?
Thanks for writing this! Articles like it are really important for the field.
I love it! I'm in a similar situation, as I'm able to identify endless research possibilities but I have very limited time. I'll put this in my schedule!!
hey @gstsdn, i wonder a lot about links between GANs and more classic methods of dimensionality reduction (like the relationship between VAEs and PCA). Do you know any theoretical work on this? Is this an open question or do I just not know the papers?
This is great -- I hope it motivates others to create a similar summary for other deep generative models!
For fast, inexpensive, reliable human evaluation on GANs, have you seen HYPE from Stanford (incl StyleGAN comparison to ProGAN et al)? http://hype.stanford.edu Stands for Human eYe Perceptual Evaluation...
How do I train a GAN to produce images of different sizes? Can a latent variable control the GAN's output image size? PixelCNN (and autoregressive models in general) can probably be trained to do this, but it is slow because it generates one pixel at a time.
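To make the slowness concrete: PixelCNN-style sampling needs one full network forward pass per pixel, so a single image costs H×W sequential calls, versus one call for a GAN generator. Here is a minimal sketch of that sampling loop; `sample_autoregressive` and the toy `predict_pixel` are hypothetical stand-ins, not any real PixelCNN implementation.

```python
import numpy as np

def sample_autoregressive(predict_pixel, height, width):
    """Sample an image one pixel at a time, raster-scan order.

    `predict_pixel` stands in for one full forward pass of the model;
    a real PixelCNN would condition on all previously generated pixels.
    """
    img = np.zeros((height, width))
    forward_passes = 0
    for y in range(height):
        for x in range(width):
            # One network call per pixel -- this is the bottleneck.
            img[y, x] = predict_pixel(img, y, x)
            forward_passes += 1
    return img, forward_passes

# Toy predictor: ignores context, just draws a random intensity.
rng = np.random.default_rng(0)
toy = lambda img, y, x: rng.random()

img_out, n_calls = sample_autoregressive(toy, 32, 32)
# A 32x32 image already needs 1024 sequential forward passes,
# and the loop cannot be parallelized because each pixel depends
# on the ones generated before it.
```

The sequential dependency is the key point: batching helps GAN-style one-shot generation but not this loop, which is why autoregressive sampling stays slow even on fast hardware.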
Interesting!
Thank you for this! The world does not need more academic papers, it needs more explanation like this.
interesting. thanks :)
Also, someone needs to look at all the counter-results in the StyleGAN and BigGAN papers: spectral normalization vs. gradient penalty, progressive growing vs. big batches, non-local layers, and loads more.
Thanks for sharing!