.@PyTorch code for adaptive competitive gradient descent. Run it yourself to see the benefits of stabilizing GANs and alleviating mode collapse. Obtained SOTA by taking WGAN code with no hyperparameter tuning and no gradient penalty: https://github.com/devzhk/Implicit-Competitive-Regularization
Credit goes to @Kay12400259 for working hard to build this repo.
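A minimal sketch of how an ACGD-style optimizer might slot into a standard WGAN training loop. The import path, the constructor arguments (max_params, min_params, lr_max, lr_min), and the step(loss=...) call are assumptions about the repo's interface, modeled on its examples; check the repo's README for the actual API. Everything else is vanilla PyTorch.

    import torch
    import torch.nn as nn

    # Assumed import path; the repo's actual module layout may differ.
    from optims import ACGD

    # Tiny stand-in networks, for illustration only.
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # generator
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # critic

    # A single optimizer owns both players: the critic maximizes the loss,
    # the generator minimizes it. Argument names are assumptions.
    optimizer = ACGD(max_params=D.parameters(),
                     min_params=G.parameters(),
                     lr_max=1e-3, lr_min=1e-3)

    for _ in range(1000):
        real = torch.randn(64, 2)              # stand-in for a real-data batch
        fake = G(torch.randn(64, 16))
        # Plain WGAN critic objective -- no gradient penalty, as in the tweet.
        loss = D(real).mean() - D(fake).mean()
        optimizer.zero_grad()
        optimizer.step(loss=loss)              # assumed: one call updates both players

Note that a single step on one joint loss replaces the usual alternating generator/critic updates; that coupling of the two players is the point of competitive gradient descent.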
Going to read this one
Thank you!
Thank you so much for sharing :)
Massive thanks!
Wow, game changer!
Stunning work!
Great results and a cool idea. The paper left me with some questions: 1) How much slower is this approach at different batch sizes and image sizes? 2) What do you mean on p. 6 by "The additional term is a gradient step for ||∇_y f||"? What norm is used here?
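For reference, one consistent reading of the quoted sentence, assuming the Euclidean norm (a sketch, not the authors' reply): the cross term that competitive gradient descent adds to the simultaneous-gradient update is exactly a gradient step on half the squared norm of the opponent's gradient,

    \nabla_x \left( \tfrac{1}{2} \lVert \nabla_y f(x, y) \rVert_2^2 \right) = D^2_{xy} f \, \nabla_y f ,

so under that reading the norm is the ordinary l2 norm, and the extra term pushes toward points where the opponent's gradient is small.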
Video is available of my talk at @the_IAS on using competitive gradient descent to fix #GANs #AI, joint work with Florian Schaefer and @Kay12400259 at @Caltech: https://www.youtube.com/watch?v=y4XxN3hKPDE