Updated paper on implicit competitive regularization in #GANs https://arxiv.org/abs/1910.05852 Blog: https://f-t-s.github.io/projects/icr/ GANs work due to simultaneous optimization of generator & discriminator, not choice of divergence. With Florian Schaefer @Kay12400259 @Caltech pic.twitter.com/QwmJgJhzDZ
-
GANs are framed as a competition between generator and discriminator. Our work shows that, in addition, cooperation is essential to stabilize training and avoid oscillations/mode collapse. Cooperation arises from simultaneous optimization. Our method CGD further enhances this.
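For reference, a sketch of one CGD update for the zero-sum game \min_x \max_y f(x,y), with step size \eta, following the companion Competitive Gradient Descent paper (https://arxiv.org/abs/1905.12103); the sign conventions here are one common statement and may differ from the paper's exact notation:

\begin{aligned}
x_{k+1} &= x_k - \eta\,\big(\mathrm{Id} + \eta^2 D^2_{xy}f\, D^2_{yx}f\big)^{-1}\big(\nabla_x f + \eta\, D^2_{xy}f\, \nabla_y f\big),\\
y_{k+1} &= y_k + \eta\,\big(\mathrm{Id} + \eta^2 D^2_{yx}f\, D^2_{xy}f\big)^{-1}\big(\nabla_y f - \eta\, D^2_{yx}f\, \nabla_x f\big).
\end{aligned}

When the mixed derivative D^2_{xy}f vanishes, both updates reduce to simultaneous gradient descent/ascent, so CGD departs from plain GD only where the two players actually interact.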
-
Thanks for this. Reading the blog!
-
The interpretation behind the second-order derivatives in your work is really interesting, and the results are amazing. An optimization question: isn't CGD much slower than GD per iteration? (As I understand it, you compute D_{xy}, which is really slow?) Also, what is the case for neural nets?
-
CGD is only about twice the cost of GD when mixed-mode differentiation is available, as in #JAX. Further, with conjugate gradients, the iteration reduces to GD when there is no instability. https://twitter.com/AnimaAnandkumar/status/1219133562651176965?s=20