For saddle point problems and more general games, plain gradient descent can cycle or diverge. Using an extrapolation step (the extragradient method) addresses this issue and converges to a saddle point (Nash equilibrium). https://en.wikipedia.org/wiki/Variational_inequality pic.twitter.com/Kpxbd2Cqbh
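A minimal NumPy sketch of the point above, on the toy bilinear game f(x, y) = x*y (my choice of example, not one given in the tweet): simultaneous gradient descent-ascent spirals away from the saddle at the origin, while the extragradient variant, which evaluates the gradient at an extrapolated point and then updates the original iterate, contracts toward it.

```python
import numpy as np

def grads(x, y):
    # Gradients of f(x, y) = x * y: df/dx = y, df/dy = x.
    return y, x

def gda_step(x, y, eta):
    # Simultaneous gradient descent (in x) / ascent (in y).
    gx, gy = grads(x, y)
    return x - eta * gx, y + eta * gy

def extragradient_step(x, y, eta):
    # Extrapolation ("look-ahead") step first...
    gx, gy = grads(x, y)
    xh, yh = x - eta * gx, y + eta * gy
    # ...then update the *original* point using the extrapolated gradient.
    gxh, gyh = grads(xh, yh)
    return x - eta * gxh, y + eta * gyh

eta = 0.2
xg = yg = xe = ye = 1.0
for _ in range(200):
    xg, yg = gda_step(xg, yg, eta)
    xe, ye = extragradient_step(xe, ye, eta)

print("plain GDA, distance to saddle:    ", np.hypot(xg, yg))  # grows without bound
print("extragradient, distance to saddle:", np.hypot(xe, ye))  # decays toward 0
```

On this game the plain iterates grow by a factor of sqrt(1 + eta^2) per step, while the extragradient iterates shrink by roughly the same factor, which is what the printout shows.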
Replying to @gabrielpeyre @MJEhrhardt
We have a new method called competitive gradient descent that uses the mixed Hessian. Unlike extragradient, it can work with high learning rates (sketch below the thread). https://twitter.com/AnimaAnandkumar/status/1205173860284293121?s=20
Prof. Anima Anandkumar added,
Prof. Anima Anandkumar @AnimaAnandkumar
Florian Schafer and I have a #NeurIPS2019 poster #195 today from 10:45 to 12:45. GAN training is unstable and suffers from mode collapse. How can we fix optimization in GANs? We propose competitive gradient descent: each update is a Nash equilibrium of a local game. Blog: https://f-t-s.github.io/projects/cgd/ pic.twitter.com/RinSTyKiyg
3:06 AM - 16 Dec 2019
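A hedged sketch of the update the two tweets above describe, specialized (by me) to the zero-sum case min_x max_y f(x, y): each competitive gradient descent step solves a local bilinear game between the two players, which is where the mixed Hessian D²_xy f enters, and the resulting matrix-inverse damping is what lets it tolerate learning rates that make plain or extragradient updates unstable. The function name and the toy check are mine; for the authors' exact formulation and the general two-player case, see the linked blog post and NeurIPS paper.

```python
import numpy as np

def cgd_zero_sum_step(x, y, grad_x, grad_y, Dxy, eta):
    """One competitive gradient descent step for min_x max_y f(x, y) (my
    zero-sum specialization of the update in the linked paper/blog).

    grad_x, grad_y : gradients of f at (x, y)
    Dxy            : mixed Hessian d^2 f / (dx dy) at (x, y), shape (dim_x, dim_y)
    """
    I_x = np.eye(len(x))
    I_y = np.eye(len(y))
    # Each player best-responds to the other's anticipated move in a local
    # bilinear approximation of the game; solving that local game gives:
    dx = -eta * np.linalg.solve(I_x + eta**2 * Dxy @ Dxy.T,
                                grad_x + eta * Dxy @ grad_y)
    dy =  eta * np.linalg.solve(I_y + eta**2 * Dxy.T @ Dxy,
                                grad_y - eta * Dxy.T @ grad_x)
    return x + dx, y + dy

# Toy check on the bilinear game f(x, y) = x . y (saddle at the origin):
# even with a large learning rate the iterates contract, where plain
# gradient descent-ascent would spiral outward.
x = np.array([1.0]); y = np.array([1.0])
eta = 1.0  # deliberately large
for _ in range(50):
    x, y = cgd_zero_sum_step(x, y, grad_x=y.copy(), grad_y=x.copy(),
                             Dxy=np.array([[1.0]]), eta=eta)
print("distance to saddle after 50 CGD steps:", np.hypot(x[0], y[0]))
```

In practice one would not form the mixed Hessian explicitly; as I understand it, the paper applies the inverse with an iterative solver (conjugate gradient) using Hessian-vector products from automatic differentiation.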