Prof. Anima Anandkumar Retweeted Rahel Jhirad
Great to be part of @the_IAS workshop on #DeepLearning. GAN training can be stabilized without the need for explicit gradient penalties!
Latest paper also available: https://twitter.com/arXiv_Daily/status/1184138720871534592?s=20
https://twitter.com/RahelJhirad/status/1184127061100765186
Prof. Anima Anandkumar added,
Rahel Jhirad @RahelJhirad
At @the_IAS Workshop on Theory of Deep Learning: Where next? @AnimaAnandkumar
Fixing GAN optimization through competitive gradient descent - Optimization in ML; Beyond GD
Competitive Gradient Descent w/ Florian Schäfer
https://arxiv.org/pdf/1905.12103.pdf …
Livestream: https://www.ias.edu/livestream
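For context on the "no explicit gradient penalties" claim: competitive gradient descent (CGD) replaces simultaneous gradient steps with the Nash equilibrium of a local bilinear approximation of the two-player game, which damps the rotational dynamics that destabilize GAN training. Below is a minimal sketch (not the authors' code) of the zero-sum CGD update from the linked arXiv paper, applied to a toy bilinear game; the objective f(x, y) = xᵀy, the step size, and the starting iterates are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Toy zero-sum objective (assumed for illustration): f(x, y) = x^T y.
# Plain simultaneous gradient descent-ascent cycles/diverges on this game,
# while CGD contracts toward the equilibrium at the origin.
def f(x, y):
    return jnp.dot(x, y)

def cgd_step(x, y, eta=0.2):
    """One competitive gradient descent step for min_x max_y f(x, y)."""
    gx = jax.grad(f, argnums=0)(x, y)   # nabla_x f
    gy = jax.grad(f, argnums=1)(x, y)   # nabla_y f
    # Mixed second derivatives D^2_xy f and D^2_yx f via autodiff.
    Dxy = jax.jacfwd(jax.grad(f, argnums=0), argnums=1)(x, y)
    Dyx = jax.jacfwd(jax.grad(f, argnums=1), argnums=0)(x, y)
    n, m = x.shape[0], y.shape[0]
    # Closed-form solution of the regularized local bilinear game:
    #   dx = -eta (I + eta^2 Dxy Dyx)^{-1} (gx + eta Dxy gy)
    #   dy = +eta (I + eta^2 Dyx Dxy)^{-1} (gy - eta Dyx gx)
    dx = -eta * jnp.linalg.solve(jnp.eye(n) + eta**2 * Dxy @ Dyx,
                                 gx + eta * Dxy @ gy)
    dy = eta * jnp.linalg.solve(jnp.eye(m) + eta**2 * Dyx @ Dxy,
                                gy - eta * Dyx @ gx)
    return x + dx, y + dy

x = jnp.array([1.0, -1.0])
y = jnp.array([0.5, 2.0])
for _ in range(100):
    x, y = cgd_step(x, y)
print(x, y)  # both iterates shrink toward the equilibrium (0, 0)
```

The linear solves are what make each player anticipate the other's move; for large models the paper solves these systems iteratively with Jacobian-vector products rather than forming the mixed Hessians explicitly, as this sketch does.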
9:11 AM - 15 Oct 2019
0 replies
15 retweets
55 likes