-
-
-
-
-
Do you have any thoughts on the relationship between input noise, gradient penalties, and normalization? Do you think they have independent benefits that stack or are they all different methods of achieving the same advantage?
-
I think input noise and gradient penalties have the same effect. Normalization might be orthogonal and it might be possible that combining normalization with gradient penalties is beneficial, although I don't have any experimental or theoretical evidence to back it up.
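For context, a minimal sketch of the kind of gradient penalty being discussed (an R1-style penalty on the discriminator's gradient at real data), assuming a PyTorch setup; D, G, bce and gamma are placeholder names, not the authors' code:

```python
import torch

def r1_penalty(d_real, x_real):
    # Gradient of the summed discriminator logits w.r.t. the real inputs
    grad, = torch.autograd.grad(
        outputs=d_real.sum(), inputs=x_real, create_graph=True)
    # Squared L2 norm per sample, averaged over the batch
    return grad.pow(2).reshape(grad.size(0), -1).sum(dim=1).mean()

# Typical use in a discriminator step (gamma is a hyperparameter, e.g. 10):
#   x_real.requires_grad_(True)
#   d_real = D(x_real)
#   loss_d = bce(d_real, ones) + bce(D(G(z)), zeros) + 0.5 * gamma * r1_penalty(d_real, x_real)
```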
End of conversation
New conversation -
-
-
I like that you've shown the Roth et al. gradient penalty can be simplified a lot. That fits with my intuition for what it was doing. I've also had an intuition that spectral norm does something similar to gradient penalties. Do your results imply anything about spectral norm?
-
No, we don't have any theoretical results about spectral norm and it is also a bit tricky to analyze theoretically. Maybe it makes the architectures locally stable, but it is also possible that it solves a totally different problem.
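For readers unfamiliar with it, a minimal sketch of what spectral normalization (the Miyato et al. technique the question refers to) does: divide a weight matrix by a power-iteration estimate of its largest singular value. Names here are illustrative; in practice PyTorch provides this as torch.nn.utils.spectral_norm:

```python
import torch
import torch.nn.functional as F

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    # W: (out, in) weight matrix; u: persistent estimate of the left singular vector
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0, eps=eps)  # right singular vector estimate
        u = F.normalize(W @ v, dim=0, eps=eps)      # left singular vector estimate
    sigma = torch.dot(u, W @ v)                     # estimated largest singular value
    return W / sigma, u
```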
End of conversation
New conversation -
-
-
I see you've proven that NS-GAN doesn't converge. That's an interesting result. Do you know whether NS-GAN is always non-convergent, or would NS-GAN converge if augmented with input noise / gradient penalties?
-
NS-GAN + input noise or gradient penalties is locally asymptotically stable in the realizable case. The main problem with input noise is, of course, that it introduces noise to the training and blurs out higher frequency features in image space.
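A minimal sketch of instance (input) noise, assuming a PyTorch setup; sigma is a hyperparameter that is often annealed towards zero, and the blurring of higher-frequency features mentioned above comes from this added noise:

```python
import torch

def add_instance_noise(x, sigma=0.1):
    # Gaussian noise added to both real and generated samples before D sees them
    return x + sigma * torch.randn_like(x)

# d_real = D(add_instance_noise(x_real, sigma))
# d_fake = D(add_instance_noise(G(z), sigma))
```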
End of conversation
New conversation -
-
-
I see you've provided convergence proofs for a lot of different base losses and a lot of different gradient penalties. Do you have any guidance for which are better to use in practice? Or if there is no general winner, any guidance for which to use in which situations?
-
From a local perspective it is just important to have f'(0) != 0 and f''(0) < 0 (also see the paper by Nagarajan & Kolter). However, globally the story might be different. In practice, NS-GAN seems to work well and is also a bit more stable for the Dirac-GAN.
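A quick numerical sanity check of those two conditions for the standard choice f(t) = -log(1 + exp(-t)) (my own snippet, not from the paper), using central finite differences:

```python
import math

def f(t):
    # f used in the standard GAN objective
    return -math.log(1.0 + math.exp(-t))

h = 1e-3
f_prime = (f(h) - f(-h)) / (2 * h)              # ~ 0.5,  so f'(0) != 0
f_second = (f(h) - 2 * f(0.0) + f(-h)) / h**2   # ~ -0.25, so f''(0) < 0
print(f_prime, f_second)
```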
End of conversation
New conversation -
-
-
I like this paper! I have a few questions. I'm sorry if these are already answered in the paper.
-
Thanks. I'll try to answer your questions one by one.
-
Thanks!
End of conversation
New conversation -
-
-
Your slides @icmlconf were really amazing! I loved the animations you used to provide intuition on your results! Do you plan to publish them somewhere?
-
Thanks! You can find a PDF version of the slides on our website: https://avg.is.tuebingen.mpg.de/publications/meschedericml2018 … Unfortunately, they don't contain the animations, so I uploaded some of them to YouTube: https://www.youtube.com/watch?v=GY3nPi96ZgI&list=PLx8vdCTsoUMge9C0-9mYrWpXc4J6GYkMO …
-
Perfect, thanks a lot :)
End of conversation
New conversation -
-
-
Is there code available?
-
Never mind, I found the code: https://github.com/LMescheder/GAN_stability … for anyone else who is looking.
End of conversation
New conversation -
-
-
Cool! I am just wondering, is progressive growing beneficial as well because it yields faster training times?
-
-
-
Very impressive!
-
-
-
I assume there are feature codes that reproduce the original images. Bouncing between familiar faces might be interesting. Also exploring variations around a well-known face, one feature at a time for starters. Can it turn Tom Cruise's face, for instance? I'm guessing not.
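For readers wondering how such latent-space exploration is usually done, a minimal, hypothetical sketch of linear interpolation between two latent codes decoded with a generator G (not the authors' code):

```python
import torch

def interpolate_latents(G, z0, z1, steps=8):
    # Linear interpolation in latent space, each step decoded by the generator
    alphas = torch.linspace(0.0, 1.0, steps)
    with torch.no_grad():
        return [G((1.0 - a) * z0 + a * z1) for a in alphas]
```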
-