I swear there will be a proper PDF with math galore for NoGAN eventually. I do think it has a lot of potential to make training better, so for now we can at least tweet and blog about it! https://twitter.com/remykarem/status/1135183936227749888
-
Replying to @citnaj
I will get around to trying it at some point! Is it set up as a Jupyter notebook in the DeOldify GitHub repo?
-
Replying to @CpnTaters
NoGAN training is, correct. Just make sure to read the instructions - it's not 100% top-down execution.
-
Replying to @citnaj @CpnTaters
I ran into a few errors, but I'm pretty sure it's because I was doing something wrong. Where's the best place to post for discussion/pointers once I inevitably get stuck? A GitHub issue?
-
Replying to @braddwyer @CpnTaters
If it's something you think is useful for others, sure, that'd probably be the best place. Otherwise feel free to DM me!
-
Replying to @citnaj @CpnTaters
I've been having trouble getting the repeatable GAN process to run a second time. Still investigating, but I'll follow up later this week.
-
Replying to @braddwyer @CpnTaters
I carried over a habit from fastai notebooks that may not be great for this case: you have to run some cells carefully before doing the repeat. Namely, you'll need to make sure data_gen is initialized at 64px again. That part is dumb, and I know it.
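In fastai v1 terms, that re-initialization looks roughly like this - a minimal sketch, assuming the notebook's own get_data helper and learn_gen learner (the exact signature is a guess, so check it against your copy of the notebook):

```python
# Re-run before each repeat of the GAN cycle: the generator's data must be
# rebuilt at 64px, matching the first GAN round. `get_data` stands in for
# the notebook's data-loading helper; its signature here is an assumption.
bs, sz = 64, 64                    # batch size is illustrative; size must be 64px
data_gen = get_data(bs=bs, sz=sz)  # make sure this is 64px again before repeating
learn_gen.data = data_gen          # point the existing generator learner at it
```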
-
Replying to @citnaj @CpnTaters
If I recall correctly, the issue was that when I switched that part from 0 to 1, it wasn't finding the 0 checkpoint it had tried to save.
-
Replying to @braddwyer @CpnTaters
Ohh, that's right. What you need to do is look through all the saved checkpoints for the set of weights that works, select it, and rename it to the name you need here. This is the pain-in-the-ass part of NoGAN at this point. A larger batch size helps a lot with stabilization.
-
Replying to @citnaj @CpnTaters
I'm not sure I understand what you mean. How do I check which ones work? I've got a list of weights like this from run 0, and I'm not sure what the _#### at the end represents: pic.twitter.com/xtfZO3MM7o
-
The number at the end is the iteration.
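A minimal sketch of that picking-and-renaming step; the folder and the gen_0_* naming pattern are assumptions, so match them to what your run actually wrote into the fastai models/ directory:

```python
from pathlib import Path
import shutil

# Hypothetical sketch: list run-0 generator checkpoints in iteration order,
# pick the last good one, and copy it to the fixed name the next round loads.
models = Path('data/models')                    # illustrative checkpoint folder
ckpts = sorted(models.glob('gen_0_*.pth'),
               key=lambda p: int(p.stem.rsplit('_', 1)[-1]))  # _#### = iteration

for p in ckpts:
    print(p.name)   # render outputs from each and note where glitches begin

chosen = ckpts[-2]                              # e.g. the last pre-glitch checkpoint
shutil.copy(chosen, models / 'gen_0.pth')       # the name the next cell expects
```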
-
Replying to @citnaj @CpnTaters
Ah, I see - that's what this is referring to? "Find the checkpoint just before where glitches start to be introduced." I thought that was referring to the same thing as "repeating the cycle below a few times (about 5-8?)" - misunderstood! I may have to start over.
-
Replying to @braddwyer @CpnTaters
Yes on the checkpoint question. So basically you're finding the good checkpoint, then doing a new round of NoGAN critic pretraining on newly generated images, which then trains the generator again in a GAN setting, and so on.
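In pseudocode, that repeat cycle looks something like this - every function name is a hypothetical stand-in for a cell in the DeOldify NoGAN notebook, not a real API:

```python
# Pseudocode for the NoGAN repeat cycle; all calls are illustrative stand-ins.
def repeat_nogan_cycle(rounds=5):            # the thread suggests roughly 5-8
    for rnd in range(1, rounds + 1):
        load_picked_checkpoint(rnd - 1)      # the hand-picked pre-glitch generator weights
        regenerate_fake_images()             # dump fresh generator outputs to disk
        pretrain_critic_on_new_fakes()       # critic-only round against the new fakes
        run_brief_gan_round(rnd)             # short GAN training; checkpoints per iteration
```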
-
-
NoGAN - a transfer-learning technique for GANs:
G - generator
D - discriminator
1. Pretrain G using perceptual loss
2. Save generated images from the pretrained G
3. Pretrain D as a binary classifier (real vs. generated)
4. Train both G and D in a normal GAN setting
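A toy, self-contained PyTorch sketch of those four steps - random tensors and tiny conv nets stand in for real data, DeOldify's U-Net generator, and its perceptual loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny stand-in networks: DeOldify itself uses a U-Net generator and a
# VGG-feature perceptual loss; L1 on pixels stands in for that here.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 3, 3, padding=1))            # grayscale -> color
D = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),  # 32x32 -> 15x15
                  nn.Flatten(), nn.Linear(8 * 15 * 15, 1))  # real/fake logit

gray  = torch.rand(16, 1, 32, 32)   # stand-in input images
color = torch.rand(16, 3, 32, 32)   # stand-in target images
real, fake_lbl = torch.ones(16, 1), torch.zeros(16, 1)

# 1. Pretrain G alone, no critic involved.
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for _ in range(50):
    g_opt.zero_grad()
    F.l1_loss(G(gray), color).backward()
    g_opt.step()

# 2. Save the pretrained generator's outputs (kept in memory here).
fakes = G(gray).detach()

# 3. Pretrain D as a plain real-vs-generated binary classifier.
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
for _ in range(50):
    d_opt.zero_grad()
    (F.binary_cross_entropy_with_logits(D(color), real) +
     F.binary_cross_entropy_with_logits(D(fakes), fake_lbl)).backward()
    d_opt.step()

# 4. Brief conventional GAN training with both nets warm-started.
for _ in range(20):
    fake = G(gray)
    d_opt.zero_grad()
    (F.binary_cross_entropy_with_logits(D(color), real) +
     F.binary_cross_entropy_with_logits(D(fake.detach()), fake_lbl)).backward()
    d_opt.step()

    g_opt.zero_grad()
    F.binary_cross_entropy_with_logits(D(fake), real).backward()
    g_opt.step()
```

Step 4 is kept deliberately brief: as the thread above describes, glitches creep in past a certain point, which is why you hunt for the last good checkpoint and then repeat the cycle.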