256px Asuka faces, bottom-middle: unimpressed Asuka is unimpressed. "Almost a month of GPU and these are the best faces you can generate, baka?" pic.twitter.com/1B5U7cBXAs
Looking at the latent space interpolation, I think all-Asuka may not be possible with feasible compute - fundamentally too hard to generate one-shot from all possible Asuka images with just feedforward CNNs and no additional supervision like tag embeddings: https://mega.nz/#!f35mVCAK!n-oETSiil_aIK0ENUOskM8GSdv8DGnb8V0tt6haJJwU pic.twitter.com/98zWws4ulV
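(For anyone wanting to poke at the same thing, here's a minimal sketch of the usual latent-space walk: spherical interpolation ("slerp") between two random latent vectors. The `generate` call is a stand-in for the trained ProGAN's sampling function, not its actual API.)

```python
# Hedged sketch: slerp between two latent vectors to walk a GAN's latent space
# and eyeball whether one unconditional model covers the whole distribution.
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1 at fraction t."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1              # vectors nearly parallel
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.RandomState(0)
z_start, z_end = rng.randn(512), rng.randn(512)     # ProGAN uses a 512-d latent
frames = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 16)]
# images = [generate(z) for z in frames]            # hypothetical render call; tile for inspection
```

(Slerp rather than straight linear interpolation keeps the intermediate vectors at roughly the norm the Gaussian prior actually produces, so the frames stay closer to the distribution the generator was trained on.)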
For generation, it does seem like using an intermediary representation or providing more detail to help with the data conditioning helps a lot.
Lack of detail isn't the problem - that's 1024px there, bigger than many of the original images. Unconditional also works fine for human/anime faces: I've switched to retraining Nvidia's CelebA-HQ model on the Asuka faces. It's a lack of adaptivity or metadata, IMO.
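(A hypothetical PyTorch sketch of that transfer-learning setup - not Nvidia's actual TensorFlow ProGAN code. `Generator`, `Discriminator`, the checkpoint filename, and the dataset path are all placeholders; the point is just warm-starting both networks from the photographic-face weights and continuing training on the new faces.)

```python
# Hedged sketch of transfer learning a pretrained face GAN onto a new face dataset.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from mymodels import Generator, Discriminator            # hypothetical model definitions

device = "cuda" if torch.cuda.is_available() else "cpu"
G, D = Generator().to(device), Discriminator().to(device)

# Warm-start from the photographic-face checkpoint instead of random init.
ckpt = torch.load("celebahq_progan.pt", map_location=device)   # hypothetical checkpoint
G.load_state_dict(ckpt["G"])
D.load_state_dict(ckpt["D"])

# Point training at the new dataset (cropped anime faces in a folder).
tfm = transforms.Compose([transforms.Resize(512),
                          transforms.CenterCrop(512),
                          transforms.ToTensor()])
faces = datasets.ImageFolder("asuka_faces/", transform=tfm)    # hypothetical path
loader = DataLoader(faces, batch_size=16, shuffle=True, num_workers=4)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3, betas=(0.0, 0.99))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3, betas=(0.0, 0.99))
# ...then continue the usual adversarial / progressive-growing training loop;
# low-level face structure transfers, so convergence is much faster than from scratch.
```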
The transfer learning, incidentally, seems to be working quite well, contrary to me being told that photo faces were far too different. :) After barely a day, the 64px samples are very good: pic.twitter.com/u9a4NkcMFi
Continuing with the CelebHQ, up to 256px: I like the middle-top's problems. How many eyepatches does 2.0 Asuka have, with horizontal flipping data augmentation? Who cares - "Eye patches are..." (•_•) / ( •_•)>⌐■-■ / (⌐■_■) "...capslock for cool. Yeeeeah!" pic.twitter.com/NB7nEPWjbM
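(The flip augmentation itself is trivial - the catch is exactly the one above: mirroring an asymmetric character erases which side features like the eyepatch belong on, so the GAN happily puts them on either eye. A minimal sketch with placeholder paths:)

```python
# Hedged sketch of horizontal-flip data augmentation: doubles the effective
# dataset, but mirrors any asymmetric feature (e.g. a one-sided eyepatch).
from pathlib import Path
from PIL import Image

src = Path("asuka_faces")           # hypothetical input directory of cropped faces
dst = Path("asuka_faces_flipped")   # mirrored copies written alongside the originals
dst.mkdir(exist_ok=True)

for p in src.glob("*.png"):
    Image.open(p).transpose(Image.FLIP_LEFT_RIGHT).save(dst / p.name)
```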
Transfer between anime faces also works well, as expected. Here's ~27h of training the Asuka 512px ProGAN on Holo faces (#spiceandwolf), up to 128px so far. Lower right is already recognizably Juu Ayakura-style.pic.twitter.com/8vA3KDRxug
Much larger set of random samples after reaching 512px, and looking closer at rough top decile by quality: https://imgur.com/a/GjnZVDp
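(The thread doesn't say how that "top decile by quality" was picked - probably by eye - but one mechanical approximation is to score each random sample with the trained discriminator and keep the highest-scoring tenth. Sketch below, with `D` as a stand-in for the real model's forward pass.)

```python
# Hedged sketch: rank generated samples by discriminator score and keep the top 10%.
import numpy as np

def top_decile(images, scores):
    """Return the 10% of images the discriminator rates most 'real-looking'."""
    order = np.argsort(scores)[::-1]          # indices sorted by descending score
    keep = max(1, len(images) // 10)
    return [images[i] for i in order[:keep]]

# scores = np.array([float(D(x)) for x in samples])   # hypothetical scoring call
# best = top_decile(samples, scores)
```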
I suffered some mode collapse/divergence, I think. Apropos of #BigGAN, I restarted afresh from Asuka and tried accumulating n=200 minibatches; I had to quadruple the learning rate to make noticeable progress, because accumulating that many minibatches leaves very few optimizer updates per hour. Not sure if it's better yet. pic.twitter.com/lViBu8oOPp
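(The accumulation trick itself, as a hypothetical PyTorch sketch - the toy model and random data stand in for the GAN so it runs; the 4x factor is just the ad-hoc compensation described above, not a derived rule.)

```python
# Hedged sketch of gradient accumulation: sum gradients over n=200 minibatches
# before each optimizer step, mimicking a BigGAN-sized batch on one GPU.
import torch
import torch.nn as nn

model = nn.Linear(512, 1)                                        # toy stand-in for the GAN
loader = [(torch.randn(16, 512), torch.randn(16, 1)) for _ in range(400)]
base_lr = 1e-3

accum_steps = 200                                                # n=200 minibatches per update
opt = torch.optim.Adam(model.parameters(), lr=4 * base_lr)       # 4x LR to compensate for few updates

opt.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = nn.functional.mse_loss(model(x), y) / accum_steps     # average over the virtual batch
    loss.backward()                                              # gradients accumulate in .grad
    if (i + 1) % accum_steps == 0:                               # one optimizer step per 200 batches
        opt.step()
        opt.zero_grad()
```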
There's definitely some math to do here. 512 TPU chips for BigGAN is so much compute. For comparison, BERT-Large was trained in 4 days on 64 TPUs, which they said would take 1 year on 8 P100s: https://www.reddit.com/r/MachineLearning/comments/9nfqxz/r_bert_pretraining_of_deep_bidirectional/e7mg8j0/
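(Doing the arithmetic on just the numbers quoted - device-days only, ignoring per-chip speed differences between TPU generations and P100s:)

```python
# Back-of-the-envelope comparison using only the figures cited above.
bert_tpu_days  = 64 * 4              # BERT-Large: 64 TPU chips x 4 days = 256 TPU-days
bert_p100_days = 8 * 365             # quoted alternative: 8 P100s x ~1 year = ~2,920 GPU-days
print(bert_p100_days / bert_tpu_days)   # ~11x implied per-chip TPU-vs-P100 advantage

biggan_chips = 512                   # BigGAN's TPU allocation
print(biggan_chips / 64)             # = 8x the chips BERT-Large used
```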
BERT at least is intended to be a reusable pretrained net or embedding generator; GANs aren't much used for transfer.