Tweets from @avdnoord

  1. Pinned Tweet
    Jun 4, 2019

    VQVAE-2 finally out! Powerful autoregressive models in a hierarchical compressed latent space. No modes were collapsed in the creation of these samples ;) Arxiv: More samples and details 👇 [thread]

  2. Dec 9, 2019

    Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection

  3. Retweeted
    Aug 2, 2019

    > "AI works like the brain!" Ahh, yes, I fondly remember my intro to linguistics class, wherein I read all of English wikipedia thousands of times until convergence, and then could finally construct better-than-random parse trees.

  4. Retweeted
    Jun 15, 2019

    Come hear how to train the Cake at our workshop on Self-Supervised Learning today at ICML. Lineup: Jacob Devlin, Alison Gopnik, Olivier Henaff, A. Zisserman, Abhinav Gupta, Alyosha Efros.

  5. Jun 4, 2019

    More samples and full high-res uncompressed images can be found here [70M]:

  6. Jun 4, 2019

    For megapixel faces (1024x1024) we use a three-stage VQVAE model.

  7. Jun 4, 2019

    We found the diversity of these samples to be much higher than competing adversarial methods.

  8. Jun 4, 2019

    Samples from our 256px two-stage ImageNet VQVAE

  9. Jun 4, 2019

    We use a hierarchical VQVAE that compresses images into a latent space about 50x smaller for ImageNet and 200x smaller for FFHQ faces. The PixelCNN only models the latents, allowing it to spend its capacity on the global structure and most perceivable features.

  10. Retweeted
    May 23, 2019

    Deep learning has so far relied on massive amounts of supervision. We show that unsupervised representation learning with Contrastive Predictive Coding greatly improves data-efficiency.

  11. May 23, 2019

    + now also on Twitter.

  12. May 22, 2019

    With Olivier Henaff and others.

  13. May 22, 2019

    Excited to share our latest results on Contrastive Predictive Coding! -A linear classifier on CPC features yields 61% accuracy, outperforming the original AlexNet result with unsupervised learning. -New state of the art in semi-supervised learning w/ 1% labels.

  14. Apr 27, 2019

    Note that the deadline for the self-supervised learning workshop has been extended to May 6! Plenty of time to write up your work in a 4-page abstract :)

  15. Apr 19, 2019

    The Self-Supervised Learning workshop submission deadline is next Thursday (25/04)! Consider submitting an extended abstract (4 pages) of your latest work. Work under review for other conferences is welcome.

  16. Retweeted
    Apr 7, 2019

    We released a new large-scale corpus of English speech derived for TTS. LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. Dataset: Paper:

  17. Mar 30, 2019

    Excited to announce our Workshop on Self-Supervised Learning! Covering: Vision, NLP, Audio, Robotics, RL... Submissions now open, deadline April 25! Speakers: Andrew Zisserman, Alexei Efros, Jacob Devlin, Abhinav Gupta

  18. Jan 15, 2019

    How to generate high-quality speech from a new speaker in 5-10 minutes with few-shot meta-learning. Check out the samples here: Arxiv:

  19. Retweeted
    Dec 13, 2018

    Starting today in the U.S., say g’day and cheerio to your , now able to speak with an Australian and British accent →

  20. Retweeted
    Dec 7, 2018

    Interested in deep learning, mutual information, and variational bounds? Come check out my poster at 17:30 in the Bayesian Deep Learning workshop!

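The hierarchical VQVAE in tweet 9 rests on a vector-quantization step: each encoder output vector is snapped to its nearest entry in a learned codebook, and only the resulting integer codes are kept for the PixelCNN prior to model. A minimal NumPy sketch of that step; the sizes, names, and the `quantize` helper are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e:      (H, W, D) encoder outputs
    codebook: (K, D) learned embedding vectors
    returns:  (H, W) integer codes and (H, W, D) quantized vectors
    """
    flat = z_e.reshape(-1, z_e.shape[-1])                    # (H*W, D)
    # Squared Euclidean distances via ||a||^2 - 2ab + ||b||^2
    d = ((flat ** 2).sum(1, keepdims=True)
         - 2.0 * flat @ codebook.T
         + (codebook ** 2).sum(1))                           # (H*W, K)
    codes = d.argmin(axis=1)                                 # (H*W,)
    z_q = codebook[codes].reshape(z_e.shape)                 # (H, W, D)
    return codes.reshape(z_e.shape[:-1]), z_q

# Illustrative sizes: a 32x32 latent grid of 64-dim vectors, 512-entry codebook.
rng = np.random.default_rng(0)
z_e = rng.normal(size=(32, 32, 64))
codebook = rng.normal(size=(512, 64))
codes, z_q = quantize(z_e, codebook)
print(codes.shape, z_q.shape)  # (32, 32) (32, 32, 64)
```

Training additionally needs a straight-through gradient estimator and the codebook/commitment losses; in the hierarchical model, two or three such code grids at different resolutions make up the compressed latent space the PixelCNN models.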
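Contrastive Predictive Coding, referenced in tweets 10 and 13, trains representations with the InfoNCE objective: each context-based prediction must pick out its true target encoding among negatives drawn from the rest of the batch. A minimal NumPy sketch, with illustrative shapes and names rather than the paper's code:

```python
import numpy as np

def info_nce(preds, targets):
    """InfoNCE loss for a batch of prediction/target pairs.

    preds:   (N, D) context-based predictions
    targets: (N, D) target encodings; row i is the positive for preds[i],
             and all other rows serve as negatives.
    """
    logits = preds @ targets.T                        # (N, N) similarity scores
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(preds))
    return -log_probs[idx, idx].mean()                # cross-entropy on positives

rng = np.random.default_rng(0)
preds = rng.normal(size=(8, 16))
# Targets close to the predictions give a small (but positive) loss.
loss = info_nce(preds, preds + 0.01 * rng.normal(size=(8, 16)))
print(float(loss))
```

Minimizing this loss maximizes a lower bound on the mutual information between context and target, which is why larger batches (more negatives) tend to help.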
