David Berthelot

@D_Berthelot_ML

Machine Learning addict, working in Google Research. My opinions do not represent those of my employer.

Bay Area, CA
Joined May 2010

Tweets


  1. Pinned Tweet
    Jan 22

    FixMatch: focusing on simplicity for semi-supervised learning and improving the state of the art (CIFAR-10: 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn and Nicholas Carlini

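    For context, the core FixMatch recipe is compact enough to sketch. Below is a minimal, hypothetical PyTorch version of the unlabeled-data loss (the model and the weak/strong augmentation views are assumed): pseudo-label the weakly augmented view, keep only confident predictions, and train the strongly augmented view to match.

        import torch
        import torch.nn.functional as F

        def fixmatch_unlabeled_loss(model, u_weak, u_strong, threshold=0.95):
            # Pseudo-label from the weakly augmented view, without gradients.
            with torch.no_grad():
                probs = F.softmax(model(u_weak), dim=1)
                confidence, pseudo_labels = probs.max(dim=1)
                mask = (confidence >= threshold).float()
            # Train the strongly augmented view to match the pseudo-label,
            # counting only the confident examples.
            loss = F.cross_entropy(model(u_strong), pseudo_labels, reduction="none")
            return (mask * loss).mean()
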
  2. Jan 31

    A well-done video explanation of FixMatch, thanks!

  3. Retweeted
    Jan 24

    When I invented adversarial training as a defense against adversarial examples, I focused on making it as cheap and scalable as possible. Eric and collaborators have now upgraded the original cheap version to compete with newer, more expensive versions.

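    The "original cheap version" referenced here is, presumably, single-step FGSM adversarial training; a minimal sketch, assuming a standard classifier with inputs in [0, 1]:

        import torch
        import torch.nn.functional as F

        def fgsm_adversarial_loss(model, x, y, eps=8 / 255):
            # Fast gradient sign method: a single gradient step on the input.
            x = x.clone().requires_grad_(True)
            grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
            x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
            # Training on the perturbed input instead of x is the defense.
            return F.cross_entropy(model(x_adv), y)
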
  4. Jan 22

    Code is up: And being my usual distracted self, I forgot one co-author from the list: (Sorry Alex!) The code for ImageNet will come later.

  5. Jan 22

    Surprisingly, even with 1 example per class, results better than what was previously possible with 25 (before MixMatch) are achievable. On CIFAR-10, with a single example per class, FixMatch obtains between 48.58% and 85.32% test accuracy, with a median of 64.28%.

  6. Jan 12

    Just saw Neon AI from Samsung: the avatars look amazing. I'm trying to find information: someone mentioned they were actors representing the potential, while journalistic pieces sound like it's entirely rendered. It's like looking for a needle of information in a haystack of marketing...

  7. Retweeted

    "Happiness is like a rose by itself in the garden; when it blooms, the birds will sing and the bees will make honey." -- Anonymous AI

  8. Retweeted
  9. Retweeted
    Dec 20, 2019

    Yes! I got my first big conference paper accepted at ICLR, with a spotlight! We improve on the previous DeepMind paper "NALU" by 3x-20x. This took 7-8 months, working without any funding as an independent researcher. Paper: Code:

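    For reference, the NALU cell this paper improves on (from the DeepMind paper) gates between an additive path and a multiplicative path computed in log space; a compact sketch of the original unit:

        import torch
        import torch.nn as nn

        class NALU(nn.Module):
            def __init__(self, n_in, n_out, eps=1e-7):
                super().__init__()
                self.W_hat = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
                self.M_hat = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
                self.G = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
                self.eps = eps

            def forward(self, x):
                # Weights are pushed toward {-1, 0, 1} so exact arithmetic is learnable.
                W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
                add = x @ W.t()
                mul = torch.exp(torch.log(x.abs() + self.eps) @ W.t())
                gate = torch.sigmoid(x @ self.G.t())
                return gate * add + (1 - gate) * mul
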
  10. Retweeted
    Dec 11, 2019
  11. Retweeted
    Dec 11, 2019

    In case you missed our poster on MixMatch () today because you aren't in Vancouver or didn't survive the poster session stampede, here's the PDF: and here's a transcript of what I said to everyone who came by: ⬇️ 1/11

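    Two of MixMatch's building blocks are small enough to sketch here, following the paper's description (label guessing and the full training loop are omitted):

        import torch

        def sharpen(p, T=0.5):
            # Sharpen the prediction averaged over augmentations with temperature T.
            p = p ** (1 / T)
            return p / p.sum(dim=1, keepdim=True)

        def mixup(x1, y1, x2, y2, alpha=0.75):
            # MixMatch's mixup variant: lambda is clamped so the mixture
            # stays closer to its first argument.
            lam = torch.distributions.Beta(alpha, alpha).sample()
            lam = torch.max(lam, 1 - lam)
            return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
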
  12. Retweeted

    For those who unfortunately could not make it to NeurIPS, the poster for our paper on adversarial mixup resynthesis (a.k.a. autoencoders with mixup) is available online at

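    The core idea, roughly: decode a mixture of two latent codes and train adversarially so the result looks like data. A hypothetical sketch (encoder and decoder assumed; the discriminator and losses are omitted):

        import torch

        def resynthesize(encoder, decoder, x1, x2, alpha=1.0):
            # Mix two latent codes and decode the mixture; an adversarial
            # loss (not shown) pushes the output toward the data distribution.
            lam = torch.distributions.Beta(alpha, alpha).sample()
            z = lam * encoder(x1) + (1 - lam) * encoder(x2)
            return decoder(z)
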
  13. Retweeted
    Dec 6, 2019

    HYPE is an Oral!! We evaluate {6️⃣ GANs, 4️⃣ datasets, 2️⃣ sampling methods}. We show statistically insignificant correlation with FID et al. Fun fact: different GANs excel at different classes in ImageNet, e.g. 🍋 vs. 📚. Get HYPE

  14. Retweeted
    Dec 3, 2019

    I will be at next week to present MixMatch [1] and give a T5 demo [2]! Please get in touch if you want to discuss research, eat vegan food, and/or go bouldering. [1] [2]

  15. Retweeted
    Nov 25, 2019

    I made a small package that allows reading TFRecord files in PyTorch with no TensorFlow dependency:

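    The TFRecord container format is simple enough that a dependency-free reader is short; a minimal sketch that skips CRC verification (the yielded payloads are still serialized tf.train.Example protos, which need a protobuf parser):

        import struct

        def iter_tfrecord(path):
            # Each record: uint64 payload length, uint32 masked CRC of that
            # length, the payload bytes, then a uint32 masked CRC of the payload.
            with open(path, "rb") as f:
                while True:
                    header = f.read(8)
                    if not header:
                        break
                    (length,) = struct.unpack("<Q", header)
                    f.read(4)             # length CRC, unverified here
                    yield f.read(length)  # raw serialized example
                    f.read(4)             # payload CRC, unverified here
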
  16. Retweeted
    Nov 12, 2019

    I'm starting a professorship in the CS department at UNC in fall 2020 (!!) and am hiring students! If you're interested in doing a PhD, please get in touch. More info here:

  17. Nov 10, 2019
  18. Retweeted
    Oct 28, 2019

    has new research (w/ , and me) that sets a new state of the art in conditional image synthesis by using consistency regularization on GANs: . Thread follows:

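    Consistency regularization on GANs adds one extra discriminator term; a minimal sketch, assuming a non-saturating GAN loss and a semantics-preserving augment function (e.g., flip or shift):

        import torch.nn.functional as F

        def discriminator_loss(D, x_real, x_fake, augment, lam=10.0):
            loss_real = F.softplus(-D(x_real)).mean()
            loss_fake = F.softplus(D(x_fake)).mean()
            # The discriminator should score a real image and its augmented
            # copy identically.
            consistency = (D(x_real) - D(augment(x_real))).pow(2).mean()
            return loss_real + loss_fake + lam * consistency
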
  19. Retweeted
    Oct 23, 2019

    New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD. Paper: Code/models/data/etc: Summary ⬇️ (1/14)

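    "Text-to-text" here means every task is cast as string in, string out, distinguished only by a task prefix; illustrative input/target pairs in the style of the paper:

        examples = [
            # (input string, target string)
            ("translate English to German: That is good.", "Das ist gut."),
            ("cola sentence: The course is jumping well.", "not acceptable"),
            ("summarize: state authorities dispatched emergency crews ...",
             "six people hospitalized after a storm ..."),
        ]
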
  20. Retweeted
    Jun 19, 2019

    Our work exploring the use of learned optimizers to make more robust image models is on arXiv! We find that in some cases learned optimizers are capable of learning more robust image classifiers!

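    A learned optimizer replaces a hand-designed update rule (SGD, Adam) with a small trained network; a deliberately minimal sketch for intuition only (real learned optimizers use richer per-parameter features and recurrent state):

        import torch
        import torch.nn as nn

        class TinyLearnedOptimizer(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                # A shared MLP maps each parameter's gradient to its update.
                self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 1))

            @torch.no_grad()
            def step(self, params):
                for p in params:
                    if p.grad is not None:
                        update = self.net(p.grad.reshape(-1, 1)).reshape(p.shape)
                        p.add_(update)
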
