Ekin Dogus Cubuk

@ekindogus

Research scientist at Google Brain, working on machine learning and condensed matter physics.

Joined April 2013

Tweets


  1. Retweeted
    Jan 29

    Graphene physicist being asked a question about graphite: "Sorry, that's well outside my expertise, so it would be unreasonable to speculate." Any physicist being asked a question about philosophy: "Allow me to launch into a 30 minute lecture about why this question is trivial."

  2. Retweeted
    Jan 22

    FixMatch: focusing on simplicity for semi-supervised learning and improving the state of the art (CIFAR-10: 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn and Nicholas Carlini.

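    The core of the method is short enough to sketch. Below is a minimal, illustrative version of the FixMatch loss on unlabeled data in JAX: pseudo-label a weakly augmented view, keep only confident predictions, and train the strongly augmented view to match. The 0.95 threshold follows the paper; the function name and the surrounding model/augmentation pipeline are assumptions.

        import jax.numpy as jnp
        from jax.nn import softmax, log_softmax

        def fixmatch_unlabeled_loss(weak_logits, strong_logits, threshold=0.95):
            # Pseudo-label each weakly augmented view, keep only confident
            # predictions, and train the strongly augmented view to match them.
            probs = softmax(weak_logits, axis=-1)
            pseudo = jnp.argmax(probs, axis=-1)              # hard pseudo-labels
            mask = jnp.max(probs, axis=-1) >= threshold      # confidence gate
            log_p = log_softmax(strong_logits, axis=-1)
            ce = -jnp.take_along_axis(log_p, pseudo[:, None], axis=-1)[:, 0]
            return jnp.mean(ce * mask)                       # masked cross-entropy
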
  3. Jan 5
  4. Retweeted

    I'll start: "Do ImageNet Classifiers Generalize to ImageNet?" Presents evidence that leaderboard chasing has been fruitful. Models degrade on new test sets, but: 1) the order among the best models is preserved; 2) the deterioration is due to distribution shift, not overfitting.

  5. Retweeted
    Nov 18, 2019

    RandAugment: Practical automated data augmentation with a reduced search space. Decreasing the search space in a clever way avoids the need for a highly expensive computational search, e.g. NAS→EfficientNet, AutoAugment→RandAugment. Might be useful for domain randomization.

  6. Retweeted
    Dec 19, 2019

    JAX now supports Google Cloud TPUs! I contributed this example, solving a 2D wave equation with a spatially partitioned grid. The code is remarkably simple and all in pure Python!

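    For a sense of how simple, here is a minimal single-device sketch of one way to step a 2D wave equation in JAX; the linked example additionally partitions the grid across TPU cores. The grid size, wave speed, and periodic boundaries below are illustrative assumptions, not the example's actual settings.

        import jax
        import jax.numpy as jnp

        def laplacian(u):
            # 5-point stencil with periodic boundaries via jnp.roll
            return (jnp.roll(u, 1, 0) + jnp.roll(u, -1, 0) +
                    jnp.roll(u, 1, 1) + jnp.roll(u, -1, 1) - 4.0 * u)

        @jax.jit
        def step(u_prev, u, c=0.5):
            # leapfrog update for u_tt = c^2 (u_xx + u_yy), unit dt and dx;
            # c = 0.5 keeps the scheme stable in 2D
            return u, 2.0 * u - u_prev + c ** 2 * laplacian(u)

        u = jnp.zeros((128, 128)).at[64, 64].set(1.0)  # point disturbance
        u_prev = u
        for _ in range(100):
            u_prev, u = step(u_prev, u)
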
  7. Retweeted
    Dec 14, 2019

    Very compelling talk by on implementing molecular dynamics with JAX. I think the general strategy of upgrading our simulations to include autodiff (and probprog) will be a major theme of the next 5 years. Those points apply equally well to HEP.

  8. Retweeted
    Dec 13, 2019

    Tomorrow I'll be talking about JAX MD, a hardware-accelerated, end-to-end differentiable molecular dynamics library, at the ML4PS workshop at 9:20am (along with tons of amazing speakers). Paper: Code: Colab:

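    This is not the jax_md API, just a minimal sketch of what end-to-end differentiability buys in MD: forces fall out of jax.grad applied to a potential energy, instead of being derived by hand. The soft-sphere pair potential and particle setup are illustrative assumptions.

        import jax
        import jax.numpy as jnp

        def energy(positions, sigma=1.0):
            # pairwise soft-sphere repulsion: (1 - r/sigma)^2 for r < sigma
            disp = positions[:, None, :] - positions[None, :, :]
            n = positions.shape[0]
            r = jnp.sqrt(jnp.sum(disp ** 2, axis=-1) + jnp.eye(n))  # eye avoids sqrt(0)
            pair = jnp.where(r < sigma, (1.0 - r / sigma) ** 2, 0.0)
            return 0.5 * jnp.sum(pair)  # each pair counted twice, so halve

        force_fn = jax.grad(lambda x: -energy(x))  # F = -dE/dx, by autodiff

        pos = jax.random.uniform(jax.random.PRNGKey(0), (8, 2))  # 8 particles in 2D
        print(force_fn(pos))
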
  9. Retweeted
    Dec 11, 2019

    JAX things!!! Excited to talk about JAX MD on Saturday. Also, check out our Neural Tangents poster at the Bayesian deep learning workshop (also JAX)!

  10. Retweeted
    Nov 26, 2019

    Entertaining exposition aside, I think the best quote from this paper is "there are usually far more efficient ways to achieve something once we know it’s possible."

  11. Nov 22, 2019

    This is a great description of RandAugment. Thanks for taking the time to make the video.

  12. Nov 18, 2019

    The code is available online; consider trying it on your image classification or object detection task! With collaborators , Jon Shlens, and . Code:

  13. Nov 18, 2019

    However, we also find that the optimal distortion magnitude increases with training set size, which deserves more investigation.

  14. Nov 18, 2019

    Because of its interpretable hyperparameter (a single distortion magnitude), we can study the interaction of data augmentation with different aspects of deep learning. For example, the optimal distortion magnitude goes up with model size, which is to be expected.

  15. Nov 18, 2019

    RandAugment has a significantly smaller search space, which allows it to be optimized on the model and dataset of interest (instead of having to use a smaller proxy task). It works on CIFAR-10/100, SVHN, ImageNet, and COCO.

  16. Retweeted
    Nov 18, 2019

    *New paper* RandAugment: a new data augmentation method. Better & simpler than AutoAugment. The main idea is to select transformations at random and tune their magnitude. It achieves 85.0% top-1 on ImageNet. Paper: Code:

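    The whole method comes down to two hyperparameters: how many transformations to apply (N) and a shared magnitude (M). A minimal sketch with PIL; the op list below is a stand-in, not the paper's exact transformation set.

        import random
        from PIL import ImageEnhance, ImageOps

        def randaugment(img, n=2, magnitude=9, max_magnitude=30):
            level = magnitude / max_magnitude  # scale M to [0, 1]
            ops = [
                lambda im: im.rotate(30 * level),
                lambda im: ImageEnhance.Contrast(im).enhance(1 + level),
                lambda im: ImageEnhance.Color(im).enhance(1 + level),
                lambda im: ImageEnhance.Sharpness(im).enhance(1 + level),
                lambda im: ImageOps.solarize(im, int(256 * (1 - level))),
            ]
            for op in random.choices(ops, k=n):  # n ops, uniformly at random
                img = op(img)
            return img
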
  17. Retweeted
    Nov 18, 2019

    RandAugment was one of the secret ingredients behind Noisy Student, which I tweeted about last week. Code for RandAugment is now open-sourced.

  18. Retweeted
    Sep 18, 2019

    This morning, at 9:30am, researcher Daniel Park is discussing , a simple data augmentation method for automatic speech recognition. Stop by the Google booth to learn all about it and read more at ↓

  19. Retweeted
    Sep 1, 2019

    At a glance, this seems like a very nice review of a bunch of related ideas.

  20. Retweeted
    Jul 1, 2019

    Very saddened to report that Mitchell Feigenbaum passed away over the weekend. Famous for his work in chaos. Highly creative & original, with contributions to cartography, vision and finance not so well known. I was lucky to be one of his friends, and will miss this unique man.

