Aravind

@Aravind7694

Interested in building simple ML stuff that works efficiently at scale, PhD , Interned at and .

Joined July 2016
Born June 7

Tweets

  1. Jan 31

    I read this advice 1.5 years ago and have stuck with it since then. It's not mutually exclusive, of course. But having clear goals / OKRs (even for new ideas) allows for a sense of direction and purpose, and also avoids writing papers that don't matter in the long run.

  2. Jan 29

    Robotic Learning done right with full-stack execution :-) ...

  3. Retweeted
    Jan 26

    Teaching Deep Unsupervised Learning (2nd edition) at this semester. You can follow along here: . Instructor team: , , , Wilson Yan, Alex Li. YouTube, PDF, and Google Slides for ease of re-use.

  4. Retweeted
    Jan 19

    Crew Dragon separating from Falcon 9 during today’s test, which verified the spacecraft’s ability to carry astronauts to safety in the unlikely event of an emergency on ascent

  5. Retweeted
  6. Retweeted

    Just ordered a couple of 1TB thumb drives for $30 each. I still find myself awestruck by tech progress -- I desperately wanted a 10MB hard drive for my IIGS as a teen, but didn't have the $399. One million times cheaper per byte, one thousand times faster, and 100 times smaller now. (A quick arithmetic check of the per-byte claim appears after this list.)

  7. Jan 7

    "When running on the test set, examples were randomly perturbed using the same augmentations as during training; the final predictions were the average of 500 runs." - Test-time augmentations on steroids. [s/500/1500; ensemble of three models -1 obj detector with ~15k ROI labels

  8. Retweeted

    In case you have missed it, here is my end-of-year AI/ML recap with over 50 pointers to publications:

  9. Retweeted
    Dec 20, 2019

    I often hear people say that X is broken. Congress is broken. Academia is broken. Physics is broken. Deep learning is broken. People like to stand in crowds and point fingers at the thing that is broken. I prefer the quiet folks who roll up their sleeves and get to work, fixing.

  10. Dec 19, 2019

    Cool Medium blogpost explaining key aspects and results of the self-supervised pre-training pipeline CPC-v2:

  11. Retweeted
    Dec 16, 2019

    Lost amid NeurIPS: JAX now has experimental Cloud TPU support! (A minimal device-check sketch appears after this list.)

  12. Retweeted
    Dec 9, 2019

    There are now 3 papers that successfully use self-supervised learning for visual feature learning: MoCo: , PIRL: , and this, below. All three use some form of Siamese net. (A generic contrastive-loss sketch appears after this list.)

  13. Retweeted
    Dec 9, 2019

    Exciting updated results for self-supervised representation learning on ImageNet:
    - 71.5% top-1 with a *linear* classifier
    - 77.9% top-5 with only *1%* of the labels
    - 76.6 mAP when transferred to PASCAL VOC-07 (better than *fully-supervised's* 74.7 mAP)
    (A linear-probe evaluation sketch appears after this list.)

  14. Retweeted
    Dec 8, 2019

    Some exciting *new* results in self-supervised learning on ImageNet: 71.5% top-1 with a linear classifier, 5x data-efficiency from pre-training (76% top-1 with 80% fewer samples per class on ImageNet), 76.6 mAP on PASCAL VOC-07 (> supervised's 74.7)

  15. Retweeted
    Dec 9, 2019

    Beating the previous state of the art in self-supervised learning for ImageNet by almost 3% absolute with fewer parameters (71.5% vs 68.6% top-1). Extensive results for data-efficient learning on both ImageNet and Pascal VOC in the updated

  16. Retweeted
    Dec 9, 2019

    Unsupervised pre-training now outperforms supervised learning on ImageNet for any data regime (see figure) and also for transfer learning to Pascal VOC object detection

  17. Dec 8, 2019
  18. Dec 8, 2019

    Some exciting *new* results in self-supervised learning on ImageNet: 71.5% top-1 with a linear classifier, 5x data-efficiency from pre-training (76% top-1 with 80% fewer samples per class on ImageNet), 76.6 mAP on PASCAL VOC-07 (> supervised's 74.7)

  19. Retweeted
    Nov 11, 2019

    Any friends in mgmt/exec/leadership roles you wish knew more about AI and would benefit from the push of a formal class (which they can take from home :))? I just finished my recordings for 's Artificial Intelligence: Business Strategies and Applications

  20. Oct 3, 2019

    This is a cool paper dispelling the myth that we need model-based methods for data efficiency. Turns out Rainbow with the right hyper-parameters is as good.

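A quick back-of-the-envelope check of the "one million times cheaper per byte" arithmetic in tweet 6, using only the prices quoted there:

    # Dollars per byte, from the prices quoted in the tweet.
    old_per_byte = 399 / (10 * 10**6)   # $399 for a 10 MB drive
    new_per_byte = 30 / 10**12          # $30 for a 1 TB drive
    print(old_per_byte / new_per_byte)  # ~1.33e6: roughly a million times cheaper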
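A minimal sketch of the test-time augmentation described in tweet 7: average predictions over many randomly augmented copies of each test example. The model.predict and augment names are illustrative placeholders, not the quoted paper's code:

    import numpy as np

    def predict_with_tta(model, example, augment, n_runs=500):
        # Run the model on n_runs randomly augmented copies of one test
        # example and average the predictions (the quoted paper's
        # corrected run count was 1500).
        preds = [model.predict(augment(example)) for _ in range(n_runs)]
        return np.mean(preds, axis=0)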
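For tweet 11, a minimal sketch of checking that JAX sees its accelerator devices (e.g. TPU cores on a Cloud TPU host); only standard JAX calls are used:

    import jax
    import jax.numpy as jnp

    # List the devices JAX can see; on a Cloud TPU host this shows TPU cores.
    print(jax.devices())

    # A jit-compiled computation runs on the default accelerator device.
    double = jax.jit(lambda x: x * 2.0)
    print(double(jnp.arange(8.0)))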
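For tweet 12, a generic contrastive (InfoNCE-style) loss over paired embeddings from a Siamese net. The three papers differ in detail, so this sketches the shared idea rather than any one paper's exact loss:

    import numpy as np

    def info_nce_loss(z1, z2, temperature=0.1):
        # z1[i] and z2[i] are embeddings of two augmented views of image i.
        z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize
        z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
        logits = z1 @ z2.T / temperature  # pairwise cosine similarities
        # Positive pairs sit on the diagonal; all other pairs are negatives,
        # so the loss is softmax cross-entropy with diagonal labels.
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))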
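For tweets 13 and 18, a sketch of the "linear classifier" evaluation protocol: freeze the self-supervised encoder, extract features once, then fit only a linear model on top. The precomputed-feature inputs and the scikit-learn classifier are assumptions for illustration, not the papers' exact setup:

    from sklearn.linear_model import LogisticRegression

    def linear_probe_accuracy(train_feats, train_labels, test_feats, test_labels):
        # Fit a linear classifier on frozen encoder features; its test
        # accuracy is the "linear evaluation" (linear probe) score.
        clf = LogisticRegression(max_iter=1000)
        clf.fit(train_feats, train_labels)
        return clf.score(test_feats, test_labels)  # top-1 accuracy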
