Vinayak Tantia

@vinayak_tantia

Research engineer at Facebook AI Research. Bachelor's in computer science from IIT Kanpur. Previously interned at MILA and Google Maps.

Joined September 2015.

Tweets


  1. Retweeted

    The field is already self-correcting. Good departments/labs are clearing their eyes, caring less about paper count, seeing through the noise. Don't worry so much about the ICML deadline. Slow down, relax, try to do work you're proud of, submit when it's ready.

  2. Retweeted
  3. Retweeted

    My Uber driver just spent the entire 90+ min trip listening to English language classes on YouTube and practicing out loud (and then with me). He said he loves airport drops because he gets time to study. Mobile internet is an amazing thing. It's a good time to be a human.

  4. Retweeted
    Nov 9, 2019

    0.5% of Dutch cyclists don helmets. Yet theirs are among the safest streets on earth. Why? They realize it’s more effective to slow cars, build protected infrastructure, and create a culture of everyday cycling. Not force the most vulnerable to armour up.

  5. Retweeted
    Nov 5, 2019

    ICLR papers with perfect scores (all 8s, total 11 papers): 1. "FreeLB: Enhanced Adversarial Training for Language Understanding" 2. "BackPACK: Packing more into Backprop"

  6. Oct 27, 2019

    I see a huge lack of this in current deep learning research. People generally try out different hyperparameters and do not try to understand why a method works. This practice has been extremely beneficial for me and I am sure it will be helpful to others too.

  7. Retweeted
    Oct 25, 2019

    Great to see this collaboration between Google researchers & engineers launch, with major improvements to search quality! The work brings together many things we've been working on over the last few years: Transformers, BERT, TPU pods, ...

  8. Retweeted
    Oct 25, 2019

    Teaching an intro to ML course this semester? Consider having your students participate in the NeurIPS reproducibility challenge. Awesome as a final project for students to work with real SOTA methods (which I’m using in my course)!!

  9. Retweeted
    Jul 21, 2019

    Just saw that on a T-shirt competition 🤣🤣

  10. Retweeted
    Jun 16, 2019
    Replying to

    One rule to live by: Don’t be intimidated. Everyone has the same tools. Everyone sweeps things under the rug. Just because someone famous did it doesn’t mean that it’s good work. Aim high but don’t pass up on the low hanging fruit.

  11. Retweeted
    May 29, 2019

    MNIST reborn, restored and expanded. Now with an extra 50,000 training samples. If you used the original MNIST test set more than a few times, chances are your models overfit the test set. Time to test them on those extra samples.

  12. Retweeted
    May 28, 2019

    Whoa! It turns out that famous examples of NLP systems succeeding and failing were very misleading. “Man is to king as woman is to queen” only works if the model is hardcoded not to be able to say “king” for the last word.

  13. Retweeted
    May 9, 2019

    Turns out there’s more ways to use Autopilot than we imagined

  14. Retweeted

    I recently drove a for the first time (yes, late to the party), and it is truly an amazing car. So many details are *years* ahead of everyone else. Genuinely inspiring to see how much improvement can be conjured by a small group of dedicated outsiders.

  15. Retweeted

    "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet" cool/fun paper. A "bag of words" of nets on tiny 17x17 patches suffices to reach AlexNet-level performance on ImageNet. A lot of the information is very local.

  16. Retweeted
    Jan 21, 2019

    This blog post is by my good friend Olexa Bilaniu! :-)

  17. Retweeted
    Replying to

    I wish I’d thought of that line when people were skeptical of the ability of teenagers to operate in the financial sector.

  18. Retweeted

    The short-sighted approach to mastering a topic is to dedicate all of your time and energy to study in depth what has been written about it so far. The long-sighted approach is to study a range of interconnected fields, and form your own mental models through relevant analogies.

  19. Retweeted

    "Failing" is too often portrayed as something negative. Failures, not successes, have been a greater drive for me both personally and professionally (and there have been plenty!). If you never fail, you are doing it wrong : )

  20. Retweeted
    Nov 8, 2018

    Key Papers in Deep Reinforcement Learning curated by

