Maithra Raghu

@maithra_raghu

PhD Candidate. Research Scientist, Google Brain. Working on Deep Learning and Applications to Healthcare. Forbes 30 Under 30 in Science. Organizer.

Joined July 2017

Tweets


  1. Pinned Tweet
    13 Nov 2018

    Excited to be one of the Forbes 30 under 30 in Science! Full list:

  2. 16 Dec 2019

    Had a fantastic week learning about exciting research directions and meeting old and new friends at . Thanks to the organizers, volunteers and participants for a wonderful conference! My talk at is at (~44 mins), and posters below!

  3. Retweeted
    13 Dec 2019

    Three awesome women kicking off the workshop: , , and Xinyu Li!

  4. 6 Dec 2019

    How does transfer learning for medical imaging affect performance, representations and convergence? Check out the blogpost below and our paper for some of the surprising conclusions, new approaches and open questions!

  5. Retweeted
    12 Nov 2019

    Want to improve accuracy and robustness of your model? Use unlabeled data! Our new work uses self-training on unlabeled data to achieve 87.4% top-1 on ImageNet, 1% better than SOTA. Huge gains are seen on harder benchmarks (ImageNet-A, C and P). Link:

  6. 2 Nov 2019

    How do representations evolve as they go through the transformer? How does the Masked Language Model objective affect these compared to Language Models? How much do different tokens change and influence other tokens? Answers in the paper by : !

  7. Retweeted
    17 Oct 2019

    I'm so excited to share our hard work over the last 6 months 🎉! It's been quite the journey: I joined after completing my PhD at . In less than a year we were acquired by and 6 months later, we have a product that touches millions of people!

  8. 23 Sep 2019

    Rapid Learning or Feature Reuse? New paper: We analyze MAML (and meta-learning more broadly), finding that feature reuse is the critical component in the efficient learning of new tasks -- leading to some algorithmic simplifications!

  9. Retweeted

    Rapid Learning or Feature Reuse? Meta-learning algorithms on standard benchmarks have much more feature reuse than rapid learning! This also gives us a way to simplify MAML -- (Almost) No Inner Loop (A)NIL. With Aniruddh Raghu, Samy Bengio.

  10. Retweeted
    13 Sep 2019

    New EMNLP paper “Investigating Multilingual NMT Representation at Scale” w/ , , @caswell_isaac, . We study transfer in massively multilingual NMT from the perspective of representational similarity. Paper: 1/n

  11. 3 Sep 2019

    Our paper on Understanding Transfer Learning for Medical Imaging has been accepted to !! Preprint: As a positive datapoint: we had a good reviewing experience, with detailed feedback and mostly useful comments. Thanks to the Program Chairs!

  12. 12 Aug 2019

    Looking forward to speaking about Artificial and Human Intelligence in Healthcare at the conference ! Will discuss developing better AI systems and human expert interactions:

  13. 10 Jul 2019

    Thanks to the organizers Samy Bengio and for a very interesting program!

  14. 10 Jul 2019

    Looking forward to attending/speaking at the Frontiers of Deep Learning workshop at ! Exciting talks on generalization, robustness, model-based RL (w/ videos after!) I'll speak about our work on transfer learning:

  15. 15 Jun 2019

    Intriguing invited talk at from Chiyuan Zhang on the effect of resetting different layers: Are all layers created equal?

  16. 5 Jun 2019

    ML models can learn to find cases of high human expert disagreement, with a direct prediction method provably outperforming classifier reuse. We test this on synthetic tasks and a large scale medical application. With Katy Blumer, Rory Sayres, Jon Kleinberg.

  17. 5 Jun 2019

    Our paper on using Machine Learning (Direct Uncertainty Prediction) for predicting doctor disagreements and medical second opinions will be at next week! Blog: Paper:

  18. 23 May 2019
  19. Retweeted
    30 Apr 2019

    I had to record a lightning talk for my poster, so my brother improvised a soundtrack on the piano and now it sounds EXCITING.

  20. Retweeted
    29 Apr 2019

    If you are working on empirical phenomena in deep learning, consider submitting to our ICML workshop "Identifying and Understanding Deep Learning Phenomena" (). The deadline is May 5, but relevant work that was already published elsewhere is still welcome!

  21. 11 Apr 2019

    Huge thanks to many people for the feedback 😁

