Sahil Garg

@sahilgarg3098

Research Intern | Summer Analyst '19 | Senior Year CSE Undergrad

Udupi, India
Joined October 2016

Tweets


  1. Retweeted
    Jan 29

    Anima Anandkumar's Caltech 2020 lectures on "Foundations of Machine Learning and Statistical Inference" are now available online.

  2. Retweeted
    Jan 26

    Teaching Deep Unsupervised Learning (2nd edition) this semester. You can follow along here: Instructor Team: , , , Wilson Yan, Alex Li. YouTube, PDF, and Google Slides for ease of re-use.

  3. Retweeted
    Jan 23

    Very happy to share a paper that student Arjun Seshadri and I have been working on for quite a while: "Fundamental Limits of Testing the Independence of Irrelevant Alternatives in Discrete Choice" Feedback sought! Thread! 1/n

  4. Retweeted
    Jan 23

    Very excited by our research on perceptual robotics. members and collaborators will present 6 research papers at this year’s I’m starting to feel like a legit robotics researcher ;)

  5. Retweeted
    Jan 23

    Really exciting list of papers for , at the intersection of machine learning and systems!

  6. Retweeted

    Distinguished Scientist leads Microsoft research efforts in India. He discusses the unique challenges and opportunities, addressing societal scale issues like healthcare, and rethinking what we know about underserved groups:

  7. Retweeted
    Jan 20

    I woke up thinking of a list of inconvenient truths. It's an exercise I like to do sometimes. This is what I came up with so far:

  8. Retweeted
    Jan 22

    Excited to share PCGrad, a super simple & effective method for multi-task learning & multi-task RL: project conflicting gradients. On Meta-World MT50, PCGrad can solve *2x* more tasks than prior methods. w/ Tianhe Yu, S Kumar, Gupta, ,
    (A minimal sketch of the gradient-projection idea appears after this list.)

  9. Jan 20

    Great analysis of compensation trends and the effect they have on career progression!

  10. Retweeted
    Jan 19

    One of life’s many truths.

  11. Retweeted
    Jan 19

    Question: How long does a loop last? . . . For a while.

  12. Retweeted
    Jan 18

    From to , many worry about the significant future economic impacts of AI. Economist predicts that instead, “the future of AI in the economy will resemble the Internet more than Skynet.”

  13. Retweeted
    Jan 15

    I've started to upload the videos for the Neural Nets for NLP class here: We'll be uploading the videos regularly throughout the rest of the semester, so please follow the playlist if you're interested.

  14. Retweeted

    Text-based games provide a platform to train RL agents that generate goal-driven language. Jericho framework by & provides benchmarks for scaling RL to combinatorially sized language action spaces:

  15. Retweeted
    Jan 17

    The Diversity-Innovation Paradox in Science: Why does greater diversity in research teams increase innovation but not directly reward minority scholars? New from , , Sebastian Munoz-Najar Galvez, Bryan He, , and Dan McFarland.

  16. Retweeted

    How does deep learning perform DEEP learning? Microsoft and CMU researchers establish a principle called "backward feature correction" and explain how very deep neural networks can actually perform DEEP hierarchical learning efficiently:

  17. Retweeted

    Researchers at Microsoft & developed an algorithmic approach to alleviate the imperfections of generative models via importance weighting. Learn how this approach can be used to boost any existing generative model:
    (A sketch of the importance-weighting idea appears after this list.)

  18. Retweeted
    Jan 14

    Happy to release NN4NLP-concepts! It's a typology of important concepts that you should know to implement SOTA NLP models using neural nets: We'll reference this in CMU CS11-747 this year, trying to maximize coverage. 1/3

  19. Retweeted
    Jan 14

    What will the new age of computer-assisted learning and decision-making look like? expert talks with on The Future of Everything.

  20. Retweeted
    Jan 14
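
A note on the PCGrad retweet (item 8): the tweet names only the core idea, projecting away conflicting gradient components before combining task gradients. The snippet below is a minimal, hypothetical sketch of that projection step for two tasks; the function name and two-task setup are illustrative assumptions, not the authors' released implementation.

    import numpy as np

    def pcgrad_two_tasks(g1, g2):
        # Minimal sketch of "project conflicting gradients" for two tasks:
        # if the task gradients conflict (negative dot product), remove from
        # each the component along the other, then sum the results.
        g1 = np.asarray(g1, dtype=float)
        g2 = np.asarray(g2, dtype=float)
        g1p, g2p = g1, g2
        if np.dot(g1, g2) < 0:
            g1p = g1 - (np.dot(g1, g2) / np.dot(g2, g2)) * g2
            g2p = g2 - (np.dot(g2, g1) / np.dot(g1, g1)) * g1
        return g1p + g2p  # combined update direction for the shared parameters

    # Example: two conflicting task gradients
    g_a = np.array([1.0, 0.0])
    g_b = np.array([-0.5, 1.0])
    print(pcgrad_two_tasks(g_a, g_b))  # conflict-free combined direction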
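
A note on the importance-weighting retweet (item 17): the tweet describes the approach only at a high level. The sketch below shows the generic classifier-based importance-weighting recipe that such phrasing usually refers to, assuming a probabilistic real-vs-generated classifier is available; the function names are illustrative assumptions, not the paper's actual API.

    import numpy as np

    def importance_weights(p_real, eps=1e-6):
        # A classifier trained to separate real data from model samples gives
        # p(real | x); the odds p / (1 - p) estimate the density ratio
        # p_data(x) / p_model(x), which serves as an importance weight.
        p = np.clip(np.asarray(p_real, dtype=float), eps, 1.0 - eps)
        return p / (1.0 - p)

    def reweighted_expectation(f_values, p_real):
        # Self-normalized importance sampling: weight model samples by the
        # estimated density ratio so the expectation better reflects the data
        # distribution rather than the imperfect model distribution.
        w = importance_weights(p_real)
        f = np.asarray(f_values, dtype=float)
        return float(np.sum(w * f) / np.sum(w))

    # Example: samples the classifier flags as likely fake get small weights
    print(reweighted_expectation([1.0, 2.0, 3.0], [0.9, 0.5, 0.1]))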
