Shivam Saboo

@shivamsaboo17

Deep Learning Intern @ Intel Corporation | Ex Computer Vision Intern @ Technicolor R&I, France

Joined March 2018

Tweets


  1. Retweeted
    Feb 4

    "How to do machine learning efficiently". There's so much to love about this wonderful article.

  2. Retweeted
    Jan 31

    An Opinionated Guide to ML Research: “To make breakthroughs with idea-driven research, you need to develop an exceptionally deep understanding of your subject, and a perspective that diverges from the rest of the community—some can do it, but it’s hard.”

  3. Retweeted
    Jan 29

    Machine Learning Summer School 2020 is in Tuebingen, Germany! Please apply. Deadline: 11 Feb 2020.

  4. Retweeted
    Jan 26

    Teaching Deep Unsupervised Learning (2nd edition) this semester. You can follow along here. Instructor team includes Wilson Yan and Alex Li, among others. YouTube recordings, PDFs, and Google Slides are provided for ease of re-use.

  5. Retweeted
  6. Retweeted
    Jan 27

    Kornia v0.2.0 is out! We have introduced a new data augmentation module with strong GPU support, extended the set of color conversion algorithms, added GPU CI tests with v1.4.0, and much more. Happy coding!

  7. Retweeted
    Jan 27

    TRADI: Tracking deep neural network weight distributions -- work with G. Franchi. We're proposing a cheap method for getting ensembles of networks from a single network training. 1/
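
    The idea of getting an ensemble from a single training run can be sketched loosely as follows (an illustration only -- the published TRADI method uses a Kalman-filter-style update on the weight distribution; the class and method names here are mine): track a running mean and variance of each weight across training iterates, then sample ensemble members from the resulting per-weight Gaussians.

```python
import numpy as np

class WeightTracker:
    """Track a per-weight Gaussian across training iterates.

    Simplified sketch in the spirit of TRADI-like methods: a running
    mean and variance (Welford's algorithm), not the paper's actual
    Kalman-filter update.
    """

    def __init__(self, num_weights):
        self.n = 0
        self.mean = np.zeros(num_weights)
        self.m2 = np.zeros(num_weights)  # sum of squared deviations

    def update(self, w):
        # Welford online update with the new weight vector w.
        self.n += 1
        delta = w - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (w - self.mean)

    def sample_ensemble(self, k, rng):
        # Draw k "ensemble members" from independent per-weight Gaussians.
        var = self.m2 / max(self.n - 1, 1)
        return [rng.normal(self.mean, np.sqrt(var)) for _ in range(k)]

# Toy usage: three checkpoints of a 2-weight "network".
tracker = WeightTracker(2)
for w in [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([3.0, 0.0])]:
    tracker.update(w)
ensemble = tracker.sample_ensemble(5, np.random.default_rng(0))
```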

  8. Retweeted
    Jan 21

    Now it works. Let's spatially explore 2D images! Thanks for the tips! And special kudos to the author of the 3D Ken Burns model -- it is his great work! Here is what happens if you exaggerate this model. And it's amazing.

  9. Retweeted
    Jan 22

    Excited to share PCGrad, a super simple & effective method for multi-task learning & multi-task RL: project conflicting gradients. On Meta-World MT50, PCGrad can solve *2x* more tasks than prior methods. Work with Tianhe Yu, S. Kumar, Gupta, and others.
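
    The projection rule the tweet alludes to can be sketched in a few lines of NumPy (a loose illustration of the published idea, not the authors' implementation; function names are mine): when two task gradients conflict (negative dot product), remove from each the component along the other before summing.

```python
import numpy as np

def pcgrad(grads, rng=None):
    """Project Conflicting Gradients, sketched from the paper's idea.

    For each task gradient g_i and every other task gradient g_j (in
    random order), if g_i and g_j conflict (negative dot product),
    subtract from g_i its component along g_j. The de-conflicted
    gradients are then summed into a single update direction.
    """
    rng = rng or np.random.default_rng(0)
    grads = [g.astype(float) for g in grads]
    n = len(grads)
    projected = []
    for i in range(n):
        g = grads[i].copy()
        for j in rng.permutation(n):
            if j == i:
                continue
            g_j = grads[j]
            dot = g @ g_j
            if dot < 0:  # conflicting directions: project g off g_j
                g -= dot / (g_j @ g_j) * g_j
    # Sum the de-conflicted per-task gradients.
        projected.append(g)
    return np.sum(projected, axis=0)

# Two conflicting task gradients: PCGrad keeps the shared +y direction
# while removing the opposing x-components.
update = pcgrad([np.array([1.0, 0.5]), np.array([-1.0, 0.5])])
```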

  10. Retweeted
    Jan 15

    Visualizing the Impact of Feature Attribution Baselines -- A new Distill article by Pascal Sturmfels, Scott Lundberg, and Su-In Lee.

  11. Retweeted
    Jan 9

    We are organizing a workshop on Causal Learning for Decision Making along with Jovana Mitrovic, Stefan, and others. Consider submitting your work!

  12. Retweeted
    Jan 14

    I often meet research scientists interested in open-sourcing their code/research and asking for advice. Here is a thread for you. First: why should you open-source models along with your paper? Because science is a virtuous circle of knowledge sharing, not a zero-sum competition.

  13. Retweeted
    Jan 3

    Seeking volunteers for the mentorship program at our workshop! Those with a climate change and/or machine learning background are encouraged to apply to mentor submissions from Jan 15-Feb 4. Apply to be a mentor (or mentee) at:

  14. Retweeted
    Jan 2

    Maximizing acquisition functions in Bayesian optimization is hard. However, they have some nice properties (e.g., reparametrizable, submodular), which can be useful (also in the parallel setting), as discussed by James Wilson.
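
    The reparametrization property mentioned here can be illustrated with a Monte Carlo expected-improvement estimator (a sketch assuming a Gaussian posterior with known mean and standard deviation at a candidate point; the function and variable names are mine, not from the cited work).

```python
import numpy as np

def mc_expected_improvement(mu, sigma, best_f, base_samples):
    """Monte Carlo expected improvement (minimization) via reparametrization.

    Instead of sampling y ~ N(mu, sigma^2) directly, write
    y = mu + sigma * z with fixed base samples z ~ N(0, 1). The
    acquisition value then becomes a deterministic, differentiable
    function of mu and sigma, which enables gradient-based maximization
    (and extends naturally to the parallel, multi-point setting).
    """
    y = mu + sigma * base_samples            # reparametrized posterior samples
    improvement = np.maximum(best_f - y, 0.0)
    return improvement.mean()

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)              # fixed base samples, reused across candidates
# A candidate with lower posterior mean should score higher EI.
ei_good = mc_expected_improvement(mu=0.0, sigma=1.0, best_f=0.5, base_samples=z)
ei_bad = mc_expected_improvement(mu=2.0, sigma=1.0, best_f=0.5, base_samples=z)
```

Fixing the base samples is the key trick: the same `z` is reused for every candidate, so the acquisition surface is smooth rather than noisy across evaluations.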

  15. Retweeted

    ***What were the most interesting, scientifically insightful, and coherent papers that you read in 2019?*** Rules: 1) No restriction on methodology or aim (theoretical, experimental, & applications papers welcome). 2) Do not post your own papers.

  16. Retweeted

    Perhaps 2019 was the year neural architectures died. Architecture papers still form a plurality of submissions (because it's easy), but the most *interesting* papers are architecture-agnostic. 2012-2018 = "how do we learn a function approximator for a given p(x, y)?" 2019-???? = "we have good function approximators, now what?"

  17. Retweeted
    Dec 31, 2019

    What companies look for when hiring:
    1960: People who know many things
    1990: People who are good at learning new things
    2020: People who are good at learning how to learn new things

  18. Retweeted
    Dec 30, 2019

    This is an important story to read for anyone in DL academia.

  19. Retweeted
    Dec 28, 2019

    It would help the discussion if everyone first: 1. reads a causal inference book, 2. watches a deep learning course emphasising modularity, compositionality, and automatic differentiation, and 3. implements the CI book examples.
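
    Point 3 can be illustrated with a minimal example (my own toy structural causal model, not one from any particular book): with a confounder, regressing Y on X in observational data is biased, while simulating the do-operator recovers the true causal effect.

```python
import numpy as np

def simulate(n, rng, do_x=None):
    """Toy structural causal model: Z -> X, Z -> Y, X -> Y.

    Passing do_x implements the do-operator: X is set by intervention,
    which cuts the Z -> X edge. The true causal effect of X on Y is 2.
    """
    z = rng.standard_normal(n)
    if do_x is None:
        x = z + 0.1 * rng.standard_normal(n)   # X caused by confounder Z
    else:
        x = np.full(n, float(do_x))            # interventional X
    y = 2.0 * x + 3.0 * z + 0.1 * rng.standard_normal(n)
    return x, y

rng = np.random.default_rng(0)

# Observational regression slope of Y on X is confounded by Z
# (it comes out near 5, not the true effect of 2)...
x, y = simulate(100_000, rng)
slope_obs = np.cov(x, y)[0, 1] / np.var(x)

# ...while the interventional contrast E[Y | do(X=1)] - E[Y | do(X=0)]
# recovers the true causal effect of 2.
_, y1 = simulate(100_000, rng, do_x=1.0)
_, y0 = simulate(100_000, rng, do_x=0.0)
ate = y1.mean() - y0.mean()
```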

  20. Retweeted
    Dec 27, 2019

    The 1997 LSTM paper by Hochreiter & Schmidhuber has become the most cited deep learning research paper of the 20th century.

