Nikhil Shah

@iamshnik

I want to work toward the development of human-level intelligent systems. I am interested in theoretical machine learning and applied mathematics.

Joined June 2019
Born July 31, 2001

Tweets


  1. Retweeted

    The field is already self-correcting. Good departments/labs are clearing their eyes, caring less about paper count, seeing through the noise. Don't worry so much about the ICML deadline. Slow down, relax, try to do work you're proud of, submit when it's ready.

  2. Retweeted
    Jan 26

    Pupper does a heckin angery!

  3. Jan 22

    "I had blues, because I had no shoes, until upon the street, I met a man with no feet." Happiness is merely an exercise in appreciation for the things that we have. (Found on a Quora answer by Sean Kernan)

  4. Retweeted
    Jan 17
  5. Retweeted
    Jan 9

    The FEYNMAN technique of learning:
    STEP 1 - Pick and study a topic
    STEP 2 - Explain the topic to someone unfamiliar with it, as you would to a child
    STEP 3 - Identify any gaps in your understanding
    STEP 4 - Review and simplify!

  6. Jan 8

    Today, I got a chance to teach my classmates the transformer architecture and the attention mechanisms described in the paper, and I think I did it pretty well. The feeling of satisfaction was amazing. Teaching is such a joy!!
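    The attention mechanism from that paper ("Attention Is All You Need") fits in a few lines. Below is a minimal NumPy sketch of scaled dot-product attention; the shapes and random toy data are illustrative assumptions, not from the tweet:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# toy example: 3 tokens, head dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query token
```

    Each output row is a convex combination of the value vectors, with weights given by how well the query matches each key.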

  7. Retweeted
    Jan 4

    I highly recommend Primer: , which has a wealth of beautiful examples. Plus, the recent survey by Paul and Elias, geared to economists:

  8. Jan 5

    , check this out

  9. Jan 4

    A very insightful thread on causal inference.

  10. Retweeted
    Jan 1

    We present our new year special: “oLMpics - On what Language Model pre-training captures”, , Exploring what symbolic reasoning skills are learned from an LM objective. We introduce 8 oLMpic games and controls for disentangling pre-training from fine-tuning.

  11. Retweeted

    Concise advice from for those planning research in AI/ML (or in general). 💭 However, it'll benefit many if top labs/figures can (casually) give a shout-out to emerging groups with less PR/brand exposure.

  12. Dec 27, 2019
  13. Retweeted
    Dec 26, 2019

    Understanding is more important than memorization! Schools should teach the students how to understand, think, doubt, and question. They should be made open to imagination and creativity. 🧠

  14. Retweeted
    Dec 25, 2019

    most whales don’t celebrate christmas but they do understand the importance of spending time with the ones you love

  15. Retweeted
    Dec 22, 2019

    If you are on the academic market or planning for a future academic career, consider casting a wide net, & look at places that may not be your obvious “top N.” You may be pleasantly surprised. My thoughts on how rank doesn’t matter as much as people think.

  16. Retweeted
    Dec 17, 2019

    MetaInit: Initializing learning by learning to initialize. They propose a strategy to automatically identify good initial parameters, and show that deep architectures *without* batch norm or residual connections can be trained to near-SOTA results. 🔥

  17. Retweeted
    Dec 14, 2019

    Trying to catch up on reading the literature in my field

  18. Dec 13, 2019
    This Tweet is unavailable.
  19. Retweeted
    Dec 10, 2019

    Mini thread: If you haven't already read 's beautiful & insightful paper on intelligence & AI, you should. An elegant distillation of where we are now, & an intriguing proposal for how to make progress.

  20. Retweeted
    Dec 6, 2019

    Why do deep ensembles trained with just random initialization work surprisingly well in practice? In our recent paper with & Huiyi Hu, we investigate this by using insights from recent work on the loss landscape of neural nets. More below:
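    The recipe the tweet refers to is simple to sketch: train several copies of the same model from different random initializations and average their predicted probabilities at test time. Below is a minimal illustration with tiny softmax classifiers on toy 2D data; the model, data, and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def train_member(X, y, seed, steps=200, lr=0.5):
    """Train one ensemble member; only the random init differs per member."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], 2))
    for _ in range(steps):
        p = softmax(X @ W)
        grad = X.T @ (p - np.eye(2)[y]) / len(y)  # cross-entropy gradient
        W -= lr * grad
    return W

# toy data: two Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# deep ensemble: average the members' predicted probabilities
members = [train_member(X, y, seed=s) for s in range(5)]
ensemble_probs = np.mean([softmax(X @ W) for W in members], axis=0)
acc = (ensemble_probs.argmax(axis=1) == y).mean()
print(f"ensemble accuracy: {acc:.2f}")
```

    The loss-landscape view mentioned in the tweet is that different random inits land in different modes, so the members make diverse errors and averaging them helps.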

