Niru Maheswaranathan

@niru_m

Trying to separate signal from the noise; research engineer at Google Brain. Formerly , , . ’s +1. Opinions my own. ⚽️☕️👨🏾‍💻

Mountain View, CA
Joined March 2009

Tweets

  1. Retweeted

    Peering into a deep network trained on retina data:
    - Instantaneous RFs are context-dependent and state-based
    - The network subspaces used by white noise and natural scenes are different
    [Nice update from ]

  2. Retweeted
    Dec 10, 2019
  3. Nov 30, 2019
  4. Retweeted
    Nov 21, 2019

    New paper out on : “From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction” with fantastic collaborators , , , Stephen Baccus, .

  5. Retweeted

    Here you can see two neurons sensing one another and connecting in a petri dish. There are 86 billion neurons in the , and they use these webbed, hand-like structures (“growth cones”) to search for and connect to other neurons or body parts as we develop

    Show this thread
  6. Retweeted
    Nov 11, 2019

    We approximated the implicit function theorem to tune millions of hyperparameters. Now we can train data augmentation networks from scratch using gradients from the validation loss. With and
    [An IFT hypergradient sketch appears after the timeline below.]

    Show this thread
  7. Retweeted

    So grateful for 's colab on CKA. It seems obvious in retrospect but I hadn't considered the equivalence of calculating similarities based on examples and based on features. My experiments are so much faster now...🚀 [a sketch of this equivalence appears after the timeline below]

  8. Nov 6, 2019

    Loved listening to this interview with Andrew Saxe. Great summaries of a lot of beautiful work! (side note, the previous interviews are just as good--kudos to for running a great podcast!)

  9. Retweeted
    Nov 2, 2019

    No, this isn’t from et al's recent perspective on . This is David Robinson trying to make a similar point in 1992:

    Show this thread
  10. Nov 1, 2019

    submission deadline: 31 Oct, 11:59 **Pacific Time**. registration deadline: 31 Jan, 11:59 **Eastern Time**. ()

  11. Nov 1, 2019

    itermplot () is a matplotlib backend that displays plots directly in your terminal (iTerm2). Really awesome resource if you (like me) enjoy working directly from the IPython REPL! [a usage sketch appears after the timeline below]

  12. Retweeted
    Nov 1, 2019

    We received ~650 abstracts for , a number comparable to two years ago in Denver (700), and a big drop relative to Lisbon last year (1000).

  13. Retweeted
    Oct 28, 2019
    Show this thread
  14. Sep 9, 2019

    s/algorithms/developments. Matlab is not an algorithm 😅

    Show this thread
  15. Sep 9, 2019

    The greatest numerical algorithms of the 20th century, according to Nick Trefethen circa 2005 ()

    Show this thread
  16. Retweeted
    Jul 23, 2019

    "the geometry of the RNN representations can be .. sensitive to .. network architectures, yielding a cautionary tale for measures of similarity that rely representational geometry" YEEEEEEES replace RNN with any deep nets and still YES

  17. Retweeted

    Universality and individuality in neural dynamics across large populations of recurrent networks . With fantastic collaborators , , , .

    Show this thread
  18. Retweeted
    Jul 5, 2019

    So excited to be there in person to cheer on the in the World Cup finals! Inspired by all they do on the field, and even more so for their fight off the field for equal pay for female athletes. You go ladies! 🇺🇸

  19. Jun 27, 2019

    Finally, we can understand how the network processes individual words (tokens) by looking at projections of the embedding vectors onto the principal eigenvectors of the system. Overall, we think these tools will help us demystify and understand how recurrent networks work! (4/4) [a projection sketch appears after the timeline below]

    Show this thread
  20. Jun 27, 2019

    The network dynamics are organized around a roughly 1-D approximate line attractor, which we identify by studying the eigendecomposition of the recurrent Jacobian of the dynamics at approximate fixed points. (3/4) [a fixed-point analysis sketch appears after the timeline below]

    Show this thread
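
A note on item 6: the implicit function theorem (IFT) gives the gradient of the validation loss with respect to a hyperparameter through the implicitly defined optimum of the training loss. Below is a minimal sketch on a ridge-regression toy problem, not the paper's setup; every name here is illustrative, and the IFT result is checked against finite differences.

```python
# Minimal sketch of an implicit-function-theorem (IFT) hypergradient on a
# ridge-regression toy problem (NOT the paper's setup).
#   Train loss: L_t(w, lam) = 0.5*||X w - y||^2 + 0.5*lam*||w||^2
#   Val loss:   L_v(w)      = 0.5*||Xv w - yv||^2
# IFT: dL_v/dlam = -g_v^T H^{-1} (d2 L_t / dw dlam) at w*(lam),
# where H = X^T X + lam*I and d2 L_t / dw dlam = w*.
import numpy as np

rng = np.random.default_rng(0)
X, Xv = rng.normal(size=(50, 10)), rng.normal(size=(20, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=50)
yv = Xv @ w_true + 0.1 * rng.normal(size=20)

def solve_inner(lam):
    """w*(lam): exact minimizer of the regularized training loss."""
    H = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(H, X.T @ y), H

def val_loss(w):
    r = Xv @ w - yv
    return 0.5 * r @ r

def ift_hypergradient(lam):
    w_star, H = solve_inner(lam)
    g_v = Xv.T @ (Xv @ w_star - yv)          # dL_v/dw at w*
    # Mixed partial d2 L_t / dw dlam = w* for the ridge term.
    return -g_v @ np.linalg.solve(H, w_star)

lam, eps = 0.5, 1e-5
fd = (val_loss(solve_inner(lam + eps)[0]) -
      val_loss(solve_inner(lam - eps)[0])) / (2 * eps)
print(ift_hypergradient(lam), fd)            # the two should agree closely
```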
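
A note on item 7: for linear CKA, the similarity computed from n x n example Gram matrices equals the one computed from feature-space cross-products, since <X X^T, Y Y^T>_F = ||X^T Y||_F^2 for centered X and Y. A minimal sketch of the equivalence (variable names are mine, not from the CKA paper):

```python
# Linear CKA computed two equivalent ways (toy check, centered activations):
#   "examples" view: inner products of n x n Gram matrices K = X X^T
#   "features" view: Frobenius norms of p x p cross-products X^T Y
# The features view is much cheaper when p << n.
import numpy as np

rng = np.random.default_rng(0)
n, p1, p2 = 1000, 32, 48
X = rng.normal(size=(n, p1)); X -= X.mean(axis=0)   # center each feature
Y = rng.normal(size=(n, p2)); Y -= Y.mean(axis=0)

def cka_examples(X, Y):
    K, L = X @ X.T, Y @ Y.T                         # n x n Gram matrices
    return np.sum(K * L) / np.sqrt(np.sum(K * K) * np.sum(L * L))

def cka_features(X, Y):
    # <X X^T, Y Y^T>_F = ||X^T Y||_F^2, so no n x n matrix is needed.
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

print(cka_examples(X, Y), cka_features(X, Y))       # equal up to float error
```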
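
A note on item 11: itermplot is selected like any third-party matplotlib backend, via the MPLBACKEND environment variable. A minimal usage sketch, assuming `pip install itermplot` and an iTerm2 session:

```python
# Render a matplotlib figure inline in iTerm2 via the itermplot backend.
# Assumes the shell exported the backend before Python/IPython started:
#   export MPLBACKEND="module://itermplot"
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
plt.plot(x, np.sin(x))
plt.title("rendered inline in the terminal")
plt.show()  # with itermplot active, this draws into the terminal itself
```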
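
A note on item 20: the fixed-point analysis can be sketched with a vanilla tanh RNN standing in for the trained network (the actual work analyzed trained gated RNNs). Approximate fixed points h* are found by minimizing ||F(h) - h||^2, and the eigendecomposition of the recurrent Jacobian at h* exposes the slow modes; one eigenvalue with magnitude near 1, with the rest well below 1, is the signature of an approximate 1-D line attractor.

```python
# Fixed-point analysis of an RNN (vanilla tanh RNN as a simple stand-in).
# 1) Find approximate fixed points of h -> F(h) by minimizing ||F(h) - h||^2.
# 2) Eigendecompose the recurrent Jacobian J = dF/dh at each fixed point.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 64
W = rng.normal(size=(N, N)) / np.sqrt(N)   # recurrent weights (random stand-in)
b = np.zeros(N)

def F(h):
    return np.tanh(W @ h + b)

def q(h):
    """Fixed-point objective: exactly 0 at a true fixed point."""
    r = F(h) - h
    return 0.5 * r @ r

def jacobian(h):
    # dF/dh = diag(1 - tanh(W h + b)^2) @ W for a vanilla tanh RNN.
    return (1.0 - F(h) ** 2)[:, None] * W

h0 = 0.1 * rng.normal(size=N)              # random initial state
h_star = minimize(q, h0, method="L-BFGS-B").x

eigvals = np.linalg.eigvals(jacobian(h_star))
order = np.argsort(-np.abs(eigvals))
print("speed at h*:", q(h_star))
print("top |eigenvalues|:", np.abs(eigvals[order][:3]))
```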
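
A note on item 19: given the principal eigenvectors of the recurrent Jacobian at a fixed point, each token's effect on the hidden state can be scored by projecting its input perturbation onto the slow mode's left eigenvector. A minimal sketch with random stand-in weights; every shape and name below is illustrative:

```python
# Score tokens by projecting their input effect onto the principal
# (largest-|eigenvalue|) mode of the recurrent Jacobian at a fixed point.
import numpy as np

rng = np.random.default_rng(1)
N, V, D = 64, 10, 8                        # state size, vocab size, embed dim
J = rng.normal(size=(N, N)) / np.sqrt(N)   # recurrent Jacobian at h* (stand-in)
W_in = rng.normal(size=(N, D))             # input weights
E = rng.normal(size=(V, D))                # token embedding vectors

# Left eigenvectors of J are right eigenvectors of J^T; the one paired with
# the largest-|eigenvalue| mode measures how that mode reads perturbations.
eigvals, left_vecs = np.linalg.eig(J.T)
l1 = np.real(left_vecs[:, np.argmax(np.abs(eigvals))])

# A token v perturbs the state by roughly W_in @ E[v]; its projection onto
# l1 measures how far it pushes the state along the slow direction.
token_scores = E @ W_in.T @ l1             # shape (V,)
print(token_scores)
```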
