Eli Pollock

@elibpollock

MIT PhD student interested in the intersection of neuroscience, computation, and cognition.

Joined: June 2017

Tweets


  1. 20 Dec 2019

    10/10 Check out the paper for more! Special thanks to for being a supportive mentor throughout this project

  2. 20 Dec 2019

    9/10 Finally, we asked some questions about what happens when you bend the ring into higher dimensions, showing that the geometry of the ring manifold has an effect on how stable it is

  3. 20 Dec 2019

    8/10 We extended the approach to allow for inputs that could control the speed of movement around the ring. This provides a simple explanation for how inputs can flexibly change network dynamics

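
The tweet above can be illustrated with a toy one-dimensional sketch (my choice of drift term and parameter names, not the paper's): if activity on the ring is described by an angle theta pinned by a drift term, a constant input u larger than the pinning strength k makes activity circulate, while u = 0 leaves it parked at an attractor.

```python
import numpy as np

# Toy illustration (illustrative parameterization, not the paper's model):
# on the ring,  dtheta/dt = u - k * sin(m * theta),
# so the input u directly controls movement around the ring.
def final_angle(u, k=1.0, m=4, t_total=20.0, dt=0.001):
    """Euler-integrate the ring angle from theta = 0; return the unwrapped angle."""
    theta = 0.0
    for _ in range(int(t_total / dt)):
        theta += dt * (u - k * np.sin(m * theta))
    return theta

parked = final_angle(u=0.0)   # stays at the attractor at theta = 0
moving = final_angle(u=2.0)   # input overcomes the pinning and drives rotation
```

With u = 0 the angle never leaves the fixed point; with u = 2 (> k) the velocity is always positive and the activity sweeps around the ring at a rate set by the input.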
  4. 20 Dec 2019

    7/10 We tried this out as a way of creating ring attractors with a “semi-discrete” working memory: network dynamics quickly move onto a low-d ring, and slowly move towards a few states that are specified by a known drift function

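
The "semi-discrete" picture in the tweet above can be sketched in one dimension (my illustrative drift function and constants k, m, not the paper's): a slow drift along the ring angle pulls activity toward a few discrete stable states.

```python
import numpy as np

# Toy sketch: once activity is on the ring, a slow drift along the angle,
#     dtheta/dt = -k * sin(m * theta),
# has m stable fixed points at theta = 2*pi*j/m, giving a "semi-discrete"
# memory: continuous on the ring, but slowly settling onto m states.
k, m, dt, steps = 1.0, 4, 0.01, 5000

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, size=100)   # many initial conditions

for _ in range(steps):
    theta = (theta - dt * k * np.sin(m * theta)) % (2 * np.pi)

# Every trajectory should settle near one of the m attractor angles
attractors = 2 * np.pi * np.arange(m) / m
dist_to_nearest = np.min(np.abs((theta[:, None] - attractors[None, :] + np.pi)
                                % (2 * np.pi) - np.pi), axis=1)
max_dist = dist_to_nearest.max()
```

After enough slow drift, all 100 trajectories sit near one of the m = 4 specified states, which is the sense in which a known drift function discretizes the ring.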
  5. 20 Dec 2019

    6/10 We had the idea of flipping linearization around: what if we specify the local behavior at a bunch of points, and solve for the connectivity of networks that have the desired global dynamics?

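
A minimal sketch of the "reverse" direction described in the tweet above (a simplified toy, not the paper's full algorithm): instead of linearizing a trained network, specify the local behavior you want at chosen points (here, that they are fixed points) and solve a least-squares problem for the connectivity.

```python
import numpy as np

# For the rate network  dx/dt = -x + W @ tanh(x),  a state x_k is a fixed
# point iff  W @ tanh(x_k) = x_k.  Stacking several desired fixed points
# gives a linear system in the unknown connectivity W.
rng = np.random.default_rng(2)
n, n_points = 20, 10

X = rng.standard_normal((n, n_points))   # desired fixed points (my toy choice)
Phi = np.tanh(X)

# Solve W @ Phi = X for W (least squares; exactly solvable when n_points <= n)
W = np.linalg.lstsq(Phi.T, X.T, rcond=None)[0].T

# Check: the network velocity at each specified point is ~zero
velocity = -X + W @ Phi
max_speed = np.abs(velocity).max()
```

The same setup extends to specifying a desired local linear flow around each point rather than just a zero velocity; the point of the sketch is only that "specify local dynamics, solve for W" reduces to linear algebra.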
  6. 20 Dec 2019

    5/10 But what if we want to do the reverse? Maybe we’re interested in studying the connectivity of networks that solve a task in a known way, or studying the kinds of representations that are optimal for a network.

  7. 20 Dec 2019

    4/10 One method involves studying recurrent neural network (RNN) models as a dynamical system, and uses locally linear approximations of dynamics to describe what’s going on:

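
The locally linear analysis mentioned in the tweet above can be sketched for a standard rate model (my toy setup, not any specific paper's): linearize the dynamics at a fixed point and read stability off the Jacobian's eigenvalues.

```python
import numpy as np

# For the continuous-time RNN  dx/dt = -x + W @ tanh(x),  x = 0 is always
# a fixed point; the Jacobian there is  J = -I + W,  and the sign of the
# largest real part of its eigenvalues says whether nearby activity decays.
rng = np.random.default_rng(1)
n = 100
g = 0.5                                   # weak coupling -> stable origin
W = g * rng.standard_normal((n, n)) / np.sqrt(n)

def jacobian(x, W):
    """Jacobian of dx/dt = -x + W @ tanh(x) at state x.
    Entry (i, j) is  -delta_ij + W_ij * (1 - tanh(x_j)^2)."""
    return -np.eye(len(x)) + W * (1 - np.tanh(x)**2)   # factor broadcasts over columns

J = jacobian(np.zeros(n), W)
max_real_part = np.linalg.eigvals(J).real.max()
stable = max_real_part < 0
```

With gain g < 1 the origin is stable; raising g past 1 pushes eigenvalues into the right half-plane, which is the kind of local statement these linearizations provide.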
  8. 20 Dec 2019

    3/10 There have been some awesome papers looking at how low-dimensional activity relates to the way that brains (and artificial networks) solve tasks. Shoutout to ao for this awesome summary of work along those lines:

  9. 20 Dec 2019

    2/10 There’s been a lot of talk in the computational neuro world about how neurons are frequently correlated, confining their activity to a “low-dimensional” subspace (or manifold).

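
The "low-dimensional subspace" point in the tweet above is easy to demonstrate with synthetic data (a hypothetical illustration, not real recordings): many correlated neurons driven by a few shared latent signals.

```python
import numpy as np

# 50 "neurons" whose activity is a mixture of only 3 shared latent signals
# plus a little noise; PCA recovers the low-dimensional subspace despite
# the high ambient dimensionality.
rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 50, 1000, 3

latents = rng.standard_normal((n_latents, n_timepoints))   # shared signals
mixing = rng.standard_normal((n_neurons, n_latents))       # loading matrix
activity = mixing @ latents + 0.05 * rng.standard_normal((n_neurons, n_timepoints))

# PCA via SVD of the mean-centered activity matrix
centered = activity - activity.mean(axis=1, keepdims=True)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance_explained = singular_values**2 / np.sum(singular_values**2)

# The top 3 components capture nearly all the variance
top3 = variance_explained[:3].sum()
```

Even though the data live in 50 dimensions, three principal components explain essentially all the variance, which is what "confined to a low-dimensional subspace" means operationally.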
  10. 20 Dec 2019

    Excited to post my preprint about RNNs, dynamics, and ! Here’s a quick thread explaining the main ideas: 1/10

  11. 19 Sep 2019

    Being a TA for this class is always a pleasure (so much so that I'm doing it a third time). Mehrdad imparts such clear intuition on a variety of complex topics in his lectures. It's great to see him get recognized for it

  12. Retweeted
    22 Jul 2019

    Ten years ago today, at a TED conference, a neuroscientist claimed that he could simulate the human brain in ten years. And, er, that didn’t happen. Here’s a look at why, and whether the goal even makes any sense.

  13. Retweeted
    15 Jul 2019

    JazLab IDs brain activity patterns that encode our expectations and influence how we perceive the world. More info: Read the paper:

  14. 12 Jul 2019

    Really like this sign. It's easy to feel helpless, but important to show up and speak up. Apathy is the greatest danger to justice

  15. 12 Jul 2019
  16. 17 May 2019

    Awesome work by Morteza that addresses questions about how we reason about our own errors

  17. Retweeted
    10 Apr 2019

    Scientists have obtained the first image of a black hole, using Event Horizon Telescope observations of the center of the galaxy M87. The image shows a bright ring formed as light bends in the intense gravity around a black hole that is 6.5 billion times more massive than the Sun

  18. Retweeted
    29 Mar 2019

    New study from our group getting at this simple question: how does success/failure change neural activity on the next trial? Jing shows that reinforcement directly affects neural variability along behaviorally relevant dimensions to implement an exploration-exploitation strategy

  19. 17 Mar 2019
  20. Retweeted

    My friends, my colleagues, I tell you: give up your single neurons! We are moving toward a neural population doctrine!

