Ari Benjamin

@arisbenjamin

computational neuroscientist, general life appreciator

Joined March 2010

Tweets


  1. Jan 29

    52 websites were sharing my activity with facebook, like a little blue ghost stalker 👥

  2. Retweeted
    Jan 27

    SYNPLA, a method to identify synapses displaying plasticity after learning

  3. Jan 12

    More than any single idea we came up with, having a continued practice of reflection that engaged everyone really boosted how much folks engage with the lab. But we came up with good ideas too. How about you all? What lab practices do you have that you're proud of?

  4. Jan 12

    What makes a good lab? Are group meetings really the best way? In the we recently reexamined how we organize our weeks, and then redesigned everything in a systematic way (100% democratically!). I blogged about our design process and takeaways:

  5. Dec 6, 2019

    As I understand it, that theorem is the one that says that, if a sender and receiver agree on a code, it requires -log r bits to communicate an event that has probability r

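The coding idea in the tweet above can be checked numerically: under Shannon's source-coding view, once sender and receiver agree on an optimal code, an event of probability r costs about -log2(r) bits, so rarer events get longer codewords. A minimal sketch (my own illustration, not from the thread):

```python
import math

def optimal_code_length_bits(r: float) -> float:
    """Ideal codeword length, in bits, for an event of probability r.

    Under an optimal (Shannon) code agreed on by sender and receiver,
    communicating an event of probability r costs about -log2(r) bits.
    """
    return -math.log2(r)

# A fair coin flip needs 1 bit; a 1-in-8 event needs 3 bits.
print(optimal_code_length_bits(0.5))    # → 1.0
print(optimal_code_length_bits(0.125))  # → 3.0
```

Averaging this length over all events weighted by their probabilities gives the entropy, which is the link to density estimation quoted from Dayan and Abbott below.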
  6. Dec 6, 2019

    From Dayan and Abbott Ch. 10: "A theorem due to Shannon describes circumstances under which a generative model that maximizes the likelihood over input data also provides the most efficient way of coding those data, so density estimation is closely related to optimal coding."

  7. Dec 6, 2019

    I recently learned that having an efficient neural code (in the Barlow sense) is connected to having a good generative model. Took me by surprise! But makes sense in retrospect

  8. Nov 4, 2019

    Modest Mouse was right all along: The universe is shaped exactly like the earth — if you go straight long enough you'll end up where you were

  9. Retweeted

    twitter

  10. Retweeted
    Oct 9, 2019

    It is now live: "Ten common statistical mistakes to watch out for when writing or reviewing a manuscript." Use the annotation function to leave your comments/ideas/solutions

  11. Oct 1, 2019

    I'm proud of this! Very clear (I hope 🤞) introduction to machine learning if you're interested in neural decoding

  12. Sep 25, 2019

    Ah Matt I couldn't find your handle! Everyone should follow

  13. Sep 25, 2019

    It’s a hard question that we’ve thought carefully about. There’s an empirical and a theoretical side. Our thoughts are in the Discussion, but please, tell us your thoughts!

  14. Sep 25, 2019

    This is obviously about how we understand V4’s activity. But we also think this is a cautionary tale for the vary-stimuli-in-a-parameterized-way approach. When does this approach tell you about a neuron's response to other stimuli & the neuron's general role?

  15. Sep 25, 2019

    We then compared these tuning curves to hue tuning curves estimated using artificial stimuli. Were they similar? Not really. Stated cleanly: in different visual contexts, the immediate effect of varying hue upon the response was different.

  16. Sep 25, 2019

    We tried reverse-correlation approaches and also a new method: we fit a nonlinear V4 model (based on ImageNet-trained DNNs, Yamins/DiCarlo style) and characterized how tiny perturbations to hue affected the model’s response.

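The perturbation idea in the tweet above can be sketched with a finite difference: nudge the hue of an image slightly and measure how a model neuron's response changes. Everything here (the function names, the HSV layout, the toy model standing in for the DNN-based V4 model) is a hypothetical illustration, not the authors' code:

```python
import numpy as np

def local_hue_sensitivity(model_response, image_hsv, delta=1e-3):
    """Finite-difference derivative of a model's response with respect to a
    small global hue shift (hue channel assumed in [0, 1), wrapping around)."""
    shifted = image_hsv.copy()
    shifted[..., 0] = (shifted[..., 0] + delta) % 1.0  # perturb hue channel only
    return (model_response(shifted) - model_response(image_hsv)) / delta

# Toy stand-in for a fitted V4 model: it responds to the image's mean hue,
# so its local hue sensitivity should be about 1 for any image.
toy_model = lambda img: img[..., 0].mean()
image = np.full((8, 8, 3), 0.5)  # uniform mid-hue test image
print(local_hue_sensitivity(toy_model, image))  # → ~1.0
```

In the preprint's setting the derivative would be taken per model neuron across many natural images, tracing out how hue sensitivity varies with visual context; the closed-form toy just keeps the sketch runnable.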
  17. Sep 25, 2019

    It took a few years to get it right, but we managed to estimate tuning of macaque V4 neurons from natural scenes. We looked at hue tuning, since there's an already large literature about color responses in V4.

  18. Sep 25, 2019

    In V1 there’s a rich literature asking this question by estimating tuning from natural scene responses. But what about in higher areas, like V4? We can’t use the same methods, since responses are so nonlinear.

  19. Sep 25, 2019

    So, a classic way to ask "what does neural activity mean?" in the visual cortex is to vary stimuli in a systematic, parameterized way while recording responses. But do those results tell you what activity 'means' in other contexts, like while viewing natural scenes?

  20. Sep 25, 2019

    New preprint! Does V4’s tuning to hue on artificial stimuli tell us about its tuning to hue on natural images? Not really. (Twitterified in the thread below) with , , Matthew Smith, and

