Scott Linderman

@scott_linderman

Assistant Professor of Statistics. Computational Neuroscience, Machine Learning.

Joined April 2016

Tweets

  1. Retweeted
    Jan 28

    I'm pleasantly surprised by the interest in my recent preprint here on Twitter. Feedback or questions are appreciated!

  2. Jan 19
  3. Jan 16

    New work from a really fun collaboration with and ! We recast many decision making models as instances of recurrent switching linear dynamical systems. Then we can fit these models to neural data more easily and generalize them in cool new ways!

  4. Retweeted
    Jan 16

    Optimal Transport is a super cool and underutilized tool in neural data analysis. This blog post does a great job introducing it in the case of fMRI data👇

  5. Retweeted
    Jan 10

    The outcome of the review process will be announced soon. To adjust expectations, here is some data and information. 1/n

  6. Retweeted
    Dec 28, 2019

    [urban meyer shown] [ohio state immediately loses]

  7. Dec 17, 2019

    My only complaint: isn't "emergent property" just a fancy name for a statistic?

  8. Dec 17, 2019

This work is super similar to the likelihood-free inference methods from 's group. (See their STG analysis for comparison. Really, everyone's favorite!)

  9. Dec 17, 2019

    They apply the method to a bunch of neat examples: the stomatogastric ganglion (everyone's favorite), a four-population model of V1, a task-switching model of SC, and abstract RNNs trained to perform posterior inference.

  10. Dec 17, 2019

    Enjoyed this new work from Sean Bittner et al. They use normalizing flows to approximate a max-ent distribution over parameters of a black box model, subject to the constraint that the model produce a specified "emergent property" of interest. (A toy sketch of this construction appears after the timeline.)

  11. Retweeted
    Dec 13, 2019

    Wired on and Yoshua Bengio's NeurIPS talks and the potential for deeper biological inspiration in AI.

  12. Retweeted
    Dec 12, 2019

    Excited to announce voltage imaging in awake mice with ASAP3, out today in Cell. ASAP3 works in 1p or 2p, with random-access 2p producing >5-min recordings in vivo, a collaborative effort with the Dieudonné lab. Plasmids at addgene.

  13. Dec 9, 2019

    Here's the paper: Check out Ruoxi's talk tomorrow and poster on Wednesday morning (#147) to learn more!

  14. Dec 9, 2019

    If you've heard me talk about research over the past few years, you know I can't get enough of switching linear models like these! We've been working on a variety of methods for approximate Bayesian inference, and they work here too.

  15. Dec 9, 2019

    The key idea is that only the within-compartment dynamics are nonlinear; the coupling between compartments is all linear. By approximating within-compartment dynamics with a recurrent SLDS, the whole cell's voltage follows conditionally linear dynamics. (A toy sketch of this structure appears after the timeline.)

  16. Dec 9, 2019

    Finally, Ruoxi Sun is giving an oral presentation (!!!) on our recurrent state space models for spatiotemporal voltage imaging data tomorrow at 4:10pm, W Ballroom C. We use coupled rSLDS to approximate biophysical models of voltage propagation, like this Hodgkin-Huxley model.

  17. Dec 9, 2019

    This tutorial was fantastic. Highly recommend you check out the slides.

  18. Dec 9, 2019

    I'm chairing tomorrow afternoon's session on deep generative models (Track 1 Session 2 @ 4:10pm in Hall C+B3). We have a great line-up of talks! If you have questions for either of the oral presentations, feel free to ask them.

  19. Dec 9, 2019

    Also, I just noticed that we'll be right next door to and ! Looking forward to catching up and hearing about your latest work!

  20. Dec 9, 2019

    Come by 's poster tomorrow (#55, 10:45am) to learn about "Mutually Regressive Point Processes" — nonlinear Hawkes processes with additive excitation and multiplicative inhibition. If you're into augmentation schemes, you'll definitely like our Gibbs sampler! (A toy intensity function in this spirit is sketched after the timeline.)

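A minimal sketch of the maximum-entropy idea from item 10, under heavy simplifying assumptions: the "flow" here is a single diagonal affine (Gaussian) layer so its entropy has a closed form, the emergent property is a made-up scalar statistic with a made-up target value, and the constraint is enforced with a fixed quadratic penalty. None of this is the paper's actual model or code; it only illustrates the shape of the objective.

```python
import torch

dim = 2
mu = torch.zeros(dim, requires_grad=True)         # shift of the affine "flow"
log_sigma = torch.zeros(dim, requires_grad=True)  # log-scale of the affine "flow"

def emergent_property(theta):
    # Hypothetical scalar statistic of the sampled parameters (an assumption).
    return (theta ** 2).sum(dim=-1)

target = torch.tensor(3.0)  # desired expected value of the emergent property
penalty = 10.0              # fixed weight on the constraint violation
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for _ in range(2000):
    eps = torch.randn(512, dim)             # base samples
    theta = mu + log_sigma.exp() * eps      # push samples through the affine layer
    # For a diagonal Gaussian, entropy = sum(log_sigma) + const, so maximizing
    # entropy subject to the emergent-property constraint becomes this
    # penalized objective.
    entropy = log_sigma.sum()
    violation = (emergent_property(theta).mean() - target) ** 2
    loss = -entropy + penalty * violation
    opt.zero_grad()
    loss.backward()
    opt.step()
```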

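Items 14-16 describe a structural trick: only the within-compartment dynamics are nonlinear, the coupling between compartments is linear, so approximating each compartment with a recurrent switching linear system makes the whole cell's voltage conditionally linear given the discrete states. The sketch below only mimics that structure; the compartment count, parameter values, and hard-threshold "recurrent" switching rule are invented for illustration and are not the authors' model.

```python
import numpy as np

n_compartments = 3   # hypothetical cell with three compartments
n_states = 2         # discrete modes per compartment (e.g., subthreshold vs. spiking)

rng = np.random.default_rng(0)
# One linear decay and offset per (compartment, discrete state): the
# within-compartment dynamics are piecewise linear.
A = rng.uniform(0.80, 0.99, size=(n_compartments, n_states))
b = rng.normal(0.0, 0.1, size=(n_compartments, n_states))
# Linear coupling between neighboring compartments (a symmetric chain).
C = 0.05 * (np.eye(n_compartments, k=1) + np.eye(n_compartments, k=-1))

def step(v, z):
    """Given the discrete states z, the voltage update is linear in v."""
    idx = np.arange(n_compartments)
    return A[idx, z] * v + b[idx, z] + C @ v

def switch(v, threshold=1.0):
    """Toy 'recurrent' transition: the next discrete state depends on the
    continuous voltage, mimicking how an rSLDS couples switches to the state."""
    return (v > threshold).astype(int)

v = np.zeros(n_compartments)
z = np.zeros(n_compartments, dtype=int)
for _ in range(100):
    v = step(v, z)
    z = switch(v)
```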

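Item 20 names the model class: nonlinear Hawkes processes with additive excitation and multiplicative inhibition. The intensity below is just a univariate toy in that spirit; the exponential kernels and parameter values are assumptions, and it says nothing about the augmentation scheme or Gibbs sampler mentioned in the tweet.

```python
import numpy as np

def intensity(t, spike_times, mu=1.0, w_exc=0.5, tau_exc=0.2, w_inh=1.0, tau_inh=0.5):
    """Toy rate with additive excitation and multiplicative inhibition.

    Past spikes add exponentially decaying bumps to a baseline rate mu, and the
    result is multiplied by a factor in (0, 1] that shrinks with recent activity.
    All kernels and parameters are illustrative assumptions.
    """
    dt = t - spike_times[spike_times < t]          # time since each past spike
    excitation = mu + w_exc * np.exp(-dt / tau_exc).sum()
    inhibition = np.exp(-w_inh * np.exp(-dt / tau_inh).sum())
    return excitation * inhibition

# Example: evaluate the rate shortly after a burst of spikes.
spikes = np.array([0.10, 0.15, 0.40])
print(intensity(0.5, spikes))
```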