Tweets


  1. Retweeted
    Jun 10, 2019

    Super excited to see this work published! Congrats to Emily for finishing such an awesome project. These were super difficult experiments for her to run, and a really challenging problem to think through!

  2. Retweeted
    Jun 27, 2019

    New work out on arXiv! Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics (), with fantastic co-authors , , and . Summary below! 👇🏾 (1/4)

  3. Nov 1, 2018

    We hope this tool inspires you to unleash modern deep learning approaches toward understanding how networks and brains solve challenging tasks!

  4. Nov 1, 2018

    FixedPointFinder identifies the stable (black) and unstable (red) fixed points, along with linearized dynamics local to each fixed point (red lines are dominant modes). Trajectories of the network state are overlaid in blue.

  5. Nov 1, 2018

    Here’s an example--we trained a 16-unit LSTM network to implement a 3-bit memory (a.k.a. the Flip Flop task). Each input (gray) delivers transient pulses to flip the state of a corresponding output (trained network: purple; training signal: cyan).

  6. Nov 1, 2018

    We've created a Tensorflow toolbox for reverse engineering trained RNNs (with ). You train a network (e.g., "vanilla", LSTM, GRU, custom), then we use TF to do the fixed point optimizations and Jacobian computations.

  7. Retweeted
    Sep 10, 2018

    Great work, Megan, Christeva, and colleagues! It’s reassuring to see that every once in a while, the answer is not “Motor cortex just does everything”.

  8. Jul 3, 2018

    We've posted code to accompany our 2015 eLife paper. The framework extracts a subject's internal model of a dynamical system being controlled. Perhaps useful for those studying BMI / motor / control / learning!

  9. Retweeted
    Jul 2, 2018

    Deep convolutional neural networks are great models of the visual system, but these static systems don't explain the temporal dynamics of real visual responses. So we built deep recurrent networks: Paper:

  10. Jun 20, 2018

    To reach or not to reach, that was the question. New work from et al shows that preparatory activity in F5 and AIP separates according to anticipated delays.

  11. Retweeted

    Most of you know me as a successful neuroscientist / deep learning researcher but I have a story that I want to share briefly. I grew up in a group home, which is basically an orphanage.

  12. Mar 12, 2018

    How does the brain quickly learn to improve behavior, and what are the limitations of this type of learning? Check out our latest paper, "Learning by neural reassociation," as featured in Byron Yu's talk.

  13. Retweeted
    Feb 15, 2018

    First tweet, new paper: we asked can learning motor tasks in your mind w/o physical movements (via a BMI) ‘transfer’ and improve overt behavior, & if so, by what neural mechanism? Thx co-authors Paul & Stephen

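The approach described in tweets 4–6 — finding an RNN's fixed points by optimization, then classifying each as stable or unstable from the linearized (Jacobian) dynamics — can be sketched in a few lines. This is an illustrative toy example, not FixedPointFinder's actual API: the two-unit tanh update rule, the finite-difference gradients, and the function names here are all assumptions (the toolbox works on trained TensorFlow RNNs such as the 16-unit flip-flop LSTM and uses TF's own optimizers and Jacobians).

```python
import numpy as np

# Toy recurrent update h_{t+1} = F(h_t) = tanh(W h_t).
# W is a contraction here, so h* = 0 is the unique, stable fixed point.
W = np.array([[0.5, 0.2],
              [0.1, 0.3]])

def F(h):
    return np.tanh(W @ h)

def q(h):
    """'Speed' of the state: q(h) = 0.5 * ||F(h) - h||^2.
    q is zero exactly at fixed points, so minimizing q finds them."""
    d = F(h) - h
    return 0.5 * float(d @ d)

def find_fixed_point(h0, lr=0.1, steps=2000, eps=1e-6):
    """Gradient descent on q(h), with finite-difference gradients."""
    h = np.asarray(h0, dtype=float).copy()
    for _ in range(steps):
        grad = np.array([(q(h + eps * e) - q(h - eps * e)) / (2 * eps)
                         for e in np.eye(len(h))])
        h -= lr * grad
    return h

def jacobian(h, eps=1e-6):
    """Numerical Jacobian of F at h. Eigenvalues inside the unit circle
    mean the fixed point is stable; modes outside are unstable."""
    n = len(h)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(h + e) - F(h - e)) / (2 * eps)
    return J

# Initialize away from the fixed point, optimize, then linearize.
h_star = find_fixed_point(np.array([0.8, -0.5]))
J = jacobian(h_star)
stable = bool(np.all(np.abs(np.linalg.eigvals(J)) < 1.0))
```

In the toolbox setting, many optimizations are launched from states sampled along the network's trajectories, yielding the collection of stable and unstable fixed points (and their dominant linearized modes) shown in the tweet's figure.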
