M Ganeshkumar

@Ganeshk92

Slowly becoming a computational neuroscientist @ NUS. Using in silico lab rats to understand few-shot learning. Do share your opinions :)

Joined September 2012.

Tweets


  1. Retweeted
    Jan 30

    Beautiful, simple and elegant. Lateral PFC reflects and predicts conscious content in the absence of motor reports. Also, a clear example of why we should be careful about the interpretation of fMRI results (as well as why we need more single neuron work)

  2. Retweeted
    Jan 24

    is one of the most important techniques. I don't often recommend PhD theses - 's is exceptional. He's a brilliant writer! Check out this taxonomy / table of contents!!! 👇👇👇

  3. Retweeted

    As far as current machine learning is concerned, generalization originates from the ability to learn the latent manifold on which the training data lies, i.e. the ability to interpolate between training samples (local generalization, by definition)

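    (A note on the claim above: a minimal numpy sketch of "interpolating on the latent manifold", using plain PCA as a stand-in for whatever representation a deep network actually learns. All names below are illustrative, not from the tweet.)

    import numpy as np

    # Toy "training data" lying near a low-dimensional manifold.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))  # rank-2 data in 10-D

    # Learn a linear latent space (PCA via SVD) as the manifold model.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:2]                       # 2-D latent coordinates

    encode = lambda x: (x - mean) @ components.T
    decode = lambda z: z @ components + mean

    # "Local generalization": decode points interpolated between two training samples.
    z_a, z_b = encode(X[0]), encode(X[1])
    for alpha in np.linspace(0.0, 1.0, 5):
        x_new = decode((1 - alpha) * z_a + alpha * z_b)  # a novel point on the manifold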
  4. Jan 24

    Update on the 2013 paper on memory schema and complementary learning systems by McClelland

  5. Retweeted
    Jan 23

    Q-learning is difficult to apply when the number of available actions is large. We show that a simple extension based on amortized stochastic search allows Q-learning to scale to high-dimensional discrete, continuous or hybrid action spaces:

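    (A toy sketch of the extension described above, assuming the core idea is: replace the exact argmax in the Q-learning target with a search over sampled candidate actions, some drawn from a learned proposal and some uniform. The quadratic critic is a stand-in for a deep Q-network; all names are illustrative.)

    import numpy as np

    rng = np.random.default_rng(0)

    def q_value(state, action):
        # Stand-in critic; in practice this would be a learned Q-network.
        return -np.sum((action - state) ** 2)

    def amortized_max(state, proposal_mean, n_samples=64):
        # Candidates from a learned proposal around its current best guess...
        learned = proposal_mean + 0.1 * rng.normal(size=(n_samples, state.size))
        # ...plus uniform samples to keep the search exploratory.
        uniform = rng.uniform(-1.0, 1.0, size=(n_samples, state.size))
        candidates = np.vstack([learned, uniform])
        values = np.array([q_value(state, a) for a in candidates])
        return candidates[np.argmax(values)], values.max()

    state = rng.uniform(-1, 1, size=4)
    best_action, target = amortized_max(state, proposal_mean=np.zeros(4))
    # `target` approximates max_a' Q(s', a') in the usual Q-learning backup,
    # without enumerating a huge discrete, continuous, or hybrid action space.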
  6. Retweeted
    Jan 21

    New results: Achieving stable dynamics in neural circuits

  7. Retweeted
    Jan 16

    Read our paper "A distributional code for value in dopamine-based reinforcement learning" online here:

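    (The mechanism in that paper can be sketched with asymmetrically scaled TD updates: each value "channel" weights positive and negative prediction errors differently, so the population spreads out over the reward distribution instead of collapsing onto its mean. A toy numpy sketch under that reading, not the authors' code.)

    import numpy as np

    rng = np.random.default_rng(0)

    # One value predictor per channel, each with its own asymmetry tau:
    # positive prediction errors scaled by tau, negative ones by (1 - tau).
    taus = np.linspace(0.1, 0.9, 9)
    values = np.zeros_like(taus)
    lr = 0.05

    # Rewards drawn from a bimodal distribution the population should encode.
    for _ in range(5000):
        r = rng.choice([0.0, 10.0], p=[0.7, 0.3])
        delta = r - values                            # per-channel prediction errors
        asym = np.where(delta > 0, taus, 1.0 - taus)  # asymmetric learning rates
        values += lr * asym * delta

    # `values` now spans the reward distribution (roughly its expectiles),
    # rather than all converging on the single mean as in classic TD learning.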
  8. Retweeted
    Jan 15

    When neuroscience and AI researchers get to chatting, cool stuff happens! My first, and I hope not last, trip into neuroscience has been published in Nature. 1/

  9. Retweeted
    Jan 15

    We have 2 papers published in Nature today! 🎉 One describes AlphaFold, which uses deep neural networks to predict protein structures with high accuracy. AlphaFold made the most accurate predictions at the 2018 scientific community assessment CASP13. 1/4

  10. Retweeted
    Jan 9

    Are you confused about all these results in the fMRI literature on motor learning? Increases, decreases, shifts of activity, pattern change, changes in connectivity? We were as well… (1/n)

  11. Retweeted

    Latent dynamics in the neural manifold across three cortical areas in monkeys are stable throughout years of consistent behavior

  12. Retweeted

    If you are a young PI in a learning/compneuro-related area, I recommend checking out the Scholars program (). They have multiple open slots. And it is an understatement to say that they have many good people ()

  13. Retweeted
    Jan 6

    *REMINDER + PLS RT* Our workshop, From Neuroscience to Artificially Intelligent Systems (NAISys), has an abstract deadline of January 10. This Friday!!! But, it's only 1-page, so easy-peasy: Please send in ideas for how neuroscience can inform AI!

  14. Retweeted
    Jan 6

    Despite Deep 's popularity, there are precious few good intro tutorials! This is a really nice one. It combines:
    - toy implementation
    - math concepts
    - intuitive explanations

  15. Retweeted
    Jan 2

    Excited to share a new review on all things engram that Susumu Tonegawa () and I wrote: "Memory engrams: Recalling the past and imagining the future" 1/3

  16. Retweeted

    Just got sent the printed preview version of "Dive into Deep Learning", for which there is a free interactive online version: - omg this textbook is so awesome.

  17. Retweeted

    Neurons in primate non-linearly mix information about space and non-spatial elements of the environment in a task-dependent manner; this efficient code flexibly represents unique perceptual experiences and corresponding memories

  18. Retweeted
    Dec 26, 2019

    Starting Jan 6, we're doing a series of lectures at MIT on deep learning and AI. Skip the first one, but afterwards there are some great talks (inc. , ). All are welcome. Seating limited (1st come, 1st served). Video will be posted here:

  19. Retweeted
    Dec 24, 2019

    Attention is one of the most important breakthroughs in the history of Deep Learning. This is definitely the best explanation of it I've seen. For / folks - try building an attention mechanism from scratch!

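    (Taking up the "from scratch" suggestion: the standard scaled dot-product attention of Vaswani et al. (2017), softmax(Q K^T / sqrt(d_k)) V, fits in a few lines of numpy.)

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
        weights = softmax(scores, axis=-1)   # each row is a distribution over keys
        return weights @ V                   # mix values according to those weights

    # Toy usage: 3 queries attending over 5 key/value pairs of dimension 8.
    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
    out = scaled_dot_product_attention(Q, K, V)  # shape (3, 8)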
