6) So it turns out that most of the processing complexity of a single neuron is the result of two specific biological mechanisms: the distributed nature of the dendritic tree coupled with the NMDA ion channel. Take away either one of those things, and a neuron turns into a simple device.
7) One additional advantage deep neural networks have over thousands of complicated differential equations is the ability to visualize their inner workings. The simplest method is to look at the first-layer weights of the neural network:
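For illustration, here is a minimal sketch of how one might inspect such a filter. This is not the authors' actual code: the PyTorch framework, the layer sizes, and the (synapses x time) input layout are all my assumptions.

```python
# Minimal sketch (assumptions: PyTorch, illustrative sizes, input laid out
# as a num_synapses x time matrix of presynaptic spike trains).
import matplotlib.pyplot as plt
import torch.nn as nn

num_synapses, kernel_ms, num_units = 1278, 100, 128  # illustrative only
first_layer = nn.Conv1d(num_synapses, num_units, kernel_size=kernel_ms)

# Conv1d weights have shape (num_units, num_synapses, kernel_ms):
# each first-layer unit is a spatio-temporal filter over synapses and
# time lags. In practice you'd load a trained model; random init here.
w = first_layer.weight.detach().numpy()

unit = 0  # pick one hidden unit to inspect
plt.imshow(w[unit], aspect='auto', cmap='RdBu_r')
plt.xlabel('time before prediction (ms)')
plt.ylabel('synapse (ordered by distance from soma)')
plt.title(f'first-layer unit {unit}: spatio-temporal filter')
plt.colorbar(label='weight')
plt.show()
```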
8) Depicted here are the weights of one of the artificial units in the first layer of the large DNN that mimics the neuron in its full complexity. One can see the spatio-temporal structure of synaptic integration: the basal and oblique trees integrate predominantly recent inputs. pic.twitter.com/93JOjgd4yr
9) Here is a different first-layer unit. For this one, the apical tree appears to pay attention to what happened on it for many more milliseconds than the basal and oblique trees did in the previous unit. (BTW, the blue traces are inhibition, the red traces are excitation.) pic.twitter.com/O5oAr8vxkK
10) If we look at the first-layer units of the small DNN that fitted the neuron with AMPA-only synapses, it appears that the units don't really pay attention at all to what happens at the apical tree, or at any distal locations on the basal and oblique trees. pic.twitter.com/QEBDzsB2KY
11) It is a little hard to see the details of what's going on in those weight plots because there are so many synapses, so let's focus on a single dendritic branch and zoom in on it. For a single branch with NMDA synapses, it's possible to mimic its behavior with only 4 hidden units (see the sketch below). pic.twitter.com/W1RKKURaUu
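To make that concrete, here is a hedged sketch of what a 4-hidden-unit model for a single branch could look like. The sizes, the fully connected architecture, and the framework are my assumptions, not details from the thread.

```python
# Hedged sketch: a tiny network with 4 hidden units mapping the last
# 100 ms of excitatory input on one branch to the branch's output.
import torch
import torch.nn as nn

num_synapses, window = 20, 100  # synapses on the branch, ms of history (illustrative)

branch_model = nn.Sequential(
    nn.Linear(num_synapses * window, 4),  # the 4 hidden units from the tweet
    nn.ReLU(),
    nn.Linear(4, 1),                      # predicted branch output at time t
)

# A random binary spatio-temporal input pattern, flattened to one vector.
x = (torch.rand(1, num_synapses * window) < 0.05).float()
print(branch_model(x))
```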
12) And here are its spatio-temporal patterns of integration. I'll verbally describe those filters, from top to bottom, as questions that the neuron is "asking" the input in order to determine its output. (Note: no inhibition here, only excitation; the time window extends 100 ms.) pic.twitter.com/J6Qqdy01fu
13) unit 1: was there very recent excitation proximal to the soma?
unit 2: was there very recent excitation distal to the soma?
unit 3: was there a quick distal-to-proximal pattern of excitation?
unit 4: was there a slow distal-to-proximal pattern of excitation?
pic.twitter.com/jE9uarrJkD
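Continuing the branch sketch from tweet 11 above, each hidden unit's weight vector can be reshaped into a (synapse x time) map and plotted, which is roughly how one would render the four filters just described (again illustrative, not the authors' code):

```python
# Reshape each hidden unit's weights (from the branch_model sketch above)
# into a (num_synapses x window) spatio-temporal filter and plot all four.
import matplotlib.pyplot as plt

w1 = branch_model[0].weight.detach().numpy().reshape(4, num_synapses, window)
fig, axes = plt.subplots(4, 1, sharex=True)
for i, ax in enumerate(axes):
    ax.imshow(w1[i], aspect='auto', cmap='Reds')  # excitation only, per tweet 12
    ax.set_ylabel(f'unit {i + 1}')
axes[-1].set_xlabel('time before output (ms)')
plt.show()
```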
14) Many more details are in the preprint on bioRxiv: https://www.biorxiv.org/content/10.1101/613141v1 @bioRxiv @biorxiv_neursci
15) Huge thanks to my PhD supervisors @Segev_Lab and @mikilon, and also to all my lab mates @TMoldwin @Oren_Amsalem @GuyEyal @MichaelDoronII @gialdetti, and to everyone else who listened to me talk on and on about this stuff in the past! :-)
Released all code and data of this work! Think you can build a simpler yet no less accurate model of a single neuron? Just want to analyze the input-output dataset from a completely new perspective? I've tried to make all of that as simple as I could: https://twitter.com/DavidBeniaguev/status/1244334898888007693