I am looking for good papers on the possibility that dendritic trees may compute derivatives of functions but haven’t found anything so far. Such computations may be very useful for both learning and closed-loop control. Are there papers I have overlooked?
-
cc: @YiotaPoirazi, @trabranco, @TonyZador, @KordingLab, @AdamMarblestone, @tyrell_turing, @jbimaknee
-
Replying to @bayesianbrain @YiotaPoirazi and others
It's never been written about. It's very obvious and I have been wanting to write that paper for literally 15 years. Would be great to write!
-
Replying to @KordingLab @bayesianbrain and others
You mean the temporal derivative of an input waveform? Or something else?
-
Replying to @AdamMarblestone @KordingLab and others
I am interested in a general process for dendritic integration where the synaptic inputs are sampled values f_i of a function f of several variables, and the output of the dendritic computation is the partial derivative of f with respect to one or more of those variables.
-
Replying to @bayesianbrain @AdamMarblestone and others
In general, this may be a function learned by a network of neurons. So I think this definition of @KordingLab might be a special case: https://twitter.com/KordingLab/status/1220101640432254976
Quoting @KordingLab: "If spike trains can encode ordered pairs (x, f(x))... it should be possible."
-
Replying to @bayesianbrain @KordingLab and others
Still too abstract for me; I need a concrete example, say a function of 2 variables.
-
Replying to @AdamMarblestone @KordingLab and others
For concreteness, I think dendritic trees may compute the partial derivatives of functions using an algorithm similar to the Cauchy integral formula for derivatives. It can be implemented on any binary tree where a large number of local computations occur in parallel.
-
Replying to @bayesianbrain @AdamMarblestone and others
For a large number of functions (e.g. polynomials, sigmoids, trigonometric functions) we get geometric convergence of the error. I think any proposed algorithm must scale very well with the dimension of the input to the dendritic tree. Very fast convergence implies robustness.
-
Replying to @bayesianbrain @KordingLab and others
Like, say I am a synapse, the jth in this tree, with a function f(x, y). I have an associated (x_j, y_j), and my input at time t is taken as f(x_j, y_j)? Now what do you want the neuron's spike output to represent? Or is what Konrad said the thing you want?
-
Replying to @AdamMarblestone @KordingLab and others
If we represent these inputs as a vector [f(x_1,y_1), ..., f(x_n,y_n)], then the spike output may represent the dot product of [f(x_1,y_1), ..., f(x_n,y_n)] and [w_1, ..., w_n], where the w_i denote frequencies of different voltage oscillations in the dendritic tree.
-
Replying to @bayesianbrain @AdamMarblestone and others
This formulation is not exact, in the sense that it is not contour integration, but the information I need is there. I think that by averaging these computations we can get a good stochastic estimate of the partial derivative with respect to any x_i or y_i.