Deep learning is to biological neural systems as quantum theory is to consciousness. @KordingLab @tyrell_turing @tdverstynen @GaryMarcus @danilobzdok @kendmil
Replying to @neuro_data @KordingLab and others
There's a major difference: DL was guided in its infancy by ideas from neuroscience, so there is a relatively direct link between them. In contrast, the application of quantum mechanics to the c-word is taking two distinct fields and tying them together on speculative grounds.
3 replies 1 retweet 41 likes
Replying to @tyrell_turing @neuro_data and others
The relation between deep learning, with its single neuron type and largely homogeneous architecture, and the actual complexity of the human brain, with >1,000 neuron types, hundreds of proteins at each synapse, and >100 distinct brain regions, is risible.
2 replies 1 retweet 36 likes
Replying to @GaryMarcus @neuro_data and others
Every model is an abstraction. Newtonian mechanics ignores air turbulence, molecular interactions, etc. Climate models capture coarse-grained interactions, not the multitude of animals, plants, and wind currents that truly shape the climate. Neural networks are no different.
6 replies 4 retweets 35 likes
Replying to @tyrell_turing @neuro_data and others
Let's be real. Current neural nets have been shown empirically to work on some problems (after tinkering to get the details right), but do we really *know* that they are an abstraction of the brain, in which their details map onto simplifications of actual brain processes? No.
3 replies 6 retweets 55 likes
Replying to @GaryMarcus @neuro_data and others
I'm sorry, but this is a bad take. Yes, we know they are simplifications of real brains. 1) Neurons do something very similar to linear integration with a non-linearity. 2) They process inputs in a distributed, parallel manner. ANNs capture this basic process, period.
5 replies 2 retweets 33 likes
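As a minimal sketch of the abstraction described in the tweet above: each artificial unit linearly integrates its inputs and applies a pointwise non-linearity (point 1), and a layer of such units processes the shared inputs in parallel (point 2). The use of NumPy, a ReLU non-linearity, and random weights are illustrative assumptions, not anything specified in the thread.

import numpy as np

def layer(inputs, weights, biases):
    # Each row of `weights` is one unit: linear integration of the shared
    # inputs, computed for all units at once, followed by a pointwise
    # non-linearity (ReLU here; the choice is illustrative).
    return np.maximum(0.0, weights @ inputs + biases)

# Illustrative numbers only: 4 units receiving 3 shared inputs.
rng = np.random.default_rng(0)
x = rng.normal(size=3)           # "presynaptic" inputs
W = rng.normal(size=(4, 3))      # connection strengths
b = np.zeros(4)
print(layer(x, W, b))            # 4 unit outputs, computed in parallel

This is the sense in which ANNs "capture" the basic process: weighted summation plus a non-linearity, applied across many units at once, with everything else about real neurons left out.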
Replying to @tyrell_turing @neuro_data and others
Points 1 & 2 are vague but true for some aspects of the brain, possibly not all, and at best only part of an answer. If you had something analogous to an alternator for a car, would that in itself mean that your alternator analogue captures the dynamics of an internal combustion engine?
1 reply 0 retweets 2 likes
Replying to @GaryMarcus @neuro_data and others
Okay, I'm not sure what you're not getting here. My claim is simply that there are aspects of neural computation that ANNs capture. That is a fact, not a hypothesis. Why are we even debating this?
4 replies 0 retweets 1 like
Replying to @tyrell_turing @GaryMarcus and others
Two facts: 1) The input/output transformation performed by individual neurons is central to the computations performed by neuronal circuits. 2) The mechanisms by which (dendritic) inputs are transformed into (somatic) outputs in individual neurons in the brain are not well understood. 1/2
1 reply 0 retweets 2 likes
Replying to @SMBrocklehurst @tyrell_turing and others
There has been a lot of great work on mechanisms of dendritic integration *in vitro* using *simplified models*. However, that is really not the same thing as understanding the mechanism in vivo. 2/2
1 reply 0 retweets 2 likes
Yes, my point is analogous to understanding in a dish vs. in an actual organism. Often what we learn in the former doesn't work all that well in the latter. Without a firm grasp of context, understanding isolated parts can be misleading. (See the Bargmann-Marder arguments on the worm.)
Replying to @GaryMarcus @tyrell_turing and others
Indeed. Of course, in vitro models are often exceptionally useful; but people really do need to be clear about what the limitations of those models are, and how they might differ from the in vivo situation.
0 replies 0 retweets 1 like