"once you add external memory and suitable tool libraries, it is not even clear to me that an ANN could not learn the equivalent of the transition function of a brain" is, again and unfortunately, science fiction. What are the reasons to believe this?
@Plinz: A lot of very complex systems have relatively simple transition functions. If you want to argue about how best to build a system, you need different arguments than for your much stronger claim that a system cannot possibly be built in a particular way.
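A concrete example of that point (mine, not from the thread): Rule 110, a one-dimensional cellular automaton whose transition function is an eight-entry lookup table, is nevertheless Turing-complete. A minimal sketch:

```python
# Rule 110: a transition function of just 8 cases, yet capable of
# arbitrarily complex (Turing-complete) behavior.
RULE_110 = {  # (left, center, right) -> next state of the center cell
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the transition function to every cell (wrap-around edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell and watch structure emerge.
cells = [0] * 63 + [1]
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```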
@SimonDeDeo: This is a different argument than the one I was responding to. Your new argument relies on the idea that neuron-level response is "relatively simple". Our best knowledge of the brain suggests that it is, in fact, extremely complex.
@Plinz: There exists a wide range of estimates of the effective complexity of brains. Picking the one that best supports your nonexistence claim may mean that you are not maximizing the probability that you are correct.
@SimonDeDeo: Assuming you're not denying our best scientific evidence about neuron-level interaction, this seems like yet another argument, one that relies on "effective complexity". Can you tell me what "effective complexity" means?
@Plinz: Individual neurons are fairly nondeterministic. A large part of synaptic connectivity may just serve normalization. Effective complexity comes down to how many neurons we need to allocate to identify, learn, and compute a particular function reliably enough.
@SimonDeDeo: Your claim is now that (a) the complexity of neuron-level interaction is actually irrelevant to whole-brain simulation, and that (b) we can simulate a brain with far fewer neurons than are actually found in humans. Do you have any arguments for this?
@Plinz: (a) I don't think that whole-brain simulation is the way to go if you use a different computational substrate. (b) Yes, but that is a highly speculative and weak claim on my part, based on the low firing rate and reliability of neurons, and the high price of random access.
@SimonDeDeo: Low reliability: we know from fault-tolerant computation that N unreliable gates can be made reliable with a circuit of only O(N log N) gates. So that can't help.
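A toy illustration of the redundancy idea behind that bound (my sketch, not the actual von Neumann/Pippenger multiplexing construction; the gate, failure probability, and vote count are assumptions): running several copies of an unreliable gate and taking a majority vote drives the error rate down rapidly.

```python
import random

def noisy_nand(a, b, p=0.05):
    """A NAND gate that flips its output with probability p."""
    out = 1 - (a & b)
    return out ^ (random.random() < p)

def voted_nand(a, b, p=0.05, copies=15):
    """Run `copies` independent noisy gates and take the majority.
    (The vote itself is assumed reliable here, an assumption the
    real construction does not get to make.)"""
    votes = sum(noisy_nand(a, b, p) for _ in range(copies))
    return int(votes > copies // 2)

def error_rate(gate, trials=100_000, **kw):
    # For inputs (1, 1) the correct NAND output is 0.
    return sum(gate(1, 1, **kw) != 0 for _ in range(trials)) / trials

print("single gate :", error_rate(noisy_nand))   # about 0.05
print("15-way vote :", error_rate(voted_nand))   # orders of magnitude lower
```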
@SimonDeDeo: Low firing rate: if we could replace slow-firing neurons with a smaller number of fast ones, we should see our deep learning networks shrinking with clock speed (we don't).
@Plinz: Our current deep learning paradigm uses almost exclusively normalized weighted sums of reals, trained by stochastic gradient descent in a differentiable setup using the chain rule. We need high clock speeds to compensate for the dramatic number of update operations this requires.
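To make that description concrete, here is a minimal sketch of the paradigm as stated (a toy example of mine, not code from the thread): one sigmoid unit over a weighted sum of reals, trained by stochastic gradient descent with the chain rule written out by hand.

```python
import math, random

random.seed(0)
w, b, lr = [random.gauss(0, 0.1) for _ in range(2)], 0.0, 0.5

def forward(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b   # weighted sum of reals
    return 1.0 / (1.0 + math.exp(-z))              # differentiable nonlinearity

# Learn logical OR from four examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for _ in range(10_000):                            # many cheap update steps
    x, y = random.choice(data)                     # the "stochastic" in SGD
    p = forward(x)
    # Chain rule for the squared loss L = (p - y)^2 through the sigmoid:
    # dL/dz = 2*(p - y) * p*(1 - p), and dL/dw_i = dL/dz * x_i.
    dz = 2 * (p - y) * p * (1 - p)
    w = [wi - lr * dz * xi for wi, xi in zip(w, x)]
    b -= lr * dz

print([round(forward(x), 2) for x, _ in data])     # close to [0, 1, 1, 1]
```

The clock-speed point is visible in the loop: the unit itself is trivial, but even this toy takes thousands of sequential updates to train.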
@SimonDeDeo: Yes, we like it when we have higher clock speeds. But higher clock speeds do not let us get by with fewer neurons, as your argument requires.
@Plinz: If you want to act in real time, you need to minimize the number of steps between sensor and actuator, which you can partially compensate for by increasing the number of parallel paths, which implies a larger number of elements. It turns out that individual neurons are cheap.
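A back-of-envelope version of this argument, with every number an illustrative assumption of mine rather than a measurement: a fixed reaction-time budget divided by the per-element delay caps the serial depth, so any additional computation has to come from width, that is, from more cheap elements in parallel.

```python
neuron_step_ms  = 10        # assumed per-stage delay of a slow element
reaction_budget = 200       # assumed sensor-to-actuator budget, in ms
max_depth = reaction_budget // neuron_step_ms
print(f"max serial depth: {max_depth} stages")        # 20

# To keep the total amount of compute while capped at this depth,
# add parallel paths: width grows, and element count grows with it.
required_ops = 10**6        # assumed work per reaction
width = required_ops // max_depth
print(f"needed width: ~{width:,} units per stage")    # ~50,000
```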