@Plinz:
There exists a wide range of estimates of the effective complexity of brains. Picking the one that best supports your nonexistence claim may mean that you are not maximizing the probability that you are correct.

@SimonDeDeo (replying to @Plinz, @MimeticValue):
Assuming you're not denying our best scientific evidence about neuron-level interaction, this seems like yet another argument that relies on "effective complexity". Can you tell me what "effective complexity" means?

@Plinz (replying to @SimonDeDeo, @MimeticValue):
Individual neurons are fairly nondeterministic. A large part of synaptic connectivity may just serve normalization. Effective complexity comes down to how many neurons we need to allocate in order to identify, learn, and compute a particular function reliably enough.
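
To make "reliably enough" concrete, here is a minimal sketch with an assumed noise model (independent Gaussian noise per unit) and made-up tolerances, not anything claimed in the thread: if each unit reports the target value plus noise, the number of units that must be pooled grows roughly with the square of the required precision.

```python
import numpy as np

# Hypothetical sketch: how many independent noisy units must be pooled so
# that their average lands within a given tolerance of the true value?
# Averaging n i.i.d. noisy readings shrinks the noise by roughly 1/sqrt(n).

rng = np.random.default_rng(0)

def units_needed(noise_std, tolerance, confidence=0.95, trials=2000):
    """Smallest pool size (searched by doubling) whose mean is within
    `tolerance` of the true value in at least `confidence` of the trials."""
    n = 1
    while True:
        # Each row is one trial: n noisy readings of a true value of 0.
        samples = rng.normal(0.0, noise_std, size=(trials, n))
        hit_rate = np.mean(np.abs(samples.mean(axis=1)) < tolerance)
        if hit_rate >= confidence:
            return n
        n *= 2

for tol in (0.5, 0.1, 0.02):
    print(f"tolerance {tol}: ~{units_needed(noise_std=1.0, tolerance=tol)} units")
```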

@SimonDeDeo (replying to @Plinz, @MimeticValue):
Your claim is now that (a) the complexity of neuron-level interaction is actually irrelevant to whole brain simulation, and that (b) we can simulate a brain with far fewer neurons than are actually found in humans. Do you have any arguments for this?

@Plinz (replying to @SimonDeDeo, @MimeticValue):
(a) I don't think that whole brain simulation is the way to go if you use a different computational substrate. (b) Yes, but that is a highly speculative and weak claim on my part, based on the low firing rate and reliability of neurons, and the high price of random access.

@SimonDeDeo (replying to @Plinz, @MimeticValue):
Low reliability: we know from fault-tolerant computation that N unreliable gates can be made reliable with a circuit of only O(N·log N) gates. So that can't help.
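
A toy illustration of the fault-tolerance point, with an assumed flip probability and plain majority voting rather than the full construction behind the O(N·log N) bound: replicating an unreliable gate and voting over the copies suppresses its error rate rapidly.

```python
import random

# Toy illustration: an unreliable AND gate that flips its output with
# probability p, made more reliable by running r independent copies and
# taking a majority vote over their outputs.

random.seed(1)

def noisy_and(a, b, p):
    out = a and b
    return (not out) if random.random() < p else out

def majority_and(a, b, p, r):
    votes = sum(noisy_and(a, b, p) for _ in range(r))
    return votes > r // 2

def error_rate(gate, trials=20000):
    errors = 0
    for _ in range(trials):
        a, b = random.random() < 0.5, random.random() < 0.5
        if gate(a, b) != (a and b):
            errors += 1
    return errors / trials

p = 0.05  # assumed per-gate flip probability
for r in (1, 3, 7, 15):
    rate = error_rate(lambda a, b: majority_and(a, b, p, r))
    print(f"{r:2d} copies: error rate ~{rate:.4f}")
```

Roughly speaking, choosing the per-gate redundancy proportional to log N across an N-gate circuit is what yields the overall O(N·log N) size; the classical constructions also need restoring stages, but the error-suppression idea is the same.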

@SimonDeDeo (continuing):
Low firing rate: if we could replace slow-firing neurons with a smaller number of fast ones, we should see our deep learning networks shrinking with clock speed (we don't).
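
The replacement being questioned here can be stated as time-multiplexing; the sketch below uses an assumed toy update rule and non-interacting units, and is not a claim that the substitution works for brains or deep networks: a unit that runs k times faster can stand in for k slow units per wall-clock tick by cycling through their stored states.

```python
# Hypothetical sketch of the trade being questioned: one unit that is k times
# faster emulates k slow units per wall-clock tick by cycling through their
# stored states (time-multiplexing). Toy update rule; the units must not
# interact within a tick for the equivalence to hold.

def slow_population_step(states, inputs, f):
    """k slow units, each updated once per tick, in parallel."""
    return [f(s, x) for s, x in zip(states, inputs)]

def fast_unit_tick(states, inputs, f):
    """One fast unit doing k sequential sub-steps within the same tick."""
    out = list(states)
    for i in range(len(states)):  # the single fast unit visits each slot in turn
        out[i] = f(out[i], inputs[i])
    return out

f = lambda s, x: 0.9 * s + 0.1 * x  # assumed leaky-integrator-style toy rule
states, inputs = [0.0, 1.0, 2.0], [1.0, 1.0, 1.0]
assert slow_population_step(states, inputs, f) == fast_unit_tick(states, inputs, f)
print("one fast unit reproduces k slow units under this toy rule")
```

Whether such a substitution preserves the learned function in a real network is exactly what the observation about non-shrinking deep-learning models calls into question.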

@SimonDeDeo (continuing):
"high price of random access"—not sure what this means. Are you saying that a von Neumann architecture will push us over the edge?

@Plinz (replying to @SimonDeDeo, @MimeticValue):
Computer architectures have evolved since the 1960s, and they will continue to follow whatever computational paradigm we want to pursue. Nervous systems don't have that degree of flexibility: they are constrained to local control, with no address space and expensive long-range routing.
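
A hypothetical illustration of the routing contrast, with toy numbers and no claim to model cortex: with an address space, reaching any stored value is a single indexed lookup; with purely local connectivity, a request has to be relayed neighbor to neighbor, so the cost grows with the distance to the target.

```python
# Hypothetical illustration of the routing contrast: random access through an
# address space vs. hop-by-hop relaying along a chain of locally connected
# units. Toy numbers only.

values = list(range(1000))

def random_access(addr):
    """Addressable memory: one indexed lookup, regardless of distance."""
    return values[addr], 1

def local_relay(addr):
    """Locally connected chain: the request is handed neighbor to neighbor,
    so the number of steps grows with the distance to the target."""
    steps, position = 0, 0
    while position != addr:
        position += 1  # forward the request to the next neighbor
        steps += 1
    return values[position], steps

for target in (1, 10, 500):
    _, ra_steps = random_access(target)
    _, relay_steps = local_relay(target)
    print(f"address {target}: random access {ra_steps} step, local relay {relay_steps} steps")
```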

@SimonDeDeo (replying to @Plinz, @MimeticValue):
Your claim is now that non-local control, address spaces, and long-range routing are sufficient to reduce the number of neurons required. Why should we believe this?

@Plinz (replying to @SimonDeDeo, @MimeticValue):
If you add constraints on the way functionality can be represented and learned, you will likely lose efficiency (i.e., you need more moving parts to achieve the same result). If that is true, then reducing those constraints increases the potential for greater efficiency.

@SimonDeDeo (replying to @Plinz, @MimeticValue):
Your argument would suggest that massive gains are possible in deep learning if we reduce our reliance on large neural networks. The opposite appears to be the case.