People who think that modeling the complexity and functionality of a mind will stay out of our reach for long (or forever) are probably being unrealistic. To show the limits of Bayes etc. we may need more mathematical proofs and less motivated reasoning.
Replying to @Plinz @MimeticValue
I don't think that an ANN is a good paradigm to replicate something like a brain. But once you add external memory and suitable tool libraries, it is not even clear to me that an ANN could not learn the equivalent of the transition function of a brain, for instance.
Replying to @Plinz @MimeticValue
"once you add external memory and suitable tool libraries, it is not even clear to me that an ANN could not learn the equivalent of the transition function of a brain" is, again and unfortunately, science fiction. What are the reasons to believe this?
Replying to @SimonDeDeo @MimeticValue
A lot of very complex systems have relatively simple transition functions. If you want to argue about how best to build a system, you need different arguments than for your much stronger claim that a system cannot possibly be built in a particular way.
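A minimal illustration of "very complex systems with relatively simple transition functions" (my example, not from the thread): Rule 110, an elementary cellular automaton, is Turing-complete even though its transition function is just a lookup over three bits.

```python
# Illustration (not from the thread): Rule 110, an elementary cellular
# automaton. Its transition function is a 3-bit lookup, yet the resulting
# system is Turing-complete (Cook 2004): simple local rule, complex behavior.

RULE = 110  # the rule number's binary digits ARE the transition table

def step(cells):
    """Apply the Rule 110 transition function once, with wraparound."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Evolve a single live cell for 20 generations.
row = [0] * 30 + [1] + [0] * 30
for _ in range(20):
    row = step(row)
```

The entire "program" of the system is one integer; all of the complexity lives in the dynamics it unfolds, not in the transition function itself.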
Replying to @Plinz @MimeticValue
This is a different argument from the one I was responding to. Your new argument relies on the idea that neuron-level response is "relatively simple". Our best knowledge of the brain suggests that it is, in fact, extremely complex.
Replying to @SimonDeDeo @MimeticValue
There exists a wide range of estimates of the effective complexity of brains. Picking the one that best supports your nonexistence claim may mean that you are not maximizing the probability that you are correct.
Replying to @Plinz @MimeticValue
Assuming you're not denying our best scientific evidence about neuron-level interaction, this seems like yet another argument, one that relies on "effective complexity". Can you tell me what "effective complexity" means?
Replying to @SimonDeDeo @MimeticValue
Individual neurons are fairly nondeterministic. A large part of synaptic connectivity may just serve normalization. Effective complexity comes down to how many neurons we need to allocate to identify, learn, and compute a particular function reliably enough.
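The "how many neurons do we need to allocate to compute a function reliably enough" question has a textbook toy version: pooling unreliable units under majority voting. A sketch with my own function names and numbers (not @Plinz's):

```python
# Toy model (my assumptions, not from the thread): independent units that
# each give the right answer with probability p, combined by majority vote.
from math import comb

def majority_reliability(n, p):
    """Probability that a majority of n independent units, each correct
    with probability p, yields the correct answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

def units_needed(p, target):
    """Smallest odd pool size whose majority vote meets the target reliability."""
    n = 1
    while majority_reliability(n, p) < target:
        n += 2
    return n

# e.g. units that are right 90% of the time, pooled until 99.9% reliable:
pool = units_needed(0.9, 0.999)
```

Under this toy model, "effective complexity" is the pool size, and it depends as much on the required reliability as on the function being computed.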
Replying to @Plinz @MimeticValue
Your claim is now that (a) the complexity of neuron-level interaction is actually irrelevant to whole brain simulation, and that (b) we can simulate a brain with far fewer neurons than are actually found in humans. Do you have any arguments for this?
(a) I don’t think that whole brain simulation is the way to go if you use a different computational substrate. (b) Yes, but that is a highly speculative and weak claim on my part, based on the low firing rate and low reliability of neurons, and the high price of random access.
Replying to @Plinz @MimeticValue
Low reliability: we know from fault-tolerant computation that N unreliable gates can be made reliable with a circuit of size only O(N log N). So that can't help.
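The O(N log N) figure (von Neumann-style fault tolerance) can be checked numerically under simplifying assumptions of my own (independent gate failures, plain majority replication, union bound over the circuit): the per-gate replication factor grows roughly logarithmically in circuit size, so total size is about N log N.

```python
# Rough sketch (my assumptions): how much majority-vote replication a
# circuit of n_gates unreliable gates needs to work with probability 1 - eps.
from math import comb

def majority_fail(k, p_gate):
    """Failure probability of a k-way majority vote over gates that each
    fail independently with probability p_gate (k odd)."""
    return sum(comb(k, j) * p_gate**j * (1 - p_gate) ** (k - j)
               for j in range(k // 2 + 1, k + 1))

def replication_needed(n_gates, p_gate, eps):
    """Smallest odd replication k so that all n_gates majority-gates work
    with probability >= 1 - eps (union bound: per-gate budget eps/n_gates)."""
    budget = eps / n_gates
    k = 1
    while majority_fail(k, p_gate) > budget:
        k += 2
    return k

# Replication grows ~log(N) as the circuit grows by factors of 1000:
factors = [replication_needed(n, 0.01, 0.01) for n in (10**3, 10**6, 10**9)]
```

The point of the logarithm: unreliability alone only buys a log-factor of extra hardware, not the orders of magnitude that would be needed to shrink the unit count dramatically.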
Replying to @SimonDeDeo @Plinz
Low firing rate: if we could replace slow-firing neurons with a smaller number of fast ones, we should see our deep learning networks shrinking with clock speed (we don't).