@GaryMarcus' nativist argument against statistical learning AI seems to rest on the premise that evolution is fundamentally different from a statistical learning paradigm.
This is false (evolution IS a form of statistical learning), but I am grateful that Gary makes that point.
Replying to @Plinz @GaryMarcus
I would argue biological evolution is not a form of statistical learning, since there is no learning, only pure selection among random changes in the genomic code and the interactions of biomolecules, entirely determined by the specific dependencies and constraints of the environment.
Replying to @Derya_ @GaryMarcus
Learning is the search for better function approximations, and so is evolution. Gradient descent works only when you can identify a gradient. Otherwise you are stuck with evolutionary methods.
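The contrast drawn here can be made concrete. Below is a minimal, purely illustrative sketch (not from the thread): both routines minimize the same toy function, but gradient descent requires an analytic gradient, while a (1+1) evolution strategy uses only random mutation plus selection, the gradient-free regime the tweet refers to. All names here are hypothetical.

```python
import random

def f(x):
    # Toy objective to minimize: a parabola with its optimum at x = 3.
    return (x - 3.0) ** 2

def gradient_descent(x=0.0, lr=0.1, steps=200):
    # Possible only because we can identify the gradient f'(x) = 2(x - 3).
    for _ in range(steps):
        x -= lr * 2.0 * (x - 3.0)
    return x

def one_plus_one_es(x=0.0, sigma=0.5, steps=2000, seed=0):
    # (1+1) evolution strategy: propose a random mutation, keep it only if
    # it is no worse. No gradient information is ever used.
    rng = random.Random(seed)
    best = f(x)
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, sigma)
        if f(candidate) <= best:  # selection step
            x, best = candidate, f(candidate)
    return x

print(gradient_descent())  # converges near the optimum x = 3
print(one_plus_one_es())   # also approaches x = 3, via mutation + selection
```

The point of the sketch is the asymmetry: the first routine breaks the moment `f` has no usable gradient, while the second still works, at the cost of far more evaluations.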
Replying to @Plinz @GaryMarcus
That’s my point: in biological evolution there is no search for better approximations, only selection out of many, many random possibilities, and only on the basis of selective survival advantage, not even necessarily good solutions...
Replying to @Derya_ @GaryMarcus
I think of evolution of minds mostly as a search for meta learning systems. I suspect that the human mind is the first fully general function approximator (we are the first to use externalizable Turing complete languages), and I agree with Gary that evolution fixes many priors.
What do you mean by using Turing-complete languages? Every physical system, including the human brain, is bounded; there are no actual Turing machines.
Like any other physical system, we cannot perform an operation that would require unlimited resources or computational steps. But until we run into resource limits, there is nothing we cannot compute, and we can externalize our memory.
The problem is that in modeling biological systems you will very quickly run into major resource limits using Turing-machine-style computational steps. So beyond the philosophical debate, the practical application of that paradigm seems limited to me; I'd like to be proven wrong, though :)
The infinite tape is a red herring that we inherit from classical math. We need to build our foundations with automata that never have to run to infinity, I think. Physical systems have finite resolution and resources.
I can add that there is a nice discussion of this suggestion (that is, abandoning TMs in favor of automata) in the introduction of the book http://www.springer.com/us/book/9789027715104. That's a good read!
I think there is a demarcation between creating knowledge and the learning process (human or artificial): the irreducibility of the learning product versus the reducibility of the evolutionary one. That is why I guess they are different.