Current AI doesn't even get near *mouse* intelligence--and a great deal of what mice (and most other animals) do is innate. If we could achieve artificial mouse intelligence, we'd have a good foundation for human intelligence.
Replying to @TonyZador @ylecun and
I agree. At least if that is the type of artificial intelligence we are after - the kind that allows a wee beastie like a mouse to do all the cognitive operations necessary for mouse survival.
Replying to @WiringTheBrain @TonyZador and
And I would argue most of the brain is devoted to this kind of adaptive control of behavior. And it would be super cool to build AI like that - as artificial agents...
Replying to @WiringTheBrain @TonyZador and
But is that what AI folks want? Or are they after the ability to perform abstract reasoning and other really intellectual operations better than humans can?
Replying to @WiringTheBrain @ylecun and
Indeed, what AI folk want is abstract reasoning. They tried jumping straight to it under the Minsky Symbolic AI program, which failed. Maybe we can jump straight to human cognition with ML, but I'm skeptical. I think we need to pass through mouse intelligence. From there it's a short jump to human.
Replying to @TonyZador @WiringTheBrain and
This is the ratbrain.h hypothesis: that to get humanbrain from ratbrain, all you need to change is some constants in ratbrain.h. If true, it predicts an abrupt change in the power of what we can build. Consistent with AGI alarmism.
Replying to @BAPearlmutter @TonyZador and
Seriously doubt it. If it were just one parameter change from rat-level to human-level intelligence, human-level intelligence would likely be much more widespread, given the large adaptive advantage.
Replying to @GaryMarcus @BAPearlmutter and
The huge adaptive advantage of human-level intelligence apparently only happens at our level. Very smart apes and hominids were around for millions of years, but the population didn't explode. Evolution doesn't get to anticipate how useful something will be before it gets there.
Replying to @TonyZador @BAPearlmutter and
No, of course not. But improvements to vision have happened many, many times, whereas the transition from primate-like cognition to human-like cognition has happened only once.
Replying to @GaryMarcus @BAPearlmutter and
So maybe the gradient of fitness with respect to extra visual acuity is kind of flat, whereas the gradient of fitness with respect to extra intelligence hits an inflection point near humans (language).
The gradient is less flat because you need more random mutations of just the right sort to happen simultaneously, which is to say it is less probable.