C) Smaller point: efficiency, e.g. speed, counts too. If it takes you a year of agonising to select cilantro and me 2 seconds, and we both have the same goal and make the same selections each time, I eat all the cilantro before you do. Not always relevant, but there are domains where speed counts.
Replying to @MattJoass @wolfejosh
D) Very small point: AGI only needs to perform 'at least as well as', not 'better' (though I appreciate you were referring to 'Superintelligence'). AGI is particularly relevant for the set 'every cognitive task that is economically valuable'.
Replying to @MattJoass @wolfejosh
In "The Beginning of Infinity" which I cannot recommend highly enough, DD links the process by which knowledge grows to the concept of personhood & arguments about why progress in AGI has been blocked. I use some of that in critiquing Bostrom here: http://www.bretthall.org/superintelligence.html …
Replying to @ToKTeacher @wolfejosh
Thanks for the great read Brett! The idea that ‘no program can create new knowledge’ seems to be a central part of your critique of Bostrom. AlphaGo Zero identified new moves that humans are now learning from. This seems to have crossed the threshold? https://www.nature.com/articles/nature24270
Replying to @MattJoass @wolfejosh
No “threshold” at all. A computer that calculates heaps of different moves is going to find some no one knew before. But it’s “brute force”, not creativity. When AlphaGo wants to build itself a body so it can learn tennis...or travel to Paris to do stand-up comedy, we’ll talk ;)
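To make the "brute force" point concrete, here is a minimal sketch (a hypothetical toy, not AlphaGo's actual method, which couples a neural network to Monte Carlo tree search): exhaustive minimax over the game of Nim. Purely mechanical enumeration of every line of play surfaces winning moves nobody showed the program, with nothing resembling creativity involved.

```python
# Toy illustration of brute-force move search (hypothetical example,
# not AlphaGo's algorithm): exhaustive minimax over Nim with memoization.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(heaps):
    """Return (can_win, move) for the player to act; move is (heap_index, take)."""
    if all(h == 0 for h in heaps):
        return (False, None)  # no objects left: the previous player took the last and won
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            child = list(heaps)
            child[i] -= take
            opponent_wins, _ = best_move(tuple(child))
            if not opponent_wins:        # found a reply that leaves the opponent lost
                return (True, (i, take))
    return (False, None)                 # every move leaves the opponent winning

print(best_move((3, 4, 5)))              # -> (True, (0, 2)): take 2 from the first heap
```

The same enumerate-and-score structure scales, with pruning and heuristics, to games far too large to search exhaustively; novelty in the output never required novelty-seeking in the program.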
Replying to @ToKTeacher @MattJoass
It’s also parametrically constrained. There is no calculation that says flip the board and storm out, or turn the pieces into projectiles and induce a human to mechanically throw them at the other player.
Replying to @wolfejosh @MattJoass
Yes. So there’s a finite list of *types of tasks* it can attempt - & we can describe what they are. Which is to say: it’s not a universal explainer. Ergo: it’s no AGI. With an actual AGI it’s impossible in principle to list the types of tasks it might attempt. As with all people.
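A hypothetical sketch of the point in the last two tweets, assuming a bare-bones Go engine (the names here are illustrative, not any real engine's API): the complete set of actions the program can ever attempt is fixed in advance by its designers and trivially enumerable.

```python
# Hypothetical sketch: a Go engine's entire action space, enumerated.
# Nothing outside this list is ever available to the program.
from itertools import product

BOARD = 19

def legal_action_space():
    """Every action the engine can ever attempt: place a stone, pass, or resign."""
    placements = [("place", row, col) for row, col in product(range(BOARD), repeat=2)]
    return placements + [("pass",), ("resign",)]

actions = legal_action_space()
print(len(actions))   # 363 = 19*19 + 2: the complete, finite list of task types
# "Flip the board and storm out" is not in this list, and never can be
# without a human redesigning the program.
```

No amount of self-play adds an entry to this list; contrast that with a person, for whom no such enumeration is possible even in principle.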
Replying to @DKedmey @ToKTeacher
FWIW, I have come to suspect that intelligence is the ability to make models (usually in the service of regulation), and there is a class of universal function approximators that can (given enough time and resources) solve any problem that can be solved by a computational system.
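For readers who want that claim made tangible: a minimal sketch of the universal approximation idea, fitting a one-hidden-layer network with random features to an arbitrary smooth function by least squares. This illustrates the theorem only; it is not a claim about how brains or AGIs actually learn, and all parameter choices below are arbitrary.

```python
# Minimal sketch of universal function approximation: a one-hidden-layer
# tanh network with random (fixed) input weights, output weights solved
# by least squares. More hidden units -> closer fit to the target.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(3 * x)                          # arbitrary target function

H = 100                                    # hidden units
W = rng.normal(scale=3.0, size=(1, H))     # random input weights, never trained
b = rng.normal(scale=3.0, size=H)
phi = np.tanh(x @ W + b)                   # hidden-layer features

w_out, *_ = np.linalg.lstsq(phi, y, rcond=None)   # solve the output layer
err = np.max(np.abs(phi @ w_out - y))
print(f"max approximation error: {err:.4f}")      # small, and shrinks as H grows
```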
I suspect that our brain implements such a universal learning system, together with attentional biases, some pre-wiring, and a complex evolved reward function. This may be all that makes us human (together with our training on environmental data).
If that hunch is true, then intelligence is much more substrate independent than most folks in the cognitive sciences would think. There is also reason to think that the core functionality must be easy to implement (even though we do not have the right idea yet).
So, I'd say: humans are generally intelligent (but it requires breaking out of certain local optima that are epistemology-breaking belief attractors, due to our co-evolution with religion). That means that a scalable generally intelligent machine will beat humans on all dimensions.
Replying to @Plinz @ToKTeacher
Appreciate these thoughts and will spend time thinking about it. One quick question: what happens if our in-built / culturally endowed regulation is just a starting point... and modelling can rewrite it? Could intelligence serve something more?