A general AI will easily be taken down by a simpler, more specialized AI whose only purpose is to take it down, just as a simple virus can kill a human. It will be even easier if the specialized killer AI has access to computing resources similar to the general AI's.
-
If that were a certainty (not merely a possibility), why did evolution not favor a viral goo over complex organisms?
-
Evolution doesn't favor complex organisms. Complex organisms just happened to occupy a niche in the biosphere.
-
Replying to @IntuitMachine, @Plinz, and others
IIRC Nick Bostrom covers this topic in reasonable detail in his book Superintelligence. Good reference.
-
Replying to @FieryPhoenix7, @IntuitMachine, and others
Superintelligence is a dark fantasy in which humankind places all of its resources at the disposal of an untested and unsupervised AI system. Meanwhile they obsess over turning off the machine because it might contain a mind.
-
Replying to @tdietterich, @FieryPhoenix7, and others
Thinking machines are just an evolutionary step beyond our limitations. Little AI engines are birthed every day, and try as we might, there is no way to predetermine their actions decades later. "Access to resources" is the fantasy part.
-
Replying to @sd_marlow, @tdietterich, and others
How is 'access to resources' a fantasy when we already have semi-autonomous systems that run the global economy?
-
Replying to @IntuitMachine, @sd_marlow, and others
Would you put the entire resources of the earth at the disposal of a paper-clip-making robot?
-
Replying to @tdietterich, @sd_marlow, and others
It's not a question of a rational person deciding how to allocate resources. It's a question of human civilization deciding, and so far humans have a miserable track record (see: climate change).
-
Replying to @IntuitMachine, @sd_marlow, and others
Example resource questions: How big a machine do you run the system on? Does it have access to 3D printers, DNA synthesizers, and other robotic devices? What about access to weapons? Financial instruments? The web? How long do you let it run?
-
I don't think we should think of autonomous intelligent systems as human-scale robots. If the AGI problem can be solved (and I don't see a convincing technical or philosophical reason why it can't), there may be sentient corporations and nation-states.
-
Replying to @Plinz, @tdietterich, and others
One might argue this is already the case.
-
Replying to @mattsiegel, @Plinz, and others
On that score, Ted Chiang offers some wisdom: https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway… As does Shivani, if more abrasively: https://thebaffler.com/salvos/oculus-grift-shivani… Insights like theirs are one reason I do as much political economy research now as I do tech law & policy.
End of conversation