Many are terrified of Roko’s Basilisk, but I’m more worried about Grey Goo/the Paperclip Maximiser. The idea of nanotech carrying out a mundane task so efficiently that it essentially gives reality cancer is more fucked up, and more plausible, than a vindictive cyber-Yahweh.
-
Replying to @G_S_Bhogal
If the paper clip maximizer is general purpose enough to deal with all the edge cases then we can probably just throw a paper clip maximizer minimizer at it and they'll cancel each other out. I can live in a half paper clip world.
Replying to @atthatmatt
Thing is, a paperclip maximiser would have had a headstart, and, after its intelligence explosion, would probably anticipate the formation of a paperclip maximiser minimiser, eliminating the threat at its root (i.e. us).
Replying to @G_S_Bhogal
Weird that you attribute absolutely unlimited, flexible intelligence to it but not the ability to question its own paperclip goals. It can do anything except moderate paper clips, or pivot to the search for the one perfect paper clip, or abstractly contemplate paper clips.
Replying to @atthatmatt
The point of the thought experiment is that the AI is single-minded in its goal of maximising paper clips, no matter how smart it gets. Intelligence =/= motivation. Even the smartest human on earth wants, in the end, to engage in filthy animal sex with someone hot.
Replying to @G_S_Bhogal
Ya, I'm familiar with the idea, that's why I'm questioning the extrapolation of the premise. Things are always more boring in practice than in our imagination. Intelligence seems to interfere with rote, copy paste results like paper clip maximizing, not encourage it.
Replying to @atthatmatt @G_S_Bhogal
So only a stupid system would actually pursue a stupid goal to completion. But a stupid system would fail to deal with enough special cases and changing circumstances to actually succeed at goal completion. A system smart enough would also be smart enough to switch goals.
It depends on what kind of intelligence it exhibits. The kind you are speaking of is contemplative intelligence (i.e. a higher state of consciousness), but this isn't implied in the premise, which appears to deal more with practical intelligence (getting things done efficiently).
-
Replying to @G_S_Bhogal
Just cuz you can categorize them separately doesn't mean they're really separate. I think there's a reasonable chance you can't dominate the world without the ability to question your own motivations. It's critical for getting unstuck.