https://xkcd.com/1968/ is #TheLastDerail in a nutshell. I can try to imagine hypotheses, but I'm not sure why Randall Munroe thinks this is a clever thing to say. If I wrote a fictional character saying this straight out, I'd be accused of writing a 2D straw character.
"Some people worry about the deaths of billions, the destruction of humankind, the waste of the cosmic endowment and the loss of unimaginably vast and glorious intergalactic civilizations that would've been. Lol! I worry about something else which I think is MUCH higher-status."
Replying to @ESYudkowsky
I wondered why he couldn't worry about two things.
Replying to @fnedrik
Because the point is to put down people who worry about the second thing. I can imagine theories of 2D villainous reasons to do this, like pushing yourself up by putting down others. If there's a 3D reason, I don't know it.
Replying to @ESYudkowsky @fnedrik
I don’t read it that way. I think he’s fully on your side in stressing the importance of AI risk. Diff is, AGI is some unknown years out with possibly big unseen hurdles ahead. Dimmer AIs controlling swarms of weaponized drones is almost already upon us, and it’s kind of a big deal
Replying to @rob4lderman @fnedrik
Stressing risks unrelated to AGI and derailing AGI conversations by crying "AI risk!" ain't on my side. Drones are as unrelated to AGI as assault rifles are to drones. Derailing a drone conversation with "Let's talk about currently deadly machines!" wouldn't be on Munroe's side.
Replying to @ESYudkowsky @fnedrik
What? It’s all AI risk. AI swarms are much different than just drones. And the comparisons to rifles? Geez. Nobody is trying to derail your concerns about AGI dude
Replying to @rob4lderman @fnedrik
It's all "machine risk". No, better yet, it's all "objects made of matter risk". Why worry about drone thingies when bullet thingies are higher-status and less nerdy?
Replying to @ESYudkowsky @fnedrik
I’m in fact a card-carrying nerd myself, I’m not one to shy away from nerdiness. But if you really think that weaponized drone swarms coordinated by machines processing massive amounts of information in real time don’t fall under “AI risk”.... well then that’s what you think
Then I'm uninterested in AI risk. I specialize in the mostly unrelated issues of AGI. Of course they're both made of computers, and likewise computers and assault rifles are both made of matter; but AI and AGI and assault rifles all three have few problems or solutions in common.
(*Rarely* there is work on ML robustness general enough that it might genuinely scale up to AGI or to components of AGI. "Adversarial examples" is one honest and unforced example that comes to mind.)
Replying to @ESYudkowsky @fnedrik
I disagree with you on most of that, but I’m happy to have you focus entirely on AGI, which you do very well, while Randall and I fret about lesser machine intelligences that might nonetheless be civilizational game changers