https://xkcd.com/1968/ is #TheLastDerail in a nutshell. I can try to imagine hypotheses, but I'm not sure why Randall Munroe thinks this is a clever thing to say. If I wrote a fictional character saying this straight out, I'd be accused of writing a 2D straw character.
Replying to @ESYudkowsky
Not saying that I would agree with Monroe, but it seems pretty clear to me why he might think that "I'm more worried about a concrete risk that's looming right now than a long-term speculative one, let's focus on first getting through the urgent one" would be important to say.
Replying to @xuenay @ESYudkowsky
These are really two different topics, one with high probability and moderate impact, and one with unknown probability and terminal impact. They should not be conflated despite both being somewhat related to AI.
Replying to @Plinz @ESYudkowsky
Also xkcd is humor not scholarly analysis: this would hardly be the first time that two unrelated things were conflated for the purpose of making a joke. :)
Replying to @xuenay @ESYudkowsky
xkcd is generally more interested in insight than in humor, which makes this cartoon so perplexing. And Randall Munroe is not exactly a normie whisperer trying to deliver the conclusion that is in highest demand.
Replying to @Plinz @ESYudkowsky
I think the "stupid normie status signaling" hypothesis is uncharitable and wrong. I think it's totally reasonable for smart geeks to be mainly worried about dystopian scenarios brought by ML, and to find AGI concerns a silly focus in comparison (again, not that I'd agree, but).
+1. There are many commonly held clusters of views on things like timelines, the usefulness of present-day safety work, and the scale of misuse risks that could justify this conclusion without incoherence, even if those views are wrong.
You're being too charitable, which is also a bias. This wasn't a neutral "of these two risks, here's what I think their relative probabilities are", it was an obvious putdown of people who visibly care about the second risk.
Replying to @ESYudkowsky @Miles_Brundage
Out of curiosity, do you estimate that risk 2 is worsened by xkcd applying social pressure to folks who are panicking about risk 2 but haven't considered risk 1? (Not that I think RM is somehow intentionally being strategic; just wondering.)
Risk 2 is worsened by people who think they can gain status by putting down the act of talking about it. Risk 2 is also worsened by people who are thinking panicked instead of computer-sciencey thoughts about it. Not considering Risk 1 is fine; it's unrelated to Risk 2.