"Some people worry about the deaths of billions, the destruction of humankind, the waste of the cosmic endowment and the loss of unimaginably vast and glorious intergalactic civilizations that would've been. Lol! I worry about something else which I think is MUCH higher-status."
-
-
-
-
-
Not saying that I would agree with Munroe, but it seems pretty clear to me why he might think that "I'm more worried about a concrete risk that's looming right now than about a long-term speculative one; let's focus on getting through the urgent one first" would be important to say.
-
These are really two different topics, one with high probability and moderate impact, and one with unknown probability and terminal impact. They should not be conflated despite both being somewhat related to AI.
-
Also, xkcd is humor, not scholarly analysis: this would hardly be the first time that two unrelated things were conflated for the purpose of making a joke. :)
-
xkcd is generally more interested in insight than in humor, which makes this cartoon so perplexing. And Randall Munroe is not exactly a normie whisperer trying to deliver the conclusion that is in highest demand.
-
I think the "stupid normie status signaling" hypothesis is uncharitable and wrong. I think it's totally reasonable for smart geeks to be mainly worried about dystopian scenarios brought about by ML, and to find AGI concerns a silly focus in comparison (again, not that I'd agree, but).
-
+1. There are many commonly held clusters of views on things like timelines, the usefulness of present safety work, and the scale of misuse risks that could justify this conclusion without incoherence, even if they're wrong.
-
You're being too charitable, which is also a bias. This wasn't a neutral "of these two risks, here's what I think their relative probabilities are"; it was an obvious putdown of people who visibly care about the second risk.
-
I'm not sure whether we disagree, or if so, about what. Yes, it was non-neutral and a putdown.
-
New conversation -
-
-
Ha.. I understood it as a warning against anthropomorphising an AI and giving it an evil agenda (rebelling against humans, having consciousness, etc.) when the real danger could be utility-function misalignment in the near future. Those unstoppable killer robots are a civilization stopper right there.
-
Granted... it's not a universe-level existential risk, but then self-awareness and self-improvement can come along for the ride sometime after they wipe us out, anyway.
End of conversation
New conversation -
-
-
He also did that awful comic about free speech, which now gets trotted out everywhere. It's the whole issue where people feel they can just dismiss ideas from the outgroup. TBH, it's partly your fault for tying rationality and AI alignment to libertarian political values :/
-
I'd say there was an effort to keep these ideas free of politics. See "Politics is the Mind-Killer".
-
Words, not actions. In reality, the whole thing began on a libertarian email list, the blog was originally hosted on Overcoming Bias, and the Less Wrong comment sections were filled at first with libertarians and eventually with neoreactionaries. The conclusion was kinda inevitable.
End of conversation
New conversation -
-
-
How would you have drawn it?