https://xkcd.com/1968/ is #TheLastDerail in a nutshell. I can try to imagine hypotheses, but I'm not sure why Randall Munroe thinks this is a clever thing to say. If I wrote a fictional character saying this straight out, I'd be accused of writing a 2D straw character.
"If AI xrisk were under 5%, we would have more important xrisks to worry about." I have always rejected this line of argument, and people on the other side of this debate are correct to see it as a poor way of thinking: it encourages neglecting the arguments about the actual probability.
-
-
I did not make a claim about the actual probability here, btw. I also reject the idea that we should only worry about the biggest xrisk we can find. It might be a good idea for particular people to specialize in particular worries.
-
FWIW I agree with this. This specialization is (non-locally) valuable.
End of conversation
New conversation -
-
-
Very sensible. But the percentages are irrelevant if there is in fact a category error at the root of the comparison. Admittedly this is an "empiricist or rationalist?" thing, but the "billions die" position is purely conjectural.
The #TheLastDerail position is not.
-
Why? Is the idea that fully automated drones would increase the number of victims in military confrontations conjecture as well?
-
Again, the risk of fully automated drones killing n humans (at human behest) can be assessed empirically right now: it's 1.0, for some definitions of "human" and "fully automated". The "everyone dies" xrisk is extrapolated from a model with no empirical constraints.
End of conversation