https://xkcd.com/1968/ is #TheLastDerail in a nutshell. I can try to imagine hypotheses, but I'm not sure why Randall Munroe thinks this is a clever thing to say. If I wrote a fictional character saying this straight out, I'd be accused of writing a 2D straw character.
-
Replying to @ESYudkowsky
Not saying that I would agree with Munroe, but it seems pretty clear to me why he might think that "I'm more worried about a concrete risk that's looming right now than a long-term speculative one; let's focus on getting through the urgent one first" would be important to say.
-
Replying to @xuenay @ESYudkowsky
These are really two different topics, one with high probability and moderate impact, and one with unknown probability and terminal impact. They should not be conflated despite both being somewhat related to AI.
-
Calling the impact of autonomous weapons "moderate" requires a hyperlocal -- some would call it parochial -- perspective.
-
Someone who cannot realize that hundreds of thousands dying is a moderate problem when compared to everyone dying is lacking perspective, no?
-
No. The point was that 100k deaths only look "moderate" because you're comparing them to something from a different epistemic category. One thing is already happening, even if you can't see it. The other (the hyperlocal one) is literally all in your head.
-
The probability of superhuman-level AI within less than 100 years is close to 100%, imho. The probability that we can make ALL AI safe is low. Still, even if you put AI Xrisk at 0.001%, you ought to worry more about it than about fully automated predator drones, no?
-
If AI Xrisk were under 5%, we would have more important xrisks to worry about. I have always refused this line of argument, and people on the other side of this debate are correct to see it as a poor way of thinking: it fosters neglect of the arguments about probability.
-
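The expected-value comparison invoked in the exchange above can be sketched numerically. All figures below are illustrative assumptions for the sake of the arithmetic (the 0.001% comes from the thread; the drone probability, drone death toll, and world population are placeholders), not estimates endorsed by anyone in the conversation:

```python
# Illustrative expected-value comparison; all numbers are assumptions
# made for the sake of the argument, not anyone's actual estimates.

drone_prob = 0.9               # assumed probability of large-scale drone warfare
drone_deaths = 100_000         # assumed death toll ("moderate impact")

xrisk_prob = 0.00001           # the 0.001% figure from the thread
xrisk_deaths = 8_000_000_000   # "everyone dies"

ev_drones = drone_prob * drone_deaths
ev_xrisk = xrisk_prob * xrisk_deaths

print(f"Expected deaths, drones: {ev_drones:,.0f}")
print(f"Expected deaths, x-risk: {ev_xrisk:,.0f}")
```

Under these made-up numbers the two expected tolls land in the same order of magnitude (90,000 vs. 80,000), which is the arithmetic behind the "even at 0.001% you ought to worry" question; the dispute in the thread is whether the two probabilities belong in the same calculation at all.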
Very sensible. But the percentages are irrelevant if there is in fact a category error at the root of the comparison. Admittedly this is an "empiricist or rationalist?" thing, but the "billions die" position is purely conjectural. The #TheLastDerail position is not.
-
Why? Is the idea that fully automated drones would increase the number of victims of military confrontations conjecture as well?
-
Again, risk of fully automated drones killing n humans (at human behest) can be assessed empirically right now. It's 1.0, for some definitions of "human" and "fully automated". The "everyone dies" xrisk is extrapolated from a model with no empirical constraints.