https://xkcd.com/1968/ is #TheLastDerail in a nutshell. I can try to imagine hypotheses, but I'm not sure why Randall Munroe thinks this is a clever thing to say. If I wrote a fictional character saying this straight out, I'd be accused of writing a 2D straw character.
The probability of superhuman-level AI within less than 100 yrs is close to 100% imho. The probability that we can make ALL AI safe is low. Still, even if you put AI Xrisk at 0.001%, you ought to worry more about it than about fully automated predator drones, no?
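The comparison above is an expected-loss argument: a tiny probability can still dominate if the stakes are large enough. A minimal sketch of that arithmetic, where every number (the stakes figures, the drone-catastrophe cap, and the longtermist move of counting future lives) is an illustrative assumption, not a claim made in the thread:

```python
# Illustrative expected-loss comparison for the tweet above.
# ALL figures here are hypothetical assumptions for illustration.

def expected_loss(probability, stakes):
    # Expected loss = P(event) * harm if the event occurs.
    return probability * stakes

# Assumed stakes: extinction forecloses current AND future lives
# (1e16 is a hypothetical longtermist-style figure), while a
# drone-warfare catastrophe, however bad, is bounded (assume 1e7).
ai_xrisk   = expected_loss(probability=1e-5, stakes=1e16)  # the 0.001% figure
drone_risk = expected_loss(probability=1.0,  stakes=1e7)   # assume it's certain

print(ai_xrisk > drone_risk)  # True
```

Under these assumed numbers the 0.001% risk still dominates; with smaller stakes (e.g. only present lives) the comparison can flip, which is exactly why the probability estimate itself matters.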
If AI Xrisk were under 5%, we would have more important xrisks to worry about. I have always rejected this line of argument, and people on the other side of this debate are right to see it as a poor way of thinking: it encourages neglecting the arguments about the actual probability.
I did not make a claim about the actual probability here, btw. I also reject the idea that we should only worry about the biggest xrisk we can find. It might be a good idea for particular people to specialize in particular worries.
Again, no. You're comparing an empirical statement to a conjectural statement. They are not in the same epistemic category. Adding numeric values doesn't fix that. Also you were just saying it's an "unknown" probability. Now all of a sudden it's a statistical certainty?