Conversation

You get that he believes (unaligned) AGI would wipe out humanity, right? Would you disagree that (some survivors) is better than (no survivors)?
The part where he, Eliezer Yudkowsky, is so incredibly overconfident in his own projections of AI doom that he's willing to kill the vast majority of people on Earth is what's *extremely* not okay. (Which probably wouldn't even prevent future generations from building AGI!)
Eliezer's tradeoff (some humans live vs. no humans live) makes perfect sense IFF you think there is a 99.9999% chance of all humans dying from an insufficiently-aligned AI. If you explained how you would dispute that premise, a productive conversation could follow.
No. But people seem to have misread what he wrote to think so.
Quote Tweet
Oh, for fuck's sake. I'll say it more plainly. I did not propose first use of nuclear weapons, by anyone, on anything. If anyone tells you I said otherwise, mark them down for intellectual dishonesty, lack of seriousness, and grossly misrepresenting someone else's position. twitter.com/ESYudkowsky/st…