This is... not okay.
This is *extremely* not okay.
I fully get that he believes this. I wrote ~12,000 words explaining why I don't: lesswrong.com/posts/wAczufCp
The part where he, Eliezer Yudkowsky, is so incredibly overconfident in his own projections of AI doom that he's willing to kill the vast majority of people on Earth is what's *extremely* not okay.
(Which probably wouldn't even prevent future generations from building AGI!)
Eliezer's tradeoff (some humans live vs. no humans live) makes perfect sense IFF you think there is a 99.9999% chance of all humans dying from an insufficiently aligned AI. If you explained how you dispute that premise, a productive conversation could follow.
I spent ~12,000 words disputing that premise.
Spoiler: a productive conversation did not follow. lesswrong.com/posts/wAczufCp
No. But people seem to have misread what he wrote to think so.
Quote Tweet
Oh, for fuck's sake. I'll say it more plainly. I did not propose first use of nuclear weapons, by anyone, on anything.
If anyone tells you I said otherwise, mark them down for intellectual dishonesty, lack of seriousness, and grossly misrepresenting someone else's position. twitter.com/ESYudkowsky/st…
I don't know if you're for or against his abortion position here, but the line is super arbitrary no matter what.