Thread with some of my thoughts in response to this piece, which has a bunch of stuff about how the situation with AGI is fucked (a conclusion I agree with)
(Not sure what sort of focused time I'll have when, but I'll try to keep adding stuff here when I do)
lesswrong.com/posts/uMQ3cqWD
Seems like the obvious thing to do would be to comment on LW, not twitter, but for whatever reasons I notice that I'd rather post here, and I basically don't expect myself to successfully post my thoughts there, so 🤷🏽‍♀️
Quote Tweet
"Tom, if it were that alone, I shouldn't hesitate. Or if there were any single good reason, I'd tell you at once. The trouble is I have a hundred reasons, none of them good."
- Game of Kings by Dorothy Dunnett
I love this line! Especially "a hundred reasons, none of them good".
Thank you very much to Eliezer for writing it, and I very much agree that the thing where it keeps being literally him doing this stuff is quite a bad sign :-(.
And not that I expect anyone to be confused on this point, but I will nonetheless actively disclaim being the sort
1. I parse the original as "a collection of EY's thoughts on why safe AI is hard". They're EY's thoughts; why would someone else (other than ) write a collection of EY's thoughts?
My shoulder Eliezer (who I agree with on alignment, and who speaks more bluntly and with less hedging than I normally would) says:
1. The list is true, to the best of my knowledge, and the details actually matter.
Many civilizations try to make a canonical list like this in 1980 and end up dying where they would have lived *just* because they left off one item, or under-weighted the importance of the last three sentences of another item, or included ten distracting less-important items.
2. There are probably *not* many civilizations that wait until 2022 to make this list, and yet survive.
3. It's true that many of the points in the list have been made before. But it's very doomy that they were made by *me*.