Trying to compile a list of the best arguments in favour of AI Doomerism. Here's where I'm at so far; please contribute your favourites!
Yoshua's note:
AI could defeat all of us combined
Why AI alignment could be hard with modern deep learning
"The Alignment Problem from a Deep Learning Perspective" (paper): arxiv.org/pdf/2209.00626
Paul Christiano has written some relatively banal doom scenarios, in which even governance that improves human lives and power along all measurable axes is paired with so much manipulation and deception that humans are effectively marginalized:
alignmentforum.org/posts/HBxe6wdj
The sequel is very compelling too:
alignmentforum.org/posts/AyNHoTWW
Some detailed scenarios of how we could lose influence have been written by @AndrewCritchCA, in which power is delegated to AI gradually because it's effective, and anyone pushing back is marginalized:
alignmentforum.org/posts/LpM3EAak
This one is long but should definitely be on the list:
cold-takes.com/without-specif
The DeepMind safety team also put together a good literature review trying to summarize different threat models: alignmentforum.org/posts/wnnkD6P2
Joe Carlsmith's is the best full breakdown of the argument I'm aware of: arxiv.org/abs/2206.13353 (shorter version I haven't read: joecarlsmith.com/2023/03/22/exi)