I've been citing lesswrong.com/posts/uMQ3cqWD to explain why the situation with AI looks doomy to me. But that post is relatively long, and emphasizes specific open technical problems over "the basics".
Here are 10 things I'd focus on if I were giving "the basics" on why I'm worried:
Eliezer Yudkowsky's response to "Can someone please explain how people get such highly confident estimates of near-certain doom from AI?":
I don’t find this argument in itself compelling at all, since many other huge problems were solved in the past without an early clear sense of direction.
I am more compelled by the arguments about why this time is different (speed, low barrier to entry, etc.).
My intuition is that MIRI's argument is almost more about sociology than computer science/security (though there *is* a relationship). People won't react until it is too late, they won't give up positive rewards to mitigate risk, they won't coordinate, the government is feckless, etc.
And that's a big part of why it seems overconfident to people, because sociology is not predictable, or at least isn't believed to be.
Yeah, I agree. The argument seems twofold: either we won't get lots of smaller failures, or society won't properly react. IMO we will get smaller failures, and there will be reactions eventually. (Lawsuits are an example reaction, and those are already starting.)
Yes but it's also about the nature of intelligence, and the relationship of intelligence to power-in-the-world, and that's the part I find least realistic.
cf. this from
In the original text it’s clear that Scott is *caricaturing* fast takeoff and himself disagrees.
I think that kind of confusion in the debate happens a lot! E.g., people taking paperclip maximizers as a literal scenario instead of a random thought experiment EY made up one day.