AI is going to kill us all - a thread on my process trying to figure out wtf is going on and how likely it is we are all going to die within ten years
I've been in the rationalist community since 2015, but managed to avoid all serious discussion/thinking about AI risk for a while. I'm not technical, didn't know how to program or do machine learning, and so didn't feel like I had the authority to have opinions.
Like, 'AI risk' felt like a super abstract thing, depending on *way* too many variables that I couldn't even begin to understand. My ability to navigate it felt just like "which experts should I trust more," and experts were saying different things, so ???
I've often had the experience of learning about an abstract field, thinking hard about it based on my intuitions, and then, once I learn more, realizing all my intuitions were misguided due to lack of hands-on knowledge. This made me assume I couldn't think about AI.
But discourse about AI has gotten louder, and I did srsly date the director of MIRI for a while, and though he didn't try to AI-risk-pill me much, I started to absorb the concern by osmosis. Should I be really concerned about this? Should I trust him/others around me?
I suspect I'm more susceptible to socially-driven beliefs than most rationalists; for weird, abstract, high-variance things like AI risk, I tend to try to look to the highest-confidence voice around me and absorb that. I know this about myself, and thus don't trust it.
I couldn't tell if I was feeling increased worries about AI because all the smartest people around me were worried about it, or because I actually was learning about it. I also had some embarrassment in not "landing on" ai risk by myself, in isolation. This all confused me.
But recently, with releases of stuff like Dall-E, I saw the landscape of concern suddenly increase. The prediction markets forecasting the arrival of artificial general intelligence suddenly dropped closer and closer. And *this* was the thing that freaked me out the most.
Not because I updated in the direction of AI research happening faster than I thought, but because *everyone else* updated on this. From my perspective, if you were thinking clearly about AI risk, then cool new stuff like Dall-E should *not* have changed your risk assessment much.
Like, it *shouldn't* have been surprising that these new advances happened, based on the speed of previous achievements. And the fact that it seemed to drop prediction markets, unnerve people on my timeline... this made me suspect that I should trust general consensus much less.
And this made me much more concerned, and gave me some feeling of knowing what my own judgment was, and being able to trust it a bit more. AI risk to me seems clearly much more important than climate change, to the degree I've mostly stopped caring about climate change.
I have noticed I've started the process of trying to adjust to the idea that I won't ever see old age, and that if I do have kids that they will never grow up. This is a huge pool of agony that will take a long time to sift through.
To be clear, I'm still quite uncertain about it. While my judgment and fear have upticked a lot, I am not free of the self-suspicion of "how much of these beliefs are social?" or "how much will my intuitions fail with exposure to concrete information?"
I'm aware I'm in a bubble, and it's sooo easy to have your beliefs warp invisibly beneath you when you're in bubbles. But is it a bubble of insane weirdos or is it a bubble of smart people who each independently thought carefully about this and arrived at 'oh we're fucked'?
There are a lot of heated debates in my circles about how dangerous AI is, but it's not "90% chance AI's gonna kill us" vs "AI will never be a serious threat"; the debates are more like "is it 90% or 30% chance we will all be dead in ten years?" It's a matter of degree.
(ps: I probably didn't get my odds quite right; I forget whether ppl give odds for 5 or 10 or 20 years. The point is the odds are somewhere between too-high-for-comfort and change-your-life-plans high.)