I've been in the rationalist community since 2015, but managed to avoid all serious discussion and thinking about AI risk for a while. I'm not technical, didn't know how to program or do machine learning, and so didn't feel like I had the authority to have thoughts of my own.
Like, 'AI risk' felt like a super abstract thing, depending on *way* too many variables that I couldn't even begin to understand. My ability to navigate it felt just like "which experts should I trust more," and experts were saying different things, so ???
I've often had the experience of learning about an abstract field, thinking hard about it based on my intuitions, and then, once I learn more, realizing all my intuitions were misguided due to lack of hands-on knowledge. This made me assume I couldn't think about AI risk either.
But discourse about AI has gotten louder, and I did seriously date the director of MIRI for a while, and though he didn't try to AI-risk-pill me much, I started to absorb the concern by osmosis. Should I be really concerned about this? Should I trust him and others around me?
I suspect I'm more susceptible to socially-driven beliefs than most rationalists; for weird, abstract, high-variance things like AI risk, I tend to try to look to the highest-confidence voice around me and absorb that. I know this about myself, and thus don't trust it.
I couldn't tell if I was feeling increased worry about AI because all the smartest people around me were worried about it, or because I was actually learning about it. I also felt some embarrassment at not "landing on" AI risk by myself, in isolation. This all confused me.
But recently, with releases of stuff like Dall-E, I saw the landscape of concern suddenly intensify. Prediction markets' forecasts for the arrival of artificial general intelligence moved closer and closer. And *this* was the thing that freaked me out the most.
Not because I updated in the direction of AI research happening faster than I thought, but because *everyone else* updated on this. From my perspective, if you were thinking clearly about AI risk, then cool new stuff like Dall-E should *not* have changed your risk assessment much.
Like, it *shouldn't* have been surprising that these new advances happened, based on the speed of previous achievements. And the fact that it seemed to drop prediction markets, unnerve people on my timeline... this made me suspect that I should trust general consensus much less.
And this made me much more concerned, and gave me some feeling of knowing what my own judgment was, and being able to trust it a bit more. AI risk to me seems clearly much more important than climate change, to the degree I've mostly stopped caring about climate change.
I have noticed I've started the process of trying to adjust to the idea that I won't ever see old age, and that if I do have kids that they will never grow up. This is a huge pool of agony that will take a long time to sift through.
To be clear, I'm still quite uncertain about it. While my judgment and fear has upticked a lot, I am not free of the self-suspicion of "how much of these beliefs are social" or "how much will my intuitions fail with exposure to concrete information."
I'm aware I'm in a bubble, and it's sooo easy to have your beliefs warp invisibly beneath you when you're in bubbles. But is it a bubble of insane weirdos or is it a bubble of smart people who each independently thought carefully about this and arrived at 'oh we're fucked'?
There are a lot of heated debates in my circles about how dangerous AI is, but it's not "90% chance AI's gonna kill us" vs. "AI will never be a serious threat." The debates are more like "is it a 90% or a 30% chance we will all be dead in ten years?" It's a matter of degree.
(ps: I probably didn't get my odds quite right; I forget whether people give odds for 5, 10, or 20 years. The point is the odds are somewhere between too-high-for-comfort and change-your-life-plans high.)
I think we're just constantly surprised by the "cool things" that are possible with "AI" without it actually being very close at all to actual general AI.
Also, the often-assumed "general AI, therefore human extinction" step seems like even more of a leap to me.
I think your point is a good point to a large extent, but also:
You can be well updated and still freak out, because the news clears up some uncertainty and reveals you're probably in one of the faster timelines. It's just new information; I think that's fine?
There was/is reasonable disagreement/uncertainty about the scaling hypothesis: can existing techniques get us to human-level AI just by using more compute/data to train them? Recent developments are substantial evidence in favor.
Thus it's reasonable for timelines to update.
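To make the scaling-hypothesis reasoning concrete, here's a toy sketch (entirely made-up numbers, not real benchmark data) of the kind of extrapolation behind it: scaling-law papers report that training loss falls roughly as a power law in compute, so observing a few points on that curve lets you project what much more compute might buy. The constants `a` and `b` and the compute values below are hypothetical.

```python
import numpy as np

# Hypothetical Kaplan-style power law: loss = a * compute^(-b).
# The constants and compute budgets are illustrative, not real data.
a, b = 10.0, 0.05
compute = np.array([1e18, 1e19, 1e20, 1e21])  # training FLOPs (made up)
loss = a * compute ** (-b)                    # "observed" losses

# Fit a line in log-log space; the slope recovers the exponent -b.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)

# Extrapolate: predicted loss at 1000x more compute than the largest run.
predicted = 10 ** (intercept + slope * np.log10(1e24))
print(round(-slope, 3))        # recovered exponent
print(predicted < loss.min())  # more compute -> lower predicted loss
```

The point of the sketch is only that if the fitted trend keeps holding at larger scale, timelines shorten; whether it actually holds is exactly the disagreement being described.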
I'm not sure this is true. Progress is unpredictable and "lumpy"; there's always a chance we could fall into another AI winter, or that the problems being solved could turn out to be harder than expected. Every day of progress is evidence, let alone major progress.