I suspect I'm more susceptible to socially-driven beliefs than most rationalists; for weird, abstract, high-variance topics like AI risk, I tend to look to the highest-confidence voice around me and absorb that view. I know this about myself, and thus don't trust it.
I couldn't tell whether I was feeling increased worry about AI because all the smartest people around me were worried about it, or because I was actually learning about it. I was also embarrassed at not having "landed on" AI risk by myself, in isolation. This all confused me.
But recently, with releases of things like DALL-E, I saw the landscape of concern suddenly intensify. The prediction markets forecasting the arrival of artificial general intelligence saw their dates move closer and closer. And *this* was the thing that freaked me out the most.
Not because I updated in the direction of AI research happening faster than I thought, but because *everyone else* updated on this. From my perspective, if you were thinking clearly about AI risk, then cool new releases like DALL-E should *not* have changed your risk assessment much.
Like, it *shouldn't* have been surprising that these new advances happened, given the pace of previous achievements. And the fact that they seemed to move prediction markets and unnerve people on my timeline... this made me suspect that I should trust general consensus much less.
And this made me much more concerned, and gave me some feeling of knowing what my own judgment was, and being able to trust it a bit more. AI risk to me seems clearly much more important than climate change, to the degree I've mostly stopped caring about climate change.
I have noticed I've started the process of trying to adjust to the idea that I won't ever see old age, and that if I do have kids, they will never grow up. This is a huge pool of agony that will take a long time to sift through.
To be clear, I'm still quite uncertain about it. While my judgment and fear have ticked up a lot, I am not free of the self-suspicion of "how much of these beliefs are social?" or "how much will my intuitions fail with exposure to concrete information?"
I'm aware I'm in a bubble, and it's sooo easy to have your beliefs warp invisibly beneath you when you're in bubbles. But is it a bubble of insane weirdos or is it a bubble of smart people who each independently thought carefully about this and arrived at 'oh we're fucked'?
There are a lot of heated debates in my circles about how dangerous AI is, but it's not "90% chance AI's gonna kill us" vs. "AI will never be a serious threat." The debates are more like "is it a 90% or a 30% chance we will all be dead in ten years?" It's a matter of degree.
Replying to
Personally, I believe human + AI integration is far superior to just AI or just human. What if we just evolve in that direction?
Replying to
Paranoia at best. AI systems (even the most recent Gato) are quite brittle. What they show in the news is the 'handpicked' examples. My prediction: AI will 'never' reach human-level intelligence, even in the next 100 years.
Replying to
Am I missing something, or did you not make the case that AI will kill us all, but just observe that AI is developing faster than the general consensus expected? Which, by the way, is also true of climate change, which absolutely will kill us.
Replying to
It is delusional to think that large language models can solve the alignment problem by having ethical, aligned machines "emerge" from large data. They can barely induce three-digit addition. To the extent people genuinely care about alignment, we need to consider other solutions.
Replying to
They're hypothetical, so there is no "right" number to throw out. There are many very intelligent and well-informed people who work at DARPA and BD who would disagree and push the cyborg theories as far more plausible.
Replying to
Until we reach a point where AI is so commonplace that corporations can essentially just make their own in a vacuum, I'm not concerned. The intricate networks an AI needs to effectively combat humanity are incredibly fragile, partly by design and partly by neglect.