Replying to
But discourse about AI has gotten louder, and I did seriously date the director of MIRI for a while, and though he didn't try to AI-risk-pill me much, I started to absorb the concern by osmosis. Should I be really concerned about this? Should I trust him/others around me?
I suspect I'm more susceptible to socially-driven beliefs than most rationalists; for weird, abstract, high-variance things like AI risk, I tend to try to look to the highest-confidence voice around me and absorb that. I know this about myself, and thus don't trust it.
I couldn't tell if I was feeling increased worry about AI because all the smartest people around me were worried about it, or because I actually was learning about it. I also felt some embarrassment at not "landing on" AI risk by myself, in isolation. This all confused me.
But recently, with releases of stuff like DALL-E, I saw the landscape of concern suddenly intensify. The prediction markets' forecast dates for the arrival of artificial general intelligence suddenly moved closer and closer. And *this* was the thing that freaked me out the most.
Not because I updated in the direction of AI research happening faster than I thought, but because *everyone else* updated on this. From my perspective, if you were thinking clearly about AI risk, then cool new stuff like DALL-E should *not* have changed your risk assessment much.
Like, it *shouldn't* have been surprising that these new advances happened, given the pace of previous achievements. And the fact that they seemed to move prediction markets and unnerve people on my timeline... this made me suspect that I should trust the general consensus much less.
And this made me much more concerned, and gave me some feeling of knowing what my own judgment was, and being able to trust it a bit more. AI risk to me seems clearly much more important than climate change, to the degree I've mostly stopped caring about climate change.
I've noticed I've started the process of adjusting to the idea that I won't ever see old age, and that if I do have kids, they will never grow up. This is a huge pool of agony that will take a long time to sift through.
To be clear, I'm still quite uncertain about it. While my judgment and fear have ticked up a lot, I am not free of the self-suspicion of "how much of these beliefs are social" or "how much will my intuitions fail with exposure to concrete information."
I'm aware I'm in a bubble, and it's sooo easy to have your beliefs warp invisibly beneath you when you're in bubbles. But is it a bubble of insane weirdos or is it a bubble of smart people who each independently thought carefully about this and arrived at 'oh we're fucked'?
Replying to
(ps: I probably didn't get my odds quite right; I forget whether ppl give odds for 5 or 10 or 20 years. The point is the odds are somewhere between too-high-for-comfort and change-your-life-plans high)
Replying to
I don't follow this closely, but it seems pretty likely to me that AI will eventually take over from us humans. (Not sure they'll kill us - maybe they'll keep a few of us around as curiosities.) Why is this so bad? Maybe they'll be better than us.
Replying to
30% risk or higher in 10 years or less seems absolutely bonkers. Saying this as someone who is intermediate with this subject matter at best, but DALL-E isn't solving anything; it's just outputting images with a high degree of detail. There's a huge leap from that to X-risk?
Replying to
Ok, I'm a computer and IT engineer. I won't call myself an expert, but: you have to consider what counts as Artificial Intelligence, and then what counts as a threat. Today, AI is mostly data processing, like Machine Learning, which helps with making decisions based on data...
Replying to
tbh, as someone who's currently finishing a thesis on training DL models to solve elementary school math word problems, I have a hard time processing that there are people who seriously think this.
Replying to
and basically all transformer models are trained to do is replicate the structure of the training data (language). things like GPT-3 are just overtrained parrots; how is that even close to anything resembling general AI?
Replying to
I've yet to hear a serious AI researcher, who is not working and publishing in "AI risk," who is remotely concerned that AGI is around the corner (10 years). Idk who your educated friends are, but maybe you should read some articles from the '50s about how people thought the same thing.
Replying to
This thread seems to be lacking substantive examples of how/why AI can be so risky. Please share some of your top concerns. Paper clips? Or is it just that you're concerned others are concerned? Run the poll, too. The poll won't represent your ultra-ML friends, but it broaches the topic.