Kerry has recently been critiquing groups trying to build AGI, saying that by being aware of the risks but still trying to build it, they're recklessly putting the world in danger.
I'm interested to hear your thoughts/reactions to what Kerry says and the fact that he's saying it.
I’ve been saying some form of this in my writing going back years.
Unfortunately, this turned out to be an easy way to lose friends in EA, especially those working for organizations building AGI.
I’m so relieved these critiques are finally getting more attention. 🙏
fwiw, I think there's an important role for someone being pretty straightforward in critiquing those who are increasing the risk of harmful AGI outcomes
but man, I definitely feel like 😬 about being the one to do it
I did a thread on this yesterday:
Quote Tweet
so I'm starting to build a reputation as a Twitter AGI firebrand
not sure how I feel about it tbh
in general, I'm game for stepping up when reality throws you an unexpected curveball
but I wanna be real with y'all about why my reaction is currently
and not yet 
I don't think you need to halt progress on AGI globally. You just want AGI to come later (or successful alignment to come sooner).
Slowing down the progress of whichever lab is the farthest along would be sufficient to increase the probability alignment is solved first.
I'm pro the conversation. That said, I think the premise (that folks are aware of the risks) is wrong. My impression is most of the folks don't think the risks are actually all that great.
That's really interesting!
Take OpenAI as an example: since conversations about AI x-risk seemed pretty foundational to its creation, I would have thought this was relatively widely discussed there.
Is that just not the case?
Seems worth discussing 1) the strongest nuanced/specific case for not supporting AGI development
2) what in practice could be done about 1)
But I'm skeptical this can be done meaningfully or well on Twitter.
imo Twitter is pretty good for lots of this because it brings in the public in an interesting and important way.
The problem is that "trying to stop" and "actually stopping" ain't the same thing.
I turned down AI work in the 2000s (I had a PhD offer with Schmidhuber) explicitly because I was worried that it was dangerous.
Result is that I'm poor and sidelined and it's happening anyway.
I generally agree w/ Kerry. Nothing they are building rn seems reckless, but aiming at AGI as a goal does. The line between narrow AI & AGI is blurry, so it's hard to draw. There should be coordination to stop/ban work on obviously dangerous things, tho, like self-modifying AGIs.
Quote Tweet
Replying to @janleike @SpencrGreenberg and @KerryLVaughan
I think it's important (if doable) to clearly define where the line with narrow AI is. We can potentially do miracles with narrow AI (AlphaFold), and many people would be happy to work within the boundaries, but will continue pushing AGI if they're told to stop altogether.