I've recently learned that this is a *spicy* take on AI Safety:
AGI labs (eg OpenAI, DeepMind, and others) are THE CAUSE of the fundamental problem the AI Safety field faces.
I thought this was obvious until very recently.
Since it's not, I should explain my position.
(I'll note that while I single out OpenAI and DeepMind here, that's only because they appear to be advancing the cutting edge the most.
This critique applies to any company or academic researcher that spends their time working to solve the bottlenecks to building AGI.)
To vastly oversimplify the situation, you can think of AI Safety as a race.
In one corner you have the AGI builders who are trying to create AGI as fast as possible.
In the other corner, you have people trying to make sure AGI will be aligned with human goals once we build it.
If AGI gets built before we know how to align it, it *might* be CATASTROPHIC.
Fortunately, aligning an AGI is unlikely to be impossible.
So, given enough time and effort spent on the problem, we will eventually solve it.
This means the actual enemy is time.
If we have enough time to both find capable people and have them work productively on the problem, we will eventually win.
If not, we lose.
I think the fundamental dynamic is really just that simple.
AGI labs like OAI and DeepMind have it as their MISSION to decrease the time we have.
Their FOUNDING OBJECTIVE is to build AGI and they are very clearly and obviously trying *as hard as they can* to do just that. They raise money, hire talent, etc. all premised on this goal.
Every day an AGI engineer at OpenAI or DeepMind shows up to work and tries to solve the current bottlenecks in creating AGI, we lose just a little bit of time.
Every day they show up to work, the odds of victory get a little bit lower.
My very bold take is that THIS IS BAD.
Now you might be thinking:
"Demis Hassabis and Sam Altman are not psychopaths or morons. If they get close to AGI without solving alignment they can just not deploy the AGI."
There are a number of problems with this, but the most obvious is: they're still robbing us of time.
Every. Single. Day. the AGI labs are steadily advancing the state of the art on building AGI.
With every new study they publish, researcher they train, and technology they commercialize, they also make it easier for every other AGI lab to build and deploy an AGI.
So unless they can somehow refrain from deploying an unaligned AGI and stop EVERYONE ELSE from doing the same, they continue to be in the business of robbing humanity of valuable time.
They are the cause of the fundamental problem faced by the AI Safety community.
In conclusion: Stop building AGI you fucks.
Notably, a number of people in the AI Safety community basically agree with all of this but think I shouldn't be saying it. (Or, at least, that EA Bigwigs shouldn't say it.)
I obviously disagree.
But it's a more complex question which I'll reserve for a future thread.
