I find myself skeptical of this. There was a whole safety team at OpenAI filled with very smart people who talked (as far as I can tell) a lot to leadership about the safety concerns, and the organization was founded on the basis of safety concerns.
Sure, you can always say the case hasn't been made clear enough, but in the case of OpenAI, it seems like they had a lot of communication with people who were concerned, and I generally expect high-bandwidth communication here to be better than written arguments.
Like, my sense is that it's just really hard to convince someone that their job is net-negative.
"It is difficult to get a man to understand something when his salary depends on his not understanding it"
And this barrier is very hard to overcome with just better argumentation.
I disagree with "the case for the risks hasn't been that clearly laid out". I think there's a giant, almost overwhelming pile of intro resources at this point, any one of which is more than sufficient, written in all manner of style, for all manner of audience.
(I do think it's possible to create a much better intro resource than any that exist today, but 'we can do much better' is compatible with 'it's shocking that the existing material hasn't already finished the job'.)
I also disagree with "The burden for the we're-all-going-to-die-if-we-build-x argument is -- and I think correctly so -- quite high."
If you're building a machine, the burden of proof for claims that it poses serious risks should be at least somewhat *lower*, not higher. It's your responsibility!
But I don't think the latter point matters much, since the 'AGI is dangerous' argument easily meets higher burdens of proof as well.
I do think a lot of people haven't heard the argument in any detail, and the main focus should be on trying to signal-boost the arguments...
... and facilitate conversations, rather than assuming that everyone has heard the basics.
A lot of the field is very smart people who are stuck in circa-1995 levels of discourse about AGI.
I think 'my salary depends on not understanding it' is only a small part of the story.
ML people could in principle talk way more about AGI, and understand the problem way better, without coming anywhere close to quitting their job. The level of discourse is by and large *too low* for 'my job is at risk' to be the very next obstacle on the path.
Also, many ML people have other awesome job options, have goals in the field other than pure salary maximization, etc.
More of the story: Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don't loudly share their update with all their peers. This is because:
1. AGI sounds weird, and they don't want to sound like a weird outsider.
2. Their *peers* and the *community as a whole* might perceive this information as an attack on the field, an attempt to lower its status, etc.
3. Tech forecasting, differential technological development, long-term steering, exploratory engineering (en.wikipedia.org/wiki/Explorato), 'not doing certain research because of its long-term social impact', prosocial research closure, etc. are very novel and foreign to most scientists.
I think this is really the main thing. It sounds like too sci-fi a worry. The "sensible, rational" viewpoint is that AI will never be that smart because haha, they get a funny word wrong (never mind that they've grown to a point that would have looked like sorcery 30 years ago).
That's an example of a more-normal view, but it's also a view that makes AI sound unimportant.
There's an important tension in ML between "play up AI so my work sounds important and impactful (and because it's true!)", and "downplay AI in order to sound serious and respectable".
If my job is to "build a Disco Diffusion pipeline to generate illustrations for tweets" or "use a transformer to translate catalog pages," I have zero need to talk about AGI, and very likely zero direct experience with it.
There are people who can speak to it, and could do so credibly, but that's a relatively small set.



