The way that the most prominent critics of AI risk totally fail to engage with even the most basic arguments made by people in the field suggests that they don't have any good counterarguments. That's very concerning!
The (very basic) idea that they're totally missing is that power is useful for many other goals, not just for its own sake: en.m.wikipedia.org/wiki/Instrumen
A more formal version of that claim: arxiv.org/abs/1912.01683
And its possible consequences in AGIs: arxiv.org/abs/2209.00626
This is basic enough that *any* amount of due diligence or spark of curiosity about why a bunch of thoughtful people disagree with them should have led them to at least *mention* it. Even the editors should've been able to realize that this isn't what a good argument looks like.
You could turn that argument around. The way that the AI risk people fail to engage with their critics within AI, neuroscience, and other fields is very concerning. This suggests that the AI critics don't have any good arguments.
Where are the critics? I've been keeping an eye out for a long time, and just can't find much. There's this piece by LeCun, and the one by Chollet. In both cases they make some interesting points, but also miss even the most basic arguments that alignment researchers have given.
For all X, "most prominent critics of X" tend to be celebrities who haven't thought much about X, but sometimes throw out a comment. So maybe that's not a good category to focus on.
In general, yes. In this case, though, I'm referring to prominent critics within ML and related fields, who have thought a lot about intelligence and other relevant topics. So they seem much more relevant than "celebrities" in general.
This article specifically focuses on, from a neuroscience perspective, the naivety of the malevolent Skynet scenario
It does not address paperclip factories
And certainly does not deny likely human-guided military applications
Quote Tweet
Maybe now would be a good time to remind people of this brilliant lecture
"Superintelligence: The Idea That Eats Smart People"
here it is in text form
idlewords.com/talks/superint
youtube.com/watch?v=kErHiE
I'm engaging specifically with the thing you're focusing on. I'm saying that there is a central premise (instrumental convergence) that's used to argue that "skynet scenarios" are plausible, and you didn't engage with it.
See also:
Quote Tweet
Replying to @RichardMCNgo
I have been to two separate events where Yann LeCun made this point and then Stuart Russell pointed out how a survival instinct appears naturally in RL training as dying limits the reward gained.
Both times Yann admitted that was right. Both times were before writing this piece.
twitter.com/ylecun/status/
I was surprised by this one. Seemingly forgetting that agents' goals are learned during training
Quote Tweet
Replying to @nobliver
How could the aims possibly be "inscrutable" since *we* would be the ones who would design and hardwire those aims in the form of objectives.
I think it's unfair to cite their views as of 4 years ago as evidence that they haven't engaged with the arguments today. (It might be true that they still haven't thought about the topic, but this article isn't evidence of that.)
Here’s LeCun from 9 hours ago:
Quote Tweet
Calm down.
Human-level AI isn't here yet.
And when it comes, it will not want to dominate humanity.
Even among humans, it is not the smartest who want to dominate others and be the chief.
We have countless examples on the international political scene.
blogs.scientificamerican.com/observations/d