The way that the most prominent critics of AI risk totally fail to engage with even the most basic arguments made by people in the field suggests that they don't have any good counterarguments. That's very concerning!
Quote Tweet
“Don’t Fear the Terminator” Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others. Article by @TonyZador @YLeCun blogs.scientificamerican.com/observations/d
This is basic enough that *any* amount of due diligence or spark of curiosity about why a bunch of thoughtful people disagree with them should have led them to at least *mention* it. Even the editors should've been able to realize that this isn't what a good argument looks like.
You could turn that argument around. The way that the AI risk people fail to engage with their critics within AI, neuroscience, and other fields is very concerning. This suggests that the AI critics don't have any good arguments.
Where are the critics? I've been keeping an eye out for a long time, and just can't find much. There's this piece by LeCun, and the Chollet piece. In both cases they make some interesting points and also miss even the most basic arguments that alignment researchers have given.
For all X, "most prominent critics of X" tend to be celebrities who haven't thought much about X, but sometimes throw out a comment. So maybe that's not a good category to focus on.
In general, yes. In this case, though, I'm referring to prominent critics within ML and related fields, who have thought a lot about intelligence and other relevant topics. So they seem much more relevant than "celebrities" in general.
This article specifically focuses, from a neuroscience perspective, on the naivety of the malevolent Skynet scenario. It does not address paperclip factories, and it certainly does not deny likely human-guided military applications.
Quote Tweet
Maybe now would be a good time to remind people of this brilliant lecture, "Superintelligence: The Idea That Eats Smart People". Here it is in text form: idlewords.com/talks/superint youtube.com/watch?v=kErHiE
I'm engaging specifically with the thing you're focusing on. I'm saying that there is a central premise (instrumental convergence) that's used to argue that "skynet scenarios" are plausible, and you didn't engage with it. See also:
Quote Tweet
Replying to @RichardMCNgo
I have been to two separate events where Yann LeCun made this point and then Stuart Russell pointed out how a survival instinct appears naturally in RL training as dying limits the reward gained. Both times Yann admitted that was right. Both times were before writing this piece.
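Russell's point above can be illustrated with a toy example. The following is a minimal sketch (a hypothetical three-state chain, not taken from the thread or from Russell's talks): an agent maximizing discounted reward ends up avoiding a terminal "death" state even though dying carries no explicit penalty, simply because a dead agent collects no further reward.

```python
# Toy MDP: state 0 = alive/safe, terminal state = dead (no future reward).
# From state 0 the agent can "stay" (reward 1, remain alive) or
# "gamble" (one-time reward 2, then the episode ends).
# State names, rewards, and the discount factor are illustrative assumptions.
GAMMA = 0.9

def value_of(action, steps=100):
    """Discounted return from state 0 under a fixed one-action policy."""
    if action == "stay":
        # Staying alive keeps the reward stream flowing: sum of 1 * GAMMA^t.
        return sum(GAMMA ** t for t in range(steps))
    # "gamble" pays more up front, but death cuts off all future reward.
    return 2.0

print(value_of("stay"))    # approaches 1 / (1 - 0.9) = 10
print(value_of("gamble"))  # 2.0
```

Even though no term in the reward function says "avoid death", the surviving policy dominates: self-preservation falls out as an instrumental subgoal of reward maximization, which is exactly the instrumental-convergence premise the article never engages with.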
Here’s LeCun from 9 hours ago:
Quote Tweet
Calm down. Human-level AI isn't here yet. And when it comes, it will not want to dominate humanity. Even among humans, it is not the smartest who want to dominate others and be the chief. We have countless examples on the international political scene. blogs.scientificamerican.com/observations/d