It's increasingly clear that the worst harm Yudkowsky has done is focusing people on apocalyptic AI risk over the AI problems that already exist.
-
By focusing our notion of the dangers of AI on species-ending threats, we're primed not to think about the bad shit AI does now as a danger.
-
AI moderation of social networks and its effects on discourse? Not the paperclip optimizer, so who cares?
-
Face recognition that can't handle black people? But in our sci-fi stories AI tortures us for all eternity!
-
AI risk is already happening. The invisible assumptions of well-intentioned programmers are causing real harm.
-
But because our framework for the concept is defined by sci-fi, including Yudkowsky's sci-fi masquerading as science, we miss this.
-
And I single Yudkowsky out because A) his thought is the worst offender and B) his ideas have permeated pop-science discourse.
-
By 2100 more people will have died because of Yudkowsky's bad influence than because of the strong AI we still won't have invented.
-
Replying to @ElSandifer @PhilSandifer
It also doesn't help that EY focuses on Paperclip Maximizers of the future when... THE DEMON IS ALREADY HERE. https://hackernoon.com/the-parable-of-the-paperclip-maximizer-3ed4cccc669a
-
Replying to @dlkingauthor
The case for understanding capitalism as a rogue AI is strong.