Conversation

This is perhaps an example of a real AI safety problem. In this particular case, you could add poison control restrictions, but in general it’s not clear how to prevent a recommender algorithm from auto-creating things like suicide kits. It would need way more world data.
Quote Tweet
Last Friday, CBS cancelled a segment about our clients suing Amazon for selling suicide kits to their now deceased kids. CBS’ cowardice gave me renewed clarity about how urgent this litigation is. 🧵@naomi_leeds 1/
Reply:
I’m highly in favor of legalized, painless euthanasia without too many restrictions, but… dangerous algorithmic underbellies are not the way to get there.
Reply:
Yep, I personally stop-listed a bunch of them for the now-defunct “frequently bought together” feature, and built a system to scrape reviews and use “ask a question” to detect _some_ categories of harmful or incompatible recs. But you can only remove these things after some damage is done.
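A minimal sketch of what that kind of stop-listing might look like, assuming the recommender exposes per-item candidate lists. Every name and item ID here (STOP_LIST, DANGEROUS_PAIRS, filter_recs) is hypothetical, not the system described in the tweet:

```python
# Hypothetical stop-list filter for co-purchase recommendations.
# Names and item IDs are illustrative only.

STOP_LIST = {"ITEM_A"}                     # items never shown as recommendations
DANGEROUS_PAIRS = {                        # combinations harmful only together
    frozenset({"ITEM_B", "ITEM_C"}),
}

def filter_recs(anchor: str, candidates: list[str]) -> list[str]:
    """Drop stop-listed candidates and any candidate that forms a
    flagged pair with the anchor item."""
    return [
        c for c in candidates
        if c not in STOP_LIST
        and frozenset({anchor, c}) not in DANGEROUS_PAIRS
    ]

print(filter_recs("ITEM_B", ["ITEM_A", "ITEM_C", "ITEM_D"]))  # -> ['ITEM_D']
```

The pair set is the important part: as the thread notes, the harm is often in the combination, not in any single item.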
Reply:
Ben Thompson, in the context of YT radicalizing people, once mentioned it would be very possible for these companies to hire "red team" folks to try and find these bad algorithmic reinforcing cycles. Seems doable? Biggest challenge is probably local/generational context.
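Thompson's suggestion is about human red teams, but a crude automated first pass is also conceivable: random-walk the recommendation graph and flag start items whose walks collapse into a tiny cluster. A toy sketch, assuming the graph is available as a plain dict from item to recommended items (all names hypothetical):

```python
import random
from collections import defaultdict

def walk_concentration(recs: dict[str, list[str]],
                       start: str,
                       steps: int = 50,
                       trials: int = 200) -> float:
    """Run random walks from `start`; return the share of all visits that
    land in the 5 most-visited items. A value near 1.0 suggests the walks
    collapse into a small reinforcing cluster."""
    visits: dict[str, int] = defaultdict(int)
    total = 0
    for _ in range(trials):
        node = start
        for _ in range(steps):
            options = recs.get(node)
            if not options:
                break                      # dead end: no recommendations
            node = random.choice(options)
            visits[node] += 1
            total += 1
    if total == 0:
        return 0.0
    top5 = sorted(visits.values(), reverse=True)[:5]
    return sum(top5) / total

# Toy graph where everything funnels into a two-item loop:
toy = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["c"]}
print(walk_concentration(toy, "a"))       # 1.0 here -> flagged for review
```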
Reply:
I think the sheer scale became too much about 5-6 years ago. Facebook iirc has like 10k people looking for bad content. That's individual items. Here we're talking combinatorial.
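To put a number on “combinatorial”: with N items there are N(N-1)/2 unordered pairs, so pair-level vetting grows quadratically. Assuming, purely for illustration, a catalog of 100 million listings:

```python
from math import comb

items = 100_000_000              # assumed catalog size, illustrative only
pairs = comb(items, 2)           # N*(N-1)/2 unordered pairs
print(f"{pairs:.1e}")            # ~5.0e+15 pair combinations to vet
```

No review team of ten thousand people gets through five quadrillion pairs; any filtering at that scale has to be algorithmic.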
Reply:
Even if the recommendations began as an AI problem, the humans and retained lawyers at Amazon were aware of them, responded in writing, and willfully refused to modify the recommendations.