
This is perhaps an example of a real AI safety problem. In this particular case, you could add poison control restrictions, but in general it’s not clear how to prevent a recommender algorithm from auto-creating things like suicide kits. It would need way more world data.
Quote Tweet
Last Friday, CBS cancelled a segment about our clients suing Amazon for selling suicide kits to their now deceased kids. CBS’ cowardice gave me renewed clarity about how urgent this litigation is. 🧵@naomi_leeds 1/
Replying to
Yep, I personally stop-listed a bunch of them for the now-defunct "frequently bought together" feature, and built a system to scrape reviews and use "ask a question" to detect _some_ categories of harmful or incompatible recs, but you can only remove these things after some damage is done
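The mitigation described here, stop-listing known-bad item pairings out of a co-purchase recommender, can be sketched roughly as below. This is a hypothetical illustration, not Amazon's actual system: the category names, product IDs, and the `filter_recs` helper are all made up for the example.

```python
# Hypothetical sketch of stop-listing co-purchase recommendations.
# A reviewer hand-curates (anchor category, co-recommended category)
# pairs that have been flagged as harmful or incompatible together;
# the recommender's output is filtered against that list after the fact.

# Illustrative placeholder categories, not real taxonomy entries.
STOPLISTED_PAIRS = {
    ("chemical_a", "hardware_b"),
    ("chemical_a", "med_c"),
}

def filter_recs(anchor_category, recs):
    """Drop any rec whose category forms a stop-listed pair with the
    anchor product's category."""
    return [r for r in recs
            if (anchor_category, r["category"]) not in STOPLISTED_PAIRS]

recs = [{"id": "B0001", "category": "hardware_b"},
        {"id": "B0002", "category": "kitchen"}]
print(filter_recs("chemical_a", recs))  # only the "kitchen" rec survives
```

Note the limitation the reply points out: this is reactive. A pair only lands on the stop-list after someone notices and reports the harmful combination, so the first recommendations always go out unfiltered.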