This is perhaps an example of a real AI safety problem. In this particular case, you could add poison control restrictions, but in general it’s not clear how to prevent a recommender algorithm from auto-creating things like suicide kits. It would need way more world data.
Quote Tweet
Last Friday, CBS cancelled a segment about our clients suing Amazon for selling suicide kits to their now-deceased kids. CBS' cowardice gave me renewed clarity about how urgent this litigation is. 🧵@naomi_leeds 1/
Ben Thompson, in the context of YT radicalizing people, once mentioned it would be very possible for these companies to hire "red team" folks to try and find these bad algorithmic reinforcing cycles. Seems doable? Biggest challenge is probably local/generational context.