I found this interesting. Human content moderation is an absolute hellscape. If we double down on it, we'll be creating another machine of human suffering. Yet AI has big problems of its own. How do we reconcile our desire for scale with a desire for accuracy and impartiality? https://twitter.com/SachaBaronCohen/status/1316080610956455938
It's certainly easier to train an AI to recognize extreme content (beheadings, animal torture, etc.) than the subtleties and games of most human communication. We'll always have to deal with humans being able to target each other with leverage. Now more so than ever.