This article mentions moderation-as-a-service as a potential solution to some of the more unruly aspects of UGC businesses. It’s an intriguing idea, but I think we’re a long way away from it. https://twitter.com/WillOremus/status/1335218769233371138
True, we’re starting to have some of the necessary tools, like toxicity-labeling ML models available via API. But behavior norms are shaped by communities: what’s toxic in one context might be fine in another, and vice versa.
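To make "toxicity labeling via API" concrete, here's a minimal sketch that posts a comment to Google's Perspective API, the best-known service in this category. The endpoint and payload shape follow its public docs as I recall them, so treat the exact field names as an assumption rather than a reference:

```python
import requests

# Hypothetical key; real Perspective API keys come from a Google Cloud project.
API_KEY = "YOUR_API_KEY"
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"

def toxicity_score(text: str) -> float:
    """Return a 0-1 toxicity probability for a piece of UGC."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A line like this might score high even though it's friendly banter in a
# gaming community -- exactly the context problem described above.
print(toxicity_score("wtf, nice shot"))
```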
Calibrating something like this to your specific community or platform is not trivial, and even once the model is calibrated, there’s still the question of what you do with its output. Is it automatically actioned? Reviewed by a human?
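Here's one minimal sketch of that calibration-and-routing problem, under assumed thresholds and a hypothetical `calibrate` helper: score bands route content to auto-removal, human review, or no action, and the auto-removal cutoff is picked from a hand-labeled sample of your own community's content.

```python
from dataclasses import dataclass

# Hypothetical score bands -- the right values depend entirely on your
# community, which is the calibration problem in question.
AUTO_REMOVE_AT = 0.95   # confident enough to action without a human
REVIEW_AT = 0.70        # gray zone: queue for a human moderator

@dataclass
class Decision:
    action: str   # "remove" | "review" | "allow"
    score: float

def route(score: float) -> Decision:
    """Turn a raw toxicity score into a moderation decision."""
    if score >= AUTO_REMOVE_AT:
        return Decision("remove", score)
    if score >= REVIEW_AT:
        return Decision("review", score)
    return Decision("allow", score)

def calibrate(labeled: list[tuple[float, bool]], max_bad_removals: float = 0.01) -> float:
    """Pick the lowest auto-remove threshold at which, among items the model
    would remove, at most `max_bad_removals` were judged fine by a human.
    `labeled` is (model_score, human_says_toxic) pairs drawn from your own
    community's content."""
    threshold = 0.99
    for t in (x / 100 for x in range(99, 49, -1)):
        removed = [is_toxic for score, is_toxic in labeled if score >= t]
        if removed and removed.count(False) / len(removed) > max_bad_removals:
            break
        threshold = t
    return threshold
```

Even this tiny sketch bakes in policy choices: where the gray zone starts, and how much collateral removal you’re willing to tolerate.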
Then you have to measure the impact of your policy, a task that is a sprawling nightmare in and of itself. Do you focus on business impact, or on the effect on your users? How do you evaluate the impact of someone who made one off-color joke vs. someone who’s a continual net negative?
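One toy way to operationalize the joke-vs.-net-negative distinction is a per-user score where each violation's weight decays over time. This is a sketch under assumed parameters (a 30-day half-life), not an established metric:

```python
import math
import time

# Hypothetical repeat-offender model: each violation adds a weight of 1.0,
# and weights decay with a half-life, so one old off-color joke fades while
# a continual net negative keeps an elevated score.
HALF_LIFE_DAYS = 30.0

def offender_score(violation_times: list[float], now: float | None = None) -> float:
    """Sum of exponentially decayed violation weights.
    `violation_times` are Unix timestamps of past violations."""
    now = time.time() if now is None else now
    decay = math.log(2) / (HALF_LIFE_DAYS * 86400)
    return sum(math.exp(-decay * (now - t)) for t in violation_times)

now = time.time()
one_joke = [now - 45 * 86400]                            # single 45-day-old incident
persistent = [now - d * 86400 for d in range(0, 30, 3)]  # a violation every 3 days
print(offender_score(one_joke, now))    # ~0.35 -- fading toward zero
print(offender_score(persistent, now))  # ~7.5  -- clearly elevated
```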
Obviously, there aren’t a lot of standards or best practices around this yet. We wouldn’t be having controversies like the ones mentioned in this article if we did.
As people continue to live more of their lives online, strewing UGC in their wake wherever they go, the conversation about moderation standards will be forced. I’ll be curious to see what kind of winding road it ends up taking.