If your underlying data has bias, so will your machine learning. This makes using ML for moderation very dangerous. https://twitter.com/adrjeffries/status/924628053107642369 …
Train the AI that nazis & the like are evil and their opinions are invalid and not worth considering, & good results will magically pop out.
-
If on the initial training, "jew" or "gay black woman" is "negative", assign negative weight to all the training inputs that caused it.
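The reweighting idea in the tweet above could be sketched roughly like this. It is only a minimal illustration, assuming a toy bag-of-words sentiment classifier: the PROTECTED_TERMS list, the sample_weight helper, and the 0.1 down-weight factor are assumptions made up for the example, not anyone's actual debiasing pipeline.

```python
# Minimal sketch: down-weight training examples where a protected-group term
# co-occurs with a "negative" label, so those examples contribute less to the
# fit. All names and the 0.1 factor are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "gay black woman elected to the city council",  # protected terms, biased "negative" label
    "what a lovely afternoon in the park",
    "this jew owned shop is awful",                  # protected term, biased "negative" label
    "the new library opening was wonderful",
]
labels = [0, 1, 0, 1]  # 0 = "negative", 1 = "positive" (toy sentiment labels)

PROTECTED_TERMS = {"jew", "gay", "black", "woman"}  # assumed term list

def sample_weight(text, label):
    # Give reduced weight to "negative" examples that mention a protected term.
    has_term = any(t in text.lower().split() for t in PROTECTED_TERMS)
    return 0.1 if (has_term and label == 0) else 1.0

weights = [sample_weight(t, y) for t, y in zip(texts, labels)]

X = CountVectorizer().fit_transform(texts)
clf = LogisticRegression().fit(X, labels, sample_weight=weights)
```

Note that this only dampens how much the biased examples influence the model; it does not remove the bias from the underlying data, which is the thread's original point.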