If your underlying data has bias, so will your machine learning. This makes using ML for moderation very dangerous. https://twitter.com/adrjeffries/status/924628053107642369
All data has bias. The problem is that practitioners are on a delusional trip about "neutrality" rather than deliberately training with good bias.
-
-
Train the AI that Nazis & the like are evil and their opinions are invalid and not worth considering, & good results will magically pop out.
-
If, after the initial training, "jew" or "gay black woman" comes out "negative", assign negative weight to all the training inputs that caused it.
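That reweighting idea can be sketched with a toy model. Everything below is hypothetical: the corpus, the weighted-average "sentiment model", and the `downweight` helper are illustrations of the technique, not any real moderation system.

```python
from collections import defaultdict

def train(examples):
    """Toy 'sentiment model': each word's score is the weighted average
    label of the training examples it appears in."""
    totals, denom = defaultdict(float), defaultdict(float)
    for words, label, weight in examples:
        for w in set(words):
            totals[w] += label * weight
            denom[w] += weight
    return {w: totals[w] / denom[w] for w in totals if denom[w] > 0}

def downweight(examples, model, protected, factor=0.0):
    """Shrink the weight of negative examples that pushed a protected
    term's score below zero, so a retrain no longer learns that bias."""
    adjusted = []
    for words, label, weight in examples:
        contributes = label < 0 and any(
            p in words and model.get(p, 0.0) < 0 for p in protected
        )
        adjusted.append((words, label, weight * factor if contributes else weight))
    return adjusted

# Hypothetical corpus: two hostile examples drag "jew" negative.
corpus = [
    (["jew", "slur"], -1.0, 1.0),
    (["jew", "slur"], -1.0, 1.0),
    (["jew", "kind"], +1.0, 1.0),
]

biased = train(corpus)    # "jew" averages to -1/3 here
debiased = train(downweight(corpus, biased, {"jew"}))  # "jew" is now positive
```

The same move appears in real toolkits as per-example `sample_weight` during fitting; the point of the sketch is only that the bias lives in the weighted training inputs, so changing the weights changes what the model learns.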