If your underlying data has bias, so will your machine learning. This makes using ML for moderation very dangerous. https://twitter.com/adrjeffries/status/924628053107642369
Replying to @seldo
All data has bias. The problem is that they're on a delusional trip about "neutrality" rather than utilizing good bias to train.
Replying to @RichFelker @seldo
Train the AI that nazis & the like are evil and their opinions are invalid and not worth considering, & good results will magically pop out.
Replying to @RichFelker @seldo
If on the initial training, "jew" or "gay black woman" is "negative", assign negative weight to all the training inputs that caused it.
10:53 AM - 30 Oct 2017
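The reweighting idea in that last tweet can be sketched roughly as follows. This is a toy illustration, not anyone's actual system: the bag-of-words sentiment model, the example data, the `PROTECTED` set, and the 0.25 downweight factor are all hypothetical. A real pipeline would do the equivalent with per-sample weights in the training loop.

```python
# Sketch of the reweighting idea from the thread: if training data makes a
# neutral identity term score "negative", downweight the examples that caused
# it and retrain. All data and names here are made up for illustration.
from collections import defaultdict

def train(examples):
    """Toy sentiment model: weighted average label per token."""
    totals, weights = defaultdict(float), defaultdict(float)
    for text, label, weight in examples:
        for token in text.split():
            totals[token] += label * weight
            weights[token] += weight
    return {t: totals[t] / weights[t] for t in totals}

# Hypothetical training data: (text, sentiment label, sample weight).
examples = [
    ("jew banker scandal", -1, 1.0),
    ("jew community festival", +1, 1.0),
    ("jew conspiracy theory", -1, 1.0),
    ("lovely festival today", +1, 1.0),
]

PROTECTED = {"jew"}  # terms the model must not learn to treat as negative

model = train(examples)
# First pass: "jew" scores negative purely from biased co-occurrence.
for term in PROTECTED:
    if model.get(term, 0) < 0:
        # Downweight each negative training example that contributed.
        examples = [
            (text, label, weight * 0.25 if term in text.split() and label < 0 else weight)
            for text, label, weight in examples
        ]

model = train(examples)
print(model["jew"])  # the score moves back toward neutral/positive
```

After reweighting, the protected term's score flips from negative to positive because the biased co-occurrences count for a quarter of their original weight. The open question the thread raises is how to pick the protected set and the weights without just encoding a different bias, which is why the lead tweet calls this dangerous rather than solved.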