Cc @Outsideness
They want to make AI “unbiased”, which means biased toward the left. That’s what that’s code for, right?
AI is already just a thing that does its thing, & that is “problematic” because it doesn’t fit the narrative. https://twitter.com/ariannahuff/status/1099837320314998787 …
-
Replying to @SelimSeesYou
"What if deep learning learns the wrong things?"
-
Replying to @Outsideness
I get that in the abstract sense: “prevent humans from coming to harm,” & then it never lets any of us leave the house because it isn’t safe. But they mean “predictive policing AI says the crime is in the black neighborhoods. That’s racist. Let’s tinker with it & fix this.”
-
Replying to @SelimSeesYou @Outsideness
So the only way to make predictive algos 'unbiased' is to wreck their predictive capacity?
-
Replying to @RupertVonRipp @SelimSeesYou
It should predict the verbal output of a Grievance Studies professor.
-
Replying to @Outsideness @RupertVonRipp
I know their in-before is “systemic/implicit racism,” but we all know that’s rubbish. The current crop of tech we have wasn’t created by programmers/scientists sitting around saying “let’s throw in some code so it’s mean to poc & women hahahaha.” They just make a thing that works.
-
And that’s the problem, of course. God forbid it work correctly if it doesn’t tell them what they want to hear.
-
This is why Harrison Bergeron remains the more conceptually accurate dystopia. AI isn't going to ignore inequalities; it's going to brutally enforce equality.
-
The people who want that are going to lose.