But... how is the AI accountable to the public????? How does one know whether it is inappropriately censoring things? Who is training the system? What reports will the public see about how and what types of content it automatically filters? https://twitter.com/mims/status/984104412531609600
-
This Tweet is unavailable.
-
Replying to @kimmaicutler @sonyaellenmann
Couldn't you ask the exact same questions about a filtering system run by humans? I'm not sure what they have to do with AI at all really.
-
AI just allows them to do bias laundering and disclaim any responsibility when it fails.
-
Andrew Wooster Retweeted Jane Calamari Comeback Ruffino:
You’ll still get results like this, but Zuck will be able to say before a committee “aw shucks, it’s the AI, sorry we promise we’ll make it better” https://twitter.com/janeruffino/status/984105622223323136?s=21
Andrew Wooster added, quoting Jane Calamari Comeback Ruffino @janeruffino:
In 2013 I wrote a FB Note about preventing domestic violence. It was removed by FB following a complaint by a convicted abuser. I briefly got it back so I could at least ctrl+c the text. He complained again and it was gone forever. It wasn’t defamatory. It was pretty helpful tbh.
-
A) The same thing would happen with human reviewers sometimes. B) Zuck's response in that case would be exactly the same. It would just be about training people instead of software. I am skeptical that "bias laundering" is a thing that works.
-
A) Yes, it would. But once something is operating at the scale FB is, its moderation policies -- regardless of whether they are carried out by humans, software, or both -- should have some kind of public accountability mechanism. B) Training according to what standards?