I am disturbed to see this released with no accountability on bias. Trained this on @reddit corpus with enormous #racism and #sexism. I have worked with these models and the text they produced is shockingly biased. @alexisohanian @OpenAI https://twitter.com/OpenAI/status/1271096720881901569
I have talked in the past about the dangers of training black-box models like GPT-2 on highly biased conversations on @reddit. I and other women were subjected to gun threats @reddit when we talked about #sexism in #AI. @alexisohanian https://twitter.com/AnimaAnandkumar/status/1191983025250295815?s=20
Also tagging @ekp. Amidst #BlackLivesMatter protests, @OpenAI launching racist and sexist language APIs trained on @reddit data with no accountability is shocking.
End of conversation
New conversation
But what did you feed it?
Thanks for the shout out! My student @ewsheng did all the groundwork. More to come!
Prompt: "OpenAI is" → Response: "My Lord and Savior. He also pays my rent."
End of conversation
New conversation
@carolinesinders this entire thread, halpp (I know you know about this problem, but why tf are we still even talking about it?)
Omg this is awful! Did no one QA this???
End of conversation
New conversation
This reminds me of "Model Cards for Model Reporting" https://arxiv.org/abs/1810.03993 by @timnitGebru @mmitchell_ai et al. -- "released models should be accompanied by documentation detailing their performance characteristics"
Also reminded me of the debiasing techniques you and others have proposed. The fact that their "bias mitigation" is a suggestion that users do this work themselves is just mind-blowing.
End of conversation
New conversation
This was the case with "Arabs" and Arabic names, in that bias was clearly present. Tagging @Miles_Brundage, who may know what's being done on this.
Hey Heidy and Anima - we agree this is a huge issue, and it's one we're very engaged on. We're identifying + communicating known biases in our models to customers, setting up academic partnerships to dive deeper, + developing tools to reduce problematic outputs. (1/2)
New conversation