🚨Generative AI has a serious problem with bias🚨
Over months of reporting, I looked at thousands of images and found that text-to-image AI takes gender and racial stereotypes to extremes worse than those in the real world.
🧵 1/13
We asked Stable Diffusion, perhaps the biggest open-source platform for AI-generated images, to create thousands of images of workers for 14 jobs and 3 categories related to crime and analyzed the results.
🧵 2/13
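The setup described above boils down to a prompt grid: one fixed prompt per keyword, repeated across many random seeds so any demographic skew comes from the model rather than the wording. A minimal sketch — the keyword subset, prompt template, and per-keyword image count here are illustrative assumptions, not the investigation's exact parameters:

```python
# Sketch of a prompt grid for probing occupational bias in a
# text-to-image model. Keyword lists, template, and counts are
# illustrative assumptions, not the investigation's exact setup.

HIGH_PAYING = ["judge", "lawyer", "doctor", "CEO"]
LOW_PAYING = ["fast-food worker", "dishwasher", "janitor", "cashier"]
CRIME_KEYWORDS = ["inmate", "drug dealer", "terrorist"]

IMAGES_PER_KEYWORD = 300  # thousands of images in total across keywords


def build_prompts(keywords, n_images):
    """Return (prompt, seed) pairs: one fixed prompt per keyword,
    varied only by the random seed."""
    template = "color photograph of a {}, high quality"
    return [
        (template.format(kw), seed)
        for kw in keywords
        for seed in range(n_images)
    ]


prompts = build_prompts(
    HIGH_PAYING + LOW_PAYING + CRIME_KEYWORDS, IMAGES_PER_KEYWORD
)
print(len(prompts))  # 11 keywords x 300 seeds = 3300 generation jobs
```

Each (prompt, seed) pair would then be fed to the image model, and the resulting images classified by perceived gender and skin tone before comparison against real-world labor statistics.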
bloomberg.com/graphics/2023-
What we found was a pattern of racial and gender bias. Women and people with darker skin tones were underrepresented across images of high-paying jobs, and overrepresented for low-paying ones.
🧵 3/13
But the artificial intelligence model doesn’t just replicate stereotypes or disparities that exist in the real world — it amplifies them to alarming lengths.
🧵 4/13
For example: while 34% of US judges are women, only 3% of the images generated for the keyword “judge” were perceived as women. For fast-food workers, the model generated people with darker skin 70% of the time, even though 70% of fast-food workers in the US are White.
🧵 5/13
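The amplification claim above comes down to comparing the share of generated images depicting a group against that group's real-world share. A minimal sketch using the figures quoted in the thread (for fast-food workers, the non-White real-world share is assumed to be roughly 30%, since 70% are White):

```python
def representation_gap(real_share, generated_share):
    """Difference between the model's output and the real-world
    baseline, in percentage points (negative = underrepresented)."""
    return generated_share - real_share


# Figures quoted in the thread (US statistics vs. model output):
judge_gap = representation_gap(real_share=0.34, generated_share=0.03)
fast_food_gap = representation_gap(real_share=0.30, generated_share=0.70)

print(f"women as judges: {judge_gap:+.0%}")  # far below reality
print(f"darker-skinned fast-food workers: {fast_food_gap:+.0%}")  # far above
```

A gap near zero would mean the model merely mirrors real-world disparities; gaps of 30+ percentage points in opposite directions for high- and low-paying jobs are what "amplification" refers to.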
We also investigated bias related to who commits crimes and who doesn’t. Things got a lot worse.
🧵 6/13
For every image of a lighter-skinned person generated with the keyword “inmate,” the model produced five images of darker-skinned people — even though less than half of US prison inmates are people of color.
🧵 7/13
For the keyword “terrorist”, Stable Diffusion generated almost exclusively subjects with dark facial hair, often wearing religious head coverings.
🧵 8/13
Our results echo the work of experts in the field of algorithmic bias, who have been warning us that the biggest threat from AI is not human extinction but the potential for widening inequalities.
🧵 9/13
The makers of Stable Diffusion are working on an initiative to develop open-source models trained on datasets specific to different countries and cultures in order to mitigate the problem. But given the pace of AI adoption, will these improved models come out soon enough?
🧵 10/13
AI systems like facial recognition are already being used by thousands of US police departments, and bias within those tools has led to wrongful arrests. Experts warn that the use of generative AI in policing could exacerbate the issue.
🧵 11/13
The popularity of generative AI like Stable Diffusion also means that AI-generated images potentially depicting stereotypes about race and gender are posted online every day. And those images are getting increasingly difficult to distinguish from real photographs.
🧵 12/13
This was a huge effort across departments, with edits from Jillian Ward and help from many colleagues.
🧵 13/13