Since it would likely reflect the inputs of the real world: yes. It also depends on how loose a definition of "racist" is used.
A simple yes/no reply lacks the context that's probably necessary, but it's an awesome question either way.
Excellent response
If it's trained on real world data, it's going to be racist. Because our real world (not just the US) is built on hundreds of years of institutional racism. An AI reflecting/trained on the world - as is - will be racist. It takes active effort to normalize.
Which isn't even hypothetical! We already have "racist" and "sexist" algorithms. https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/ https://www.bbc.com/news/technology-45809919
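The point above can be made concrete with a toy sketch (entirely hypothetical data and group names): a model that simply learns historical base rates from skewed records scores well on the past, yet reproduces the old disparity without any "active effort to normalize."

```python
# Hypothetical, illustrative data: historical hiring records where
# group "B" was hired far less often than group "A".
# Each record is (group, hired).
history = [("A", 1)] * 80 + [("A", 0)] * 20 + \
          [("B", 1)] * 20 + [("B", 0)] * 80

def base_rate(group, data):
    """Fraction of past positive outcomes for a group."""
    outcomes = [hired for g, hired in data if g == group]
    return sum(outcomes) / len(outcomes)

# A model that just predicts each group's historical rate minimizes
# error on the past data -- and thereby encodes the old skew.
print(base_rate("A", history))  # 0.8
print(base_rate("B", history))  # 0.2
```

No one wrote "discriminate" anywhere in that code; the disparity comes entirely from the training data, which is the thread's claim in miniature.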
I voted 'yes', but to be fair what I envision would not necessarily be 'racism' as much as 'unjust discrimination' against particular groups. It may not be race-based at all.
Definitely unjust. There's no bias in the system. Are you saying more bias, not less, equals less racism?
Trick question! No such thing as "without bias." There's bias we like and bias we don't.