Replying to @IrisVanRooij
A more lethal replaying of the early history of photo-film technology? :(
-
Replying to @kaznatcheev @IrisVanRooij
The irony is we sometimes hold the AI to a higher standard than people. Hopefully questioning why the AI "has to be" biased, and stopping it, can also lead to questioning why we can't help people be less biased, racist, etc.
-
Replying to @o_guest @IrisVanRooij
Good point. Side note: I think it is good to always hold AI to a higher standard than private persons. The danger with AIs isn't that they make decisions, but that the decisions are made at scale. A racist AI does much more damage than a typical racist uncle (unless the uncle is POTUS or some such).
-
Replying to @kaznatcheev @IrisVanRooij
Sure, but I think we agree that the biased training data, and the fact that the AI became racist, are due to people/the world being racist, so both can/should be addressed to really stop the issue, right?
-
In part I fully agree, but I think there's a very important point to be made: statistical asymmetries that are conducive to racism and discrimination are inevitable for any AI trained on masses of public data. But AIs don't have cognitive counterweights, as opposed to (nice) humans.
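A minimal sketch of the asymmetry being described, assuming scikit-learn: a toy classifier trained on data dominated by one group ends up far less accurate on the underrepresented group. The group names, feature shifts, and sample counts are all illustrative, not from any real system.
```python
# Toy demonstration: train a classifier on data where group A
# outnumbers group B 19:1, then test on balanced samples from each
# group. The per-group shift stands in for group-dependent feature
# distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Features centered at `shift`; the label boundary also depends on it.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Heavily imbalanced training set.
Xa, ya = make_group(9500, shift=0.0)  # group A, dominant
Xb, yb = make_group(500, shift=1.5)   # group B, underrepresented
clf = LogisticRegression().fit(np.vstack([Xa, Xb]),
                               np.concatenate([ya, yb]))

# The single learned boundary fits group A; group B pays for it.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(clf.score(X_test, y_test), 3))
```
On this toy data the dominant group scores near-perfectly while the underrepresented group hovers near chance, which is the "no counterweight" problem in miniature.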
-
"Inevitable" sounds like a weasel word used to shift blame from developers to "the world". The data scientists should've thought about how the dataset might be biased & accounted for it (with a better curated dataset) before going to scale. In this case, they should have had more PoC in the training data.
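One hedged sketch of what that pre-scale curation step could look like, assuming pandas; the `group` column name, the 950/50 split, and undersampling as the rebalancing strategy are all hypothetical placeholders for a real audit.
```python
# Audit group representation in a training set, then rebalance it
# before training at scale. Column names and numbers are hypothetical.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Return each group's share of the dataset."""
    return df[group_col].value_counts(normalize=True)

def rebalance_by_undersampling(df, group_col, seed=0):
    """Crude curation step: downsample every group to the size of the
    smallest one so no group dominates training."""
    n_min = df[group_col].value_counts().min()
    return (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(n=n_min, random_state=seed))
    )

# Hypothetical usage:
df = pd.DataFrame({
    "group": ["A"] * 950 + ["B"] * 50,
    "feature": range(1000),
})
print(audit_representation(df, "group"))        # A: 0.95, B: 0.05 -> red flag
balanced = rebalance_by_undersampling(df, "group")
print(audit_representation(balanced, "group"))  # A: 0.5, B: 0.5
```
Undersampling is only one option; collecting more data for the underrepresented group, as the tweet suggests, addresses the same audit result without discarding samples.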
-
Yeah, literally the hiring policies of the company could have solved this by hiring a more representative/diverse set of humans.