-
Good point. Side note: I think it is good to always hold AI to a higher standard than private persons. The danger with AIs isn't that they make decisions, but that their decisions happen at scale. A racist AI does much more damage than the typical racist uncle (unless the uncle is POTUS or some such).
-
Sure, but I think we agree that the biased training data, and the fact that the AI became racist, were due to people/the world being racist, so both can and should be addressed to really stop the issue, right?
-
In part I fully agree, but there's a very important point to be made: statistical asymmetries that are conducive to racism and discrimination are inevitable for any AI trained on masses of public data. And unlike (nice) humans, AIs don't have cognitive counterweights.
-
"Inevitable" sounds like a weasel word used to shift blame from developers to 'the world'. The data scientists should have thought about how the dataset might be biased and accounted for it (with a better curated dataset) before going to scale. In this case, that means more PoC in the training data.
-
Yeah, literally the hiring policies of the company could have solved this by hiring a more representative/diverse set of humans.
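This is the kind of pre-deployment check the last two tweets are pointing at: before training at scale, audit how well different groups are represented in the dataset and flag gaps to fix with better curation. A minimal sketch, assuming a dataset with per-example demographic annotations; the field names, example data, and `min_share` threshold are all hypothetical, purely for illustration.

```python
from collections import Counter

# Hypothetical, hand-labelled training examples; the "group" annotations are
# illustrative stand-ins for whatever demographic metadata the real dataset carries.
training_examples = [
    {"image_id": 1, "group": "white"},
    {"image_id": 2, "group": "white"},
    {"image_id": 3, "group": "white"},
    {"image_id": 4, "group": "black"},
    {"image_id": 5, "group": "asian"},
    # ...in practice, thousands of annotated examples
]


def representation_report(examples, group_key="group", min_share=0.25):
    """Compute each group's share of the dataset and flag under-represented groups.

    min_share is an arbitrary illustrative threshold, not an accepted standard.
    """
    counts = Counter(ex[group_key] for ex in examples)
    total = sum(counts.values())
    return {
        group: {
            "count": count,
            "share": count / total,
            "under_represented": count / total < min_share,
        }
        for group, count in counts.items()
    }


if __name__ == "__main__":
    for group, stats in representation_report(training_examples).items():
        flag = "  <-- collect more data" if stats["under_represented"] else ""
        print(f"{group}: {stats['count']} examples ({stats['share']:.0%}){flag}")
```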