Saying that bias in AI applications is "just because of the datasets" is like saying the 2008 crisis was "just because of subprime mortgages". Technically, it's true. But it's singling out the last link in the causality chain while ignoring the entire system around it.
3. If you do end up training a model on a biased dataset: will QA catch the model's biases before they make it into production? Does your QA process even take ML bias into account?
These are not data problems -- they are organizational and cultural problems. A biased dataset causing an issue is the outcome of the entire system, not an isolated failure. Team diversity will help with these things organically, but formal processes are necessary by now.
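One such formal process could be an automated bias gate in QA. A minimal sketch, assuming binary predictions grouped by a single sensitive attribute; the group names and the 0.1 gap threshold are illustrative assumptions, not a standard:

```python
# Minimal sketch of a QA gate that checks demographic parity
# before a model ships. Group labels and the 0.1 threshold
# are hypothetical; real checks would use the metrics and
# attributes relevant to the product.

def positive_rate(preds):
    """Fraction of positive (1) predictions."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate across groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

def qa_bias_gate(preds_by_group, max_gap=0.1):
    """Return (passed, gap); fails the release if the gap is too large."""
    gap = demographic_parity_gap(preds_by_group)
    return gap <= max_gap, gap

# Example: the model approves group_a far more often than group_b,
# so the gate fails and the release should be blocked for review.
preds = {"group_a": [1, 1, 1, 0, 1], "group_b": [0, 1, 0, 0, 0]}
passed, gap = qa_bias_gate(preds)
```

A check like this only works if it runs on every release candidate, which is exactly the kind of organizational commitment the thread is arguing for.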