There's no way to de-bias this data, since there are other kinds of bias in the data that we don't even think about, along with proxies for the biases we do try to remove. But all you have to do is not collect and retain it, and the entire problem goes away.
Who knew in 2021 the Matrix would turn out to be just a stupid matrix.
This is what I mean when I say working on AI ethics at places like Google is like doing climatology at Exxon-Mobil, or heading the coyote safety division at ACME. It removes an entire class of effective regulatory interventions from discussion.
Regulate data collection *and* use? Obviously. But this doesn't solve the ethics problem: it breaks as soon as someone just hard-codes `if (a) do_b()`. That can produce the same outcome and the same unethical/problematic/biased algorithms (and it has in the past; see the skin-color bias cases).
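The hard-coded rule this reply describes can be sketched in a few lines. This is a toy illustration, not anything from the thread, and the field name and values are hypothetical: the point is only that a biased decision needs no dataset or ML model at all.

```python
# Toy sketch of the reply's point: bias needs no retained data or trained model.
# Someone can hard-code the discriminatory outcome directly.
# (The "zip_code" field and the values below are hypothetical illustrations.)

def hard_coded_decision(applicant: dict) -> bool:
    # The "if(a) do_b()" case: a proxy attribute encodes the biased
    # outcome without any data collection to regulate.
    if applicant.get("zip_code") in {"60624", "48206"}:
        return False  # deny
    return True       # approve

print(hard_coded_decision({"zip_code": "60624"}))  # False
print(hard_coded_decision({"zip_code": "10001"}))  # True
```

Regulating data collection does nothing here, which is why this rule (and the bias it encodes) falls outside the "just don't collect it" remedy discussed above.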
I'm not claiming to solve the general ethics problem, just this new class of ML-derived ethics problems