Correlation between outcomes and race, gender, etc. is NOT bias per se. And yet that's what almost all claims of discrimination by machine-learned models are based on.
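A tiny illustration of that point (my own hypothetical sketch, not anything from a real system): if base rates genuinely differ between two groups, a perfectly calibrated predictor will reproduce the correlation with group membership without being "biased" in the sense of treating anyone inaccurately. The group names and base rates below are made up for the example.

```python
import random

random.seed(0)

# Assumed setup: outcomes correlate with group membership because the
# base rates differ in the data itself (60% vs 40% positive here).
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    base_rate = 0.6 if group == "A" else 0.4
    outcome = 1 if random.random() < base_rate else 0
    population.append((group, outcome))

# A "model" that just predicts each group's true base rate: it is
# perfectly calibrated, yet its scores correlate with group membership.
def predict(group):
    return 0.6 if group == "A" else 0.4

for g in ("A", "B"):
    members = [o for grp, o in population if grp == g]
    observed = sum(members) / len(members)
    print(f"group {g}: predicted {predict(g):.2f}, observed {observed:.2f}")
```

The predictions track the true rates for both groups, so the model is accurate and calibrated even though its outputs are correlated with group; the correlation comes from the world, not the model.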
-
I think the job of a model is much more than just being accurate. It needs to help us solve our issues. Great if it is accurate *and* can help us fix them. Why should I care about a super-accurate, black-boxy binary answer, say?
-
I don't think I disagree with you that incorporating our own biases into the system is a good idea. But "system design" is tremendously difficult, and accuracy is only a very small part of a successful solution.