Correlation between outcomes and race, gender, etc. is NOT bias per se. And yet that's what almost all claims of discrimination by machine-learned models are based on.
Replying to @pmddomingos
This seems to be a well-known counterexample, no? https://news.berkeley.edu/2019/10/24/widely-used-health-care-prediction-algorithm-biased-against-black-people/ The core bias here may be class-based rather than race-based, but it's a genuine error with a discriminatory result: the model underestimated medical risk for the poor due to a dumbly limited feature choice.
Using health care costs to predict risk is dumb, not biased. Precisely the problem with a lot of these stories is that they mistake dumbness for bias. The solution in this case is better feature choice, not "debiasing" the algorithm.
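The mechanism being debated above can be sketched in a few lines. This is a toy simulation with made-up numbers, not the actual study's data or model: two groups have identical underlying health need, but one spends less on care for the same need, so a model trained to predict *cost* systematically underrates that group's *risk* even though nothing in the fitting procedure is "biased".

```python
# Toy illustration (hypothetical numbers): a proxy label (cost)
# can yield discriminatory predictions without any biased algorithm.
import random

random.seed(0)

def simulate(spending_factor, n=10_000):
    """Return (mean true need, mean cost-based 'risk score') for a
    group whose observed cost is need * spending_factor plus noise."""
    needs = [random.gauss(50, 10) for _ in range(n)]
    costs = [need * spending_factor + random.gauss(0, 5) for need in needs]
    # A perfect predictor of cost just reproduces cost; the error
    # enters through the label choice, not through the model fit.
    return sum(needs) / n, sum(costs) / n

need_a, risk_a = simulate(spending_factor=1.0)  # full access to care
need_b, risk_b = simulate(spending_factor=0.7)  # constrained access

# Both groups have the same true need, but the cost-trained score
# ranks group B as lower-risk, so it would get fewer interventions.
print(f"group A: need ~ {need_a:.1f}, score ~ {risk_a:.1f}")
print(f"group B: need ~ {need_b:.1f}, score ~ {risk_b:.1f}")
```

Whether one calls this "bias" or "dumbness" is the thread's disagreement; the fix either way is the one named above, choosing a label and features that actually measure health need rather than spending.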