In many settings, algorithms are adopted as "advisors" to human decision-makers. For instance, algorithms "advise" judges on sentencing, and "advise" employers on hiring. 2/
Some argue that this is an opportunity to address social problems (e.g. race/gender disparities) by tweaking the algorithm to nudge decision-makers towards different actions (e.g. reduced disparities). We call this "algorithmic social engineering". 3/
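To make "algorithmic social engineering" concrete, here is a minimal hypothetical sketch; the function name, the group flag, and the weight `alpha` are all invented for illustration, not taken from any real system:

```python
# Hypothetical sketch of "algorithmic social engineering" (all names and
# the weight `alpha` are invented): the advice shown to the decision-maker
# is the predicted risk minus a preference adjustment chosen by the designer.
def advised_score(predicted_risk: float, in_disadvantaged_group: bool,
                  alpha: float = 0.1) -> float:
    nudge = alpha if in_disadvantaged_group else 0.0
    return predicted_risk - nudge  # a lower score nudges toward leniency
```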
The problem is that decision-makers may not share the same incentives as the activist/social-planner. E.g., a judge may prioritize crime control or reelection over reducing racial disparities. Indeed, this is why the social problem exists in the first place! 4/
Tweaking an algorithm à la "fair ML" does not change the decision-maker's preferences; it just makes the predictive component of the algorithm hard to decipher. The algorithm becomes an opaque mix of prediction and preferences. 5/
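A toy illustration of that opacity, using the same invented nudge as the sketch above (all numbers made up): once prediction and preference are blended into a single score, the decision-maker cannot recover the prediction from the advice alone.

```python
# Toy illustration (all numbers invented). With a hidden nudge of 0.1 for
# the disadvantaged group, two defendants with different predicted risks
# receive identical advice, so the score alone no longer reveals the prediction.
alpha = 0.1
score_a = 0.50 - 0.0    # predicted risk 0.50, no nudge applied
score_b = 0.60 - alpha  # predicted risk 0.60, nudged down
assert abs(score_a - score_b) < 1e-9  # same advice, different predictions
```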
Aware that they are being manipulated, what does the decision-maker do? They use the algorithm less. The result could be even further from the social goal than if a purely predictive algorithm had been used. 6/
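One way to see the backfire mechanism is a deliberately simplistic toy model (all weights and numbers invented, not from the thread): the judge averages the algorithm's advice with their own prior, and down-weights the advice once they suspect it is manipulated.

```python
# Deliberately simplistic toy model (all numbers invented): the decision is
# a trust-weighted average of the algorithm's advice and the judge's own
# (biased) prior; suspected manipulation lowers trust in the advice.
def judge_decision(advice: float, judge_prior: float, trust: float) -> float:
    return trust * advice + (1 - trust) * judge_prior

biased_prior = 0.80  # the judge's own assessment
with_pure  = judge_decision(advice=0.60, judge_prior=biased_prior, trust=0.9)  # 0.62
with_nudge = judge_decision(advice=0.50, judge_prior=biased_prior, trust=0.4)  # 0.68
# The nudged algorithm, used less, yields an outcome *further* from the
# nudge's goal (a lower score) than the purely predictive one would have.
assert with_nudge > with_pure
```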
Think of the Ban the Box literature -- in the presence of statistical discrimination, removing information can have a net negative impact on a disadvantaged group. Fair ML removes information by creating algorithms that are hard to decipher. It could backfire. 7/
More generally, if social problems stem from the incentives/preferences of decision-makers, there are much more direct ways to address them! Change the incentives, or change the decision-makers. Create penalties for judges with high racial disparities, elect new judges. 8/
The distinction between prediction and preferences has implications for software architecture, organizational structure, and regulation as well. The task of prediction and the choice of social goals are two distinct endeavors that should be treated separately. 9/9
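As a sketch of what that separation might look like in software (the class names and fields are illustrative, not a reference design): prediction lives in a component that only estimates, and social goals live in a separate, explicitly inspectable policy layer.

```python
# Illustrative architecture sketch (names invented): prediction and
# preferences are separate, independently auditable components.
from dataclasses import dataclass

class RiskPredictor:
    """Pure prediction: estimates risk, encodes no social goals."""
    def predict(self, features: dict) -> float:
        raise NotImplementedError  # e.g. a fitted statistical model

@dataclass
class SocialPolicy:
    """Explicit, inspectable preferences set by the accountable body."""
    leniency_nudge: float = 0.0  # applied transparently, not hidden in the model

    def recommend(self, predicted_risk: float, in_target_group: bool) -> float:
        return predicted_risk - (self.leniency_nudge if in_target_group else 0.0)
```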
Humans are biased too. Is there data or is this theoretical? (very cool btw)
That's a great paper. The fundamental problem reminds me of Persico's "Racial Profiling, Fairness, and Effectiveness of Policing," where the social aim of minimizing crime does not correspond to the officers' incentives of maximizing success rates in searches.