Excited to share this paper on how the language of measurement can unite and clarify issues in the fairness, accountability, and transparency in ML space. A common framework for an emerging field. Joint with @hannawallach
https://arxiv.org/abs/1912.05511
1/
This work draws on the social sciences to weigh in on a setting where social and technical problems are intertwined: fairness, accountability, and transparency in ML. (Work in progress. Feedback welcome!) 2/
The language of measurement helps us unpack assumptions in the design of computational decision-making systems — to diagnose, mitigate, and prevent harms. 3/
Harms emerge from a mismatch between constructs (e.g., "creditworthiness") and their operationalizations (e.g., credit history), or between competing theoretical understandings of those constructs. (And these constructs are there whether we want to talk about them or not. "Risk," "quality.") 4/
Assumptions are present in every step of the ML pipeline: data collection, feature design, choice of target variable, task design. Making these assumptions explicit through the language of measurement, and evaluating their validity, helps us know where to look for potential problems! 5/
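To make this concrete, here's a minimal sketch (toy data and names are my own, not from the paper) annotating where a measurement assumption enters each step of a "creditworthiness" pipeline:

```python
# Hypothetical sketch: each step below operationalizes part of the
# construct "creditworthiness"; comments name the implicit assumption.

records = [
    {"credit_history_years": 7, "missed_payments": 0, "defaulted": False},
    {"credit_history_years": 1, "missed_payments": 2, "defaulted": True},
]

# Data collection: assumes past applicants are representative of
# the future population the system will score.
dataset = records

# Feature design: assumes credit history is a faithful proxy for
# creditworthiness (questionable for thin-file applicants).
features = [[r["credit_history_years"], r["missed_payments"]] for r in dataset]

# Target variable: assumes the recorded outcome "defaulted" just *is*
# un-creditworthiness, collapsing a construct into one observation.
labels = [int(r["defaulted"]) for r in dataset]

# Task design: assumes a binary label is the right form for the
# decision in the first place.
for x, y in zip(features, labels):
    print(x, y)
```

None of these choices is wrong per se; the point is that each comment marks a validity question you can now ask explicitly.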
Ok ok, some constructs seem gnarly. "Privacy" or "fairness" are essentially contested: inherently ill-defined. But to the extent that we measure them anyway, using the language of measurement to unpack our assumptions gives us a powerful framework for interrogating them. 6/
So, you already assumed a precise mathematical definition of fairness, corresponding to a theoretical understanding? Excellent! A precise operationalization comes with a strong assumption about the theoretical construct and embedded values. (Plus earlier issues baked in upstream!) 7/
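A tiny illustration of that point (toy numbers of my choosing, not from the paper): two standard operationalizations of "fairness" can disagree on the same predictions, so picking one is itself a substantive assumption about the construct.

```python
# Two common operationalizations of the construct "fairness":
# demographic parity (equal positive-prediction rates across groups)
# and equal opportunity (equal true-positive rates across groups).

def positive_rate(preds):
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    # Fraction of truly-positive cases that receive a positive prediction.
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Group A: predictions and true outcomes
preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]
# Group B
preds_b, labels_b = [1, 1, 0, 0], [1, 0, 1, 0]

# Demographic parity holds: both groups are approved at rate 0.5 ...
dp_gap = abs(positive_rate(preds_a) - positive_rate(preds_b))

# ... but equal opportunity fails: TPR is 1.0 for A and 0.5 for B.
eo_gap = abs(true_positive_rate(preds_a, labels_a)
             - true_positive_rate(preds_b, labels_b))

print(dp_gap, eo_gap)  # 0.0 0.5
```

Same predictions, "fair" under one definition and not the other: the choice of metric encodes a theoretical stance and values.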
Finally, the focus of algorithms is often on allocative harms. What can measurement help us understand about civic capacity, or representational harms? (Credit to @profjennabednar at #ChoiceUMich) 8/
Want more on measurement, taxonomies of harms, plus examples from fatml + nlp? Check out our @fatconference tutorial in January with @sulin_blodgett, @s010n, @haldaume3, and @hannawallach
https://azjacobs.com/measurement
/end self promo