Excited that our paper on how various explainability algorithms are used in practice (https://arxiv.org/abs/1909.06342) was accepted to #FAT2020 @fatconference
-
Grateful for all of my co-authors: @alicexiang, Shubham Sharma @UTAustin, @adrian_weller @turinginst, Yunhan (Jack) Jia, @ankurtaly @fiddlerlabs, Joydeep Ghosh @CognitiveScale, @ruchir_puri @IBMResearch, @josemfmoura @CMU_ECE, and @pde331
-
Also thankful for those who shared thoughts and feedback with us: @hima_lakkaraju, @krishnagade, @bansalg_, @terahlyons, @krvarshney, @rajiinio, @frossi_t, @PeterLoPR, @saayelimukherji, @a_b_powell, @KarinaAlexanyan, @gabizij, @pjturcot, @ejette, @RosieCampbell, @nicole_rigillo
-
Here are some interesting findings from our interviews with over thirty organizations: 1. Feature importance explanations are the most popular type of explanation, with Shapley values being the most commonly adopted (the SHAP repo by @scottlundberg and @suinleelab helped with this)
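A minimal sketch (my illustration, not from the paper) of the kind of Shapley-value feature-importance workflow the SHAP library supports; the xgboost model and sklearn diabetes dataset here are stand-in assumptions for the example:

```python
# Illustrative sketch: Shapley-value feature importances with the SHAP library.
# The model (xgboost) and dataset (sklearn's diabetes) are stand-ins, not
# anything used in the paper.
import shap
import xgboost
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor().fit(X, y)

explainer = shap.TreeExplainer(model)    # exact Shapley values for tree models
shap_values = explainer.shap_values(X)   # one attribution per feature, per row
shap.summary_plot(shap_values, X)        # global feature-importance view
```
-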
2. Though most papers motivate explanations for "end users," we find that most explanation techniques are overwhelmingly used as sanity checks for ML engineers.
-
3. Most organizations do not have a clear goal for why they want "explainability." Some reported using it due to directives from higher-ups.
-
4. There are many technical limitations to deploying certain explanation techniques at scale (e.g., influence functions for sample importance are great in theory but computationally expensive to deploy).
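For context (not from the thread), the standard influence-function formulation of Koh & Liang (2017) makes the cost visible:

```latex
% Influence of upweighting training point z on the loss at a test point z_test:
\[
  \mathcal{I}(z, z_{\mathrm{test}})
    = -\,\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^{\top}
       H_{\hat\theta}^{-1}\,
       \nabla_\theta L(z, \hat\theta),
  \qquad
  H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta)
\]
```

Solving against the p-by-p Hessian (p = number of model parameters) for every training point is what makes this hard to run at production scale.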
-
We then provide a framework for deciding what type of explainability is right for your organization and raise concerns related to explainability (e.g., privacy, lack of causality).
-
I'm also at #NeurIPS2019 @NeurIPSConf in Vancouver this week if you want to discuss explainability, fairness, and human-AI teams. Feel free to reach out! I'll be presenting a subset of this as a poster at the #HCML2019 workshop on Friday (Dec. 13) in West Level 2, Room 223-224
-
Even more thrilled because this is my first paper since joining @CambridgeMLG @LeverhulmeCFI as a doctoral student and @PartnershipAI as a research fellow. #phdchat @AcademicChatter