Search results
  1. 18 Nov 2016

    Gender neutral pronouns get translated to gender stereotypes by Google translate

  2. Replying to

    Google translate: Turkish 3rd person pronoun "O"=gender-neutral. "O" is a doctor➡️"He" is a doctor. "O" is a nurse➡️"She" is a nurse.

  3. 16 Sep 2017

    Pretty damn excited about my new class on fairness, accountability, and transparency in machine learning!

  4. Crazy gender biases built into word embeddings used for countless AI language systems! paper:

  5. 26 Mar 2018

    Statistical studies of algorithmic fairness initially struggled for funding against AI takeover & existential risk. No longer: lobbyists now use it as an excuse not to regulate. See e.g. today's new nonsense GDPR AI doomsday report from the opaquely funded Centre for Data Innovation

  6. Replying to

    ^Testing. Brilliant bias demo. Teacher=she. Manager=he. President=he. Programmer=he. Engineer=he. Nurse=she. Anomaly: secretary=he.

  7. 13 Apr 2018

    Really nice blog post on measuring bias in text embedding/word vectors.
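The embedding bias mentioned in items 4 and 7 is commonly quantified by projecting occupation-word vectors onto a "he minus she" gender direction. A minimal sketch of that idea, using made-up toy vectors (real analyses use pretrained embeddings such as word2vec or GloVe; every number below is illustrative, not measured):

```python
import numpy as np

# Toy 4-dimensional word vectors for illustration only.
vectors = {
    "he":       np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":      np.array([-1.0, 0.2, 0.1, 0.0]),
    "engineer": np.array([ 0.6, 0.8, 0.3, 0.1]),
    "nurse":    np.array([-0.7, 0.7, 0.4, 0.1]),
}

def gender_score(word):
    """Project a word vector onto the normalized he-she direction.
    Positive = leans toward 'he', negative = leans toward 'she'."""
    direction = vectors["he"] - vectors["she"]
    direction = direction / np.linalg.norm(direction)
    return float(np.dot(vectors[word], direction))

for w in ("engineer", "nurse"):
    print(w, round(gender_score(w), 2))
```

With these toy numbers, "engineer" projects positive (male-leaning) and "nurse" negative (female-leaning), mirroring the stereotyped translations described in items 2 and 6.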

  8. 7 Jun 2018
  9. Replying to

    Paper uses AI to designate people as "criminals" and "non-criminals" from PHOTO OF THEIR FACE ALONE. Madness.

  10. 24 Nov 2016

    Opinion in PNAS: “The dangers of faulty, biased, or malicious algorithms requires independent oversight”

  11. 3 Feb

    Excited for this paper to be available now on work we did in 2017! "Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions"

  12. 26 Jan

    So happy to hear that our essay on algorithmic injustice is starting to get included on syllabi for courses on the moral & political implications of - if you are using our essay in your teaching, please be sure to let me & my coauthors know!

  13. 9 Jan
  14. 10 Dec 2019

    Hot off the press: Our case study on AI in healthcare. Tl;dr: developing ML systems is a ⚡️sociotechnical⚡️problem, people and institutions shape use, explainability is not the only means to accountability + much more!

  15. Interested in working on fair/interpretable ML () and reinforcement learning, and its applications to decision making in criminal justice, healthcare, and business? (NYU) and I are looking for postdocs! More details and application:

  16. 31 May 2019

    My presentation materials for "Interpretable Machine Learning with rsparkling" at are now online:

  17. 18 Feb 2019

    A brilliant & important paper. Should interest anyone concerned about AI bias, criminal justice reform, & how to interpret government records.

  18. How can AI systems deliver fair and accurate decision-making by adapting information collection to individual needs? , Bernardo Garcia, and I will present the "Active Fairness" framework in January

  19. 8 Nov 2018

    This is how we do it: 's model explanation dashboard which can be used w/ "black-box" or interpretable models. Cheatsheet attached :), walk-through video linked.

  20. 21 Oct 2018

    I was happy to sign on to support these universal guidelines for AI: & hope others will, too!
