Ben Green

@benzevgreen

Author of The Smart Enough City, Applied Math PhD candidate, Affiliate, Research Fellow.

Cambridge, MA
Joined June 2009

Media

  1. Jan 28

    Awesome talk by at on closing the AI accountability gap! The paper draws on several other domains to develop new approaches for AI accountability.

  2. Jan 28

    Kicking off the morning at , interrogates why technical projects fail to account for social realities. Enabling technical and social reform requires methodological reform toward “algorithmic realism.” Read the paper here:

  3. Jan 9

    "Algorithmic Realism" (w/ ) diagnoses dominant CS thinking as "algorithmic formalism" and explores how a shift to "algorithmic realism" (following a similar shift, last century, in the law) could lead to more socially beneficial algorithms.

    Show this thread
  4. Jan 9

    "The False Promise of Risk Assessments" explores why risk assessments are such a misguided tool for criminal justice reform, how to counter risk assessments to enable more substantive change, and what this tells us about the limits of algorithmic fairness.

    Show this thread
  5. Dec 13, 2019

    “We typically assume that there’s one best model, but in practice there can be many models that produce different results.” -

  6. Dec 12, 2019

    Hot off the digital presses: the AI Now 2019 Report is now live! Including AI trends from the past year, a discussion of new and emerging AI developments, and recommendations for governments, civil society, and researchers. Read it here:

  7. Dec 4, 2019

    📢🚨 A new report about the New York City Automated Decision System Task Force just dropped! If you're at all interested in the role and governance of algorithms in cities, you're going to want to read this.

  8. Dec 3, 2019

    And for a longer discussion of this topic, check out my working paper "Data Science as Political Action."

    Show this thread
  9. Dec 3, 2019

    In my paper for the AI for Social Good Workshop, I argue that "good" isn't good enough. CS attempts to do good lack both a definition of good and a theory of change for how to achieve it. These attempts to do good can cause significant harm.

    Show this thread
  10. Oct 19, 2019

    I'm honored to join with an incredible group of scholars and advocates urging HUD to withdraw its proposed rule creating a safe harbor for the use of algorithms in housing.

  11. Oct 1, 2019

    Exhibit A: if Warren tries to break up Facebook in the public interest, we're going to fight it. Exhibit B: people need to know that Facebook has the public's best interests at heart. Zuckerberg can't even keep his story straight in the same conversation.

  12. Oct 1, 2019

    Why does the of all places require that passwords be alphanumeric? The registration system won't allow the passwords that Safari auto-generates.

  13. Sep 24, 2019

    Fairness: Participants exhibited racial bias in their interactions with the risk assessment. The extent of these disparate interactions varied across treatments, but the disparities were not eliminated in any.

    Show this thread
  14. Sep 24, 2019

    Reliability: Our study participants were unable to effectively evaluate the accuracy of their own or the risk assessment’s predictions or to calibrate their reliance on the risk assessment based on its performance.

    Show this thread
  15. Sep 24, 2019

    Almost all of our treatments improved the accuracy of predictions, and there was quite a bit of variation across the different treatments. Yet none of the treatments led to better accuracy than the risk assessment alone.

    Show this thread
  16. Sep 24, 2019

    First, we posited three principles as essential to ethical and responsible algorithm-in-the-loop decision making. These principles relate to the accuracy, reliability, and fairness of decisions.

    Show this thread
  17. Sep 24, 2019

    Decision making is increasingly sociotechnical, yet we lack a thorough normative & empirical understanding of these processes. My new paper with Yiling Chen (forthcoming !) explores the principles & limits of algorithm-in-the-loop decision making.

    Show this thread
  18. Sep 24, 2019

    The Yelp filtered reviews have some real gems

  19. Sep 12, 2019
  20. Aug 9, 2019

    This is a *chef's kiss* case study in irresponsible engineering:
    1. Denying the potential social impacts of your software
    2. Pretending that the software won't affect people's behavior, despite marketing it as a tool to do just that
    3. Blaming lay users for any flaws or misuses
