Awesome talk by @rajiinio at #FAT2020 on closing the AI accountability gap! The paper draws on several other domains to develop new approaches for AI accountability. pic.twitter.com/M3NiFIYmaZ
-
Kicking off the morning at #FAT2020, @salome_viljoen_ interrogates why technical projects fail to account for social realities. Enabling technical and social reform requires methodological reform toward “algorithmic realism.” Read the paper here: https://www.benzevgreen.com/wp-content/uploads/2020/01/20-fat-realism.pdf pic.twitter.com/flKRO2vlqm
-
"Algorithmic Realism" (w/ @salome_viljoen_) diagnoses dominant CS thinking as "algorithmic formalism" and explores how a shift to "algorithmic realism" (following a similar shift, last century, in the law) could lead to more socially beneficial algorithms. https://www.benzevgreen.com/20-fat-realism/ pic.twitter.com/x2GTqqfMd7
-
"The False Promise of Risk Assessments" explores why risk assessments are such a misguided tool for criminal justice reform, how to counter risk assessments to enable more substantive change, and what this tells us about the limits of algorithmic fairness. https://www.benzevgreen.com/20-fat-risk/ pic.twitter.com/z99IyrnspS
-
“We typically assume that there’s one best model, but in practice there can be many models that produce different results.” - @berkustun pic.twitter.com/BOvZHmiEnE
-
Hot off the digital presses: the AI Now 2019 Report is now live! Including AI trends from the past year, a discussion of new and emerging AI developments, and recommendations for governments, civil society, and researchers. Read it here: https://ainowinstitute.org/AI_Now_2019_Report.pdf pic.twitter.com/NkGbdprGDx
-

A new report about the New York City Automated Decision System Task Force just dropped! If you're at all interested in the role and governance of algorithms in cities, you're going to want to read this. @AINowInstitute https://ainowinstitute.org/ads-shadowreport-2019.pdf pic.twitter.com/RxMCBeXoQG
-
And for a longer discussion of this topic, check out my working paper "Data Science as Political Action." https://arxiv.org/abs/1811.03435 pic.twitter.com/MwhT2RBtUy
-
In my paper for the AI for Social Good Workshop @NeurIPSConf, I argue that "good" isn't good enough. CS attempts to do good lack both a definition of good and a theory of change for how to achieve it. These attempts to do good can cause significant harm. https://www.benzevgreen.com/wp-content/uploads/2019/11/19-ai4sg.pdf pic.twitter.com/q0MQLaU4e4
-
I'm honored to join with an incredible group of scholars and advocates urging HUD to withdraw its proposed rule creating a safe harbor for the use of algorithms in housing. https://ainowinstitute.org/ainow-cril-october-2019-hud-comments.pdf https://twitter.com/AINowInstitute/status/1185324560331214849 pic.twitter.com/akX5B8W1Wl
-
Exhibit A: if Warren tries to break up Facebook in the public interest, we're going to fight it.
Exhibit B: people need to know that Facebook has the public's best interests at heart.
Zuckerberg can't even keep his story straight in the same conversation. https://www.theverge.com/2019/10/1/20892354/mark-zuckerberg-full-transcript-leaked-facebook-meetings pic.twitter.com/AnQ8uHxDRX
-
Why does the @TheOfficialACM of all places require that passwords be alphanumeric? The registration system won't allow the passwords that Safari auto-generates. pic.twitter.com/jhnhvn9plJ
-
Fairness: Participants exhibited racial bias in their interactions with the risk assessment. The extent of these disparate interactions varied across treatments but was not eliminated in any. pic.twitter.com/6kVWxt5Npz
-
Reliability: Our study participants were unable to effectively evaluate the accuracy of their own or the risk assessment’s predictions or to calibrate their reliance on the risk assessment based on its performance. pic.twitter.com/m3WOtgZxOo
-
Almost all of our treatments improved the accuracy of predictions, and there was quite a bit of variation across the different treatments. Yet none of the treatments led to better accuracy than the risk assessment alone. pic.twitter.com/ru9kQu1Yke
-
First, we posited three principles as essential to ethical and responsible algorithm-in-the-loop decision making. These principles relate to the accuracy, reliability, and fairness of decisions. pic.twitter.com/MD9A3nGfgn
-
Decision making is increasingly sociotechnical, yet we lack a thorough normative & empirical understanding of these processes. My new paper with Yiling Chen (forthcoming @ACM_CSCW!) explores the principles & limits of algorithm-in-the-loop decision making. http://www.benzevgreen.com/wp-content/uploads/2019/09/19-cscw.pdf pic.twitter.com/E6pFGOrSm4
-
Looking forward to speaking at tonight's @CenterForArch @AIA_NewYork event on mobility and smart cities! With all-star co-panelists @datasew, @juanfrans, and @shannonmattern. https://calendar.aiany.org/2019/07/15/mobility-spatial-agency-autonomy-in-the-new-urban-interface/ pic.twitter.com/EnXPe1i8Up
-
This is a *chef's kiss* case study in irresponsible engineering:
1. Denying the potential social impacts of your software
2. Pretending that the software won't affect people's behavior, despite marketing it as a tool to do just that
3. Blaming lay users for any flaws or misuses
pic.twitter.com/7nHCbY8etJ