Oren Etzioni from the Allen Institute for AI is talking about negative public discourse around AI. Taking a few (deserved) potshots at Nick Bostrom
-
-
-
I find AI optimists to be about as frustrating as AI pessimists. My position: AI is weird! Really fucking weird!!
-
Hmm, Etzioni seems to assume that the ultimate goal of AI is to emulate the human mind, which is soooo uninteresting to me
-
Now Etzioni is talking about a moral imperative to deploy AI tech in medicine, transport, etc bc "without it, people are dying".
-
But also, the more we automate systems, the more attack surface we expose. You make shit smart, you also make it hackable.
-
Would asking questions of the CEO of the Allen Institute for Artificial Intelligence constitute "punching down"? asking for a friend
-
Now: the panel on designing accountability and fairness into AI, interpreting mechanisms and outputs, biases and discrimination
-
We jump straight to Skynet when we talk AI ethics. It sucks all the oxygen out of the room. We need more granular discussions. -
@katecrawford -
Transparency in algorithmic determinations doesn't necessarily show you what the biases of an algorithm are over time -
@katecrawford -
Judges can't ask Qs of expert systems like an expert witness. they'll assume system output is science instead of expert opinion -
@jackbalkin -
If you're going to build AI systems to assist in legal cases, you need to account for the culture of the legal system -
@jackbalkin -
If AI systems are going to assist in a judicial setting, you will need trained computer scientists on hand (court employees?) -
@jackbalkin -
Now @jackbalkin compares the shift to AI expert systems to the culture shift in the creation of the administrative state in the early 20th C. -
The administrative state agents now need to be part policy experts, part programmers! -
@jackbalkin -
And @katecrawford says we need to start teaching "data ethics" -- I'm wondering how effective ethics courses in bio curricula have been? -
Need to deliver AI systems along with new professional norms. They go hand in hand. It's what's called a reprofessionalization -
@jackbalkin -
OMG someone in Q&A says learning algorithms can't have racial bias just because the outputs are different for different races ok bro
-
Ah good, the moderator chastised him for not asking a question and instead opining for a solid three minutes
-
I am glad I can wince at @thricedotted who is sitting next to me -
Technically speaking, computers aren't capable of racial bias. Technically, I am just a bag of neurons that fire in response to stimulus.
-
Heyyy, @gleemie shoutout at this panel for her labor analyses of digital menial laborers -