Having spent a lot of time over the past several years thinking about and studying ethics in technology, I am increasingly convinced that the discussions about ethics in technology have a major, irreconcilable blind spot.
Maybe we shouldn’t be trying to eliminate bias in AI. Maybe we should be designing for bias in the favor of the oppressed. Worth thinking about.
Should we be eliminating power dynamics or inverting power dynamics? If we try to scrub power dynamics from a system too early, in a neatly bounded space, we just let those power dynamics creep back in — unless those boundaries are impenetrable.
My next ethics talk may just be a single slide that says REVOLUTION
I am convinced that in a lot of cases, the point of having an AI or algorithm make decisions is not to achieve neutrality, but to cover one's ass. The algorithm makes the same shitty decisions a human would, but now you're like, sorry, can't blame a person for this.
Neutrality is often a knife's edge of unstable equilibria. I'm not convinced that it even makes sense to try to pretend we can do that successfully with AI for many categories of problems.
I think people often believe technology is immune to biases, when it actually amplifies them. For example, no one deliberately designs an automatic soap dispenser that doesn't work for black people; but the designers never tested for it, and so an entire group suddenly can't wash their hands.