The key AI bias problem isn't whether geniuses at Google can figure out how to reduce bias in their apps but instead that biased systems built by other companies will be considered infallible because it's "same tech as Google's".
@zeynep has been sounding this alarm for a while: https://twitter.com/benedictevans/status/1093914847828271104
Replying to @Carnage4Life @zeynep
As far as I can see she has mostly been talking about how somehow machine learning cannot be ‘audited’, which I think is a total blind alley. The problem is not how it works. It’s people who don’t understand how it works. Databases, a 1960s tech, have EXACTLY the same problem
Replying to @benedictevans @Carnage4Life
That's a different issue. I have been talking with a lot of AI/tech folk about how and why the key issue is implementation in the wild, more so than what Google/Facebook do.
But 1960s databases simply do not have "exactly the same problem." They have bugs and complexity, but you can debug them, and you know exactly how the classification works because someone programmed it symbolically. Machine learning is inherently different: it isn't symbolic.
Nowadays, you audit machine learning the way you audit a human: you look at outcomes and try to read "minds." (Humans can tell you why they think they did something, but they aren't always great at that.) You *can* audit a 1960s program or database by reading the code and debugging it.
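A minimal sketch of what that outcome-based auditing can look like in practice: instead of reading the model's internals, you compare its decisions across groups. All names and data below are invented for illustration; real audits use richer fairness metrics than this single gap.

```python
# Outcome-based audit sketch: we treat the model as a black box and
# inspect only its decisions, the way you would audit a human's record.

def approval_rate(decisions, group_labels, group):
    """Fraction of positive decisions (1 = approved) given to `group`."""
    in_group = [d for d, g in zip(decisions, group_labels) if g == group]
    return sum(in_group) / len(in_group)

def disparity(decisions, group_labels, group_a, group_b):
    """Absolute gap in approval rates between two groups."""
    return abs(approval_rate(decisions, group_labels, group_a)
               - approval_rate(decisions, group_labels, group_b))

# Toy audit log: 1 = approved, 0 = denied (hypothetical data)
decisions    = [1, 0, 1, 1, 0, 0, 1, 0]
group_labels = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = disparity(decisions, group_labels, "a", "b")
print(f"approval-rate gap: {gap:.2f}")  # 0.75 for "a" vs 0.25 for "b"
```

Note that this tells you *that* the system treats groups differently, not *why*; that is exactly the contrast with a symbolic 1960s program, where the "why" is sitting in the source code.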