The key AI bias problem isn't whether geniuses at Google can figure out how to reduce bias in their apps, but that biased systems built by other companies will be treated as infallible because they use the "same tech as Google."
@zeynep has been sounding this alarm for a while: https://twitter.com/benedictevans/status/1093914847828271104
-
But databases in the 1960s simply do not have "exactly the same problem." They have bugs and complexity, but you can debug them, and you know exactly how the classification works because someone programmed it symbolically. Machine learning is inherently different. It's not symbolic.
-
Nowadays, you audit machine learning the way you audit a human: you look at outcomes and try to read "minds." (Well, humans can tell you why they think they did something, but they aren't always great at that.) You *can* audit a 1960s program/db by looking at the code and debugging.
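To make the contrast concrete, here is a minimal sketch of what "auditing by outcomes" looks like: treat the model as a black box and compare its decision rates across groups, since you can't read its decision logic the way you could read a 1960s program's source. Everything here (the `opaque_model`, the applicant data, the group labels) is a made-up illustration, not any real system.

```python
# Hypothetical sketch: auditing a black-box classifier by its outcomes.
# Unlike a symbolically programmed 1960s system, we don't inspect the
# model's internals -- we only measure what it decides for whom.

def audit_outcomes(model, applicants):
    """Compute the approval rate per group, given only model outputs."""
    rates = {}
    for group in {a["group"] for a in applicants}:
        members = [a for a in applicants if a["group"] == group]
        approved = sum(1 for a in members if model(a))
        rates[group] = approved / len(members)
    return rates

# A stand-in "model" whose internals we pretend are opaque to the auditor.
def opaque_model(applicant):
    return applicant["score"] > 600

applicants = [
    {"group": "A", "score": 650},
    {"group": "A", "score": 590},
    {"group": "B", "score": 610},
    {"group": "B", "score": 580},
]

print(audit_outcomes(opaque_model, applicants))
```

The point of the sketch: the audit tells you *that* the outcome rates differ (or don't) between groups, but not *why* — which is exactly the kind of answer you'd get from observing a human decision-maker, and exactly what source-level debugging of a symbolic program would give you directly.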