It's worse than black boxes: the current generation of machine learning is not only opaque but excellent at finding proxy variables for prohibited categories. So you can publish the algorithm, publish the training data, exclude the prohibited categories, and still hide the bias in plain sight.
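The proxy-variable point can be made concrete with a minimal sketch on synthetic data. Everything here is hypothetical: the feature names (`zip_code`, `income`), the data-generating process, and the coefficients are invented for illustration. The protected attribute `group` is excluded from training, yet a correlated feature lets the model reproduce the biased outcomes anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: 'group' is the prohibited category.
group = rng.integers(0, 2, n)
# A lawful-looking feature that happens to correlate with group.
zip_code = group + rng.normal(0, 0.3, n)
# A feature independent of group.
income = rng.normal(50, 10, n)

# Historical outcomes are biased against group 1.
score = 0.05 * income - 2.0 * group + rng.normal(0, 1, n)

# Train WITHOUT the prohibited category: only zip_code and income.
X = np.column_stack([np.ones(n), zip_code, income])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

# The fit routes the historical bias through the proxy feature:
# the zip_code weight comes out strongly negative even though
# 'group' never appears in the training matrix.
print(f"zip_code weight: {coef[1]:+.2f}")
print(f"income weight:   {coef[2]:+.2f}")

pred = X @ coef
gap = pred[group == 1].mean() - pred[group == 0].mean()
print(f"predicted score gap (group 1 - group 0): {gap:+.2f}")
```

Auditing the published weights shows only a coefficient on `zip_code`; nothing in the model file says "prohibited category," which is exactly the "hidden in plain sight" problem.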
-
Replying to @Pinboard @mattyglesias
Plus, as the ML gets more complex and higher-dimensional, it becomes much more difficult for humans even to explain the patterns it is supposedly finding.
Replying to @nkl @mattyglesias
Yeah, that's what I mean by opacity. You get a ton of numerical weights on a linear-algebra object, but no mapping between those weights and human categories. It's like slicing open a brain and trying to see the thoughts inside.
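To see what "publishing the algorithm" actually exposes, here is a toy sketch: a tiny two-layer network with made-up shapes (20 inputs, 64 hidden units). The point is that full disclosure of the model amounts to dumping matrices of unlabeled numbers, none of which maps to a human concept.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 2-layer network: 20 inputs -> 64 hidden -> 1 output.
# Shapes are arbitrary; the point is what disclosure yields.
W1 = rng.normal(size=(20, 64))
W2 = rng.normal(size=(64, 1))

def predict(x):
    """Forward pass: the published model IS just these matrices."""
    return np.maximum(x @ W1, 0) @ W2  # ReLU, then linear readout

# "Transparency" here means 20*64 + 64*1 unlabeled parameters,
# with no dictionary from any weight to a human category.
n_params = W1.size + W2.size
print(n_params)  # 1344
```

Even this toy model has over a thousand free parameters; production models have millions to billions, which is the higher-dimensional version of the same opacity.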
8:24 AM - 12 Jun 2019