More diversity would be great for many reasons, but the problem is inherent to the method/data. Pretty clear we will need to find new technical methods and political will (and, for sure, diversify the people who design systems and spot issues, but the challenge is deeper).
And let's not even get started on fairness... Different definitions of fair are often in mathematical conflict (see the Kleinberg et al. paper). And, crucially, we're not using math here the way we use it to probe laws of nature.
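A toy numeric sketch of the kind of conflict Kleinberg et al. formalize (the counts here are made up, not from the paper): when two groups have different base rates, a classifier can match their precision, but then their false-positive rates generally cannot also match.

```python
# Hypothetical numbers illustrating the tension between two fairness
# definitions: predictive parity (equal precision) vs. equal false-positive
# rates, for two groups with different base rates.

def rates(positives, negatives, tp, fp):
    """Return (precision, false_positive_rate) for one group."""
    precision = tp / (tp + fp)
    fpr = fp / negatives
    return precision, fpr

# Group A: base rate 50% (100 pos / 100 neg).
# Group B: base rate 20% (40 pos / 160 neg).
prec_a, fpr_a = rates(100, 100, tp=80, fp=20)   # precision 0.8, FPR 0.20
prec_b, fpr_b = rates(40, 160, tp=32, fp=8)     # precision 0.8, FPR 0.05

assert abs(prec_a - prec_b) < 1e-9  # equal precision across groups...
print(fpr_a, fpr_b)                 # ...but the false-positive rates differ 4x
```

With equal precision, the group with the lower base rate ends up with a much lower false-positive rate; forcing the FPRs to match would break precision parity instead.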
New conversation
+1. Machine learning algorithms are biased by the data they were trained on, the same way humans are biased by their experiences. There's no such thing as an unbiased human, just one who isn't biased about a specific thing in a specific context. The same applies to ML.
Yep, and more and more solid papers are showing that there is no way out for ML. Post-hoc intervention is possible, as with humans, but that's a whole different thing.
End of conversation
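A minimal sketch of the point in this conversation, using invented hiring data: a "model" that just memorizes per-group label frequencies reproduces whatever skew its training data carries.

```python
from collections import Counter

# Made-up, deliberately skewed training data: group_a was mostly hired,
# group_b mostly rejected. The learner below has no opinion of its own;
# it inherits the skew directly from these examples.
train = ([("group_a", "hire")] * 90 + [("group_a", "reject")] * 10
         + [("group_b", "hire")] * 40 + [("group_b", "reject")] * 60)

def fit(data):
    """Count labels per group and predict each group's majority label."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = fit(train)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'}
```

Real models are subtler than a frequency table, but the mechanism is the same: the bias enters through the data, not through any explicit rule.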
New conversation
This is why it would be nice if we didn't use the same term, "algorithms," both in the traditional sense (quicksort, Euclid's GCD method, etc.) and for ML, which could be thought of as "algorithm constructors," which are much trickier to reason about and to predict the consequences of.
Was thinking the same; this terminology matters and we should strive not to conflate the two.
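The distinction drawn above can be sketched like this (the `train_threshold` learner is a hypothetical toy, not a real library): Euclid's algorithm behaves identically on every run, while a learning procedure constructs a new classifier whose behavior depends on the data it was fed.

```python
def gcd(a, b):
    """Euclid's algorithm: a classical algorithm, fixed by its code alone."""
    while b:
        a, b = b, a % b
    return a

def train_threshold(examples):
    """A toy 'algorithm constructor': picks the midpoint between the two
    class means and returns a brand-new classifier. Different training
    data produces a different classifier with different behavior."""
    lo = [x for x, y in examples if y == 0]
    hi = [x for x, y in examples if y == 1]
    t = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
    return lambda x: int(x > t)

clf1 = train_threshold([(1, 0), (2, 0), (8, 1), (9, 1)])  # threshold 5.0
clf2 = train_threshold([(1, 0), (2, 0), (3, 1), (4, 1)])  # threshold 2.5

# gcd is the same function every time; the two classifiers, built by the
# same constructor from different data, disagree on the same input.
print(gcd(48, 18), clf1(5), clf2(5))
```

Reasoning about `gcd` means reading its code; reasoning about `clf1` also requires reasoning about the data that shaped it, which is the trickier part the tweet points at.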
New conversation
This Tweet is unavailable.
-
In most cases ML (1) surfaces bias; and/or (2) focuses it via feedback loops; and/or (3) creates new biases by adding the ability to detect things at scale that we couldn't before (not hiring people prone to depression, for example). All of those are risks, though (1) is also an opportunity.
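Risk (2), the feedback loop, can be illustrated with a toy simulation (all numbers invented): two districts have identical true incident rates, but checks are sent mostly to whichever district has more recorded hits, and hits are only recorded where checks happen. The initial skew compounds.

```python
# Two districts with the SAME true incident rate, but a slightly skewed
# history of recorded hits (6 vs 4). Each round, most checks go to the
# current "leader", and recorded hits scale with checks, not with truth.
true_rate = {"district_a": 0.1, "district_b": 0.1}
observed = {"district_a": 6.0, "district_b": 4.0}

for _ in range(5):
    leader = max(observed, key=observed.get)
    for d in true_rate:
        checks = 80 if d == leader else 20      # attention follows past hits
        observed[d] += true_rate[d] * checks    # hits follow attention

share_a = observed["district_a"] / sum(observed.values())
print(round(share_a, 2))  # grows from the initial 0.60 toward 0.77
```

Even though both districts are identical in reality, district_a's share of recorded hits climbs every round: the system keeps finding incidents where it keeps looking, which is exactly the loop the tweet describes.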