If by "algorithms" we mean ML—and that's what we mean in 2019—ML is inherently and intrinsically bound with data, and it's increasingly clear, formally (formally, as in mathematically), that the problem of bias thus isn't solvable. https://arxiv.org/abs/1903.03862 and https://arxiv.org/abs/1609.05807
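The second link is the Kleinberg–Mullainathan–Raghavan impossibility result: when two groups have different base rates, a classifier cannot simultaneously satisfy calibration-style criteria (equal PPV) and equal error rates. A toy numeric check of the underlying identity (Chouldechova's relation between FPR, PPV, FNR, and prevalence — the numbers here are illustrative, not from the thread):

```python
def fpr(prevalence, ppv, fnr):
    # Identity relating the four quantities for any binary classifier:
    # FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Hold PPV (a calibration-like criterion) and FNR equal across groups;
# with different base rates the FPRs are then forced to differ.
fpr_a = fpr(0.3, 0.8, 0.2)  # hypothetical group A, base rate 30%
fpr_b = fpr(0.5, 0.8, 0.2)  # hypothetical group B, base rate 50%
print(fpr_a, fpr_b)         # unequal: no classifier can equalize all three
```

So "fixing bias" becomes a choice of which fairness criterion to sacrifice, not an engineering bug to patch.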
-
This Tweet is unavailable.
-
In most cases ML (1) surfaces bias; and/or (2) focuses it via feedback loops; and/or (3) creates new biases by adding the ability to detect, at scale, things we couldn't detect before (not hiring people prone to depression, for example). All three are risks, though (1) is also an opportunity.
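Point (2) can be sketched in a few lines. A toy model (my construction, not the thread's): two regions with identical true incident rates, but attention is allocated where past data points, and incidents are only recorded where attention goes — so a tiny initial skew in the data runs away:

```python
true_rate = [0.5, 0.5]   # both regions identical in reality
counts = [11.0, 10.0]    # tiny initial skew in recorded incidents

for _ in range(50):
    # Send all patrols where the historical data points.
    hot = 0 if counts[0] >= counts[1] else 1
    # Incidents are only recorded where a patrol is present,
    # so the favored region accumulates ever more "evidence".
    counts[hot] += 10 * true_rate[hot]

share = counts[0] / sum(counts)
print(round(share, 3))  # region 0's share of recorded incidents, from ~0.52
```

The model confuses "where we looked" with "where things happen" — the signature of a data feedback loop.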
Facebook, Twitter, and YouTube failed to take implications of ML recommendations seriously enough, and now White Supremacy and Religious Extremism are the Hip New Counter-Culture to subscribe to for imho too many of our youth.
Yep, that's an example of "ability to detect at scale"—something we couldn't do before. Look, chemistry is great, but its scientific debut brought enormous problems: fake/poisonous food, transformed war/explosives (hence the Nobel Peace Prize!). We grappled with them; we have the FDA etc.