If by "algorithms" we mean ML (and that's what we mean in 2019), ML is inherently and intrinsically bound up with data, and it's increasingly formally clear (formally, as in mathematically) that the problem of bias thus isn't solvable. https://arxiv.org/abs/1903.03862 and https://arxiv.org/abs/1609.05807
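A small numerical sketch (my own construction, not taken from either paper) of why the second link's impossibility result holds: for any calibrated score, where people with score s are positive with probability s, simple algebra forces (1 − p) · GFPR = p · GFNR, where p is the group's base rate and GFPR/GFNR are the generalized false positive/negative rates. So two groups with different base rates cannot share both error rates unless prediction is perfect. The group names and score mixes below are hypothetical.

```python
import random

def generalized_rates(scores, labels):
    """Generalized FPR = mean score among true negatives;
       generalized FNR = mean (1 - score) among true positives."""
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pos = [s for s, y in zip(scores, labels) if y == 1]
    return sum(neg) / len(neg), sum(1 - s for s in pos) / len(pos)

def simulate_calibrated_group(score_values, n, rng):
    """Draw n individuals; each gets a score s and a true label that is 1
       with probability exactly s, so the score is calibrated by construction."""
    scores, labels = [], []
    for _ in range(n):
        s = rng.choice(score_values)
        scores.append(s)
        labels.append(1 if rng.random() < s else 0)
    return scores, labels

rng = random.Random(0)
# Hypothetical groups: different score mixes imply different base rates.
for name, vals in [("A", [0.2, 0.4]), ("B", [0.5, 0.8])]:
    scores, labels = simulate_calibrated_group(vals, 200_000, rng)
    p = sum(labels) / len(labels)  # empirical base rate
    gfpr, gfnr = generalized_rates(scores, labels)
    # Identity forced by calibration: (1 - p) * GFPR == p * GFNR.
    print(name, round(p, 2), round((1 - p) * gfpr, 2), round(p * gfnr, 2))
```

Because the identity ties the error rates to the base rate, equalizing GFPR and GFNR across groups A and B would require equal base rates or zero error, which is the tension the paper formalizes.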
-
Widespread application of, and trust in, a potent technology is of course a risk when it's wrong, and yes, even when it's right, depending on what's being detected. Some of it may aid in fixing earlier forms of discrimination. First, though, take the risk and the transition seriously.
-
Facebook, Twitter, and YouTube failed to take the implications of ML recommendations seriously enough, and now White Supremacy and Religious Extremism are the Hip New Counter-Culture to subscribe to for, imho, too many of our youth.
-
Yep, that's an example of "ability to detect at scale": something we couldn't do before. Look, chemistry is great, but its scientific debut brought enormous issues: fake/poisonous food, transformed war/explosives (thus the Nobel Peace Prize!). We grappled with them; we have the FDA, etc.
End of conversation
New conversation
-
This Tweet is unavailable.
-
Okay, great argument. I concede. No scientific or technological advancement matters, or creates risks that we've historically had to grapple with, or creates complex and contradictory consequences that play out over time when we try to rein some in through regulation and further advances.
End of conversation
-
And creates the impression of objectivity.