Or, ML algorithms will, without anyone understanding what's going on, weed out people prone to depression or women more likely to become pregnant in the next two years. It's ML, not just bureaucratic rules codified into software.
Why is an institution implementing an ML system without understanding how it could be wrong any different to an institution implementing a database without having that understanding?
You cannot even reverse-engineer or debug it. I think a better grouping is that ML is opaque like humans, but as humans we have some insight into human foibles. Traditional databases (or programs) are like bureaucratic rules: they can be a maze, but you can potentially figure them out. 1/2
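The contrast can be sketched in a toy example (all names, rules, and numbers here are illustrative, not from any real system): a bureaucratic rule is maze-like but traceable, while a learned model encodes the same decision as coefficients with no branch to point at.

```python
# A "bureaucratic rule": tedious, but every branch is inspectable.
# A persistent auditor can map the maze and say exactly why you were denied.
def loan_rule(income, debt, years_employed):
    if income < 30_000:
        return "deny"
    if debt / income > 0.4 and years_employed < 2:
        return "deny"
    return "approve"

# An ML-style model: the same kind of decision, encoded as learned numbers.
# There is no rule to read; the "reason" is smeared across the weights.
weights = [0.00003, -1.7, 0.25]   # hypothetical learned coefficients
bias = -0.9

def loan_model(income, debt_ratio, years_employed):
    score = (weights[0] * income
             + weights[1] * debt_ratio
             + weights[2] * years_employed
             + bias)
    return "approve" if score > 0 else "deny"
```

Both functions are tiny here, but only the first one answers "why?" in a form a human or a court can follow; scale the second up to millions of weights and the opacity is total.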
This Tweet is unavailable.
We have a handle on why and how the NYT news side behaves, as well as the op-ed page. They even write editorials explaining their reasoning (which you can further analyze), and we have whole fields of study on why and how institutional power operates. None of that exists yet for ML.
This Tweet is unavailable.
There is an entire industry geared toward influencing mass media, and newspapers have had a lot of pressure on them: from subscribers, protests, regulation in many countries, journalism schools, codes of ethics, and flak. Media is both analyzable and often pressured.
And so are all flawed institutions. None of this supports your basic assertion that the risk of misuse or misunderstanding of ML is different in principle from the ways all other techs & processes are subject to misuse or misunderstanding. "It's not auditable" is not good enough.
ML is going to allow us to detect things at scale, and cheaply, that we could not detect before. That, in the hands of the powerful, can be a terrible tool. I can write the awesome scenarios too, but until recently you just couldn't detect, say, gay or rebel or Uyghur *at scale* and cheap. +
Replying to @zeynep @benedictevans and
Plus, ML will allow us to classify and optimize at scale, potentially better at it than humans, but opaquely... Humans hire from alumni networks, have gender/race biases in hiring, and are credentialist. What is ML going to weed out? We don't even know where to begin to look. +
Notice that my argument isn't (1) humans are great, (2) AI is biased (that's a problem, but it's almost the easier problem because it's mainly a political problem). I'm saying ML, the current form of AI, will work, and work well, at scale and cheaply and in ways we don't understand.
Replying to @zeynep
An interesting paper on auditing deep learning systems. tl;dr: testing and guessing. https://scholarship.tricolib.brynmawr.edu/handle/10066/18664
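"Testing and guessing" can be made concrete: without access to a model's internals, an auditor can only probe it from the outside. A minimal sketch of one such probe, paired testing (the model here, `black_box`, and its hidden penalty are entirely hypothetical stand-ins for a deployed system you can call but not inspect):

```python
import random

# Hypothetical deployed system: we can only call it, not read it.
# Its hidden bias quietly penalizes group "B".
def black_box(features):
    income, group = features
    score = income / 100_000 - (0.3 if group == "B" else 0.0)
    return "approve" if score > 0.5 else "deny"

def audit_by_probing(model, incomes):
    """Paired probing: flip only the group attribute and count how
    often the outcome flips with it. High flip rates suggest the
    attribute matters, but say nothing about *why*."""
    flips = sum(
        1 for income in incomes
        if model((income, "A")) != model((income, "B"))
    )
    return flips / len(incomes)

random.seed(0)
incomes = [random.uniform(0, 100_000) for _ in range(1000)]
rate = audit_by_probing(black_box, incomes)
print(f"{rate:.0%} of probes changed outcome when only the group flipped")
```

Note what the audit delivers: an estimate that group membership changes outcomes, not a mechanism or a rule you can contest. That gap between measuring behavior and understanding it is exactly the opacity being debated in this thread.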
Replying to @EoinProut @zeynep
There is a big difference between deep learning and other AI methods. Deep learning is pretty much a black box.
End of conversation