We have a handle on why and how the NYT news side behaves, as well as the op-ed page. They even write editorials explaining their reasoning (which you can further analyze), and we have fields of study on why and how institutional power operates. Nothing like that exists yet for ML.
This Tweet is unavailable.
This Tweet is unavailable.
There is an entire industry geared toward influencing mass media, and newspapers have had a lot of pressure on them: from subscribers to protests to regulation in many countries, to journalism schools, codes of ethics, and flak. Media is both analyzable and often pressured.
And so are all flawed institutions. None of this supports your basic assertion that the risk of misuse or misunderstanding of ML differs in principle from the ways all other technologies and processes are subject to misuse or misunderstanding. 'It's not auditable' is not good enough.
ML is going to allow us to detect things at scale, and cheaply, that we could not before. That, in the hands of the powerful, can be a terrible tool. I can write the awesome scenarios too, but until recently you just couldn't detect, say, gay or rebel or Uyghur, *at scale* and cheaply. +
Replying to @zeynep @benedictevans and
Plus, ML will allow us to classify and optimize at scale, and potentially be better at it than humans, but opaquely... Humans hire from alumni networks, have gender/race biases in hiring, and are credentialist. What is ML going to weed out? We don't even know where to begin to look. +
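To make the "don't know where to begin to look" point concrete, here is a toy sketch (all data, feature names, and the "alumni network" proxy are hypothetical) of a model fit to past hiring decisions: it reproduces whatever pattern is in the labels, including a proxy feature, without anyone writing that rule down.

```python
# Toy sketch: a trivial "hire" model fit to past (hypothetical, biased)
# decisions. It learns to reproduce whatever pattern is in the labels --
# here, feature index 1 standing in for, say, alumni-network membership --
# even though nobody wrote that rule explicitly.
from collections import Counter

# (skill, network_member) -> hired; hypothetical past decisions
past = [((1, 1), 1), ((0, 1), 1), ((1, 0), 0), ((0, 0), 0)]

def majority_by_feature(data, idx):
    """For each value of feature `idx`, return the majority label."""
    buckets = {}
    for x, y in data:
        buckets.setdefault(x[idx], []).append(y)
    return {v: Counter(ys).most_common(1)[0][0] for v, ys in buckets.items()}

# The model "discovers" that network membership alone predicts hiring.
print(majority_by_feature(past, 1))  # prints {1: 1, 0: 0}
```

The point of the sketch: the bias lives in the training labels, and nothing in the learned summary announces itself as a bias; you have to know to look at that feature.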
You keep making this assertion. I keep pointing out why it’s flawed. This would be a more productive conversation if you could respond to that.
When we take formal rules and put them in a database (say the Canadian immigration system: it adds points; doesn't matter whether by hand or by computation), or even when we automate a fairly well-understood system (flying), we have ways of debugging and troubleshooting that we don't have for ML. +
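For contrast, a points-based system like the one described is just explicit rules, so a wrong score can be traced to the exact line responsible. A minimal sketch, with entirely hypothetical point values:

```python
# A minimal sketch of a formal points-based system (hypothetical point
# values, loosely in the style of a points-based immigration scheme).
# Every decision is traceable: if an applicant is scored wrongly, we can
# step through these rules and find the exact one at fault.

def score_applicant(age, years_education, language_test_passed):
    """Return a points total under explicitly written rules."""
    points = 0
    if 18 <= age <= 35:
        points += 12                         # hypothetical working-age bonus
    points += 2 * min(years_education, 10)   # capped education credit
    if language_test_passed:
        points += 20
    return points

print(score_applicant(age=29, years_education=8, language_test_passed=True))  # prints 48
```

Whether this runs by hand or on a computer, the rules are the same human-written artifact, which is what makes it auditable in a way a learned model is not.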
Replying to @zeynep @benedictevans and
So the question is: do you think we will have explainable/interpretable AI? I'm on the side of: this doesn't seem any more likely than using brain scans to understand humans. ML is classification that works not via rules we wrote (not Symbolic/Minsky-style code). That really is different! +
We use ML because it can do things that we cannot hand-code by writing rules or instructions. It's exactly because it's so different that it is so powerful and useful, and spread so widely in just a few years (once it had data to eat).
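A minimal illustration of "rules we didn't write": a perceptron trained on toy data (all values hypothetical). The learned "rule" is just a few numbers; nothing human-readable explains why it classifies as it does.

```python
# A minimal sketch of learning a classifier from data rather than writing
# rules: a perceptron on a toy linearly separable set (all data hypothetical).
# Nobody writes the decision rule; it emerges as numeric weights, which is
# why inspecting *why* it classifies as it does is harder than reading code.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                     # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)

# The "rule" is just these learned numbers, not human-readable logic.
def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(predict(0.15, 0.15), predict(0.85, 0.85))  # prints: 0 1
```

Contrast this with the points system above the thread describes: here there is no rule to read, only weights to probe, and that gap is exactly what the interpretability question is about.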