Yep, this is what's happening. Big transition. "The technology is reshaping how some companies approach recruiting, hiring and reviewing workers, offering employers an unrivaled look at job candidates through a new wave of invasive psychological assessment and surveillance."
Of course, human hiring is far from perfect. But this is like willy-nilly introducing a cat into an ecosystem that never had one, in order to control some other pest. Problems if it works, problems if it doesn't, and the inability to tell which is which (since much of this is ML) is a core issue.
This Tweet was deleted by the Tweet author.
We had this disagreement over Twitter before. That ML models aren't interpretable (except for obvious things, which wouldn't need an ML model if that's all there was) doesn't seem to be controversial among practitioners or CS professors.
I look at the interpretability work, and it seems promising in things like visual systems where humans, too, can literally eyeball what's going on in the layers, but not for much else. If I see work that successfully interprets and rules out, I'll change my mind. Please do send.
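(For context, not part of the thread: one of the visual-interpretability techniques alluded to here is gradient saliency, where the gradient of a model's score with respect to its input shows which input features the prediction is most sensitive to. A minimal sketch, using a tiny hand-rolled ReLU network; all weights and dimensions are made-up assumptions for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))  # illustrative weights: input dim 4 -> hidden dim 8
W2 = rng.normal(size=8)       # hidden dim 8 -> scalar score

def score(x):
    """Forward pass: scalar score of a 2-layer ReLU net."""
    z = W1 @ x
    return W2 @ np.maximum(z, 0.0)

def saliency(x):
    """Gradient of score w.r.t. the input, by manual backprop:
    d(score)/dx = W1^T (W2 * 1[z > 0])."""
    z = W1 @ x
    return W1.T @ (W2 * (z > 0))

x = rng.normal(size=4)
sal = saliency(x)  # one sensitivity value per input feature
```

For an image model the same gradient, reshaped to the image grid, is what gets "eyeballed" as a saliency map; the catch the thread raises is that this trick has no obvious analogue for non-visual models.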
This Tweet was deleted by the Tweet author.
Don’t think search and maps are driven by ML? IIRC the last paper I read on maps was about building huge lookup tables of routes. (Yes, this is the curse of AI: anything that works is no longer called AI.) But seems to be talking about neural networks.
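(For context, not the specific paper the tweet mentions: one classical way to precompute a "lookup table of routes" is all-pairs shortest paths. A minimal Floyd-Warshall sketch over a toy road graph; the node names and edge weights are invented for illustration.)

```python
INF = float("inf")
nodes = ["A", "B", "C", "D"]
# Directed edge weights (e.g. travel times); a missing edge means unreachable.
edges = {("A", "B"): 2, ("B", "C"): 3, ("A", "C"): 10, ("C", "D"): 1, ("B", "D"): 7}

n = len(nodes)
idx = {v: i for i, v in enumerate(nodes)}
dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
nxt = [[None] * n for _ in range(n)]  # next-hop table for route reconstruction
for (u, v), w in edges.items():
    dist[idx[u]][idx[v]] = w
    nxt[idx[u]][idx[v]] = idx[v]

# Floyd-Warshall: O(n^3) once, then every query is a table lookup.
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]
                nxt[i][j] = nxt[i][k]

def route(u, v):
    """Read a full route out of the precomputed next-hop table."""
    i, j = idx[u], idx[v]
    if i != j and nxt[i][j] is None:
        return None  # unreachable
    path = [u]
    while i != j:
        i = nxt[i][j]
        path.append(nodes[i])
    return path
```

The point of the precomputation trade-off: answering a routing query becomes a walk through a table rather than a graph search, which is the kind of "huge lookup table" engineering that stops being called AI once it works.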
This Tweet was deleted by the Tweet author.
Hmm. Ok, I think I see what you're getting at. You're saying that real-world applications will have more than just non-interpretable bits (e.g. neural nets), and the parts around them will be 100% debuggable and improvable, like any other software system.
This Tweet was deleted by the Tweet author.