Q: I have an unexplainable AI that prevents 1000 violent crimes and an explainable one that prevents 500. What do I do? A: Allow 500 people to be raped, beaten and murdered. I wish AI ethics people would consider the utilitarian effects of their grand pronouncements. https://twitter.com/random_walker/status/1051586184399470592
Replying to @stucchio
I agree that this totalitarian argument for advocating 'explainable AI' is stupid, but in practice it may often be possible to make the classifiers and models explainable, and doing so may actually improve their performance.
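To illustrate that last claim, here is a minimal sketch, not from the thread: a black-box random forest is distilled into a shallow decision tree (an "explainable" surrogate), and both are scored on held-out data. The dataset is synthetic and the model choices are illustrative assumptions; in practice the surrogate often loses surprisingly little accuracy.

```python
# Minimal sketch: distill a black-box classifier into a small,
# inspectable decision tree and compare held-out accuracy.
# All data here is synthetic; models are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": a random forest trained on the true labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# "Explainable" surrogate: a shallow tree trained to mimic the
# forest's predictions (model distillation).
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black box test accuracy:", black_box.score(X_test, y_test))
print("surrogate test accuracy:", surrogate.score(X_test, y_test))
```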
How would you know that one AI actually prevents crime? How would you even prove counterfactuals in the unexplainable case?
Replying to @blubberquark @Plinz
The same way you do it in the explainable case: careful backtesting/forward testing/etc.
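A minimal sketch of the kind of backtest being proposed, assuming synthetic data and illustrative model choices (none of this is from the thread): score an opaque predictor and an explainable one on held-out historical outcomes and compare a metric such as ROC AUC.

```python
# Minimal backtest sketch: compare an opaque and an explainable
# predictor on held-out outcomes. Data, models, and metric are
# all illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, n_features=30, random_state=1)
# Hold out data as the "backtest" set (a random split here; a real
# backtest would split on time to avoid lookahead).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

opaque = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)
explainable = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("black box", opaque), ("explainable", explainable)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: held-out AUC = {auc:.3f}")
```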
Now that I understand what you mean, I agree with it even less.
Replying to @blubberquark @Plinz
Would you prefer an A/B test? What measurement - if any - would convince you that a black box predictor is outperforming an "explainable" one?
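One hedged sketch of what such an A/B test could measure, with made-up numbers: randomize cases between the two predictors and compare the rate of bad outcomes with a two-proportion z-test (here via statsmodels, an assumed dependency).

```python
# Minimal A/B test sketch: compare incident rates across two
# randomized arms. The counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

incidents = [120, 155]   # bad outcomes: black box arm, explainable arm
cases = [10000, 10000]   # cases randomly assigned to each arm

stat, p_value = proportions_ztest(incidents, cases)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value with fewer incidents in the black-box arm would be
# one concrete answer to "what measurement would convince you".
```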
Can we at least agree on an extreme case? Abolishing courts and replacing them with black box predictors would be bad, even if backtesting shows that according to some metric they perform better. While we're at it: How do you get ground truth for backtesting?
Why would that be bad?