We had this disagreement on Twitter before. That ML models aren't interpretable (except for obvious things, which wouldn't need an ML model if that were all there was) doesn't seem to be controversial among practitioners or CS professors.
And the shift from what's called symbolic programming to connectionist approaches is a big one. It's a different animal re: interpretability. There is a program of research, but there's one for the brain too. Arguably we'll never fully be able to interpret either.
-
I actually think the question of interpretability is probably ill-posed. I don't think it's an accident that much of the open theory asks what we can guarantee about models (e.g., algorithmic fairness, statistical guarantees for deep learning, etc.)
-
Even prima facie, these sorts of tools _look_ very different from the sorts of tools we'd use to debug "normal" systems. That's where a lot of the discomfort comes from: if we don't know what we can guarantee about models, what exactly does that mean for us in practice?
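To make the contrast concrete, here's a minimal sketch (all names hypothetical, not from the thread) of one such tool: permutation importance. Unlike a breakpoint or stack trace, it treats the model as a black box and returns a statistical sensitivity score per input, i.e. it says *which* inputs matter, not *why*.

```python
import random

# Hypothetical stand-in for an opaque learned model: we can query it,
# but we treat its internals as unreadable.
def model(x):
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2]

def permutation_importance(f, X, trials=30, seed=0):
    """Score each feature by how much shuffling it perturbs f's output.

    This is the flavor of tool model debugging offers: a statistical
    summary of sensitivity, not a line-by-line explanation.
    """
    rng = random.Random(seed)
    baseline = [f(x) for x in X]
    scores = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(trials):
            col = [x[j] for x in X]
            rng.shuffle(col)  # break the link between feature j and the rest
            perturbed = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            total += sum(abs(f(p) - b) for p, b in zip(perturbed, baseline)) / len(X)
        scores.append(total / trials)
    return scores

data_rng = random.Random(1)
X = [[data_rng.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
scores = permutation_importance(model, X)
# Feature 0 dominates and feature 2 scores ~0; the tool ranks inputs
# by influence but offers no guarantee about the mechanism inside.
```

Compare that with a debugger on a hand-written function, where you can step to the exact line that produced a wrong value. The gap between those two experiences is the discomfort in question.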