Super excited to publish "The Building Blocks of Interpretability," a @distillpub article where we explore rich user interfaces for understanding neural networks. https://distill.pub/2018/building-blocks
-
Our key insight is that we should be *combining* interpretability techniques (not studying them in isolation). E.g., with feature vis + attribution, we can compare two classifications (floppy ears apparently distinguish types of dog!). https://distill.pub/2018/building-blocks
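Conceptually, the combination looks something like this in lucid (the library behind the article's notebooks). This is a minimal sketch, not the notebook's exact code: the layer name, class indices, and the `channel_attr` helper are all illustrative, and the linear activation × gradient attribution is the approximation the article describes.

```python
import tensorflow as tf
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()  # GoogLeNet, the network used throughout the article
model.load_graphdef()

def channel_attr(img, layer="mixed4d", class_a_idx=0, class_b_idx=1):
    """Per-channel attribution of `layer` to the logit difference class_a - class_b.

    Linear approximation: activation * gradient, summed over spatial positions.
    `layer` and the class indices are placeholders -- look up the logit indices
    of the two classes you actually want to compare.
    """
    with tf.Graph().as_default(), tf.Session():
        t_input = tf.placeholder_with_default(img, [None, None, 3])
        T = render.import_model(model, t_input, t_input)
        logits = T("softmax2_pre_activation")[0]
        score = logits[class_a_idx] - logits[class_b_idx]
        acts = T(layer).eval()
        grad = tf.gradients(score, T(layer))[0].eval()
    return (acts * grad)[0].sum(axis=(0, 1))  # one attribution score per channel
```

Channels with large positive scores support the first class; feature-visualizing the top channels is what surfaces things like the floppy-ear detectors.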
-
One of my favorite parts of the article is when we begin to formalize this insight as a design space. Low-hanging future work: enumerate all interpretability interfaces and conduct comparative evaluations to understand what they are/aren't good for. https://distill.pub/2018/building-blocks
-
For each of our interface ideas, we also provide Colab notebooks so that you can try them out with your own input images. E.g., https://colab.research.google.com/github/tensorflow/lucid/blob/master/notebooks/building-blocks/AttrChannel.ipynb reproduces the class comparison UI. HackerNoon has a good overview of how to run them on Google's GPUs: https://hackernoon.com/train-your-machine-learning-models-on-googles-gpus-for-free-forever-a41bd309d6ad
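If you want a minimal starting point before diving into the full notebooks, lucid's README-style quickstart is enough to render a single feature visualization (run `!pip install -q lucid` in a Colab cell first; the layer:channel choice below is arbitrary):

```python
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load GoogLeNet (InceptionV1), the network the article's interfaces are built on.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally excite one channel of a hidden layer.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```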
-
It's been a blast working on this article with @ch402, @enjalot, @shancarter, @ludwigschubert, @hypotext, and @zzznah. I've learned a lot from them these past few months!
-
That's... one hell of a line-up. Congratulations all of you for somehow finding your way into the same room