I don't know anything about neural networks but I feel like the question should be less "the interpretability of neural networks" than "the interpretability of reality, as assisted by neural networks".
Replying to @xenocryptsite
They're distinct problems. You can come up with scenarios where a neural network can solve a very well-defined toy problem, but you can't understand how it arrives at the solution. Something in the spirit of those 200-move solved chess endgames that look like random flailing.
Replying to @Pinboard
I mean there's some structure to the chess-space that explains why those endgames are solvable.
Replying to @xenocryptsite
Maybe I'm misunderstanding the point you're making.
Replying to @Pinboard
"It turns out that all of the descendants of this particular board in the chess-game-tree have White winning", or something, is a statement about chess, not about algorithms. An algorithm might help you prove it.
I'm analogizing to chess to make a somewhat different point: that the question of "how does this algorithm solve the problem?" can be difficult even for problem spaces that are quite straightforward, and that it's a distinct issue from the inscrutability of the universe.
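
To make that distinction concrete: "the player to move can force a win" is a statement about the game tree itself, and brute-force search is just one way of certifying it. Below is a minimal sketch in Python, assuming a toy take-away game in place of a chess tablebase; the game, function names, and example positions are illustrative and not from the thread.

    # A toy stand-in for "this position is a win": remove 1 or 2 counters
    # from a pile; whoever takes the last counter wins.

    def to_move_wins(pos, moves):
        """True if the player to move can force a win from `pos`.

        The truth of this is a property of the game tree; the exhaustive
        search here merely certifies it.
        """
        successors = moves(pos)
        if not successors:
            return False                 # no legal moves: the mover has lost
        # Winning iff some move leaves the opponent in a losing position.
        return any(not to_move_wins(p, moves) for p in successors)

    # Legal moves for the toy game.
    moves = lambda n: [n - k for k in (1, 2) if n - k >= 0]

    print(to_move_wins(5, moves))  # True: the mover can leave a pile of 3
    print(to_move_wins(6, moves))  # False: every reply leaves a winning pile

The certified fact ("piles that are multiples of 3 are losses for the mover") says nothing about how the search proceeded, which is the sense in which the tablebase-style statement about chess is separate from the algorithm that proves it.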