I don't know anything about neural networks but I feel like the question should be less "the interpretability of neural networks" than "the interpretability of reality, as assisted by neural networks".
They're distinct problems. You can come up with scenarios where a neural network can solve a very well-defined toy problem, but you can't understand how it arrives at the solution. Something in the spirit of those 200-move solved chess endgames that look like random flailing.
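(A minimal sketch of the kind of thing meant here, not anyone's actual experiment: a tiny NumPy network learns XOR exactly, yet its learned weights don't read as a human-legible rule. Network size, learning rate, and step count are arbitrary choices for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, fully specified problem: the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error loss, sigmoid derivatives.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print("predictions:", out.round(2).ravel())  # matches XOR
print("hidden weights:\n", W1.round(2))      # no obvious human-readable rule
```

The point being: the task is perfectly well defined and the network solves it, but inspecting the weights tells you little about *how*, in the same spirit as tablebase endgames whose optimal lines look like random flailing.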
-
I mean there's some structure to the chess-space that explains why those endgames are solvable.
-
Maybe I'm misunderstanding the point you're making.