I don't know anything about neural networks, but I feel like the question should be less "the interpretability of neural networks" than "the interpretability of reality, as assisted by neural networks".
-
"It turns out that all of the descendants of this particular board in the chess-game-tree have White winning", or something, is a statement about chess, not about algorithms. An algorithm might help you prove it.
-
(Like "the two millionth digit of pi is X" is a statement about pi, even though I would need a computer to check, it's not about "interpreting the computer".)