I don't know anything about neural networks but I feel like the question should be less "the interpretability of neural networks" than "the interpretability of reality, as assisted by neural networks".
Replying to @xenocryptsite
They're distinct problems. You can come up with scenarios where a neural network can solve a very well-defined toy problem, but you can't understand how it arrives at the solution. Something in the spirit of those 200-move solved chess endgames that look like random flailing.
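A tablebase probe is a concrete version of this: it asserts a game-theoretic fact about a position without offering any human-readable reason. A minimal sketch using the python-chess Syzygy bindings, assuming the tablebase files have already been downloaded locally (the ./syzygy path and the position are placeholders):

```python
# Minimal sketch of probing a Syzygy endgame tablebase with the python-chess
# library. Assumes the 3-4-5 piece Syzygy files are in ./syzygy (a placeholder
# path); the position below is an arbitrary three-piece example.
import chess
import chess.syzygy

# K+Q vs K, White to move.
board = chess.Board("8/8/8/8/8/8/4Q3/K6k w - - 0 1")

with chess.syzygy.open_tablebase("./syzygy") as tablebase:
    wdl = tablebase.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss, from the side to move's view
    dtz = tablebase.probe_dtz(board)  # distance to the next zeroing move under the 50-move rule
    print(wdl, dtz)
```

The probe returns numbers, not explanations; whatever structure makes the position a win lives in chess itself.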
Replying to @Pinboard
I mean there's some structure to the chess-space that explains why those endgames are solvable.
Replying to @xenocryptsite
Maybe I'm misunderstanding the point you're making.
Replying to @Pinboard
"It turns out that all of the descendants of this particular board in the chess-game-tree have White winning", or something, is a statement about chess, not about algorithms. An algorithm might help you prove it.
Replying to @xenocryptsite @Pinboard
(Like "the two millionth digit of pi is X" is a statement about pi, even though I would need a computer to check, it's not about "interpreting the computer".)
Replying to @xenocryptsite @Pinboard
So if neural networks can...classify images or whatever...then that is ultimately reflecting some property of image-space, a property which doesn't depend on the algorithm used. I'm not sure I'm explaining this well either.
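One way to make "a property of image-space, not of the algorithm" concrete: train two very different learners on the same images and check how often they agree. A toy sketch assuming scikit-learn, with an arbitrary dataset and arbitrary settings:

```python
# Toy illustration: two very different learners trained on the same images end
# up agreeing on most test examples, suggesting the regularity they capture is
# a property of the data rather than of either algorithm. Assumes scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0).fit(X_train, y_train)
logreg = LogisticRegression(max_iter=5000).fit(X_train, y_train)

agreement = (mlp.predict(X_test) == logreg.predict(X_test)).mean()
print(f"test-set agreement between the two models: {agreement:.1%}")
```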
Are you familiar with the Busy Beaver function? A lot of interesting philosophical stuff falls out of thinking about it. If you're not familiar, I suggest https://www.scottaaronson.com/writings/bignumbers.html and https://www.scottaaronson.com/blog/?p=2725
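For very small machine sizes the values can still be found by exhaustive search, which makes the function concrete before the uncomputability kicks in. A rough sketch, assuming the standard 2-symbol Turing-machine convention with an explicit halt state; the step cutoff is the hand-wavy part, since for larger n no computable cutoff is guaranteed to suffice, which is the whole point:

```python
# Rough sketch: brute-force the Busy Beaver "max steps" value S(n) for tiny n,
# using the usual 2-symbol Turing-machine convention with an explicit halt
# state. The step cutoff is an assumption; it is generous for n = 2, but for
# larger n there is no computable cutoff known in advance to be enough.
from itertools import product

def run(machine, max_steps):
    """Simulate a machine on a blank tape; return its halting step count, or None."""
    tape = {}          # sparse tape, blank cells read as 0
    pos, state = 0, 0  # start in state 0 at cell 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        if nxt == 'H':  # the halting transition counts as a step
            return step
        state = nxt
    return None         # did not halt within the cutoff

def brute_force_steps(n_states, max_steps=100):
    """Max halting step count over every n-state, 2-symbol machine that halts in time."""
    actions = [(w, m, s) for w in (0, 1) for m in 'LR'
               for s in list(range(n_states)) + ['H']]
    keys = [(q, a) for q in range(n_states) for a in (0, 1)]
    best = 0
    for choice in product(actions, repeat=len(keys)):
        steps = run(dict(zip(keys, choice)), max_steps)
        if steps is not None:
            best = max(best, steps)
    return best

print(brute_force_steps(2))  # recovers the known value S(2) = 6
```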