Any static evaluation function generalizes across similar board states. The question is whether the learned similarity metric is surprising/deep.
Replying to @Meaningness
PSA: Despite frequent misrepresentation by researchers when talking to public, “neural” networks are UTTERLY DISSIMILAR to brains.
Replying to @Meaningness
It is uncontroversial that “neural” networks are not at all like actual networks of neurons; everyone in the field admits this when pressed.
Replying to @Meaningness
I consider the combination of the misleading use of “neural” and misrepresentation of significance of results to be scientific misconduct.
Replying to @Meaningness
@Meaningness Hmm, so it's clear they're dissimilar, but don't you think there's enough isomorphism to make it not completely misleading?
Replying to @Meaningness
@modulux Relative to “symbolic AI,” 1980s connectionism was valuable in pointing out that neurons are slow and massively parallel, and >
Replying to @Meaningness
@modulux the symbolic AI algorithms of the day could not plausibly be implemented in wetware. That’s a genuine constraint.
Replying to @Meaningness
@Meaningness So for example the relative successes of RNNs mimicking language and grammar, you think it's unrelated to how we do it?