@Meaningness: Only question is whether the AlphaGo “neural” network did any non-obvious generalization. I don’t have access to the journal article, but nothing I have read suggests it did. Generally, when analyzing “neural” learning results, I’ve found they didn’t do anything interesting.
@admittedlyhuman: @Meaningness it seriously seems like your standards are too high here
@admittedlyhuman: @Meaningness like, you're complaining that the generalizations found are not "surprising/deep" but come on, what is even depth
@Meaningness: @admittedlyhuman Well, in one case I analyzed, it turned out that replacing the NN with a linear evaluator worked better.
@Meaningness: @admittedlyhuman If it turned out AlphaGo was just learning a linear combination of features, would you agree it was uninteresting?
@admittedlyhuman: @Meaningness no, because finding the right mix of features and weights to linearly combine is still an achievement
@admittedlyhuman: @Meaningness when you get right down to it, everything's a linear combination of features
@admittedlyhuman: @Meaningness you can get reductive about anything, but you didn't make a computer that was competent at Go
@Meaningness: @admittedlyhuman (3) I have written game playing programs that blew away the state of the art at the time
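To make the technical point concrete: a “linear evaluator” here means scoring a position as a weighted sum of hand-crafted features, rather than running it through a neural network. The sketch below illustrates the idea; the features, weights, and the dict-based position representation are all invented for illustration and are not AlphaGo's (or any real program's).

```python
# A minimal sketch of a "linear evaluator": position score = weighted sum of
# hand-crafted features. Everything here is hypothetical.

from collections.abc import Callable

Position = dict[str, float]  # stand-in for a real board representation

def stone_difference(pos: Position) -> float:
    # Hypothetical feature: material balance from Black's point of view.
    return pos.get("black_stones", 0.0) - pos.get("white_stones", 0.0)

def territory_estimate(pos: Position) -> float:
    # Hypothetical feature: rough territory balance.
    return pos.get("black_territory", 0.0) - pos.get("white_territory", 0.0)

# Weights could be hand-tuned, or fit by regression on game outcomes.
FEATURES: list[tuple[Callable[[Position], float], float]] = [
    (stone_difference, 0.4),
    (territory_estimate, 1.0),
]

def linear_eval(pos: Position) -> float:
    """Evaluation = w1*f1(pos) + w2*f2(pos) + ...: a linear combination."""
    return sum(w * f(pos) for f, w in FEATURES)

if __name__ == "__main__":
    example = {"black_stones": 32, "white_stones": 30,
               "black_territory": 12, "white_territory": 15}
    print(linear_eval(example))  # positive = better for Black, by convention
```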
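And one way to test the question raised in the thread, namely whether a trained evaluator is “just learning a linear combination of features,” is to fit a linear model to the evaluator's outputs over its input features and check how much variance the fit explains. The sketch below assumes a hypothetical stand-in evaluator (`trained_eval`); it is not an account of how any particular analysis was done.

```python
# A minimal sketch of a linearity probe: fit y ~ X @ w + b by least squares
# and report R^2. An R^2 near 1 means the evaluator has learned little beyond
# a linear combination of its input features.

import numpy as np

rng = np.random.default_rng(0)

def trained_eval(x: np.ndarray) -> float:
    # Stand-in "neural" evaluator: mostly linear, plus a small interaction term.
    return 0.7 * x[0] - 0.3 * x[1] + 0.05 * x[0] * x[1]

# Sample random feature vectors and record the evaluator's scores.
X = rng.normal(size=(1000, 2))
y = np.array([trained_eval(x) for x in X])

# Least-squares linear fit, with a bias column appended.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

residual = y - A @ w
r2 = 1.0 - residual.var() / y.var()
print(f"linear fit R^2 = {r2:.3f}")  # near 1 => little non-linear structure
```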