@Meaningness: The standard analysis of Go has always been that grandmasters see regional patterns on the board that are good or bad. That’s evaluation.
@Meaningness: What Google did was spend incredible quantities of computer time playing vast numbers of games, noting which board configurations led to wins.
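To make “noting which board configurations led to wins” concrete, here is a minimal sketch, assuming a supervised value-regression setup trained on (position, outcome) pairs. The synthetic 9×9 boards, the logistic-regression learner, and every name in it are illustrative stand-ins, not AlphaGo’s actual data or architecture:

```python
import numpy as np

# Illustrative sketch only: learn a value function V(board) -> P(win)
# from (position, outcome) pairs, the way a self-play pipeline would.
# The random "boards" and outcomes below are stand-ins, not AlphaGo's data.

rng = np.random.default_rng(0)
N, FEATURES = 5000, 81          # 5000 positions, 9x9 board flattened

# Fake dataset: each position is a vector of stone values in {-1, 0, +1};
# outcomes come from a hidden linear rule plus noise, so the learner
# has a real pattern to recover.
X = rng.integers(-1, 2, size=(N, FEATURES)).astype(float)
true_w = rng.normal(size=FEATURES)
y = (X @ true_w + rng.normal(scale=2.0, size=N) > 0).astype(float)  # 1 = win

# Logistic-regression value function fit by gradient descent on log-loss.
w = np.zeros(FEATURES)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted P(win) per position
    w -= 0.1 * (X.T @ (p - y)) / N        # gradient step

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

AlphaGo’s value network is a deep convolutional net rather than this logistic model, but the training signal, positions labeled with eventual game outcomes, has the same shape.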
@Meaningness: It is hard to see how this strategy could have not-worked.
@Meaningness: The only question is whether the AlphaGo “neural” network did any non-obvious generalization. I don’t have access to the journal article, but nothing I have read suggests it did. Generally, when analyzing “neural” learning results, I’ve found they didn’t do anything interesting.
@admittedlyhuman: It seriously seems like your standards are too high here.
@admittedlyhuman: Like, you’re complaining that the generalizations found are not “surprising/deep,” but come on, what even is depth?
@Meaningness: Well, in one case I analyzed, it turned out that replacing the NN with a linear evaluator worked better.
@Meaningness: If it turned out AlphaGo was just learning a linear combination of features, would you agree it was uninteresting? (I’m sure it’s not just linear, but it might be something quite simple, in the same way.)
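For what a “linear evaluator” amounts to, a minimal sketch: score a position as a weighted sum of hand-crafted features. The features and weights below are invented for illustration, not taken from any actual Go program or from the case mentioned above:

```python
import numpy as np

def features(board: np.ndarray) -> np.ndarray:
    """Crude Go-ish features for a 9x9 board of stone values in {-1, 0, +1}.
    These four features are hypothetical examples, not a real feature set."""
    return np.array([
        board.sum(),                       # material: stone-count difference
        np.abs(board).sum(),               # how full the board is
        board[2:-2, 2:-2].sum(),           # presence in the 5x5 center
        board[0].sum() + board[-1].sum(),  # presence on top/bottom edges
    ])

# A linear evaluator is just a dot product with fixed weights.
# Fitting the weights from (position, outcome) data would be ordinary
# least squares or logistic regression; no network is involved.
weights = np.array([1.0, 0.0, 0.5, -0.2])  # illustrative values

def evaluate(board: np.ndarray) -> float:
    return float(features(board) @ weights)

board = np.zeros((9, 9))
board[4, 4] = 1    # +1 takes the center point
board[0, 0] = -1   # -1 takes a corner
print(evaluate(board))  # positive score = good for the +1 player
```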