It has always been known that brute-force tree search won’t work for Go; there are too many possible moves. So, you need a better evaluator.
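A rough back-of-the-envelope sketch (mine, not from the thread) of why exhaustive search is hopeless: a 19×19 Go board has 361 intersections, so even a shallow lookahead visits an astronomical number of positions.

```python
# Illustration only: upper bound on the brute-force Go game tree.
# 361 is the number of intersections on a standard 19x19 board; the
# true branching factor is somewhat lower, but the explosion is the same.

BOARD_POINTS = 19 * 19  # 361

def tree_size(depth, branching=BOARD_POINTS):
    """Upper bound on positions a brute-force search visits to `depth`."""
    return sum(branching ** d for d in range(1, depth + 1))

for d in (2, 4, 8):
    print(f"depth {d}: ~{tree_size(d):.3e} positions")
```

Eight plies in, the bound already dwarfs what any machine can enumerate, which is why the search has to be cut off early and the leaves scored by an evaluator.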
Replying to @Meaningness
The standard analysis of Go has always been that grandmasters see regional patterns on the board that are good or bad. That’s evaluation.
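A minimal sketch (my own generic example, not anything specific to AlphaGo) of the standard picture being described: search supplies the tactics, and an evaluation function supplies the "this configuration is good or bad" judgment at the leaves.

```python
# Depth-limited negamax: leaf positions are scored by `evaluate`
# instead of being played out to the end of the game.

def negamax(position, depth, evaluate, legal_moves, play):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    # Each player maximizes the negation of the opponent's best score.
    return max(-negamax(play(position, m), depth - 1,
                        evaluate, legal_moves, play)
               for m in moves)

# Toy usage (hypothetical game, not Go): a position is a pile of stones,
# a move removes 1 or 2, and the evaluator likes leaving the opponent
# a multiple of 3.
score = negamax(
    5, 2,
    evaluate=lambda p: 1 if p % 3 == 0 else -1,
    legal_moves=lambda p: [m for m in (1, 2) if m <= p],
    play=lambda p, m: p - m,
)
print(score)
```

The quality of play then hinges almost entirely on how good `evaluate` is, which is the point of the tweet: grandmaster pattern-judgment is exactly that function.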
Replying to @Meaningness
What Google did was spend incredible quantities of computer time playing vast numbers of games—noting which board configurations led to wins.
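The crudest version of "noting which configurations led to wins" is a lookup table of win frequencies, sketched below (my illustration, not AlphaGo's actual method; AlphaGo trained a neural-network value function rather than a table, which is what the later tweets about generalization are arguing over).

```python
from collections import defaultdict

class WinRateTable:
    """Crude value estimator: win frequency per observed configuration.
    A pure lookup table cannot generalize to unseen configurations; a
    learned value network replaces the table with a function that can."""

    def __init__(self):
        self.wins = defaultdict(int)
        self.visits = defaultdict(int)

    def record_game(self, configurations, won):
        # Credit every configuration that appeared in the game.
        for config in configurations:
            self.visits[config] += 1
            if won:
                self.wins[config] += 1

    def value(self, config):
        # Uninformative prior of 0.5 for configurations never seen.
        if self.visits[config] == 0:
            return 0.5
        return self.wins[config] / self.visits[config]
```

Whether the network's replacement for this table encodes anything beyond memorized statistics is exactly the question raised a few tweets down.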
Replying to @Meaningness
It is hard to see how this strategy could have not-worked.
Replying to @Meaningness
Only question is whether the AlphaGo “neural” network did any non-obvious generalization. I don’t have access to the journal article, but >
Replying to @Meaningness
> nothing I have read suggests it did. Generally, when analyzing “neural” learning results, I’ve found they didn’t do anything interesting.
Replying to @Meaningness
@Meaningness it seriously seems like your standards are too high here
Replying to @Meaningness
@Meaningness like, you're complaining that the generalizations found are not "surprising/deep" but come on, what is even depth
Replying to @admittedlyhuman
@Meaningness as for surprising, it is legitimately surprising that we now have a computer that can play Go very competently
@admittedlyhuman Why do you find that surprising? (I have never played Go, so maybe I’m missing an intuition here)