So everyone in AI lost interest in board games. Go was a backwater for decades.
AlphaGo shows you can brute-force the static evaluator if you have enough teraflops. Totally unsurprising: board games are trivial like that.
Replying to @Meaningness @xuenay
This is why chess is harder I think. The hinging values (space, material, initiative, etc.) the evaluator measures are a bit more obscure
Replying to @Meaningness @xuenay
Why chess is harder to solve in practice and why it's still possible to confound chess engines w/sequences like unbalanced material exchange
Replying to @Intrinsic29 @xuenay
Harder by what metric? (Superhuman chess came decades before superhuman go.)
Replying to @Meaningness @xuenay
But it's still beatable and was much more beatable 5 years ago. It is a hard comparison though imo.
Replying to @Meaningness @xuenay
Np. Engines assign values to material (roughly 9 pts for a queen, 3 for minor pieces), but those values change in diff kinds of positions.
So players test them by offering exchanges like 2 minor pieces and 3 pawns for a queen in unclear positions, hoping to be favored after.
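[For illustration: a minimal sketch, not any real engine's code, of the fixed material values described above. It shows why the exchange in the tweet looks dead even to a naive static evaluator — the positional judgment the engine gets wrong isn't in these numbers at all.]

```python
# Conventional material values in pawns, as mentioned in the tweet
# (real engines tune these, and adjust them by position type).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(side_a, side_b):
    """Material difference (side_a minus side_b), in pawns."""
    return (sum(PIECE_VALUES[p] for p in side_a)
            - sum(PIECE_VALUES[p] for p in side_b))

# The exchange from the tweet: a queen (9) traded for two minor
# pieces and three pawns (3 + 3 + 1 + 1 + 1 = 9).
print(material_balance(["Q"], ["N", "B", "P", "P", "P"]))  # 0
```

By this crude count the trade is exactly neutral, so whichever side judged the resulting position better positionally comes out ahead — which is the confounding trick described above.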
Maybe AlphaGo’s brute-force-the-static-evaluator approach would eliminate remaining chess vuln, then. (Perhaps interesting to chess fans.)
Replying to @Meaningness @xuenay
It's really hard to measure because I'm not sure how many ML experts are working on chess. There's a huge computer chess community, but
Replying to @Intrinsic29 @xuenay
Yeah, I don’t think you’d get a lot of cred in the ML community for a better chess program.
End of conversation