@Meaningness like, you're complaining that the generalizations found are not "surprising/deep" but come on, what is even depth?
Replying to @admittedlyhuman
@admittedlyhuman Well, in one case I analyzed, it turned out that replacing the NN with a linear evaluator worked better.
Replying to @Meaningness
@admittedlyhuman If it turned out AlphaGo was just learning a linear combination of features, would you agree it was uninteresting?
Replying to @Meaningness
@Meaningness no, because finding the right mix of features and weights to linearly combine is still an achievement
Replying to @admittedlyhuman
@Meaningness when you get right down to it, everything's a linear combination of features
Replying to @admittedlyhuman
@Meaningness you can get reductive about anything, but you didn't make a computer that was competent at Go
Replying to @admittedlyhuman
@admittedlyhuman (1) I would have no interest in doing that; Go is trivial; (2) I couldn’t spend millions of hours of GPU time on a stunt
Replying to @Meaningness
@Meaningness I'm coming at this from an engineer's perspective, where the proof is in the pudding
Replying to @admittedlyhuman
@admittedlyhuman From an engineering perspective, this looks to me like “turns out you can solve this uninteresting problem by throwing GPUs
Replying to @Meaningness
@admittedlyhuman at it” which is like the old adage that you can make a brick fly by putting a big enough rocket engine on it.
@admittedlyhuman Making a brick fly is a fun stunt, but probably you don’t learn much from it.
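The "linear evaluator" idea the thread keeps circling can be sketched in a few lines. This is not the analysis @Meaningness refers to; it is a minimal illustration, with invented features and data, of scoring a position as a weighted sum of features and "finding the right mix of weights" by least squares.

```python
# Minimal sketch: a linear position evaluator. The score of a position is
# just a dot product of a feature vector with a learned weight vector.
# Features, weights, and data below are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for 100 positions (e.g. material, mobility,
# territory), plus a noisy "true value" for each position.
X = rng.normal(size=(100, 3))
true_w = np.array([0.8, -0.3, 0.5])
y = X @ true_w + rng.normal(scale=0.05, size=100)

# "Finding the right mix of features and weights": least-squares fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def evaluate(position_features, weights=w):
    """Linear evaluator: score a position by a weighted sum of features."""
    return position_features @ weights

print(np.allclose(w, true_w, atol=0.05))  # fitted weights recover the true mix
```

The point of the dispute survives the sketch: the model class is trivial, so the interesting work (if any) is in choosing the features, not in the fitting.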