I probably should have said “all that AG0’s net does”! Aiming for twittery concision does result in inaccuracy
-
Replying to @Meaningness @moyix and
I’d really like to run this control experiment: replace the AG0 learner with something much simpler. The MENACE algorithm might even do the trick, if it tracked results for all n×m subsets of the board, for bounded m and n (sketched below).
3 replies 1 retweet 2 likes -
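[For concreteness, here is a minimal MENACE-style learner in Python, a sketch rather than a faithful reconstruction: the bead counts and reward values are illustrative guesses, not Michie’s originals. Extending it to Go in the way proposed above would mean keying the boxes on local n×m board patterns rather than on whole board states.]

```python
import random
from collections import defaultdict

class Menace:
    """MENACE-style tabular learner: one 'matchbox' of move-beads per
    board state, reinforced after each game ends."""

    def __init__(self, initial_beads=3):
        # boxes[state][move] = current bead count for that move
        self.boxes = defaultdict(lambda: defaultdict(lambda: initial_beads))
        self.history = []  # (state, move) pairs played this game

    def choose(self, state, legal_moves):
        box = self.boxes[state]
        weights = [max(box[m], 0) for m in legal_moves]
        if sum(weights) == 0:            # box ran out of beads: reseed it
            weights = [1] * len(legal_moves)
        move = random.choices(legal_moves, weights=weights)[0]
        self.history.append((state, move))
        return move

    def learn(self, reward):
        # e.g. reward = +3 for a win, +1 for a draw, -1 for a loss
        for state, move in self.history:
            self.boxes[state][move] += reward
        self.history.clear()
```

[States must be hashable, e.g. a tuple of board cells; the control experiment would be to drop this in where AG0’s network sits and see how much of the performance survives.]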
Replying to @Meaningness @Aelkus and
Robert E. P. Levy Retweeted Julian Togelius
Robert E. P. Levy added,
Julian Togelius @togelius
Standard evolution strategies (invented in the 1970s, can be implemented in 10 lines of code) are comparable in performance to fancy Natural Evolution Strategies, which in turn are comparable in performance to Deep Q-learning, on ALE Atari Games. https://twitter.com/Miles_Brundage/status/968322571585445891
1 reply 0 retweets 3 likes -
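[The ten-line claim is easy to check. Below is a minimal (μ, λ) evolution strategy in Python, roughly the 1970s recipe; the function name, population sizes, and toy fitness are placeholder assumptions. On Atari, the parameter vector would be a policy network’s weights and the fitness an episode’s score.]

```python
import numpy as np

def basic_es(fitness, dim, parents=10, offspring=50, sigma=0.1, iters=300):
    """A plain (mu, lambda) evolution strategy, 1970s style: mutate a
    parent vector, rank the offspring, recombine the best by averaging."""
    theta = np.zeros(dim)
    for _ in range(iters):
        pop = theta + sigma * np.random.randn(offspring, dim)
        scores = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(scores)[-parents:]]   # keep the top-mu offspring
        theta = elite.mean(axis=0)                   # intermediate recombination
    return theta

# Toy check: maximize -||x - 3||^2; theta should converge near 3.
print(basic_es(lambda x: -np.sum((x - 3.0) ** 2), dim=5))
```

[The core loop is about seven lines, which fits Togelius’s point.]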
It’s getting perilously close to the time when I’m going to have to work hard not to make myself obnoxious by saying “I told you so years ago, the only interesting thing DL does is texture recognition”
0 replies 0 retweets 2 likes -
I’d rather believe it’s sloppy thinking, but yeah, the incentives are not conducive to honesty.
0 replies 0 retweets 2 likes -
Yes. Three years ago I wanted to run some of the missing controls on the ImageNet results, based on my theory that it’s mostly texture recognition… but getting the funding for the GPU-years didn’t seem likely.
1 reply 0 retweets 3 likes
In 2015 I thought about approaching VCs to see if they’d fund an adversarial AI lab that would try to show it doesn’t really work. If they are throwing billions at it, they might pay $10m for a reality check. Guessed they’d probably rather not know, and sell on to greater fools.