Deep networks probably mostly just memorize and regurgitate their training data. That seems likely to be all that AlphaGo Zero does, for instance.
Relevant post from @nostalgebraist
@xuenay
http://ift.tt/2t36Ix2 pic.twitter.com/UCaeQaoT3S
Well… the question is to what extent there actually is a higher-level feature space. In the image recognizers there is, although I think to a lesser degree than is usually supposed. In other applications, maybe not much at all.
Sure, but at least for images, do you think that feature space counts as a success for NNs? Or is it just a fluke because images are fundamentally easier/different than other domains? I can see the argument that we've figured out the magic of the visual cortex and are now misapplying it everywhere.
DL works startlingly better than expected on image classification, which is an interesting result. But there are strong reasons to think it doesn't work anything like the mammalian visual cortex.