True, but a little uncharitable. Of course they're interpolating, but it's the fact that they're doing it across lower-dimensional representations that's interesting. Until now, automagically coming up with the higher-level feature space was the hard part.
Well… the question is to what extent there actually is a higher-level feature space. In the image recognizers, there is, although I think to a lesser degree than is usually supposed. In other applications, maybe not much at all.
- 5 more replies
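The point about interpolation in a learned lower-dimensional space can be made concrete. Below is a minimal sketch, not from the thread: the toy data, architecture, and hyperparameters are all assumptions for illustration. A tiny autoencoder is trained on points from a low-dimensional manifold embedded in a higher-dimensional input space, then the midpoint of two inputs is computed both in raw input space and in the learned latent space; the latent midpoint typically lands much closer to the training manifold.

```python
# Minimal sketch: interpolating in a learned lower-dimensional latent space
# vs. in raw input space. All data, sizes, and hyperparameters are toy
# assumptions chosen for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: a 1-D curve embedded (linearly) in 20-D input space.
t = torch.rand(512, 1)
curve = torch.cat([torch.sin(3 * t), torch.cos(3 * t)], dim=1)  # (512, 2)
X = curve @ torch.randn(2, 20)                                  # (512, 20)

# Tiny autoencoder with a 2-D bottleneck: the learned "feature space".
encoder = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
decoder = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 20))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(2000):
    opt.zero_grad()
    loss = ((decoder(encoder(X)) - X) ** 2).mean()
    loss.backward()
    opt.step()

# Midpoint of two training points, computed two different ways.
a, b = X[0], X[1]
raw_mid = 0.5 * (a + b)                                # input-space interpolation
latent_mid = decoder(0.5 * (encoder(a) + encoder(b)))  # latent-space interpolation

@torch.no_grad()
def dist_to_data(x):
    """Distance to the nearest training point (proxy for 'on the manifold')."""
    return (X - x).norm(dim=1).min().item()

# The latent-space midpoint tends to stay near the data manifold; the raw
# chord midpoint cuts across it.
print("distance to data, raw midpoint:   ", dist_to_data(raw_mid))
print("distance to data, latent midpoint:", dist_to_data(latent_mid))
```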
New conversation
In the AlphaGo Zero case, its training data is self-generated though – it seems weird to call this memorization.
Yes… but it’s not the DL net that’s doing that; it’s the RL architecture.
- 15 more replies
New conversation
Seems likely, considering you can recover training data just from black-box queries of the network: https://twitter.com/alexbrattmd/status/968270922867146752
Right! I’ve seen other results of this same general form.
End of conversation
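For the flavor of that kind of result: the sketch below is a far simpler stand-in for the linked attack, not the attack itself. A toy classifier is trained on clusters around secret templates (plus a broad "background" class, so that class confidence peaks near the data), and a gradient-free hill climb that sees only the model's output probabilities drifts toward a class template, i.e. toward the training data. Every name and parameter here is an assumption for illustration.

```python
# A much simpler stand-in for "recovering training data from black-box
# queries": gradient-free hill climbing on output probabilities alone.
# The model, data, and attack parameters are all toy assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Private" training data: each of 3 classes clusters around a secret template.
templates = torch.randn(3, 16)
X = templates.repeat_interleave(100, dim=0) + 0.1 * torch.randn(300, 16)
y = torch.arange(3).repeat_interleave(100)

# A broad "background" class keeps class confidence peaked near the data
# instead of growing without bound away from the decision boundary.
X = torch.cat([X, 4.0 * torch.randn(300, 16)])
y = torch.cat([y, torch.full((300,), 3, dtype=torch.long)])

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

@torch.no_grad()
def query(x):
    """The attacker's entire interface: probabilities out, nothing else."""
    return torch.softmax(model(x), dim=-1)

# Hill-climb toward class 0 using only query() -- no weights, no gradients.
target, guess = 0, torch.zeros(16)
for _ in range(5000):
    candidate = guess + 0.05 * torch.randn(16)
    if query(candidate)[target] > query(guess)[target]:
        guess = candidate

# The recovered input tends to land near the class-0 cluster, i.e. near
# actual training data, from black-box queries alone.
print("distance from recovered input to class-0 template:",
      (guess - templates[0]).norm().item())
print("for comparison, distance from the starting point (origin):",
      templates[0].norm().item())
```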
New conversation
What's happened is that we've solved the mystery that once made these games interesting, which was the real point. The real work now is to devise good games in a post-ML world.
That’s a very interesting perspective!
End of conversation
New conversation
I find myself bothered by the “just”. We *know* neural networks memorize their training data. What’s surprising is that they don’t just plop a delta function on every training input, when they totally could.
It might be that this is entirely explained by the density of training examples being “enough”, but few-shot learning works way too well for it to be that simple.
End of conversation
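That capacity claim is easy to demonstrate. The sketch below (a toy, assumption-laden example in the spirit of Zhang et al., "Understanding deep learning requires rethinking generalization") trains a small MLP on labels that are pure noise; training accuracy still approaches 1.0, showing the network could memorize arbitrary data, which is exactly what makes it notable when it generalizes instead.

```python
# The "they totally could" point made concrete: a small MLP driven to
# ~100% training accuracy on *randomly labeled* data, i.e. pure
# memorization. Sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

X = torch.randn(256, 32)            # random inputs
y = torch.randint(0, 2, (256,))     # labels with no structure whatsoever

model = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(3000):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"train accuracy on pure-noise labels: {acc:.3f}")  # approaches 1.0
```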