If a general metalearning algorithm can be efficiently approximated by a Deep Net, we are fucked right here and now you guys.
Replying to @shobith
It is a big if (roughly: is the generalization of meta-learning tractable using stochastic gradient descent?), but I don't know of a proof, or even a good proof candidate, that it isn't.
2:52 PM - 17 Jun 2018
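For concreteness, here is a minimal sketch of what "meta-learning run entirely through SGD" can look like. This is not a construction from the tweet; it is a toy first-order meta-learning loop in the style of Reptile (Nichol et al.), with an assumed task family (1-D linear regression with random slopes) and illustrative hyperparameters.

```python
# Toy first-order meta-learning (Reptile-style), everything trained by plain SGD.
# Task family, model, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task: fit y = a * x for a randomly drawn slope a.
    return rng.uniform(-2.0, 2.0)

def task_loss_grad(w, a):
    # One minibatch gradient of mean squared error for task `a`.
    x = rng.uniform(-1.0, 1.0, size=16)
    y = a * x
    return 2.0 * np.mean((w * x - y) * x)

w = 0.0                      # meta-parameters (a single scalar weight here)
inner_lr, meta_lr = 0.1, 0.05

for step in range(1000):
    a = sample_task()
    w_task = w
    for _ in range(5):       # inner loop: ordinary SGD on the sampled task
        w_task -= inner_lr * task_loss_grad(w_task, a)
    # Reptile meta-update: nudge the initialization toward the adapted weights.
    w += meta_lr * (w_task - w)

# Since the slope distribution is symmetric about 0, w should settle near 0,
# the initialization from which every task is fastest to adapt.
```

The point of the sketch is only that the outer (meta) loop and the inner (adaptation) loop are both gradient descent, which is the premise the tweet treats as the "big if" when scaled to general meta-learning with deep nets.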