Can differentiable programming learn arbitrarily complex behaviors, up to superhuman performance, given a dense sampling of the behavior manifold (e.g. an infinite training data generator, such as self-play)? Yes. We knew that. Arguably AlphaGo was the concrete proof-of-concept.
-
-
It just moves the threshold for "have we drawn enough data yet" by a bit -- by no more than a few orders of magnitude. But when infinite data is available, this is "just" the size of your cloud computing bill.
-
-
-
model structure could make or break learning imho. Trying to solve, say, an NLP task using a bag of unigrams will never work regardless of the training data size.
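To make the point concrete: a bag-of-unigrams representation discards word order entirely, so two sentences with opposite meanings can map to the exact same feature vector. No amount of training data can separate inputs the representation cannot distinguish. A minimal sketch (the function name and example sentences here are just illustrative):

```python
from collections import Counter

def bag_of_unigrams(text):
    """Represent a sentence as unordered word counts -- word order is lost."""
    return Counter(text.lower().split())

# Opposite meanings, identical representations:
a = bag_of_unigrams("man bites dog")
b = bag_of_unigrams("dog bites man")
print(a == b)  # True -- the model literally cannot tell them apart
```

Since both sentences produce the same input, any learner built on this representation must assign them the same output, no matter how much data it sees.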
-