Thus, the question of failure or success boils down to "have we drawn enough training data yet?" (or in some cases, "does our architecture have sufficient capacity & appropriate structure wrt the training data available?"). Whether the answer is yes or no, it teaches us nothing.
Note that the question of model choice (the structure of the differentiable architecture used) is rather secondary, because as long as memorization capacity is sufficient, *any* model will do -- if there is training data to match. Even a single hidden Dense layer.
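A minimal sketch of that claim (my own illustration, not from the thread; assumes Keras and random, structureless labels): given enough units, a single hidden Dense layer will drive training accuracy toward 1.0 purely by memorizing the data.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random, structureless "training data" -- there is nothing to learn,
# only a mapping to memorize.
x_train = np.random.random((1000, 32)).astype("float32")
y_train = np.random.randint(0, 10, size=(1000,))

# One hidden Dense layer with ample memorization capacity.
model = keras.Sequential([
    keras.Input(shape=(32,)),
    layers.Dense(2048, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With enough epochs, training accuracy approaches 1.0: the model has
# memorized the input-output pairs, despite the data containing no structure.
model.fit(x_train, y_train, epochs=200, batch_size=64, verbose=0)
```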
It just moves the threshold for "have we drawn enough data yet" by a bit -- by no more than a few orders of magnitude. But when infinite data is available, this is "just" the size of your cloud computing bill.
Wait, aren’t there solid mathematical arguments that cast doubt on that claim? I thought mode collapse in GANs was an instructive example of that skepticism. https://simons.berkeley.edu/news/research-vignette-promise-and-limitations-generative-adversarial-nets-gans
CycleGAN solves that problem by requiring enough information preservation to reconstruct the original input.

The keyword here is "arbitrarily". We don't know. In more complex contexts, new problems may arise. I'm looking forward to machine victories in StarCraft 2 (very complex).
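For context, a rough sketch of the cycle-consistency idea referenced above (hypothetical generators G: X→Y and F: Y→X; the λ weight of 10 is an assumption): the loss penalizes any failure to reconstruct the original input, which forces the translation to preserve its information.

```python
import tensorflow as tf

def cycle_consistency_loss(real_x, real_y, G, F, lam=10.0):
    # F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y,
    # so the generators cannot discard the information in their inputs.
    reconstructed_x = F(G(real_x))   # X -> Y -> X
    reconstructed_y = G(F(real_y))   # Y -> X -> Y
    loss = tf.reduce_mean(tf.abs(real_x - reconstructed_x)) + \
           tf.reduce_mean(tf.abs(real_y - reconstructed_y))
    return lam * loss
```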
I’d tease out "arbitrarily complex". Even with the huge number of permutations in Go, the next state is super-constrained. Compare something like sequence-to-sequence code generation... hard... ouch.