Stating the obvious: a lot of current deep learning tricks are overfit to the validation sets of well-known benchmarks, including CIFAR10. It's nice to see this quantified. This has been a problem with ImageNet since at least 2015. https://arxiv.org/abs/1806.00451
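Not from the linked paper itself, just a minimal sketch of the effect it quantifies: if many candidate models (or "tricks") are selected against the same finite test set, the best reported score is inflated by test-set noise alone, even when no candidate is actually better. The specific numbers here (70% true accuracy, a 10,000-example test set, 1,000 candidates) are assumptions for illustration only.

```python
import numpy as np

# Toy illustration of "overfitting to the test set" via repeated reuse.
# Assumption: every candidate model has the same true accuracy (70%);
# the candidates differ only in random seed, so none is genuinely better.
rng = np.random.default_rng(0)

true_acc = 0.70        # real generalization accuracy of every candidate
test_size = 10_000     # size of the fixed benchmark test set
n_candidates = 1_000   # number of "tricks" tried against the same test set

# Measured accuracy of each candidate on the SAME finite test set:
# binomial noise of roughly 0.5% standard deviation around the true value.
measured = rng.binomial(test_size, true_acc, size=n_candidates) / test_size

best = measured.max()
print(f"true accuracy of every model: {true_acc:.3f}")
print(f"best reported accuracy after {n_candidates} tries: {best:.3f}")
# The "winner" looks 1-2 points better than it really is, purely from
# selecting on test-set noise -- the kind of gap the paper measures by
# collecting a fresh CIFAR-10 test set.
```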
NEVER waste spaghetti.
Throwing science at the wall, not spaghetti ;) Dunno who said it first; I heard it in Portal (the game). Or was it Portal 2? Well, I'll post both Portals here and see what sticks :D
What do you think about this https://twitter.com/GoogleAI/status/1003722208672268288 … and other "AutoML" efforts at Google?
"oh! it looks like a Pollock"
That, Sir, is a mixed metaphor of alchemy
@alirahimi0. And why does Francois never engage in peer discussion with the likes of Ali or Ian? Granted, probably with Ian over beers on campus. @goodfellow_ian pic.twitter.com/qZpIRFq80C
End of conversation
New conversation
Has anyone written about applying the idea of falsification to deep learning? If all ML models are doing is hypothesis generation, aren't most of the generated hypotheses false? Or: what is the underlying theory of deep learning? The world is made of data?