you are kidding right?
Replying to @filippie509 @MaxALittle and @abhijitysharma
I would urge you to try this out, and to read carefully Alcorn et al.'s new arXiv paper, to understand the strengths and weaknesses of data augmentation.
Replying to @GaryMarcus @filippie509 and
That paper doesn't study data augmentation at all, AFAICT. They're using a pre-trained ImageNet baseline, which explicitly avoids the kind of data augmentation necessary to recognize these synthetic 3-D images in unusual poses.
Replying to @jeremyphoward @filippie509 and
I wasn’t saying it did. I was saying that one could learn something by trying to use data augmentation as a candidate solution. Simple tricks like translation won’t work, and even 3-D/6-D rotation may not work and may be prohibitively costly or simply not viable.
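The coverage problem raised above can be sketched in a few lines of plain Python (a toy illustration only; `rotate90` and `augment` are hypothetical helpers, not from any paper): augmentation enlarges the training set with the poses you explicitly generate, and nothing else.

```python
def rotate90(img):
    """Rotate a 2-D grid (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """In-plane augmentation: the four 90-degree rotations of img."""
    views, cur = [], img
    for _ in range(4):
        views.append(cur)
        cur = rotate90(cur)
    return views

img = [[1, 2],
       [3, 4]]
poses = augment(img)
print(len(poses))  # 4: only the four in-plane poses are covered
```

An out-of-plane (3-D) pose of the same object is simply absent from this set, which is the failure mode the Alcorn et al. experiments probe; covering 3-D rotations by brute force requires rendering or collecting vastly more views.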
Replying to @GaryMarcus @filippie509 and
OK. I don't think this paper helps show the "strengths and weaknesses of data augmentation". It's already been seen that data augmentation and convnets can give good pose invariance. You can help it along a bit using stuff like Group Equivariant Convolutional Networks
Replying to @jeremyphoward @GaryMarcus and
The Alcorn paper simply shows that you can't expect to use different poses at inference time than you had in your data or used in data augmentation at training time, unless you force appropriate symmetry in your architecture.
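The idea of forcing symmetry into the architecture (the principle behind Group Equivariant Convolutional Networks, shown here only as a toy sketch with a hypothetical stand-in scorer `g`) can be illustrated in plain Python: pooling a score over a transformation group makes the result invariant to that group by construction, rather than by hoping augmentation covers it.

```python
def rotate90(img):
    """Rotate a 2-D grid (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def g(img):
    """Stand-in 'network': a position-weighted sum, deliberately NOT rotation-invariant."""
    return sum((r + 1) * (c + 1) * v
               for r, row in enumerate(img)
               for c, v in enumerate(row))

def invariant_score(img):
    """Max-pool g over the four 90-degree rotations of the input."""
    scores, cur = [], img
    for _ in range(4):
        scores.append(g(cur))
        cur = rotate90(cur)
    return max(scores)

x = [[1, 2],
     [3, 4]]
print(g(rotate90(x)) == g(x))                              # False: g alone is pose-sensitive
print(invariant_score(rotate90(x)) == invariant_score(x))  # True: pooled score is invariant
```

Real group-equivariant networks apply this idea inside every layer over richer transformation groups, but the mechanism is the same: the symmetry holds for any input, including poses never seen in training.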
Replying to @jeremyphoward @filippie509 and
Or, put differently, it highlights how fragile deep learning is when tested outside of distribution, and shows how “naive” (their word) DNNs’ understanding of objects is.
Replying to @GaryMarcus @filippie509 and
Yes it does show that, for some values of "outside of distribution". That's why things like thoughtful architecture and loss function selection and data augmentation choices are important.
Replying to @jeremyphoward @filippie509 and
And good priors. And transparent ways of incorporating good priors. And causal reasoning, etc. cc @yudapearl and @eliasbareinboim
Replying to @GaryMarcus @jeremyphoward and
The out-of-distribution problem is well known and studied. There are many ways to deal with it. It is not a fundamental problem of DNNs; it's a problem of the training methodology. See for example https://arxiv.org/pdf/1802.04865
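The simplest version of the training-methodology fix alluded to above is abstention on low-confidence inputs (a toy sketch of confidence thresholding only; the cited paper learns a separate confidence estimate, and `predict_or_abstain` and its threshold are illustrative assumptions):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_or_abstain(logits, threshold=0.9):
    """Return the predicted class index, or None ('don't know') if max
    probability falls below the confidence threshold."""
    probs = softmax(logits)
    conf = max(probs)
    return probs.index(conf) if conf >= threshold else None

print(predict_or_abstain([8.0, 1.0, 0.5]))  # 0: confident prediction
print(predict_or_abstain([1.1, 1.0, 0.9]))  # None: near-uniform logits, abstain
```

Whether an unusual-pose image actually produces low confidence is exactly what is in dispute: Alcorn et al. report that misclassifications of off-distribution poses can be highly confident, which is why thresholding raw softmax outputs alone is generally considered a weak baseline.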
Try out your favorite techniques on Alcorn et al. and let us know how it goes.