“Universal adversarial perturbations” seems like the most dramatic ML result in years; if so, it's not getting the attention it deserves https://arxiv.org/pdf/1610.08401v1.pdf
Result: there is a *fixed*, human-invisible perturbation you can add to *any* image, and it causes the image to be misclassified by multiple DL systems. pic.twitter.com/s4zMsPy6Jy
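A minimal sketch of what “universal” means here, not the paper's code: a single fixed perturbation, added unchanged to every image, flips a pretrained classifier's predictions for most of them. The model and weight names are standard torchvision ones; `delta.npy` is an assumed file holding a precomputed universal perturbation.

```python
# Sketch (assumed setup): apply one fixed perturbation to many images and
# measure the "fooling rate" -- the fraction whose prediction changes.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# One fixed perturbation, shape (3, 224, 224), tiny in magnitude (assumed file).
delta = torch.from_numpy(np.load("delta.npy")).float()

def predict(x):
    with torch.no_grad():
        return model(normalize(x).unsqueeze(0)).argmax(1).item()

def fooling_rate(image_paths):
    fooled = 0
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB"))
        x_adv = torch.clamp(x + delta, 0.0, 1.0)  # same delta for every image
        fooled += predict(x) != predict(x_adv)
    return fooled / len(image_paths)
```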
I have long suspected that DL image classifiers depend mainly on texture and maybe color, making much less use of shape than vertebrates do…
The form of the universal adversarial perturbation is consistent with this hypothesis. It subtly screws up texture/color info: pic.twitter.com/tv3tRPp1UF
(In this picture, from the paper, the intensity of the perturbation is vastly magnified to show its form; in reality it's undetectable to the eye.)
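A tiny sketch of the kind of magnification involved (my illustration, not the paper's figure code): stretch the near-zero perturbation to fill the full display range so its structure becomes visible.

```python
# Sketch (assumed): rescale a tiny perturbation for display only.
import numpy as np

def visualize(delta):
    """delta: HxWxC float array with tiny values (e.g. max |delta| ~ 10/255)."""
    return (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
```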
In casual searches, I have not found anyone else suggesting this explanation. I think it might give a lot of insight into DL image work.
“Measuring the tendency of CNNs to Learn Surface Statistical Regularities” https://arxiv.org/abs/1711.11561
Thanks!! This is nice work (and supports my theory about why DL image classifiers work…) I’m trying to resist writing a long tweetstorm about it, but I’m not sure I’m going to succeed!
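For concreteness, here is a minimal sketch of the kind of texture-vs-shape probe this line of work suggests (my own illustration of the idea, not the cited paper's exact protocol): low-pass filter images in the Fourier domain, which preserves coarse shape but removes fine texture, and see how much the classifier's agreement with its original predictions drops.

```python
# Sketch of a texture-vs-shape probe (illustrative, assumed protocol):
# zero out high spatial frequencies -- coarse shape survives, fine texture
# does not -- then check whether the classifier still gives the same label.
import numpy as np

def radial_low_pass(img, keep_fraction=0.15):
    """img: HxWxC float array in [0, 1]. Keep only low spatial frequencies."""
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h / 2, w / 2
    radius = keep_fraction * min(h, w) / 2
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        spectrum = np.fft.fftshift(np.fft.fft2(img[..., c]))
        out[..., c] = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return np.clip(out, 0.0, 1.0)

# If accuracy on low-passed images falls far below accuracy on the originals,
# the classifier is leaning on high-frequency texture statistics rather than
# on the shapes a vertebrate would still recognize.
```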