“Universal adversarial perturbations” seems like the most dramatic ML result in years; if so, it is not getting the attention it deserves: https://arxiv.org/pdf/1610.08401v1.pdf
Result: there is a *fixed*, human-invisible map you can add to *any* image, and it renders it unclassifiable by multiple DL systems. pic.twitter.com/s4zMsPy6Jy
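To make the claim concrete: the perturbation is computed once and then reused unchanged on every input. A minimal sketch of what that means, assuming a hypothetical precomputed array v and a stand-in classify() function (neither is the paper's actual code):

```python
import numpy as np

# Assumed for illustration: v is one precomputed perturbation (H x W x 3),
# constrained to be imperceptibly small on a 0-255 pixel scale.
v = np.load("universal_perturbation.npy")  # hypothetical file

def perturb(image):
    # The *same* fixed v is added to every image; nothing is recomputed
    # per input, which is what makes the perturbation "universal".
    return np.clip(image.astype(np.float64) + v, 0, 255).astype(np.uint8)

# The paper's finding, restated in these terms: for most natural images x,
# classify(perturb(x)) != classify(x), across several DL classifiers.
```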
I have long suspected that DL image classifiers depend mainly on texture and maybe color, making much less use of shape than vertebrates do…
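One cheap way to probe that hypothesis (my sketch, not anything from the paper): phase-scramble an image in the Fourier domain, which preserves the amplitude spectrum carrying texture-like statistics while destroying shape. A texture-dominated classifier should be comparatively unfazed by it:

```python
import numpy as np

def phase_scramble(img, rng=np.random.default_rng(0)):
    # Keep each channel's Fourier amplitude spectrum (texture statistics)
    # but randomize its phase, which destroys contours and shape.
    out = np.empty(img.shape, dtype=np.float64)
    for c in range(img.shape[2]):
        spectrum = np.fft.fft2(img[:, :, c])
        random_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape[:2])))
        out[:, :, c] = np.real(np.fft.ifft2(np.abs(spectrum) * np.exp(1j * random_phase)))
    return np.clip(out, 0, 255).astype(np.uint8)
```

If a classifier's predictions degrade little under scrambling but collapse on shape-only stimuli such as silhouettes, that would be evidence for texture dependence.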
The form of the universal adversarial perturbation is consistent with this hypothesis. It subtly screws up texture/color info: pic.twitter.com/tv3tRPp1UF
(In this picture, from the paper, the intensity of the perturbation is vastly magnified to show its form; it's actually undetectable to the eye)
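The magnification itself is routine: rescale the tiny perturbation into the visible 0-255 range. A sketch of the idea, assuming the same hypothetical array v as above (this is not the paper's figure code):

```python
import numpy as np

def visualize_perturbation(v):
    # Map [-max|v|, +max|v|] onto [0, 255] so structure becomes visible;
    # mid-gray (about 128) corresponds to zero perturbation.
    scale = np.abs(v).max()
    return ((v / scale + 1.0) * 127.5).astype(np.uint8)
```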
In casual searches, I have not found anyone else suggesting this explanation. I think it might give a lot of insight into DL image work.
This Tweet is unavailable.
Thanks!! This is nice work (and supports my theory about why DL image classifiers work…). I’m trying to resist writing a long tweetstorm about it, but I’m not sure I’m going to succeed!