To gain some idea of the far future of ML security, we studied a simple toy problem called "adversarial spheres," simulating a future where advanced ML models are extremely accurate. We find that even then, an adversary can still easily fool them. https://arxiv.org/abs/1801.02774
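To make the "adversarial spheres" setup concrete, here is a minimal sketch of the kind of synthetic distribution the paper studies: points sampled from two concentric high-dimensional spheres, with the classifier only needing the input's norm. The specific radii (1.0 and 1.3), dimension, and norm-threshold classifier below are illustrative assumptions, not the trained network from the paper.

```python
import numpy as np

def sample_spheres(n, d, r_inner=1.0, r_outer=1.3, seed=0):
    """Sample points uniformly from two concentric (d-1)-spheres.

    Label 0 = inner sphere (radius r_inner), label 1 = outer sphere (radius r_outer).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform direction on the unit sphere
    y = rng.integers(0, 2, size=n)                  # which sphere each point lives on
    radii = np.where(y == 1, r_outer, r_inner)
    return x * radii[:, None], y

def norm_classifier(x, threshold=1.15):
    """For this toy data, the Bayes-optimal decision only needs the norm of the input."""
    return (np.linalg.norm(x, axis=1) > threshold).astype(int)

x, y = sample_spheres(1000, d=500)
print("accuracy:", (norm_classifier(x) == y).mean())
```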
-
If humans observed the world by looking at one patch of pixels at a time, humans would fall prey to epsilon-perturbed adversarial inputs too. The notion of "receptive field" is misappropriated IMO. Higher-level units don't get a larger view of the image but a task-condensed one.
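For context on "epsilon perturbed" above: the adversary may move each input value by at most epsilon. A minimal sketch of one standard construction of such a perturbation (the fast gradient sign method); the eps value and the [0, 1] pixel-range clipping are assumptions about the input encoding, and `grad` would come from a real framework's autodiff:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Shift x by at most eps per pixel, in the direction that increases the loss.

    `grad` is the gradient of the model's loss with respect to the input
    (e.g. from torch.autograd or jax.grad in a real pipeline).
    """
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)   # keep pixel values in a valid range
```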