To gain some idea of the far future of ML security, we studied a simple toy problem called "adversarial spheres," simulating a future where advanced ML models are extremely accurate. We find that even then, an adversary can still easily fool them. https://arxiv.org/abs/1801.02774
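For concreteness, here is a minimal sketch of the "adversarial spheres" setup the paper describes: points on two concentric high-dimensional spheres (radii 1 and 1.3), a classifier trained to near-perfect accuracy, and a gradient attack that stays on the data manifold. The dimension, architecture, training budget, and attack step size below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

d = 500   # input dimension; assumed here, the paper studies high-dimensional spheres
R = 1.3   # outer-sphere radius, the paper's choice

def sample(n):
    """Sample n points: label 0 on the unit sphere, label 1 on the radius-R sphere."""
    x = torch.randn(n, d)
    x = x / x.norm(dim=1, keepdim=True)         # project onto the unit sphere
    y = torch.randint(0, 2, (n,))
    x = torch.where(y[:, None] == 1, x * R, x)  # scale half the points out to radius R
    return x, y.float()

# A small MLP classifier; illustrative, not the paper's exact network.
net = nn.Sequential(nn.Linear(d, 1000), nn.ReLU(), nn.Linear(1000, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):                           # train to near-perfect accuracy
    x, y = sample(256)
    opt.zero_grad()
    loss_fn(net(x).squeeze(1), y).backward()
    opt.step()

x, y = sample(10000)
with torch.no_grad():
    acc = ((net(x).squeeze(1) > 0).float() == y).float().mean()
print(f"clean accuracy: {acc:.4f}")             # typically very close to 1.0

# On-manifold attack: gradient ascent on the loss, re-projecting onto the
# unit sphere after each step, so every adversarial point is a valid input.
x, y = sample(256)
x, y = x[y == 0], y[y == 0]                     # attack the inner-sphere points
x.requires_grad_(True)
for _ in range(100):
    loss = loss_fn(net(x).squeeze(1), y)
    g, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x += 0.01 * g / g.norm(dim=1, keepdim=True)
        x /= x.norm(dim=1, keepdim=True)        # stay on the data manifold
with torch.no_grad():
    fooled = ((net(x).squeeze(1) > 0).float() != y).float().mean()
print(f"fraction fooled on-manifold: {fooled:.2f}")
```

How many errors this particular attack uncovers will vary with training; the paper's point is that even a model whose measured error rate is essentially zero still has misclassified points on the data manifold that an adversary can find.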
An alien looking at a Disney movie might see nothing that looked to them like a human, maybe nothing like a 3D scene. Our cartoons don't objectively resemble photographs; they're built to feed into a particular neural classifier (the human visual system) and do an unusually simple thing that stimulates it.
Those can't be counted as classification errors.