Attacks against neural networks are getting better. They are now black-box, i.e., they don't require information from inside the neural network under attack: https://arxiv.org/abs/1712.04248
btw, there was a presentation at 34c3 on that topic, worth watching: https://media.ccc.de/v/34c3-8860-deep_learning_blindspots (//cc @kjam)
Thanks! I also have some extra resources here: https://blog.kjamistan.com/adversarial-learning-for-good-my-talk-at-34c3-on-deep-learning-blindspots/
End of conversation
New conversation
A new level of CAPTCHA.
Previous black-box attacks were not really "black-box": they either still needed access to some model internals (e.g. class probabilities) or to the training data. This is the first attack that observes nothing but the final decision of an algorithm (e.g. an autonomous car) on given inputs.
To be more precise, for the experts: all "real" black-box methods (no training data, no probabilities) published before were either restricted to very simple algorithms (e.g. linear classifiers) or to very simple datasets (like MNIST), so they were no real threat to real-world applications.
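A minimal sketch of what such a decision-based attack can look like, loosely in the spirit of the boundary attack from the paper linked above. Everything here is illustrative and simplified: query_label is a hypothetical stand-in for the attacked model, and the step sizes are fixed rather than adapted as in the real method.

    import numpy as np

    def query_label(x):
        # Hypothetical black-box oracle: the attacker can only submit an
        # input and read off the model's final decision (a class label).
        raise NotImplementedError

    def boundary_attack(original, start, true_label,
                        steps=1000, delta=0.01, eps=0.005, seed=0):
        # Walk from a known misclassified point `start` toward `original`
        # while staying on the wrong side of the decision boundary.
        rng = np.random.default_rng(seed)
        adv = start.copy()
        for _ in range(steps):
            # 1. Random perturbation, scaled to the current distance
            #    from the original input.
            noise = rng.normal(size=adv.shape)
            noise *= delta * np.linalg.norm(adv - original) / np.linalg.norm(noise)
            # 2. Small contraction toward the original, shrinking the
            #    perturbation a little on every accepted step.
            candidate = adv + noise + eps * (original - adv)
            # 3. Keep the candidate only if the model still misclassifies
            #    it; the top-1 label is the only signal the attack uses.
            if query_label(candidate) != true_label:
                adv = candidate
        return adv

The point of the sketch is that nothing in the loop touches gradients, probabilities, or training data; the attack queries the model's decisions and nothing else.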
End of conversation