Hypothesis (& one I’m pretty sure is true): adversarial examples will be a permanent problem in deep learning/ML. They exist because these systems do not have an underlying model that corresponds to their behavior. https://twitter.com/catherineols/status/1020458649825636352
I am willing to bet a bottle of Lagavulin that ML systems will be able to deal with most adversarial examples ten years from now. Are you willing to bet against it?
Depends on what you mean by "most". I’m not interested in the security aspect, just the possibility of their construction. I’ll go out on a limb and say construction will always be possible, except perhaps at isolated points.
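The "construction will always be possible" intuition has a simple gradient-based illustration. A minimal sketch, using a toy linear classifier and the fast-gradient-sign idea (not any specific system discussed in the thread): for a differentiable model, stepping the input against the sign of the gradient shrinks the decision margin by a predictable amount, so a small per-coordinate perturbation can flip the output whenever the margin is smaller than epsilon times the L1 norm of the weights.

```python
import numpy as np

# Toy illustration: why adversarial perturbations are generically easy to
# construct for differentiable models. The linear "model" is a stand-in.
rng = np.random.default_rng(0)

w = rng.normal(size=100)             # weights of a linear classifier
x = rng.normal(size=100)             # a "clean" input
label = 1.0 if w @ x > 0 else -1.0   # the model's own prediction on x

# The margin is label * (w @ x), and its gradient w.r.t. x is label * w.
# Stepping each coordinate by -eps in the direction of that gradient's sign
# reduces the margin by exactly eps * ||w||_1.
eps = 0.1
x_adv = x - eps * np.sign(label * w)

print("clean margin:", label * (w @ x))
print("adv margin:  ", label * (w @ x_adv))  # smaller by eps * ||w||_1
```

With 100 input dimensions, `eps * ||w||_1` is roughly 8 here while each coordinate moves by only 0.1, which is the high-dimensional effect that makes such perturbations nearly imperceptible yet effective.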
Let’s say: I am willing to risk a bet that AI will not perform systematically worse than humans.
2 more replies
New conversation
Quick question: Is this a full bottle of Lagavulin, or just the bottle itself? ’Cause, you know, one could parse the terms of your bet either way...
No adversarial examples in the bet itself! To be judged by a human.
1 more reply
New conversation