Hypothesis (& one I’m pretty sure is true): adversarial examples will be a permanent problem in deep learning/ML. They exist because these systems do not have an underlying model that corresponds to their behavior. https://twitter.com/catherineols/status/1020458649825636352
-
The machine appears to “know” what a face is—to generalize from examples and to make human-like mistakes. Except, you can make a small perturbation that leaves the image apparently unchanged (to someone who actually knows what a face is...)
-
...and get the system to say it’s anything you want. It’s very different from an optical illusion in the human brain, which is driven by an error in a high-level theory.
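
To make the perturbation concrete: below is a minimal sketch of one common way such perturbations are found, a targeted fast-gradient-sign (FGSM-style) step in PyTorch. The names `model`, `image`, and `target_class` are hypothetical placeholders, and this is only an illustrative method, not the one the thread has in mind.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_class, epsilon=0.01):
    """Nudge `image` toward being classified as `target_class`.

    The per-pixel change is bounded by `epsilon`, so the perturbed
    image looks unchanged to a human observer.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                    # add batch dimension
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient of the loss w.r.t. the input pixels,
    # i.e. in the direction that makes the target class more likely.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained torchvision classifier:
#   model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
#   adv = targeted_fgsm(model, image, target_class=281)   # 281 = "tabby cat"
```

In practice a single small step like this is often enough to flip the predicted label while the image remains visually indistinguishable from the original.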
-
Replying to @SimonDeDeo
I agree that it's unlikely that adversarial examples have much to do with human optical illusions. More likely that the models have learned to create and look at spurious features.
-
Replying to @togelius @SimonDeDeo
Many optical illusions are the result of biases of the visual system that improve interpretation convergence for the majority of scenes.
-
Yes! That’s why they’re there. Their origin (and explanation) is very different.
-
However, I suspect that tickles are adversarial examples in the domain of touch.