Hypothesis (& one I’m pretty sure is true): adversarial examples will be a permanent problem in deep learning/ML. They exist because these systems do not have an underlying model that corresponds to their behavior. https://twitter.com/catherineols/status/1020458649825636352
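The phenomenon the hypothesis refers to can be sketched in a few lines. Below is a minimal, made-up illustration of the fast-gradient-sign idea on a linear scorer: none of the numbers or names come from the thread, and a real attack would target a trained network, not a random weight vector.

```python
import numpy as np

# Minimal fast-gradient-sign sketch on a linear model score = w . x.
# Everything here (dimensions, weights, epsilon) is illustrative only.
rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)
x = rng.normal(size=d)

# Project x so the model gives it a clear but modest positive score.
x = x - (w @ x / (w @ w)) * w + (2.0 / (w @ w)) * w   # now w @ x == 2.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The gradient of the score w.r.t. the input is w itself, so stepping
# each coordinate by eps against sign(w) maximally lowers the score
# under an L-infinity budget of eps.
eps = 0.01                        # ~1% of a typical |x_i|
x_adv = x - eps * np.sign(w)

print(sigmoid(w @ x))             # ≈ 0.88: confidently positive
print(sigmoid(w @ x_adv))         # < 0.5: a ~1% per-coordinate nudge flips the label
```

The point of the sketch is the dimension dependence: the per-coordinate change is tiny, but its effect on the score scales with the sum of |w_i|, which is why high-dimensional pattern matchers are so exposed.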
Replying to @SimonDeDeo
I suspect that is going to change once we don’t train to match patterns to classified bitmaps, but patterns to a single cohesive world model, which constrains the solution space much more tightly.
Replying to @Plinz
My intuition is very much the other direction. Until a decision process connects to (and represents) theories about the world, these will always exist.
Replying to @SimonDeDeo @Plinz
One can make the argument that you couldn’t find an adversarial feature if your input space were a light field and not a two-dimensional projection.
It is also a hint that deep learning methods are incomplete. The biological brain samples the world efficiently and does not actually see everything. How the brain reconstructs reality matters more than what it is currently perceiving.
Deep Learning just means compositional function approximation. It is not restricted to chaining normalized weighted sums of real numbers, even though that’s currently popular.
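The "compositional" point can be made concrete with a toy model built from a primitive that is not a weighted sum. The product unit below is one hypothetical choice (my example, not from the thread): it is differentiable and composable, so it fits the same framework even though it computes products of powers rather than sums.

```python
import numpy as np

# Sketch: compositional function approximation with a non-sum primitive.
# A "product unit" computes prod_i x_i ** e_i -- differentiable and
# learnable, but not a normalized weighted sum. Purely illustrative.
def product_unit(x, exponents):
    return np.prod(np.power(x, exponents))

def model(x):
    # First stage: two product units with fixed, made-up exponents.
    h = np.array([product_unit(x, np.array([1.0, 2.0])),
                  product_unit(x, np.array([0.5, 1.0]))])
    # Second stage: an ordinary affine + tanh layer composed on top.
    return np.tanh(h @ np.array([0.3, -0.7]))

print(model(np.array([2.0, 3.0])))   # a scalar in (-1, 1)
```

Composing stages of different primitive types like this is still "deep learning" in the sense above: the model is a chain of differentiable building blocks, whatever each block computes internally.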