Hypothesis (& one I’m pretty sure is true): adversarial examples will be a permanent problem in deep learning/ML. They exist because these systems do not have an underlying model that corresponds to their behavior. https://twitter.com/catherineols/status/1020458649825636352
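For context on the phenomenon the thread is debating, here is a minimal sketch of the standard fast gradient sign method (FGSM) for producing an adversarial example. The specific model, epsilon value, and pixel range are illustrative assumptions, not anything claimed in the thread.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.03):
    """One-step FGSM: nudge the input along the sign of the loss gradient.

    A perturbation this small is usually imperceptible to a human, yet it
    often flips the model's prediction -- the "adversarial example" problem
    discussed above. `eps` and the [0, 1] pixel range are assumptions here.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step each input dimension slightly in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```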
-
My intuition is very much the other direction. Until a decision process connects to (and represents) theories about the world, these will always exist.
-
I don’t think the amount of training data needs to be much larger, but the data has to be largely mappable to the same set of relations, i.e. a single cohesive world simulation. I suppose we agree on that?