Noise injection and adversarial input don't help much. I suspect the problem is that MLPs have no means of converging on a globally coherent interpretation in terms of composable prototypes.
-
I am not a fan of embodimentalism, but I suspect there is a substantial difference between living in an image database and living in a dynamic, interactive 3-space, and it's not the noise. Learning systems can ultimately only converge on a model of the data they have.
End of conversation
New conversation -
Also, we observe over time: a time series of (incomplete) images. Might help prevent some classes of adversarial ... scenes?
-
Yes, and there are a number of projects that have started working on real-time video.
End of conversation
New conversation -
How does it show robustness? It only shows robustness until you show it an optical illusion.
-
You rarely look at a picture of a car where someone has changed a few pixels and conclude it is now an ostrich. Most optical illusions are a tradeoff for learned biases that aid convergence on correct interpretations in the majority of circumstances.