Contrary to widespread belief, modern deep CNNs fail to generalize over tiny image transformations such as translation and scaling: https://arxiv.org/abs/1805.12177 @filippie509 @GaryMarcus
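The failure mode the linked paper discusses can be illustrated with a toy numpy sketch (my own illustration, not code from the paper or the thread): convolution by itself is shift-equivariant, but the subsampling used in pooling and strided layers is not, so even a one-sample shift of the input changes the downsampled features wholesale.

```python
import numpy as np

def conv1d(x, k):
    """'valid' cross-correlation of signal x with kernel k."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
k = np.array([0.25, 0.5, 0.25])   # small smoothing kernel

x_shift = np.roll(x, 1)           # shift the input by one sample

# 1) Convolution alone is shift-equivariant (away from the boundary):
c, c_shift = conv1d(x, k), conv1d(x_shift, k)
print(np.allclose(c_shift[1:], c[:-1]))   # True: output shifts with the input

# 2) Add stride-2 subsampling, as in pooling / strided conv layers.
# The one-sample input shift makes the subsampler keep the odd-indexed
# convolution outputs instead of the even-indexed ones, so the features
# change wholesale rather than shifting along with the input.
y, y_shift = c[::2], c_shift[::2]
print(np.allclose(y, y_shift))            # False
```

Stacking several such subsampling stages, as standard CNN architectures do, compounds the effect, which is one reason a sub-pixel translation of the input image can flip a classifier's prediction.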
Because I don't think we can easily tell what the invariances are that we'd like to put in (particularly higher up in the sensory hierarchy). The system needs to be constructed in a way that allows it to pick them up. See the ferret cortical rewiring experiments.
I would think that we need some invariances to be innate and others to be acquired. If you are talking about Sur's rewiring experiments, I don't see the relevance.
- 5 more replies
New conversation
By analogy to DNA and mutations, could it be that the knowledge of invariance is innate, but the mechanism of this hard-wiring allows for learning?
- 1 more reply
New conversation
Because this invariance should not be brittle. It should be a robust invariance.