Contrary to widespread belief, modern deep CNNs fail to generalize over tiny image transformations such as translation and scaling: https://arxiv.org/abs/1805.12177 @filippie509 @GaryMarcus
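The paper's finding is easy to probe in spirit. Below is a minimal sketch (assuming a recent PyTorch/torchvision; the image file example.jpg is a placeholder) that shifts an input by a single pixel and compares a pretrained classifier's top prediction before and after:

```python
# Minimal sketch: probe a pretrained classifier's sensitivity to a one-pixel shift.
# Assumes a recent torchvision; the model choice and image path are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    p0 = F.softmax(model(img), dim=1)
    # Shift the image one pixel to the right, zero-filling the vacated column.
    shifted = torch.roll(img, shifts=1, dims=3)
    shifted[..., :, 0] = 0
    p1 = F.softmax(model(shifted), dim=1)

print("top-1 before:", p0.argmax().item(), p0.max().item())
print("top-1 after: ", p1.argmax().item(), p1.max().item())
```

A truly shift-invariant classifier would report the same class with the same confidence for both inputs; in practice the probabilities, and sometimes even the predicted class, can change.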
No surprise there. CNNs are only slightly positionally invariant, a side effect of their small convolution kernels. In deep learning, invariance has to be obtained from huge numbers of training samples, and there are always gaps that adversarial techniques can easily uncover.
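The loss of invariance is easy to demonstrate at the layer level. A toy sketch (random weights, arbitrary sizes, purely illustrative): convolution by itself commutes with translation, but strided downsampling does not, so a one-pixel shift changes the output.

```python
# Toy sketch: convolution alone is translation-equivariant, but strided
# downsampling breaks shift invariance. Random weights, purely illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, bias=False)
pool = nn.MaxPool2d(kernel_size=2, stride=2)

x = torch.randn(1, 1, 16, 16)
x_shifted = torch.roll(x, shifts=1, dims=3)  # shift one pixel along width

with torch.no_grad():
    y = conv(x)
    y_shifted = conv(x_shifted)
    # Equivariance: shifting the input shifts the conv output by the same
    # amount; interior columns match exactly (edge columns differ due to
    # padding and the wrap-around of torch.roll).
    print(torch.allclose(torch.roll(y, shifts=1, dims=3)[..., 2:-1],
                         y_shifted[..., 2:-1]))  # True

    # After stride-2 pooling, an odd shift samples a different grid, so the
    # outputs differ even away from the edges.
    z = pool(y)
    z_shifted = pool(y_shifted)
    print((z - z_shifted).abs().max().item())  # generally nonzero
```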
Replying to @RebelScience @MaxALittle
And what's wrong with having a large number of training samples?
Replying to @vakibs @RebelScience
For many use cases, it is impractical to get enough training examples to make up for the fact that CNNs cannot learn anything deep about the images. Better neural network designs exist.
Replying to @paklnet @RebelScience
I am not convinced, honestly. Invariance is over-rated. And I say this as somebody who was a fan of planar homologies and weird shit like that. I consider it better to train the invariance with a well-thought-out regime of training data, probably including synthetic data.
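Training the invariance through the data regime typically means augmentation. A minimal sketch of that idea (assuming torchvision; the directory data/train is a placeholder): random translations and rescalings applied on the fly, so each epoch effectively sees synthetic shifted and scaled variants of every image.

```python
# Minimal sketch: teaching translation/scale robustness through data
# augmentation rather than architectural priors. torchvision is assumed;
# the dataset directory "data/train" is a placeholder.
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    # Random translation of up to 10% and rescaling between 90% and 110%.
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=train_transform)
```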
Replying to @vakibs @RebelScience
Invariance isn’t just over-rated; worse, enforcing invariance in a design can actually be harmful to general task performance. Avoid it if possible. So I think I agree with you there, and raise you.
Come on guys. Invariance is an absolute must. There can be no intelligence without it. The brain does not use lots of training samples to achieve it. The brain uses timing as a glue. Transformations of an object do not break invariance because the brain can predict future states.
Replying to @RebelScience @paklnet
But invariance should be learned, not imposed as a prior. That is the point. Different signals have different invariances. Translation is just one of many...
Replying to @GaryMarcus @RebelScience
Because I don't think we can easily tell what the invariances are that we'd like to build in (particularly higher up in the sensory hierarchy). The system needs to be constructed in a way that allows it to pick them up. See the ferret cortical-rewiring experiment.
I would think that we need some invariances innately, and others to be acquired. If you are talking about Sur’s rewiring experiments, I don’t see the relevance.
Replying to @GaryMarcus @RebelScience
There could be some innate invariances, but my guess is that the majority are learned. Rewiring shows that the cortex can interpret a completely different modality. Either these signals have the same invariances (I doubt that) or the cortex is able to accommodate many.
New conversation
Replying to @GaryMarcus @filippie509
This I agree with. There are some innate invariances, and that is what makes one kind of net more efficient at learning than another.