About SARM. At this point I am 100% convinced that the VGG16 experiment is not for real. Most likely a big experimental mistake, not fraud.
The paper is difficult to follow, so at first I wasn't sure about algo details. After some discussions, in particular with Kevin Murphy,
it seems that the VGG16 experiment merely consists of using convolutional kernels obtained by computing the PCA/LDA of the input feature maps.
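Under that reading of the algorithm, here is a minimal sketch of what "conv kernels from the PCA of the input feature maps" might look like. This is an illustrative reconstruction, not the paper's code; all function names and shape conventions are assumptions, and the LDA variant would substitute class-aware projections for the SVD step.

```python
# Illustrative sketch only: one plausible reading of "conv kernels from
# the PCA of the input feature maps". Not the SARM paper's code; the
# names and shape conventions here are assumptions.
import numpy as np

def extract_patches(x, k):
    """Collect every k x k patch from a feature map x of shape (H, W, C)."""
    H, W, C = x.shape
    return np.stack([
        x[i:i + k, j:j + k, :].ravel()
        for i in range(H - k + 1)
        for j in range(W - k + 1)
    ])  # shape: (num_patches, k*k*C)

def pca_conv_kernels(feature_maps, k=3, n_kernels=64):
    """Return conv kernels of shape (k, k, C, n_kernels): the top
    principal components of the k x k input patches."""
    X = np.concatenate([extract_patches(x, k) for x in feature_maps])
    X -= X.mean(axis=0)  # center the patches before PCA
    # Principal directions = right singular vectors of the centered
    # patch matrix (rows of Vt, ordered by decreasing singular value).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    C = feature_maps[0].shape[-1]
    return Vt[:n_kernels].T.reshape(k, k, C, n_kernels)
```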
I tried this exact same setup last year and explored every possible variant of that algorithm. I know for a fact that it doesn't work.
Better: I know why it doesn't work. Which isn't addressed in the paper.
Conditional on this interpretation of the algorithm used, there is a 0% chance that performance would be anywhere near backprop for VGG.
I cannot speak for the first two experiments. As for the VGG16 experiment, I am confident enough about this to state it publicly.
That said: if this paper manages to reignite interest in sparse coding, that's a positive result. It's an important, under-appreciated topic.
Backprop is computationally inefficient and data-inefficient. In the future we won't be using it. But its replacement has yet to be found.
Replying to @fchollet: @withfries2 aren't people working with synthetic gradients nowadays?
maybe that's what @ML_Hipster is up to these days