Smells a bit fishy to me too. An arXiv submission only after NIPS acceptance feels like an end run around reproduction efforts.
-
Replying to @dwf
I'd love to see a reusable open-source implementation. I want to believe.
-
Shouldn't this be very easy to reimplement, though?
-
Indeed, and I will gladly eat my words if it turns out to be completely reproducible.
-
But if I had a bombshell like this, I wouldn't have sat on it for 8 weeks of NIPS reviewing; makes me suspicious.
-
The result isn't that surprising, is it? Roughly stacked sparse coding. Similar models have been built before.
-
Only difference is the claim to match VGGnet. But sparse coding has given strong numbers in the past.
-
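(For readers outside the subfield: "stacked sparse coding" with layer-local training roughly means fitting a sparse-coding dictionary to the data, then fitting a second dictionary to the codes produced by the first layer, with no end-to-end objective. Below is a minimal numpy sketch of that idea; the ISTA solver, dictionary sizes, and step counts are illustrative assumptions, not anything from the paper under discussion.)

```python
import numpy as np

def soft_threshold(v, t):
    # elementwise shrinkage operator used by ISTA
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_codes(X, D, lam=0.1, n_steps=100):
    """Sparse codes Z for columns of X under dictionary D:
    minimize 0.5 * ||X - D Z||^2 + lam * ||Z||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_steps):
        grad = D.T @ (D @ Z - X)
        Z = soft_threshold(Z - grad / L, lam / L)
    return Z

def dictionary_step(X, Z, D, lr=0.1):
    """One gradient step on the reconstruction loss w.r.t. the dictionary."""
    D = D - lr * (D @ Z - X) @ Z.T / X.shape[1]
    # keep atoms on the unit sphere, a common normalization
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)

# "Stacking" in the layer-local sense: train D1 on the data, then train D2
# on the codes produced by layer 1 -- no joint, end-to-end objective.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 256))        # toy data: 64-dim inputs, 256 samples
D1 = rng.standard_normal((64, 128))
for _ in range(20):
    Z1 = ista_codes(X, D1)
    D1 = dictionary_step(X, Z1, D1)

Z1 = ista_codes(X, D1)                    # final layer-1 codes feed layer 2
D2 = rng.standard_normal((128, 64))
for _ in range(20):
    Z2 = ista_codes(Z1, D2)
    D2 = dictionary_step(Z1, Z2, D2)
```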
Layer-local greedy training doing that well is pretty surprising, yes.
-
The reason DL works so well is the joint training of the stacked representations. Greedy approaches are fundamentally flawed.
-
Maybe. It doesn't seem logical that joint training should ever lose to greedy training, but if it does?
-
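(The distinction being argued above, as a rough PyTorch sketch: in greedy layer-local training each block is trained against its own auxiliary loss and its input from below is detached, so no gradient ever crosses a block boundary; in joint training one loss at the top backpropagates through the whole stack. The architecture, auxiliary heads, and optimizer settings here are illustrative assumptions, not the paper's setup.)

```python
import torch
import torch.nn as nn

def make_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(),
                         nn.MaxPool2d(2))

def aux_head(c, n_classes=10):
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, n_classes))

blocks = [make_block(3, 32), make_block(32, 64)]
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 3, 32, 32)             # toy batch
y = torch.randint(0, 10, (8,))

# Greedy layer-local training: each block is optimized against its own
# auxiliary classifier, and the features it receives from below are
# detached, so no error signal crosses a block boundary.
feats = x
for block in blocks:
    head = aux_head(block[0].out_channels)
    opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()), lr=0.1)
    out = block(feats.detach())
    loss = loss_fn(head(out), y)
    opt.zero_grad(); loss.backward(); opt.step()
    feats = out.detach()

# Joint (end-to-end) training: one loss at the top, gradients flow through
# the whole stack -- the "joint training of the stacked representations"
# that the tweet above credits for DL's success.
model = nn.Sequential(*blocks, aux_head(64))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = loss_fn(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```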
<5% chance that this paper upends backprop. The one that does will have a theoretical build-up that this paper lacks.
-
I'm with you on that one. But if it truly gives BP a run for its money in this one case, it'd be interesting.
-