Every once in a while, a paper comes out claiming to get competitive results using only some form of greedy layer-wise training
That's the DL equivalent of cold fusion: it violates what we know on a fundamental level, and overwhelming evidence should be required
But these papers never provide much evidence or explanation, and seem to take their own claims in stride, unaware of their enormity
For the record, DL works because of joint feature learning. You can't get anywhere close to competitive results without it, in some form
If you claim otherwise, do explain how your proposal overcomes the information loss problem, and provide overwhelming evidence. Think twice
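To make the distinction in the thread concrete, here is a minimal sketch (not part of the thread) contrasting joint end-to-end training with greedy layer-wise training on toy data. The PyTorch model, synthetic data, and hyperparameters are illustrative assumptions, not a reproduction of any specific paper; the point is only that in the joint case gradients reach every layer at once, while in the greedy case each layer is fit against a temporary head and then frozen, so later layers must live with whatever information the earlier ones discarded.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 32)           # toy inputs (hypothetical)
y = torch.randint(0, 10, (512,))   # toy labels (hypothetical)

def make_layers():
    # two hidden layers, reused by both training regimes
    return nn.ModuleList([
        nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
        nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
    ])

def fit(params, forward, inputs, targets, steps=200):
    # generic training loop: minimize cross-entropy over the given parameters
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(forward(inputs), targets)
        loss.backward()
        opt.step()
    return loss.item()

# Joint (end-to-end) training: gradients flow through every layer at once,
# so early layers learn features that serve the final objective.
layers = make_layers()
head = nn.Linear(64, 10)
params = [p for l in layers for p in l.parameters()] + list(head.parameters())

def joint_forward(x):
    for l in layers:
        x = l(x)
    return head(x)

print("joint, final loss:", fit(params, joint_forward, X, y))

# Greedy layer-wise training: each layer is fit against a throwaway head,
# then frozen; later stages never get to adjust it.
layers = make_layers()
h = X
for layer in layers:
    tmp_head = nn.Linear(64, 10)   # temporary head for this stage only
    stage_params = list(layer.parameters()) + list(tmp_head.parameters())
    fit(stage_params, lambda x, l=layer, t=tmp_head: t(l(x)), h, y)
    h = layer(h).detach()          # freeze this layer's output for the next stage

final_head = nn.Linear(64, 10)
print("greedy, final loss:", fit(list(final_head.parameters()), final_head, h, y))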