The best 1st layer in a 2-layer model isn't the same as in a 20-layer model. Greedy representation learning is fundamentally broken.
And deep learning changed everything because it offered a computationally practical way to learn all layers at the same time.
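A minimal NumPy sketch (not from the thread, my own toy construction) of the point being made: if the first layer is trained greedily with an unsupervised criterion (here, the top principal component) and then frozen, it can lock onto a high-variance direction that is useless for the task, while end-to-end gradient descent through both layers finds the task-relevant feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 5, 1  # samples, input dim, bottleneck width

# Inputs have large variance along axis 0, but the target depends
# only on axis 1 (a low-variance direction).
X = rng.normal(size=(n, d)) * np.array([10.0, 1.0, 1.0, 1.0, 1.0])
y = X[:, 1:2]

def task_loss(W1, W2):
    """MSE of the 2-layer linear net X @ W1 @ W2 on the task."""
    return np.mean((X @ W1 @ W2 - y) ** 2)

# Greedy: pick layer 1 without looking at the task (top principal
# component = max-variance direction), freeze it, then fit layer 2
# by least squares on the frozen features.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W1_greedy = Vt[:k].T                      # (d, k), ~ the high-variance axis
H = X @ W1_greedy
W2_greedy, *_ = np.linalg.lstsq(H, y, rcond=None)

# End-to-end: gradient descent on both layers jointly against the task loss.
W1 = rng.normal(size=(d, k)) * 0.1
W2 = rng.normal(size=(k, 1)) * 0.1
lr = 1e-3
for _ in range(10_000):
    err = X @ W1 @ W2 - y                 # (n, 1) residual
    gW2 = (X @ W1).T @ err * (2 / n)      # grad through layer 2
    gW1 = X.T @ err @ W2.T * (2 / n)      # grad through layer 1 (backprop)
    W1 -= lr * gW1
    W2 -= lr * gW2

print("greedy loss:    ", task_loss(W1_greedy, W2_greedy))  # ~ Var(y), no fit
print("end-to-end loss:", task_loss(W1, W2))                # near zero
```

The greedy first layer is the "best" unsupervised feature in isolation, yet near-useless for the downstream objective; jointly trained layers coordinate so layer 1 learns what layer 2 actually needs.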
@fchollet Also end-to-end training with no feature engineering.
@fchollet Although it all started with one-layer-at-a-time pre-training, no?
@fchollet I feel like there is a lot here. Please write a blog post about it.
@AshwinKalyan @fchollet I think the "self-play" concept can be a good complement to backprop. We see it already in generative / game-playing algos.