ML pro tip: learning your features is better than leveraging random kernels.
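A minimal sketch of the tip above (not from the thread; dataset, sizes, and hyperparameters are illustrative assumptions): compare a linear classifier on fixed random kernel features (random Fourier features via `RBFSampler`) against a small network that learns its own features end to end.

```python
# Hypothetical comparison: random kernel features vs learned features.
# All choices (make_moons, 100 components/units, gamma=1.0) are assumptions
# for illustration, not from the original tweet.
from sklearn.datasets import make_moons
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=600, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# (a) "leveraging random kernels": fixed random feature map + linear model
rf = RBFSampler(gamma=1.0, n_components=100, random_state=0).fit(X_tr)
random_acc = LogisticRegression(max_iter=1000).fit(
    rf.transform(X_tr), y_tr).score(rf.transform(X_te), y_te)

# (b) "learning your features": the hidden layer is trained end to end
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
learned_acc = mlp.score(X_te, y_te)

print(random_acc, learned_acc)
```

Both pipelines have the same feature dimensionality; the only difference is whether the feature map is random or trained.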
Welp
This Tweet is unavailable.
That's because the feature map is just an approximation in this case, though you can train the exact feature map or use a pre-trained model like VGG.
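One way to see the "feature map is just an approximation" point (a sketch with assumed sizes and `gamma`, not from the thread): random Fourier features give an inner product that only approximates the exact RBF kernel, and the approximation tightens as the number of components grows.

```python
# Assumed illustration: approximation error of random Fourier features
# (RBFSampler) against the exact RBF kernel matrix.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
K_exact = rbf_kernel(X, gamma=0.5)

errs = []
for n in (10, 100, 1000):
    # Z @ Z.T approximates K_exact; more components -> better approximation
    Z = RBFSampler(gamma=0.5, n_components=n, random_state=0).fit_transform(X)
    errs.append(np.abs(Z @ Z.T - K_exact).mean())

print(errs)  # mean absolute error shrinks as n grows
```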
End of conversation
@robdadashi just in case ;)
Hi Francois! Do you know when we will have this layer?
End of conversation

New conversation
I start my lectures with the perceptron, linear regression, logistic regression, and SVM, only changing the loss in the network. Can you add the perceptron loss (0-1 loss) to Keras, just for didactic purposes?
The perceptron is not a loss-based algorithm. It's just a binary classification algorithm that stacks a sign function on top of a linear layer and trains the weights with a mistake-driven update rule based on a linear-algebra idea, not gradients. So there is no perceptron loss!
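The mistake-driven rule described above can be sketched as follows (toy data and epoch count are assumptions for illustration): on each misclassified point the weights move by `y_i * x_i`, Rosenblatt's classic update, with no loss gradient anywhere.

```python
import numpy as np

def perceptron_train(X, y, epochs=50):
    """Classic perceptron: sign(w.x + b), mistake-driven updates.

    y must be in {-1, +1}. No loss function is ever evaluated:
    on a mistake we apply w += y_i * x_i, b += y_i.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # prediction disagrees with label
                w += yi * xi
                b += yi
    return w, b

# linearly separable toy data (assumed for the demo)
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + 3, rng.randn(20, 2) - 3])
y = np.array([1] * 20 + [-1] * 20)

w, b = perceptron_train(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

On separable data this update provably converges, which is why the algorithm is usually presented via its update rule rather than a loss.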
End of conversation
New conversation
SVMs have many great properties, e.g. global convergence and the ability to tune the regularisation effectively. This seems like a legitimate way to train a feature extractor, but only if you finally replace the classifier with an actual SVM (if you chose to use an SVM instead of DL).
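The two-stage recipe described above can be sketched like this (a minimal illustration with assumed dataset, layer size, and `C`; not code from the thread): train a small network, read out its trained hidden layer as the feature map, then fit an actual linear SVM on those features in place of the network's own output layer.

```python
# Assumed sketch: learn features with a net, then swap in a real SVM.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

X, y = make_moons(n_samples=600, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) learn a feature extractor end to end
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    max_iter=2000, random_state=0).fit(X_tr, y_tr)

# 2) the trained hidden layer, applied manually, is the feature map
def features(X):
    return np.maximum(0.0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# 3) replace the final linear classifier with an actual SVM
svm = LinearSVC(C=1.0).fit(features(X_tr), y_tr)
acc = svm.score(features(X_te), y_te)
```

The point of step 3 is that the margin-maximising objective and its regularisation parameter `C` now govern the decision boundary, while the representation stays learned.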