Implementing fully connected nets, convnets, RNNs, backprop, and SGD from scratch (in pure Python, NumPy, or even JS) and training these models on small datasets is a great way to learn how neural nets work. Invest the time to gain valuable intuition before jumping into frameworks. https://twitter.com/dennybritz/status/961829329985400839
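The exercise the tweet describes can be sketched in a few dozen lines of NumPy. This is a minimal illustration of my own (a one-hidden-layer net trained on XOR with manual backprop), not code from the linked tweet:

```python
import numpy as np

# Minimal from-scratch net: one tanh hidden layer, sigmoid output,
# manual backprop, gradient descent (full-batch here, since XOR has 4 points).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: for sigmoid + cross-entropy, dL/dlogits = (p - y) / n
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0)
    dh = (dlogits @ W2.T) * (1.0 - h ** 2)   # chain rule through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

Writing the backward pass by hand is where the intuition comes from: each gradient line is the chain rule applied to one layer, which is exactly what a framework's autograd does for you.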
-
Knowing the algorithm != Knowing how/why it works. My recommendation: do Kaggle competitions, and use more than neural nets. Use many different ML models. Make it visual. Plot your features.
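A hedged sketch of that workflow (the dataset and model choices are my own, not from the thread): fit several model families on the same data and compare cross-validated scores before reaching for a neural net.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Three different model families; scale inputs for the distance/margin-based ones.
models = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

# 5-fold cross-validated accuracy for each model
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```

Seeing where the models agree and disagree, and plotting the features they rely on, teaches more than tuning a single architecture.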
-
It's perhaps clearer with a simpler example: implementing SVD teaches you how to implement SVD. Once you're done, you still don't know what it does. Using SVD on various datasets and plotting and using the results is what gives you intuition about SVD. Also, learning math helps.
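The "use it and look at the results" step can be sketched like this (my own toy example, not the tweet author's): run SVD on a synthetic low-rank matrix with noise and inspect the singular values.

```python
import numpy as np

# Build an exactly rank-2 matrix, then add a small amount of noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))
A_noisy = A + 0.01 * rng.normal(size=A.shape)

U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
# Two singular values dominate; the rest sit near the noise floor.
print(s[:4])

# Truncating to rank 2 recovers the underlying signal almost exactly.
A2 = (U[:, :2] * s[:2]) @ Vt[:2]
err = np.linalg.norm(A2 - A) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.4f}")
```

The sharp drop in the singular-value spectrum is the intuition: SVD separates structure from noise, which no amount of implementing the decomposition itself will show you.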
-
1/3 I think you raise some good points, but in my view these things are orthogonal. There are many difficult real-world problems where current paradigms like deep learning are inadequate, and cookbook, cookiecutter approaches won't work.
-
2/3 Knowing what's under the hood not only makes it easier to debug errors; it also gives me more confidence to modify and extend the existing paradigm to tackle new types of problems.