Implementing fully connected nets, convnets, RNNs, backprop, and SGD from scratch (in pure Python, NumPy, or even JS) and training these models on small datasets is a great way to learn how neural nets work. Invest the time to gain valuable intuition before jumping onto frameworks. https://twitter.com/dennybritz/status/961829329985400839
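
As a minimal sketch of what "from scratch" can look like (my illustration, not code from the thread): a one-hidden-layer net with hand-written backprop and SGD, trained on XOR in pure NumPy.

import numpy as np

# Sketch only: a 2-8-1 network trained with hand-derived gradients.
rng = np.random.default_rng(0)

# Tiny dataset: XOR, which a linear model cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted probabilities
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: the chain rule, written out by hand.
    dlogits = (p - y) / len(X)        # dL/d(pre-sigmoid output)
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dz1 = dh * (1 - h ** 2)           # tanh' = 1 - tanh^2
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Vanilla SGD update (full-batch here, since the dataset is tiny).
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= lr * grad

print(loss, p.round(2).ravel())  # loss -> ~0; predictions ~ [0, 1, 1, 0]
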
Grad students knew how to implement neural nets in C in 2000. And they didn't have good intuition about them. A high school student playing with NN frameworks in 2018 can develop a stronger understanding of NNs in a matter of days -- just thanks to a better application context.
Understanding is great, until you hit a bug. Going down to the bottom is a reminder of how many *implementation details* actually matter: random seeds, tie-breaking in max-pool, whether the gradient is correct (an eps tolerance can still be broken when chained), and others.
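
For instance, a hand-rolled backward pass is usually tested with a central-difference gradient check. The sketch below (hypothetical helper, not from the thread) shows one; as the tweet notes, per-layer checks can each pass an eps tolerance while the chained end-to-end gradient still accumulates error.

import numpy as np

def grad_check(f, x, analytic_grad, eps=1e-5):
    # Numerically estimate df/dx via central differences, one entry at a time.
    num_grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f(x)
        x[idx] = orig - eps
        f_minus = f(x)
        x[idx] = orig                 # restore the perturbed entry
        num_grad[idx] = (f_plus - f_minus) / (2 * eps)
    # Relative error is more robust than a raw absolute-eps tolerance,
    # but even a passing per-layer check doesn't guarantee the composed
    # (chained) gradient is within tolerance end to end.
    rel_err = np.abs(num_grad - analytic_grad) / np.maximum(
        1e-8, np.abs(num_grad) + np.abs(analytic_grad))
    return rel_err.max()

# Example: check d/dx of sum(tanh(x)) against the analytic 1 - tanh(x)^2.
x = np.random.randn(3, 4)
err = grad_check(lambda v: np.tanh(v).sum(), x, 1 - np.tanh(x) ** 2)
print(f"max relative error: {err:.2e}")   # should be roughly 1e-9 or smaller
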
That is indeed a compelling argument for moving up the stack. However, there might be a cognitive bias at play here: if you have, at some point in the past, understood linear algebra and NNs under the hood, the application-based intuition just "clicks" and becomes a reflexive skill.