1/ Been thinking about @karpathy's [0] and @petewarden's [1] recent posts on deep learning as "software 2.0", i.e. something fundamentally new. From a software engineering POV, the right analogy seems to be JIT compilation.
[0] https://medium.com/@karpathy/software-2-0-a64152b37c35
[1] https://petewarden.com/2017/11/13/deep-learning-is-eating-software/
-
3/ There's something analogous going on with the ability of reverse-mode autodifferentiation to do end-to-end optimization, at runtime, of a loss function for a model composed of many separately defined layers. Both techniques change the tradeoff between modularity and performance.
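To make the point concrete, here's a minimal sketch (mine, not from the thread) of reverse-mode autodiff in plain Python, micrograd-style: two layers are written as separate functions, yet one backward pass computes gradients of the loss with respect to every parameter across the layer boundary. All names (`Value`, `layer1`, `layer2`) are illustrative.

```python
import math

class Value:
    """Tiny reverse-mode autodiff node: records parents and a local backward rule."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():  # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def __sub__(self, other):
        out = Value(self.data - other.data, (self, other))
        def backward():
            self.grad += out.grad
            other.grad -= out.grad
        out._backward = backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def backward():  # d/dx tanh(x) = 1 - tanh(x)^2
            self.grad += (1 - t * t) * out.grad
        out._backward = backward
        return out

    def backprop(self):
        # Topologically sort the graph, then apply local rules in reverse:
        # this is the single end-to-end pass that crosses layer boundaries.
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# Two layers, defined completely separately...
def layer1(w, x): return (w * x).tanh()
def layer2(v, h): return v * h

# ...composed and optimized through one loss.
w, v = Value(1.0), Value(1.0)
x, y = Value(2.0), Value(0.5)
pred = layer2(v, layer1(w, x))
loss = (pred - y) * (pred - y)   # squared error
loss.backprop()
print(w.grad, v.grad)            # gradients through both layers
```

The modularity/performance point: `layer1` and `layer2` know nothing about each other, but the runtime-built tape lets the backward pass optimize the whole composition at once, much as a JIT compiler optimizes across separately written functions.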