if nobody bites on question 3, it will make my next medium post a lot easier to write :) https://twitter.com/GaryMarcus/status/1209640096900812800 …
-
Deep learning is the use of backprop/auto-differentiation to allow gradient-based learning of free parameters in large networks of "parametrized functional modules", which may include functions (or compositions thereof) supporting attention and variables, and may also include non-learned modules.
-
When most or all free parameters in the system are being learned by backprop-enabled gradient descent, that’s deep learning.
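A minimal sketch of that definition, written here in JAX purely for illustration (none of this code is from the thread; every module name, shape, and hyperparameter is an assumption): a small network composed of parametrized functional modules plus one fixed, non-learned module, with every free parameter fitted by autodiff-based gradient descent.

# Illustrative sketch only: "deep learning" per the definition above =
# autodiff/backprop + gradient descent on all free parameters of a
# composition of parametrized modules (non-learned modules allowed).
import jax
import jax.numpy as jnp

def linear(params, x):
    # learned parametrized module: affine map
    w, b = params
    return x @ w + b

def fixed_feature(x):
    # non-learned module: hand-designed nonlinearity with no free parameters
    return jnp.tanh(x)

def network(params, x):
    # composition of modules; only `params` carries learnable state
    h = linear(params["layer1"], x)
    h = fixed_feature(h)
    return linear(params["layer2"], h)

def loss(params, x, y):
    pred = network(params, x)
    return jnp.mean((pred - y) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = {
    "layer1": (0.1 * jax.random.normal(k1, (3, 8)), jnp.zeros(8)),
    "layer2": (0.1 * jax.random.normal(k2, (8, 1)), jnp.zeros(1)),
}
x = jax.random.normal(k3, (32, 3))
y = jnp.sum(x, axis=1, keepdims=True)  # toy regression target

grad_fn = jax.jit(jax.grad(loss))
for step in range(200):
    grads = grad_fn(params, x, y)
    # gradient descent on every free parameter in the system
    params = jax.tree_util.tree_map(lambda p, g: p - 0.05 * g, params, grads)

Under that definition, it is the training loop at the bottom (autodiff over the whole composition, gradient steps on all free parameters) that makes this "deep learning", regardless of what the individual modules compute.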
-
We can use backprop to train CHMMs; EM just works better than backprop in this case. Also, with query training, we are starting to use backprop to train pieces of RCN. IMO it is too all-encompassing to say that using gradients = DL, when most of the ideas about structure came from elsewhere.
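The thread shows no CHMM or RCN code, so purely as a hypothetical sketch of the contrast being drawn (EM versus gradient training of the same free parameters), here is a toy 1-D Gaussian mixture in JAX: closed-form EM updates on one side, autodiff gradient ascent on the same log-likelihood on the other. Every name and constant is illustrative; this is not Vicarious's implementation.

# Illustrative sketch: the same free parameters (means, mixing weights)
# fitted two ways — EM with closed-form updates, and autodiff gradients.
import jax
import jax.numpy as jnp

def log_gauss(x, mu):
    # log N(x | mu, 1); unit variance kept fixed for simplicity
    return -0.5 * (x - mu) ** 2 - 0.5 * jnp.log(2 * jnp.pi)

def log_likelihood(mus, logits, x):
    log_pi = jax.nn.log_softmax(logits)                   # mixing weights
    comp = log_gauss(x[:, None], mus[None, :]) + log_pi   # (N, K)
    return jnp.sum(jax.scipy.special.logsumexp(comp, axis=1))

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
x = jnp.concatenate([jax.random.normal(k1, (100,)) - 2.0,
                     jax.random.normal(k2, (100,)) + 2.0])

# --- Route 1: EM, closed-form E- and M-steps ---
mus, pis = jnp.array([-1.0, 1.0]), jnp.array([0.5, 0.5])
for _ in range(50):
    logp = log_gauss(x[:, None], mus[None, :]) + jnp.log(pis)
    r = jax.nn.softmax(logp, axis=1)                     # E-step: responsibilities
    mus = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)   # M-step: weighted means
    pis = r.mean(axis=0)

# --- Route 2: gradient ascent on the same log-likelihood via autodiff ---
mus_g, logits = jnp.array([-1.0, 1.0]), jnp.zeros(2)
grad_fn = jax.jit(jax.grad(log_likelihood, argnums=(0, 1)))
for _ in range(500):
    g_mu, g_logit = grad_fn(mus_g, logits, x)
    mus_g = mus_g + 1e-3 * g_mu      # gradient ascent on the means
    logits = logits + 1e-3 * g_logit  # and on the mixing-weight logits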
-
Yes, it is like the Borg. Anything that uses SGD is assimilated as Deep Learning. If a race becomes assimilated into the Borg, does that race also become part of the Borg? Of course!!!