It’s strange to see people defining deep learning as supervised learning via backprop, considering that the 2006 deep learning revolution was originally based on the idea that neither of those things work very well
Replying to @goodfellow_ian
Maybe it's because a lot of it is supervised learning.
1 reply 0 retweets 1 like -
Replying to @ChombaBupe @goodfellow_ian
I am also curious, are there currently good alternatives to backprop?
2 replies 0 retweets 1 like -
Replying to @ChombaBupe
In situations where you can’t use backprop, there are other methods you can fall back to. In situations where you can use backprop, I can’t think of a situation where it would make sense to choose not to
2 replies 0 retweets 13 likes -
Replying to @goodfellow_ian @ChombaBupe
Evolutionary algorithms (which may/may not use gradients), as recently popularised by Uber AI Labs, have their merits. In particular for reinforcement learning, but also for supervised learning.
2 replies 0 retweets 6 likes -
Replying to @kaixhin @goodfellow_ian
Evolutionary algorithms search for solutions using random search. Backprop is more like informed search, and thus may be faster and more reliable in most cases, I think.
1 reply 0 retweets 0 likes -
Replying to @ChombaBupe @goodfellow_ian
Indeed, backprop is efficient and performs well when gradients are informative. However, there are scenarios where EAs may work better (see https://eng.uber.com/deep-neuroevolution/).
1 reply 4 retweets 10 likes
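The contrast drawn in the exchange above — evolutionary algorithms as gradient-free random search versus backprop's informed gradient steps — can be sketched on a toy objective. This is a minimal illustration, not the method from the Uber post; the function `f`, the (1+1) evolution strategy, and all parameters are illustrative choices:

```python
import random

# Toy objective: minimise f(w) = (w - 3)^2, with optimum at w = 3.
def f(w):
    return (w - 3.0) ** 2

# A minimal (1+1) evolution strategy: perturb the current solution with
# Gaussian noise and keep the candidate only if its fitness improves.
# No gradient information is used, only fitness comparisons.
def evolve(steps=2000, sigma=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        candidate = w + rng.gauss(0.0, sigma)
        if f(candidate) < f(w):
            w = candidate
    return w

# Gradient descent on the same objective: each step is informed by the
# analytic gradient df/dw = 2(w - 3), so far fewer steps are needed.
def descend(steps=200, lr=0.1):
    w = 0.0
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)
    return w
```

Both reach the optimum here, but the ES needs many more evaluations; its advantage shows up when gradients are unavailable or uninformative, as in the reinforcement-learning settings the thread mentions.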
i have learned things in jeet kune do training that any ai startup would pay millions for, but i cannot teach them if they are not ready...