Graphical models got cool because Microsoft Research decided to specialize in them. Deep learning got a boost from Jeff Dean creating Google Brain in 2011. Lots of these shifts in zeitgeist have to do with the funding decisions of a few big tech companies.
You could alternatively say "when someone pushes them hard enough to show that [some of them] can scale to real problems" -- which makes it seem at least somewhat more reasonable, less contingent/sociological
Replying to @AdamMarblestone @xaqlab
Why should scaling to optimizing ads on large datacenters be a cue for neuroscientists?
Better something than nothing
Replying to @AdamMarblestone @xaqlab
Probably better to be driven by actual experimental observations from neuroscience
[Adam Marblestone quote-tweeted Rodney Brooks and added:]
Seriously though, approximate backprop seems simpler than approximate PGM inference... and many probabilistic inference problems can be re-framed as neural nets as the field is doing now
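A toy sketch of what "re-framing a probabilistic inference problem as a neural net" can look like, via amortized inference. The Gaussian model, network size, and learning rate below are illustrative assumptions, not anything stated in the thread: a small MLP trained with plain backprop learns to output roughly what the exact Bayesian posterior mean would give.

# Toy sketch (illustrative assumptions, not from the thread): amortized inference.
# Model: z ~ N(0, 1), x_i ~ N(z, sigma^2) for i = 1..n_obs.
# The exact posterior mean is sum(x) / (n_obs + sigma^2).
# Instead of deriving that, we train a small MLP with backprop to map
# observations x to an estimate of z; it recovers roughly the same mapping.
import numpy as np

rng = np.random.default_rng(0)
n_obs, sigma = 5, 1.0

def sample_batch(batch_size):
    z = rng.normal(size=(batch_size, 1))                  # latent variable
    x = z + sigma * rng.normal(size=(batch_size, n_obs))  # noisy observations
    return x, z

# One-hidden-layer MLP trained with plain gradient descent / backprop.
H, lr = 32, 0.01
W1 = rng.normal(scale=0.1, size=(n_obs, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1));     b2 = np.zeros(1)

for step in range(5000):
    x, z = sample_batch(128)
    h = np.tanh(x @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - z                    # squared-error loss targets the posterior mean
    # backward pass (backprop written out by hand)
    dW2 = h.T @ err / len(x);  db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh / len(x);   db1 = dh.mean(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                   # in-place gradient step

# Compare the net's output with the exact Bayesian posterior mean.
x_test, _ = sample_batch(5)
exact = x_test.sum(axis=1, keepdims=True) / (n_obs + sigma ** 2)
net = np.tanh(x_test @ W1 + b1) @ W2 + b2
print(np.hstack([exact, net]))        # the two columns should roughly agree

The inference problem has an analytic answer here only to make the check easy; the same amortization trick is what makes the neural-net reframing attractive when the posterior has no closed form.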
And there are things like: https://www.nature.com/articles/s41467-017-00181-8
And PGMs have been heavily explored, while backprop was ignored for a long time... so I think things are OK
Replying to @AdamMarblestone @xaqlab
Gradient descent wasn't ignored. It was just an unremarkable part of many learning algorithms. What always struck me as weird was how neural network people deified it as, like, the One True Learning Algorithm.
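A minimal sketch of that point, with my own toy models (nothing here is from the thread): the same unremarkable gradient-descent subroutine plugged into two different learning problems, rather than being a learning algorithm in itself.

# Toy sketch (illustrative, not from the thread): gradient descent as a
# generic subroutine shared by two different models.
import numpy as np

rng = np.random.default_rng(1)

def gradient_descent(grad_fn, w, lr=0.1, steps=500):
    # Plain gradient descent: repeatedly step against the gradient.
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

# Shared toy data.
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])

# 1) Linear regression: squared-error gradient.
y_lin = X @ w_true + 0.1 * rng.normal(size=200)
grad_lin = lambda w: X.T @ (X @ w - y_lin) / len(X)

# 2) Logistic regression: cross-entropy gradient.
y_log = (X @ w_true + rng.logistic(size=200) > 0).astype(float)
sigmoid = lambda t: 1 / (1 + np.exp(-t))
grad_log = lambda w: X.T @ (sigmoid(X @ w) - y_log) / len(X)

print(gradient_descent(grad_lin, np.zeros(3)))  # roughly w_true
print(gradient_descent(grad_log, np.zeros(3)))  # roughly w_true, up to sampling noise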
[Adam Marblestone quote-tweeted an earlier tweet of his own; the embedded tweet did not load.]