This represents a misreading of my paper. I asked if we would hit a wall, and implicitly said no, provided that we start doing other things, like adding in symbol-manipulating primitives. If we do that, we can avoid the wall. I’m just suggesting how we ought to steer. @Plinz https://twitter.com/dougblank/status/950393794377240576 …
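The thread names no concrete mechanism, so as a minimal sketch of one possible reading of "adding in symbol-manipulating primitives": a learned component emits soft scores, a binding primitive discretizes them into a symbol, and an explicit rule then operates over that symbol. Every function and weight below is a hypothetical illustration, not Marcus's actual proposal.

    # One reading of "symbol-manipulating primitives": soft scores from a
    # learned component are bound to a discrete symbol, which an explicit
    # rule then manipulates. All names here are hypothetical.

    def soft_scores(features):
        """Stand-in for a trained network: maps features to class scores."""
        # Hypothetical fixed weights; a real system would learn these.
        weights = {"cat": [0.9, -0.2], "dog": [-0.3, 0.8]}
        return {label: sum(w * f for w, f in zip(ws, features))
                for label, ws in weights.items()}

    def bind_symbol(scores, threshold=0.5):
        """Symbol-binding primitive: discretize soft scores into a symbol."""
        label, score = max(scores.items(), key=lambda kv: kv[1])
        return label if score >= threshold else None

    def rule(symbol):
        """Explicit symbolic rule, applied uniformly to any bound symbol."""
        facts = {"cat": "mammal", "dog": "mammal"}
        return f"{symbol} is a {facts[symbol]}" if symbol in facts else "unknown"

    features = [1.0, 0.1]                       # toy input
    symbol = bind_symbol(soft_scores(features))
    print(rule(symbol))                         # -> "cat is a mammal"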
Replying to @GaryMarcus @Plinz
I think if you expanded your suggestion to include giving networks better internal categorical representations, and providing for more flexible, dynamic "thinking-like" processes, everyone would agree with you. And since that is where we are headed anyway, there is no wall.
Man, give me a break. The deep learning community is certainly not headed there. DL is supervised learning. There can be no AGI without unsupervised learning. Marcus is right but he did not go far enough, IMO. DL must be discarded like yesterday's garbage if we are to solve AGI.
There is no generally accepted definition of Deep Learning. Stochastic gradient descent in differentiable feedforward networks may be standard practice, but it is probably not the way to go, and most DL researchers seem to agree.
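For concreteness, this is the standard practice being referred to: stochastic gradient descent on a small differentiable feedforward network. The sketch below trains a one-hidden-layer NumPy network on XOR by sampling one example per step; the architecture and hyperparameters are illustrative choices, not anything proposed in the thread.

    # SGD on a differentiable feedforward network (one hidden layer, XOR).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        i = rng.integers(0, 4)            # "stochastic": one example at a time
        x, t = X[i:i+1], y[i:i+1]
        h = sigmoid(x @ W1 + b1)          # forward pass
        p = sigmoid(h @ W2 + b2)
        dp = (p - t) * p * (1 - p)        # backprop of squared error
        dW2 = h.T @ dp; db2 = dp.sum(0)
        dh = (dp @ W2.T) * h * (1 - h)
        dW1 = x.T @ dh; db1 = dh.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2    # gradient descent step
        W1 -= lr * dW1; b1 -= lr * db1

    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))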