If learning is an important component of AGI, then optimization is, too. What type of optimization? Gradient-based optimization is far more efficient than gradient-free optimization. So, I'll bet gradient-based optimization will be key. If your learner is non-linear, that's deep learning!
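The efficiency claim above can be illustrated with a toy comparison (my own sketch, not from the thread): gradient descent, which exploits slope information, converges on a simple quadratic far faster than a gradient-free random search given the same step budget. The function, step sizes, and step count are all arbitrary choices for illustration.

```python
# Toy comparison: gradient-based vs gradient-free optimization
# on f(x) = (x - 3)^2, whose minimum is at x = 3.
import random

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    # Analytic derivative of f.
    return 2.0 * (x - 3.0)

def gradient_descent(x=0.0, lr=0.1, steps=50):
    # Follow the negative gradient; each step shrinks the error
    # by a constant factor on this quadratic.
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def random_search(x=0.0, steps=50, seed=0):
    # Gradient-free baseline: propose a random perturbation and
    # keep it only if it improves the objective.
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        cand = best + rng.uniform(-0.5, 0.5)
        if f(cand) < f(best):
            best = cand
    return best

print("gradient descent:", f(gradient_descent()))
print("random search:   ", f(random_search()))
```

With the same 50-step budget, gradient descent drives the loss down geometrically, while random search merely stumbles toward the minimum; the gap widens dramatically in higher dimensions, which is the usual argument for gradients.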
New conversation
We don’t know the answer yet. Deep learning continues to be a very productive paradigm. We are still finding powerful new network architectures and interesting meta-learning and transfer learning strategies.
New conversation
Nature’s gradient descent uses brain physics, not computer physics running a model of it. The hidden AGI thwart, presupposed into invisibility? Using computers.
#ComputerImperialism - predestining arcane AGI failure all along. Brain physics on the chips - another way to AGI.