My takeaway from Rich Sutton’s “The Bitter Lesson” is that AI methods that survive the test of time tend to scale. Systems that scale have simple and elegant designs that are easy to understand. Designing such systems requires even greater human creativity. http://www.incompleteideas.net/IncIdeas/BitterLesson.html
-
The question is whether these innate structures *are* what they *seem* to be. If we can find a clever meta-method that guides the system to rapidly re-discover them from scratch, in a way that functions well, that would work beautifully. For example, this is what Bengio wants: pic.twitter.com/3pTd0x0Ur8
-
Ideally, you would be able to justify the architecture of a neural net in terms of axioms that hold for the problem being solved. E.g., image features are invariant under translation, and convolutional networks exploit this. So prior knowledge can justify the structure of a network.
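The translation-invariance point can be made concrete. Below is a minimal sketch (my own illustration, not from the thread) showing that plain cross-correlation, the operation at the heart of convolutional layers, is translation-equivariant: shifting the input shifts the output by the same amount. The function `conv1d` and the toy signal are assumptions chosen for demonstration.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode cross-correlation of a 1-D signal with a kernel."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

signal = np.array([0., 0., 1., 2., 1., 0., 0., 0.])
kernel = np.array([1., -1.])

# Translate the input by one step and convolve again.
shifted_signal = np.roll(signal, 1)
out = conv1d(signal, kernel)
out_shifted = conv1d(shifted_signal, kernel)

# Away from the boundary, the shifted input's output is just the
# original output translated by the same offset -- the symmetry
# that a convolutional architecture bakes in by design.
assert np.allclose(out_shifted[1:], out[:-1])
```

Because weight sharing enforces this symmetry architecturally, the network never has to learn it from data, which is exactly the kind of knowledge-justified structure the tweet describes.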
-