1/ #DeepLearning is only good at interpolation, but applications need extrapolation: reasoning about scenarios more complex than those seen in training. With current methods, accuracy degrades rapidly as the complexity of test instances grows. Our new work aims to overcome this.
-
Interesting, it would be nice to try that in connection with this physics task where we use tree recursive networks: https://arxiv.org/abs/1702.00748
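For context, a tree recursive network applies one shared composition cell bottom-up over a tree, so the embedding of each node is a function of its children. A minimal PyTorch sketch for binary trees, under the assumption of a simple tanh cell (the linked jet-physics paper uses a more elaborate, domain-specific cell):

```python
import torch
import torch.nn as nn

class BinaryTreeRNN(nn.Module):
    """Composes leaf embeddings bottom-up with one shared cell."""
    def __init__(self, dim):
        super().__init__()
        # One composition function shared across all internal nodes.
        self.compose = nn.Linear(2 * dim, dim)

    def forward(self, node):
        # `node` is assumed to be a (left, right) tuple at internal
        # nodes and a feature tensor of shape (dim,) at the leaves.
        if isinstance(node, tuple):
            left = self.forward(node[0])
            right = self.forward(node[1])
            return torch.tanh(self.compose(torch.cat([left, right], dim=-1)))
        return node  # leaf embedding

dim = 8
model = BinaryTreeRNN(dim)
leaf = lambda: torch.randn(dim)
root = model(((leaf(), leaf()), leaf()))  # embedding of the whole tree
```

Because the same `compose` parameters are reused at every node, the network can in principle process trees deeper than any seen in training, which is what connects it to the extrapolation question in the original tweet.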
-
Indeed! I had discussed using recursive networks for physics problems with @cosmo_shirley. That is the right direction. It would be great to see how much external memory helps and how to share stack parameters among different functions.
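One way to read "external memory with shared stack parameters" is a stack-augmented RNN in the spirit of Joulin & Mikolov (2015), where the same differentiable push/pop machinery serves every function being learned. A rough sketch under that assumption (all names here are illustrative, not from the paper in the thread):

```python
import torch
import torch.nn as nn

class StackAugmentedCell(nn.Module):
    """RNN cell with a differentiable stack. The stack controller
    (push/pop/no-op weights and the push projection) is a candidate
    for sharing across functions, while input encoders stay task-specific."""
    def __init__(self, dim, stack_depth=16):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)
        self.action = nn.Linear(dim, 3)      # push / pop / no-op, shared
        self.to_stack = nn.Linear(dim, dim)  # what gets pushed, shared
        self.depth = stack_depth

    def forward(self, x, h, stack):
        # x, h: (batch, dim); stack: (batch, depth, dim)
        h = self.cell(x + stack[:, 0], h)           # read the stack top
        a = torch.softmax(self.action(h), dim=-1)   # soft action weights
        pushed = torch.cat([self.to_stack(h).unsqueeze(1), stack[:, :-1]], dim=1)
        popped = torch.cat([stack[:, 1:], torch.zeros_like(stack[:, :1])], dim=1)
        # New stack is a convex combination of push, pop, and no-op.
        stack = (a[:, 0:1, None] * pushed
                 + a[:, 1:2, None] * popped
                 + a[:, 2:3, None] * stack)
        return h, stack

dim, batch = 8, 4
cell = StackAugmentedCell(dim)
h = torch.zeros(batch, dim)
stack = torch.zeros(batch, cell.depth, dim)
h, stack = cell(torch.randn(batch, dim), h, stack)
```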
End of conversation
New conversation
-
Thank you for this! Very cool! Have you tried comparing this to HyperNets? https://blog.singularitynet.io/just-deep-is-too-flat-b3813e2242f1 It seems like both methods try to generalize the usability of neural nets. The code for HyperNets is linked at the bottom of the article.
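If "HyperNets" is taken in the common sense of a hypernetwork (Ha et al., 2016), a small network generates the weights of a target network; whether the linked article uses exactly this formulation is an assumption here. A minimal sketch of that reading:

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A linear layer whose weights are emitted by a small hypernetwork
    conditioned on an embedding z. This is a sketch of hypernetworks in
    the sense of Ha et al. (2016); the linked blog post may define
    HyperNets differently."""
    def __init__(self, in_dim, out_dim, z_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Generates in_dim * out_dim weights plus out_dim biases.
        self.hyper = nn.Linear(z_dim, in_dim * out_dim + out_dim)

    def forward(self, x, z):
        params = self.hyper(z)
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim:]
        return x @ W.t() + b

layer = HyperLinear(in_dim=8, out_dim=4, z_dim=3)
x, z = torch.randn(2, 8), torch.randn(3)
y = layer(x, z)  # (2, 4); changing z changes the effective layer weights
```

The connection to the thread is that both approaches reuse a compact set of parameters to produce behavior for many situations, rather than fitting one fixed function.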
-
What do you think about information maximization over this memory-augmented model? Borrowing ideas from: https://www.linkedin.com/posts/adrian-b-690b61101_really-interesting-work-done-at-facebookai-activity-6597933652405952512-ICN0 They applied it to language models, but I think it could be extrapolated to other areas.
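The work behind the linked post isn't identifiable from the URL alone, so as a hedge, one common reading of "information maximization" is an InfoNCE-style auxiliary loss (van den Oord et al., 2018) that maximizes mutual information between, say, memory read-outs and representations of future inputs. A sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def infonce_loss(readouts, targets, temperature=0.1):
    """InfoNCE lower bound on the mutual information between memory
    read-outs and target representations. Matching (readout_i, target_i)
    pairs are positives; all other pairings in the batch are negatives."""
    readouts = F.normalize(readouts, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = readouts @ targets.t() / temperature   # (batch, batch) similarities
    labels = torch.arange(readouts.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Hypothetical usage: encourage memory states to be predictive of
# encoded future context, as an auxiliary term next to the task loss.
mem = torch.randn(32, 64)   # memory read-outs
fut = torch.randn(32, 64)   # encoded future context
loss = infonce_loss(mem, fut)
```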
-
Exactly my thoughts