A black-box neural network is not doing symbolic reasoning. We raised these concerns in the OpenReview discussion of this paper: https://openreview.net/forum?id=S1eZYeHFDS Sadly, there is too much dogma and not enough scientific method in #DeepLearning. You cannot claim to be learning reasoning by overfitting. @GaryMarcus https://twitter.com/ylecun/status/1231346676285280256
Looks interesting! Where is this published? Nice to see TreeRNNs, TreeLSTMs and your extensions evaluated on these math problems. Perhaps of interest: our 2016 paper looked at arguably the simplest of such problems, add/subtract, in TreeRNNs & GRUs: https://dare.uva.nl/search?identifier=1b8c30c9-1ef6-4c29-9a44-583cce58f507
The goal was to evaluate compositional generalization in these models to greater depth, and to define diagnostic probes to learn what's going on inside. Zhu et al. & Le & Zuidema also published papers on the TreeLSTM in the spring of 2015: https://twitter.com/wzuidema/status/1217143691074248707
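To make the setup concrete, here is a minimal sketch of TreeRNN-style recursive composition over add/subtract expression trees. The dimensions, random initialization, and tanh composition are assumptions for illustration, not the architecture from the cited paper; the point is only that the same weights are reused at every depth, which is what depth-generalization tests probe.

```python
import numpy as np

# Illustrative sketch only: TreeRNN-style composition for add/subtract
# expressions. Dimensions and the tanh composition are assumptions.
DIM = 8
rng = np.random.default_rng(0)

# One embedding per digit leaf, one composition matrix per operator.
digit_emb = {str(d): rng.normal(size=DIM) for d in range(10)}
op_weights = {
    "+": rng.normal(size=(DIM, 2 * DIM)),
    "-": rng.normal(size=(DIM, 2 * DIM)),
}

def encode(tree):
    """Recursively encode a tree: a digit string (leaf) or (op, left, right)."""
    if isinstance(tree, str):          # leaf: look up the digit embedding
        return digit_emb[tree]
    op, left, right = tree             # internal node: compose children
    children = np.concatenate([encode(left), encode(right)])
    return np.tanh(op_weights[op] @ children)

# Deeper nesting reuses the same operator weights; training on shallow
# trees and testing on deep ones is the compositional-generalization test.
shallow = encode(("+", "1", "2"))
deep = encode(("-", ("+", "1", "2"), ("+", "3", "4")))
print(shallow.shape, deep.shape)
```

A diagnostic probe in this setting would be a small classifier trained on these intermediate vectors to check whether, say, the running sum is linearly decodable at each node.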
End of conversation
New conversation
They show (Table 6) that their model doesn't generalize across generators (e.g., accuracy drops from 93.6% to 10.9%). In Appendix E they attribute this to the model learning to rely on data artifacts (e.g., sequence length), a result of doing symbolic math without symbol manipulation.
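A toy sketch of the failure mode being described: a "model" that learned only a sequence-length artifact can score near-perfectly under one data generator and collapse to chance under another. The two generators and the length rule below are made up for the demonstration and are not the paper's actual data.

```python
# Illustrative only: spurious-feature reliance, not the paper's experiment.
import random

random.seed(0)

def generator_a():
    # Artifact: positive examples happen to be longer than negative ones.
    label = random.randint(0, 1)
    length = random.randint(8, 12) if label else random.randint(3, 7)
    return "x" * length, label

def generator_b():
    # Same labels, but length no longer correlates with them.
    label = random.randint(0, 1)
    length = random.randint(3, 12)
    return "x" * length, label

def length_model(seq):
    # A model that learned only the artifact: predict 1 iff the input is long.
    return 1 if len(seq) >= 8 else 0

def accuracy(gen, n=2000):
    data = [gen() for _ in range(n)]
    return sum(length_model(s) == y for s, y in data) / n

print(f"generator A: {accuracy(generator_a):.1%}")  # near 100%
print(f"generator B: {accuracy(generator_b):.1%}")  # near chance
```

Nothing about the task was learned; only a correlation specific to generator A, which is why the cross-generator evaluation exposes it.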
I recall that lots of my classmates in school were doing the same thing :)
End of conversation