tree structure, or compositionality, at least under reasonable definitions. Not reasonable: picking your favorite symbolic system, checking whether a NN learns exactly that, and interpreting failure as evidence that the whole class cannot be learned.
I think our results are consistent with Brenden's, but Brenden et al., at least in their earlier results, (i) emphasized the negative results with NNs, and (ii) used training conditions less favorable for generalization to longer expressions.
In particular, in Veldhoen et al. (2016) we encourage generalization to longer expressions by withholding some lengths at train time (i.e., we train on lengths 1, 2, 4, 5, and 7, and test also on lengths 3, 6, 8, and 9; see the sketch below). In Alhama & Zuidema (2018, JAIR), we use 'incremental novelty exposure'.
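As a rough illustration of that split, here is a minimal sketch assuming single-digit numerals as the unit of expression length; the helper names (expr_length, split_by_length) and the seen_test_fraction parameter are hypothetical, not Veldhoen et al.'s actual code.

```python
# Hypothetical sketch of the length-based holdout described above: train
# only on expressions of lengths 1, 2, 4, 5, 7; test on all lengths, so
# lengths 3, 6, 8, 9 are entirely novel at test time. All names here are
# illustrative, not taken from Veldhoen et al. (2016).
import random
from typing import Iterable, List, Tuple

TRAIN_LENGTHS = {1, 2, 4, 5, 7}    # lengths the model may see in training
HELDOUT_LENGTHS = {3, 6, 8, 9}     # lengths withheld from training entirely

Example = Tuple[str, float]        # (expression, target value)


def expr_length(expr: str) -> int:
    """Toy length measure: number of (single-digit) numerals in the expression."""
    return sum(ch.isdigit() for ch in expr)


def split_by_length(
    data: Iterable[Example],
    seen_test_fraction: float = 0.1,
    seed: int = 0,
) -> Tuple[List[Example], List[Example]]:
    """Split examples so that HELDOUT_LENGTHS never occur in training."""
    rng = random.Random(seed)
    train: List[Example] = []
    test: List[Example] = []
    for ex in data:
        n = expr_length(ex[0])
        if n in HELDOUT_LENGTHS:
            test.append(ex)        # novel lengths: test only
        elif n in TRAIN_LENGTHS:
            # seen lengths: mostly train, plus a slice for in-distribution testing
            (test if rng.random() < seen_test_fraction else train).append(ex)
        # expressions of any other length are dropped
    return train, test


if __name__ == "__main__":
    data = [("1", 1.0), ("1+2", 3.0), ("1+2-3", 0.0), ("1+2-3+4", 4.0)]
    train, test = split_by_length(data)
    print(f"{len(train)} train / {len(test)} test examples")
```

Note that the withheld lengths include both intermediate values (3, 6) and lengths beyond the training maximum (8, 9), so the test probes interpolation as well as extrapolation to longer expressions.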
New conversation
Agree, though from our work we can't really conclude whether innateness is the answer (what we call pre-wiring could be "pre-learned", i.e., learned from some other source of input prior to the task; we just know it helps if the wiring is there).