Another blunder for simple neural nets, this time in language, _exactly_ as anticipated by Marcus 2001. Sure, it can be remedied, but it casts a clear light on a serious underlying problem that @fchollet and I have been trying to call attention to. By @bpricket https://scholarworks.umass.edu/ics_owplinguist/2/
This leads to an actual limitation of seq2seq models: you can't train more hidden units without a larger (real) dataset. I really do believe this problem (learning a representation that generalizes reduplication) is solvable with seq2seq + enough data.
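For context, the generalization task under discussion can be sketched in a few lines. This is a hypothetical illustration of the setup, not Prickett's actual code: total reduplication maps a stem to a copy of itself (e.g. "ba" → "baba"), and the hard test split withholds one segment entirely, so a model cannot succeed by memorizing segment-level mappings and must generalize the copying operation to a novel symbol.

```python
# Hypothetical sketch of a reduplication generalization split
# (illustrative only; segment inventory and stem length are assumptions).
import itertools

def reduplicate(stem: str) -> str:
    """Total reduplication: the output is the stem copied twice."""
    return stem + stem

def make_splits(segments, held_out, stem_len=2):
    """Put every stem containing the held-out segment in the test set."""
    train, test = [], []
    for stem_tuple in itertools.product(segments, repeat=stem_len):
        stem = "".join(stem_tuple)
        pair = (stem, reduplicate(stem))
        if held_out in stem:
            test.append(pair)   # novel segment: never seen in training
        else:
            train.append(pair)
    return train, test

segments = list("ptkbdg")
train, test = make_splits(segments, held_out="g")

# "g" appears in every test stem and in no training stem, so rote
# memorization of seen input-output pairs cannot solve the test set.
assert all("g" in stem for stem, _ in test)
assert all("g" not in stem for stem, _ in train)
```

A model that learns an abstract "copy the input" operation passes this split trivially; one that learns segment-specific associations does not, which is the gap the thread is debating.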