Turns out one of the top-performing models on SNLI (>86% accuracy) is essentially a bag-of-words model. Interesting. http://arxiv.org/pdf/1606.01933v1.pdf
Replying to @yoavgo
I have a pure BOW sentence encoder for SNLI that gets ~82.5% accuracy and beats LSTM and tree-CNN models. Will release it as a Keras example + blog post.
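The released example isn't in the thread, but here is a minimal sketch of what such a pure bag-of-words pair classifier for SNLI could look like in Keras. The pooling choice, vocabulary size, and layer widths are illustrative assumptions, not the author's settings:

```python
# Minimal sketch (not the released code) of a pure bag-of-words
# sentence-pair model for SNLI: each sentence is the mean of its word
# embeddings, the two sentence vectors are concatenated, and a small
# MLP predicts entailment / neutral / contradiction.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # assumed vocabulary size
EMBED_DIM = 300      # assumed embedding dimension
MAX_LEN = 42         # assumed max tokens per sentence
NUM_CLASSES = 3      # entailment / neutral / contradiction

# Shared embedding + mean pooling. Pooling discards word order
# entirely, which is what makes this a bag-of-words encoder.
embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)
pool = layers.GlobalAveragePooling1D()

premise = keras.Input(shape=(MAX_LEN,), dtype="int32", name="premise")
hypothesis = keras.Input(shape=(MAX_LEN,), dtype="int32", name="hypothesis")

p_vec = pool(embed(premise))
h_vec = pool(embed(hypothesis))

# Combine the two sentence vectors and classify with a small MLP.
merged = layers.Concatenate()([p_vec, h_vec])
hidden = layers.Dense(300, activation="relu")(merged)
output = layers.Dense(NUM_CLASSES, activation="softmax")(hidden)

model = keras.Model([premise, hypothesis], output)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Since the encoder only averages word embeddings, word order is invisible to the model; everything it learns comes from which words appear in each sentence.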
There's a strong need for revisiting baselines rather than relying on an older paper to have done it well, especially as techniques improve.
I've found shallow CNNs or even logreg to frequently outperform RNNs with attention on text classification too...
An RNN with attention is total overkill for most text categorization. And it won't work well.
Agreed, yet some fairly big names in DL NLP rely on it reflexively for sentence-pair scoring or text classification.
4:26 PM - 3 Sep 2016
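A minimal sketch of the kind of shallow baseline the exchange above refers to: TF-IDF features plus logistic regression in scikit-learn. The toy data and settings are illustrative, not from the thread:

```python
# Minimal sketch of a shallow text-classification baseline:
# TF-IDF n-gram features fed to logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; real use would load a labeled corpus.
train_texts = ["great movie, loved it", "terrible plot, boring acting"]
train_labels = [1, 0]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # unigrams + bigrams
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["loved the acting"]))
```

With word or character n-grams and a tuned regularization strength, pipelines like this are often hard to beat on plain text classification, which is the point being made in the thread.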