Turns out one of the top-performing models on SNLI (>86% accuracy) is essentially a bag-of-words model. Interesting. http://arxiv.org/pdf/1606.01933v1.pdf
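For context on the "essentially bag-of-words" point: the linked model (decomposable attention, Parikh et al. 2016), in its base form without the optional intra-sentence attention, only embeds words, attends between the two sentences, and sums the results, so no step depends on word order. A rough NumPy sketch of that structure (the two-ReLU-layer feed-forward nets, sizes, and random parameters are my simplifications, not the paper's exact setup):

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Position-wise feed-forward net with two ReLU layers (a simplification).
    return np.maximum(0, np.maximum(0, x @ W1 + b1) @ W2 + b2)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b, params):
    # a: (len_a, d) premise embeddings; b: (len_b, d) hypothesis embeddings.
    # Every step is position-wise or a sum, so permuting the words of
    # either sentence leaves the output unchanged: "essentially BOW".
    e = mlp(a, *params["F"]) @ mlp(b, *params["F"]).T   # attend: (len_a, len_b)
    beta = softmax(e, axis=1) @ b                       # soft-align b to each a_i
    alpha = softmax(e, axis=0).T @ a                    # soft-align a to each b_j
    v1 = mlp(np.concatenate([a, beta], axis=1), *params["G"])   # compare
    v2 = mlp(np.concatenate([b, alpha], axis=1), *params["G"])
    v = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)])        # aggregate
    return mlp(v, *params["H"])                         # 3-way class scores

# Random toy parameters, just to show the shapes line up.
rng = np.random.default_rng(0)
d, h = 4, 8
def rand_mlp(d_in, d_hid, d_out):
    return (rng.normal(size=(d_in, d_hid)), np.zeros(d_hid),
            rng.normal(size=(d_hid, d_out)), np.zeros(d_out))

params = {"F": rand_mlp(d, h, h), "G": rand_mlp(2 * d, h, h),
          "H": rand_mlp(2 * h, h, 3)}
a, b = rng.normal(size=(5, d)), rng.normal(size=(7, d))
print(decomposable_attention(a, b, params))  # three class scores
```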
Replying to @yoavgo
I have a pure BOW sentence encoder for SNLI that gets ~82.5% and beats the LSTM and tree CNN baselines. Will release it as a Keras example + blog post.
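The tweet doesn't say what the encoder looks like. A minimal sketch of one way to build a pure-BOW SNLI model in Keras, assuming mean-pooled shared embeddings and the common concat/difference/product merge (all hyperparameter values below are placeholders, not the author's):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Placeholder hyperparameters -- the tweet specifies none of these.
VOCAB_SIZE = 20000
EMBED_DIM = 300
MAX_LEN = 42
NUM_CLASSES = 3  # entailment / contradiction / neutral

def build_bow_snli_model():
    premise = layers.Input(shape=(MAX_LEN,), dtype="int32", name="premise")
    hypothesis = layers.Input(shape=(MAX_LEN,), dtype="int32", name="hypothesis")

    # Shared embedding; mean-pooling over time discards word order: pure BOW.
    embed = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)
    pool = layers.GlobalAveragePooling1D()
    p = pool(embed(premise))
    h = pool(embed(hypothesis))

    # Common SNLI merge: both encodings plus their difference and product.
    merged = layers.Concatenate()(
        [p, h, layers.Subtract()([p, h]), layers.Multiply()([p, h])])
    x = layers.Dense(300, activation="relu")(merged)
    x = layers.Dropout(0.2)(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = Model([premise, hypothesis], out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With no recurrence or convolution anywhere, the only trainable parts are the embeddings and the classifier MLP, which is what makes a score in the low 80s notable.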
There's a strong need to revisit baselines rather than relying on an older paper to have done it well, especially as techniques improve.
I've found shallow CNNs or even logistic regression frequently outperform RNNs with attention on text classification too...
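As a concrete example of such a baseline, here's a TF-IDF + logistic regression classifier in scikit-learn (the toy corpus and hyperparameters are placeholders; a shallow CNN variant would swap this pipeline for a single Conv1D-plus-pooling Keras model):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus; substitute any text-classification dataset.
train_texts = ["a great, fun film", "boring and predictable",
               "loved every minute", "a dull mess"]
train_labels = [1, 0, 1, 0]

# TF-IDF unigrams + bigrams feeding logistic regression: a cheap,
# often surprisingly strong baseline worth running before any RNN.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)
print(baseline.predict(["what a fun film", "such a dull plot"]))
```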
Yet most people don't even try these; they jump directly to an LSTM or GRU. This is a consequence of DL hype.
4:05 PM - 3 Sep 2016