New blog post: using pre-trained word embeddings in Keras to solve a text classification problem http://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
Feel free to run the code on the "official" train/test split. I think it's likely to be better than what Andrew & Quoc tried.
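For readers without the post open: below is a minimal sketch of the technique announced above, loading pre-trained GloVe vectors into a frozen Keras Embedding layer. The file path, placeholder corpus, and dimensions are assumptions for illustration, not the post's exact values.

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.layers import Embedding

EMBEDDING_DIM = 100         # assumed; matches the glove.6B.100d file below
MAX_SEQUENCE_LENGTH = 1000  # assumed padding length

# Build a vocabulary from a (placeholder) corpus.
texts = ['first example document', 'second example document']
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
word_index = tokenizer.word_index

# Parse the GloVe file (hypothetical path) into a {word: vector} dict.
embeddings_index = {}
with open('glove.6B.100d.txt') as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# Copy known vectors into the embedding matrix; unknown words stay all-zero.
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector

# trainable=False freezes the pre-trained vectors during training.
embedding_layer = Embedding(len(word_index) + 1,
                            EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)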
-
A deep convnet with global max pooling is a very strong sequence classification model; it generally outperforms LSTM+attention.
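As a rough sketch of what that architecture looks like in Keras (vocabulary size, sequence length, and filter counts are assumptions, not a tuned model): stacked 1D convolutions, then global max pooling to collapse the time axis to one value per filter.

from keras.models import Sequential
from keras.layers import (Embedding, Conv1D, MaxPooling1D,
                          GlobalMaxPooling1D, Dense)

model = Sequential()
model.add(Embedding(20000, 128, input_length=500))  # assumed vocab/length
model.add(Conv1D(128, 5, activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(128, 5, activation='relu'))
model.add(GlobalMaxPooling1D())  # one max activation per filter, no time axis
model.add(Dense(128, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # binary sequence classification
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              metrics=['accuracy'])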
-
LSTM is too empirical; it's a miracle that it works at all...
New conversation -
In fact, running your code, I am getting 95% even without pre-trained vecs! (Theano backend)
-
Try it on the official split; I'd be curious to see the results. Btw, pre-trained vecs would at least speed up training.