Intuitively, a stronger logreg baseline should get above 92%; a gradient boosted trees + logreg ensemble should land even higher.
Keras logreg with 0.5 dropout, on the top 70k 4-grams selected via sklearn's f_classif score function. Binary vectorization. After 3 epochs.
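A minimal sklearn-only sketch of that pipeline (binary n-gram vectorization + f_classif feature selection). Plain LogisticRegression stands in for the Keras logreg-with-dropout, and the toy corpus, `k`, and n-gram range are placeholders, not the 70k-feature IMDB setup:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-in corpus; the tweet's setup is the full IMDB sentiment dataset.
docs = [
    "a great and moving film",
    "terrible plot and wooden acting",
    "loved it, great fun",
    "boring, bad, a waste of time",
    "great cast, great script",
    "bad film, terrible pacing",
]
labels = [1, 0, 1, 0, 1, 0]

pipe = Pipeline([
    # Binary (presence/absence) vectorization over 1- to 4-grams.
    ("vec", CountVectorizer(ngram_range=(1, 4), binary=True)),
    # Keep the k highest-scoring features by ANOVA F-score (tweet uses k=70000).
    ("select", SelectKBest(f_classif, k=20)),
    # Stand-in for the Keras single-layer logreg with dropout.
    ("clf", LogisticRegression()),
])
pipe.fit(docs, labels)
print(pipe.predict(["great film", "terrible film"]))
```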
Some training runs get to 91.6. This is single-model. No parameter was tuned based on test scores, at all.
End of conversation
New conversation
I've not heard it referred to as ACL-IMDB, but assuming it's IMDB sentiment, recent discussion and numbers + an NBSVM comparison: https://twitter.com/Smerity/status/905864837196013568
The 92.3 baseline that @jeremyphoward hit with what we'll term NBSVM++ is a good comparison. SotA is complicated: https://twitter.com/Smerity/status/905869890887618560
New conversation
Maybe make the problem more interesting... something like normalising by mAh usage, as if running on a phone?
Are there even remotely accurate models of this? I.e. forward pass (in terms of matmuls, adds, memory accesses, etc.) -> device mAh?
@petewarden ?
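For a sense of what such a model would look like, a back-of-envelope sketch. Every constant here is an illustrative assumption, not a measured value, and it ignores memory access energy, which in practice often dominates on-device:

```python
# Map a forward pass's multiply-accumulates (MACs) to battery drain in mAh.
# All three constants are assumed ballpark figures for illustration only.
MACS_PER_INFERENCE = 500e6   # assumed: a smallish mobile model
JOULES_PER_MAC = 2e-12       # assumed: ~2 pJ/MAC on a mobile SoC
BATTERY_VOLTAGE = 3.7        # typical Li-ion nominal voltage

def inference_mah(macs=MACS_PER_INFERENCE,
                  j_per_mac=JOULES_PER_MAC,
                  volts=BATTERY_VOLTAGE):
    joules = macs * j_per_mac      # energy per forward pass (compute only)
    coulombs = joules / volts      # charge drawn: Q = E / V
    return coulombs * 1000 / 3600  # coulombs -> milliamp-hours

print(f"{inference_mah():.9f} mAh per inference")
```

The big omission is DRAM traffic: a main-memory access can cost orders of magnitude more energy than a MAC, so a compute-only estimate like this is a lower bound at best.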
End of conversation