This applies to machine learning: remember that your model's metrics rest on two very strong assumptions: 1) the data distribution is static, and 2) your test data is representative of the real world. If either assumption fails, your 99% accurate model won't be 99% accurate in production.
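A minimal sketch of that failure mode on purely synthetic data (everything below is invented for illustration): the classifier looks near-perfect on a held-out test set drawn from the training distribution, then degrades once the relationship it learned drifts in "production".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, boundary=0.0):
    # Synthetic binary task; `boundary` lets the concept drift over time.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > boundary).astype(int)
    return X, y

X_train, y_train = sample(5000)
X_test, y_test = sample(2000)                # same distribution as training
X_prod, y_prod = sample(2000, boundary=1.0)  # the world has drifted

clf = LogisticRegression().fit(X_train, y_train)
print("held-out test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("'production' accuracy: ", accuracy_score(y_prod, clf.predict(X_prod)))
```

The held-out score is genuinely earned, yet tells you nothing about the drifted data, because both assumptions above were baked into how the test set was drawn.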
-
This Tweet is unavailable.
-
Good point by @fchollet. Even if you exclude the false 95% and 99% accuracy claims produced by poor rigor, methodology, or fraud, many algorithms fail to account for uncertainties or errors in input data. Physicists seem much better at thinking about and adjusting for this.
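A sketch of one way to adjust for input uncertainty, in the error-propagation spirit this tweet alludes to; `model` and the noise scales `sigma` here are hypothetical stand-ins you would replace with your own model and measured error bars:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Stand-in for any trained model's point prediction (hypothetical).
    return 3.0 * x[0] - 2.0 * x[1] ** 2

x_measured = np.array([1.2, 0.8])
sigma = np.array([0.05, 0.10])  # assumed measurement error per input

# Monte Carlo error propagation: re-run the model on inputs perturbed
# according to their measurement uncertainty, then look at the spread.
samples = rng.normal(loc=x_measured, scale=sigma, size=(10_000, 2))
preds = np.array([model(s) for s in samples])

print(f"prediction: {model(x_measured):.3f} "
      f"+/- {preds.std():.3f} (from input uncertainty alone)")
```

A model that reports only the point prediction silently claims `sigma = 0` for its inputs, which is the gap being pointed out here.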
-
So what do you do when you need a model to detect anomalies that happen in 1% (or less) of real-world cases?
-
Focus on sensitivity instead of accuracy
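A minimal sketch of why, with synthetic labels: at 1% prevalence, a model that never flags anything still scores ~99% accuracy while catching zero anomalies, which is exactly what sensitivity (recall) exposes.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(2)

# 1% of cases are anomalies (label 1), as in the question above.
y_true = (rng.random(100_000) < 0.01).astype(int)

# A useless model that never flags anything still looks great on accuracy.
y_pred = np.zeros_like(y_true)

print("accuracy:   ", accuracy_score(y_true, y_pred))  # ~0.99
print("sensitivity:", recall_score(y_true, y_pred))    # 0.0
```

Precision matters too if false alarms are costly, but on rare-event problems accuracy alone is close to meaningless.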
-
as certainty tends to 1, metacertainty tends to 0
-
Ooh, looks like somebody's subtweeting 538
-
Perhaps in general, but 538 do check their accuracy after the fact, and do remarkably well: https://projects.fivethirtyeight.com/checking-our-work/ [image: pic.twitter.com/exfgkCil3O]
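"Checking our work" here means measuring calibration after the fact: bin past forecasts by predicted probability and see how often each bin's events actually happened. A sketch of the mechanics on simulated forecast records (the data is generated to be perfectly calibrated, purely for illustration; real records would come from a forecast archive):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated forecast records: predicted probability and actual outcome.
p_forecast = rng.random(50_000)
outcome = (rng.random(50_000) < p_forecast).astype(int)

# Bucket forecasts by predicted probability and compare each bucket's
# stated probability to the observed frequency of the event.
bins = np.linspace(0.0, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (p_forecast >= lo) & (p_forecast < hi)
    print(f"forecast {lo:.0%}-{hi:.0%}: happened "
          f"{outcome[mask].mean():.1%} of the time (n={mask.sum()})")
```

A well-calibrated forecaster's "70%" events happen about 70% of the time, which is a claim you can only verify retrospectively, as 538 does at the link above.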