What if you just have a very simple problem?
My classifier predicts whether the sun will rise in the east or the west with 99.97% accuracy. Submitting to Nature as we speak.
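The joke works because a problem that one-sided needs no model at all: a baseline that always predicts the majority class already scores the same headline number. A minimal sketch of that comparison, using fabricated labels and scikit-learn's DummyClassifier (the 99.97% class split is assumed purely for illustration):

```python
# Minimal sketch: on a lopsided problem, a majority-class baseline already
# reaches ~99.97% accuracy, so the headline number says nothing about the model.
# The data below are fabricated purely for illustration.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 100_000
y = np.where(rng.random(n) < 0.9997, "east", "west")  # ~99.97% labelled "east"
X = rng.normal(size=(n, 3))                            # irrelevant noise features

baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print(accuracy_score(y, baseline.predict(X)))          # ~0.9997
```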
New conversation
Anything less than or equal to 98% is acceptable.
It's fairly easy to get over 99% accuracy on well-studied data sets such as Dogs vs. Cats. Nonetheless, I also think your statement is 99% accurate.
Well studied = over-fitting. Anything curated is probably not reproducible in terms of data generation.
New conversation
My classifier can tell what gender the pregnant person was assigned at birth with waaay greater than 99% accuracy.
Definitely a big red flag. One common problem I've seen people run into is when training on newly constructed datasets, duplicate samples end up making their way into both training and testing sets. Need to be extra vigilant when working with new data.
But I also see a tendency to remove outliers (with dubious methodology) so as to increase the model's reported performance.
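A quick way to catch the duplicate-leakage problem mentioned above is to hash every sample and check whether any hash shows up in both splits. A minimal sketch, assuming exact (byte-for-byte) duplicates are the concern and that the arrays fit in memory; the split at the bottom is hypothetical and deliberately overlapping:

```python
# Sketch of a duplicate-leakage check: flag test samples whose raw bytes also
# occur in the training split. Only catches exact duplicates; near-duplicates
# (re-encoded images, re-scraped text) need fuzzier hashing.
import hashlib
import numpy as np

def row_hashes(X):
    """Hash each row's raw bytes so identical samples map to the same digest."""
    return {hashlib.sha1(row.tobytes()).hexdigest() for row in X}

def leaked_fraction(X_train, X_test):
    """Fraction of test rows that also appear, byte-for-byte, in the training set."""
    train_hashes = row_hashes(X_train)
    test_hashes = [hashlib.sha1(row.tobytes()).hexdigest() for row in X_test]
    return sum(h in train_hashes for h in test_hashes) / len(X_test)

# Hypothetical, deliberately broken split: rows 200-799 land in both halves.
X = np.random.default_rng(0).integers(0, 5, size=(1_000, 10))
X_train, X_test = X[:800], X[200:]
print(f"{leaked_fraction(X_train, X_test):.1%} of test rows also appear in training")
```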
New conversation
My classifier says Impeach like 100% of the time. What does that give?
If an impossibly high metric is able to inform you of overfitting, would you not consider that "informative"?