If you modify a complex piece of code and your tests pass on the first try, you should immediately break the code in an obvious way and rerun the tests, to check that you're actually testing what you think you're testing.
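A minimal sketch of this sanity check (the `median` function and its test are hypothetical stand-ins, not from the thread): after the real test passes, swap in an obviously broken version and confirm the same test now fails.

```python
def median(xs):
    """Median of a non-empty list of numbers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def broken_median(xs):
    """Deliberately wrong: returns the minimum instead of the median."""
    return sorted(xs)[0]

# The real test passes.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5

# Sanity check: the same assertion must FAIL against the broken version.
# If it still passed, the test wouldn't be exercising the code at all.
try:
    assert broken_median([3, 1, 2]) == 2
    raise RuntimeError("test still passes -- it isn't testing what you think")
except AssertionError:
    pass  # good: the test really does depend on the code's behavior
```

This is essentially manual mutation testing: a test that cannot fail is not a test.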
-
-
The best accuracy achievable without looking at the inputs (i.e. just by learning the label distribution) can be a pretty good initial baseline for a difficult problem.
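One way to sketch this baseline (helper name is my own, not from the thread) is a majority-class predictor: learn only the most common training label and predict it for everything.

```python
from collections import Counter

def majority_baseline_accuracy(y_train, y_test):
    """Accuracy of always predicting the most common training label."""
    majority_label, _ = Counter(y_train).most_common(1)[0]
    correct = sum(1 for y in y_test if y == majority_label)
    return correct / len(y_test)

# Example: a 90/10 imbalanced label distribution.
y_train = [0] * 90 + [1] * 10
y_test = [0] * 45 + [1] * 5
print(majority_baseline_accuracy(y_train, y_test))  # 0.9
```

Any model that can't beat this number hasn't learned anything from the inputs.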
-
-
-
I do the equivalent of this with plain old regression as part of my model diagnostics, and I also compare against a trivial classifier. Worth the effort: a predictive model for apnea later predicted 6/24 infants developing apnea who would otherwise have been sent home from the ED.
-
-
-
In one of my machine learning class assignments, I implemented a Naive Bayes classifier and got 100% accuracy. I was shocked. Then I realized I was only testing the first example 1,000 times.
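The bug described above is easy to reproduce; a hypothetical sketch (the toy `predict` function and data are my own, chosen only to show the evaluation-loop mistake):

```python
# Toy dataset: (features, label) pairs.
examples = [([1.0, 2.0], 1), ([0.5, 0.1], 0), ([2.5, 0.1], 0)]

def predict(features):
    """Stand-in classifier; its logic is not the point of the example."""
    return 1 if sum(features) > 2 else 0

# Buggy evaluation: indexes example 0 on every iteration.
correct = sum(
    predict(examples[0][0]) == examples[0][1] for _ in range(len(examples))
)
buggy_accuracy = correct / len(examples)  # 1.0 -- suspiciously perfect

# Fixed evaluation: iterate over all examples.
correct = sum(predict(x) == y for x, y in examples)
accuracy = correct / len(examples)  # 2/3 -- the real number
```

Suspiciously perfect accuracy is exactly the case where the "break it on purpose" check in the original tweet pays off.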
-
-
-
Smiles... I was analysing supermarket sales and had handled all the missing values, yet I still see NaN in my dataset while isnull().sum() returns 0.
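One common cause of this symptom (an assumption about the reply above, not something it confirms) is that the cells hold the literal string "nan" rather than a real missing value, so they display as NaN but `isnull()` doesn't count them:

```python
import pandas as pd

# The middle cell is the STRING "nan", not an actual missing value.
s = pd.Series(["12.5", "nan", "7.0"])
print(s.isnull().sum())  # 0 -- strings are never "missing"

# Coercing to numeric turns unparseable strings into real NaN:
s_num = pd.to_numeric(s, errors="coerce")
print(s_num.isnull().sum())  # 1 -- now detected
```

After coercion, the usual fillna/dropna handling works as expected.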
-
-
-
Sorry if this seems like a trivial question, but could you please tell me why we might get high accuracy even after changing the test inputs? Could it be because of a class imbalance problem?
-