If you modify a complex piece of code, and your tests pass on the first try, you should immediately proceed to break the code in an obvious way and rerun the tests, to check that you're actually testing what you think you're testing.
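The advice above can be sketched as a tiny self-contained drill (the function and test names here are made up for illustration): write a passing test, then deliberately break the code and confirm the test actually fails.

```python
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

test_add()  # passes on the first try

def sanity_check():
    """Break `add` in an obvious way and verify the test catches it."""
    global add
    original = add
    add = lambda a, b: 0  # deliberately, obviously wrong
    try:
        test_add()
        caught = False  # the test passed against broken code: bad test!
    except AssertionError:
        caught = True   # the test failed as it should
    finally:
        add = original  # restore the real implementation
    return caught

print(sanity_check())  # True: the test really exercises the code
```

If `sanity_check()` returned `False`, the test was green for the wrong reason, which is exactly the failure mode the tweet warns about.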
-
The label distribution can help you develop and evaluate naive classifier baselines that predict from class probabilities alone.
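As a minimal sketch of that idea: the strongest "dumb" baseline is usually predicting the most frequent label, and its accuracy falls straight out of the label distribution (the labels below are invented for illustration).

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the most frequent label."""
    counts = Counter(labels)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(labels)

# A 90/10 class split means "always predict neg" already scores 0.90,
# so a real classifier must clear that bar before it means anything.
labels = ["neg"] * 90 + ["pos"] * 10
print(majority_baseline_accuracy(labels))  # 0.9
```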
-
I did this to test my model, reasoning: "What if I input noise? How much of the performance is because the net memorized the most probable cluster?" (It was a regression problem.) I feel proud that the creator of Keras (I used Keras) thinks it's necessary to emphasize this.
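The noise check described above can be sketched roughly like this (the "model" here is a hypothetical stand-in, not a trained network): feed the regressor random noise and compare its predictions against those on real inputs. A memorizer that just learned the dominant output behaves identically on both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained regressor that ignores its input
# and always predicts the training-set mean (a pure "memorizer").
train_mean = 3.7
memorizer = lambda X: np.full(len(X), train_mean)

X_real = rng.normal(size=(100, 5))
X_noise = rng.normal(size=(100, 5))  # pure noise, same shape as real data

# If predictions on real data and on noise are indistinguishable, the
# model learned nothing input-dependent; its score comes from the
# output distribution, not from the features.
preds_real = memorizer(X_real)
preds_noise = memorizer(X_noise)
print(np.allclose(preds_real, preds_noise))  # True: red flag
```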
-
Hmm. Can this be made into an initializer somehow? To bias the weights feeding into the output layer?
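It can, at least for the bias term: a common trick is to initialize the final layer's bias so the untrained network outputs the class prior. For a sigmoid output this is `log(pos/neg)`, computed below in plain Python; the Keras wiring shown in the comment is a hedged sketch using the real `bias_initializer` argument and `Constant` initializer.

```python
import math

# Initializing the output bias to log(pos/neg) makes an untrained
# binary classifier predict the base rate, so early epochs aren't
# wasted learning the class prior from scratch.
pos, neg = 100, 900            # illustrative class counts
initial_bias = math.log(pos / neg)

# Sanity check: sigmoid(initial_bias) should equal the positive rate.
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
print(sigmoid(initial_bias))   # 0.1, i.e. 100 / (100 + 900)

# In Keras this would plug in roughly as:
#   keras.layers.Dense(1, activation="sigmoid",
#                      bias_initializer=keras.initializers.Constant(initial_bias))
```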
-
Good point. It's also worth suggesting metrics that are less sensitive to class balance/distribution: AUC, F1, equal error rate, etc.
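A small self-contained illustration of why such metrics matter (F1 computed by hand here to avoid any library assumptions): on a 95/5 imbalanced set, the always-predict-negative classifier looks great on accuracy but is exposed by F1.

```python
def f1_score_binary(y_true, y_pred):
    """F1 for binary labels 0/1, with the positive class = 1."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0  # no true positives: precision/recall are both 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 95/5 imbalance; the classifier predicts the majority class for everything.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy, f1_score_binary(y_true, y_pred))  # 0.95 0.0
```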