There's a lot of subtlety with class weighting and neural networks. A really neat empirical paper by @zacharylipton looked at it: https://arxiv.org/pdf/1812.03372.pdf
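For concreteness, here's a minimal sketch of what class weighting looks like in practice, using Keras's class_weight argument with inverse-frequency weights. The toy data, model, and sizes are placeholders, not from the paper:

```python
# Minimal class-weighting sketch in Keras; data and model are toy placeholders.
import numpy as np
import tensorflow as tf

# Toy imbalanced binary problem: ~5% positives.
x_train = np.random.randn(10_000, 20).astype("float32")
y_train = (np.random.rand(10_000) < 0.05).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Inverse-frequency weights: a mistake on the rare class costs ~19x more
# than one on the majority class (ratio w1/w0 = neg/pos).
neg, pos = np.bincount(y_train.astype(int))
class_weight = {0: (neg + pos) / (2.0 * neg), 1: (neg + pos) / (2.0 * pos)}

model.fit(x_train, y_train, batch_size=256, epochs=2,
          class_weight=class_weight)
```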
-
+1. Of course, as with anything, caveats exist because it can get tricky. Under-represented classes in training can deviate from the test (or real-world) distribution, by virtue of being under-represented. Weights need to be carefully calibrated, and tools that enable that calibration help.
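One way to read that caveat: if the deployment class prior differs from the training prior, the importance weight for class c is p_test(c) / p_train(c), not simply the inverse training frequency. A hypothetical sketch of that calibration (target_prior is a guess you must supply; nothing here comes from the linked paper):

```python
# Hypothetical calibration sketch: reweight classes toward an assumed
# deployment prior rather than the raw training prior.
import numpy as np

def calibrated_class_weights(y_train, target_prior):
    """Weight class c by target_prior[c] / train_prior[c], so the weighted
    training distribution matches the expected deployment distribution."""
    counts = np.bincount(y_train)
    train_prior = counts / counts.sum()
    return {c: target_prior[c] / train_prior[c] for c in range(len(counts))}

# Training data is 95/5, but we expect ~80/20 in the wild.
y_train = np.array([0] * 9_500 + [1] * 500)
print(calibrated_class_weights(y_train, target_prior=[0.8, 0.2]))
# -> {0: ~0.84, 1: ~4.0}
```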
-
This is such an essential concept for neural networks. With any type of medical dataset, you need to understand that much of the data is imbalanced, and you need to adjust for that appropriately in your #NeuralNetwork. @TensorFlow has an article on this: https://www.tensorflow.org/tutorials/structured_data/imbalanced_data
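Besides class_weight, that tutorial also initializes the final layer's bias to log(pos/neg), so the network starts out predicting the base rate instead of spending early epochs learning it. A rough sketch of that trick (counts and layer sizes below are made up):

```python
# Output-bias initialization for an imbalanced binary classifier,
# as described in the linked TensorFlow tutorial. Counts are illustrative.
import numpy as np
import tensorflow as tf

neg, pos = 9_900, 100          # example class counts
initial_bias = np.log(pos / neg)  # sigmoid(initial_bias) == pos / (pos + neg)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(
        1, activation="sigmoid",
        bias_initializer=tf.keras.initializers.Constant(initial_bias)),
])
```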
-
Understanding this is very important, as all "real-world" datasets are imbalanced.
-
Be careful though. If your batch size is small relative to the minority class frequency (i.e., many batches contain no minority samples at all), you will get irregular gradient magnitudes (e.g., small, small, small, huge gradient -> weight update).
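A quick back-of-envelope illustrates this: with minority frequency p and batch size B, a batch contains no minority samples with probability (1 - p)^B, and the occasional batch that does contain one carries a disproportionately large weighted loss. A toy NumPy simulation (all numbers illustrative only):

```python
# Toy numbers: minority frequency p, batch size B, minority class weight w.
import numpy as np

p, B, w = 0.005, 32, 100.0
print("P(batch has no minority sample) =", (1 - p) ** B)  # ~0.85

rng = np.random.default_rng(0)
minority_per_batch = rng.binomial(B, p, size=10)
# Total weighted loss mass per batch: majority samples count 1, minority w.
loss_mass = (B - minority_per_batch) * 1.0 + minority_per_batch * w
print(loss_mass)  # mostly 32.0, occasionally 131.0+ -> spiky gradient norms
```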
-
When working with images, modifying the sampling strategy (duplicate minority samples + image augmentations) is usually >>> class weighting. Faster convergence, less overfitting.
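A hedged sketch of that oversample-and-augment strategy with tf.data. The datasets below are random stand-ins for real images, and sample_from_datasets as a Dataset method assumes a recent TF (it lived under tf.data.experimental in older versions):

```python
# Oversample the minority class 50/50, then augment, so duplicated rare
# images are never pixel-identical across epochs. Data is a toy stand-in.
import tensorflow as tf

maj = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([950, 32, 32, 3]), tf.zeros([950], tf.int32)))
mino = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([50, 32, 32, 3]), tf.ones([50], tf.int32)))

def augment(image, label):
    # Random flips/brightness make minority duplicates differ per epoch.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

balanced = tf.data.Dataset.sample_from_datasets(
    [maj.repeat(), mino.repeat()], weights=[0.5, 0.5])
balanced = balanced.map(augment).batch(32).prefetch(tf.data.AUTOTUNE)
```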
-
Using a better loss often works better than class weighting, in my experience (for example, focal loss). Would be nice to read a comparative study of these things, if one exists.
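For reference, a minimal NumPy sketch of binary focal loss (Lin et al., 2017), which down-weights easy, confidently-classified examples instead of reweighting whole classes; recent TF also ships tf.keras.losses.BinaryFocalCrossentropy:

```python
# Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma, which
# suppresses the loss from easy examples (mostly the majority class).
import numpy as np

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    p_pred = np.clip(p_pred, eps, 1 - eps)
    p_t = np.where(y_true == 1, p_pred, 1 - p_pred)       # prob of true class
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)     # class balance term
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

y = np.array([1, 0, 0, 0])
p = np.array([0.3, 0.1, 0.05, 0.9])
print(focal_loss(y, p))  # hard examples dominate; easy negatives are ~0
```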

-
Would this work with a segmentation task?