Everything is a dog when you use ImageNet pretrained weights
-
You're more than right!!
-
Deep Dream was such a disappointment for me because of this. It was clear the net was so biased toward easily accessible data (dogs and birds).
-
Haha, aptly put. We learnt this the hard way @TuringIQ. Folks out there should know that pre-trained on ImageNet is not pre-trained on the ENTIRE ImageNet.
-
Also the problem is that for classifiers to work there has to be an equal number of training images per category, but in real life the proportions are different. To me it seems connected with catastrophic forgetting: for a network not to forget, we have to constantly feed it gradients.
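One common way to soften that imbalance constraint, instead of enforcing equal counts per class, is a class-weighted loss. A minimal PyTorch sketch; the class counts here are made-up numbers for illustration:

```python
import torch

# Hypothetical per-class image counts from an imbalanced training set.
class_counts = torch.tensor([5000.0, 500.0, 50.0])

# Inverse-frequency weights: rare classes contribute larger gradients,
# so the network keeps receiving gradient for them despite the skew.
weights = class_counts.sum() / (len(class_counts) * class_counts)

loss_fn = torch.nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)           # a batch of 8 predictions over 3 classes
targets = torch.randint(0, 3, (8,))  # ground-truth labels
loss = loss_fn(logits, targets)
```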
-
I understood that the models have been trained to recognize universal shapes (edges, corners, shadows) and can thus be retrained more easily to recognize other complex objects. Is that incorrect?
-
Yep. You can retrain the model, starting from pre-trained weights with differential learning rates, and achieve superb results: smaller learning rates for inner layers (edges, corners, shapes, etc.) and bigger learning rates for outer layers.
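A minimal PyTorch sketch of that differential learning-rate setup; the architecture, rates, and num_classes below are illustrative assumptions, not from the thread:

```python
import torch
from torchvision import models

# Load ImageNet-pretrained weights and swap the head for a new task
# (num_classes is a placeholder for your dataset).
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
num_classes = 10
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

# Smaller steps for early layers (generic edges, corners), larger steps
# for later layers and the freshly initialized head. The stem (conv1/bn1)
# is omitted from the optimizer, i.e. effectively frozen.
optimizer = torch.optim.Adam([
    {"params": model.layer1.parameters(), "lr": 1e-5},
    {"params": model.layer2.parameters(), "lr": 1e-5},
    {"params": model.layer3.parameters(), "lr": 1e-4},
    {"params": model.layer4.parameters(), "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
```

The same idea is known as discriminative learning rates in the fastai library.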
-
But aren't the first layers supposed to be learning really basic stuff like edges and blobs?
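Filter visualizations generally confirm this, and it is easy to check by inspecting the first convolutional layer of a pretrained network. A minimal sketch, assuming torchvision:

```python
import torch
from torchvision import models

# Grab an ImageNet-pretrained network and look at its first conv layer.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
filters = model.conv1.weight.detach()

# 64 RGB kernels of size 7x7; visualizing them typically shows oriented
# edge detectors and color blobs, as the question suggests.
print(filters.shape)  # torch.Size([64, 3, 7, 7])
```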