New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble
(The main focus, as you can probably tell from my post, was the annealing, not the use of the ensemble. I wanted to find the best possible hyper-parameters for a single net.)
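For flavor, a minimal sketch of what annealing over hyper-parameters can look like. The `evaluate` stub, the particular hyper-parameters (lr, weight_decay, hidden_units), and the cooling schedule are all illustrative assumptions, not the code from the post; the actual implementation is at the link above.

```python
import math
import random

def evaluate(hparams):
    """Hypothetical: train a single small net with `hparams` on the
    10-examples-per-digit training set and return validation accuracy."""
    raise NotImplementedError("plug in your own training / evaluation code")

def propose(hparams):
    """Perturb one hyper-parameter at random (log-scale for lr / weight decay)."""
    new = dict(hparams)
    key = random.choice(list(new))
    if key in ("lr", "weight_decay"):
        new[key] *= math.exp(random.gauss(0.0, 0.3))
    elif key == "hidden_units":
        new[key] = max(8, new[key] + random.choice([-16, 16]))
    return new

def anneal(init_hparams, steps=200, t0=0.05, t_min=0.001):
    """Simulated annealing over hyper-parameters: always accept improvements,
    accept worse settings with probability exp(delta / T), with T decaying
    geometrically over the run."""
    current, best = dict(init_hparams), dict(init_hparams)
    current_acc = best_acc = evaluate(current)
    for step in range(steps):
        t = max(t_min, t0 * (0.97 ** step))
        candidate = propose(current)
        acc = evaluate(candidate)
        if acc > current_acc or random.random() < math.exp((acc - current_acc) / t):
            current, current_acc = candidate, acc
            if acc > best_acc:
                best, best_acc = candidate, acc
    return best, best_acc

# Example call (hyper-parameter names and ranges are made up for illustration):
# best_hp, best_acc = anneal({"lr": 0.01, "weight_decay": 1e-4, "hidden_units": 64})
```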
-
Oh yeah, totally. It just happened that last night I was thinking of how to ensemble convnets and couldn't quite get my head around the idea, since, as I've said, I originally thought you only average very different models.
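Averaging an ensemble of similar convnets just means averaging their per-class softmax outputs. A rough, framework-agnostic sketch, where `predict_proba` is a hypothetical helper that returns a model's (n_samples, n_classes) probability array:

```python
import numpy as np

def ensemble_predict(models, x, predict_proba):
    """Average the class probabilities from several trained models.

    `models` is a list of trained nets (e.g. the same architecture trained
    from different random seeds or annealed hyper-parameters); `predict_proba`
    is assumed to return softmax outputs of shape (n_samples, n_classes)."""
    probs = np.mean([predict_proba(m, x) for m in models], axis=0)
    return probs.argmax(axis=1)
```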