New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble …
-
-
Yes, that's why I said use each digit as a mean and choose a covariance. No training, very easy to form a baseline classifier. What I meant was, as far as I remember, GMMs didn't perform that badly on MNIST.
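For concreteness, a minimal sketch of the kind of no-training baseline described here, assuming per-class mean images as the Gaussian means and a shared isotropic covariance (the function names and data layout are illustrative, not from the thread):

```python
import numpy as np

def class_means(train_x, train_y, num_classes=10):
    """Mean image per digit class; no iterative training involved."""
    return np.stack([train_x[train_y == c].mean(axis=0) for c in range(num_classes)])

def classify(x, means):
    """Label each image with the class of the nearest mean.

    Modelling each class as a Gaussian with a shared isotropic covariance
    sigma^2 * I makes the maximum-likelihood decision identical to
    nearest-mean in Euclidean distance, so sigma^2 drops out entirely.
    """
    d2 = np.stack([((x - m) ** 2).sum(axis=1) for m in means], axis=1)
    return d2.argmin(axis=1)

# Hypothetical usage: train_x / test_x are (n, 784) arrays of pixels in [0, 1],
# train_y / test_y are digit labels 0-9.
# means = class_means(train_x, train_y)
# accuracy = (classify(test_x, means) == test_y).mean()
```

With a full covariance per class this becomes the usual Gaussian (QDA-style) classifier; the isotropic choice above is just the simplest covariance one could pick.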
-
Well, having trained a whole bundle of baselines already, I'm not going to do another without a really compelling reason.