New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble …
Replying to @michael_nielsen
If you need a benchmark, this type of stuff is relatively easy to do with Gaussian mixture models. Use every example as a mean & modify the covariances to increase the accuracy.
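(A minimal sketch of what this suggested baseline could look like, reading "every example as a mean" as a mixture with one component per training example and a single shared isotropic covariance used as the tuning knob. The function name, the isotropic-covariance choice, and the sigma grid below are illustrative assumptions, not details from the thread.)

import numpy as np

def gmm_baseline_predict(X_train, y_train, X_test, sigma):
    """Classify each test point by per-class mixture likelihood.

    Each training example acts as a fixed component mean; all components
    share an isotropic covariance sigma^2 * I, so the log-density of a
    point x under a component with mean mu is -||x - mu||^2 / (2 sigma^2)
    up to a constant that cancels across classes.
    """
    preds = []
    classes = np.unique(y_train)
    for x in X_test:
        class_scores = []
        for c in classes:
            mus = X_train[y_train == c]                # means for class c
            sq_dists = np.sum((mus - x) ** 2, axis=1)  # ||x - mu||^2 per component
            log_comp = -sq_dists / (2.0 * sigma ** 2)  # per-component log-density
            # log of the uniformly weighted mixture likelihood for class c
            class_scores.append(np.logaddexp.reduce(log_comp) - np.log(len(mus)))
        preds.append(classes[np.argmax(class_scores)])
    return np.array(preds)

# Example usage on a tiny MNIST-style subset (shapes are assumptions):
# X_train: (100, 784) -- 10 examples per digit, flattened pixels in [0, 1]
# sigma would be chosen on held-out data, e.g. from np.logspace(-1, 1, 10).

(With the means fixed at the training examples and only a shared sigma tuned, there is no per-component fitting at all, which is presumably what "no training" means in the replies below.)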
Replying to @caglar_ee
I ran a half-dozen different baselines in the post linked at the top. None got much above 75%. I haven't done GMM, but I'd be surprised if they were much better.
Replying to @michael_nielsen
If I remember correctly I did get around 98.4% accuracy on the test set (without any preprocessing) with GMMs (20-25 Gaussians) using all the training data. SVM with rbf kernel was something like 98.6%. So it wasn't overfitting much.
Replying to @michael_nielsen
Yes, that's why I said to use each digit as a mean and choose a covariance. No training, so it's very easy to form a baseline classifier. What I meant was that, as far as I remember, GMMs didn't perform that badly on MNIST.
Replying to @caglar_ee
Well, having trained a whole bundle of baselines already, I'm not going to do another without a really compelling reason.
Replying to @michael_nielsen
It was just a suggestion in case you want another baseline. They're easy to use with a small number of training examples and didn't perform badly on MNIST, as far as I remember. That's all.
Okay, thanks for the suggestion!