New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble
-
Replying to @michael_nielsen
If you need a benchmark, this kind of task is relatively easy to do with Gaussian mixture models: use each training example as a component mean and tune the covariances to increase accuracy.
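A minimal sketch of that suggestion, assuming arrays X_train, y_train, X_test (hypothetical names, not from the linked post): place one Gaussian component on every training example, with a shared isotropic covariance sigma**2 * I that would be tuned on held-out data, and classify by the highest class log-likelihood. This is essentially a Parzen-window-style baseline.

```python
import numpy as np

def gmm_on_examples_predict(X_train, y_train, X_test, sigma=3.0):
    """Classify by per-class mixture likelihood, one component per example."""
    classes = np.unique(y_train)
    preds = np.empty(len(X_test), dtype=y_train.dtype)
    for i, x in enumerate(X_test):
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]            # component means = the examples
            sq = ((Xc - x) ** 2).sum(axis=1)      # squared distances to each mean
            # Log mixture likelihood: equal weights 1/N_c, shared isotropic sigma.
            # The common normalization constant cancels across classes.
            log_lik = np.logaddexp.reduce(-sq / (2 * sigma**2)) - np.log(len(Xc))
            scores.append(log_lik)
        preds[i] = classes[int(np.argmax(scores))]
    return preds
```

With only 10 examples per digit this reduces to a 10-component mixture per class, where sigma is the single knob controlling how much the model smooths between examples.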
-
Replying to @caglar_ee
I ran a half-dozen different baselines in the post linked at the top; none got much above 75%. I haven't tried GMMs, but I'd be surprised if they did much better.
-
Replying to @michael_nielsen
If I remember correctly, I got around 98.4% accuracy on the test set (without any preprocessing) with GMMs (20-25 Gaussians per class) using all the training data. An SVM with RBF kernel was something like 98.6%. So it wasn't overfitting much.
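A sketch of the two full-data baselines described above, using scikit-learn: one GaussianMixture per digit class (25 components here), scored by class log-likelihood, plus an RBF-kernel SVM. The hyperparameters are illustrative guesses, not the original settings, so the 98.4%/98.6% figures won't necessarily be reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_gmm_classifier(X_train, y_train, n_components=25):
    """Fit one GMM per class on that class's training examples."""
    models = {}
    for c in np.unique(y_train):
        gm = GaussianMixture(n_components=n_components,
                             covariance_type="diag", random_state=0)
        gm.fit(X_train[y_train == c])
        models[c] = gm
    return models

def gmm_predict(models, X_test):
    """Predict the class whose GMM assigns the highest log-likelihood."""
    classes = sorted(models)
    # score_samples returns the per-sample log-likelihood under each class model
    scores = np.stack([models[c].score_samples(X_test) for c in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]

# RBF-SVM baseline for comparison (C and gamma are illustrative)
svm = SVC(kernel="rbf", gamma="scale", C=10.0)
```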
-
The key point: I'm using just 10 training examples of each digit, not ~6,000. That constraint is the whole point of the investigation.
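For concreteness, a minimal sketch of the reduced-MNIST setup: draw 10 examples of each digit from the full training set, 100 examples total. Array names are hypothetical, and the linked post's actual sampling scheme may differ.

```python
import numpy as np

def reduced_mnist(X_train, y_train, per_digit=10, seed=0):
    """Subsample the training set to per_digit examples of each digit."""
    rng = np.random.default_rng(seed)
    idx = []
    for d in range(10):
        digit_idx = np.flatnonzero(y_train == d)
        idx.extend(rng.choice(digit_idx, size=per_digit, replace=False))
    idx = np.array(idx)
    return X_train[idx], y_train[idx]
```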