New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble
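For context, the reduced-MNIST setup described above can be reproduced by subsampling the standard training set. A minimal sketch, assuming scikit-learn's fetch_openml copy of MNIST; the helper name reduced_mnist is hypothetical, and the exact subsample used in the linked post is unknown (at 10 examples per class, the random seed matters a lot):

```python
import numpy as np
from sklearn.datasets import fetch_openml

def reduced_mnist(n_per_digit=10, seed=0):
    """Subsample MNIST to n_per_digit training examples per class."""
    X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
    y = y.astype(int)
    # Conventional split: first 60,000 examples for training, last 10,000 for test.
    X_train, y_train = X[:60000] / 255.0, y[:60000]
    X_test, y_test = X[60000:] / 255.0, y[60000:]
    rng = np.random.default_rng(seed)
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y_train == d), size=n_per_digit, replace=False)
        for d in range(10)
    ])
    return X_train[keep], y_train[keep], X_test, y_test
```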
I ran a half-dozen different baselines in the post linked at the top. None got much above 75%. I haven't done GMM, but I'd be surprised if they were much better.
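The specific baselines aren't listed here, but a typical simple one, an RBF-kernel SVM trained on the 100-example subset, might look like the sketch below (it reuses the hypothetical reduced_mnist helper from the previous sketch; the hyperparameters are illustrative, and accuracy will vary with the subsample, so the ~75% figure is the post's result, not a guarantee):

```python
from sklearn.svm import SVC

X_tr, y_tr, X_te, y_te = reduced_mnist(n_per_digit=10)
clf = SVC(kernel='rbf', C=10, gamma='scale')  # illustrative hyperparameters
clf.fit(X_tr, y_tr)
print('test accuracy:', clf.score(X_te, y_te))
```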
-
If I remember correctly, I got around 98.4% accuracy on the test set (without any preprocessing) with GMMs (20-25 Gaussians) using all the training data. An SVM with an RBF kernel was something like 98.6%. So the GMMs weren't overfitting much.
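For reference, a class-conditional GMM classifier of the kind described (one mixture per digit, ~20 components, classify by maximum log-likelihood) might look like the following sketch on the full 60,000-example training set. It uses scikit-learn's GaussianMixture; the PCA step and all hyperparameters are assumptions, since the original setup isn't specified:

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
y = y.astype(int)
X_train, y_train = X[:60000] / 255.0, y[:60000]
X_test, y_test = X[60000:] / 255.0, y[60000:]

# PCA keeps the per-component covariance estimates tractable in 784 dimensions;
# whether the original experiment reduced dimensionality is unknown (assumption).
pca = PCA(n_components=50).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Fit one mixture per digit class; classify each test point by whichever
# class mixture assigns it the highest log-likelihood. MNIST classes are
# roughly balanced, so the class prior term is dropped.
gmms = [GaussianMixture(n_components=20, covariance_type='full',
                        reg_covar=1e-4, random_state=0).fit(Z_train[y_train == d])
        for d in range(10)]
scores = np.stack([g.score_samples(Z_test) for g in gmms], axis=1)
print('test accuracy:', (scores.argmax(axis=1) == y_test).mean())
```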
-
The key point: I'm using just 10 training examples of each digit, not 6,000. That's the whole point of the investigation.