New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble
Well, having trained a whole bundle of baselines already, I'm not going to do another without a really compelling reason.
-
-
It was just a suggestion in case you want another baseline. They're easy to use with a small number of training examples and, as far as I remember, didn't perform badly on MNIST. That's all.
-
Okay, thanks for the suggestion!
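-
As an illustration of the kind of baseline being discussed (not the method from the linked notes), here's a minimal sketch of a nearest-neighbour classifier trained on just 10 examples per digit. It uses scikit-learn's small built-in 8x8 digits dataset as a quick stand-in for MNIST, so the exact accuracy won't match the numbers above; the classifier choice and random seed are assumptions for demonstration only.

```python
# Hypothetical baseline sketch: 1-nearest-neighbour on 10 training
# examples per digit. Uses sklearn's 8x8 digits dataset as a stand-in
# for MNIST so it runs in seconds.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X, y = digits.data, digits.target

# Sample 10 training examples for each of the 10 digit classes.
rng = np.random.default_rng(0)
train_idx = []
for digit in range(10):
    idx = np.flatnonzero(y == digit)
    train_idx.extend(rng.choice(idx, size=10, replace=False))
train_idx = np.array(train_idx)

# Everything not sampled for training becomes the test set.
test_mask = np.ones(len(y), dtype=bool)
test_mask[train_idx] = False

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X[train_idx], y[train_idx])
acc = clf.score(X[test_mask], y[test_mask])
print(f"accuracy with 10 examples per digit: {acc:.4f}")
```

Even this trivial baseline does surprisingly well in the small-sample regime, which is exactly why it's worth comparing against before reaching for heavier machinery.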