New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble
The most useful ideas: use a small conv net with dropout & data augmentation (to reduce overfitting), simulated annealing to find hyper-parameters, and an ensemble of many nets to improve performance.
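Of the ideas above, the simulated-annealing hyper-parameter search is easy to sketch in pure Python. The objective and parameter ranges below are hypothetical stand-ins (in the real experiments the score would be a trained net's validation accuracy, and the parameters things like learning rate and dropout probability); this is a sketch of the technique, not Nielsen's actual code:

```python
import math
import random

def anneal_hyperparams(score, initial, neighbor, steps=200, t0=1.0, cooling=0.98, seed=0):
    """Simulated annealing over hyper-parameters.

    score(params)        -> higher is better (e.g. validation accuracy)
    neighbor(params, rng) -> a nearby candidate setting
    """
    rng = random.Random(seed)
    current = best = initial
    cur_s = best_s = score(initial)
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        s = score(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(delta / t), which shrinks as t cools.
        if s >= cur_s or rng.random() < math.exp((s - cur_s) / t):
            current, cur_s = cand, s
        if cur_s > best_s:
            best, best_s = current, cur_s
        t *= cooling  # lower the temperature each step
    return best, best_s

# Toy stand-in objective: pretend accuracy peaks at
# learning rate 0.1 and dropout probability 0.5.
def toy_score(p):
    lr, drop = p
    return 1.0 - (math.log10(lr) + 1) ** 2 - (drop - 0.5) ** 2

def toy_neighbor(p, rng):
    lr, drop = p
    lr = min(1.0, max(1e-4, lr * 10 ** rng.uniform(-0.3, 0.3)))
    drop = min(0.9, max(0.0, drop + rng.uniform(-0.05, 0.05)))
    return (lr, drop)

best, best_s = anneal_hyperparams(toy_score, (0.001, 0.2), toy_neighbor)
```

Swapping `toy_score` for "train the conv net with these hyper-parameters, return validation accuracy" gives the search described in the tweet; the ensemble then combines several nets found this way.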
Replying to @michael_nielsen
Also, which reduced training set have you decided to use?
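One plausible construction of such a reduced training set (a sketch of per-class subsampling, not necessarily the exact selection Nielsen used) is to draw a fixed number of examples of each digit at random. The data here is a synthetic stand-in; in real use `examples`/`labels` would be the MNIST training images and digit labels:

```python
import random

def reduce_per_class(examples, labels, n_per_class=10, seed=0):
    """Keep only n_per_class randomly chosen examples of each label."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    reduced_x, reduced_y = [], []
    for y in sorted(by_label):
        for x in rng.sample(by_label[y], n_per_class):
            reduced_x.append(x)
            reduced_y.append(y)
    return reduced_x, reduced_y

# Stand-in data: 100 fake "images" per digit.
xs = list(range(1000))
ys = [i % 10 for i in range(1000)]
rx, ry = reduce_per_class(xs, ys, n_per_class=10)
```

With `n_per_class=10` on MNIST this yields the 100-example training set the thread describes (10 examples for each of the 10 digits).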