New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble
The most useful ideas: use a small conv net with dropout & data augmentation (to reduce overfitting), simulated annealing to find hyper-parameters, and an ensemble of many nets to improve performance.
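The linked notes contain the actual experiments; as a rough sketch of the latter two ideas, here's a minimal simulated-annealing hyperparameter search plus a majority-vote ensemble in plain Python. The `evaluate` function below is a hypothetical stand-in for "train a small conv net with these hyperparameters and return validation accuracy" (no real training happens here), and the hyperparameters, ranges, and cooling schedule are illustrative assumptions, not the values used in the notes.

```python
import math
import random
from collections import Counter

def anneal(evaluate, neighbor, init, steps=200, t0=1.0, cooling=0.98):
    """Simulated annealing over hyperparameters: always accept an
    improvement, accept a worse candidate with probability exp(delta/T),
    and geometrically cool the temperature T."""
    random.seed(0)  # reproducible for this sketch
    current, best = init, init
    cur_score = best_score = evaluate(init)
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        score = evaluate(cand)
        delta = score - cur_score
        if delta > 0 or random.random() < math.exp(delta / t):
            current, cur_score = cand, score
        if cur_score > best_score:
            best, best_score = current, cur_score
        t *= cooling
    return best, best_score

def evaluate(hp):
    """Hypothetical stand-in for training a small conv net (with dropout
    and data augmentation) and returning validation accuracy. Here it is
    just a smooth function peaked at lr=10^-2.5, dropout=0.5."""
    lr, drop = hp
    return -(math.log10(lr) + 2.5) ** 2 - (drop - 0.5) ** 2

def neighbor(hp):
    """Perturb learning rate (log scale) and dropout rate, clipped to
    illustrative ranges."""
    lr, drop = hp
    lr = min(1e-1, max(1e-5, lr * 10 ** random.uniform(-0.3, 0.3)))
    drop = min(0.9, max(0.0, drop + random.uniform(-0.1, 0.1)))
    return (lr, drop)

best_hp, best_score = anneal(evaluate, neighbor, init=(1e-2, 0.2))

def ensemble_vote(predictions):
    """Combine many nets by majority vote: `predictions` is one list of
    predicted labels per net; each test example gets the most common label."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

print(best_hp, best_score)
print(ensemble_vote([[1, 2, 3], [1, 2, 0], [7, 2, 3]]))  # → [1, 2, 3]
```

In the real setting each `evaluate` call is expensive (a full training run), which is exactly why a derivative-free search like annealing is attractive; the ensemble step then reuses the many nets trained along the way rather than discarding them.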
On the addictive enjoyment (?) involved in training neural nets: http://cognitivemedium.com/rmnist_anneal_ensemble
pic.twitter.com/A6MtKR37ko
6:07 PM - 26 Nov 2017