New notes on using ML to generalize from small amounts of training data: attacking MNIST with just 10 training examples for each digit. 93.81% accuracy: http://cognitivemedium.com/rmnist_anneal_ensemble …
-
On the addictive enjoyment (?) involved in training neural nets: http://cognitivemedium.com/rmnist_anneal_ensemble … pic.twitter.com/A6MtKR37ko
-
I had a question on the ensemble. The way I was taught it, an ensemble performs best when the models it averages are really different, but aren't you using convnets all the way through?
-
Yes, I am. It'd be nice to find other models with pretty good performance to include in the ensemble.
- 4 more replies
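For context, prediction averaging over an ensemble of convnets might look like the following minimal sketch, assuming PyTorch; `models` and `images` are hypothetical placeholders, not code from the linked notes.

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, images):
    """Average softmax outputs over an ensemble of trained classifiers.

    models: list of trained torch.nn.Module classifiers (hypothetical)
    images: tensor of shape (batch, 1, 28, 28) for MNIST
    Returns predicted digit labels of shape (batch,).
    """
    probs = None
    with torch.no_grad():
        for model in models:
            model.eval()
            p = F.softmax(model(images), dim=1)  # per-model class probabilities
            probs = p if probs is None else probs + p
    probs = probs / len(models)   # mean probability across the ensemble
    return probs.argmax(dim=1)    # most probable digit for each image
```

Averaging softmax probabilities rather than hard votes lets each model's confidence count; the benefit grows with ensemble diversity (different architectures, seeds, or augmentations), which is the point of the question above.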
New conversation -
Also, what reduced-sample training set have you decided on using?
-
It's in the linked repo.
End of conversation
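For readers who don't want to dig through the repo, a reduced training set of this kind can be built along the following lines. This is a sketch assuming torchvision's MNIST loader; the exact examples used in the linked repo may differ.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

def reduced_mnist(n_per_digit=10, seed=0):
    """Sample n_per_digit training examples of each digit from MNIST."""
    train = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
    g = torch.Generator().manual_seed(seed)  # fixed seed for reproducibility
    indices = []
    for digit in range(10):
        digit_idx = (train.targets == digit).nonzero(as_tuple=True)[0]
        perm = digit_idx[torch.randperm(len(digit_idx), generator=g)]
        indices.extend(perm[:n_per_digit].tolist())
    return Subset(train, indices)  # 100 examples total for n_per_digit=10
```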
New conversation -
I'm slightly surprised! In my experience, bare-bones simulated annealing (SA) tends not to work that well in non-trivial applications where the function to be optimized can become increasingly complex; one should try out other global optimization methods (e.g. parallel tempering).
End of conversation
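For reference, "bare-bones SA" means a single-chain loop along these lines (a generic sketch, not code from the linked notes; `objective` and `neighbor` are hypothetical problem-specific functions):

```python
import math
import random

def simulated_annealing(objective, neighbor, x0, t0=1.0, cooling=0.99, steps=10_000):
    """Minimize objective() with a single-chain, geometric-cooling SA loop."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)     # propose a random nearby candidate
        fy = objective(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling        # geometric cooling schedule
    return best, fbest
```

Parallel tempering, by contrast, runs several such chains at different fixed temperatures and occasionally swaps states between them, which helps on rugged objectives where a single cooling chain tends to get stuck.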