With backprop training I get 2.3% error on MNIST with just a 2-layer net of 1000 hidden units, no convolutional layers. Pretty good AFAIK.
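The architecture described above (a 2-layer fully connected net, 784 inputs → 1000 hidden units → 10 classes, no convolutions) could be sketched like this. This is a hypothetical numpy forward pass, not the actual code being discussed; the tanh/softmax choices and weight shapes are assumptions:

```python
import numpy as np

def two_layer_forward(x, W1, b1, W2, b2):
    """Forward pass of a 2-layer MLP: 784 inputs -> 1000 hidden -> 10 classes."""
    h = np.tanh(x @ W1 + b1)                       # hidden layer activations
    z = h @ W2 + b2                                # raw class scores
    e = np.exp(z - z.max(axis=1, keepdims=True))   # stable softmax
    return e / e.sum(axis=1, keepdims=True)        # class probabilities

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, (784, 1000)); b1 = np.zeros(1000)
W2 = rng.normal(0, 0.01, (1000, 10));  b2 = np.zeros(10)
probs = two_layer_forward(rng.normal(size=(4, 784)), W1, b1, W2, b2)
print(probs.shape)  # (4, 10)
```

Each output row sums to 1, so the argmax gives the predicted digit.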
Looks like it would sometimes be a classification problem and sometimes a regression problem. You need support for multiple losses.
Yep… one size fits all is hard. Fortunately, though, it's pretty easy to document this and to pick a sane default.
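Supporting multiple losses with a sane default, as suggested above, could look something like this. A minimal sketch with hypothetical names, assuming MSE as the regression default and cross-entropy for classification:

```python
import numpy as np

def mse_loss(pred, target):
    # Regression loss: mean squared error (the assumed sane default).
    return np.mean((pred - target) ** 2)

def cross_entropy_loss(probs, target):
    # Classification loss: negative log-likelihood of one-hot targets.
    return -np.mean(np.sum(target * np.log(probs + 1e-12), axis=1))

def loss(pred, target, kind="mse"):
    # Default to MSE; callers opt into cross-entropy explicitly.
    return cross_entropy_loss(pred, target) if kind == "xent" else mse_loss(pred, target)
```

A confident correct prediction then scores lower than a confident wrong one, e.g. `loss([[0.9, 0.1]], [[1, 0]], kind="xent")` ≈ 0.105 versus ≈ 2.30 for `[[0.1, 0.9]]`.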
In theory Redis could guess after a few tries in most cases? If the "teacher's" outputs always look like all 0s but a single 1…
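The guess described above — treat the dataset as classification when every target vector is all 0s except a single 1 (i.e. one-hot) — could be sketched as a small heuristic. A hypothetical helper, not the actual Redis code:

```python
import numpy as np

def looks_like_classification(targets):
    """Heuristic: if every target row is one-hot (only 0s and a single 1),
    the problem is probably classification rather than regression."""
    t = np.asarray(targets)
    is_binary = np.isin(t, (0, 1)).all()        # values are only 0 or 1
    single_one = (t.sum(axis=1) == 1).all()     # exactly one 1 per row
    return bool(is_binary and single_one)

print(looks_like_classification([[0, 1, 0], [1, 0, 0]]))  # True
print(looks_like_classification([[0.2, 0.8, 0.0]]))       # False
```

Checking "after a few tries" just means running this over the first few training examples before committing to a loss.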
Black box ML is a very difficult problem. One size fits all is impossible; you need to constrain the problem upstream.