contradicted by the fact that it generalizes surprisingly well on unseen data?
Depends on the unseen data. NNs memorize far more irrelevant detail, the kind that shows up in some, but not all, unseen data, than legacy classifiers do.
End of conversation
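The memorization point is easy to demonstrate. A minimal sketch (illustrative only; the model size, data shape, and random-label setup are my choices, not anything from the thread): a small scikit-learn MLP trained on purely random labels scores near-perfectly on the data it has seen and at chance on held-out data, i.e. it memorizes rather than learns.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))    # random inputs: no real signal to learn
    y = rng.integers(0, 2, size=200)  # random labels: only memorization can fit them

    X_train, X_test = X[:150], X[150:]
    y_train, y_test = y[:150], y[150:]

    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=5000, random_state=0)
    clf.fit(X_train, y_train)

    print("train accuracy:", clf.score(X_train, y_train))  # near 1.0: memorized
    print("test accuracy: ", clf.score(X_test, y_test))    # near 0.5: chance level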
New conversation
The field's term of art for memorizing shit is "overtraining". It doesn't always happen, but it's definitely a serious hazard.
I thought it was "overfitting", which in its trivial forms is avoided by keeping separate train & test sets. But when both sets are "biased" the same way, tough.
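The train & test split in that reply is the standard check. A rough sketch (dataset and model are arbitrary picks for illustration): an unpruned decision tree fits its training set perfectly, and only the held-out score exposes the generalization gap. The caveat from the thread still applies: if the train and test sets share the same bias, the held-out score will look fine and the gap stays hidden.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    tree = DecisionTreeClassifier(random_state=0)  # no depth limit: free to memorize noise
    tree.fit(X_train, y_train)

    print("train accuracy:", tree.score(X_train, y_train))  # 1.0: fits training data exactly
    print("test accuracy: ", tree.score(X_test, y_test))    # lower: the overfitting shows here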