Submitted solutions for my first @kaggle competition! #7 here: https://www.kaggle.com/c/denoising-dirty-documents/leaderboard. Rough code on GitHub: https://github.com/gdb/kaggle.
@thegdb are deep neural nets subject to overfitting much? I know the Kaggle leaderboard is often deceptive that way.
@avibryant In general they are, though there are standard regularization techniques (such as dropout: http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf).
@avibryant I'm not regularizing at all, but my (naive) suspicion is that my dataset size is large relative to my network size.
@avibryant (My validation loss seems to be monotonically dropping, which I think means I'm not overfitting.)
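The dropout technique mentioned in the thread can be sketched in a few lines. This is a minimal illustration in NumPy, not the code from the linked repo: "inverted" dropout zeroes each unit with some probability during training and rescales the survivors, so no change is needed at test time. The function name and shapes are hypothetical.

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    # Inverted dropout: zero each unit with probability `rate` and
    # scale survivors by 1/(1-rate), so the expected activation is
    # unchanged and no rescaling is needed at inference time.
    if not training or rate == 0.0:
        return x
    keep = 1.0 - rate
    mask = rng.random(x.shape) < keep
    return x * mask / keep

rng = np.random.default_rng(0)
activations = np.ones((4, 8))
# During training roughly half the units are zeroed, the rest doubled.
dropped = dropout(activations, rate=0.5, rng=rng)
# At inference the input passes through untouched.
unchanged = dropout(activations, rate=0.5, rng=rng, training=False)
```

The validation-loss observation in the last tweet is the usual early-stopping signal: as long as held-out loss keeps falling alongside training loss, the network is still generalizing rather than memorizing.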