"WTF? Why can't I get the accuracy of this classifier higher?!" (manually looks at training set targets) "Oh. I see"
Replying to @Zecca_Lehn
It is, that's why I was confused. It's a basic CNN that *should* have been better. Mislabeled training set though...yep.
Replying to @generativist
Do CNNs tend to improve the imbalanced class problem?
Replying to @Zecca_Lehn
As always, the annoying answer for deep learning: "maybe? assuming you have enough examples." ;)
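(Aside: "assuming you have enough examples" is the crux. A common mitigation when you don't is inverse-frequency class weighting in the loss; a minimal sketch with made-up labels:)

```python
from collections import Counter

# Hypothetical imbalanced binary labels: 90 negatives, 10 positives.
y = [0] * 90 + [1] * 10

# The common "balanced" heuristic:
#   weight[c] = n_samples / (n_classes * count[c])
# so the rare class gets proportionally more weight in the loss.
counts = Counter(y)
n, k = len(y), len(counts)
weights = {c: n / (k * counts[c]) for c in counts}
print(weights)  # -> {0: 0.5555555555555556, 1: 5.0}
```

These weights would then be handed to whatever loss or `class_weight` argument the training framework exposes.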
Replying to @Zecca_Lehn
Oh yea. Had a small dataset recently. Thought, "way too tiny for deep methods. Stick to basics." Then I tried it anyway. 20% improvement.
Replying to @Zecca_Lehn
Yes. I think it was just a lucky example, but it took 20 minutes of testing, so it was affordable luck.
Replying to @generativist
I imagine compute time on CNNs is 10x+ and memory demand 5x+ over RFs.
Depends on how many trees you grow, but in general: yes, expensive. And I don't have a GPU, which makes it more painful. Need cloud instances.
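(Aside: the time/memory comparison being debated here is cheap to measure directly. A minimal sketch of a profiling harness; the two `train_*` functions are hypothetical stand-ins for real RF and CNN fits.)

```python
import time
import tracemalloc

def profile(train_fn):
    """Measure wall-clock time and peak Python heap for one training run."""
    tracemalloc.start()
    t0 = time.perf_counter()
    train_fn()
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

# Hypothetical stand-ins; swap in e.g. rf.fit(X, y) and a CNN training loop.
def train_rf():
    sum(range(10_000))

def train_cnn():
    sum(range(100_000))

for name, fn in [("RF", train_rf), ("CNN", train_cnn)]:
    secs, peak = profile(fn)
    print(f"{name}: {secs:.4f}s, peak {peak} bytes")
```

Note that `tracemalloc` only sees Python-heap allocations, so for GPU or C-extension memory you would need framework-specific tooling.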
