My research will never become a good ML paper.
But it's okay.
-
-
-
I’ve come to believe that almost every application of ML can be made into a good paper. You just have to pick out the right novel subcomponents and build a good paper story around them. The opposite is not true, though. Most research will never make it into industry.
-
If you define “good ML paper” as “ML paper accepted at NeurIPS/ICML/etc” then writing a good ML paper is probably much more about how skilled you are at storytelling and paper writing than about ML. Of course, that’s a bad definition of “good ML paper” ;)
-
Effective storytelling extends well beyond writing papers that get accepted at academic venues. I think it is a really important life skill that can transfer over to things like being able to raise money for a startup, or convincing others to collaborate. https://twitter.com/hardmaru/status/1054220900176777216
-
Totally agree. I think it’s perhaps one of *the* most important life skills. It applies to almost everything. Stories elicit emotions, and as humans we tend to act on emotions. Anything that involves convincing others in one way or another is driven by storytelling.
New conversation -
-
-
As a computer vision person, I very much agree with your second point that collecting more/better training data is one of the best ways to improve performance on vision problems in industry. Hard to write papers about this, but it really works.
New conversation -
-
-
True breakthroughs like AlphaZero might happen in reinforcement learning settings.
-
How many business problems involve a well-structured, two-player, self-contained game that is exactly the same each time?
-
Also perfect information
-
Perfect information or not, the goodness of the situation is approximated using a neural network; i.e. even if the info is perfect, we can't enumerate all possibilities and say that one move is the best. Look at Dota 2, where it's imperfect info: they use a similar technique.
New conversation -
-
-
Great analysis. This is something we struggle with, as one of our main missions is to bring academic solutions into the company to create value. Finding ways to use the research to make justifiable gains is never easy.
-
-
-
There should be a metric to evaluate the complexity, instability, and training difficulty of models. In other words, the research community needs a regularization term to combat overfitting of their approaches.
-
The problem with model complexity in research isn’t overfitting (after all, model performance in research is high/bleeding edge). The problem is the cost-to-benefit ratio of implementing highly complex models in production.
New conversation -
-
-
85% of papers do not come with reproducible code. The ones that are reproducible often have severe constraints, target a narrow specific scenario, or are impractical. Reproducible papers with significant improvements and broad applicability typically get picked up by industry pretty fast these days.
-
-
-
So this "research" engineer diligently collects, cleans, and preprocesses the data....
-
-
-
Perhaps also why many ML research advances don't translate to analogous problems in other areas of science. It's difficult to re-apply methods that are hyper-optimized for benchmark data sets.
-
-
-
This is why I believe we should focus more on practical cases and solving real application problems. There are so many areas where we should already have solutions, but most companies seem to be lost when it comes to applying ML. Energy optimization, for example, should already be massive.
-