Since many interesting machine learning papers now regularly require hundreds or even thousands of CPUs/GPUs to replicate, what strategies are realistically left for startups, public institutions, and individuals to do meaningful research in ML?
-
Plus, work that develops fundamental ideas (think dropout, batch norm, CapsNets, ...) often starts on tractable datasets (MNIST, the CIFARs, ...). ImageNet-scale research is almost always done by people and companies with ample resources, and it rarely attacks the fundamentals.
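To make the point concrete, here is a minimal sketch of the kind of single-machine experiment such fundamental work tends to start from: a tiny convnet on MNIST where one knob (the dropout rate) is the object of study. This assumes PyTorch and torchvision are installed; the architecture and hyperparameters are illustrative choices, not taken from any particular paper.

```python
# Illustrative only: a small-scale experiment of the kind described above.
# Assumes PyTorch + torchvision; all hyperparameters are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class SmallConvNet(nn.Module):
    """A tiny convnet whose dropout rate is the experimental variable."""

    def __init__(self, p_drop: float):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(16)   # batch norm: a "fundamentals" idea
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(32)
        self.drop = nn.Dropout(p_drop)  # dropout: developed on tractable data
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), 2)
        x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), 2)
        x = self.drop(torch.flatten(x, 1))
        return self.fc(x)


def run_experiment(p_drop: float, epochs: int = 1) -> float:
    """Train on MNIST and return test accuracy; runs on one CPU/GPU."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.ToTensor()
    train = datasets.MNIST("data", train=True, download=True, transform=tfm)
    test = datasets.MNIST("data", train=False, download=True, transform=tfm)
    train_loader = DataLoader(train, batch_size=128, shuffle=True)
    test_loader = DataLoader(test, batch_size=256)

    model = SmallConvNet(p_drop).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
    return correct / len(test)


if __name__ == "__main__":
    # Sweep the single variable under study -- cheap enough for a laptop.
    for p in (0.0, 0.25, 0.5):
        print(f"dropout={p}: test acc={run_experiment(p):.4f}")
```

The whole sweep runs in minutes on commodity hardware, which is exactly why ideas like dropout and batch norm could be prototyped without industrial-scale compute.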