Since many of the interesting machine learning papers now regularly require hundreds or even thousands of CPUs/GPUs to replicate, what strategies are realistically left for startups, public institutions, and individuals to do meaningful research in ML?
-
excellent points, thanks!
-
A lot of important work comes from the constraint of having small compute. In ~2011, Ng et al. attacked ImageNet with 1,000+ CPUs. In 2012, Hinton et al. did way better with 2 GPUs. In general, less compute often means people need better ideas, which isn't a tragedy!
New conversation
-
Well said. However, we're getting there: an Nvidia DGX-1 costs $149k, the same as a gas chromatograph. Granted, the GC will kill your budget with about $1.5k/month of consumables, but still, powerful GPUs *are* getting expensive.
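For scale, a rough back-of-the-envelope comparison of the two instruments' total costs, using only the figures quoted above; the 5-year horizon is an illustrative assumption, and the DGX-1's power and maintenance costs are ignored:

    # Back-of-the-envelope total cost of ownership, based on the
    # figures quoted in the reply above.
    dgx1_upfront = 149_000     # USD, NVIDIA DGX-1 (no consumables assumed)
    gc_upfront = 149_000       # USD, gas chromatograph
    gc_consumables = 1_500     # USD per month, GC consumables
    years = 5                  # illustrative horizon

    dgx1_total = dgx1_upfront
    gc_total = gc_upfront + gc_consumables * 12 * years

    print(f"DGX-1 over {years} years: ${dgx1_total:,}")  # $149,000
    print(f"GC over {years} years:    ${gc_total:,}")    # $239,000

So over five years the GC's consumables add roughly $90k on top of its purchase price, which is the sense in which a $149k GPU box is comparable to, or even cheaper than, traditional lab instrumentation.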