Not really true. Generic HPC systems are terrible for AI. AI startups should build their own systems; generic HPC cripples them over time.
-
Even GPU based HPC systems?
-
GPUs are not enough. The most important piece is the network. You will not gain any speedups with a standard network; training can even be slower.
-
As in interconnect? Don't many researchers work on single node, multi-GPU boxes?
-
Yes. My estimate for researchers: 90% work with multiple single GPUs; 9.5% with one node (= 4 GPUs); 0.5% with multiple nodes (HPC). For startups, though: 50%+ use one node or more.
-
Gotcha. What's the #1 thing I need to know about interconnects? Any articles/blogs that cover this?
-
Bandwidth (>100 Gbit/s) and latency (<10 μs) are important, as is network layout. Outdated, but it gives you an overview: http://timdettmers.com/2014/09/21/how-to-build-and-use-a-multi-gpu-system-for-deep-learning/
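A rough back-of-envelope sketch of why those bandwidth and latency numbers matter for multi-node training. All figures here (model size, link speeds, latencies) are illustrative assumptions, not from the conversation:

```python
# Back-of-envelope: time to ship one round of gradients between nodes.
# All numbers below are made-up assumptions for illustration.

def transfer_time_s(payload_bytes, bandwidth_gbit_s, latency_us):
    """Time to move one payload over a link: latency + payload / bandwidth."""
    bandwidth_bytes_s = bandwidth_gbit_s * 1e9 / 8
    return latency_us * 1e-6 + payload_bytes / bandwidth_bytes_s

grads = 100e6 * 4  # hypothetical 100M-parameter model, fp32 -> 400 MB of gradients

slow = transfer_time_s(grads, bandwidth_gbit_s=1, latency_us=500)   # commodity Ethernet
fast = transfer_time_s(grads, bandwidth_gbit_s=100, latency_us=10)  # HPC-class interconnect

print(f"1 Gbit/s:   {slow:.3f} s per gradient sync")
print(f"100 Gbit/s: {fast:.3f} s per gradient sync")
```

Under these assumptions the slow link needs seconds per synchronization while the fast one needs tens of milliseconds, which is why a standard network can erase (or invert) any multi-node speedup.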
End of conversation
New conversation
-
More info on the rise of Big Compute: https://blog.rescale.com/cloud-3-0-the-rise-of-big-compute/
-
Not true for AI startups.
-
Renting always makes sense until you exceed 50% of the full cost of owning; from that point on you should consider owning instead.
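The 50% rule of thumb above can be turned into a tiny break-even sketch. The prices are made-up assumptions for illustration, not real cloud or hardware quotes:

```python
# Rent-vs-own sketch for the "50% of owning cost" rule of thumb.
# Both prices below are hypothetical.

OWN_COST = 40_000.0       # assumed full cost of owning the machine
RENT_PER_MONTH = 2_000.0  # assumed monthly rental price

def months_until_owning_pays(own_cost, rent_per_month):
    """Months of renting until cumulative rent hits 50% of the owning cost."""
    threshold = 0.5 * own_cost
    return threshold / rent_per_month

print(f"Consider owning after ~{months_until_owning_pays(OWN_COST, RENT_PER_MONTH):.0f} months")
```

With these numbers, cumulative rent reaches half the purchase price after about ten months, which is the point where the rule says to start pricing out your own hardware.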
-
With quantum computing, owning is not even feasible for most users, since the hardware is still at a 1960s-mainframe stage: very large form factors.
-
Check @golemproject