@fchollet check out @bzamecnik's perf. analysis of #Keras multi-GPU training speedups - many detailed measurements: https://github.com/rossumai/keras-multi-gpu/blob/master/blog/docs/measurements.md https://twitter.com/RossumAi/status/924985495851012096
-
But it would need auditing, in particular to check whether we correctly place the gradient computation on each device (right now we leave it to TF).
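For context, here is a minimal sketch (not the actual Keras code) of what explicit per-device gradient placement could look like in 2017-era TensorFlow graph mode; the tower layout and helper names are illustrative assumptions:

```python
# Illustrative sketch only -- not Keras' implementation. It shows gradients
# being computed on the same GPU as each model replica, instead of wherever
# TensorFlow's automatic placement happens to put them.
import tensorflow as tf  # TF1-style graph mode, as used at the time

def tower_gradients(build_replica_loss, batch_shards, num_gpus):
    """Compute gradients on each GPU for its own shard of the batch.

    Assumes build_replica_loss builds replicas that share variables
    (e.g. by calling the same Keras model on each shard).
    """
    tower_grads = []
    for i in range(num_gpus):
        with tf.device('/gpu:%d' % i):
            loss = build_replica_loss(batch_shards[i])  # forward pass on GPU i
            # colocate_gradients_with_ops=True asks TF to keep each gradient
            # op on the device of the forward op it differentiates.
            grads = tf.gradients(loss, tf.trainable_variables(),
                                 colocate_gradients_with_ops=True)
            tower_grads.append(grads)
    # Average the per-tower gradients on the parameter device (CPU here).
    with tf.device('/cpu:0'):
        avg_grads = [tf.reduce_mean(tf.stack(gs), axis=0)
                     for gs in zip(*tower_grads)]
    return avg_grads
```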
-
His NVIDIA DevBox has PCIe x16 slots, Azure offers PCIe x8, and custom builds are often even worse. Scaling is also impacted by small per-GPU batch sizes (64 -> 32).
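To make the bandwidth point concrete, a back-of-the-envelope estimate (assumed numbers, not figures from the linked measurements page):

```python
# Rough estimate of time to move one set of float32 gradients over PCIe.
# Model size and link throughputs below are assumptions for illustration.
params = 25_000_000                  # e.g. a ResNet-50-sized model
grad_bytes = params * 4              # float32 gradients
links = {"x16": 13.0, "x8": 6.5}     # ~practical PCIe 3.0 throughput, GB/s

for name, bw_gbps in links.items():
    t_ms = grad_bytes / (bw_gbps * 1e9) * 1e3
    print(f"PCIe {name}: ~{t_ms:.1f} ms per gradient transfer")

# Splitting a batch of 64 across 2 GPUs leaves 32 examples per GPU, so the
# compute per step roughly halves while this transfer cost stays fixed,
# which is why small per-GPU batches hurt scaling on slower PCIe links.
```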