@fchollet check out @bzamecnik's perf. analysis of #Keras multi-GPU training speedups, with many detailed measurements: https://github.com/rossumai/keras-multi-gpu/blob/master/blog/docs/measurements.md https://twitter.com/RossumAi/status/924985495851012096
-
-
When training from an ImageDataGenerator, efficiency was worse (around 65%), possibly because of slow disk I/O on EC2.
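The ~65% figure is consistent with an input-bound pipeline: once the generator cannot feed batches as fast as all GPUs consume them, efficiency is capped by I/O throughput rather than compute. A minimal back-of-the-envelope model (the throughput numbers below are illustrative assumptions, not measurements from the thread):

```python
def input_bound_efficiency(io_batches_per_sec, per_gpu_batches_per_sec, n_gpus):
    """Parallel efficiency when the input pipeline may be the bottleneck.

    If the data source (disk + generator) delivers fewer batches per
    second than the GPUs can collectively consume, the GPUs idle and
    efficiency drops below 1.0.
    """
    demand = n_gpus * per_gpu_batches_per_sec  # batches/s the GPUs could consume
    return min(io_batches_per_sec, demand) / demand

# Hypothetical numbers: each GPU consumes 100 batches/s, but slow EC2
# disk I/O only delivers 260 batches/s to 4 GPUs (demand = 400).
print(input_bound_efficiency(260, 100, 4))  # 0.65
```

Under this model, raising I/O throughput (caching, faster storage, more generator workers) is what recovers efficiency, not more GPUs.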
-
Adrian got 97% efficiency on a GoogLeNet-like model with 4 GPUs (74 min down to 19 min): https://www.pyimagesearch.com/2017/10/30/how-to-multi-gpu-training-with-keras-python-and-deep-learning/ With 4 GPUs, I saw low-90s efficiency.
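The efficiency figures quoted in this thread follow from the standard scaling-efficiency formula, speedup divided by GPU count. Plugging in Adrian's reported timings (74 min on 1 GPU, 19 min on 4) reproduces the ~97% figure:

```python
def scaling_efficiency(t_single, t_multi, n_gpus):
    """Multi-GPU scaling efficiency: (t_1 / t_N) / N.

    1.0 means perfect linear speedup; lower values indicate
    synchronization, I/O, or other parallelization overhead.
    """
    speedup = t_single / t_multi
    return speedup / n_gpus

# Adrian's reported numbers: 74 min -> 19 min on 4 GPUs.
print(round(scaling_efficiency(74, 19, 4), 3))  # 0.974, i.e. ~97%
```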