This is not distributed. All 4 GPUs are located on a single machine.
It is distributed across 4 GPUs on a single machine. The same code can be used to distribute across machines by just changing the mpirun command; will share the details soon.
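For reference, a minimal sketch of what such a script typically looks like with Horovod's Keras API (the model, dataset, and hyperparameters below are placeholder assumptions, not the code from this thread); the launch commands in the comments follow Horovod's standard mpirun usage, and moving to multiple machines only changes the process count and host list:

```python
# Hypothetical train.py using Horovod + Keras (TF 1.x backend).
# Single machine, 4 GPUs:
#   mpirun -np 4 python train.py
# Four machines with 4 GPUs each (same script, only the launch command changes):
#   mpirun -np 16 -H host1:4,host2:4,host3:4,host4:4 python train.py
import keras
import tensorflow as tf
import horovod.keras as hvd

hvd.init()

# Pin each process to a single GPU based on its local rank.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())
keras.backend.set_session(tf.Session(config=config))

model = keras.models.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate by the number of workers, then wrap the optimizer
# so gradients are averaged across processes with allreduce.
opt = keras.optimizers.SGD(lr=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
y_train = keras.utils.to_categorical(y_train, 10)

# Broadcast initial weights from rank 0 so all workers start in sync.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x_train, y_train, batch_size=64, epochs=1,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```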
New conversation
Looks really neat! And, as if by magic, they seem to get better than linear speed-up: 5-6x faster with 4 GPUs compared to 1.

It definitely looks great.
@HussAnders I’ve experienced similar speed-ups due to better GPU resource utilization (more data in shared memory/caches rather than global memory or host memory).
End of conversation
New conversation
How would you compare the `multi_gpu` function in the latest Keras release with Keras + Horovod?
I have the same question too.
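For context, the two approaches in question look roughly like the sketch below (an illustrative comparison only, assuming the question refers to `keras.utils.multi_gpu_model`; the model and optimizer settings are placeholders, and in practice you would use one approach or the other, not both in the same script):

```python
import keras
from keras.utils import multi_gpu_model
import horovod.keras as hvd

def build_model():
    return keras.models.Sequential([
        keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        keras.layers.Dense(10, activation='softmax'),
    ])

# Approach 1: keras.utils.multi_gpu_model -- a single Python process drives
# all GPUs; each batch is split across replicas and the gradients are
# combined in that one process. Assumes 4 GPUs are visible.
parallel_model = multi_gpu_model(build_model(), gpus=4)
parallel_model.compile(loss='categorical_crossentropy', optimizer='sgd')
# parallel_model.fit(x_train, y_train, batch_size=256)

# Approach 2: Horovod -- one process per GPU, launched with mpirun; each
# process runs the unmodified single-GPU model and gradients are averaged
# across processes via the wrapped optimizer.
hvd.init()
single_gpu_model = build_model()
opt = hvd.DistributedOptimizer(keras.optimizers.SGD(lr=0.01 * hvd.size()))
single_gpu_model.compile(loss='categorical_crossentropy', optimizer=opt)
# single_gpu_model.fit(x_train, y_train, batch_size=64,
#                      callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)])
```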
End of conversation