There are multiple distribution strategies available. Most of the time, you will use MirroredStrategy (which replicates your model on each available GPU, sends a sub-batch to each replica at every training step, and keeps the replicas in sync after each step) or TPUStrategy.
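A minimal sketch of the usual MirroredStrategy pattern (the model, layer sizes, and loss below are placeholders, not from the thread):

import tensorflow as tf

# MirroredStrategy picks up all visible GPUs by default.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Model and optimizer variables must be created inside the strategy scope
# so they are mirrored onto every replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then splits each global batch into per-replica sub-batches
# and all-reduces the gradients after every step.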
I tried the multi-worker strategy on multiple machines but it didn't work out. The IP address is recognized by both, but then the terminal hangs... a short tutorial on the requirements for this functionality would be great (see the sketch below).

Aside from memory issues, it has always worked quite well for me. What does the IP address have to do with it?
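For reference, multi-machine training in TensorFlow typically means MultiWorkerMirroredStrategy, and the main requirement is a TF_CONFIG environment variable on every machine listing each worker's IP:port. A minimal sketch, assuming two workers (the addresses and port are placeholders):

import json
import os
import tensorflow as tf

# Every machine gets the same cluster spec; only "index" differs per worker.
# The addresses below are placeholders for the real worker IPs and a free port.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["192.168.0.10:12345", "192.168.0.11:12345"],
    },
    "task": {"type": "worker", "index": 0},  # use index 1 on the second machine
})

# The strategy must be created after TF_CONFIG is set.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(20,))])
    model.compile(optimizer="adam", loss="mse")

# Every worker must run the same script and be reachable on the chosen port;
# otherwise startup blocks while waiting for the full cluster to join.

A common cause of the hang described above is exactly that blocking behavior: if a firewall drops the port, an address is wrong, or one worker never starts, the terminal just sits there waiting for the cluster to assemble.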
Is there any way to get reproducible results on GPU, especially with MirroredStrategy? Conventional wisdom (and personal experience) suggests it's only possible with pure-CPU training.
This isn't possible at this time.
Wow, nice functionalities being added to @TensorFlow & Keras. I wish I had another GPU to try this. pic.twitter.com/dLlxPdbtSV
Always works like a charm