("steps per execution" sets the number of training batches to process sequentially in a single execution: increase the number until you reach full accelerator utilization, which, for a TPU, is a lot of flops)
-
So you are saying that a TPU speedup with minimal code changes is possible in tf.Keras?
-
This is the setup needed to start running on TPU on Colab: https://keras.io/examples/vision/xray_classification_with_tpus/#introduction--setup. Besides this, Keras provides you with a way to reach full device utilization (`steps_per_execution`), which is a necessary condition for TPU training to provide a large speedup.
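A sketch of that setup, following the linked Keras example (on Colab the resolver detects the TPU runtime with no arguments):

```python
import tensorflow as tf

# Detect and initialize the TPU runtime attached to this Colab session.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built and compiled under this scope is replicated
# across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        steps_per_execution=64,
    )
```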
-
Has anyone trained Tacotron on a TPU?
-
Colab is great! I am just a hobbyist, but it's been a great platform. I must mention how great your book is, as well.
-
I tried TPU on Colab just a few weeks ago. One “downside” I found is that it seemed to work best with TFRecords, but somehow they need to come from Google Cloud Storage? Is this true, or am I mistaken?
-
It's true. Not just TFRecords: even the model checkpoints, if any, have to be loaded from GCS.
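A minimal sketch of what this looks like in practice; the bucket name and file layout are hypothetical:

```python
import tensorflow as tf

# Assumption: a GCS bucket you own; TPU workers can only read/write gs:// paths.
GCS_BUCKET = "gs://my-bucket"

# TFRecord shards must live on GCS so the TPU host can stream them.
filenames = tf.io.gfile.glob(f"{GCS_BUCKET}/train/*.tfrec")
dataset = tf.data.TFRecordDataset(filenames)

# Checkpoints likewise must target a gs:// path.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath=f"{GCS_BUCKET}/checkpoints/ckpt-{{epoch:02d}}",
    save_weights_only=True,  # TF checkpoint format, written directly to GCS
)
```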
-
That's cool. However, it didn't work with Keras image generators the last time I tried.
-
You have to use a tf.data Dataset, like this: https://keras.io/examples/vision/image_classification_from_scratch/#generate-a-dataset
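A sketch along the lines of the linked example, replacing `ImageDataGenerator.flow_from_directory` with a tf.data pipeline (the directory path and image size are placeholders):

```python
import tensorflow as tf

# Builds a tf.data.Dataset from a directory of one-subfolder-per-class images.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "images/train",          # assumption: your local directory layout
    image_size=(180, 180),
    batch_size=32,
)

# tf.data pipelines can be prefetched, which helps keep the TPU fed.
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)
```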
-
Gonna keep this one bookmarked for a bit. Thanks.