If you're using Colab and training your model on GPU feels slow, switch to the TPU runtime and tune the "steps_per_execution" parameter in compile(). Seeing a 5-10x speedup is pretty common.
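For reference, a minimal sketch of what that looks like in code. The model architecture, optimizer, and the value 50 are arbitrary placeholders; `steps_per_execution` is the compile() argument the tweet refers to (available in TF 2.4+).

```python
import tensorflow as tf

# Toy model; any Keras model works the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    # Run 50 training batches per single call into the accelerator,
    # instead of one host round trip per batch.
    steps_per_execution=50,
)
```

Then train as usual with model.fit(); the knob changes how batches are dispatched, not the math of training.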
("steps_per_execution" sets the number of training batches to process sequentially in a single execution; increase it until you reach full accelerator utilization, which for a TPU means a lot of FLOPS)
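The intuition behind why this helps can be sketched with a toy cost model (my illustration, not Keras internals): each execution pays a fixed host-to-device dispatch overhead, so packing more batches into one execution amortizes that overhead. The overhead and compute numbers below are made up for illustration.

```python
# Toy cost model: hypothetical per-execution dispatch overhead
# plus per-batch on-device compute time.
DISPATCH_OVERHEAD_MS = 5.0   # assumed fixed cost per host->device execution
PER_BATCH_COMPUTE_MS = 1.0   # assumed on-device time per training batch

def epoch_time_ms(num_batches, steps_per_execution):
    """Estimated wall time for one epoch under this toy model."""
    # Ceiling division: number of host->device round trips per epoch.
    executions = -(-num_batches // steps_per_execution)
    return executions * DISPATCH_OVERHEAD_MS + num_batches * PER_BATCH_COMPUTE_MS

for s in (1, 10, 100):
    print(f"steps_per_execution={s:>3}: ~{epoch_time_ms(1000, s):.0f} ms/epoch")
```

With these made-up numbers, 1000 batches cost ~6000 ms at steps_per_execution=1 but ~1050 ms at 100, a ~6x speedup purely from amortizing dispatch overhead, which is the same order as the 5-10x claimed above.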
Replying to @fchollet
Wouldn't it be possible to have an 'auto' / self tuning mode for this?
Replying to @ogrisel
Yes, we are working on it (it will also benefit GPU users)