Interesting result - VGG 2x faster on @pytorch vs Keras + @tensorflow https://medium.com/@vishnuvig/transfer-learning-using-pytorch-part-2-9c5b18e15551
Seems to mainly be due to preprocessing time
@vishnuvig I explored multiprocessing in Keras; it's possible to get a 2-4x improvement: https://github.com/stratospark/keras-multiprocess-image-data-generator
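For context, the kind of parallel preprocessing referred to here can be sketched roughly as follows. This is not the linked repo's code, just a minimal illustration using the built-in workers/use_multiprocessing arguments that Keras 2's fit_generator exposes; the directory, class count, batch size, and step count are placeholders:

```python
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16

NUM_CLASSES = 10          # placeholder
BATCH_SIZE = 32
STEPS_PER_EPOCH = 100     # placeholder; normally samples // batch_size

datagen = ImageDataGenerator(rescale=1. / 255)
train_gen = datagen.flow_from_directory(
    'data/train',                      # hypothetical directory
    target_size=(224, 224),
    batch_size=BATCH_SIZE)

model = VGG16(weights=None, classes=NUM_CLASSES)
model.compile(optimizer='sgd', loss='categorical_crossentropy')

# Several worker processes run the image preprocessing in parallel,
# so the GPU is not starved by a single Python thread.
model.fit_generator(
    train_gen,
    steps_per_epoch=STEPS_PER_EPOCH,
    epochs=10,
    workers=4,
    use_multiprocessing=True)
```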
Replying to @stratospark @jeremyphoward
Or using TF's input queues, which gives me around a 3x speedup. But it's not as easy to use as the Keras ImageDataGenerator.
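A rough sketch of the queue-based pipeline being described, using the old TF 1.x tf.train queue runners; the file list, labels, and sizes are placeholders rather than the poster's actual setup:

```python
import tensorflow as tf

# hypothetical file list and integer labels
filenames = tf.constant(['img_0.jpg', 'img_1.jpg'])
labels = tf.constant([0, 1])

# a queue that cycles over (filename, label) pairs
filename, label = tf.train.slice_input_producer([filenames, labels], shuffle=True)

# decoding and resizing happen inside the graph, on queue-runner threads,
# instead of in the Python feed loop
image = tf.image.decode_jpeg(tf.read_file(filename), channels=3)
image = tf.image.resize_images(image, [224, 224])

image_batch, label_batch = tf.train.batch(
    [image, label], batch_size=32, num_threads=4, capacity=256)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    imgs, lbls = sess.run([image_batch, label_batch])  # feed the training step from here
    coord.request_stop()
    coord.join(threads)
```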
Replying to @fkratzert @stratospark
For the model where I compared the results, I did not use any multiprocessing capability. The 2x PyTorch speedup is without multiprocessing.
Replying to @vishnuvig @fkratzert
I tried this on my system, upgraded to Keras 2. PyTorch: 80s/epoch, Keras 2: 93s/epoch, with 1 worker each and without flip/normalize for a fairer comparison.
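The PyTorch side of a comparison like this might look roughly as follows: a minimal sketch with one loader worker and no flip/normalize transforms, using placeholder paths and hyperparameters rather than the poster's actual script (written in newer PyTorch style, without the Variable wrapping the 2017-era API required):

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

# minimal transform: resize and crop only, no RandomHorizontalFlip / Normalize
transform = transforms.Compose([
    transforms.Resize(224),      # older torchvision versions call this Scale
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder('data/train', transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=1)

model = models.vgg16(num_classes=len(dataset.classes)).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# one epoch of the timing loop
for images, targets in loader:
    images, targets = images.cuda(), targets.cuda()
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```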
Replying to @stratospark @vishnuvig
You can get even better performance by removing the overhead of feed_dict. Simply use a TF queue and build your Keras model on top of that.
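A minimal sketch of the pattern being suggested, assuming a newer Keras 2 release that supports Input(tensor=...) and target_tensors. Random tensors stand in for the image_batch/label_batch that a queue like the one sketched earlier would produce, just to keep the snippet self-contained:

```python
import tensorflow as tf
from keras.layers import Input, Flatten, Dense
from keras.models import Model

NUM_CLASSES = 10  # placeholder

# stand-ins for the batched tensors coming out of a TF queue
image_batch = tf.random_uniform([32, 224, 224, 3])
label_batch = tf.one_hot(
    tf.random_uniform([32], maxval=NUM_CLASSES, dtype=tf.int32), NUM_CLASSES)

# build the Keras graph directly on the input tensor: no placeholder, no feed_dict
x = Input(tensor=image_batch)
h = Flatten()(x)
preds = Dense(NUM_CLASSES, activation='softmax')(h)

model = Model(inputs=x, outputs=preds)
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              target_tensors=[label_batch])

# fit() now pulls data from the graph itself; with a real queue you would also
# start the queue runners before training
model.fit(epochs=1, steps_per_epoch=100)
```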
Replying to @mat_kelcey @fchollet
Having said that, using queues needs to be made a LOT simpler; almost no examples I see on GitHub use them, it's always feed_dict.
Agreed... also, we should make it easy to swap a placeholder with a queue.