Interesting result - VGG 2x faster on @pytorch vs keras + @tensorflow https://medium.com/@vishnuvig/transfer-learning-using-pytorch-part-2-9c5b18e15551
Seems to mainly be due to preprocessing time
-
-
The above performance comparison does not make sense, unfortunately: it makes input preprocessing *blocking* instead of running it in parallel with the model.
-
It's effectively not a runtime comparison but a preprocessing pipeline comparison: an inefficiently configured one vs. an efficiently configured one.
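The blocking-vs-parallel distinction can be illustrated without any deep learning framework at all: in the blocking setup each training step waits for its batch to be preprocessed, while in the parallel setup a background thread keeps a small buffer of ready batches. A minimal pure-Python sketch, where `preprocess` and `model_step` are hypothetical stand-ins that just sleep:

```python
import queue
import threading
import time

def preprocess(i):
    # Stand-in for per-batch input preprocessing (e.g. image decoding).
    time.sleep(0.01)
    return i

def model_step(batch):
    # Stand-in for one training step on the accelerator.
    time.sleep(0.01)
    return batch

N = 20

# Blocking pipeline: preprocess, then train, strictly one after the other.
start = time.time()
for i in range(N):
    model_step(preprocess(i))
blocking_time = time.time() - start

# Parallel pipeline: a background thread preprocesses ahead of the model,
# filling a bounded queue that the training loop consumes.
q = queue.Queue(maxsize=4)

def producer():
    for i in range(N):
        q.put(preprocess(i))
    q.put(None)  # sentinel: no more batches

start = time.time()
threading.Thread(target=producer, daemon=True).start()
while (batch := q.get()) is not None:
    model_step(batch)
parallel_time = time.time() - start

print(blocking_time > parallel_time)  # overlap hides most preprocessing cost
```

With equal per-batch costs, the blocking loop pays preprocessing and compute sequentially (~2x the step time per batch), while the parallel loop overlaps them, which is why benchmarking a blocking pipeline mostly measures the preprocessing configuration rather than the framework.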
End of conversation
New conversation -
-
-
Sounds great. Is there sample code or blog post showing this technique?
-
You could do something similar to the first example here, but replacing the placeholders with TF queues: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
End of conversation
New conversation -
-
-
A benchmark using feed_dict? o_O
-
Having said that, using queues needs to be made a LOT simpler. Almost no examples I see on GitHub use them; it's always feed_dict.
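Part of why feed_dict dominates the examples is that it is a one-liner, while queues require setup. The pattern queues provide can be approximated generically: a sketch of a `prefetch` wrapper (the name is hypothetical, not a TF or Keras API) that moves a slow data generator into a background thread behind a bounded queue:

```python
import queue
import threading

def prefetch(generator, buffer_size=4):
    """Wrap a (possibly slow) data generator so items are produced in a
    background thread, analogous to feeding a model from a queue instead
    of a blocking feed_dict-style loop."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def worker():
        for item in generator:
            q.put(item)
        q.put(sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item

# Usage: batches are preprocessed ahead of consumption, in order.
batches = prefetch((x * x for x in range(5)))
print(list(batches))  # → [0, 1, 4, 9, 16]
```

The bounded `buffer_size` matters: it lets the producer run ahead of the consumer without preprocessing the whole dataset into memory, which is the same trade-off TF queues (and later `tf.data` prefetching) expose.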
-