I tried this on my system, upgraded to Keras 2. PyTorch: 80s/epoch. Keras 2: 93s/epoch. 1 worker each, w/o flip/normalize for a fairer comparison
-
-
Replying to @stratospark @vishnuvig
You can get even better performance by removing the overhead of feed_dict. Simply use a TF queue and build your Keras model on top of that.
-
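The TF queue API itself is more involved than fits in a tweet, but the underlying producer-consumer idea can be sketched with the Python stdlib alone (all names and data below are illustrative, not TensorFlow calls): a background thread keeps a bounded queue full while the training loop just pops ready batches, instead of synchronously feeding each one the way `feed_dict` does.

```python
import queue
import threading

def input_producer(batches, q):
    # Fill the queue from a background thread, like a TF input queue
    # decoupled from the training loop (no per-step feed_dict).
    for b in batches:
        q.put(b)
    q.put(None)  # sentinel: no more data

def train(q):
    # Consumer: pops ready batches instead of feeding them each step.
    total = 0
    while True:
        batch = q.get()
        if batch is None:
            break
        total += sum(batch)  # stand-in for a training step
    return total

q = queue.Queue(maxsize=4)  # bounded, so the producer gets backpressure
batches = [[1, 2], [3, 4], [5, 6]]
t = threading.Thread(target=input_producer, args=(batches, q))
t.start()
result = train(q)
t.join()
print(result)  # 21
```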
-
Replying to @mat_kelcey @fchollet
having said that, using queues needs to be made a LOT simpler; almost no examples I see on GitHub use them, it's always feed_dict
-
Replying to @mat_kelcey @fchollet
hell, probably all MY tf code on GitHub uses feed_dict :)
-
Replying to @mat_kelcey @fchollet
yeah... can't really blame 'em. all of my code uses feed_dicts too. :-)
-
Replying to @hardmaru @mat_kelcey
The overhead of feed_dict is small. However, making input preprocessing blocking instead of having it happen in parallel to execution ->
-
I am using keras.preprocessing.image; is there a way I can use parallel execution in it, or should I use TensorFlow directly?
-
Replying to @vishnuvig @hardmaru
Just use `pickle_safe=True` and `workers=4` (for instance) in your call to `fit_generator`. That is all.
-
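For context, `fit_generator` consumes a Python generator that yields batches indefinitely; with `pickle_safe=True`, Keras runs the input pipeline under multiprocessing, so the generator should be safe to copy into worker processes. A minimal sketch of such a generator (data and batch size are illustrative):

```python
import itertools

def batch_generator(samples, batch_size):
    # Loop over the data forever, as fit_generator expects;
    # each iteration yields one batch.
    while True:
        for i in range(0, len(samples), batch_size):
            yield samples[i:i + batch_size]

gen = batch_generator([1, 2, 3, 4, 5], batch_size=2)
# One pass over the data = 3 batches.
first_epoch = list(itertools.islice(gen, 3))
print(first_epoch)  # [[1, 2], [3, 4], [5]]
```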
That's cool, will try it and update the blog
-
Better to use TF input queues, though
-
-
By using `pickle_safe=True` and `nb_worker=6` the performance improved to 1 min 40 sec. Have updated the blog accordingly.