How far behind is the Keras implementation in TensorFlow compared with the official code? Or is it even the same thing? I know I should just check, but just in case someone has a quick answer.
-
It's pretty close; at worst tf.keras may be 1-2 months behind. The feature sets are somewhat different, though.
End of conversation
New conversation
Good news, obvs, but I also can't help wondering about breaking changes. Anything to be wary of?
-
There will be full release notes. I don't expect any breaking changes.
New conversation
Thanks for everything, François.

-
Any posts you’re aware of on contributing new optimizers?
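(For context, a new optimizer is normally contributed by subclassing the Keras optimizer base class. The sketch below only illustrates the general shape of such a class against the TF 2.x OptimizerV2 API; the exact base class and method names have changed across Keras versions, and PlainSGD with its single hyperparameter is made up for the example.)

import tensorflow as tf

class PlainSGD(tf.keras.optimizers.Optimizer):
    """Illustrative optimizer: plain w <- w - lr * grad, no momentum."""

    def __init__(self, learning_rate=0.01, name="PlainSGD", **kwargs):
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", learning_rate)

    def _resource_apply_dense(self, grad, var, apply_state=None):
        # Dense update: subtract the scaled gradient from the variable.
        lr = self._get_hyper("learning_rate", var.dtype)
        return var.assign_sub(lr * grad)

    def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
        # Sparse update: scatter the scaled gradient into the variable.
        lr = self._get_hyper("learning_rate", var.dtype)
        return self._resource_scatter_add(var, indices, -lr * grad)

    def get_config(self):
        config = super().get_config()
        config["learning_rate"] = self._serialize_hyperparameter("learning_rate")
        return config

Once defined, it can be passed to model.compile() like any built-in optimizer; an actual contribution to the library would also need tests and documentation.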
-
Nice work. A quick question, though: I've read your ten-minute seq2seq post and AttentionWithContext, and I got stuck when trying to merge the attention output layer (2D = (None, units)) with the decoder input (3D = (None, None, num_decoder_tokens)). Any hint?
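(One common way around that shape mismatch is to broadcast the 2D context vector across the decoder timesteps and then concatenate on the feature axis. The sketch below is only an illustration, not the approach from the original posts: it assumes a fixed decoder length and made-up values for units, num_decoder_tokens, and max_decoder_seq_length, and uses RepeatVector; with a fully dynamic decoder length you would tile inside a Lambda layer instead.)

from tensorflow import keras
from tensorflow.keras import layers

# Illustrative sizes only; use the values from your own model.
units = 256                  # width of the 2D attention/context vector
num_decoder_tokens = 91      # decoder vocabulary size
max_decoder_seq_length = 20  # RepeatVector needs a fixed timestep count

context = keras.Input(shape=(units,))                       # (None, units)
decoder_inputs = keras.Input(
    shape=(max_decoder_seq_length, num_decoder_tokens))     # (None, T, num_decoder_tokens)

# Repeat the 2D context vector once per decoder timestep, then concatenate
# it with the decoder inputs along the last (feature) axis.
repeated_context = layers.RepeatVector(max_decoder_seq_length)(context)   # (None, T, units)
merged = layers.Concatenate(axis=-1)(
    [decoder_inputs, repeated_context])                     # (None, T, num_decoder_tokens + units)

decoder_outputs = layers.LSTM(units, return_sequences=True)(merged)
model = keras.Model([context, decoder_inputs], decoder_outputs)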
-
Awesome! Excited to see what's new.
