New example on http://keras.io : a neural machine translation example with a seq2seq Transformer. Starts from raw data and includes data preparation, building the Transformer (from scratch!), training it, and inference. Less than 200 lines total. https://keras.io/examples/nlp/neural_machine_translation_with_transformer/
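The Transformer that example builds is centered on scaled dot-product attention. As a minimal sketch of that core operation (this is the underlying math, not the keras.io example's own code), assuming single-head attention and a causal mask on the decoder side:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: (seq_len, d_k) arrays; mask: (seq_len, seq_len) of 0/1."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                # query/key similarity, scaled
    if mask is not None:
        scores = np.where(mask == 0, -1e9, scores)  # block masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ v, weights

# Causal mask: position i may only attend to positions <= i (decoder self-attention).
seq_len, d_k = 4, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_k))
mask = np.tril(np.ones((seq_len, seq_len)))
out, w = scaled_dot_product_attention(x, x, x, mask=mask)
```

Each attention row sums to 1, and the upper triangle of `w` is zero, i.e. no position attends to the future.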
-
Don't you think using a pre-made model with transfer learning is much better for the environment?
-
Thanks for sharing.
-
Does it support caching to make inference faster?
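For context, "caching" in autoregressive decoding usually means storing the keys and values of already-generated tokens so each step only processes the newest token, instead of re-running attention over the whole prefix. A toy sketch of the idea, with the per-token key/value projections reduced to identity for brevity (all names here are hypothetical, not from the keras.io example):

```python
import numpy as np

def attend(q, k, v):
    """Single-query attention over the cached keys/values."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ v

d = 8
rng = np.random.default_rng(1)
cache_k, cache_v = [], []   # grows by one entry per generated token

outputs = []
for step in range(5):
    x = rng.normal(size=(d,))   # embedding of the newest token (dummy data)
    # Without a cache you would recompute k/v for ALL previous tokens here;
    # with a cache, each step only adds the newest token's projections.
    cache_k.append(x)
    cache_v.append(x)
    outputs.append(attend(x, np.stack(cache_k), np.stack(cache_v)))
```

Per step this does O(current length) work instead of O(length squared), which is where the inference speedup comes from.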
-
Hi François, just wanted to ask: why do you think the
@huggingface implementation of model.generate is so much slower than in @PyTorch? I am an active user of @TensorFlow, but no one has actually addressed this. Even the implementation you had doesn't support caching, if I am not wrong. -
Hey! TensorFlow maintainer at Hugging Face here. That's not
@fchollet's fault, it's mine! Can you tell me which model you were using and what the speeds were like with TF and PT, so I can try to reproduce the issue and file a ticket?
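A minimal sketch of the kind of repro that helps with such a report: a small wall-clock harness around the two generation calls. The `tf_generate`/`pt_generate` functions below are hypothetical stand-ins (here just sleeps, so the snippet runs anywhere); in a real report they would wrap the actual `model.generate(...)` calls of the TF and PyTorch models.

```python
import time

def benchmark(fn, n_runs=3):
    """Return the best wall-clock time of n_runs calls to fn."""
    best = float("inf")
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

# Hypothetical stand-ins; with transformers installed these would call e.g.
# tf_model.generate(...) and pt_model.generate(...) on identical inputs.
def tf_generate():
    time.sleep(0.01)

def pt_generate():
    time.sleep(0.01)

tf_time = benchmark(tf_generate)
pt_time = benchmark(pt_generate)
```

Reporting the best of several runs (rather than a single run) reduces noise from warm-up and tracing overhead, which matters especially for TF graph compilation.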