Of course, I should point out it's not 50 lines because Keras has some kind of built-in solution for text generation (it doesn't). It's 50 lines because Keras makes it easy to implement anything. It only uses generic features.
It uses a utility to read text files, a text vectorization layer (useful for any NLP), the LSTM layer, the functional API, the callbacks infrastructure, and the default training loop.
All of the problem-specific logic fits in 50 lines because language models are conceptually simple. A simple API means making conceptually simple things easy to implement.
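A minimal sketch of how those generic pieces could fit together. This is an illustration, not the author's exact 50 lines; the corpus directory, vocabulary size, and layer sizes are assumptions:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len = 15000, 50  # illustrative hyperparameters

# Utility to read text files: a tf.data.Dataset of raw string batches.
ds = keras.utils.text_dataset_from_directory(
    "corpus/", labels=None, batch_size=64)

# Text vectorization layer: strings -> integer token sequences.
vectorize_layer = layers.TextVectorization(
    max_tokens=vocab_size,
    output_mode="int",
    output_sequence_length=seq_len + 1)
vectorize_layer.adapt(ds)

def split_input_target(batch):
    # Next-word prediction: inputs are tokens[:-1], targets are tokens[1:].
    tokens = vectorize_layer(batch)
    return tokens[:, :-1], tokens[:, 1:]

lm_ds = ds.map(split_input_target)

# Functional API: embedding -> LSTM -> per-step softmax over the vocabulary.
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(vocab_size, 256)(inputs)
x = layers.LSTM(256, return_sequences=True)(x)
outputs = layers.Dense(vocab_size, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Callbacks infrastructure + the default training loop.
model.fit(lm_ds, epochs=10,
          callbacks=[keras.callbacks.ModelCheckpoint("lm.keras")])
```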
New conversation
This is nice and I'm working on a very similar demo. Question: I want to replace the vectorize_layer with a word embedding from TFHub, do you think that could be a problem?
That should work fine
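A hedged sketch of the swap discussed above, assuming a public TFHub module such as nnlm-en-dim50 (the URL is an example, not a recommendation from the thread). One caveat worth noting: this particular module embeds a whole string into a single vector, which suits a classification-style head; a per-token language model would need token-level embeddings instead.

```python
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow import keras

# Pretrained TFHub text embedding in place of the TextVectorization front end.
embed = hub.KerasLayer(
    "https://tfhub.dev/google/nnlm-en-dim50/2",  # example public module
    dtype=tf.string, trainable=False)

inputs = keras.Input(shape=(), dtype=tf.string)   # raw strings in
x = embed(inputs)                                 # (batch, 50) vectors out
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```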
New conversation
It’s no GPT-3 but it’s very cool
Thank you for sharing, I’m studying RNNs and I found them more difficult than CNNs.
Nice, thank you. If I’m understanding correctly, it’s essentially generating summaries of the original reviews, except they’re all the same length?
It's just learning to predict the next word in a sentence given the previous words.
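Concretely, a hypothetical sampling loop for a next-word model like the one sketched earlier: feed the tokens generated so far, read the distribution at the last timestep, sample a word, append, repeat. `vectorize_layer` and `model` refer to that earlier sketch; the temperature knob is a common addition, not something stated in the thread:

```python
import numpy as np

index_to_word = vectorize_layer.get_vocabulary()  # token id -> word string

def generate(model, prompt_tokens, n_words=20, temperature=1.0):
    tokens = list(prompt_tokens)
    for _ in range(n_words):
        x = np.array([tokens])                      # (1, len) token ids
        probs = model.predict(x, verbose=0)[0, -1]  # next-word distribution
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits)
        probs /= probs.sum()                        # renormalize after scaling
        tokens.append(int(np.random.choice(len(probs), p=probs)))
    return " ".join(index_to_word[t] for t in tokens)
```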
New conversation
Contributing to the official Keras samples takes some effort, since they require both a script and a notebook. Simple, elegant Google Colab notebooks like this one are a lot more fun to build and study.