Thanks for this! If you wanted to detect anomalies in a multivariate time series, would seq2seq, a vanilla RNN, or an autoencoder be the best approach?
New conversation
Hi, any chance we'll get to hear about adding the attention mechanism? Cheers.
It is in progress; see: https://docs.google.com/document/d/1psFXnmMlSTg5JapgZKz26ag-zBu3ERrxkKoEzNpzl4w/edit?usp=sharing. There are also PRs for initial steps awaiting feedback: https://github.com/fchollet/keras/pull/7980
New conversation
Thank you! In inference, can we return the decoder state and feed it with the last token, to avoid predicting the same thing several times?
Yes, process one timestep at a time in a loop and reinject the previous state. That's a more efficient approach.
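A minimal sketch of that loop, in the spirit of the tutorial's character-level setup. It assumes the `encoder_model` and `decoder_model` built in the tutorial; `num_decoder_tokens`, `target_token_index`, and `reverse_target_index` are the tutorial's vocabulary size and lookup tables (the names here are illustrative):

```python
import numpy as np

def decode_sequence(input_seq, max_len=100):
    # Encode the input once and keep only the final LSTM states.
    states = encoder_model.predict(input_seq)

    # Seed the decoder with the start-of-sequence token ('\t' in the tutorial).
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    target_seq[0, 0, target_token_index['\t']] = 1.0

    decoded = []
    for _ in range(max_len):
        # Feed the previous token together with the previous states.
        output_tokens, h, c = decoder_model.predict([target_seq] + states)
        sampled_index = np.argmax(output_tokens[0, -1, :])
        char = reverse_target_index[sampled_index]
        if char == '\n':  # end-of-sequence token
            break
        decoded.append(char)

        # Reinject: the sampled token becomes the next input, and the
        # returned states replace the old ones.
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_index] = 1.0
        states = [h, c]

    return ''.join(decoded)
```

This way the encoder runs once per input, and each decoding step only pays for a single decoder timestep.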
New conversation
Perfect timing! I'm working on it now. Thank you very much!
New conversation
Is there a straightforward way to use hierarchical softmax/NCE/sampled softmax in native Keras to make language models scale?
If you have a TF implementation of hierarchical softmax, it would be a two-line change to put it in a Keras model. Happy to show how.
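To illustrate the kind of wiring meant here, a hedged sketch of one way to use TensorFlow's built-in `tf.nn.sampled_softmax_loss` inside a Keras model. The custom layer, its names, and all sizes are illustrative assumptions, not code from the tutorial; the TF loss samples a subset of classes so the full-vocabulary softmax is never computed during training:

```python
import tensorflow as tf
from keras import backend as K
from keras.engine.topology import Layer  # keras.layers.Layer in newer versions
from keras.layers import Input, Embedding, LSTM
from keras.models import Model

class SampledSoftmaxLoss(Layer):
    """Wraps tf.nn.sampled_softmax_loss; owns the output projection
    so its weights are trained along with the rest of the model."""
    def __init__(self, num_classes, num_sampled, **kwargs):
        super(SampledSoftmaxLoss, self).__init__(**kwargs)
        self.num_classes = num_classes
        self.num_sampled = num_sampled

    def build(self, input_shape):
        dim = input_shape[0][-1]
        self.proj_w = self.add_weight(name='proj_w',
                                      shape=(self.num_classes, dim),
                                      initializer='glorot_uniform')
        self.proj_b = self.add_weight(name='proj_b',
                                      shape=(self.num_classes,),
                                      initializer='zeros')
        super(SampledSoftmaxLoss, self).build(input_shape)

    def call(self, inputs):
        hidden, target = inputs
        # Loss estimated over `num_sampled` sampled classes instead of
        # the full vocabulary.
        losses = tf.nn.sampled_softmax_loss(
            weights=self.proj_w, biases=self.proj_b,
            labels=tf.cast(target, 'int64'), inputs=hidden,
            num_sampled=self.num_sampled, num_classes=self.num_classes)
        return K.expand_dims(losses, -1)

    def compute_output_shape(self, input_shape):
        return (input_shape[0][0], 1)

# Illustrative sizes.
vocab_size, embed_dim, hidden_dim = 50000, 128, 256

tokens = Input(shape=(None,), dtype='int32')  # context tokens
labels = Input(shape=(1,), dtype='int32')     # next-token targets

h = Embedding(vocab_size, embed_dim)(tokens)
h = LSTM(hidden_dim)(h)
loss_out = SampledSoftmaxLoss(vocab_size, num_sampled=64)([h, labels])

model = Model([tokens, labels], loss_out)
# The model's output already *is* the per-example loss, so the
# compiled "loss" just passes it through.
model.compile(optimizer='adam', loss=lambda y_true, y_pred: y_pred)
```

At inference time you would compute a full softmax with the same `proj_w`/`proj_b`, since the sampled loss is a training-time approximation only.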
New conversation
Great tutorial! There are many cases, though, where this approach falls short and we need attention; here's a canonical example: https://github.com/andhus/extkeras/tree/master/examples/attention
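For readers following the link, the flavor of what attention adds can be sketched in a few Keras lines: instead of compressing the whole source into one state vector, each decoder timestep scores the encoder outputs and takes a weighted sum as context. This is a hedged illustration of dot-product (Luong-style) attention, not the linked implementation; all dimensions are assumptions:

```python
from keras.layers import Input, LSTM, Dense, Dot, Activation, Concatenate
from keras.models import Model

# Illustrative dimensions.
num_encoder_tokens, num_decoder_tokens, latent_dim = 71, 93, 256

encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder_outputs, state_h, state_c = LSTM(
    latent_dim, return_sequences=True, return_state=True)(encoder_inputs)

decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_outputs = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])

# Dot-product attention: score every decoder timestep against every
# encoder timestep, normalize over encoder positions, and take the
# weighted sum of encoder outputs as a per-timestep context vector.
scores = Dot(axes=[2, 2])([decoder_outputs, encoder_outputs])  # (batch, T_dec, T_enc)
weights = Activation('softmax')(scores)
context = Dot(axes=[2, 1])([weights, encoder_outputs])         # (batch, T_dec, latent)

# Let the output layer see both the context and the decoder state.
combined = Concatenate()([context, decoder_outputs])
outputs = Dense(num_decoder_tokens, activation='softmax')(combined)

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```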
New conversation
Thanks so much for this tutorial! Could be really useful for teaching high-school students (@kaliouby).