I think it would be pretty cool to have a transformer/(masked) self-attention Keras layer to go alongside the recurrent models! I don't know if anyone has made one for the general Keras backend? Would be neat to merge into the main codebase, if so!
-
Looks like current work is on standardizing basic attention layers: https://github.com/tensorflow/community/blob/master/rfcs/20190115-dense-attention.md
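For anyone curious what such a layer would compute: below is a minimal NumPy sketch of masked scaled dot-product self-attention, the core of the transformer layer being asked about. This is an illustration of the math only, not the RFC's API; it simplifies by using the input directly as queries, keys, and values (a real layer would add learned projection weights and multiple heads).

```python
import numpy as np

def masked_self_attention(x, mask=None):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model) array.
    mask: optional (seq_len, seq_len) boolean array where True marks
    positions to hide, e.g. a causal mask so position i cannot attend
    to any j > i.
    """
    d_model = x.shape[-1]
    # Attention scores: similarity of every position with every other,
    # scaled by sqrt(d_model) to keep the softmax well-conditioned.
    scores = x @ x.T / np.sqrt(d_model)            # (seq_len, seq_len)
    if mask is not None:
        scores = np.where(mask, -1e9, scores)      # hide masked positions
    # Row-wise softmax (numerically stabilized by subtracting the max).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of all (visible) positions.
    return weights @ x                             # (seq_len, d_model)

# Causal mask: True strictly above the diagonal (future positions).
seq_len = 4
causal = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
out = masked_self_attention(np.random.randn(seq_len, 8), causal)
```

With the causal mask, position 0 can only attend to itself, so its output equals its input row; later positions mix in earlier ones.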
-
Can we have a tensorflow-2.0a2 on PyPI?
-
TensorFlow 2.0 made working with complex models easy.
-
What kind of project are you working on?
-
Fancy names!