is there also an easy way to do small variations? e.g., run an LSTM over the two sides of each word, without the word itself?
-
not entirely sure what you mean. Please give an example?
-
z_i = concat(LSTM(x_1, ..., x_{i-1}), LSTM(x_n, ..., x_{i+1}))
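A minimal sketch of that formula in tf.keras (the layer sizes and the shift-by-one trick are my own illustration, not from the thread): run a forward and a backward LSTM over the whole sequence, then shift each output stream one step so that z_i is built only from the context of x_i, never from x_i itself.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Hypothetical sizes, for illustration only.
    vocab_size, emb_dim, hidden, seq_len = 10000, 128, 64, 50

    tokens = layers.Input(shape=(seq_len,), dtype="int32")
    emb = layers.Embedding(vocab_size, emb_dim)(tokens)

    # Forward LSTM: output at position i summarizes x_1..x_i.
    fwd = layers.LSTM(hidden, return_sequences=True)(emb)
    # Backward LSTM: after re-reversing its output, position i summarizes x_n..x_i.
    bwd = layers.LSTM(hidden, return_sequences=True, go_backwards=True)(emb)
    bwd = layers.Lambda(lambda t: tf.reverse(t, axis=[1]))(bwd)

    # Shift by one step, zero-padding the edges, so position i sees only its context:
    # fwd_ctx[i] = fwd[i-1] (summary of x_1..x_{i-1}),
    # bwd_ctx[i] = bwd[i+1] (summary of x_n..x_{i+1}).
    fwd_ctx = layers.Lambda(lambda t: tf.pad(t, [[0, 0], [1, 0], [0, 0]])[:, :-1, :])(fwd)
    bwd_ctx = layers.Lambda(lambda t: tf.pad(t, [[0, 0], [0, 1], [0, 0]])[:, 1:, :])(bwd)

    # z_i = concat(LSTM(x_1, ..., x_{i-1}), LSTM(x_n, ..., x_{i+1}))
    z = layers.Concatenate()([fwd_ctx, bwd_ctx])
    context_model = tf.keras.Model(tokens, z)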
-
you're talking about variable-length sequences?
-
variable length I take for granted. I meant representing the context of the word without the word itself (or similar tweaks)
-
also there is some structure in the joint distribution of the left and right contexts that should be modeled
-
yes, presumably a multi-layer perceptron on top of the concat should capture this to some extent.
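For instance (continuing the sketch above, with arbitrary layer widths of my choosing), two Dense layers on top of z mix the left- and right-context features nonlinearly; since Keras Dense acts on the last axis, this applies independently at each position:

    from tensorflow.keras import layers

    # assuming `z` is the concatenated context tensor from the earlier sketch
    h = layers.Dense(128, activation="relu")(z)  # mixes left- and right-context features
    h = layers.Dense(128, activation="relu")(h)  # adds capacity for higher-order interactions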
-
if you want to use Keras, an easy way is to do the complicated splitting in NumPy and use a multi-input model.
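One way that route could look (the function and model below are my own sketch, not @fchollet's exact recipe): slice out each target position's left context and reversed right context in NumPy, pad them, and feed them to a two-input Keras model with a shared embedding.

    import tensorflow as tf
    from tensorflow.keras import layers
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    def split_contexts(seqs, i, maxlen):
        """Pure NumPy/Python preprocessing: for target position i, return the
        padded left context x_1..x_{i-1} and reversed right context x_n..x_{i+1}."""
        left = [s[:i] for s in seqs]
        right = [s[i + 1:][::-1] for s in seqs]
        return pad_sequences(left, maxlen=maxlen), pad_sequences(right, maxlen=maxlen)

    maxlen, vocab_size, emb_dim, hidden = 50, 10000, 128, 64

    left_in = layers.Input(shape=(maxlen,), dtype="int32")
    right_in = layers.Input(shape=(maxlen,), dtype="int32")
    # one Embedding shared by both inputs keeps the two contexts in the same
    # vector space; assumes word indices start at 1, leaving 0 as the pad/mask value
    embed = layers.Embedding(vocab_size, emb_dim, mask_zero=True)

    z = layers.Concatenate()([
        layers.LSTM(hidden)(embed(left_in)),    # encodes the left context
        layers.LSTM(hidden)(embed(right_in)),   # encodes the right context
    ])
    out = layers.Dense(vocab_size, activation="softmax")(z)  # e.g. predict the held-out word
    model = tf.keras.Model([left_in, right_in], out)

Usage would look like left_x, right_x = split_contexts(corpus_ids, i=7, maxlen=maxlen) followed by model.fit([left_x, right_x], targets), where corpus_ids and targets are hypothetical names for your index sequences and labels.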
-
Replying to @fchollet
I like the Keras API for the "standardized" use cases. Trying to think how something like it can be extended to less standard ones.
-
will it keep track of the gradients if I switch to NumPy in the middle? How?
-
no, this would just be a preprocessing step outside the computation graph, so no gradients flow through it; none are needed, since the splitting has no trainable parameters. Depends on what you want to achieve...