How many dimensions does our world representation have?
actually, while you will see me rant about the limitations of language, I think there's not that much of a difference
wearing the boring hat for a second: the dimensionality of word2vec comes from optimizing for a particular encoding architecture
I'd actually be interested in the details, the word2vec idea just seemed like a really convenient but sloppy shortcut
it's not sloppy, it's actually quite clean; it just seems unlikely to be analogous to the way humans actually encode words
word2vec just tries to predict a word from its immediately surrounding words; this representation makes words small enough to feed into an LSTM
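A minimal numpy sketch of that idea (the CBOW variant: average the context-word vectors and predict the center word). The toy corpus, window size, and dimensionality here are made up for illustration, and real word2vec adds tricks like negative sampling that this sketch omits.

```python
import numpy as np

# Toy CBOW: predict each word from its surrounding words.
# Corpus, window, and embedding size are arbitrary illustration choices.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # input (context) embeddings
W_out = rng.normal(scale=0.1, size=(D, V))  # output (prediction) weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(200):
    for pos, word in enumerate(corpus):
        # indices of the surrounding words within the window
        ctx = [idx[corpus[j]]
               for j in range(max(0, pos - window),
                              min(len(corpus), pos + window + 1))
               if j != pos]
        h = W_in[ctx].mean(axis=0)      # average the context vectors
        p = softmax(h @ W_out)          # predicted distribution over vocab
        err = p.copy()
        err[idx[word]] -= 1.0           # cross-entropy gradient at the output
        W_out -= lr * np.outer(h, err)
        W_in[ctx] -= lr * (W_out @ err) / len(ctx)

# After training, each row of W_in is that word's dense vector.
vec = W_in[idx["cat"]]
```

The point of the thread: the vectors' dimensionality (D here) is whatever the architecture was sized to, not anything intrinsic to the words.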
I have a few idea generation techniques that rely on subtracting one vector and adding another
Quote Tweet
Replying to @allgebrah
it is entertaining to force one process into the other's structures: mushroom houses, crystallizing cities, mycelium crystals.
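The "subtract one vector, add another" trick can be sketched with toy vectors. These are hand-made, not learned, and the dimension labels are invented for illustration; real word2vec embeddings exhibit this behavior only approximately.

```python
import numpy as np

# Hand-built toy vectors; dimensions (made up): [royalty, maleness, size]
vecs = {
    "king":  np.array([0.9,  0.8, 0.6]),
    "queen": np.array([0.9, -0.8, 0.5]),
    "man":   np.array([0.1,  0.8, 0.5]),
    "woman": np.array([0.1, -0.8, 0.4]),
    "boy":   np.array([0.0,  0.8, 0.2]),
    "girl":  np.array([0.0, -0.8, 0.2]),
}

def nearest(target, exclude):
    # cosine similarity against every stored vector, skipping the inputs
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# king - man + woman lands near queen
analogy = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # queen
```

The idea-generation techniques in the quoted tweets are this same move done by hand: swap out one attribute direction and swap in another.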
Replying to
also this one
Quote Tweet
Replying to @allgebrah
transpose X from environment Y to environment Z
(via twitter.com/ctrlcreep/stat)