Usual reminder: when I've been saying for the past 5+ years that deep learning is interpolative, I don't mean it does linear interpolation in the original encoding space (which would be useless). It does interpolation on a low-dimensional manifold embedded in the encoding space.
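To make that distinction concrete, here is a minimal numpy toy (my own sketch, not code from the thread): data lying on a 1-D manifold (a circle) linearly embedded in a 50-dimensional "encoding" space. Linearly interpolating between two data points in the ambient space falls off the manifold; interpolating in the manifold's intrinsic coordinate (the angle) stays on it. All names and dimensions here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Random linear embedding of the unit circle into a 50-dim "encoding" space.
# (Its image is an ellipse, but still a 1-D manifold.)
ambient_dim = 50
basis = rng.standard_normal((2, ambient_dim))

def embed(theta):
    """Map an intrinsic coordinate (angle) to a point on the embedded manifold."""
    circle = np.stack([np.cos(theta), np.sin(theta)], axis=-1)  # (..., 2)
    return circle @ basis                                       # (..., ambient_dim)

a, b = 0.0, np.pi / 2          # two data points, as intrinsic coordinates
x_a, x_b = embed(a), embed(b)  # their high-dim encodings

t = 0.5
ambient_mid = (1 - t) * x_a + t * x_b      # naive interpolation in encoding space
manifold_mid = embed((1 - t) * a + t * b)  # interpolation along the manifold

def dist_to_manifold(x):
    """Distance from x to the nearest manifold point, by dense sampling."""
    thetas = np.linspace(0.0, 2 * np.pi, 10_000)
    return np.linalg.norm(embed(thetas) - x, axis=-1).min()

print("ambient midpoint, distance to manifold: ", dist_to_manifold(ambient_mid))
print("manifold midpoint, distance to manifold:", dist_to_manifold(manifold_mid))
# The ambient midpoint lands off the manifold (the chord cuts inside the circle);
# the manifold midpoint stays on it, up to sampling resolution.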
-
Replying to @fchollet
I've gone down a few rabbit holes recently about the connections between language embeddings and topology, but it would be fascinating to see whether task- and data-independent forms emerge in the low-dimensional manifold where the interpolation takes place.
-
Makes sense. Thinking in topology always seems to help me reason about ML. It makes me think of how many languages evolved with similar grammars, hinting at an innate structure of language in our brains.
-
Replying to @erikorndahl @fchollet
YES! I forget the book, but about a year ago I read something about a theory of geometry and gesture shaping language, and was thinking very similarly re: applications in ML.
-
Maybe "Mind in Motion: How Action Shapes Thought"?