I think the functional realization of the representation of meaning is not hard. The difficulties are still in finding ways to efficiently approximate more general classes of functions than we can with stochastic gradient descent.
Replying to @Plinz
Google translate vectors are in my opinion a step forward, but I have seen no true "functional realization of the representation of meaning". Is there one? If it's "not hard", then surely this problem should have been solved already?
Replying to @JustinStares
The conceptual manifold that the translate vectors operate on can be thought of as an address space to the actual mental representations: the predictive functions that generate the virtual dream world that you perceive as reality.
Replying to @Plinz
Agreed: the theory of 'word space'. But the vector values used by Google Translate are based on big data (written input). The vector values I use in my "virtual dream world" are different: they are multi-sensory. I'm not saying it's impossible, just that it hasn't been done yet.
Replying to @JustinStares
Because the conceptual manifold is tied to language and shared between speakers, its shape can be inferred from statistics over text. But to imagine what things look like, you need to infer their shape from patterns on your retina and skin.
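A toy sketch of the claim above, assuming nothing about Google's actual pipeline: the shape of a word space can be recovered purely from text statistics, here via a window-1 co-occurrence matrix factored with a truncated SVD. The four-sentence corpus and the rank of 2 are arbitrary choices for illustration.

```python
# Toy sketch: inferring word-vector geometry from text statistics alone.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased a dog",
    "a dog chased a cat",
]

# Build the vocabulary and a symmetric window-1 co-occurrence matrix.
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                C[idx[w], idx[sent[j]]] += 1

# Low-rank factorization: rows of U * S serve as crude word vectors.
U, S, _ = np.linalg.svd(C)
vecs = U[:, :2] * S[:2]

# "cat" and "dog" occur in identical contexts in this tiny corpus,
# so their co-occurrence rows -- and hence their vectors -- coincide:
# their relative position was inferred from text, with no sensory input.
print(vecs[idx["cat"]])
print(vecs[idx["dog"]])
```

The same machinery applied to patterns on a retina instead of text would, of course, yield a different space over a different kind of data, which is the distinction drawn later in the thread.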
Replying to @Plinz
Can its shape really be inferred from text? When I read the words "your first beer" I see an image of a fresh pint of ale. You say "your first teacher" and I think: Mrs Burcham. You presume that vector values are universal. But they are in part subjective.
Replying to @JustinStares
The word vectors describe the relationships of the words among each other. The beer you are seeing is an association between the concept triggered by the word and the cortical software that generates a predictive hallucination of your drink.
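The point that word vectors encode relations between words, rather than the percepts they trigger, can be illustrated with hand-picked toy vectors. The three dimensions and their values below are invented for illustration and are not real embeddings; in real models, the same offset structure emerges from training.

```python
# Toy illustration: a relation such as "royal counterpart of" shows up
# as a roughly constant offset between hand-built word vectors.
import numpy as np

vec = {
    # assumed dimensions: [royalty, maleness, femaleness]
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

def nearest(v, exclude=()):
    """Return the vocabulary word whose vector is closest (cosine) to v."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vec if w not in exclude),
               key=lambda w: cos(vec[w], v))

# king - man + woman lands on queen: the royalty offset is preserved.
result = nearest(vec["king"] - vec["man"] + vec["woman"], exclude={"king"})
print(result)  # -> queen
```

Nothing in these vectors encodes what a king looks like; they only fix his position relative to the other words, which is the distinction being made between the word space and the perceptual machinery it addresses.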
Replying to @Plinz
Yes: the vectors describe the words' relationships to each other. All I am claiming is that these vector values (in humans) also incorporate experiences (sights, sounds, memories). This is how we learn. These values cannot be inferred from text.
Replying to @JustinStares
I am not sure that we are talking about the same thing. There is a difference between the vector space that encodes relationships within text and the vector space that encodes relationships between impulses on your retina. They represent functions over different types of data.
Replying to @Plinz
I see what you're saying, but I think there is only one vector of meaning. I'm actually quoting Saussure. He stated there were in fact two 'vectors': one the 'acoustic image' (the word), the other the 'concept'. Google vectors give approximate values for concepts, but they need refining.
I don't think you see what I am saying. If you like, you could watch my TEDx talk, which contains the shortest visual explanation I have given of this.