I guarantee that won't happen :) You can't discover meaning if you don't represent knowledge, let alone multimodal grounds of knowledge. Also, you can't manipulate knowledge without having a cognitively powerful filtering mechanism. Both points are the essence of my papers. :)
Replying to @eyad_nawar
I don't think so. The meaning of a datum is given by its relation to a unified, [more or less] coherent model of the universe. Knowledge is a second-order model: a model of what you have implemented in your simulations and simulacra of the universe, and thus what you know about it.
Replying to @Plinz
Yes, I agree. That is why I said "multimodal grounds", referring to the symbol grounding problem. In my work, I use the computational neuroscience concept of the "free energy principle", which is based on the idea that the agent holds beliefs about a model of the environment it inhabits.
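As a purely illustrative aside, the free energy quantity at the heart of that principle can be written out for a toy discrete model. The sketch below is my own, with made-up numbers rather than anything from the thread or a particular paper: the agent holds beliefs q(s) over hidden states of its environment and a generative model p(o, s) = p(o | s) p(s), and the variational free energy of those beliefs upper-bounds surprise about an observation o.

```python
import math

# Toy discrete sketch of variational free energy (my own illustration;
# all numbers are made up). The agent has prior beliefs p(s), a likelihood
# p(o | s), and approximate posterior beliefs q(s) about hidden states s.
p_s = [0.7, 0.3]                      # prior beliefs over two hidden states
p_o_given_s = [[0.9, 0.1],            # likelihood p(o | s); row = state, col = outcome
               [0.2, 0.8]]
q_s = [0.6, 0.4]                      # approximate posterior beliefs q(s)
o = 0                                 # index of the observed outcome

# F = E_q[ log q(s) - log p(o, s) ]
free_energy = sum(
    q * (math.log(q) - (math.log(p_o_given_s[s][o]) + math.log(p_s[s])))
    for s, q in enumerate(q_s)
)
surprise = -math.log(sum(p_o_given_s[s][o] * p_s[s] for s in range(2)))  # -log p(o)

print(round(free_energy, 3), round(surprise, 3))  # F >= surprise; equal when q(s) = p(s | o)
```

Driving q(s) toward the exact posterior p(s | o) closes the gap between the two printed numbers, which is the sense in which minimizing free energy amounts to improving the agent's beliefs about its environment.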
Replying to @eyad_nawar @Plinz
By multimodal grounds, I mean multiple grounded references of knowledge that represent a deterministic model of the universe, upon which all stochastic models may successively take place. But what we are both talking about is what all current DL models lack. I explain that […]
Replying to @eyad_nawar @Plinz
from a math, stats & philosophy perspective in my next article. Current DL models, such as GPT-2, don't have a model of the universe on which to base their knowledge. That is why you'd see me & Gary blame their lack of "priors". Their words are referenced only by how many times they appear in the corpus.
Replying to @eyad_nawar
The priors are the result of an evolutionary search. Nothing a DL algorithm cannot do.
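A hedged sketch of that claim, entirely my own construction with made-up numbers: treat each candidate prior as a genome, score it by how well it predicts samples from the environment, and let selection plus mutation do the search.

```python
import random

# Minimal sketch (my own illustration) of priors as the product of an
# evolutionary search: each "genome" is a prior (here just the mean of a
# Gaussian belief), and fitness is how well that prior predicts samples
# from the environment. All numbers are made up.
random.seed(0)

environment = [random.gauss(3.0, 1.0) for _ in range(50)]   # the world the agent inhabits

def fitness(prior_mean):
    # higher is better: negative squared prediction error under the prior
    return -sum((x - prior_mean) ** 2 for x in environment)

population = [random.uniform(-10.0, 10.0) for _ in range(20)]  # random initial priors
for generation in range(100):
    parents = sorted(population, key=fitness, reverse=True)[:5]                   # select the fittest
    population = [p + random.gauss(0.0, 0.3) for p in parents for _ in range(4)]  # mutate

best_prior = max(population, key=fitness)
print(round(best_prior, 2))  # drifts toward the environment mean (~3.0)
```

In a real pipeline the "genome" would parameterize something much richer than a single Gaussian mean, but the loop is the same.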
Replying to @Plinz @eyad_nawar
“Their words are referenced only by how many times they appear in the corpus” ... in the context of other co-occurring words. And the same is true for the blips on your retina, skin or thalamus.
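To make the statistics the last two tweets refer to concrete, here is a minimal sketch of my own with a toy three-sentence corpus (nothing to do with GPT-2's actual training data): the only "representation" a purely distributional learner starts from is how often a word appears and which other words it co-occurs with.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch (my own illustration, toy corpus): frequency and
# co-occurrence counts as a purely distributional word "representation".
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

frequency = Counter()
cooccurrence = Counter()
for sentence in corpus:
    tokens = sentence.split()
    frequency.update(tokens)
    # count unordered within-sentence co-occurrences
    cooccurrence.update(
        frozenset(pair) for pair in combinations(tokens, 2) if pair[0] != pair[1]
    )

print(frequency["cat"])                          # raw frequency of "cat"
print(cooccurrence[frozenset({"cat", "sat"})])   # how often "cat" and "sat" co-occur
```

Distributional representations are, roughly speaking, learned compressions of statistics like these.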
Replying to @Plinz
From a neuroscience perspective, I don't think we recognize things by repeating patterns in combinations of single pixel values. Yes, there are low-level representations of knowledge, but not single values. Another note is the brain's ability to reflect back on itself for more complex […]
Replying to @eyad_nawar @Plinz
inference, which, as I said before, is one very powerful filtering mechanism. In one of my papers, I talk about "spatially associated objects", among other filtering mechanisms, that narrow down the probability distribution over recognizable objects. I'm aware that some inference […]
Replying to @eyad_nawar @Plinz
may take place well before the sensory input reaches the brain; that's from a cognitive perspective. And you're far more of an expert on that subject than I am, but I would argue that we don't do differentiable calculus on a limited domain of a continuous function, which is why we generalize.
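For the "spatially associated objects" idea mentioned two tweets up, here is a hedged sketch of one way such a cue could act as a filtering mechanism. The object names and numbers are hypothetical and this is not the paper's actual model, just a Bayesian reweighting of a prior over candidate objects.

```python
# Minimal sketch (my own illustration, not the paper's actual model) of a
# spatial-context cue narrowing a probability distribution over recognizable
# objects. All object names and numbers are hypothetical.
prior = {"toaster": 0.25, "laptop": 0.25, "pillow": 0.25, "kettle": 0.25}

# hypothetical likelihood of each object appearing in the spatial context "kitchen counter"
context_likelihood = {"toaster": 0.80, "laptop": 0.20, "pillow": 0.05, "kettle": 0.90}

unnormalized = {obj: prior[obj] * context_likelihood[obj] for obj in prior}
total = sum(unnormalized.values())
posterior = {obj: round(p / total, 3) for obj, p in unnormalized.items()}

print(posterior)  # probability mass concentrates on kitchen-compatible objects
```

Assuming the cues are treated as independent, each additional one (spatial, temporal, task context) can be folded in the same way, multiplying in another likelihood and renormalizing, which is how the distribution over recognizable objects keeps narrowing.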
For your argument to work, you would have to show that differentiable calculus plus evolutionary search (and other common extensions) cannot, in principle, represent or discover the desired functions.
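As a hedged illustration of the first half of that toolkit (my own sketch, not a claim about what either author has built), plain gradient descent already discovers a simple target function from samples alone; an evolutionary outer loop like the one sketched earlier would stand in for the second half.

```python
# Minimal sketch (my own illustration): gradient descent discovering a target
# function from samples. The target and hyperparameters are made up.
data = [(x / 10.0, 3.0 * (x / 10.0) + 1.0) for x in range(-20, 21)]  # target: y = 3x + 1

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)  # d(MSE)/dw
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)      # d(MSE)/db
    w, b = w - lr * grad_w, b - lr * grad_b

print(round(w, 3), round(b, 3))  # recovers parameters close to (3, 1)
```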