In a few years from now, perceptual AI models are going to have full situational awareness: every bit of sensory data will be mapped to a real-time, situated world model. Your phone might experience the world like a dog does.
At the moment, AI models are mostly trained on disjoint collections of data points; babies learn from a continuous trajectory through a world. The latter is much harder to fool with adversarial data: it is difficult to fake a whole world.
Replying to @Plinz @Grady_Booch
A nice but slightly inaccurate analogy. AI models are niche solutions. Humans solve broad problems, but we have inbuilt cognitive biases that are just as vulnerable to adversarial data. We can't really change our models to compensate the way AI can; ours are genetically inherited.
Replying to @synapticity @Grady_Booch
Wrt the mind, evolution is basically a slow, unprincipled search for hyperparameters and biases that speed up individual model convergence. I don't see how a fast, principled search would not do better.
Replying to @Plinz @Grady_Booch
I'm not sure I know what you mean by fast/principled search. Please clarify - this sounds like an interesting thread to pursue.
Replying to @synapticity @Grady_Booch
The evolution of minds is a random walk that tends to get stuck in local minima that have to be resolved by throwing meteors at them. We can usually make hierarchical models of our search spaces and conquer them systematically, without having to fully discard partial solutions.
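The contrast drawn here, a blind greedy random walk versus a hierarchical coarse-to-fine search, can be sketched on a toy fitness landscape. Everything below is invented for illustration: the double-well function `f` and both search routines are assumptions, not anything from the thread. The greedy walk starts in the wrong basin and cannot climb out; the coarse-to-fine search decomposes the space hierarchically and finds the deeper minimum.

```python
import random

def f(x):
    # Toy fitness landscape (assumed for illustration): a double well with
    # the global minimum near x = -1 and a shallower local minimum near x = +1.
    return (x * x - 1.0) ** 2 + 0.3 * x

def random_walk(x0=2.0, steps=2000, step=0.1, seed=0):
    """Greedy random walk: accept a move only if it lowers f.
    Mimics blind evolutionary search that cannot leave its basin."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def coarse_to_fine(lo=-2.0, hi=2.0, points=9, levels=6):
    """Hierarchical search: scan a coarse grid, then repeatedly refine
    around the best cell, a 'principled' decomposition of the space."""
    for _ in range(levels):
        grid = [lo + (hi - lo) * i / (points - 1) for i in range(points)]
        best = min(grid, key=f)
        width = (hi - lo) / (points - 1)
        lo, hi = best - width, best + width
    return best, f(best)

x_walk, f_walk = random_walk()
x_hier, f_hier = coarse_to_fine()
print(f"random walk:    x = {x_walk:+.3f}, f = {f_walk:+.3f}")
print(f"coarse-to-fine: x = {x_hier:+.3f}, f = {f_hier:+.3f}")
```

Starting at x = 2, the greedy walk settles in the shallow well near x = +1 because every route to the deeper well passes through worse values; the hierarchical search converges on the global minimum near x = -1 without ever discarding its coarse-level partial solution.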
Replying to @Plinz @Grady_Booch
Sounds reasonable. My comment was on the limitation of rate of change to the biological model that doesn't apply to connectionist models. The social/cultural equivalent of transfer learning helps speed that up in certain domains, but it's still very constrained.
Replying to @synapticity @Grady_Booch
I suspect that with sufficient effort, people can rewire their brains with quite dramatic results. Learning grammatical language and reading are part of that. It might be a good idea to avoid social learning sometimes.
Replying to @Plinz @Grady_Booch
Interesting idea. By social learning, I wasn't referring to learning in groups. I was speaking of learning passed down through social/cultural means. Like transfer learning from DL, but to people - no reinventing the wheel.
Replying to @synapticity @Grady_Booch
Yes. I think that in order to properly understand, you have to discover the wheel by yourself. (Otherwise how can you see that wheels are solitons, for instance? My teachers did not care.) Most of the truly brilliant people appear to be self-taught.
The idea is not to dismiss the obvious fact that the existing wheels have been thoughtfully optimized by generations of people smarter than myself, but to build better cars, I may have to fully question and comprehend that design space and the meta space.