"Distribution shift" is a complicated way of saying it, perhaps makes it appear more esoteric than it is? The simple story is that data-fitting ML interpolates the data, therefore cannot extrapolate. To extrapolate, one needs a model. This should not be news ...
-
it’s not news; it’s verification that deep learning’s Achilles’ heel has persisted for decades
End of conversation
New conversation
Hey Gary, could you reboot humanity at the same time? Maybe we can get humans and machines going toward a positive outcome simultaneously.
-
that's my other line of work; see my book Kluge and this recent WSJ piece: https://www.wsj.com/articles/the-problem-with-believing-what-were-told-11567224060
New conversation
The issue of generalization is less acute for Google and others because they truly never encounter a new image that isn't close enough to a sample in their training dataset, e.g. the AR view in Maps. They have collected Street View imagery for years now, in all weather conditions.
-
they (Waymo) still can't make their driverless cars safe enough for the real world
New conversation
Yes, still a long way to go! Probabilistic circuits (computational graphs for joint distributions) might be a step towards a reboot. They know when they do not know, can inject symbolic knowledge into deep models, and can speed up AIR models. See the papers at NeurIPS, ICML, and AAAI.
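For readers unfamiliar with the term: a probabilistic circuit is a computational graph whose sum (mixture) and product (factorization) nodes compose a tractable joint distribution. Below is a minimal hand-rolled sketch of that idea; the two-component Gaussian structure and every parameter are invented for illustration and are not taken from the papers mentioned.

```python
# A tiny probabilistic circuit (sum-product network) over two variables,
# with Gaussian leaves. Illustrative only; structure and parameters are
# made up for this example.
import math

def gauss_logpdf(x, mu, sigma):
    """Log density of a univariate Gaussian leaf."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def circuit_logpdf(x1, x2):
    """log p(x1, x2) for a circuit: one sum node over two product nodes,
    each a product of independent Gaussian leaves."""
    # Product node 1: x1 ~ N(0, 1), x2 ~ N(0, 1)
    comp1 = gauss_logpdf(x1, 0.0, 1.0) + gauss_logpdf(x2, 0.0, 1.0)
    # Product node 2: x1 ~ N(5, 1), x2 ~ N(5, 1)
    comp2 = gauss_logpdf(x1, 5.0, 1.0) + gauss_logpdf(x2, 5.0, 1.0)
    # Sum node with mixture weights 0.6 / 0.4 (log-sum-exp for stability).
    a = math.log(0.6) + comp1
    b = math.log(0.4) + comp2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# "Knowing when they do not know": the joint density is computable exactly,
# so an input far from the modeled distribution gets a tellingly low score.
print(circuit_logpdf(0.1, -0.2))    # in-distribution: moderate log-density
print(circuit_logpdf(20.0, -20.0))  # out-of-distribution: extremely low
```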
-
Seems like a pre-observation class prior effect. In this case a "bias before you even see the picture" effect. It can be a good thing if the sample of experience generating the priors is consistent with the reality that matters. And I suspect for living creatures with physical bodies it is.
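The "bias before you even see the picture" effect the tweet describes is just Bayes' rule with a class prior. A small numerical sketch, with all numbers made up for illustration:

```python
# Bayes' rule with a class prior: the same ambiguous evidence yields very
# different decisions under different priors. Numbers are illustrative.
def posterior(prior_a, like_a, like_b):
    """P(class A | evidence) from a class prior and per-class likelihoods."""
    prior_b = 1.0 - prior_a
    num = prior_a * like_a
    return num / (num + prior_b * like_b)

# Ambiguous evidence: likelihoods near 50/50 for the two classes.
like_a, like_b = 0.55, 0.45

print(posterior(0.5, like_a, like_b))   # flat prior: ~0.55, evidence decides
print(posterior(0.05, like_a, like_b))  # strong prior against A: ~0.06
# With a strong prior, the decision is effectively made before the
# observation arrives; that helps exactly when the prior matches the
# reality that matters, as the tweet notes.
```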