Intuition is the part of your knowledge that you cannot test for correctness. Proving correctness requires deriving a very low-dimensional representation so that you can apply analytic operators. Most of the functions that a brain approximates cannot be translated into that form.
IMHO, causal models are simply those that describe a domain as two or more interfacing systems that can affect each other's evolution. There is no reason why DL cannot create such models.
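A minimal sketch of what "two or more interfacing systems that can affect each other's evolution" can look like in code: a discrete-time predator-prey model, where each population's state enters the other's update rule. The model choice and all parameter values are illustrative assumptions, not anything stated in the thread.

    # Two interacting populations: each one's state appears in the other's
    # update rule, so each affects the other's evolution over time.
    # Parameter values are arbitrary illustrative choices.
    def step(prey, predators, dt=0.01):
        d_prey = (1.0 * prey - 0.1 * prey * predators) * dt              # prey growth, minus predation
        d_predators = (0.05 * prey * predators - 0.5 * predators) * dt   # predation gain, minus die-off
        return prey + d_prey, predators + d_predators

    prey, predators = 12.0, 8.0
    for _ in range(500):
        prey, predators = step(prey, predators)
    print(round(prey, 2), round(predators, 2))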
-
-
It can learn the external observables, but can it learn the arrow of causation (i.e., things like Simpson's paradox, or whether the sun rises because the rooster crows or the other way around), and can it give correct answers when familiar parts are put together in a new way?
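A small self-contained illustration of Simpson's paradox, with made-up group names and counts (all numbers are assumptions chosen for the example): the treatment has the higher success rate within every subgroup, yet the control arm looks better in the pooled data, so the observed frequencies alone do not determine which answer is correct.

    # Made-up counts: (successes, total) per study arm, split by severity.
    groups = {
        "mild":   {"treated": (8, 10),   "control": (70, 100)},
        "severe": {"treated": (30, 100), "control": (2, 10)},
    }

    def rate(successes, total):
        return successes / total

    # Per-group rates: treated wins in both "mild" (80% vs 70%)
    # and "severe" (30% vs 20%).
    pooled = {"treated": [0, 0], "control": [0, 0]}
    for name, arms in groups.items():
        for arm, (s, n) in arms.items():
            pooled[arm][0] += s
            pooled[arm][1] += n
            print(f"{name:6} {arm:7} {rate(s, n):.0%}")

    # Pooled rates: control now wins (65% vs 35%), reversing the conclusion.
    for arm, (s, n) in pooled.items():
        print(f"pooled {arm:7} {rate(s, n):.0%}")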
-
Right now, DL is very good at fooling the researcher. See: https://medium.com/intuitionmachine/the-illusion-of-the-ungameable-objective-538a96a53efe
End of conversation