This bold defense of deep learning says that some DL folks have yet to internalize the Ladder of Causation, e.g. https://ucla.in/2HI2yyx . Can any DL-insider think of a good way to convince @ylecun that some questions cannot be answered from data alone, no matter what? #Bookofwhy. https://twitter.com/ylecun/status/1209497021398343680
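Pearl's claim that some questions cannot be answered from data alone can be illustrated with a toy sketch (hypothetical models of my own, not from the thread): two structural causal models that generate exactly the same observational distribution over (X, Y), yet disagree on the interventional query P(Y=1 | do(X=1)). No amount of observational samples can distinguish them.

```python
import random

random.seed(0)

def model_a(intervene_x=None):
    # Model A: X causes Y (Y copies X). Intervening on X also changes Y.
    x = random.randint(0, 1) if intervene_x is None else intervene_x
    y = x
    return x, y

def model_b(intervene_x=None):
    # Model B: Y causes X (X copies Y). Intervening on X severs the
    # edge Y -> X but leaves Y's own mechanism untouched.
    y = random.randint(0, 1)
    x = y if intervene_x is None else intervene_x
    return x, y

def p_y1(model, n=100_000, intervene_x=None):
    # Monte Carlo estimate of P(Y=1), optionally under do(X=intervene_x).
    return sum(model(intervene_x)[1] for _ in range(n)) / n

# Observationally the models are indistinguishable: in both, X and Y
# are perfectly correlated and P(Y=1) is about 0.5.
print(p_y1(model_a), p_y1(model_b))

# Under do(X=1) they disagree: Model A gives P(Y=1)=1 exactly,
# Model B still gives about 0.5.
print(p_y1(model_a, intervene_x=1), p_y1(model_b, intervene_x=1))
```

The boundary the thread asks about is exactly this: the joint distribution alone underdetermines the answer, so some extra causal assumption (which arrow exists) must be supplied from outside the data.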
-
I can see both viewpoints; DL can manage some causal modeling. @yudapearl, can a precise boundary be established? That could help inform research into how DL might be extended to handle more causal effects. -
* Metallurgist: "My nails just built a table; can they build a house?" * Carpenter: "Sure, you do the nails, I'll do the woodwork; together we can build a house." * Metallurgist: "No! The nails should do it. They can perhaps be extended to handle the woodwork."
#Bookofwhy
-
It's clear @ylecun believes the 'new math' of CI can be expressed in the mathematical framework of DL. Given that CI math/language is not particularly complex, I don't see why CI and counterfactual 'beliefs' could not be expressed as outputs, given knowledge, a model/belief, and a question. -
Sure. The math of woodwork is not too complex, so I don't see why it cannot be expressed as an output of the math of metallurgy, given the unique metallurgical properties of wood.
#Bookofwhy