1) I say at the end of the thread that the successful approach (currently and for the foreseeable future) consists of systems that blend symbolic world models with deep-learning perception modules. The core of such systems is symbolic, though; the ML is peripheral.
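The division of labor described here can be sketched minimally. This is an illustrative toy, not anyone's actual system: the `perceive` stub stands in for a trained deep network, and the rule table stands in for a symbolic world model.

```python
# Hypothetical sketch: an ML "perception" module maps raw observations to
# symbols, and a symbolic core does the actual decision-making.

def perceive(pixels):
    """Stand-in for a deep-learning perception module: maps raw input
    to a discrete symbol (here via a trivial brightness threshold)."""
    mean = sum(pixels) / len(pixels)
    return "obstacle" if mean > 0.5 else "clear"

# Symbolic core: explicit, inspectable rules rather than learned weights.
RULES = {
    "obstacle": "turn",
    "clear": "advance",
}

def act(pixels):
    symbol = perceive(pixels)  # peripheral ML: pixels -> symbol
    return RULES[symbol]       # symbolic core: symbol -> action

print(act([0.9, 0.8, 0.7]))  # turn
print(act([0.1, 0.0, 0.2]))  # advance
```

The point of the structure is that the perception module can be retrained or swapped out while the symbolic core, which carries the system's reasoning, stays fixed and inspectable.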
-
-
-
2) Symbolic cognitive systems (and most software in general) don't *have* to be handcrafted. In the future, most software will be generated, once our ML algorithms start getting good at abstraction. For now, our models just aren't conducive to abstraction.
End of conversation
New conversation -
-
-
This Tweet is unavailable.
-
-
-
On the part about learning the symbolic systems: I think most symbolic systems can explain things beyond the real world (i.e., the data distribution), so you probably cannot learn them by looking at the real world alone. For example, you cannot learn Newtonian dynamics without ever experiencing zero gravity.
-
I think it is very hard to separate understanding from predicting as long as you stick to the real world. This was my main takeaway from Stephen Toulmin's book Foresight and Understanding. So we can "learn" them only if we change the definition of "learn" in ML.
End of conversation
New conversation -
-
-
I think what @fchollet is saying is that there needs to be a minimal innate ontology to supplement symbolic systems...
-