Gary Marcus' AI critique acts as if most ML were aimed at AGI (which it is not), and he treats it like an explosives expert criticizing the early stages of the Manhattan Project: "Your plutonium is only getting a little bit warm! It's not exploding at all! You are all charlatans!"
He does not seem to be able to see how a system that builds and contracts its model in the right way might click into sentience when given enough data to infer the symmetries of its environment and its own nature. This is entirely orthogonal to "symbolic vs. subsymbolic".
The point at which a system that models its own attention, and thereby its own nature as an experiencing observer, wakes up is not gradual. The early precursor to such a system is not "20% sentient". The change is quite sudden and fundamental.
A sentient system acts on a unified functional model of the universe and its own role in it.