Pleasantly surprised to see a number of people reacting in pointedly reasonable ways
Quote Tweet
On the Google engineer who thinks his AI has come to life: 1. Whether this terrifies or thrills you, this exchange is astonishing 2. Seems obvious that an advanced language AI would master the ability to communicate the experience of sentience before it actually experiences it
[Images: screenshots of the quoted exchange]
A lot of people quickly rediscover old basics like Searle’s Chinese room (DT’s endorphin-from-speed example is essentially a subsymbolic equivalent), the hard problem of consciousness, etc. Strong AI people never actually adequately address these; they just make leaps of faith over them
Surprisingly, the 1980s-vintage Hofstadter/Dennett set of essays, The Mind’s I, remains one of the best general introductions to the issues from a Strong AI-sympathetic perspective. Despite all the deep learning advances, I haven’t seen the foundations debates improved upon in 30 years
It’s important to note, though, that “it’s just function approximation” isn’t the mic-drop skeptical response newbies seem to think it is. The universal function approximation properties of neural nets have been recognized for decades. That’s a starting insight, not an endpoint.
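To make that concrete, here is a minimal sketch (assuming numpy is available; everything in it is illustrative, not from the thread) of how low the universal-approximation bar is: a single hidden layer of random, untrained tanh features plus a least-squares readout already fits a smooth function like sin(x).

```python
# Minimal sketch: universal approximation is a low bar. A single hidden
# layer of RANDOM, untrained tanh features plus a linear least-squares
# readout approximates sin(x) on [-pi, pi]. Assumes numpy; all numbers
# are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()

# Hidden layer: 50 tanh units with random weights that are never trained.
n_hidden = 50
w = rng.normal(scale=2.0, size=(1, n_hidden))
b = rng.normal(scale=2.0, size=n_hidden)
features = np.tanh(x @ w + b)

# Only the linear readout is fit, by ordinary least squares.
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
y_hat = features @ coef

print("max abs error:", np.max(np.abs(y_hat - y)))  # small, with no training at all
```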
But “universal” functions are not special. Leibniz made up a universal smooth curve-fit function centuries ago. In control theory there are “universal” controllers. Universality in some mathematical sense is neither rare nor self-evidently useful, nor is it equivalent to “generality”
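A tiny illustration of how cheap this kind of universality is (again assuming numpy; the data points are arbitrary): a degree n−1 polynomial passes exactly through any n points with distinct x values, yet that says nothing about sensible behavior anywhere else.

```python
# Sketch: polynomial curve-fitting is "universal" over any finite dataset.
# A degree n-1 polynomial interpolates any n points with distinct x values
# exactly -- universality without any claim to useful generality.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, -2.0, 0.5, 7.0, 3.0])   # arbitrary targets

coeffs = np.polyfit(x, y, deg=len(x) - 1)  # exact fit through all 5 points
print(np.allclose(np.polyval(coeffs, x), y))  # True: zero "training error"

# Between and beyond the data the curve can swing wildly (Runge's
# phenomenon): perfect universality, no generality.
print(np.polyval(coeffs, 4.5))
```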
The big feature that makes people all excited is when a “universal” scheme performs suspiciously better than you’d expect from statistical extension. I.e., systems based on “the future is like the past” logic seemingly able to manufacture serendipitous (surprisingly “lucky”) outcomes
In the 90s, people got similarly excited about a simpler scheme called “probably approximately correct” or PAC learning, where systems seemed to do better than you’d intuitively expect statistical extension systems to do.
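For concreteness, the textbook PAC result for a finite hypothesis class in the realizable case: m ≥ (1/ε)(ln|H| + ln(1/δ)) i.i.d. examples suffice for error at most ε with probability at least 1 − δ. A sketch with illustrative numbers (none from the thread):

```python
# Sketch of the classic PAC sample-complexity bound for a finite hypothesis
# class (realizable case). Illustrative numbers only.
import math

def pac_sample_bound(h_size: int, eps: float, delta: float) -> int:
    """Samples sufficient for error <= eps with probability >= 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# A million hypotheses, 5% error, 99% confidence: only ~369 examples --
# "better than you'd intuitively expect," because the bound is merely
# logarithmic in the size of the hypothesis class.
print(pac_sample_bound(10**6, eps=0.05, delta=0.01))
```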
Personally, I infer from these that a) we don’t understand the richness of statistics b) we use only a fraction of the information content of our own experiences through narrative memory processes that are very situationally path-dependent and *not* strong generalizations
c) this is a feature, not a bug. When you embody and situate an intelligence, it learns a narrow set of useful things by actual testing, not explosively confabulatory bullshit generalizations. The useless generalizations are filtered out by the Darwinian selection pressures of embodiment
While things like GANs and simulations reproduce weak forms of survival pressures, they ARE weak. They’re not the real thing. Bet: when we start doing serious “testing in production” of AIs in robot bodies, the scary apparent generality will give way to useful specificity
But if you’re new to these debates, I strongly recommend setting aside high-falutin’ conceptual debates on ill-defined notions like “sentience” (which cedes frame control to the “general” in “AGI”) and tracking progress in embodied problems like a robot making a cup of tea first
Progress in understanding that liquids live in containers and behave in certain ways is far slower than producing surreal images of tea cups or impressive literary sentences about tea drinking. Language models are not reality. Image models do not reproduce physics.
I’m betting in my lifetime we’ll get impressively useful robot butlers that will make cups of tea properly using natural language commands and deep-learned “qualitative physics,” as it is sometimes called (i.e., correct intuitive knowledge of liquids in cups, heating, etc.)
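“Qualitative physics” is a real classical-AI tradition (de Kleer, Forbus): continuous quantities abstracted into discrete states and transition rules. A toy sketch of the flavor, with every name illustrative rather than any real robotics API:

```python
# Toy sketch of qualitative physics: liquid level in a cup abstracted to
# discrete states with transition rules. All names are illustrative.
from enum import Enum

class Level(Enum):
    EMPTY = 0
    PARTIAL = 1
    FULL = 2

def pour(cup: Level) -> tuple[Level, bool]:
    """One qualitative 'pour' step: level rises until FULL, then spills."""
    if cup is Level.EMPTY:
        return Level.PARTIAL, False
    if cup is Level.PARTIAL:
        return Level.FULL, False
    return Level.FULL, True        # pouring into a full cup spills

cup = Level.EMPTY
for _ in range(4):                 # keep pouring past full
    cup, spilled = pour(cup)
    print(cup.name, "spill!" if spilled else "")
```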
In my headcanon future of AI, the Butlerian jihadists are secretly afraid of robot butlers, not superintelligences
Probably the best thing someone starting in AI, robotics, controls, etc. can do today is to simply table all consideration of ideas at the level of minds, intelligence, sentience, qualia, p-zombies, inverted spectra, etc. Prioritize experience with cup-of-tea problems in production
In the 80s and 90s we all started with the abstract stuff (hail Hofstadter, Penrose, Dennett) because the ability to actually try to do anything was so damn limited by computing. You can now tinker very cheaply with actual problems and come at abstract questions more powerfully
I fully expect that the next Douglas Hofstadter is a 19-year-old currently playing with actual robots and BERT models, not clever thought experiments. They’ll write this generation’s Gödel, Escher, Bach bottom-up, starting with tea-making-butler blooper reels.
Hygiene point: firmly reject any overtures that require you to begin with the presumption that “general” intelligence is a self-evidently well-posed concept. It’s like a Darwinian agreeing to debate a creationist on evolution by first conceding that “god exists”
It is tiresome how many newbies try to engage me with the unexamined idea that “general intelligence” is a thing. Things just go downhill from there, and soon you’re frustratingly talking past each other about the meaning of “sentience,” “volition,” etc. Meanwhile, no robot tea.
Me too, and unironically. But I don’t push “there is no such thing as intelligence” TINSTAI-pill stances, because belief in (default-specific) intelligence, as opposed to brute-scalable “general” intelligence, is a mostly harmless linguistic convention for pointing to some behaviors
Quote Tweet (replying to @vgr)
@artpi always told me he doesn’t care about artificial intelligence because he doesn’t believe in intelligence at all, and for a while I thought he was trolling me, but now that you’ve added the word “general” it makes perfect sense
The reason not to admit “generality” a priori is that you immediately get to “just add GPUs/TPUs” scaling-to-god silliness. Make them work for every inch of generality they claim, step by step. Move the goalposts one inch at a time in a Zeno’s race. It’s good for them.