
Replying to
Personally, I infer from these that a) we don’t understand the richness of statistics, and b) we use only a fraction of the information content of our own experiences, through narrative memory processes that are very situationally path-dependent and *not* strong generalizations
c) this is a feature, not a bug. When you embody and situate an intelligence, it learns a narrow set of useful things by actual testing, not explosively confabulatory bullshit generalizations. The useless generalizations are filtered out by the Darwinian selection pressures of embodiment.
While things like GANs and simulations reproduce weak forms of survival pressures, they ARE weak. They’re not the real thing. Bet: when we start doing serious “testing in production” of AIs in robot bodies, the scary apparent generality will give way to useful specificity
But if you’re new to these debates, I strongly recommend setting aside high-falutin’ conceptual debates on ill-defined notions like “sentience” (which cedes frame control to the “general” in “AGI”) and tracking progress in embodied problems like a robot making a cup of tea first
Progress in understanding that liquids live in containers and behave in certain ways is far slower than progress in producing surreal images of tea cups or impressive literary sentences about tea drinking. Language models are not reality. Image models do not reproduce physics.
I’m betting that in my lifetime we’ll get impressively useful robot butlers that make cups of tea properly using natural-language commands and deep-learned “qualitative physics,” as it is sometimes called (i.e. correct intuitive knowledge of liquids in cups, heating, etc.)
In my headcanon future of AI, the Butlerian jihadists are secretly afraid of robot butlers, not superintelligences
Probably the best thing someone starting in AI, robotics, controls, etc. can do today is to simply table all consideration of ideas at the level of minds, intelligence, sentience, qualia, p-zombies, inverted spectra, etc. Prioritize experience with cup-of-tea problems in production
In the 80s and 90s we all started with the abstract stuff (hail Hofstadter, Penrose, Dennett) because the ability to actually try to do anything was so damn limited by computing. You can now tinker very cheaply with actual problems and come at abstract questions more powerfully
I fully expect that the next Douglas Hofstadter is a 19-year-old currently playing with actual robots and BERT models, not clever thought experiments. They’ll write this generation’s Gödel, Escher, Bach bottom-up, starting with tea-making-butler blooper reels.
Replying to
It is tiresome how many newbies try to engage me with the unexamined idea that “general intelligence” is a thing. Things just go downhill from there, and soon you’re frustratingly talking past each other about the meaning of “sentience,” “volition,” etc. Meanwhile, no robot tea.
Me too, and unironically. But I don’t push “there is no such thing as intelligence” TINSTAI-pill stances, because belief in (default-specific) intelligence, as opposed to brute-scalable “general” intelligence, is a mostly harmless linguistic convention for pointing to some behaviors
Quote Tweet
Replying to @vgr
@artpi always told me he doesn’t care about artificial intelligence because he doesn’t believe in intelligence at all, and for a while I thought he was trolling me, but now that you’ve added the word “general” it makes perfect sense
The problem with admitting “generality” a priori is that you immediately get to “just add GPUs/TPUs” scaling-to-god silliness. Make them work for every inch of generality they claim, step by step. Move the goalposts one inch at a time in a Zeno’s race. It’s good for them.
Replying to
I’ll do you one better. Forget presupposing that intelligence exists; I prefer to challenge whether consciousness itself exists. For me it’s either panpsychism or nothing is actually conscious, including us.