Looks like a common-knowledge moment is happening. For over a decade, people privately told me they agreed with me about superintelligence derangement syndrome, but surprisingly few were willing to say so openly. A switch has flipped.
This discussion between a Google engineer and the company's conversational AI model led the engineer to believe the AI is becoming sentient, kick up an internal shitstorm, and get suspended from his job. And it is absolutely insane. https://washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/…
Thinking a superintelligent AGI (artificial general intelligence) in the Yudkowsky-Bostrom sense will emerge and oppress/kill us all unless we solve a bullshit theology problem they call the “AI alignment” problem.
Angels on a pinhead. But weirdly and toxically influential in SV.
It is all AI cosplay indeed: a fantastic simulation, a fun game, and perhaps a unique bias-research tool. Nothing more.
Two obvious main components remain lacking:
- Identity centricity
- Real-time model updates (memory)
FTR, I agree with Google's conclusion. The simulation will continue to improve, but it is still a backward-looking, pattern-matching statistical simulation that does not learn and reason in real time. twitter.com/tomgara/status…
There is no single topic on which I’ve done a larger 180 against expert consensus than AGI.
A few years ago, I lost sleep over the prospect—even wrote a novel with it as the core device.
Now, I’m skeptical I’ll see it in my lifetime.
Narrow AI w/ extreme value, yea.
AGI, no. twitter.com/wintonARK/stat…
I personally would love for AGI to spontaneously emerge from clever mimicry. But I know that is just not going to happen. Might we get there eventually? Sure. But my completely subjective and ill-informed opinion is that it is a long way off. Gracefully shuffles back.