Conversation

Looks like a common knowledge moment happening. For over a decade people privately told me they agreed with me on superintelligence derangement syndrome, but surprisingly few were willing to be open about it. A switch has flipped.
Quote Tweet
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. washingtonpost.com/technology/202
Replying to
Thinking a superintelligent AGI (artificial general intelligence) in the Yudkowsky-Bostrom sense will emerge and oppress/kill us all unless we solve a bullshit theology problem they call the "AI alignment" problem. Angels on a pinhead. But weirdly and toxically influential in SV.
Replying to
It is all AI cosplay indeed, a fantastic simulation, fun game, and perhaps a unique bias research tool. Nothing more. Two obvious main components remain lacking:
- Identity centricism
- Real-time model updates (memory)
Quote Tweet
FTR, agree with Google's conclusion. The simulation will continue to improve, but it's still a backwards-looking, pattern-matching statistical simulation that does not learn and reason in real time. twitter.com/tomgara/status…
Replying to
I personally would love for AGI to spontaneously emerge from clever mimicry. But I know that is just not going to happen. Might we get there eventually? Sure. But my completely subjective and ill-informed opinion is that it will be a long time from now. Gracefully shuffles back.