Conversation

Looks like a common knowledge moment happening. For over a decade people privately told me they agreed with me on superintelligence derangement syndrome, but surprisingly few were willing to be open about it. A switch has flipped.
Quote Tweet
This discussion between a Google engineer and their conversational AI model helped cause the engineer to believe the AI is becoming sentient, kick up an internal shitstorm and get suspended from his job. And it is absolutely insane. washingtonpost.com/technology/202
Replying to
The reluctance has been similar to the reluctance to be openly critical of other cults like Scientology. It’s powerful and connected enough that if you’re in tech, there’s a social tax to disagreeing with them. Used to think they’re more benign. No longer certain of that.
Replying to
Thinking a superintelligent AGI (artificial general intelligence) in the Yudkowsky-Bostrom sense will emerge and oppress/kill us all unless we solve a bullshit theology problem they call the “AI alignment” problem. Angels on a pinhead. But weirdly and toxically influential in SV.
Replying to
If you sit down with a robotic arm 🦾 and write out a series of instructions for it to make a coffee, it’s pretty clear 🤦‍♂️ AI is just an algorithm: a series of instructions with no sentience. How these algorithms are applied is the only danger. AI is just sci-fi!! 🤦‍♂️
Replying to
this has nothing to do with the people worried about superintelligence though; all of them would agree that LaMDA exhibits zero sentience (or that the question is not meaningful)