Looks like a common-knowledge moment is happening. For over a decade people privately told me they agreed with me about superintelligence derangement syndrome, but surprisingly few were willing to say so openly. A switch has flipped.
This discussion between a Google engineer and the company's conversational AI model led the engineer to believe the AI is becoming sentient, kick up an internal shitstorm, and get suspended from his job. And it is absolutely insane. https://washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
The reluctance has been similar to the reluctance to openly criticize other cults like Scientology: the movement is powerful and connected enough that if you're in tech, there's a social tax to disagreeing with it. Used to think they were more benign. No longer certain of that.
Thinking a superintelligent AGI (artificial general intelligence) in the Yudkowsky-Bostrom sense will emerge and oppress/kill us all unless we solve a bullshit theology problem they call the “AI alignment” problem.
Angels on a pinhead. But weirdly and toxically influential in SV.
If you sit down with a robotic arm 🦾 and write out a series of instructions for it to make a coffee, it’s pretty clear 🤦‍♂️ that AI is just an algorithm: a series of instructions with no sentience. How these algorithms are applied is the only danger. Sentient AI is just sci-fi!! 🤦‍♂️
this has nothing to do with the people worried about superintelligence tho; all of them would agree that LaMDA exhibits zero sentience (or that the question is not meaningful)