Conversation

It doesn't actually make that much of a difference. They were always going to be deadly at a higher intelligence; this just makes the danger more legible to people who wouldn't have the understanding to see it in a smart thing with "it just predicts text" written on the tin.
But if it had happened through EMs, as Hanson thought, or if the progression had been slower and more opaque, the alignment problem might have been defeated. With LLMs, which everyone everywhere is racing to create (easily), our death is now a certainty.
Do what I say, not what I do. Do what I mean, not what I say. If words, actions, and meaning are promiscuous, how fuzzy is the line between a benevolent and malevolent AGI/ASI?
My concern is that adding other domains will push it out of the current "primarily trained on predictive skill" equilibrium and into a directly agentic equilibrium, and this will shrink the window of time (and intelligence level) during which you have access to superhuman AI before it kills you.