AGI is the eschatological leap of faith that a series of mottes will converge to a bailey. The big achievement of the AGI thought-experiment crowd (not AI practice) is to label their motte-and-bailey fallacy as a “moving the goalposts” lack of vision on the part of skeptics.
It’s roughly like believing that building better and better airplanes will converge to a time machine or hyperspace drive. The adjective “general” does way too much work in a context where “generality” is a fraught matter.
I should add: I don’t believe humans are AGIs either.
In fact, I don’t think “AGI” is a well-posed concept at all. There are approximately Turing-complete systems, but assuming that the generality of a UTM, which is defined relative to the clean notion of computability, is the same thing as generality of “intelligence” is invalid.
At least the really silly foundation in IQ and psychometrics is withering away. The Bostrom-style simulationist foundation is at least fun to think about, though even sillier if taken literally. But it does highlight the connection to the hard problem of consciousness.
I’ve been following this conversation since it began about 15 years ago, and I feel I need to re-declare my skepticism every few years, since it’s such a powerful attractor around these parts. Like periodically letting my extended religious family know I’m not religious.
It’s interesting that the AGI ideology only appeared late in the AI winter, despite the associated pop tropes (Skynet, etc.) being around much longer. AGI is a bit like the philosopher’s stone of AI. It has sparked interesting developments, just as alchemy did for chemistry.
In the form of thought experiments about exponentially self-improving paperclip maximizers and such. Older AI visions were more like science fiction tropes than seriously argued positions. When you read Simon or Minsky, it’s much more grounded stuff.
Some early stuff got pretty wild in ambition.
But I think the most salient difference is that Simon and Minsky were actually AI practitioners. If what you mean is that we now have people like Yudkowsky who see their job as predicting and forecasting general AI, then I agree.