AGI is the eschatological leap of faith that a series of mottes will converge to a bailey. The big achievement of the AGI thought-experiment crowd (as distinct from AI practice) has been to recast their motte-and-bailey fallacy as a “moving the goalposts” lack of vision on the part of skeptics.
It’s roughly like believing that building better and better airplanes will converge to a time machine or hyperspace drive. The adjective “general” does way too much work in a context where “generality” is a fraught matter.
I should add: I don’t believe humans are AGIs either.
In fact, I don’t think “AGI” is a well-posed concept at all. There are approximately Turing-complete systems, but assuming that the generality of a UTM (relative to the clean, formal notion of computability) is the same thing as generality of “intelligence” is invalid.
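To make that concrete: computational universality is an extremely low bar. A few dozen lines of Python are enough to interpret Brainfuck, a toy language that is Turing-complete given an unbounded tape. This is only an illustrative sketch (the run_bf helper is mine, invented for this comment): it is “general” in exactly the UTM sense, and nobody would call it intelligent.

```python
# Minimal Brainfuck interpreter: Turing-complete (given an unbounded tape),
# i.e. "general" in the computability sense, and obviously not intelligent.

def run_bf(program: str, stdin: str = "") -> str:
    tape = [0] * 30000           # finite here; unbounded in principle
    ptr = pc = 0                 # data pointer, program counter
    inp = iter(stdin)
    out = []

    # Precompute matching-bracket jump targets for [ and ].
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = ord(next(inp, "\0"))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]       # skip loop body when cell is zero
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]       # jump back while cell is nonzero
        pc += 1
    return "".join(out)

# 8 * 9 = 72 -> 'H', then +33 -> 105 -> 'i'. Universal machinery, trivial output.
print(run_bf("++++++++[>+++++++++<-]>." + "+" * 33 + "."))
```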
At least the really silly foundation built on IQ and psychometrics is withering away. The Bostrom-style simulationist foundation is fun to think about, though even sillier taken literally. But it does highlight the connection to the hard problem of consciousness.
I’ve been following this conversation since it began about 15 years ago, and I feel I need to re-declare my skepticism every few years, since it’s such a powerful attractor around these parts. Like periodically letting my extended religious family know I’m not religious.