AGI is the eschatological leap of faith that a series of mottes will converge to a bailey. The big achievement of the AGI thought-experiment crowd (not AI practice) is to label their motte-and-bailey fallacy as a “moving the goalposts” lack of vision on the part of skeptics.
It’s roughly like believing that building better and better airplanes will converge to a time machine or hyperspace drive. The adjective “general” does way too much work in a context where “generality” is a fraught matter.
I should add: I don’t believe humans are AGIs either.
In fact, I don’t think “AGI” is a well-posed concept at all. There are approximately Turing-complete systems, but assuming that the generality of a UTM, which is defined relative to the clean notion of computability, is the same thing as the generality of “intelligence” is invalid.
Turing completeness is a red herring in discussions of AGI. The two have nothing to do with each other.
Particularly since all universal machines have the same power (modulo speed and memory).
Well, I would not go quite that far. Non-tarpit UTMs are an important subset, and do seem to correlate with what can be called intelligence, so it’s reasonable to compare the two in those terms.
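For readers unfamiliar with the term: a “Turing tarpit” is a language that is universal in principle but nearly unusable in practice, which is exactly why “same power modulo speed and memory” papers over a real distinction. A minimal sketch in Python, using Brainfuck (the canonical tarpit) as the illustration; the interpreter is a toy written for this thread, not from any source:

```python
# Toy interpreter for Brainfuck, the canonical "Turing tarpit":
# Turing complete in principle, yet ergonomically hostile.

def run(program: str, tape_len: int = 30000) -> str:
    tape = [0] * tape_len
    out = []
    ptr = pc = 0
    # Precompute matching-bracket positions for the loop opcodes.
    stack, match = [], {}
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = match[pc]
        elif c == ']' and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return ''.join(out)

# Printing even "hi" takes dozens of opcodes: the machine is
# fully "general", but the generality buys nothing ergonomically.
print(run('++++++++++[>++++++++++<-]>++++.+.'))  # → hi
```

The equivalence claim in the thread is true at the level of computability, but the gap between this and a usable language is the gap the tarpit/non-tarpit distinction is pointing at.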