AGI is the eschatological leap of faith that a series of mottes will converge to a bailey. The big achievement of the AGI thought-experiment crowd (not of AI practice) has been to relabel objections to their motte-and-bailey fallacy as "moving the goalposts" — a failure of vision on the part of skeptics.
It’s roughly like believing that building better and better airplanes will converge to a time machine or hyperspace drive. The adjective “general” does way too much work in a context where “generality” is a fraught matter.
I should add: I don’t believe humans are AGIs either.
In fact, I don't think "AGI" is a well-posed concept at all. There are approximately Turing-complete systems, but it is invalid to assume that the generality of a UTM, which is defined relative to the clean notion of computability, is the same thing as generality of "intelligence."
Curious for your take on https://arxiv.org/abs/1703.10987, which satirizes your position. Of course it's snarky, but I figure it's within your style to appreciate it?
A kinda silly category error? Size is a low-level physical parameter; "intelligence" is a bureaucratic measure of a complex system. This sort of argument would work against, for example, skeptics of Moore's law.