I really just don't see how the human race can be entrusted with something like artificial consciousness. This worries me much more than the question of an AI that can rewrite its own code. Emergent confusions, at least, aren't claims that mass eternal torture is a fundamental good.
Eternal hell for non-believers in nonsense is a moral good to a substantial portion of humans. That's the terminal value, not an incidental confusion along the way. If an AI had good terminal values, I would probably take my chances with it over democratic or human processes.