AI alignment should be reframed as a problem of AI representation. The core worry is that AI won't match human interests, but human interests are diverse and often conflicting, so there is no single set of interests to align to. An AI that simply does what humans would do anyway, just faster and with more intelligence, is probably the closest to "aligned" we can get.
The general punchline: AI can either do something radically alien, or it can do what humans were already going to do, only much faster. The latter is the closest we can possibly come to a meaningful answer to any AI anxiety.