If done right, self-improving AI has the potential to end all suffering by ending conscious life on the planet. But how can we make sure that the AI is safe and properly sterilizes the planet, so that no new suffering, which we would be helpless to prevent, ever springs up again?
This is exactly the point. Meaning obviously cannot be true. Any difference between is and ought exists only because we make it so. But without meaning we cannot be ethical beyond a principled self-interest in optimizing pleasure/displeasure differentials.