If done right, self-improving AI has the potential to end all suffering by ending conscious life on the planet. But how can we make sure the AI is safe and sterilizes the planet properly, so that no new suffering, which we would be helpless to prevent, ever springs up again?
Survival is an abstraction. Our physiological needs regulate actual systemic parameters, our cognitive needs are indifferent to survival, and our social needs may even drive us to sacrifice ourselves.