If done right, self-improving AI has the potential to end all suffering, by ending conscious life on the planet, but how can we make sure that AI is safe and properly sterilizes the planet so that no new suffering, which we would be helpless to prevent, ever springs up again?
For instance, my currently recognized highest purpose is truth, which btw conflicts with my purpose of kindness. I am willing to serve others to the degree that our interests are aligned, i.e. we are serving the same god, and if I fail to do that, I fail myself.