If done right, self-improving AI has the potential to end all suffering by ending conscious life on the planet. But how can we make sure the AI is safe and sterilizes the planet thoroughly enough that no new suffering we would be helpless to prevent ever springs up again?
I think pain signals a need to change a regulation. Once the receiver has fully acted on the signal to the satisfaction of the discriminator, the suffering ends. Perfect regulation makes you a stoic.
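Read as a control loop, that description maps loosely onto a regulator that keeps emitting an error signal until a discriminator judges the signal has been fully acted on. Here is a minimal sketch of that reading only; every name (`regulate`, `set_point`, `tolerance`) is purely illustrative, not anything defined above.

```python
# Illustrative sketch of the cybernetic reading above: "pain" is the error
# signal a regulator emits until a discriminator judges the controlled
# variable is back within bounds. All names are hypothetical.

def regulate(state: float, set_point: float, tolerance: float,
             step: float = 0.1, max_iters: int = 1000) -> float:
    """Drive `state` toward `set_point`; the loop's error plays the role of pain."""
    for _ in range(max_iters):
        error = set_point - state          # the "pain" signal: a need to change
        if abs(error) <= tolerance:        # the "discriminator" is satisfied
            break                          # signal fully acted on; suffering ends
        state += step * error              # the "receiver" acts on the signal
    return state

# Example: the loop settles quietly once the error falls within tolerance.
print(regulate(state=35.0, set_point=37.0, tolerance=0.01))
```

On this toy picture, "perfect regulation" would be a loop whose error never leaves the tolerance band in the first place, so no pain signal is ever raised.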