Isn’t this the entire point of the Effective Altruism movement: ranking the suffering in the world so that it can be addressed in the most cost-efficient manner?
I know you’re doing a rah-rah anti-woke thing, but I feel like EA fits your criteria better.
Can you expand on why EA is naive, leaving aside the focus on unaligned AGI?
I guess my worry about the project is that world suffering isn’t a legible thing that can easily be measured and managed. Still, the problem is too important to give up on.