No, I don't see why knowing where my morality came from indicates nihilism. We already know it is there, that we didn't choose it, and that we can't lose it without a frontal lobotomy. Trying to work out what is best for humans is something we'll be doing forever.
If you want something bigger than us to come & validate those aspects of us that are humane, humanitarian and humanist, I can't help you. I can only tell you, and provide sources for, what is already there - the best aspects of our nature - and argue that it should be extended to all.
-
-
The optimal human morality would be the system which maximises human wellbeing and minimises human suffering, because this is what all humans want for themselves, and it is extended to all humans for that reason too. We can disagree and go our own way within that on how to live.
-
There is no more to it than this. You can ask how I justify this rationally to someone who doesn't want to do it, and if that requires something outside shared human needs and moral foundations, I can't. This is a human morality expanded to all humans.
-
Perhaps I'm reading you incorrectly, but it appears that you are conceding that empirical reasoning alone is insufficient to provide a rational argument as to why people ought to expand their circle of empathy to all humans. Do I have that right?
-
??? I really can't explain it any better than that, I'm afraid. If you still don't understand what I mean, saying it all over again is unlikely to help. I am speaking to what is the optimum human morality, which SH explains by saying it is akin to an optimum human diet.
-
You're not following me for the reason religious people generally don't. You'd first need to accept, at least for the sake of argument, that morality is not something humans seek outside themselves but a quality of us, one that we can understand as a whole load of 'is's and get right.
-
I am perfectly willing to entertain the idea. What I can't understand is how all those 'is's' can possibly provide a rational argument to follow moral precepts that might, on occasion, be against our individual self interest.
-
It's the same as how all those 'is's can provide a rational argument for eating an optimal diet that might, on occasion, be against our preferences. These are different things - what is optimal and what we want to do.
-
If we could programme all the 'is's that pertain to human morality into a computer, and it could calculate the optimum human morality in any given situation, this still won't wash with someone who isn't thinking morally but selfishly. Those would be different calculations.
End of conversation