I don’t know if this works: I could build Rube Goldberg machines of arbitrary complexity, more complex than any human, and then in this model they would have more moral value than humans.
-
-
The older I get, the more I tend toward a subjective-eye rather than a god's-eye view of morality, in the sense that I believe ethics should preferentially focus on "the feeling human subject" rather than on their bodies or physical existence.
-
The EA "suffering is inherently bad" stuff is a first step, but it's also obviously wrong: a life without suffering would not be "a good life." There must be a balance.
-
Personally I agree but I don't really know how to answer lol.