Yes… in the presence of conflicting goals, one would need an objective function (or something roughly equivalent) expressing how to trade them off. Otherwise the framework doesn’t apply.
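A minimal sketch of that point, assuming (hypothetically) two conflicting goals scored on a common scale and a weighted sum standing in for the "objective function or something roughly equivalent"; without something like the weights, "maximize" isn't well defined across the goals:

```python
# Hypothetical example (not from the thread): two conflicting goals
# collapsed into one objective via assumed weights, so that
# "maximize" becomes well defined.

def speed(option):
    """Hypothetical goal 1: how fast the option is (higher is better)."""
    return {"train": 0.4, "plane": 0.9, "car": 0.6}[option]

def comfort(option):
    """Hypothetical goal 2: how comfortable the option is (higher is better)."""
    return {"train": 0.8, "plane": 0.3, "car": 0.6}[option]

def objective(option, w_speed=0.7, w_comfort=0.3):
    """Weighted-sum scalarization: the trade-off lives entirely in the weights."""
    return w_speed * speed(option) + w_comfort * comfort(option)

if __name__ == "__main__":
    options = ["train", "plane", "car"]
    best = max(options, key=objective)
    print(best)  # with these particular weights: "plane"
```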
I don’t actually care what gods can or can’t do. But since actions, outcomes, and preferences are not objective features of reality, I don’t think even gods could always apply DT.
-
And if the force of the claim is a “should,” the question is what sort of “should” that is.
-
My suspicion is that this turns out to be circular. You start with the implicit assumption that there must be something to maximize, and that is what gives “should” its force: you *should* apply DT, because if you don’t, you won’t maximize it.