Maybe I get you more with last tweet @Meaningness — you don’t see Actual Person as agent-with-a-goal, but pluralistic with sometimes conflicting goals; DT doesn’t apply holistically b/c Actual Person has no Actual Utility Function; your point more organismic than mathy — close?
And, if the force of the claim is “should,” the question would be what sort of “should” that is.
-
My suspicion is that this turns out to be circular. You start with the implicit assumption that there must be something to maximize. That is what gives “should” its force: you *should* apply DT, because if you don’t, you won’t maximize.
-
Maybe the “Law” formulation is just: “if you accept these premises, then these consequences hold.” But that’s just equivalent to “this is actual math,” which no one doubts.
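A concrete instance of that conditional reading, taking the von Neumann–Morgenstern representation theorem as the standard example (the thread doesn’t name a specific result, so this is only an illustration): if an agent’s preferences $\succeq$ over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists a utility function $u$ such that, for any lotteries $L$ and $M$,

$$ L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]. $$

The “should” only gets a grip once you grant the axioms, i.e. once you have already assumed there is a single coherent thing to maximize, which is exactly the premise in question for an Actual Person.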
End of conversation