Are there behaviors that aren’t representable as the maximum of a utility function? My gut says that you can add enough epicycles to a utility function to get whatever you want, and that the real meta-rational consideration is that the utility function be “simple”
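(For what it's worth, the "epicycles" hunch matches a standard construction, mine rather than anything stated in the thread: any input-to-action mapping π is trivially the maximizer of an indicator utility, so "representable as utility maximization" is vacuous until you demand that u be simple.)

\[
u(s, a) = \mathbf{1}[\, a = \pi(s) \,], \qquad \pi(s) \in \arg\max_{a} u(s, a).
\]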
-
Simplicity trades off against other relevant considerations. See, e.g., the discussion of multiple stakeholders in @JohnDCook's OP.
-
Formulating *all* problems in terms of ‘utility maximization’ is the original sin from which the rest of the troubles arise
-
- Let's make a utility-maximizer.
- That is too computationally intensive. Precompute the environment-to-best-action mapping, do clustering [!!!] on it, call the result deontology.
- Hm, humans actually make most decisions via habits. Call habits virtuous/vicious, then.
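(A toy sketch of that middle step, to make the joke concrete. Everything here is made up for illustration: the state features, the stand-in utility, and the cluster count are all hypothetical.)

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))  # hypothetical environment states
actions = range(6)                  # hypothetical discrete actions

def utility(state, action):
    # Stand-in utility function, invented for the demo.
    return state[action % 4] - 0.1 * action

# Step 1: precompute the environment-to-best-action mapping.
best_action = np.array([max(actions, key=lambda a: utility(s, a)) for s in states])

# Step 2: cluster the states, then read off each cluster's majority best action.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(states)
rules = {c: int(np.bincount(best_action[km.labels_ == c]).argmax()) for c in range(8)}

def deontic_action(state):
    # "Deontology": apply the cluster's rule instead of re-optimizing.
    return rules[int(km.predict(state.reshape(1, -1))[0])]

print(deontic_action(rng.normal(size=4)))
```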
-
Agree completely. And real-world problems make this worse, because our intuitive sense of utility involves multiple utilities that are simultaneously in play and that dynamically change in relative importance.
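(One way to picture that, my illustration rather than the poster's: scalarize several component utilities with weights that drift over time, so the "best" action shifts even though each component utility is fixed. All numbers below are invented.)

```python
import numpy as np

# Three hypothetical actions scored by two hypothetical component utilities.
U = np.array([[1.0, 0.2, 0.5],   # component A's utility for each action
              [0.1, 0.9, 0.5]])  # component B's utility for each action

for t in range(5):
    # Weights drift over time, so the scalarized optimum moves too.
    w = np.array([np.cos(t / 2) ** 2, np.sin(t / 2) ** 2])
    scores = w @ U
    print(f"t={t} scores={scores.round(2)} best action={scores.argmax()}")
```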
-
I think there are symmetry conditions which can be viewed as rational.