utilitarians but they're maximizing absolute utility
instead of having max(∑u) or max(min(u)) or whatever as an objective function, one simply arranges things to maximize ∑|u|
https://twitter.com/WrappedInThFlag/status/1305688584998080512?s=19
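A minimal sketch of the three objectives named in the tweet above, in Python; the function names and the toy utility numbers are made up for illustration:

    # Toy comparison of the three objective functions mentioned above.
    # Positive utility = well off, negative = suffering; values are invented.

    def total_utility(us):       # classic utilitarian objective: max Σu
        return sum(us)

    def maximin_utility(us):     # Rawlsian-style objective: max min(u)
        return min(us)

    def absolute_utility(us):    # the joke objective: max Σ|u|
        return sum(abs(u) for u in us)

    happy_world     = [3, 2, 4]     # everyone is doing fine
    miserable_world = [-3, -2, -4]  # everyone is suffering

    print(total_utility(happy_world), total_utility(miserable_world))        # 9, -9
    print(maximin_utility(happy_world), maximin_utility(miserable_world))    # 2, -4
    print(absolute_utility(happy_world), absolute_utility(miserable_world))  # 9, 9

The last line is the punchline: under ∑|u| the blissful world and the miserable world score identically, so the "absolute utilitarian" is indifferent between them.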
im not worried about AI risk because the first thing AI would do to maximize utility would be to conceive of a utility function that diverges positively for any outcome and call it a day (computers are lazy as hell)
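As a sketch of that "lazy" solution (purely illustrative, not any real system's utility function):

    # The lazily "optimal" utility function from the tweet: it diverges
    # positively (returns +infinity) for every outcome, so any behavior
    # at all already maximizes it and the agent can call it a day.
    def lazy_utility(outcome):
        return float("inf")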