luv 2 live in a post-pain world where upon encountering an object which should be avoided i spend the next three weeks meticulously updating the hedonic value of every other object
it makes sense. eg take a simple RL agent: you can get the same behavior from it whether you give negative reward for lava contact or positive reward for staying far from lava. but i think it'd get insanely computationally expensive for more complex agents like us
-
the fact that behavior is unchanged seems troubling... so if you did this to humans, they would continue to act in exactly the same way, including screaming "in agony" when in pain, talking about how much pain they're in, etc.
-
Since behavior is invariant under addition of constants to the utility function, I think whatever account of sense experience we come up with should also be invariant.
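the invariance claim is easy to check concretely. a minimal sketch (toy MDP with made-up numbers, not from the thread): run value iteration on rewards R and on R + c for a constant c, and the greedy policy comes out identical, since every value just shifts by c / (1 - gamma).

```python
import numpy as np

# Toy 4-state, 2-action MDP (hypothetical numbers, purely illustrative).
# P[s, a] -> next state (deterministic for simplicity); R[s, a] -> reward.
P = np.array([[1, 2],
              [3, 0],
              [3, 1],
              [3, 3]])          # state 3 is absorbing
R = np.array([[ 0.0, -1.0],
              [ 1.0,  0.0],
              [-1.0,  0.5],
              [ 0.0,  0.0]])
gamma = 0.9

def greedy_policy(R, iters=500):
    """Value iteration, then the greedy policy w.r.t. the resulting values."""
    V = np.zeros(len(P))
    for _ in range(iters):
        V = np.max(R + gamma * V[P], axis=1)   # Bellman backup
    return np.argmax(R + gamma * V[P], axis=1)

# Adding a constant c to every reward shifts every state's value by
# c / (1 - gamma), so the argmax -- and hence behavior -- is untouched.
c = 100.0
assert (greedy_policy(R) == greedy_policy(R + c)).all()
```

behavior only pins down reward up to transformations like this, which is the worry above: the agent with everything shifted into the positive range acts exactly like the one being "punished".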