It makes sense. E.g., take a simple RL agent: you can get the same behavior from it whether you give a negative reward for lava contact or a positive reward for staying away from lava. But I think it'd get insanely computationally expensive for more complex agents like us.
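The equivalence claimed here can be made concrete: shifting every reward by a constant (turning "-1 on lava, 0 elsewhere" into "0 on lava, +1 elsewhere") leaves the optimal policy of a discounted MDP unchanged. A minimal sketch, assuming a hypothetical 1-D gridworld with lava at position 0 (the world, reward values, and discount are illustrative, not from the thread):

```python
# Hypothetical 1-D gridworld: positions 0..4, lava at position 0.
# Two reward schemes that differ only by a constant (+1) induce the
# same optimal policy under discounted value iteration.

N, LAVA, GAMMA = 5, 0, 0.9
ACTIONS = (-1, +1)  # move left / move right

def step(s, a):
    """Deterministic transition, clipped to the grid."""
    return min(max(s + a, 0), N - 1)

def r_penalty(s):
    """Negative reward for lava contact: -1 on lava, 0 elsewhere."""
    return -1.0 if s == LAVA else 0.0

def r_bonus(s):
    """Positive reward for avoiding lava: 0 on lava, +1 elsewhere."""
    return r_penalty(s) + 1.0  # constant shift of the penalty reward

def optimal_policy(reward, iters=200):
    """Value iteration, then greedy action per state."""
    V = [0.0] * N
    for _ in range(iters):
        V = [max(reward(step(s, a)) + GAMMA * V[step(s, a)] for a in ACTIONS)
             for s in range(N)]
    return [max(ACTIONS, key=lambda a: reward(step(s, a)) + GAMMA * V[step(s, a)])
            for s in range(N)]

# Both framings produce identical behavior:
print(optimal_policy(r_penalty) == optimal_policy(r_bonus))  # → True
```

The constant shift adds the same `1/(1-GAMMA)` to every state's value, so every greedy comparison comes out the same way; nothing in the agent's behavior distinguishes the "punishment" framing from the "bonus" framing.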
Take an unnecessarily powerful and complex agent, evolved for a complex world, living in an extremely simple world in which simple agents make the correct moves, and whose output is the same as that of a simple agent. Is the complex agent's experience the same as the simple agent's experience?
I don't think so. Depending on how complex the emulation of the simple agent is, there could actually be TWO sense-experience sets. (e.g. I think a Chinese room would be like this)
I think there is a difference between 'agent pretending to be simple' and 'RL agent with only positive reward', though. In the case of the RL agent, even the *internals* of the agent can be simply described with a model that includes suffering.