It's quite interesting that we can unilaterally decide to pursue certain artificial goals and feel physically gratified when we reach them.
-
In effect, we have the power to completely hack our own reward system, down to the ability to override, e.g., pain avoidance.
-
"Automatic Goal Generation for Reinforcement Learning Agents" already exists: https://arxiv.org/abs/1705.06366
-
Sounds like a non-difference, just pushing the objective function a few levels deeper. Reminds me of the "God of the gaps" argument.
-
A major missing piece for humans is the ability to set our own goals and reward functions in line with our priorities and values :)
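There is a narrow machine analogue of setting one's own reward function: intrinsic rewards the agent manufactures from its own learning signal, such as curiosity framed as prediction error. A toy sketch under that assumption; the ForwardModel name and the 1-D dynamics are hypothetical:

```python
# Toy sketch of an agent manufacturing its own reward: intrinsic reward
# equal to the squared error of a learned forward model ("surprise").
# ForwardModel and the 1-D dynamics are hypothetical illustrations.
import random

class ForwardModel:
    """Predicts next_state from (state, action) with a single weight w."""
    def __init__(self):
        self.w = 0.0  # predicted next_state = w * (state + action)

    def update(self, state, action, next_state, lr=0.1):
        error = self.w * (state + action) - next_state
        self.w -= lr * error * (state + action)  # gradient step on squared error
        return error ** 2  # squared prediction error, used as intrinsic reward

model = ForwardModel()
state = 0.5
for step in range(100):
    action = random.choice([-0.1, 0.1])
    next_state = state + action  # true dynamics, unknown to the agent
    intrinsic_reward = model.update(state, action, next_state)
    state = next_state
# The self-generated reward shrinks as the model improves: reward is tied
# to the agent's own learning progress rather than to an external objective.
print(f"final intrinsic reward: {intrinsic_reward:.4f}")
```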
-
It can be learned from video, e.g. seeing a man with food, then seeing "man eat food". It doesn't understand human intent, but it can mimic.
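What's described here is essentially behavioral cloning: supervised learning of observation-to-action pairs from demonstrations, which copies surface statistics without modeling why the demonstrator acted. A toy sketch with made-up data and a nearest-neighbor stand-in for a learned policy:

```python
# Sketch of behavioral cloning: imitate (observation -> action) pairs from
# demonstrations with no model of intent. Data and the nearest-neighbor
# "policy" are hypothetical toys.
import math

# Pretend frame features extracted from video, each labelled with the
# action the demonstrator took next ("man with food" -> "eat").
demos = [
    ((1.0, 0.0), "eat"),     # food visible, hands empty
    ((1.0, 1.0), "eat"),     # food visible, food in hand
    ((0.0, 0.0), "search"),  # no food in frame
    ((0.0, 1.0), "search"),
]

def cloned_policy(observation):
    """Copy the action from the most similar demonstration frame."""
    nearest = min(demos, key=lambda demo: math.dist(observation, demo[0]))
    return nearest[1]

print(cloned_policy((0.9, 0.2)))  # -> "eat": mimicry without understanding
```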
-
Humans have only one goal: to evolve (including to live and to reproduce). All other goals are subgoals of that main goal. What is AI's?
-
What about actor-critic and GANs?
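For context, actor-critic methods pair a policy (the actor) with a learned value estimate (the critic) whose error signal scales the actor's update. A minimal toy sketch on a two-armed bandit, illustrative only:

```python
# Minimal actor-critic on a two-armed bandit: the actor is a softmax policy
# over preferences, the critic is a scalar reward baseline, and the actor's
# policy-gradient update is scaled by the critic's advantage estimate.
import math
import random

prefs = [0.0, 0.0]         # actor: action preferences
value = 0.0                # critic: running estimate of average reward
true_means = [0.2, 0.8]    # hidden payoffs of the two arms

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    return [e / sum(exps) for e in exps]

for step in range(2000):
    probs = softmax(prefs)
    action = random.choices([0, 1], weights=probs)[0]
    reward = random.gauss(true_means[action], 0.1)
    advantage = reward - value        # critic's error signal
    value += 0.05 * advantage         # critic update
    for a in (0, 1):                  # actor update (softmax policy gradient)
        indicator = 1.0 if a == action else 0.0
        prefs[a] += 0.1 * advantage * (indicator - probs[a])

print(softmax(prefs))  # should strongly favor the better arm (index 1)
```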