A major missing piece for AGI is humans' ability to set their own goals and reward functions, e.g. feeling rewarded for winning a "game". https://twitter.com/fchollet/status/876227262437179392
-
When thinking of humans in terms of RL, it's surprisingly hard to figure out where to draw the boundary between agent and environment.
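To make that boundary question concrete, here is a minimal sketch of the standard RL interaction loop (toy Environment/Agent classes invented for illustration, not from any particular library): everything inside step(), including the reward computation, is conventionally "environment", even though for a human the analogous machinery lives inside the body.

```python
# Minimal RL interaction loop. The Agent/Environment split below is a modeling
# choice: reward computation sits on the "environment" side here, but for a
# human it could just as well be drawn inside the agent (hormones, learned
# values, ...). Toy example only.
import random

class Environment:
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Dynamics and reward both live on the "environment" side of the
        # boundary by convention, even though a real animal's reward signal
        # is generated inside its own body.
        self.state += action
        reward = 1.0 if self.state == 3 else 0.0
        done = self.state >= 3
        return self.state, reward, done

class Agent:
    def act(self, observation):
        # Everything inside this method is "agent": a trivial random policy.
        return random.choice([0, 1])

env, agent = Environment(), Agent()
obs, done = env.reset(), False
while not done:
    action = agent.act(obs)
    obs, reward, done = env.step(action)
```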
-
@pyoudeyer's work is a great baseline for thinking about these issues.
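For context, one recurring idea in that line of work is intrinsic motivation: the agent generates its own reward signal, e.g. from the prediction error (or learning progress) of its own forward model. The sketch below is a deliberately crude illustration of that idea; the class and function names are illustrative, not Oudeyer's, and real systems typically use learning progress rather than raw surprise.

```python
# Toy sketch of an intrinsically generated reward: the agent rewards itself
# for the prediction error of its own forward model (a crude stand-in for the
# curiosity / learning-progress ideas studied by Oudeyer and colleagues).
# All names here are illustrative, not code from that work.
import random

class ForwardModel:
    def __init__(self):
        self.estimate = 0.0  # running estimate of the next state

    def predict(self, state, action):
        return self.estimate

    def update(self, state, action, next_state, lr=0.1):
        self.estimate += lr * (next_state - self.estimate)

def intrinsic_reward(model, state, action, next_state):
    # Reward = surprise: how wrong the agent's own prediction was.
    return abs(next_state - model.predict(state, action))

model, state = ForwardModel(), 0.0
for step in range(20):
    action = random.choice([-1.0, 1.0])
    next_state = state + action + random.gauss(0, 0.1)  # unknown dynamics
    r_int = intrinsic_reward(model, state, action, next_state)
    model.update(state, action, next_state)
    state = next_state
```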
-
Agree for cases where it can increase intelligence/achievement, but more broadly, goal creation as intelligence seems problematic.
-
Seems like more of a wisdom thing in the general case, bordering on ethics in some cases, than intelligence per se, no?
-
And goal-exploration independence seems problematic if we are to interface with AIs only as tools rather than as co-agents.
-
At a high level, all humans and many animals have similar goals: eat, reproduce, stay safe. The interesting part is the different subgoals.
-
Most people don't actualize that ability, though. It's very rare that a person thinks about his/her goals from scratch.
-
I guess there's a hierarchy of cost functions, e.g. "I want to become a better hunter," but the real objective is "prevent entropy increase."
-
Like curriculum learning, but with objective functions. Perhaps some people can hack objectives that are higher up in the hierarchy.
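A toy sketch of that two-level picture, under assumptions of my own (all names and numbers are illustrative): a fixed outer objective stands in for "prevent entropy increase", and it selects, curriculum-style, which inner proxy objective ("become a better hunter", "become a better farmer") to pursue next; "hacking" objectives higher up in the hierarchy would amount to rewriting the outer function itself.

```python
# Toy two-level hierarchy of objective functions (illustrative assumptions,
# not a real model). Each inner "proxy" objective means "improve this one
# skill"; the fixed outer objective stands in for the top-level goal.
proxy_objectives = {
    "become a better hunter": "hunting",
    "become a better farmer": "farming",
}

def outer_objective(skill):
    # Fixed top of the hierarchy; "hacking" a higher objective would mean
    # rewriting this function rather than choosing among the proxies below.
    return 0.7 * skill["hunting"] + 0.3 * skill["farming"]

skill = {"hunting": 0.0, "farming": 0.0}
for epoch in range(5):
    # Curriculum step: pick the proxy objective whose pursuit currently
    # yields the largest improvement of the outer objective.
    best_name, best_gain = None, float("-inf")
    for name, key in proxy_objectives.items():
        trial = dict(skill)
        trial[key] += 0.1  # imagine practising this skill for a while
        gain = outer_objective(trial) - outer_objective(skill)
        if gain > best_gain:
            best_name, best_gain = name, gain
    skill[proxy_objectives[best_name]] += 0.1  # actually pursue it
    print(epoch, best_name, round(outer_objective(skill), 2))
```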