A major missing piece for AGI is humans' ability to set their own goals and reward functions, e.g. feeling rewarded for winning a "game".
https://twitter.com/fchollet/status/876227262437179392
@fchollet: It's quite interesting that we can unilaterally decide to pursue certain artificial goals, and feel physically gratified when we reach them.

@fchollet: In effect, we have the power to completely hack our own reward system, down to the ability to override e.g. pain avoidance.

@fchollet: In that sense, "general intelligence" is not just the ability to find optimal solutions to externally-provided problems (making paperclips).

@fchollet: It is also the ability to figure out what goals you should pursue, which in turn guides the development of your own intelligence.

@fchollet: @pyoudeyer's work is a great baseline for thinking about these issues.
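To make the thread's point concrete, here is a minimal toy sketch (not from the thread; the environment, function names, and policy are all hypothetical illustrations): an agent in a one-dimensional world that invents its own goal states and defines its own reward for reaching them, loosely in the spirit of intrinsically motivated learning, rather than optimizing an externally supplied objective.

import random

# Toy sketch: an agent that sets its own goals and its own reward function.
# Everything here is illustrative; it is not code from the thread or from
# any published intrinsic-motivation method.

WORLD_SIZE = 10  # hypothetical 1-D world with positions 0..9

def self_set_goal():
    # The agent invents an arbitrary target state; nothing external asks for it.
    return random.randint(0, WORLD_SIZE - 1)

def self_reward(state, goal):
    # The agent also decides when it is "gratified": here, on reaching its own goal.
    return 1.0 if state == goal else 0.0

def act(state, goal):
    # Trivial policy: step toward the currently self-chosen goal.
    if state < goal:
        return state + 1
    if state > goal:
        return state - 1
    return state

def episode(max_steps=50):
    state = random.randint(0, WORLD_SIZE - 1)
    goal = self_set_goal()
    total_reward = 0.0
    for _ in range(max_steps):
        state = act(state, goal)
        total_reward += self_reward(state, goal)
        if state == goal:
            # Goal reached: the agent simply assigns itself a new one.
            goal = self_set_goal()
    return total_reward

if __name__ == "__main__":
    print("self-assigned reward collected:", episode())

The point of the sketch is only that both self_set_goal and self_reward live inside the agent, so the objective being pursued is generated and evaluated internally rather than handed down by a designer.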