A major missing piece for AGI is humans' ability to set their own goals and reward functions, e.g. feeling rewarded for winning a "game". https://twitter.com/fchollet/status/876227262437179392
-
It is also the ability to figure out which goals you should pursue, which in turn guides the development of your own intelligence.
-
When thinking of humans in terms of RL, it's surprisingly hard to figure out where to draw the boundary between agent and environment.
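A minimal sketch of why that boundary is ambiguous, assuming a toy Gym-style loop (the class and function names here are hypothetical, not from the thread): the reward can be computed by the environment's step function, or the agent can score its own observations, which moves the reward function inside the agent.

```python
# Hypothetical sketch: two places to draw the agent/environment boundary.
import random


class Environment:
    """Conventional RL framing: the environment computes the reward."""

    def step(self, action):
        observation = random.random()
        reward = 1.0 if action == "win" else 0.0  # reward defined outside the agent
        return observation, reward


class SelfRewardingAgent:
    """Alternative framing: the agent scores its own raw observations,
    so the reward function sits inside the agent rather than the environment."""

    def act(self, observation):
        return "win" if observation > 0.5 else "explore"

    def internal_reward(self, observation):
        # Placeholder for a learned / self-chosen reward signal.
        return observation


env = Environment()
agent = SelfRewardingAgent()
obs, external_r = env.step(agent.act(0.0))  # conventional boundary: env hands back a reward
internal_r = agent.internal_reward(obs)     # shifted boundary: the agent rewards itself
```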
-
Satinder Singh was discussing learning intrinsic rewards for RL back at ICML '09: http://web.eecs.umich.edu/~baveja/Papers/FinalNIPSIMRL.pdf
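A minimal sketch of that intrinsic-reward idea, under the common formulation where the agent optimizes an extrinsic reward plus a separate intrinsic bonus (the class name, beta weight, and count-based novelty heuristic below are illustrative assumptions, not taken from the linked paper):

```python
# Hypothetical sketch of intrinsically motivated RL:
# the agent learns from r_total = r_extrinsic + beta * r_intrinsic,
# with a simple count-based novelty bonus standing in for a learned signal.
from collections import defaultdict


class IntrinsicRewardAgent:
    def __init__(self, beta=0.1):
        self.visit_counts = defaultdict(int)
        self.beta = beta  # weight on the intrinsic term

    def intrinsic_reward(self, state):
        # Novelty bonus: states visited less often earn a larger bonus.
        self.visit_counts[state] += 1
        return 1.0 / self.visit_counts[state]

    def total_reward(self, state, extrinsic_reward):
        return extrinsic_reward + self.beta * self.intrinsic_reward(state)


agent = IntrinsicRewardAgent()
print(agent.total_reward(state="room_1", extrinsic_reward=0.0))  # 0.1 on first visit
```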