Posit: the conceptual framework of reinforcement learning brings misconceptions about intelligence that have & will set back the field of AI
-
Wrt creating, I agree: intrinsic rewards to guide super-task learning.
@lawrennd has advocated inter-agent communication as important.
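A minimal sketch of what such an intrinsic reward could look like, assuming a count-based curiosity bonus (the class name, the beta weight, and the sqrt decay are illustrative choices, not from the thread):

    import math
    from collections import defaultdict

    class IntrinsicReward:
        # Illustrative sketch: wrap an external task reward with a
        # count-based curiosity bonus that decays as a state becomes familiar.
        def __init__(self, beta=0.1):
            self.beta = beta                # assumed weight of the bonus
            self.visits = defaultdict(int)  # state visitation counts

        def reward(self, state, external_reward):
            # A bonus of beta / sqrt(N(s)) rewards novelty, so the agent
            # keeps exploring even when the external reward is sparse.
            self.visits[state] += 1
            return external_reward + self.beta / math.sqrt(self.visits[state])

An agent maximizing this combined signal keeps visiting novel states even without dense task reward, which is one way an intrinsic signal can guide learning beyond a single task.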
-
Wrt *creating* or *measuring* intelligence? For the latter, the RL framework is quite useful (-> Legg-Hutter intelligence).
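For reference, the Legg-Hutter measure scores a policy \pi by its complexity-weighted expected reward across all computable environments \mu in a class E, where K(\mu) is the Kolmogorov complexity of \mu and V^{\pi}_{\mu} the expected total reward of \pi in \mu:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Simpler environments get exponentially more weight, so the measure favors agents that do well broadly rather than in one hand-picked task.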
-
Well, from the perspective of human interaction, I'll argue intelligence is a search/optimization problem: finding the best path among alternatives.
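Read literally, "best path among alternatives" is uniform-cost search; a minimal sketch, assuming the graph is given as an adjacency map of (neighbor, cost) pairs (all names here are illustrative):

    import heapq

    def best_path(graph, start, goal):
        # Uniform-cost (Dijkstra) search: expand the cheapest frontier
        # node first, so the first time we pop the goal the path is optimal.
        frontier = [(0, start, [start])]  # (cost so far, node, path)
        seen = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, step_cost in graph.get(node, []):
                if neighbor not in seen:
                    heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
        return None  # goal unreachable

    toy = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)]}
    print(best_path(toy, "a", "c"))  # (2, ['a', 'b', 'c'])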
-
I agree, & such exploration is not hard to try. Even simple experiments like this one make agents seem less static: https://twitter.com/hardmaru/status/775577429536407553
-
Whatever happened to the "Artificial Life" community? http://alife2018.alife.org/ http://cognet.mit.edu/journal/ecal2017 Seems like a largely parallel universe.
End of conversation
New conversation
-
Could you give the reasons why you think about it this way?
-
Agreed, intelligence isn't an optimization process, but reinforcement learning is the closest to how kids learn! Who says the reward can't be dynamic?
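A dynamic reward is easy to state; a minimal sketch, assuming a dense shaping term that anneals away over training so the agent ends up optimizing the raw task reward (the linear schedule is an illustrative choice, not from the thread):

    def dynamic_reward(task_reward, shaping_reward, step, total_steps):
        # Blend a dense shaping signal with the sparse task reward,
        # annealing the shaping weight linearly to zero over training.
        w = max(0.0, 1.0 - step / total_steps)
        return task_reward + w * shaping_reward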