we don't see this all the time because simulating a different emotion is inefficient and exhausting, and inefficient agents get outcompeted. but if computational expense weren't an issue, you could make agents who pretend full-time
yes, you've changed my mind! (about an RL agent with the same complexity & behavior but only positive reward having a different experience) (though i still have enough uncertainty that, when implementing RL, i'd use positive rewards instead of negative ones when it's easy to do so)
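Since the parenthetical touches an implementation point: below is a minimal sketch, assuming an undiscounted, fixed-horizon setting, of why the choice between positive and negative rewards is often cosmetic to the learned behavior; shifting every reward by a constant c changes each policy's return by the same amount (c times the horizon), so the ranking of policies is unchanged. The per-step rewards and policy labels here are hypothetical, not from the thread.

```python
# Sketch: in a fixed-horizon, undiscounted setting, adding a constant
# offset to every per-step reward shifts every policy's return by
# offset * T, so the comparison between policies is unchanged.

def episode_return(rewards, offset=0.0):
    """Undiscounted return of one fixed-length episode, with each
    per-step reward shifted by `offset`."""
    return sum(r + offset for r in rewards)

# Hypothetical per-step rewards collected by two policies over T = 4 steps.
policy_a = [-1.0, -1.0, 0.0, -1.0]  # mostly negative rewards
policy_b = [-1.0, 0.0, 0.0, 0.0]

for offset in (0.0, 1.0):  # offset = 1.0 makes every reward non-negative
    ra = episode_return(policy_a, offset)
    rb = episode_return(policy_b, offset)
    print(f"offset={offset}: A={ra}, B={rb}, best={'B' if rb > ra else 'A'}")
# With either offset, policy B beats policy A by the same margin (2.0):
# the shift changes the numbers, not the comparison between policies.
```

One caveat: this equivalence breaks down when episode length varies, since a positive offset then rewards prolonging episodes, which is one practical reason the sign convention can still matter.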
-
-
maybe you'll change my mind some more. in a contrived scenario where my brain is hooked up to a simple world, and i feel myself thinking through the problem and making decisions, would you carve my experience into two? the part equivalent to a simpler agent, and the rest of me?
-
I'm confused about this. I don't think this makes sense for humans (or maybe... method actors?) but I can imagine some architectures (like the Chinese room) for which it would. It depends on the fidelity of the emulation, I think.