only taking into account "behavior outside the brain" is missing half the picture, especially in non-evolutionary contexts
Replying to @gracecondition
Good point about acting. I think the reason this is possible is 1) people use relatively superficial heuristics to judge the emotions of others 2) we don't observe others most of the time
Replying to @an_interstice @gracecondition
Against a more powerful and patient observer, I think you would essentially need to implement some computation equivalent to whatever agent you are 'faking'. But then it seems like this computation ought to be conscious and have emotions
Replying to @an_interstice @gracecondition
I just thought of a counter-example(?): PNSE (persistent non-symbolic experience). People with PNSE verbally report a complete absence of suffering, but loved ones say that they can still appear visibly upset http://www.nonsymbolic.org/PNSE-Article.pdf (page 29)
Replying to @an_interstice
an unnecessarily powerful and complex agent, evolved for a complex world, living in an extremely simple world in which simple agents make the correct moves, whose output is the same as that of a simple agent: is the complex agent's experience the same as the simple agent's experience?
Replying to @gracecondition
I don't think so. Depending on how complex the emulation of the simple agent is, there could actually be TWO sense-experience sets. (e.g. I think a Chinese room would be like this)
Replying to @an_interstice @gracecondition
I think there is a difference between 'agent pretending to be simple' and 'RL agent with only positive reward', though. In the case of the RL agent, even the *internals* of the agent can be simply described with a model that includes suffering.
Replying to @an_interstice @gracecondition
That is, the internals can be described as 'update a normal suffering-based model, but store the resulting updates in this strange way that looks like no negative reinforcement occurred', and this description is simple (low K-complexity)
Replying to @an_interstice @gracecondition
in contrast, there is no such simple description of the internals of the 'agent pretending to be simple' in terms of the model it is imitating, so it makes less sense to describe it that way.
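As an aside, the claim that a reward signal can be stored "in a way that looks like no negative reinforcement occurred" without changing behavior has a concrete analogue in standard RL: shifting every reward by a constant leaves the greedy policy intact, while the stored values are just the original ("suffering-based") values plus a constant offset. The MDP below is an invented toy example, not anything from the thread:

```python
# Sketch: shifting all rewards by a constant C makes every stored reward
# positive, yet the agent's behavior (greedy policy) is unchanged.
# The shifted value table equals the original table plus C / (1 - gamma).
import numpy as np

def value_iteration(R, P, gamma=0.9, iters=500):
    """Q-values for a deterministic MDP.
    R[s, a] is the reward, P[s, a] the next state, for action a in state s."""
    Q = np.zeros_like(R)
    for _ in range(iters):
        V = Q.max(axis=1)        # state values under the current Q
        Q = R + gamma * V[P]     # Bellman optimality backup
    return Q

# Tiny 2-state, 2-action MDP with some negative ("aversive") rewards.
P = np.array([[0, 1], [1, 0]])            # deterministic transitions
R = np.array([[-1.0, 2.0], [0.5, -3.0]])  # rewards, some negative

C, gamma = 10.0, 0.9                      # C makes every reward positive
Q_neg = value_iteration(R, P, gamma)
Q_pos = value_iteration(R + C, P, gamma)

# Same greedy policy; values offset by exactly C / (1 - gamma) = 100.
assert (Q_neg.argmax(axis=1) == Q_pos.argmax(axis=1)).all()
assert np.allclose(Q_pos - Q_neg, C / (1 - gamma))
```

So the "positive-reward-only" agent's internals really do admit the simple description given above: the original model plus one constant, which is a much shorter description than re-deriving the imitated model from a complex agent's internals.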
Replying to @an_interstice
yes, you've changed my mind! (about an RL agent with the same complexity & behavior but only positive reward having a different experience) (though i still have enough uncertainty that, when implementing RL, i'd use positive rewards instead of negative when it's easy to do so)
maybe you'll change my mind some more. in a contrived scenario where my brain is hooked up to a simple world, and i feel myself thinking through the problem and making decisions, would you carve my experience into two? the part equivalent to a simpler agent, and the rest of me?
Replying to @gracecondition
I'm confused about this. I don't think this makes sense for humans (or maybe... method actors?) but I can imagine some architectures (like the Chinese room) for which it would. Depends on the fidelity of the emulation, I think.