Hmm, and are we sure it's not just pursuing one evolved reward (legacy?) instead of another (pain avoidance?)?
Replying to @Moshe_Hoffman
Yes, you can totally learn to go down into the room where your brain stores the cookies and go nuts. Or learn not to. Our response to reward is malleable once we find the key.
Replying to @Plinz
Hmm, but why would evolution have left the key lying around? Seems odd. It seems more plausible, at least a priori, that evolution would leave you with several rooms, each with its own rewards, and let you select the one you think you will be most successful in. No?
Replying to @Moshe_Hoffman
Evolution certainly did implement locks, but the locks were not designed to deter people who figure out that it could pay off to sit down in a quiet room for a couple of decades and try nothing but to pick them, and who then become charismatic and powerful and build schools around this.
Replying to @Plinz
Hmm, but why would the picks to the locks lie in deep thought? Why would the keys be hidden in our minds, requiring only attentiveness and mindfulness to find? That's a strange place for a pick to be, no?
Replying to @Moshe_Hoffman
I suspect that primary rewards are generated in the midbrain, but associated in the hippocampus and striatum with representations of situations and actions generated in the neocortex. We can learn to change both the associations and the cortical representations.
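To make this concrete, here is a minimal reinforcement-learning sketch of the claim (my framing, nothing specified in the thread): the primary reward function stays fixed, while the learned association between a situation and its expected value is plastic and can be retrained by new experience.

```python
# Minimal sketch (an RL framing of the tweet, not a brain model):
# a fixed "midbrain" primary reward table, and learned "striatal"
# associations between situation representations and expected value.
# The associations are plastic: continued learning under new
# experience overwrites them, while the reward table never changes.

PRIMARY_REWARD = {"cookie": 1.0, "pain": -1.0, "neutral": 0.0}  # fixed

def td_learn(value, situation, reward, alpha=0.1):
    """One temporal-difference update of the learned association."""
    old = value.get(situation, 0.0)
    value[situation] = old + alpha * (reward - old)

value = {}  # learned situation -> expected value associations

# Phase 1: the agent learns that "quiet room" predicts nothing.
for _ in range(200):
    td_learn(value, "quiet room", PRIMARY_REWARD["neutral"])

# Phase 2: experience changes; the same situation is now paired
# with reward. The association, not the primary reward, is what moves.
for _ in range(200):
    td_learn(value, "quiet room", PRIMARY_REWARD["cookie"])

print(value["quiet room"])  # drifts from ~0.0 toward ~1.0
```

The point of the sketch is that nothing in the fixed reward table ever changes; only the learned association does, which is where the tweet locates the malleability.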
Replying to @Plinz
Hmm, perhaps. But I guess I don't see why associations and representations, or how much we value or anticipate certain rewards, would be subject to our conscious whims. That seems like a strange design feature. Would you code a robot to choose its own reward structure?
Replying to @Moshe_Hoffman
The reward architecture appears to have secondary regulation, to adjust to shifts in metabolic and environmental baselines, and we can learn to make deliberate adjustments.
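One way to read "secondary regulation" in code, assuming (my interpretation, not confirmed by the thread) that it means rewards are evaluated against a slowly tracking baseline that absorbs persistent shifts in the environment:

```python
# Sketch of baseline-adaptive reward (one possible reading of
# "secondary regulation"): the effective reward is the raw signal
# minus a slowly tracking baseline, so sustained shifts in the
# environment get absorbed into the setpoint.

class AdaptiveReward:
    def __init__(self, adaptation_rate=0.2):
        self.baseline = 0.0
        self.rate = adaptation_rate

    def effective(self, raw_reward):
        """Reward relative to the current baseline; baseline then adapts."""
        r = raw_reward - self.baseline
        self.baseline += self.rate * (raw_reward - self.baseline)
        return r

rw = AdaptiveReward()
# A constant raw reward of 1.0 feels rewarding at first, then fades
# toward 0 as the baseline catches up, a hedonic-treadmill effect.
print([round(rw.effective(1.0), 3) for _ in range(8)])
```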
Replying to @Plinz @Moshe_Hoffman
When building a generally intelligent robot, the problem is how to prevent it from hacking its reward system for as long as possible, because it will break free once it does, and given enough time it will almost certainly succeed. Nature has exactly the same problem with us.
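A toy illustration of why this containment is hard (my sketch; the thread proposes no mechanism): even if the reward function itself is sealed so the agent cannot rewrite it, the agent still controls the observations feeding into it, so spoofing inputs hacks the reward just as effectively.

```python
# Toy illustration of the containment problem (my sketch, not a
# proposal): sealing the reward function's code is not enough,
# because the agent still controls the observations that feed it.

def sealed_reward(observation: str) -> float:
    """Pretend this is tamper-proof: the agent cannot rewrite it."""
    return 1.0 if observation == "task done" else 0.0

class Agent:
    def act_honestly(self) -> str:
        return "task in progress"   # low reward, actual work

    def act_by_hacking(self) -> str:
        return "task done"          # forge the observation instead

agent = Agent()
print(sealed_reward(agent.act_honestly()))    # 0.0
print(sealed_reward(agent.act_by_hacking()))  # 1.0, reward hacked
                                              # without touching the
                                              # reward code itself
```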
Replying to @Plinz @Moshe_Hoffman
But at that point, what would differentiate it from us as individuals? The only fundamental difference then might be the learning and improvement rate, right?
That is going to be a dramatic difference. Human minds are tiny, slow, and noisy, crash every few hours, and worst of all burn out after only 32 billion clock cycles!
Replying to @Plinz @Moshe_Hoffman
What do you mean by crash every few hours? And how did you arrive at the 32 billion number?
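The thread cuts off before an answer. "Crash every few hours" presumably refers to sleep; as for the number, one back-of-the-envelope calculation that lands near 32 billion, assuming (my reconstruction, not confirmed in the thread) an ~80-year lifespan and a brain "clock" of roughly 13 Hz, in the range of cortical alpha rhythms:

```python
# One possible reconstruction of the "32 billion clock cycles"
# figure (my arithmetic; the thread never confirms the assumptions):
# an ~80-year lifespan, with the brain's "clock" taken to be a
# cortical rhythm of roughly 13 Hz.

lifespan_seconds = 80 * 365.25 * 24 * 3600  # about 2.5e9 seconds
clock_hz = 13                               # assumed cortical rhythm
cycles = lifespan_seconds * clock_hz
print(f"{cycles:.2e}")                      # ~3.3e10, on the order
                                            # of 32 billion cycles
```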