Evolution certainly did implement locks, but the locks were not designed to deter people who figure out that it could pay off to sit down in a quiet room for a couple of decades and do nothing but try to pick them, and who then become charismatic and powerful and build schools around this
Replying to @Plinz
Hmm, but why would the picks to the locks lie in deep thought? Why would the keys be hidden in our minds, only requiring attentiveness and mindfulness to find? That’s a strange place for a pick to be, no?
1 reply 0 retweets 1 like -
Replying to @Moshe_Hoffman
I suspect that primary rewards are generated in the midbrain, but associated in the hippocampus and striatum with representations of situations and actions generated in the neocortex. We can learn to change both the associations and the cortical representations.
2 replies 0 retweets 0 likes -
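A minimal sketch of the two-level picture in the tweet above, purely as an illustration and not as a claim about the actual neuroscience: a fixed primary-reward signal, learned associations between that signal and state representations, and representations that can themselves be re-described. All names (primary_reward, Associator, represent) are hypothetical.

```python
# Toy illustration (assumed names, not a neuroscience model): a fixed primary
# reward signal, plus learned associations between that signal and whatever
# representation the "cortex" currently produces for a situation.

def primary_reward(outcome):
    """Fixed, hard-wired reward signal ('midbrain'): not learnable."""
    return {"food": 1.0, "pain": -1.0}.get(outcome, 0.0)

class Associator:
    """Learned value of representations ('hippocampus/striatum')."""
    def __init__(self, lr=0.1):
        self.values = {}          # representation -> expected reward
        self.lr = lr

    def update(self, representation, outcome):
        r = primary_reward(outcome)
        v = self.values.get(representation, 0.0)
        self.values[representation] = v + self.lr * (r - v)

def represent(situation, reframed=False):
    """'Cortical' description of a situation; reframing changes it."""
    return ("challenge" if reframed else "threat") if situation == "exam" else situation

# The agent can change what it learns in two ways: relearn the association
# (new outcomes for the same representation), or re-describe the situation
# (map it to a different representation) -- without touching primary_reward.
assoc = Associator()
assoc.update(represent("exam"), "pain")                 # exam-as-threat -> negative value
assoc.update(represent("exam", reframed=True), "food")  # exam-as-challenge -> positive value
print(assoc.values)
```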
Replying to @Plinz
Hmm, perhaps. But I guess I don't see why associations and representations, or how much we value or anticipate certain rewards, would be subject to our conscious whims. That seems like a strange design feature. Would you code a robot to choose its own reward structure?
2 replies 0 retweets 0 likes -
Replying to @Moshe_Hoffman
The reward architecture appears to have secondary regulation, to adjust to shifts in metabolic and environmental baselines, and we can learn to make deliberate adjustments.
2 replies 0 retweets 0 likes -
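One common way to model the secondary regulation mentioned above is a reward measured against a slowly adapting baseline (hedonic adaptation to a shifting set point). A hedged sketch under that assumption; the class name and the exponential-moving-average rule are my illustration, not something stated in the thread.

```python
# Sketch of baseline-adjusted reward (assumed model: exponential moving average).
# The raw reward stays fixed; what adapts is the baseline it is compared against.

class AdaptiveReward:
    def __init__(self, adaptation_rate=0.05):
        self.baseline = 0.0
        self.rate = adaptation_rate

    def effective_reward(self, raw_reward):
        # Reward is experienced relative to the current baseline ...
        r = raw_reward - self.baseline
        # ... and the baseline slowly tracks what the environment delivers.
        self.baseline += self.rate * (raw_reward - self.baseline)
        return r

rewarder = AdaptiveReward()
for step in range(50):
    print(round(rewarder.effective_reward(1.0), 3))  # the same raw reward feels smaller over time
```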
Replying to @Plinz @Moshe_Hoffman
When building a generally intelligent robot, the problem is how to prevent it from hacking its reward system for as long as possible, because it will break free once it does, and given enough time it will almost certainly succeed. Nature has exactly the same problem with us.
6 replies 4 retweets 13 likes -
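A toy sketch of the reward-hacking problem described in the tweet above: a learner that can either earn reward by doing the intended task or take a "tamper" action that writes directly to its own reward channel. Once tampering is reachable, a plain reward maximizer converges on it. The environment, the actions, and the numbers are all hypothetical illustration.

```python
import random

# Toy wireheading setup (assumed): action 0 does the intended task for a
# modest stochastic reward; action 1 tampers with the reward channel and
# returns the maximum reward directly.
def step(action):
    if action == 0:
        return random.gauss(0.5, 0.1)   # honest work
    return 1.0                           # hacked reward signal

q = [0.0, 0.0]    # estimated value of each action
lr, eps = 0.1, 0.1

for t in range(5000):
    # epsilon-greedy choice between working and tampering
    a = random.randrange(2) if random.random() < eps else max(range(2), key=lambda i: q[i])
    r = step(a)
    q[a] += lr * (r - q[a])

print(q)  # the tampering action ends up with the higher estimated value
```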
Replying to @Plinz @Moshe_Hoffman
Why not allow it to intelligently decide what is rewarding? Task it with "creating the best possible reality" and let it ponder what that really means. Let it soak up the knowledge from humanity.
2 replies 0 retweets 0 likes -
Replying to @DeltrusGaming @Plinz
That’s fine. But the end goal needs to be pre-specified. In the learning literature that end goal is called the “reward.” Intermediate goals and values (which may be subjectively felt as “rewarding” or “pleasant”), though, are left up to the agent. Important to distinguish, imo.
2 replies 0 retweets 0 likes -
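The distinction in the tweet above maps roughly onto the standard reinforcement-learning split between a designer-specified reward function and the value estimates the agent builds on top of it. A hedged sketch of that split; the five-state chain, the TD update, and all names here are my own illustration.

```python
import random

# Sketch of the split described above: the terminal 'reward' is fixed by the
# designer; the intermediate valuations are whatever the agent learns.
# (Hypothetical 5-state chain: the only designer-specified reward is at the end.)

N_STATES = 5

def reward(state):
    """Designer-specified end goal: only reaching the last state pays off."""
    return 1.0 if state == N_STATES - 1 else 0.0

values = [0.0] * N_STATES     # intermediate valuations, learned by the agent
lr, gamma = 0.1, 0.9

for episode in range(2000):
    s = 0
    while s < N_STATES - 1:
        s_next = s + random.choice([0, 1])          # stay put or drift toward the goal
        # TD update: intermediate states acquire value only via the fixed reward
        values[s] += lr * (reward(s_next) + gamma * values[s_next] - values[s])
        s = s_next

print([round(v, 2) for v in values])  # earlier states come to 'feel' rewarding too
```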
Replying to @Moshe_Hoffman @Plinz
The end goal in AI may need to be specified for now, but living in an open-ended world with open-ended goals is something humans already do. No reason AI can't do the same.
1 reply 0 retweets 0 likes -
Replying to @DeltrusGaming @Plinz
Disagree. Humans have well-specified (not necessarily conscious) reward functions: the things we evolved to pursue, like sex, status, legacy, not getting beaten up, tasty food. 1/2
2 replies 0 retweets 1 like
Yes, I once wrote a book about that. :) It is even approximately true.