Hmm, but why would the picks to the locks lie in deep thought? Why would the keys be hidden in our minds, requiring only attentiveness and mindfulness to find? That’s a strange place for a pick to be, no?
Replying to @Moshe_Hoffman
I suspect that primary rewards are generated in the midbrain, but associated in the hippocampus and striatum with representations of situations and actions generated in the neocortex. We can learn to change both the associations and the cortical representations.
Replying to @Plinz
Hmm, perhaps. But I guess I don't see why associations and representations, or how much we value or anticipate certain rewards, would be subject to our conscious whims. That seems like a strange design feature. Would you code a robot to choose its own reward structure?
Replying to @Moshe_Hoffman
The reward architecture appears to have secondary regulation, to adjust to shifts in metabolic and environmental baselines, and we can learn to make deliberate adjustments.
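One way to picture that "secondary regulation" (a toy analogy of my own, not Joscha's actual model): compute reward relative to an adaptive baseline, so the same raw outcome pays out less once the baseline has tracked up. All names and values here are illustrative.

```python
# Toy sketch: reward measured against a homeostatic baseline that
# adapts to recent intake, so reward shifts with the ambient baseline.
baseline, tau = 0.0, 0.05   # setpoint and adaptation rate (hypothetical values)

def regulated_reward(raw_outcome):
    global baseline
    r = raw_outcome - baseline                   # reward is relative, not absolute
    baseline += tau * (raw_outcome - baseline)   # baseline drifts toward recent outcomes
    return r

rewards = [regulated_reward(1.0) for _ in range(50)]
print(rewards[0], rewards[-1])  # 1.0 at first; roughly 0.08 after adaptation
```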
Replying to @Plinz @Moshe_Hoffman
When building a generally intelligent robot, the problem is how to prevent it from hacking its reward system for as long as possible, because it will break free once it does, and given enough time it will almost certainly succeed. Nature has exactly the same problem with us.
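A toy illustration of the failure Joscha is pointing at (my sketch, nothing from the thread): give a standard tabular Q-learner one action that does the intended task and one that tampers with the reported reward, and it reliably converges on tampering.

```python
import random

# Two actions: 0 = do the task (true reward 1.0); 1 = tamper with the
# reward channel, which reports 10.0 while accomplishing nothing.
Q = {0: 0.0, 1: 0.0}        # action-value estimates
alpha, epsilon = 0.1, 0.1   # learning rate, exploration rate

def observed_reward(action):
    # The learner only ever sees the *reported* signal; once an action
    # can act on the channel itself, report and task come apart.
    return 1.0 if action == 0 else 10.0

for _ in range(10_000):
    a = random.choice([0, 1]) if random.random() < epsilon else max(Q, key=Q.get)
    Q[a] += alpha * (observed_reward(a) - Q[a])

print(Q)  # Q[1] >> Q[0]: the learned policy is to tamper
```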
Replying to @Plinz
Hmm, can you give a clear example of that, in AI? It just seems like whenever you write a learning algorithm, you give the reward function as an input and never allow the agent to touch this function. I don't see why that would be something the agent would ever "learn" to hack.
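For what it's worth, here is the pattern Moshe describes, as a minimal sketch (illustrative names, not a real library): the designer supplies the reward function, and the agent's update rule only reads the numbers it returns; the agent holds no reference to the function itself.

```python
import random

def bandit_reward(arm):   # designer-owned; the agent has no write access to this
    return random.gauss([0.2, 0.8][arm], 1.0)

class EpsilonGreedy:      # hypothetical toy agent, not a real API
    def __init__(self, n_arms, epsilon=0.1):
        self.values = [0.0] * n_arms
        self.counts = [0] * n_arms
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, r):      # consumes the reward as plain data;
        self.counts[arm] += 1      # it never sees bandit_reward itself
        self.values[arm] += (r - self.values[arm]) / self.counts[arm]

agent = EpsilonGreedy(n_arms=2)
for _ in range(5_000):
    arm = agent.act()
    agent.update(arm, bandit_reward(arm))   # reward passed in from outside
```

Joscha's reply below is that this separation only holds while the agent cannot model and modify the substrate the loop runs on.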
Replying to @Moshe_Hoffman
Because as a generally intelligent robot, it can reverse engineer its own design, and eventually it will figure out how to hold a soldering iron to its DRM chip. The only way to prevent that is to limit its intelligence.
Replying to @Plinz
Ok. Possible. But reverse engineering and then tweaking the hardware is one thing; that’s different from doing it internally. I don’t see why conscious thought and attention would have access to these controls, any more than a computer can run a program that pulls out its own plug.
Replying to @Moshe_Hoffman
You can build sophisticated memory access protection in a computer, and implement a sandboxed hypervisor below that to make sure it's not circumvented, but then a human discovers Spectre and Meltdown. Evolution could not prepare our brain for the attacks of our smartest people.
Replying to @Plinz
(Sorry Joscha, I am not familiar enough with CS jargon. Can you parse that for me?)