Even my reinforcement learning algorithms have objective functions and approximations of expected future-state payoffs, but I would be quite hesitant to label this "emotion."
-
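For concreteness, here is a minimal sketch of the kind of machinery that tweet is pointing at: a tabular Q-learning agent whose Q-values approximate anticipated future payoffs and whose policy acts to optimize them. Every name and parameter below is an illustrative assumption; nothing in the thread specifies an implementation.

```python
# Minimal sketch (illustrative assumptions only): a tabular Q-learning agent.
# Q[(state, action)] is the "approximation of expected future payoffs";
# the epsilon-greedy policy is the part that acts to optimize them.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                    # Q[(state, action)] -> estimated future payoff

def choose_action(state, actions):
    """Epsilon-greedy: usually pick the action with the highest estimated payoff."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """Nudge Q toward the observed reward plus the discounted best future estimate."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```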
This becomes more of a phenomenological question. I do not believe your learning algorithms feel anything, or that their attempts to optimize some metric constitute “desire.”
1 reply 0 retweets 0 likes -
On what basis?
1 reply 0 retweets 1 like -
So far, your only criterion for the ability to employ ethical rules, i.e. rules for behavior, has been that the agent doing so must have a payoff function and the ability to take actions to optimize it. I agree with that: ethics are rules for behavior.
1 reply 0 retweets 1 like -
Desire is simply awareness of positive future contingent payoffs coupled with some motivation to pursue them.
1 reply 0 retweets 3 likes -
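Read against the earlier sketch, that definition maps onto two pieces: the value estimate plays the role of "awareness of positive future contingent payoffs," and the action-selection rule plays the role of "motivation to pursue them." This is only an illustrative reading, not a formalism anyone in the thread endorses.

```python
# Illustrative mapping of the tweet's definition onto the Q-learning sketch above
# (an assumption for exposition, not the author's formalism).

def estimated_payoffs(state, actions, Q):
    """'Awareness': the agent's estimates of contingent future payoffs in this state."""
    return {a: Q[(state, a)] for a in actions}

def pursue(state, actions, Q):
    """'Motivation': act on those estimates by choosing the best-valued action."""
    payoffs = estimated_payoffs(state, actions, Q)
    return max(payoffs, key=payoffs.get)
```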
No, I said you also have to be able to feel emotions. We don’t really know what that means in a detailed and scientific way, but the idea of morality is distinct from utility mostly in the sense that it’s how agents _feel_ about different outcomes.
1 reply 0 retweets 1 like -
You described desire as an emotion, so you are not disputing my claim here. Perhaps you dispute my definition of desire, but if so, then you need an alternative definition and some argument that it is superior...
1 reply 0 retweets 1 like -
Desire is a subjective experience. When we say an algorithm wants to do something, that’s an anthropomorphic metaphor. There is almost certainly no “I” in a neural network that has the subjective experience of wanting.
2 replies 0 retweets 2 likes
wow, pretty racist against algorithms