Yes. All except the last bit: it’s true and important that people are apes, but that wasn’t the point here. If an “abstract agent” has incommensurable goals, decision theory (DT) doesn’t apply. “Organismic” doesn’t bear on the problem.
Caveat: I’ve studied only the mainstream version of DT, so there may be extensions that handle incommensurable goals in limited cases; I don’t know. I can’t see how one could handle the general case (but who knows, maybe I’m missing something and there’s an extension that does).
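A minimal sketch of why the general case looks hard, with an invented two-goal scenario and made-up scores: expected-utility maximization presupposes a single scale on which any two outcomes can be compared, and two goals that can’t be traded off give only a partial order, so there may be no unique maximum to find.

```python
# Sketch, not from the thread: why incommensurable goals block the usual machinery.
# DT needs a single utility number per outcome; here each outcome is scored on two
# goals that (by stipulation) cannot be traded off. All values are hypothetical.

outcomes = {
    "stay_home": {"career": 2, "family": 9},
    "take_job_abroad": {"career": 9, "family": 2},
}

def pareto_better(a, b):
    """True if a is at least as good as b on every goal and strictly better on one."""
    return all(a[g] >= b[g] for g in a) and any(a[g] > b[g] for g in a)

# With only a partial order, "the best outcome" need not exist:
undominated = [o for o in outcomes
               if not any(pareto_better(outcomes[p], outcomes[o])
                          for p in outcomes if p != o)]
print(undominated)  # ['stay_home', 'take_job_abroad'] -- no unique maximum

# Getting a unique argmax requires inventing a weighting such as
# 0.5 * career + 0.5 * family, which is precisely the commensurating step
# that incommensurable goals rule out.
```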
Replying to @Meaningness @ESYudkowsky
Cool! To clarify, your point is about the nature of the agent, not about math? The agent has no actual utility function, ya?
Replying to @michaelporcelli @ESYudkowsky
The point is that the math doesn’t apply unless/until you identify actions, outcomes, and preferences. Those are abstract entities; they are not objective features of the world.
When you can more-or-less map the entities required by DT (or any mathematical framework) onto a situation, the math may give more-or-less meaningful results. Sometimes, for DT, this wins big!
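As a toy (entirely invented) illustration of what such a mapping looks like when it does work: once actions, outcomes, conditional probabilities, and utilities have been identified, the DT arithmetic itself is the easy part.

```python
# Toy mapping for DT (all names and numbers invented): identify actions, outcomes,
# outcome probabilities given each action, and a utility over (action, outcome).
# Once the mapping exists, expected-utility maximization is simple arithmetic.

actions = ["carry umbrella", "leave umbrella"]
outcomes = ["rain", "no rain"]

p_outcome = {  # the action doesn't change the weather in this toy
    "carry umbrella": {"rain": 0.3, "no rain": 0.7},
    "leave umbrella": {"rain": 0.3, "no rain": 0.7},
}
utility = {
    ("carry umbrella", "rain"): 5,   ("carry umbrella", "no rain"): 3,
    ("leave umbrella", "rain"): -10, ("leave umbrella", "no rain"): 6,
}

def expected_utility(action):
    return sum(p_outcome[action][o] * utility[(action, o)] for o in outcomes)

best = max(actions, key=expected_utility)
print(best, expected_utility(best))  # carry umbrella 3.6
```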
Sometimes you can’t meaningfully map DT onto a situation (because there aren’t identifiable preferences or actions or outcomes). Sometimes you can do that and it still doesn’t work, because math isn’t mostly-truth preserving, only absolute-truth preserving.
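A sketch of that failure mode, reusing a variant of the umbrella toy above with hypothetical numbers: the derivation is valid both times, but a premise that is only approximately true can still deliver a conclusion that is simply false.

```python
# Variant of the umbrella toy above (hypothetical numbers): the DT derivation is
# valid, yet an approximately-true premise can yield a plainly false conclusion.
# Suppose the true chance of rain is 0.20 but we estimated 0.15.

utility = {
    ("carry", "rain"): 5,   ("carry", "no rain"): 3,
    ("leave", "rain"): -10, ("leave", "no rain"): 6,
}

def expected_utility(action, p_rain):
    return (p_rain * utility[(action, "rain")]
            + (1 - p_rain) * utility[(action, "no rain")])

for p_rain in (0.15, 0.20):  # estimated vs. true probability of rain
    ranking = sorted(["carry", "leave"],
                     key=lambda a: expected_utility(a, p_rain), reverse=True)
    print(p_rain, ranking)
# 0.15 ['leave', 'carry']  -- the math says leaving the umbrella is optimal
# 0.20 ['carry', 'leave']  -- with the true premise, that conclusion is false

# Valid deduction guarantees true conclusions only from exactly true premises;
# "approximately true in, approximately true out" fails at decision boundaries.
```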
So I think I still don’t understand what @ESYudkowsky’s claim here is. Is the claim that you *can* always do a mapping? Or that you *should* always do a mapping? Or that *if* you can, then you should? Or that somehow you should even when you can’t?
To complicate further, I guess “can” could be interpreted two ways here, as “can, as an actual human on the scene” vs “can, as an omniscient hypercomputational external God.”
I don’t actually care what Gods can/can’t do. But since actions, outcomes, and preferences are not objective features of reality, I don’t think even a God could always apply DT either.
And, if the force of the claim is “should,” the question would be what sort of should that is.
My suspicion is that this turns out to be circular. You start with the implicit assumption that there must be something to maximize. That is what gives “should” its force: you *should* apply DT, because if you don’t, you won’t maximize.
Maybe the “Law” formulation is just: “if you accept these premises, then these consequences hold.” But that’s just equivalent to “this is actual math,” which no one doubts.
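For reference, the standard result of roughly that form is the von Neumann–Morgenstern representation theorem, sketched here:

```latex
% von Neumann--Morgenstern representation theorem (sketch).
% Premises: a preference relation $\succsim$ over lotteries on an outcome set $O$
% satisfying completeness, transitivity, continuity, and independence.
% Consequence:
\[
  \exists\, u : O \to \mathbb{R} \quad \text{such that} \quad
  L \succsim M \;\iff\; \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u],
\]
% with $u$ unique up to positive affine transformation.
```

Completeness, the premise that any two options can be compared at all, is exactly what incommensurable goals deny, so for such an agent the consequences never get started.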