Thank you! Will read when I get a chance.
Replying to @Meaningness @ESYudkowsky
Awesome. Thanks. Looking forward to your response, @Meaningness.
Replying to @michaelporcelli @ESYudkowsky
I took a quick look. Overall, it appears that neither of us feels the other is getting our respective points. I don’t think the LW post characterizes my pov accurately. This is puzzling, but seems difficult to sort out, and probably not important for either of us.
A side conversation developed a possible alternative crux: “Maybe @ESYudkowsky thinks (a) everyone has a True Objective Function, even if they aren’t aware of it, or (b) everyone _ought_ to have an objective function and it’s irrational not to have one.” And I disagree.
Replying to @Meaningness @ESYudkowsky
That seems off to me. I think @ESYudkowsky is saying something like — for any agent with a goal, there exists, in theory, an objective means to assess the agent’s decision-making procedure relative to an ideal (even if the ideal is unknown or uncomputable).
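To make that reading concrete in standard expected-utility terms: once a goal is stated as a utility over outcomes, any decision procedure can in principle be scored against the ideal, maximum-expected-utility choice. The Python sketch below is purely illustrative; the actions, probabilities, utilities, and the “heuristic” being assessed are invented, not taken from the thread.

```python
# Sketch: scoring a decision procedure against the expected-utility ideal.
# All numbers and the heuristic rule are invented for illustration.

# Outcome probabilities for each available action.
actions = {
    "take_umbrella":  {"rain": 0.3, "no_rain": 0.7},
    "leave_umbrella": {"rain": 0.3, "no_rain": 0.7},
}

# The agent's goal, expressed as a utility over (action, outcome) pairs.
utility = {
    ("take_umbrella", "rain"): 1.0,  ("take_umbrella", "no_rain"): 0.8,
    ("leave_umbrella", "rain"): 0.0, ("leave_umbrella", "no_rain"): 1.0,
}

def expected_utility(action):
    return sum(p * utility[(action, outcome)]
               for outcome, p in actions[action].items())

# The "ideal" is whichever action maximizes expected utility for this goal.
ideal_action = max(actions, key=expected_utility)
ideal_value = expected_utility(ideal_action)

# Any actual decision procedure (here, a crude heuristic) can then be scored
# by how far it falls short of that ideal (its regret).
def heuristic_choice():
    return "leave_umbrella"   # e.g. "never bother with the umbrella"

regret = ideal_value - expected_utility(heuristic_choice())
print(ideal_action, round(ideal_value, 2), round(regret, 2))
```

The point of the sketch is only that the benchmark exists once the goal is fixed as a utility function; actually knowing or computing it in practice is another matter.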
Replying to @michaelporcelli @ESYudkowsky
Yes… in the presence of conflicting goals, one would need an objective function (or something roughly equivalent) expressing how to trade them off. Otherwise the framework doesn’t apply.
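To illustrate why the trade-off rule is doing real work: with two conflicting goals and no way to weigh them, the toy options below are merely incomparable and “pick the best” has no answer; adding a weighting (one simple form an objective function could take) makes maximization well-defined again. The goals, scores, and weights here are invented for illustration.

```python
# Sketch: two conflicting goals scored per option; all numbers are invented.
options = {
    "career_move": {"income": 0.9, "free_time": 0.2},
    "sabbatical":  {"income": 0.1, "free_time": 0.9},
}

# Without a trade-off rule, each option beats the other on one goal,
# so neither dominates and "maximize" picks out nothing.
def dominates(a, b):
    return all(options[a][g] >= options[b][g] for g in options[a])

print(dominates("career_move", "sabbatical"),
      dominates("sabbatical", "career_move"))   # False False

# An objective function expressing the trade-off (here a weighted sum)
# restores a single ranking, and the framework applies again.
weights = {"income": 0.4, "free_time": 0.6}
def objective(option):
    return sum(weights[g] * options[option][g] for g in weights)

print(max(options, key=objective))   # "sabbatical" under these weights
```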
Replying to @Meaningness @ESYudkowsky
Maybe I get you more with that last tweet, @Meaningness — you don’t see Actual Person as agent-with-a-goal, but pluralistic with sometimes conflicting goals; DT doesn’t apply holistically b/c Actual Person has no Actual Utility Function; your point more organismic than mathy — close?
Replying to @michaelporcelli @ESYudkowsky
Yes. All except the last bit: it’s true and important that people are apes, but that wasn’t the point here. If an “abstract agent” has incommensurable goals, DT doesn’t apply. “Organismic” doesn’t bear on the problem.
Caveat: I’ve studied only the mainstream version of DT; there may be extensions that handle incommensurable goals in limited cases, but I don’t know. I can’t see how one could handle the general case (but who knows, maybe I’m missing something and there’s an extension that does).
Replying to @Meaningness @ESYudkowsky
Cool! To clarify, your point is about the nature of the agent, not about math? The agent has no actual utility function, ya?
The point is that the math doesn’t apply unless/until you identify actions, outcomes, and preferences. Those are abstract entities; they are not objective features of the world.
When you can more-or-less map the entities required by DT (or any mathematical framework) onto a situation, the math may give more-or-less meaningful results. Sometimes, for DT, this wins big!
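A small sketch of what that mapping amounts to: the decision-theoretic machinery itself is just the last line below, but it only runs once the situation has been cast as actions, outcome probabilities, and a utility (preferences made numerically precise). The insurance example and all its numbers are invented for illustration.

```python
# Sketch: the DT machinery as a function of the entities that must be
# identified first; the modeling work is constructing its three arguments.

def best_action(actions, outcome_probs, utility):
    """actions: list of action labels
    outcome_probs: action -> {outcome: probability}
    utility: outcome -> number (preferences made precise)"""
    def eu(a):
        return sum(p * utility(o) for o, p in outcome_probs[a].items())
    return max(actions, key=eu)

# One toy mapping of a situation onto those entities (numbers invented):
actions = ["insure", "dont_insure"]
outcome_probs = {
    "insure":      {"premium_paid": 1.0},                  # certain small cost
    "dont_insure": {"big_loss": 0.05, "no_loss": 0.95},    # small risk of large loss
}
payoffs = {"premium_paid": -1.0, "big_loss": -30.0, "no_loss": 0.0}

print(best_action(actions, outcome_probs, payoffs.__getitem__))   # "insure"
```

If any of those three pieces can’t be identified for the situation at hand, there is nothing for the last line to operate on, which is the sense in which the results shade from meaningful to meaningless.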
Sometimes you can’t meaningfully map DT onto a situation (because there aren’t identifiable preferences or actions or outcomes). Sometimes you can do that and it still doesn’t work, because math isn’t mostly-truth preserving, only absolute-truth preserving.