That seems off to me. I think @ESYudkowsky is saying something like: for any agent with a goal, there exists, in theory, an objective means to assess the agent’s decision-making procedure relative to an ideal (even if the ideal is unknown or uncomputable).
To complicate further, I guess “can” could be interpreted in two ways here: “can, as an actual human on the scene” vs. “can, as an omniscient hypercomputational external God.”
-
I don’t actually care what Gods can/can’t do. But since actions, outcomes, and preferences are not objective features of reality, I don’t think even they could always apply DT.
-
And if the force of the claim is “should,” the question would be what sort of “should” that is.