Definition of “rationalism” from the Eggplant book draft. If you identify as a rationalist, I’m curious whether you find this accurate, and if not, why not? pic.twitter.com/2cvo7478fj
Thank you! Will read when I get a chance.
Awesome. Thanks. Looking forward to your response, @Meaningness
I took a quick look. Overall, it appears that neither of us feels the other is getting our respective points. I don’t think the LW post characterizes my pov accurately. This is puzzling, but seems difficult to sort out, and probably not important for either of us.
A side conversation developed a possible alternative crux: “Maybe @ESYudkowsky thinks (a) everyone has a True Objective Function, even if they aren't aware of it, or (b) everyone _ought_ to have an objective function and it's irrational not to have one.” And I disagree.
That seems off to me. I think @ESYudkowsky is saying something like — for any agent with a goal, there exists, in theory, an objective means to assess the agent’s decision-making procedure relative to an ideal (even if the ideal is unknown or uncomputable)
Yes… in the presence of conflicting goals, one would need an objective function (or something roughly equivalent) expressing how to trade them off. Otherwise the framework doesn’t apply.
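[Note: as an illustrative sketch — this formula is a gloss, not from the thread — “an objective function expressing how to trade them off” would standardly be a weighted scalarization of the goals:

$$U(x) = \sum_{i=1}^{n} w_i\, g_i(x), \qquad w_i > 0,$$

where the $g_i$ are the individual goal functions and the weights $w_i$ fix the exchange rates between them. Decision theory then applies to the single objective $U$; the dispute is over whether any such weights exist, even implicitly, for an actual person.]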
Maybe I get you more with last tweet @Meaningness — you don’t see Actual Person as agent-with-a-goal, but pluralistic with sometimes conflicting goals; DT doesn’t apply holistically b/c Actual Person has no Actual Utility Function; your point more organismic than mathy — close?
Yes. All except the last bit: it’s true and important that people are apes, but that wasn’t the point here. If an “abstract agent” has incommensurable goals, DT doesn’t apply. “Organismic” doesn’t bear on the problem.
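[Note: one way to state the incommensurability point — again a gloss, not from the thread — is via the completeness axiom of standard decision theory. A utility representation $a \succeq b \iff U(a) \ge U(b)$ requires that any two outcomes be comparable. With incommensurable goals $g_1$ and $g_2$, outcome $a$ may dominate on $g_1$ while $b$ dominates on $g_2$, with no fact of the matter about which is better overall; preferences then form only a partial (Pareto) order, no single $U$ exists, and the decision-theoretic framework doesn’t get a grip.]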
So I've read a fair amount of both @ESYudkowsky and @Meaningness and I don't think the Tools vs. Laws post captures what David is talking about. Though it is an interesting post. It sounds like you two are talking past each other.
I agree on both counts. I find it mysterious that neither of us understands the other. I think I do understand typical rationalists… Eliezer seems to me to have a unique, atypical viewpoint that I haven’t yet been able to figure out.
In both of our defenses: I’m not sure I’ve read whatever is his definitive statement (although I’ve read a fair chunk of his Sequences). And I have mostly talked around my central points rather than stating them clearly. Some people somehow grok them anyway; others don’t.
The Eggplant book is supposed to lead from rationality to meta-rationality in easy steps, so it may help. Unfortunately it’s now ~300 pages and still growing, so whether anyone will read it I don’t know!