Yeah, 99% of the work is figuring out how a human, with a messy human brain, can approximate the normative model. What heuristics work well, in what contexts? When is following explicit rules useful vs. just training your intuition through experience? etc. Lots of different ideas here.
Replying to @juliagalef @PereGrimmer and others
Baron’s distinction between “normative” and “prescriptive” is one I haven’t seen before. That seems useful and maybe key. OTOH, if we’re looking for a disagreement crux, it might be whether a normative theory that can’t be achieved, even in principle, is a good thing.
Replying to @Meaningness @juliagalef and others
But the normative theories have technical uses! Tons of them! All of the coherence theorems! Papers calculating an algorithm's distance from an unreachable optimum! Why wouldn't you just have prescriptions based on the goal of getting closer to unreachable normativity?
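A minimal sketch of what "calculating an algorithm's distance from an unreachable optimum" can look like, assuming a toy lottery-choice setting; the lottery generator, the "pick the most likely winner" heuristic, and the regret measure are illustrative inventions, not anything from the thread. The point is only that the normative benchmark works as a yardstick even when no real agent computes it exactly.

```python
import random

def expected_value(lottery):
    """Expected payoff of a lottery given as [(probability, payoff), ...]."""
    return sum(p * x for p, x in lottery)

def normative_choice(options):
    """The normative ideal: pick the option with maximum expected value."""
    return max(options, key=expected_value)

def heuristic_choice(options):
    """A bounded heuristic: pick the option most likely to pay off at all,
    ignoring payoff magnitudes."""
    return max(options, key=lambda lot: sum(p for p, x in lot if x > 0))

def regret(options):
    """Distance of the heuristic from the normative optimum on one problem."""
    return expected_value(normative_choice(options)) - expected_value(heuristic_choice(options))

def random_lottery():
    """A two-outcome lottery: win a random amount with probability p, else nothing."""
    p = random.random()
    return [(p, random.uniform(0, 100)), (1 - p, 0.0)]

random.seed(0)
problems = [[random_lottery() for _ in range(3)] for _ in range(10_000)]
avg_regret = sum(regret(opts) for opts in problems) / len(problems)
print(f"average regret of the heuristic: {avg_regret:.2f} units of payoff")
```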
Replying to @ESYudkowsky @juliagalef and others
Ah! This is very interesting… here you seem to have a “harder” take on rationality than some other people from the LW-derived community I’ve been discussing this with. 1/2
Replying to @Meaningness @ESYudkowsky and others
Hard to answer accurately or comprehensibly in 280, but: I think those benefits are rarely (not never, but rarely) useful in practice, and they trade off against other desirable features that are more often useful.
Replying to @Meaningness @juliagalef and others
Your position seems to me like saying that if we can't see the shortest path through a maze, then it must have no shortest path or at least the concept of a shortest path must not be useful. Seems useful to me. I don't get your weird ban? What else can be said?
Replying to @ESYudkowsky @juliagalef and others
I’m saying that in many/most cases there is no one correct metric, and therefore no shortest path. It’s an ontological objection, not an epistemological one. (Relatedly: I see rationalism as pervasively misunderstanding ontological questions as being epistemological ones.)
Replying to @Meaningness @juliagalef and others
So relativize the "shortest path" to a metric, like all preference orderings on options are relativized to a utility function. These ideas are technically straightforward, and if somebody manages to shoot themselves in the psychological foot, I would not blame the theory.
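To make "relativize the shortest path to a metric" concrete, here is a minimal sketch assuming a toy maze whose edges carry two attributes, distance and danger; the graph, attribute names, and metrics are made up for illustration. The same search code returns different "shortest" paths depending on which metric is supplied, which is the relativization being described.

```python
import heapq

def shortest_path(graph, start, goal, metric):
    """Dijkstra's algorithm; 'shortest' is relative to the supplied metric,
    which assigns a cost to each edge."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, edge in graph[node].items():
            if nbr not in visited:
                heapq.heappush(frontier, (cost + metric(edge), nbr, path + [nbr]))
    return float("inf"), []

# A toy maze: each edge carries both a physical distance and a danger rating.
maze = {
    "A": {"B": {"distance": 1, "danger": 9}, "C": {"distance": 4, "danger": 1}},
    "B": {"A": {"distance": 1, "danger": 9}, "D": {"distance": 1, "danger": 9}},
    "C": {"A": {"distance": 4, "danger": 1}, "D": {"distance": 4, "danger": 1}},
    "D": {"B": {"distance": 1, "danger": 9}, "C": {"distance": 4, "danger": 1}},
}

# Two metrics, two different "shortest" paths through the same maze.
print(shortest_path(maze, "A", "D", metric=lambda e: e["distance"]))  # A-B-D
print(shortest_path(maze, "A", "D", metric=lambda e: e["danger"]))    # A-C-D
```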
Replying to @ESYudkowsky @juliagalef and others
Right: in order to apply any rational method, you first have to fix the ontological parameters (e.g. metric of goodness). My objection to rationalism is that it doesn’t want to look at the “meta-rational” process whereby you make those ontological choices.
Replying to @Meaningness @juliagalef and others
Choosing the utility function is a different subject matter with different solutions, but here you go: https://arbital.com/p/normative_extrapolated_volition/. Or if you want priors, well, that is more complicated, but I can't be accused of not mentioning the subject.
To the guy in class who wants to know how to build an app that recognizes apples, so as to fix an ontology for counting them, we may legitimately say: "That's going to take calculus and linear algebra, which requires learning 2 apples + 2 apples first."