But the normative theories have technical uses! Tons of them! All of the coherence theorems! Papers calculating an algorithm's distance from an unreachable optimum! Why wouldn't you just have prescriptions based on the goal of getting closer to unreachable normativity?
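One concrete version of "distance from an unreachable optimum" is regret: the gap between what an algorithm actually earned and what the best policy in hindsight would have earned. A minimal sketch, assuming a two-armed Bernoulli bandit with made-up arm means and a simple epsilon-greedy policy (none of this is cited in the thread; it is only an illustration):

```python
# Regret as "distance from an unreachable optimum": the benchmark policy
# (always pull the truly best arm) cannot be run without knowing the arms'
# means in advance, yet it still ranks algorithms; lower regret is better.
import random

def cumulative_regret(policy, true_means, n_rounds=10_000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)   # pulls per arm
    sums = [0.0] * len(true_means)   # total reward per arm
    earned = 0.0
    for _ in range(n_rounds):
        arm = policy(rng, counts, sums)
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        earned += reward
    optimum = n_rounds * max(true_means)   # unreachable benchmark
    return optimum - earned

def epsilon_greedy(rng, counts, sums, eps=0.1):
    # Explore occasionally; otherwise exploit the best empirical mean so far.
    if 0 in counts or rng.random() < eps:
        return rng.randrange(len(counts))
    return max(range(len(counts)), key=lambda a: sums[a] / counts[a])

print(cumulative_regret(epsilon_greedy, true_means=[0.4, 0.6]))
```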
Replying to @ESYudkowsky @juliagalef
Ah! This is very interesting… here you seem to have a “harder” take on rationality than some other people from the LW-derived community I’ve been discussing this with. 1/2
Replying to @Meaningness @ESYudkowsky
Hard to answer accurately or comprehensibly in 280, but: I think those benefits are rarely (not never, but rarely) useful in practice, and they trade off against other desirable features that are more often useful.
Replying to @Meaningness @juliagalef
Your position seems to me like saying that if we can't see the shortest path through a maze, then it must have no shortest path or at least the concept of a shortest path must not be useful. Seems useful to me. I don't get your weird ban? What else can be said?
Replying to @ESYudkowsky @juliagalef
I’m saying that in many/most cases there is no one correct metric, and therefore no shortest path. It’s an ontological objection, not an epistemological one. (Relatedly: I see rationalism as pervasively misunderstanding ontological questions as being epistemological ones.)
Replying to @Meaningness @juliagalef
So relativize the "shortest path" to a metric, like all preference orderings on options are relativized to a utility function. These ideas are technically straightforward, and if somebody manages to shoot themselves in the psychological foot, I would not blame the theory.
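A minimal sketch of the relativization being described here (the maze graph and both weight functions are invented for illustration): once a metric is fixed, the shortest path is well defined and easy to compute; pick a different metric and a different path can come out "shortest."

```python
# Shortest paths relativized to a metric: the same maze, two weight functions,
# two different "shortest" paths.
import heapq

def shortest_path(edges, weight, start, goal):
    # Dijkstra over `edges` (node -> list of neighbors), edge cost given by `weight`.
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr in edges[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (cost + weight(node, nbr), nbr, path + [nbr]))
    return float("inf"), []

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def by_steps(u, v):
    return 1.0                                     # metric 1: every move costs the same

def by_effort(u, v):
    return 5.0 if (u, v) == ("A", "B") else 1.0    # metric 2: the A->B passage is costly

print(shortest_path(maze, by_steps, "A", "D"))     # cost 2.0 (both branches tie)
print(shortest_path(maze, by_effort, "A", "D"))    # cost 2.0 via A -> C -> D
```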
Replying to @ESYudkowsky @juliagalef
Right: in order to apply any rational method, you first have to fix the ontological parameters (e.g. metric of goodness). My objection to rationalism is that it doesn’t want to look at the “meta-rational” process whereby you make those ontological choices.
Replying to @Meaningness @juliagalef
Choosing the utility function is a different subject matter with different solutions, but here you go: https://arbital.com/p/normative_extrapolated_volition/. Or if you want priors, well, that is more complicated but I can't be accused of not mentioning the subject.
Replying to @ESYudkowsky @juliagalef
Priors are an epistemological matter, not an ontological one. A utility function is ontological, I guess… but not at all what I have in mind. Rather: what is the right vocabulary for describing this sort of situation? Where “right” means not “ultimately correct,” but “helpful.”
Replying to @Meaningness @ESYudkowsky
I've been holding back from saying anything like the following, because it is indeed an obnoxious move, but I keep wondering what "helpful" grounds out to, for you, if anything - How do you know what's "helpful"? (And yes, it's epistemics again; feel free to throw ontology back)
I think all categories (outside math) are necessarily somewhat vague, and that this is not generally a problem. That includes "helpful." There's no general criterion. In particular domains, one can make cogent arguments about whether a particular thing is helpful or not.