Perhaps a better analogy: why are you worried about how the boiler behaves if it gets overpressurized? Why not just focus on how well it runs the rest of the time?
Replying to @drethelin @Meaningness
I don't really get this analogy. As I understand David, he's saying that rationalism focuses on how to maximize expected value (EV) at explicit decision points, but ignores how to deal with facets of the system that don't show up as explicit choices.
It's not [rational = bugs] vs. [metarational = working smoothly], but rather [rational = choice points] vs. [metarational = total behavior of the system]
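To pin down the "choice points" half of that contrast, here is a minimal sketch of expected-value maximization at a single explicit decision point (the option names and payoff numbers are invented for illustration):

```python
# Each explicit option maps to a list of (probability, payoff) outcomes;
# the options and numbers here are toy stand-ins, not from the thread.
options = {
    "take_umbrella":  [(0.3, -1.0), (0.7, -0.1)],  # rain vs. no rain
    "leave_umbrella": [(0.3, -5.0), (0.7,  0.0)],
}

def expected_value(outcomes):
    """Probability-weighted sum of payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

# The "choice point": pick the option with the highest expected value.
best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # -> take_umbrella
```

On the framing above, everything outside the max call (where the options and payoffs came from, and whether this was the right decision to be facing at all) never enters the calculation.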
Replying to @KevinSimler @drethelin
Yes! And not just the total behavior of the system, but how it fits into its context, and the design space of alternative systems and possible revisions.
Replying to @Meaningness @KevinSimler
But this is literally what conversations about decision theories on LessWrong are like!
Replying to @drethelin @KevinSimler
Hmm... maybe a specific example of that would help?
Thanks! I read the intro and conclusion of this (and very quickly scanned the stuff in between to see if anything looked surprising at a glance, which it didn’t). If I TL;DR’d it as “here are a bunch of reasons decision theory doesn’t work in the real world,” would I be wrong?
I would TL;DR it as "here are a bunch of problems a decision theory would need to solve in order to work in the real world", where "work in the real world" means we can use the concepts from it to think non-confusedly about the properties of real AI systems.
So, at what point should one look at the list of apparently-very-hard problems and say “this looks like the wrong approach, let’s try Plan B”? For me that was 1987: https://www.aaai.org/Papers/AAAI/1987/AAAI87-048.pdf
If you expected to face decisions where you only had time for reflexes, you'd prove theorems about what are (in expectation, across uncertain complex environments) the best reflexes, or the best reflex-choosing algorithms, and so on.
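A minimal sketch of that idea, with an invented toy environment and payoff function (nothing below is from the thread): estimate each fixed reflex's expected payoff by sampling environments, then deploy the argmax, so all the expected-value reasoning happens offline rather than at the moment of action.

```python
import random

random.seed(0)  # reproducible sampling

# Each sampled "environment" is reduced to a single latent threshold
# the deployed agent cannot observe directly.
def sample_environment():
    return random.gauss(0.5, 0.2)

# Payoff for a fixed reflex (a hard-coded trigger threshold): reward it
# for landing close to the environment's true threshold.
def payoff(reflex, env):
    return 1.0 if abs(reflex - env) < 0.1 else 0.0

# Candidate reflexes: no run-time deliberation, just fixed thresholds.
candidate_reflexes = [i / 10 for i in range(11)]

# Monte Carlo estimate of a reflex's expected payoff across environments.
def expected_payoff(reflex, n_samples=10_000):
    return sum(payoff(reflex, sample_environment())
               for _ in range(n_samples)) / n_samples

# The EV reasoning happens here, offline; the agent just reacts later.
best_reflex = max(candidate_reflexes, key=expected_payoff)
print("best reflex threshold:", best_reflex)
```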
Replying to @VesselOfSpirit @Meaningness
But this seems like the sort of argument where we're probably retreading old ground, and the current context is probably not the best one in which to do that productively (I haven't read your writings recently and just came here to post the link).