Perhaps a better analogy: why are you worried about how the boiler behaves if it gets overpressurized? Why not just focus on how well it runs the rest of the time?
-
Replying to @drethelin @Meaningness
I don't really get this analogy. As I understand David, he's saying that rationalism focuses on how to maximize EV at explicit decision points, but ignores how to deal with facets of the system that don't show up as explicit choices.
-
It's not [rational = bugs] vs. [metarational = working smoothly], but rather [rational = choice points] vs. [metarational = total behavior of the system]
-
Replying to @KevinSimler @drethelin
Yes! And not just total behavior of system, but how it fits into its context, and the design space of alternative systems and possible revisions
-
Replying to @Meaningness @KevinSimler
But this is literally what conversations about decision theories on Lesswrong are like!
-
Replying to @drethelin @KevinSimler
Hmm... maybe a specific example of that would help?
-
Thanks! I read the intro and conclusion of this (and very quickly scanned the stuff in between to see if anything looked surprising at a glance, which it didn’t). If I TL;DR’d it as “here are a bunch of reasons decision theory doesn’t work in the real world” would I be wrong?
-
i would TL;DR it as "here are a bunch of problems a decision theory would need to solve in order to work in the real world", where "work in the real world" means we can use the concepts from it to think non-confusedly about the properties of real AI systems
-
Replying to @VesselOfSpirit @Meaningness
What kind of construction will real AI systems need to have in order for this approach to work? Are there any restrictions, or would we be able to use it to reason about any piece of software? MS Excel, say? Solitaire? A display driver? The Linux kernel? Salesforce?
-
i don't think it would help with any of those. where the boundary lies between it helping and not helping is a good question that i don't know the answer to. maybe something like general modeling of its environment, its effects thereon, and the effects thereof on it
-
Replying to @VesselOfSpirit @Meaningness
Thanks! Would you expect it to help w/ reasoning about humans? (If not, why would AI be different?) Re general modeling of environment & effects on/from it, what do you mean by "general"? eg if AI is general in the same way as a well-stocked toolbox is general, would that count?