I don't really get this analogy. As I understand David, he's saying that rationalism focuses on how to maximize expected value (EV) at explicit decision points, but ignores how to deal with facets of the system that don't show up as explicit choices
if you expected to face decisions where you only had time for reflexes, you'd prove theorems about which reflexes are best (in expectation, across uncertain complex environments), or about the best reflex-choosing algorithms, and so on
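(as a toy illustration of that "best reflex in expectation" point — the environments, reflex names, payoffs, and probabilities below are all invented for the sketch — you could imagine ranking fixed reflex policies by their expected payoff over a distribution of possible environments, instead of deliberating at each decision point:)

```python
# Toy sketch: pick the fixed reflex policy whose average payoff is
# highest across a distribution over uncertain environments.
# All names and numbers here are hypothetical.

environments = {            # P(environment)
    "calm": 0.6,
    "volatile": 0.3,
    "adversarial": 0.1,
}

reflexes = {                # payoff of each reflex in each environment
    "freeze":  {"calm": 1.0, "volatile": 0.2, "adversarial": 0.5},
    "dodge":   {"calm": 0.8, "volatile": 0.9, "adversarial": 0.4},
    "counter": {"calm": 0.5, "volatile": 0.6, "adversarial": 0.9},
}

def expected_value(payoffs):
    # EV = sum over environments of P(env) * payoff(env)
    return sum(p * payoffs[env] for env, p in environments.items())

# choose the reflex with the highest EV across environments
best = max(reflexes, key=lambda r: expected_value(reflexes[r]))
print(best, expected_value(reflexes[best]))
```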
-
but this seems like the sort of argument where we're probably retreading old ground, and this probably isn't the best context to do that productively (i haven't read your writings recently and just came here to post the link)
-
Yes… just to clarify, my 1987 Plan B was also a dead end, as were all the others I could think of subsequently, so in 1992 I gave up on AI.