Conversation

I think the core difference with LW-type rationality is: “The world is not insane. If you think the world is insane, then your model of the world is wrong and you won’t be effective.” But of course, I don’t allow myself to write about AI-safety, so my job is a lot easier.
I've tried to keep to the standard of intellectual rigour of the best rationality blogs. I HAVE been hugely influenced by LessWrong and its writers. But I think their approach is fundamentally wrong: instrumental rationality doesn't demand epistemic correctness.
(Also related: communities obsessed with epistemic correctness eventually devolve into places where people sit around debating the finer points of some idea, never doing anything, and therefore never accomplishing anything.)