Conversation

I think the core difference from LW-type rationality is: “The world is not insane. If you think the world is insane, then your model of the world is wrong and you won’t be effective.” But of course, I don’t allow myself to write about AI safety, so my job is a lot easier.
The alternative frame I've picked is: "The world is a complex adaptive system. Like all CASs, there are simple rules at the bottom. You can figure out those rules by observation, and verify them through action. If you do this, you will win."
I've tried to hold myself to the standard of intellectual rigour set by the best rationality blogs. I HAVE been hugely influenced by LessWrong and its writers. But I think their approach is fundamentally wrong: instrumental rationality doesn't demand epistemic correctness.