Conversation

Are there good mental models for reading and following theorems and proofs? I don’t mean concrete examples, but rather: what’s the “machinery” in your head when you read complex nested statements with “suppose”, “implies”, quantifiers, etc.?
One way to look at it is — if you were to implement a theorem prover, how would you reify its internal state? What internal state is there? How does encountering statements or proof goals change that state? As a comparison, in programming, you “imagine” the call stack.
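To make that concrete: in a proof assistant the internal state really is explicit, a list of hypotheses plus a current goal, and each tactic step transforms it. A minimal Lean 4 sketch (the proposition itself is just an illustration):

```lean
example (A B C : Prop) (hAB : A → B) (hBC : B → C) : A → C := by
  -- state: hypotheses hAB, hBC; goal is A → C
  intro hA
  -- state: new hypothesis hA : A; goal is now C
  apply hBC
  -- state: goal is now B (apply works backwards from the goal)
  exact hAB hA
```

The comments are the “reified state”: reading a proof on paper is simulating exactly those transitions in your head.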
I can follow simple logic. But when it gets to things like “show that (for each X, P implies Q) implies (for each X, R)”, my brain just turns off. But this seems similar to reading nested function calls. I’m not supposed to read it all at once, I just need to understand the path.
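Walking through a statement of that shape one step at a time looks like this in Lean 4. This is a hypothetical instance chosen so the example closes: R x is taken to be Q x, and ∀ x, P x is assumed as an extra hypothesis (both choices are made up for illustration):

```lean
example (α : Type) (P Q : α → Prop) (hP : ∀ x, P x) :
    (∀ x, P x → Q x) → (∀ x, Q x) := by
  intro h            -- step into the outer implication: assume ∀ x, P x → Q x
  intro x            -- step into the ∀: fix an arbitrary x
  exact h x (hP x)   -- use the assumption at x, feeding it a proof of P x
```

Each `intro` peels off one layer of nesting, so you never hold the whole statement in your head at once, only the current layer.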
I think my current model is that I keep a list of assertions that are known to be true. I can combine them to produce more. I also keep a list of assertions that I hope to “reach” (or their negatives). When we get to implications I get a bit lost though.
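That model, a growing list of assertions that can be combined to produce new ones until one of them matches the target, is essentially what `have` does in Lean 4. A small sketch with made-up propositions:

```lean
example (A B C : Prop) (h1 : A) (h2 : A → B) (h3 : B → C) : C := by
  have hB : B := h2 h1  -- combine two known facts to add B to the list
  have hC : C := h3 hB  -- combine again; this one is the target
  exact hC
```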
Updated mental model re: implications. When you have A => B, what you do with it depends on whether it’s an assumption or a goal. If it’s a goal, you “step into” it by assuming A. Then you need to show B. If it’s an assumption, you can use it whenever you have A to get B.
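The two cases can be put side by side in a Lean 4 sketch:

```lean
-- A → B as the GOAL: "step into" it by assuming A (intro);
-- the goal then becomes B.
example (A B : Prop) (hB : B) : A → B := by
  intro hA      -- assume A; remaining goal is B
  exact hB

-- A → B as an ASSUMPTION: whenever you have a proof of A,
-- apply the implication to it to get B.
example (A B : Prop) (hAB : A → B) (hA : A) : B := by
  exact hAB hA  -- modus ponens: feed the proof of A to A → B
```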