Conversation

This seems to miss the fact that LLMs don't reason at the level of their internal encodings; they reason verbally, and that verbal output can reason about causality. So perhaps the argument is that if they don't encode causal relationships internally, they can't "truly" do causal reasoning?
To make a somewhat parallel argument: human brains can perform causal reasoning, but the substrate, neurons, doesn't encode causal representations; neurons are just chemistry. It seems that gradient descent built a non-causal substrate for the language model in a similar fashion.