Just struck me that an important AI problem, formal logic reasoning, is actually defined at the wrong level. The problem is not to get a computer to do logic, but to get it to conclude that logic is in fact a thing worth doing, and to invent/discover the *idea* of logic via ML
Logic is easy. The hard part is discovering that logic is a thing you can do, uncovering its rules, and deciding when to use them.
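A minimal sketch (my illustration, not from the thread) of the "logic is easy" half of this point: hard-coding forward chaining over modus ponens takes a few lines of Python. The hard problem named upthread is getting a learner to discover that such rules exist and are worth applying, rather than having them written in by hand.

```python
# Applying fixed logical rules is trivially programmable.
# The open problem is a system *discovering* the idea of doing this.

def forward_chain(facts, rules):
    """Derive everything reachable from `facts` using implication `rules`.

    facts: set of true propositions, e.g. {"rain"}
    rules: list of (premises, conclusion) pairs, e.g. [({"rain"}, "wet")]
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if all premises hold, the conclusion holds.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"rain"}, [({"rain"}, "wet"), ({"wet"}, "slippery")]))
# derives {'rain', 'wet', 'slippery'} (set print order may vary)
```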
There is essentially zero line of sight to how to do this in current leading-edge ML research.
Replying to
Doesn't this depend on whether scaling holds? My impression has been that the scaling argument as applied to prosaic LMs extends to the idea of learnable logic.
Replying to
What do you mean by 'scaling argument'? I'm not sure what you're referring to.
Replying to
Roughly, as AI models are scaled up (data + compute + model params), they develop deeper, more general, and more abstract reasoning capabilities.
The view among proponents of the scaling hypothesis is that these capabilities may (or will) eventually include the principles of logic.
Replying to
Ah, okay. Yeah, I've heard that argument, but I think it's kind of a leap of faith at this point. There's no clear reason to believe it other than that biological brains seem to be an existence proof if you squint enough.
Replying to
Hmm, I've tended to take it more seriously, given that it seems to explain the history of language modelling quite well.
As model size, data, and compute went up, models went from writing words → phrases → sentences → paragraphs, to now being able to write blog posts & do math (GPT+)

