Just struck me that an important AI problem, formal logic reasoning, is actually defined at the wrong level. The problem is not to get a computer to do logic, but to get it to conclude that logic is in fact a thing to do, and to invent/discover the *idea* of logic via ML
Logic is easy. Discovering that logic is a thing you can do, uncovering its rules, and deciding when to use them, is the hard part.
Or to generalize, the problem isn't to synthesize GOFAI and deep learning. The problem is to get deep learning to discover/invent GOFAI. While embodied as a robot.
The best approach I can think of would be to make "rules" a learning domain: you need to dataify rules somehow. Maybe learn from a huge corpus of rules for a domain at various levels of rigor, from superstitions to heuristics to rules with formal properties.
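One way to read the "dataify rules" idea is as a labeled corpus where rigor is an explicit, ordinal attribute of each rule. This is purely a sketch of what such data might look like; the `Rule` schema, the rigor labels, and the example rules are all my invention, not anything stated in the thread:

```python
from dataclasses import dataclass

# Hypothetical rigor spectrum, treated as ordered categories so that
# "rigor" itself becomes something a model could learn to predict or use.
RIGOR_LEVELS = ["superstition", "heuristic", "formal"]

@dataclass
class Rule:
    text: str    # the rule, stated in natural or formal language
    domain: str  # the domain the rule applies to
    rigor: str   # one of RIGOR_LEVELS

# Toy corpus spanning the spectrum (all entries invented for illustration).
corpus = [
    Rule("Red sky at night, sailor's delight", "weather", "superstition"),
    Rule("If clouds thicken and the wind shifts, expect rain", "weather", "heuristic"),
    Rule("modus ponens: from P and P -> Q, conclude Q", "logic", "formal"),
]

def rigor_score(rule: Rule) -> int:
    """Map a rule's rigor label to an ordinal scale (0 = least rigorous)."""
    return RIGOR_LEVELS.index(rule.rigor)

scores = [rigor_score(r) for r in corpus]
```

The point of the ordinal encoding is that rigor is a spectrum, not a binary; a learner trained on such data could in principle pick up that some rules admit formal guarantees while others are merely statistically useful.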
I think the mistake made in the past is to distinguish strongly between informal and rigorous reasoning, elevate the latter, and solve it in isolation. It's a spectrum and necessarily so. Watertight formal logic doesn't work in a useful way when divorced from superstitions.
This sort of thing seems like it's the right idea twitter.com/alexsteer/stat
Doesn't this depend on whether scaling holds? My impression has been that the scaling argument as applied to prosaic LMs extends to the idea of learnable logic.
What do you mean by 'scaling argument'? I'm not sure what you're referring to.

