ONNX: https://onnx.ai/ :
> ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers
RIF (~FOL): https://en.wikipedia.org/wiki/Rule_Interchange_Format
Datalog (not Turing-complete): https://en.wikipedia.org/wiki/Datalog
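Datalog's non-Turing-completeness can be seen operationally: bottom-up evaluation always terminates, because every derivable fact is built from the finite set of constants appearing in the program. A minimal sketch in Python (the `edge`/`path` relation and the rules are the standard transitive-closure example, chosen here for illustration):

```python
# Naive bottom-up Datalog evaluation: transitive closure of an edge relation.
# Termination is guaranteed because all derivable facts are pairs drawn from
# the finite set of constants -- this is why Datalog is not Turing-complete.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    # path(X, Y) :- edge(X, Y).
    # path(X, Z) :- path(X, Y), edge(Y, Z).
    path = set(edges)
    while True:
        new = {(x, z) for (x, y) in path for (y2, z) in edges if y == y2}
        if new <= path:          # fixpoint reached: nothing new is derivable
            return path
        path |= new

print(sorted(transitive_closure(edges)))
```

The fixpoint loop can only run as long as it keeps adding new pairs, and there are at most |constants|² of those, so it must stop.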
"HOList: An Environment for Machine Learning of Higher-Order Theorem Proving" (2019) https://arxiv.org/abs/1904.03241
> Abstract: We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary mathematical theories and thereby present an interesting, open-ended challenge for deep learning. We provide an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment. HOL Light comes with a broad coverage of basic mathematical theorems on calculus and the formal proof of the Kepler conjecture, from which we derive a challenging benchmark for automated reasoning. We also present a deep reinforcement learning driven automated theorem prover, DeepHOL, with strong initial results on this benchmark.
Really cool work though!
I think theorem proving is very different. The space of possible heuristics you may apply to a proof is basically infinite. It can take a tremendous amount of creativity and intuition to come up with a complicated and novel proof. While I can see ML models being of use for simpler proofs or for lemmas within bigger proofs (such as simple epsilon-delta arguments), I have a hard time imagining that they will be able to do genuinely novel proofs anytime soon.
[^1]: I emphasise "symbolically" because it is my understanding that outside of simple situations and university lectures, most people don't bother with that, instead solving differential equations numerically.
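To illustrate the numerical approach mentioned in the footnote, here is a minimal sketch of forward Euler applied to dy/dt = -y with y(0) = 1 (the equation and step count are chosen for the example; the exact solution exp(-t) lets us check the error):

```python
import math

# Forward Euler for dy/dt = -y, y(0) = 1; exact solution is y(t) = exp(-t).
def euler(f, y0, t_end, steps):
    h = t_end / steps
    t, y = 0.0, y0
    for _ in range(steps):
        y += h * f(t, y)         # step along the local tangent
        t += h
    return y

approx = euler(lambda t, y: -y, 1.0, 1.0, 1000)
exact = math.exp(-1.0)
print(abs(approx - exact))       # discretization error shrinks with more steps
```

This is the crudest possible scheme; real solvers use higher-order and adaptive methods, but the point stands: no symbolic manipulation is needed.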
[^2]: There are some subtleties here: as far as I know, deciding whether two symbolic expressions are equal is undecidable in the general case, because at some point you have to compare e.g. coefficients, and equality of (computable) real numbers is undecidable. In practice, you consider two numbers equal if their difference is below a certain threshold, which could in theory yield false positives, but I find it unlikely that this would occur in practice when solving an equation.
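The threshold comparison described in the footnote is trivial to sketch, including the false-positive case it admits (the epsilon value and examples are illustrative):

```python
# Threshold-based "equality" of numeric coefficients, as described above.
# Exact symbolic equality is undecidable in general (cf. Richardson's
# theorem), so in practice coefficients are compared numerically.

def approx_equal(a, b, eps=1e-9):
    return abs(a - b) < eps

# 0.1 + 0.2 != 0.3 exactly in binary floating point, but passes the test:
print(approx_equal(0.1 + 0.2, 0.3))
# A false positive: two genuinely different values within the threshold:
print(approx_equal(1.0, 1.0 + 1e-12))
```

Python's standard library also offers `math.isclose`, which adds a relative tolerance; the principle is the same.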
You can generate possible transformations from one proposition to another, yes. But the space of possible transformations is infinite, so how does the system know which ones to try? Moreover, even if you apply some local transformation, how do you know that you have made progress?
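The problem with blind enumeration can be sketched with a toy string-rewriting search (the rules and names below are made up for illustration). Even with three rules the frontier grows quickly, and nothing tells the search whether a given rewrite constitutes progress toward the goal:

```python
from collections import deque

# Toy breadth-first search over string-rewriting rules (illustrative only).
RULES = [("A", "AB"), ("B", "BA"), ("AB", "C")]

def rewrites(s):
    # Yield every string obtainable by one rule application at one position.
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            yield s[:i] + rhs + s[i + len(lhs):]
            i = s.find(lhs, i + 1)

def search(start, goal, max_nodes=10_000):
    seen, queue = {start}, deque([start])
    while queue and len(seen) < max_nodes:
        s = queue.popleft()
        if s == goal:
            return True
        for t in rewrites(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False   # gave up: blind search exhausted its node budget

print(search("A", "CC"))
```

With no heuristic ranking of candidate rewrites, the only option is an exhaustive frontier, and the node budget is the only thing preventing it from running forever; this is exactly the gap that learned guidance (as in DeepHOL above) tries to fill.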
A system like that could probably perform well on typical undergraduate proofs (epsilon-delta arguments, proofs that such and such a thing is a norm, etc.), but it's going to have a hard time with more difficult problems.
Logical connective: https://en.wikipedia.org/wiki/Logical_connective
Propositional logic: https://en.wikipedia.org/wiki/Propositional_calculus
Rules of inference: https://en.wikipedia.org/wiki/Rule_of_inference
DL: Description logic: https://en.wikipedia.org/wiki/Description_logic (... The OWL 2 profiles (EL, QL, RL; DL, Full) have established decidability and complexity: https://www.w3.org/TR/owl2-profiles/ )
FOL: First-order logic: https://en.wikipedia.org/wiki/First-order_logic
HOL: Higher-order logic: https://en.wikipedia.org/wiki/Higher-order_logic
In terms of regurgitating without critical reasoning?
Critical reasoning: https://en.wikipedia.org/wiki/Critical_thinking
Transformers can represent some very complex logical operations, and per this article are Turing-complete (assuming arbitrary precision): https://arxiv.org/abs/1901.03429, meaning any computable function, including a theorem prover, can in principle be represented as a transformer.
Another question is whether it is feasible/viable/rational to build a transformer for this. My intuition says: no.