Show HN: i2forge – A Platform for Verified Reasoning (i2forge.com)
35 points by akiarie on March 18, 2023 | 13 comments
Hi! We're Amisi and Claude, builders of the i2 language and the i2forge platform. i2 is an (early draft of a) language designed to make formal verification easy for mathematicians.

We are launching the language as an open source project today (https://i2lang.org) together with a closed alpha for i2forge.

However, we have a publicly accessible demo page which anyone can use, and we would love your feedback.

Thanks.




> It occupies the place of Peano's first axiom that 1 ∈ N.

Natural numbers should start with 0 [0].

Suspiciously, the following lines in the proof can be entirely removed while still verifying:

    @func nat(x any) bool;
    term 1 nat;
[0] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E...


You raise two fine points.

> Natural numbers should start with 0 [0].

I agree 100% with Dijkstra about where the numbers should begin. In fact, here's an early draft in which we were using 0 instead of 1: [0]. The reason we ended up going with 1 is that we were imitating the structure of the proofs in Landau's "Foundations", and he uses 1 as his first number.

An interesting takeaway was the following: the way Landau structures his theorems and proofs makes use of his assumption that 0 is not a natural number. For example, Theorem 7 in the book is "y != x + y (for all natural numbers x, y)". So I found that sticking to my roots as a programmer (and admirer of Dijkstra) and insisting on zero-indexing led to more complicated theorem statements (e.g. "y != x + y for all natural numbers x, y with x > 0"). I'm not sure whether this is something structural about how the natural numbers, integers, etc. relate, or whether it's an artefact of Landau's approach.
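To make the contrast concrete, here is a minimal sketch in Lean 4 (not i2; the names and tactic choices are mine) of the two formulations:

    -- Landau-style: with the side condition x > 0, the disequality holds.
    theorem landau_thm7 (x y : Nat) (hx : 0 < x) : y ≠ x + y := by
      omega

    -- Zero-indexed and unconditional, the statement is false: x = 0
    -- gives equality.
    example (y : Nat) : y = 0 + y := (Nat.zero_add y).symm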

> Suspiciously, the following lines in the proof can be entirely removed while still verifying:

Suspicion warranted! The first (or zeroth) of these is a bug we hadn't fixed yet (see [1]). Our implementation is rather rough around the edges, but I've put in a small fix for this; try again and tell me what you think.

The line "term 1 nat" interestingly isn't used in the theorem we've written, so it's more a mistake on our part for including it in the sample code. Good spot! See the language page [2] for an example where it is used.

As an aside, have you noticed that our proof style is based on what we call "Dijkstra-chaining"? Any thoughts on this? [3]

[0] https://paste.sr.ht/~akiarie/d7a2ae368faae4ba9a8bc13eeefcf18....

[1] https://todo.sr.ht/~lbnz/i2/1

[2] https://i2lang.org/#uses1

[3] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/E...


Presumably the theorem becomes `x + y >= y`?

It's difficult to convince the entire field of mathematics that they are wrong, though. It's hard enough to convince programmers: humans have first=1 drummed into us so deeply that it's like trying to convince a Stone Age man that the world is round.

I think mathematicians care even less because most of the problems with starting from 1 can be hand-waved away when you're just writing non-executable formulas and can make up any syntax you want on the spot. If anything is really awkward with 1-based numbering, just change the notation so that it isn't! Programmers can't really do that, so they feel the pain more.


I think the challenge is that it isn't completely obvious to me that starting with 0 is pedagogically correct, though it is so logically. `x + y >= y` is different from `y != x + y`; the former says that addition (of natural numbers) never decreases its arguments, the latter that there is no identity element. Is it possible that admitting an identity element is confusing for beginners? Or is this just a consequence of my own educational conditioning?

(For the record, I think all the arguments Dijkstra gives for 0-indexing are correct. I just lack the mathematical intuition to see why the special case above appears to be one in which admitting 0 complicates things.)
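To spell out the distinction (a Lean 4 sketch of my own, not the thread's i2 code): over naturals that include 0, the weak inequality holds unconditionally, while the disequality needs a side condition.

    -- Monotone form: holds for all naturals, including x = 0.
    example (x y : Nat) : y ≤ x + y := Nat.le_add_left y x

    -- No-identity form: fails at x = 0, so it needs the hypothesis 0 < x.
    example (x y : Nat) (hx : 0 < x) : y ≠ x + y := by omega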


Interesting project. How does i2 compare with other languages used in verification / theorem proving, such as Agda, Lean, Isabelle, etc?

From your site:

> i2forge is a commercial venture, unlike i2

How are you planning to monetize i2forge?


> How does i2 compare with other languages used in verification?

I have to plead a measure of ignorance here as context for my response, though we are working to understand the existing languages.

Perhaps what distinguishes i2 from the existing languages is that we're treating this as an engineering problem rather than a science problem. We aren't trying to build up from the ideal logical system (which is maybe what Coq, Agda, Lean, etc. do) but rather, viewing things pragmatically, trying to build a language that requires the least investment for the average practicing mathematician to learn and begin using.

The sense I get (again appealing to ignorance) when I look at most of the Coq-style options is that one has to learn type theory and/or constructivism before starting. In this way maybe we're closer to HOL and Isabelle.

Another angle is that we're coming from a programming background and view C as the model for what we're trying to achieve. C somehow captured all the essential capabilities of a von Neumann machine at the right level of abstraction, and with a very "orthogonal" feature set, such that nearly every major system in the world is written in it or in a language based upon it. Our goal (which may greatly exceed our capacities) is to design something that does the same for maths.

> How are you planning to monetize i2forge?

We aren't sure about this at all. We're just convinced that we have unique insights into this problem and want to work on it full time, so we will be working towards monetising. One idea we've been playing around with is building tools for teachers, or for independent learners.


I like the way you are thinking about this. And yes, formal math is the future! If you want to see how to deal with quantification properly, or more generally with operators, without needing to explicitly introduce types, check out https://practal.com .


We're looking at this; some very fascinating stuff. Somewhere at the beginning of "The Calculi of Lambda Conversion", Church observes that quantification can be reduced to lambdas. I have not read to the end of the book (still learning these things step by step), but as we have struggled to understand how we should represent quantification, one thing that became clear is the analogy between functions and quantifiers, on the grounds that both bind variables. So on the surface, what you're talking about has the ring of truth.
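To illustrate the analogy (a Lean 4 sketch of the standard reduction; this is my example, not i2 code): a quantifier can be packaged as an ordinary higher-order function, with the lambda doing the variable binding.

    -- Quantification as an operator applied to a lambda: `Forall` is just
    -- a function from predicates to propositions.
    def Forall (p : Nat → Prop) : Prop := ∀ x, p x

    -- The lambda, not the quantifier symbol, binds the variable x.
    example : Forall (fun x => x + 0 = x) := fun x => Nat.add_zero x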


Church tried to reduce logic to his invention, the untyped lambda calculus, but his first attempt failed [1]. He later went on to invent the simple theory of types [2], and type theory is pretty much a descendant of that. The argument I am making with Abstraction Logic is that trying to use lambda+types for logic is misguided and overly restrictive: Abstraction Logic is simpler, and can nevertheless also express variable binding naturally.

You are kind of lucky that you are relatively fresh to all this, so you can approach both type theory and abstraction logic with an open mind, and see for yourself which you find more appealing.

[1] S. C. Kleene, J. B. Rosser. The Inconsistency of Certain Formal Logics. Annals of Mathematics, 1935.

[2] Alonzo Church. A Formulation of the Simple Theory of Types. Journal of Symbolic Logic, 1940.


How does your system compare to Metamath[0], aside from the obvious difference of not having set theory baked in? I notice both your Abstraction Logic and Metamath's system can state the induction axiom scheme as a single statement, which seems to me like a huge usability advantage over FOL.
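For instance, here is the single-statement form in Lean 4 (my own sketch, not Metamath or Practal syntax), where the predicate is quantified over directly:

    -- Induction as one statement: P ranges over all predicates on Nat,
    -- so no per-formula schema instances are needed.
    example (P : Nat → Prop) (h0 : P 0) (hs : ∀ n, P n → P (n + 1)) :
        ∀ n, P n := by
      intro n
      induction n with
      | zero => exact h0
      | succ n ih => exact hs n ih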

[0] https://us.metamath.org/mpeuni/mmset.html


Metamath itself is a logical framework and doesn't really have a semantics. You can encode the rules of a logic with it, and then you have to convince yourself that this encoding is what you want. I believe most of Metamath's content is based on an encoding of first-order logic; it sounds like they allow some sort of schematic variables, which make it possible to replace axiom schemata with single axioms, but it is still FOL.

Given Metamath has no semantics, there is no built-in notion of soundness with Metamath. I think that has been fixed with Metamath Zero, but as a consequence, Metamath Zero is based on multi-sorted first-order logic only.

On the other hand, Practal uses Abstraction Logic, which can also act as a logical framework, but is itself already a logic with a simple semantics, simpler and more flexible than first-order logic (or any other general logic I know of). An important difference from first-order logic is that Abstraction Logic supports general operators, while first-order logic supports only two operators out of the box: universal quantification ∀ and existential quantification ∃.
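As a rough illustration in Lean 4 (not Abstraction Logic syntax; the operator is my own example), a general operator is any variable-binding construct, not just ∀ and ∃:

    -- A user-defined binding operator: the index variable is bound by the
    -- lambda passed as the function argument, just as with a quantifier.
    def boundedSum (n : Nat) (f : Nat → Nat) : Nat :=
      match n with
      | 0 => 0
      | n + 1 => boundedSum n f + f n

    #eval boundedSum 4 (fun x => x * x)  -- 0 + 1 + 4 + 9 = 14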


Thanks - I hadn't looked into Metamath Zero before, but it sounds like that would be the right thing to compare Abstraction Logic to! Skimming https://arxiv.org/abs/1910.10703 makes it seem like Metamath Zero still operates at the level of schema, but has some other changes compared to Metamath that are too subtle for me to digest in an afternoon.


I just skimmed the paper you linked (I read it before, but forgot its details). So Metamath Zero is still a logical framework, like Metamath, but has a few more tools to ensure soundness of the logics you formulate in it. Nevertheless, just like Metamath, it does not have a semantics, because it operates on a purely syntactic level. You can formulate object logics in it, like FOL, which come with their own semantics, but it is up to you to show that this semantics is actually preserved by your encoding in Metamath Zero.

So I would say that this is the main difference between a logical framework (LF) (like Metamath and Metamath Zero) and Abstraction Logic (AL): the LF is based on proof theory and syntax only (BYOS, bring your own semantics), while AL gives you, in addition to proofs and syntax, a simple semantics.

Some LFs, like Isabelle, are based on intuitionistic type theory, and so they actually DO come with a semantics as well. But I wouldn't describe this semantics as simple (check out, for example, [1]), so when you describe an object logic with such an LF, you cannot really rely on that semantics to explain your object logic's semantics or prove properties like completeness; you are again left to your own purely syntactic devices, and are back to BYOS.

Does that actually make a difference in practice? Is there a practical benefit to AL having a simple semantics, and other LFs not? I am convinced that yes, it makes a big difference, because compared to other LFs it makes it simpler (or even possible) to implement features which are simple yet general and powerful, and it also makes it simpler to interface with other software, such as computer algebra systems. But in the end, this can only be proven by actually building Practal and showing its practical benefits.

[1] Chad E. Brown. A semantics for intuitionistic higher-order logic .... https://www.ps.uni-saarland.de/iholhoas/msethoas.pdf



