It's too long for HN; OP presumably reworded it to fit. If you have a better suggestion, one of the mods might see it and change it, though I think OP's own window to edit has passed.
The difference is that no cubical proof assistant has been implemented by Leo de Moura. Get Leo de Moura to implement cubical, and you will have a proof assistant more powerful than you could ever imagine, built on extremely satisfying foundations.
Anonymous comments are cowardly; please show yourself. If you stay anonymous I am not aware of power dynamics, and I cannot adjust my response accordingly.
But that wasn't the point: He's right in that constructive/intuitionistic mathematics is so niche that it can be safely ignored when discussing formalization of the mathematics he and almost every other mathematician or scientist is interested in.
Kevin gets attention because he's an accomplished mathematician who tries to formalize his work, or at least what it's based on. I find it ridiculous to dismiss his opinion the way you do.
I know Kevin. He does good work. He also heckled every talk at ITP, including my own, in the middle, to express his opinions. I have never met another researcher in my life who does that. Media outlets interview him and not the researchers behind the other 50 years of work on proof assistants, using them both for mathematics and for verifying systems. I wrote a whole book about the history of program verification and for sure nobody has ever interviewed me about it. Kevin is loud and bold; that is why his opinions spread like fire. But they are just that: opinions.
LEM is used everywhere because it's actually a theorem. It follows from functional extensionality and the existence of quotient types. https://github.com/leanprover/lean/blob/master/library/init/...
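(The link is to the Lean 3 library; in Lean 4 syntax, a minimal sketch of the same point, using the core `Classical` namespace:)

```lean
-- Sketch (Lean 4): excluded middle is available as a theorem, not an
-- extra user-postulated axiom. `Classical.em` is derived, Diaconescu-style,
-- from the `Classical.choice` axiom plus extensionality principles.
example (p : Prop) : p ∨ ¬p := Classical.em p

-- Double negation elimination inside proofs follows:
example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h
```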
The axiom of choice is an actual axiom, and it too ends up being used everywhere, but mostly to introduce decidable instances that permit things like double negation elimination inside proofs. An illustration: one change introduces these instances to the file, and another implicitly uses the instance.
I'm not saying these concerns about constructibility are unimportant. I do appreciate both POVs. I just don't think it's something you have to force mathematicians to worry about if all they're interested in is formalizing their math on a computer.
Something that's interesting about Lean is that double-negation elimination is used in Prop but not generally in Type and above. You have to explicitly reach for the axiom of choice if you want to "construct" a term of a type that isn't empty.
With all due respect, that seems a bit vague to me. What are the "questions of ergonomics" you're thinking of here? Choosing the negative form of a goal whenever you have to prove something non-constructively seems even easier ergonomically than, e.g. adding LEM as an axiom. You basically don't have to change your proof methods as long as the statement you're proving is of the appropriate form.
I don't think having to think about constructibility all the time is more ergonomic than accepting that the system has a well-accepted axiom. There doesn't need to be a one-true system, of course, and for people interested in foundations, reverse mathematics, constructive mathematics, higher category theory, and homotopy type theory, there are other systems. Most mathematicians don't study these things, and accepting LEM is natural for them.
Well, there are many possible constructive 'versions' of any non-constructive statement, so 'not having to do anything or be reminde[d] of questions of constructivity', strictly speaking, leads to just sticking to the non-constructive version by default. Since it's easy, even in constructive logic, to derive the non-constructive 'version' of some statement from a constructive one, this transformation (properly understood) is as good as automatic.
Whether or not a proposition is constructive is not something I've ever heard any mathematicians in my department ever worry about. I have only ever heard about, for example, showing a certain bound was "effective" after showing that there was a bound at all. Effective means that there is some procedure to construct the number -- but no one cares about whether the argument that shows this number is an upper bound is itself constructive. Lean with its Prop, Type 0, Type 1, ... universe hierarchy is sort of well suited for this.
I'm sure that there are interesting questions that hinge on constructibility in a proof, but again, mathematicians I talk to have never seriously thought about these things. In my field (topology), we do worry about constructing objects and invariants (and sometimes we even care about the complexity class), but at the proof level everything is classical.
If you're involved in developing proof assistants and you want mathematicians to use it, then I would suggest making it so you can change the "reasoning monad" in a file. If you put it in classical mode, it should look like you're able to use LEM, etc., and the goal window should show only untransformed statements, but then when outside the classical monad a constructivist would see the transformed statements. This would be sufficiently ergonomic.
Broadly speaking, because it's useful to have a marker of "was this statement derived via a proof by contradiction, or something like that?", and using the negative form is a sensible way of showing this.
> In my field (topology), ... at the proof level everything is classical.
On the contrary, constructive logic is found quite naturally (even in ordinary mathematics!) as the "internal language/logic" of some structures that are of interest to topologists. HoTT itself was designed as a logical foundation that works naturally with homotopies.
To whom? (That is rhetorical. The point is that this is not useful information for most mathematicians -- if you picked up a random graduate textbook in, say, algebra or combinatorics, I am certain the theorems will not be annotated with which are proved by contradiction. Quoting you from earlier, "With all due respect, that seems a bit vague to me.")
> are of interest to topologists
I didn't mean to suggest I was speaking for all of topology (more specifically, I'm interested in low-dimensional topology), but still topology is a large field and what you are talking about is a small part of it. Also, just because there are constructive internal logics, I'm not sure that means the system you use to study them has to be constructive itself, and if that's what you are interested in studying you might want a more specialized system anyway.
You don't need to convince me that these things are interesting -- I have some knowledge about internal logics and I do some higher category theory. It's just that languages should be ergonomic for their users. Saying something is easy does not prove it is ergonomic.
Think about this: Most mathematicians I know that use Lean seem to turn on all the additional features that make it as classical as possible. They don't even want the inconvenience of marking definitions noncomputable.
(Maybe you're already familiar with how Lean works, but LEM and double negation elimination do not let you leak values to the Type level from the Prop level. A proof that there is a number with a specific property doesn't mean you actually can have such a number. You need the noncomputable choice function to "obtain" a value from an existential.)
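(A minimal Lean 4 sketch of that Prop/Type separation, using names from the core library:)

```lean
-- A Prop-level existence proof: no data escapes to the Type level.
theorem exPos : ∃ n : Nat, n > 0 := ⟨1, by decide⟩

-- Extracting an actual Nat from it requires the choice function,
-- and Lean forces you to mark the definition `noncomputable`.
noncomputable def witness : Nat := Classical.choose exPos

-- The specification travels with it:
theorem witness_pos : witness > 0 := Classical.choose_spec exPos
```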
I don't know Kevin as well as you do (apparently) but maybe he is just ignorant on the topics you know so much about. Why something gets attention of media (Quanta Magazine?) is complicated. From your comments, you also seem very "bold".
Anonymous comments scare me because there are (a few, thankfully not many) abusive people in the PL community I am afraid of engaging with. I don't want to accidentally find myself arguing with one of them. I need to know when to exit the conversation.
Proof-relevant mathematics, higher category theory, lots of topology. Sorry, I don't spend all of my time on Hacker News; I sometimes don't get around to responding to things. I am talking to Kevin about this framing so we can figure out how to make it healthier in the future. Some of my comments about Kevin were likewise out of line, because I interpreted some of our previous interactions as rooted in a gender-based power dynamic when they were actually just rooted in something Kevin finds difficult and wants to do better at, and that is my honest mistake.
Now, UIP is valid if the only way of proving an equality is to show that the two terms are definitionally equal. However, there are various type theories, such as Homotopy Type Theory, in which this axiom does not hold because propositional equality can have other proofs.
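(In Lean 4, for instance, `Eq` lives in `Prop` and proof irrelevance is definitional, so UIP is provable outright; a minimal sketch:)

```lean
-- UIP: any two proofs of the same equality are themselves equal.
-- This type-checks by `rfl` only because Lean definitionally
-- identifies all proofs of the same proposition.
theorem uip {α : Type} {a b : α} (p q : a = b) : p = q := rfl
```

In HoTT this theorem fails for general types: a type's identity proofs can carry genuine structure (paths), which is exactly what univalence exploits.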
The HoTT people are doing good work and I don't doubt that automation can help with the _significant_ additional complexity of higher dimensional cubes, but that's not really all that would be missing.
Finally, if you can't claim Lean is good enough for all mathematics, then you can't make that claim for any existing system, nor for any system that doesn't take classical logic seriously (postulating an axiom doesn't count).
Correct me if I'm wrong (I may well be), but couldn't one work classically in a HoTT-based system just by sticking to (-1)-truncated types ("mere propositions"), for which LEM can be consistently assumed?
Lean is the calculus of inductive constructions with uniqueness of identity proofs. Classical logic in Lean requires using axioms.
What makes the cubical approach more powerful? What makes lean weak?
Lean is not weak, it just commits to something called Uniqueness of Identity Proofs (UIP). This is inconsistent with Univalence, a property that cubical type theories have. Necessarily, neither system can express "all of math." So it goes.
Leo chose UIP as a novel route to really powerful automation of equality proofs. Leo is very, very good at this; it's his whole shtick. But he had to hack around things a bit in order to get what he wanted. The paper by Bas and his student shows that in cubical, you don't have to hack around things; this automation of equality proofs arises in the natural way.
The gap is not fundamental, and it irritates me when it is attributed to the choice of UIP. The reason Lean is so useful is because the people working on it are good at building automation. The people working on cubical are much more interested in foundations. We need people who are interested in automating cubical; once we get that, the proof assistant that arises from it will be better than anything we have ever seen.
But if that is too hard to read, I recommend telling Anders directly when you get confused. He is open to improving the notes.
In most cases it is not possible to have an objective criterion for deciding which is the best choice, so the choice remains based on personal preferences.
For example, I could never accept the idea that set theory can be considered to belong to the foundations of mathematics.
I have always believed that it is more convenient to view sets not as primitives, but as equivalence classes of ordered sequences, which in turn are constructed from ordered pairs, which are a primitive concept.
So instead of using set theory as a base, I believe it to be more convenient to start from some primitives that include some of the concepts and operations on which LISP was also based, plus some definitions, which normally are introduced using sets, modified to use ordered sequences instead.
All the set theory (and the number theory) can be constructed from these alternative primitive concepts.
In dependent type theory if you've proved "A implies B" you can extract from that proof a program that takes an argument of type A and always halts, returning a value of type B. Moreover if you prove some property about proofs that A implies B, you've also proved the corresponding proposition about programs.
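(A small Lean 4 illustration of this correspondence, as a sketch: the same term shape is both a proof and a program, depending on the universe it lives in.)

```lean
-- At the Prop level, a proof of "A implies B" is a function on proofs,
-- so transitivity of implication is just composition:
theorem impTrans {A B C : Prop} (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)

-- The identical term at the Type level is an executable program:
def compose {A B C : Type} (f : A → B) (g : B → C) : A → C :=
  fun a => g (f a)

#eval compose (fun n : Nat => n + 1) (fun n => n * 2) 3  -- 8
```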
This means your proofs get to deal with programs directly, rather than having to formalize them as something like Turing machines whose tapes are set-theoretic encodings of lists of integers. It's excruciatingly painful to write non-hand-wavy proofs of anything that way. Hand-wavy proofs that can't be machine-checked aren't so bad of course; complexity theory folks have been doing that for decades.
If you aren't proving things about programs or homotopy then there really aren't any obvious reasons to prefer dependent type theory. I say this as somebody who loves programming with dependent types. But I'm being honest here, and I wish more people in the mechanized mathematics world would be honest about this.
The OP gives a reason, and it's the same reason programs use types. Untyped proofs are usually wrong, like untyped programs.
A. Church, A Formulation of the Simple Theory of Types. https://pdfs.semanticscholar.org/28bf/123690205ae5bbd9f8c84b...
Anyone know how this relates to Lewis Carroll?