The weird and wonderful world of constructive mathematics (2017) [pdf] (sandiego.edu)
164 points by lainon 6 days ago | 68 comments

There are a lot of folks here asking for a bit more. Here are two versions of the same well-done talk, from Andrej Bauer: a video https://www.youtube.com/watch?v=21qPOReu4FI, and a paper http://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-... (sorry, the paper link may not work for everyone, but the video contains basically the same content.)

The remarks on computable and continuous functions can also be thought about as follows.

The law of the excluded middle is only one non-constructive aspect of classical mathematics. Dropping it leaves you with what is usually called "intuitionistic logic", and the characterization of constructive mathematics as broader than classical mathematics is literally correct in that every intuitionistic theorem implies a classical theorem, and every classical theorem is equivalent to an intuitionistic theorem.

Another non-constructive aspect of classical mathematics, which remains non-constructive in intuitionistic logic, is axioms whose conclusion involves the existential quantifier. In both classical and intuitionistic logic, the inference rule that governs how the existential quantifier may be used looks as follows.

> Asserting that "the existence of x such that phi(x) holds entails psi", where x is a variable bound by the existential quantifier and not occurring in psi, is equivalent to asserting that "phi(y) entails psi", where y is a free variable not occurring in psi.

This inference rule (the assembly language of logic) may seem weird, so an example of its use might help.

> "the existence of x such that phi(x) holds entails the existence of x such that phi(x) holds" by tautological inference, and by the existential-quantifier rule we then have that "phi(y) entails the existence of x such that phi(x) holds", where y is a free variable not appearing in "there exists x such that phi(x) holds".
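For readers who find the quoted rule easier to digest as code, here is a sketch of existential elimination and introduction in Lean 4 syntax (the names α, φ, ψ, t are placeholders of my own, not from the talk):

```lean
-- ∃-elimination: from "∃ x, φ x" and "for every y, φ y entails ψ",
-- conclude ψ.  Note that ψ may not mention the bound variable.
example {α : Type} (φ : α → Prop) (ψ : Prop)
    (h : ∃ x, φ x) (step : ∀ y, φ y → ψ) : ψ :=
  h.elim step

-- ∃-introduction: from φ t for a specific term t, conclude ∃ x, φ x.
-- Constructively, exhibiting such a t is the only way to build an existential.
example {α : Type} (φ : α → Prop) (t : α) (ht : φ t) : ∃ x, φ x :=
  ⟨t, ht⟩
```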

When doing classical or intuitionistic mathematics, you would usually use axioms of the form "psi entails that there exists x such that phi(x) holds". But such axioms are non-constructive because the inference rule does not allow you to start with "there exists x such that phi(x) holds" and somehow construct an actual term t such that phi(t) holds.

Now what Mike says in his slides is that it is consistent to modify intuitionistic mathematics into a "fully constructive mathematics" by strengthening the existential quantifier so that "exists x such that phi(x) holds" would imply that we can (computably) come up with a specific term t such that phi(t) holds.

As a consequence, any statement in fully constructive mathematics involving the existential quantifier as a conclusion is a stronger statement than its counterpart in classical mathematics, which is roughly why all functions end up being continuous: the definition of the notion of a function itself requires that the function be computable, i.e. that its values exist in the sense of the strengthened existential quantifier.

> every classical theorem is equivalent to an intuitionistic theorem.

I understand that every intuitionistic theorem implies a classical one: give the same proof, and see what theorem you end up with in the classical world. But how is every classical theorem equivalent to an intuitionistic one? Can we construct the equivalent intuitionistic theorem in a similar way from any classical one, or is this a nonconstructive proof?

The negation of the negation not being equal to what you started with always reminds me of the Hegelian dialectic, or dialectic (in the philosophical sense) in general.

> In constructive mathematics, there can also be numbers d such that d^2 = 0 but not necessarily d = 0.

In the footnote, it mentions that d is not a real number (which seems right). But I find this very confusing -- this seems to be using a deliberately non-constructible number to facilitate proofs in constructive mathematics?

Regardless of whether this meets a formal criterion for being a tool in constructive mathematics, it does seem to be in direct opposition to the notion of intuitionist mathematics, which has as a fundamental goal to dispense with axiom-of-choice-style proofs of things that are clearly ridiculous (like Banach-Tarski).

We can explicitly construct such numbers using 2x2 matrices. These are called "dual numbers"[0], and the matrix representation is essentially analogous to the one for complex numbers. So if you accept the complex unit, i, as a reasonable algebraic thing, then the "dual unit" shouldn't be too much of a stretch.
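To make this concrete, here is a minimal sketch (my own illustration, not from the comment) of the 2x2-matrix representation: the matrix e below is nonzero yet squares to the zero matrix, which is exactly the behaviour of the dual unit.

```python
import numpy as np

# The identity matrix and the "dual unit" e: e != 0, but e @ e = 0.
I = np.array([[1, 0], [0, 1]])
e = np.array([[0, 1], [0, 0]])

def dual(a, b):
    """Matrix representation of the dual number a + b*e."""
    return a * I + b * e

# e squares to the zero matrix even though e itself is nonzero:
assert (e @ e == 0).all() and (e != 0).any()

# Multiplication matches (a1 + b1 e)(a2 + b2 e) = a1 a2 + (a1 b2 + a2 b1) e:
assert (dual(2, 3) @ dual(5, 7) == dual(2 * 5, 2 * 7 + 5 * 3)).all()
```

Compare the complex case, where i is represented by [[0, -1], [1, 0]] and squares to minus the identity instead of zero.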

Anyway, all that is completely fine in standard foundations. The LEM contradictions only come in when we impose other axioms. For example, in Anders Kock's Synthetic Differential Geometry[1], we essentially start out by assuming that every function is differentiable.

This seems sorta crazy at first blush, and it does give contradictions, but the interesting bit is that those problems hinge solely on LEM. Thus, in a way, we're left with choosing between universal differentiability or LEM. Kock explores what happens if we pick the former, and it turns out to be pretty deep, interesting and useful.



I always see this type of slide template used in mathematics. What do people use to build these slides? Is it a specific PowerPoint template?

It’s called Beamer, a LaTeX package that generates a PDF slideshow rather than an academic-looking paper.

Beamer LaTeX. Here is a nifty overview: https://hartwork.org/beamer-theme-matrix/

> we can assume powerful axioms that would classically be inconsistent

Where could I find some explicit examples of this being useful?

One idea I had is that this might actually be very useful for proving things about cryptography. Generally, if you want to formally prove something is cryptographically sound, you can only do so in terms of the asymptotics of families of functions, rather than proving anything specific regarding, say, SHA256. But if you could prove constructively a statement like (an adversary can break program P) -> (exists x,y: x != y and SHA256(x)=SHA256(y)) it seems that would serve as a proof that P is secure. Statements of this form are trivially provable classically since (exists x,y: x != y and SHA256(x)=SHA256(y)) can be proven on its own by the pigeonhole principle, regardless of the correctness of P, but it seems that in order to prove this constructively you would actually have to find a collision in SHA256.

No. If a set A is finite, its elements are enumerable, and some property P is decidable, then (\forall x \in A) P(x) \vee ~P(x) is true even intuitionistically.

A list of examples of "anti-classical" axioms:

- All functions are continuous, and all sets automatically carry the "right" topology. This means, for instance, that constructing the real numbers in the standard ways (Cauchy sequences or Dedekind cuts) also endows them with the desired topology (the Euclidean topology in this case).

- All functions are computable. This axiom is called "Church's thesis". It's nice because it shows that a lot of computability theory is similar to classical set theory, but with the addition of Church's thesis and the absence of LEM.

- Some numbers in a set called the "smooth reals" square to 0... without being zero. This seems to create a universe in which every geometric object is "infinitesimally straight". I haven't fully grasped why this works. It's like the dual numbers, but somehow made all-pervasive.

> Some numbers in a set called the "smooth reals" square to 0... without being zero. This seems to create a universe in which every geometric object is "infinitesimally straight". I haven't fully grasped why this works. It's like the dual numbers, but somehow made all-pervasive.

It's not the existence of the set of "smooth reals" that square to 0 without being 0 that creates the universe in which every geometric object is infinitesimally straight.

Rather, for any ring of "smooth reals" R, the set D = {d in R : d^2 = 0} allows us to construct a "dual ring" as follows. First, take the map RxR -> R^D sending a pair (a,b) of "smooth reals" to the linear function D->R given by d|->a+bd. Then induce a ring structure on RxR that would make the map into a ring homomorphism when R^D, the set of R-valued functions on D, is given the pointwise ring structure.

Explicitly, (a_1,b_1)+(a_2,b_2)=(a_1+a_2,b_1+b_2) since (a_1+b_1d)+(a_2+b_2d)=(a_1+a_2)+(b_1+b_2)d, and (a_1,b_1)(a_2,b_2)=(a_1a_2, a_1b_2+a_2b_1) since (a_1+b_1d)(a_2+b_2d)=a_1a_2+(a_1b_2+a_2b_1)d+d^2b_1b_2 and d^2=0.

Thus, we have the structure of a ring on the set RxR, which is traditionally denoted by R[e], where e (epsilon) is an indeterminate assumed to square to zero, i.e. a hypothetical order-2 infinitesimal.

Now, Axiom 1 of synthetic differential geometry asserts that the map sending the ring of dual numbers to the ring of R-valued functions on D is a bijection. In other words, it restricts the domain of discourse so that, when it comes to functions on order-2 infinitesimals, we may only ever talk about linear functions, and it also expands the discourse by requiring that we have distinct linear functions, one for every (y-intercept, slope) pair.

It is the second bit that forces every geometric object to admit "infinitesimally straight" approximations, because every geometric object is essentially contained in a copy of R^n (n-tuples of "smooth real numbers"), and so it is forced to have non-trivial linear functions from D.


One reference for the relationship to the dual numbers is the first few pages of Anders Kock's book Synthetic Differential Geometry, available from his web-site at home.imf.au.dk/kock/sdg99.pdf. Another, more readable, reference is Mike Shulman's notes available at http://home.sandiego.edu/~shulman/papers/sdg-pizza-seminar.p....

Certainly it is _true_, since it is decidable and classically true, but I think that the relevant question is: can you come up with a proof without actually finding a collision in SHA256? In essence, what I am asking is whether there are axioms we can add in an intuitionistic system that we know to be inconsistent (but that we can't prove inconsistent without doing things that we believe are hard cryptographically), such that the resulting system corresponds to a notion of cryptographic security.

I think this wouldn't work, because you can prove intuitionistically that every finite set is "decidable" using the following ingredients:

- An upper bound N on the size of the set.

- A surjection f:[1,N] -> S where [1,N] is the set of integers from 1 to N.

- Proof by induction that every set S where the above two ingredients can be found is decidable. The induction is on N. This can be carried out internally in any intuitionistic system.

In the case of SHA-256, there is a finite bound on the number of possible 256-bit outputs (namely 2^256), while there are more than 2^256 possible inputs, and a surjection f which maps integers to the corresponding bitstrings. Then the general lemma I described above applies, and you can prove that there must be a collision.
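To make the pigeonhole argument concrete, here is a toy sketch (my own, not from the thread): truncating SHA-256 to a single output byte shrinks the output space to 256 values, so enumerating 257 distinct inputs is guaranteed to produce, and actually exhibit, a collision.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 truncated to its first byte: only 256 possible outputs."""
    return hashlib.sha256(data).digest()[:1]

def find_collision():
    """Pigeonhole made constructive: among 257 distinct inputs mapping
    into 256 possible digests, two must collide, and the enumeration
    finds a concrete colliding pair."""
    seen = {}
    for i in range(257):
        x = i.to_bytes(2, "big")
        d = h(x)
        if d in seen:
            return seen[d], x
        seen[d] = x
    raise AssertionError("unreachable by the pigeonhole principle")

x, y = find_collision()
assert x != y and h(x) == h(y)
```

For full SHA-256 the same argument proves a collision exists, but the enumeration is astronomically infeasible, which is exactly the gap between the classical existence proof and a constructive witness.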

It doesn't entirely prove you wrong. (I need to do some real work). But almost certainly what you describe can't work. Some sort of induction will probably thwart what you're trying to do.

You have far more freedom in modeling intuitionistic logic than you think. :)

In this case, the point is that in a logic for cryptography, you would have a type "S" of "bit strings of arbitrary but unknown length" which is restricted in such a way that you cannot eliminate into discrete types (such as the natural numbers). In particular, there would be no way of proving "x != y" for x, y in S at all unless you assume that there was some function which established this.

One way to get a model for such a logic is to interpret types as (reflexive) graphs and functions as graph homomorphisms. The resulting category is a topos and hence gives you a model of type theory.

There is such a wealth of knowledge about building models for intuitionistic logic that it is difficult to say what can and cannot be done without a thorough literature review.

This makes sense, thanks

> - All functions are continuous, and all sets automatically carry the "right" topology. This means, for instance, that constructing the Real numbers in the standard ways (Cauchy sequences or Dedekind cuts) also endows them with the desired topology (the Euclidean topology in this case).

I don’t understand how it is possible to have such an axiom. Why can you not construct the function $f : x \mapsto I(x > 2)$ where $I$ is 1 for true, 0 for false. And then prove that the preimage of $(-1,0.5)$ is $(-\infty,2]$ which is not open?

Technically, f is not well defined. Or rather, indicator functions aren't well defined.

In your case it is not provable that (x > 2 or x <= 2); that would require exactly the law of the excluded middle.

This is explained in one of the later slides.

I’ll accept that indicator functions are not well defined in general however I don’t understand why f should be ill-defined. As I understand it, negation is still allowed in intuitionist mathematics, but you don’t get that (not not X -> X). You still get that (not not not X -> not X), and that (X and not X -> bot). Furthermore it is only a general law of the excluded middle that is not allowed and it is quite possible to prove that e.g. (x : R -> (x > 2 and not x < 2 and not x = 2) or (x <= 2 and not x > 2)).

Does it break down somewhere in the definition of continuity?

If `x` is a representation of a real number, how do you compute if it is above or below 2? Suppose you start computing its digits, and you get 2.0000000, you don't know if you will get a non-0 digit later if you keep computing.
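A toy sketch of the problem (hypothetical code of my own, not from the slides): given x as a stream of digits after "2.", a positive digit settles x > 2, but a run of zeros never settles anything, because the answer could still depend on digits not yet inspected.

```python
def compare_with_2(digits, max_digits=50):
    """Try to decide whether x > 2 for x = 2.d1 d2 d3 ..., where digits(n)
    yields the n-th digit after the point.  A positive digit settles x > 2,
    but no finite run of zeros distinguishes x = 2 from x > 2."""
    for n in range(max_digits):
        if digits(n) > 0:
            return "x > 2"
    return "undecided"

# x = 2.000... : no finite prefix of the expansion settles the comparison.
assert compare_with_2(lambda n: 0) == "undecided"

# x = 2.0001... : here a finite prefix does settle it.
assert compare_with_2(lambda n: 1 if n == 3 else 0) == "x > 2"
```

Raising max_digits never helps for 2.000...: the procedure can only ever say "undecided", which is the computational content of the failure of trichotomy for constructive reals.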

One "application" (in quotes because honestly for all the fanfare I haven't come across something that was worked out to be usable by non-specialists) is automatic differentiation, see http://blog.sigfpe.com/2006/09/practical-synthetic-different...
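For the curious, forward-mode automatic differentiation with dual numbers can be sketched in a few lines of Python (a minimal illustration of the idea, not the blog post's code):

```python
class Dual:
    """A dual number a + b*e with e**2 == 0; b tracks the derivative."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 e)(a2 + b2 e) = a1 a2 + (a1 b2 + a2 b1) e, since e^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + 1*e; the e-coefficient of the result is f'(x)."""
    return f(Dual(x, 1.0)).b

# d/dx (x^2 + 3x) at x = 5 is 2*5 + 3 = 13
assert derivative(lambda x: x * x + 3 * x, 5.0) == 13.0
```

The derivative falls out of ordinary arithmetic on the "infinitesimal" coefficient, with no symbolic manipulation and no finite-difference approximation error.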

Generally speaking, the ability to extend intuitionistic mathematics by new axioms that are classically inconsistent is a bit of a misnomer. The real power is that these "axioms" allow you to easily and immediately specialize the generic language of intuitionistic mathematics into a domain-specific language for various areas.

So that

1. adding the continuity axiom gives you a domain-specific language which can only describe continuous functions and no others

2. adding the differentiation axiom gives you a domain-specific language called synthetic differential geometry in which you can only talk about differentiable functions

3. adding the computability axiom gives you a domain-specific language in which you can only talk about computable functions.

4. adding the law of the excluded middle gives you the domain-specific language of classical mathematics.

Now, each of the first three can also be interpreted within classical mathematics (which is how we know the extensions are consistent with classical mathematics), but that's analogous to writing an interpreter in C versus merely adding new syntax to an existing language (e.g. how DSLs can be implemented in LISP).

I don't think you'll find much use for it when reasoning about a specific computable hash function because there's e.g. no clear distinction between using the pigeonhole principle and using the properties of the hash function itself.

You could however reason about a 'generic' hash function and show that a particular program can only be broken if someone can find a hash collision. That doesn't really require constructivist reasoning though.

I would like to point out that we don't even have a classical way of proving something to be cryptographically sound, since we don't know whether P != NP actually holds. Perhaps there is a constructive proof of P != NP, and this would actually let you prove the cryptographic strength of something constructively.

And if you are already taking P != NP as an axiom, what difference does it make that constructivism takes out just another axiom that you assume to be true?

I’m a bit confused about the functions always being continuous. Step functions being ill-defined makes sense, but how about modulos? f(x) = x mod 1 should be trivially discontinuous. I should also be able to construct the set of integers Z (using the unit 1 and the successor operator) and identify the largest integer that is smaller than x by intersecting [x-1, x) with Z. Do I need to show the intersection always contains only one element?

Well, x mod 1 is just x minus a series of Heaviside step functions, each of which is non-constructive. The core problem is determining if your real number x is equal to an integer. You might be able to prove that x is close to a particular n, but there may not be a constructive proof of any of x<n, x=n, or x>n. Hence you don't know if x mod 1 should be x-(n-1) or x-n.

I think your definition depends on being able to decide for every real x whether n ∈ [x-1, x), for which you need that n < x. According to the slides that is not generally possible in constructive mathematics.

Well I guess nearly all of the reals must not exist since we can't construct them.

You sound to me like you think you're sarcastically saying something that's so obviously false that it constitutes a devastating argument against the entire idea of constructive mathematics. But whether the reals "exist" depends a lot on your definition of "exist", and is still an ongoing debate for some relevant definitions of "exist", even without bringing constructive mathematics into it. And it's not that hard to draw a perfectly conventional mathematician into an interesting discussion about whether the real numbers deserve to be as fundamental as they are when we can't point to any of them. That is, it isn't just that we can't "construct" them for some definition of construct; we can't even identify them for any definition of "identify".

Let me highlight again that the question isn't whether they are useful, or a mathematically valid construction; there are all kinds of other mathematically valid constructions far stranger than the "reals". The question is whether they deserve such a fundamental place in mathematics and education. I do recall even in high school that truly understanding the real numbers even at that level was definitely a dividing point for the advanced math class; there are some who really got the quirks, and there are some who never really did and just parroted their way through the tests wondering how the question of "numbers" got so complicated... and it was still a very partial and somewhat ad-hoc introduction to the topic ("Dedekind" doesn't come up, etc.) just enough to try to motivate limits.

It's self-evident that for everyday usage all you need are the rationals.

Bingo! From a computational perspective, the non-computable reals are objects that not only require an infinite amount of memory to represent a single one, but also require an infinite amount of memory to even describe an algorithm to produce a single one. How could such an object possibly be constructed?

Can't we just produce a function from Nat -> {0,1,2,...,9}, which in effect can represent reals?

Though addition might not be defined on some of the reals, if we represent them like that.
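A minimal sketch of that representation (my own illustration): a function Nat -> digit does pin down a real, and finite prefixes give rational approximations. The trouble is that digit n of a sum can depend on unboundedly many later digits (the carry), which is the sense in which digitwise addition "might not be defined".

```python
from fractions import Fraction

def third(n: int) -> int:
    """Digit n (after the decimal point) of 1/3 = 0.333..."""
    return 3

def approx(digits, n: int) -> Fraction:
    """Rational approximation of a real in [0, 1) from its first n digits."""
    return sum(Fraction(digits(k), 10 ** (k + 1)) for k in range(n))

# Finite prefixes converge to 1/3:
assert approx(third, 5) == Fraction(33333, 100000)
assert abs(approx(third, 30) - Fraction(1, 3)) < Fraction(1, 10 ** 29)

# But for a digitwise sum such as 1/3 + 2/3, i.e. 0.999... vs 1.000...,
# no finite number of digits ever settles the carry into the integer part.
```

This is one reason proof assistants tend to prefer Cauchy-sequence or interval representations, where approximations come with explicit error bounds, over raw digit streams.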

But proof checkers like Lean/Coq use Cauchy sequences to represent reals. So they are in effect constructible.

You can easily do so (usually not that specific representation) but there "are" more reals in the classical construction than you can represent with computations. Constructive mathematics just lets you say "No, this is all of them."

It never says it's all of them (sort of: LEM would be needed to say that). It just says that's all you can work with.

It's all of them. You can't give an example of a real number that's not computable.

Chaitin's omega number? I can give you an exact but incomplete equation for it, and tell you how it starts. Not every non-computable number is "pretend you could choose a random normal number, somehow".

I'm not sure if this answer is relevant, but might as well mention it.

The real numbers, infinity, etc. are abstractions. Being able to think abstractly is one of the things that make humans different from other animals.

So from that point of view, where the real numbers exist in the same way infinity exists -- they're not really numbers but ideas about numbers. Pi isn't a number but an idea about the circle, which isn't a shape but an idea about shapes. Is that where constructive math takes you? This is all quite fascinating but totally unfamiliar to me.

They can be numbers and abstractions. There's not really any definitive singular notion of what a number is, so it wouldn't be very useful to say something like pi isn't a number. (If you can do arithmetic on it, chances are it is a number, regardless of what else it represents)

Indeed, numbers are abstractions. Even the number ‘5’ is an abstraction, let alone the numbers that cannot even be measured precisely - not even in principle.

>idea about a circle

As if a circle is a real thing. (Circles do not exist, either - or, rather, they ‘exist’ in a very particular sense of the word, as they are themselves abstractions; round things do.)

> As if a circle is a real thing. (Circles do not exist, either - or, rather, they ‘exist’ in a very particular sense of the word, as they are themselves abstractions; round things do.)

Well, yes, exactly. The thing that intrigues me about this discussion is that it seems like mathematicians have a much clearer idea than I do about what the differences between these kinds of abstractions are.

Constructive mathematics reminds me of the state of math before calculus, i.e. Achilles and the tortoise. Such a step back in power and expressiveness is probably a necessary first step before some kind of "closure" technique, like the limit in calculus, can be developed.

Constructive mathematics is the way forward. It's connected to linear logic via Chu spaces [0] (https://arxiv.org/abs/1805.07518). This correspondence is unreal as constructive mathematics / linear logic unlocks the door to several key advantages such as being closer to computation, probability theory, type theory, Lie groups, a generalized topology, optimization, “differentiable Turing machines”, all the quantum shit, game theory etc the list goes on.

[0] https://en.wikipedia.org/wiki/Chu_space

Linear logic is fundamentally about formulating computation as an adversarial relationship between two parties. It's also the foundation of e.g. the Rust ownership system (I know, I know, affine types).

It's also the foundation of the min-max algorithm which in turn is the foundation of Generative adversarial neural networks.

Ironically, this idea isn't exactly new. It's directly related to Dialectics (https://en.wikipedia.org/wiki/Dialectic) which has been present in Western philosophy since Aristotle through Kant, Hegel and Marx (it's a funny story).

> Constructive mathematics is the way forward.

This sweeping assumption leads to my sweeping assumption that you are "a programmer".


I’m just reading it, but I disagree, as would most mathematicians.

Amazing that you are just reading but already know what most mathematicians think about it :)

Realistically and honestly, most mathematicians agree with it. Most mathematicians consider constructivism to be useful AND a subset of normal mathematics.

Also, it intellectually pains me to see proofs of meta theorems in intuitionistic logic use classical logic (not talking about object level proofs).

No offense, but that is really handwavy. In what way does constructive mathematics unlock the door to being closer to all the quantum shit?

I am not sure what the parent meant specifically, but yes, there is quite a lot of work on quantum physics and logic, with regular conferences (https://arxiv.org/html/1701.00242) and some truly fascinating models for intuitionistic logics (https://arxiv.org/abs/0909.3468).

Not being a physicist I have no idea whether this is "the way forward" for quantum physics or anything, but the results that I do understand are non-trivial so there seems to be something there...

Ask specific questions. I have papers backing up most of my claims. Sometimes you need to take a leap of faith, but you don’t need to draw every line in the connect-the-dots puzzle to see that it’s a fucking elephant.

https://arxiv.org/pdf/0805.2859.pdf Page 14. But do a Google search for “constructive mathematics” “quantum” or “linear logic” “quantum”; a lot has been written.

I started teaching myself Racket recently and eventually ended up looking into the innards of the lambda calculus formalism. It got me thinking about what the ramifications of embracing constructivism would be for computability in general. Can you recommend any good introductory literature on constructivism? I'm familiar with the foundational crisis and the whole formalism vs logicism vs constructivism debate but never really dug deeper. Thanks.

I would recommend starting with linear logic to be honest. Logic is very simple to understand, mathematics is hard. Analogously, linear logic is simpler than constructive mathematics.


I know this might not be what you were looking for but to be honest Linear logic is the simplest to understand of all of these.

I’m still struggling with these ideas, but all the signs are pointing towards this being the case.

How is constructive mathematics closer to Lie groups than regular mathematics? From what I read in that presentation, a bunch of basic assumptions in calculus (such as the intermediate value theorem) no longer hold, and the theory of Lie groups is built on manifolds/calculus/real analysis etc.

Lie groups come up very naturally in synthetic differential geometry (SDG) (http://www.fuw.edu.pl/~kostecki/sdg.pdf). In fact, the main advantage of SDG over the classical formulation of differential geometry is that it can be developed in a largely coordinate free fashion and that you can talk natively about differential forms.

That said, there is a design space here and while SDG is close enough to ordinary mathematics that you can explain it to high-school students, it might not be the best fit for developing modern physics. There are several different approaches (e.g., working in the internal language of a "synthetic differential infinity-topos") which seem promising. It's an area of active research and one that is still extraordinarily difficult to get into.

Does this have anything to do with constructive mathematics though? I still don’t see the connection.

These are all models of a constructive type theory/intuitionistic logic. The axiom of excluded middle fails in SDG and in any infinity-topos.

Good question. I don’t have a direct answer, but fundamentally the connection has something to do with adjoints. Google for adjoints together with constructive mathematics and Lie groups.

This particular pair I understand only indirectly: I somewhat understand the relationship between linear logic and Lie groups, and between constructive mathematics and linear logic, and this is a transitive relationship.

Edit: in the presentation they mention “In constructive mathematics, there can also be numbers† d such that d^2 = 0 but not necessarily d = 0.” They are talking about alternativity, one of the three requirements for a Lie algebra https://en.m.wikipedia.org/wiki/Lie_algebra

“Something to do with adjoints” is hopelessly vague, and what do you mean by transitive?

In regular mathematics there can be nonzero numbers whose square is zero (for example, the “dual numbers”). But I don’t see how this has anything to do with constructive vs nonconstructive mathematics.

> hopelessly vague,

You didn’t do the reading so hold your horses with hopelessly.

From the slides: “In constructive mathematics, every function is continuous”.

Sounds like Lie groups to me.

I’m aware of dual numbers; funny that you bring them up, as they correspond to Lie groups. I have been talking about dual quaternions nonstop on HN.

I'm sorry, but he is right.

Adjoints are everywhere [1]. Referencing adjoints is indeed pretty vague. You can't make a case for why things are interesting by quoting a bunch of resources of other people making cases why things are interesting. (I guess you can, but then, why make the argument at all if it is not yours?)

You don't even need funny names to get x^2 = 0: writing the operation additively, x + x = 0 holds for either of the two elements of the group of order 2.

Linear logic is indeed having a mini revival, but so is general category theory. I don't see the particular advantage for people to evangelise certain parts of mathematics, especially if the case being made is not clear.

[1] Not attributed to me.

> I'm sorry, but he is right.

That’s like your opinion, man.

You conveniently ignored the continuity of both constructive mathematics and Lie groups.

Did you read the paper on chu spaces and constructive mathematics? What did it say about adjoints?

There’s a reason why intuitionistic logic comes into studying Lie Groupoids, but I’m not quite sure you understand why. It’s great to see some enthusiasm, but I’d recommend spending a few months working through Barr and Wells before you start publicly advocating like this.

Is there a constructive version of Gleason and Montgomery-Zippin solution of the Hilbert 5th problem? Reference pls if any.

What about it?

Interesting! Do you have a book on the subject to recommend?

Read the wiki on every term you aren't familiar with and then do pairwise google searches for papers containing any two of the words in the set, so e.g. "probability" "Lie groups". If you have specific question I can maybe answer them or provide a link at least.
