
Proofs are Programs – 19th Century Logic and 21st Century Computing (2000) [pdf] - michaelsbradley
http://homepages.inf.ed.ac.uk/wadler/papers/frege/frege.pdf
======
tpetricek
People like to see logic and computing in this way and the author is a great
storyteller. But it is a somewhat misleading perspective that is kind of
similar to "Whig historiography"
([https://en.wikipedia.org/wiki/Whig_history](https://en.wikipedia.org/wiki/Whig_history)):

> _Whig history is an approach to historiography that presents the past as an
> inevitable progression towards ever greater liberty and enlightenment,
> culminating in modern forms of liberal democracy and constitutional
> monarchy._

Just replace the "greater liberty" bit with what computer scientists (and the
author) argue is the right way of doing computer science today.

I'd love to read a historical treatment of the topic that is written by
someone who is a historian rather than a computer scientist twisting history to
support their world view.

~~~
tikhonj
That's not a charitable interpretation ("twisting history to support their
world view"), and your comment seems to be conflating the language-logic view
_itself_ with the _history_ presented here. The relationship between programs
and logic is compelling for many reasons and can stand independent of how the
two happened to be developed; my take is that the historical story here—which
is definitely too pat—is presented more as an illustration of the idea than
anything else.

This is no different than how math textbooks present the history of any
particular mathematical abstraction or theorem: they don't go in depth on all
the false starts and dead ends people explored before making real progress.
That's certainly something interesting but it has no place—and no room—in
textbooks dedicated to _explaining_ abstractions. The simplified story of how
something _could_ have been developed is, I believe, more relevant to
understanding the importance of an idea than the way it _was_ actually
developed. The real history is full of noise, a function of the person who
first happened upon the idea. It's not hard to imagine thousands of alternate
worlds where the exact development of an idea was slightly different while still
reflecting the same fundamentals—those differences end up as irrelevant
details of chance.

~~~
jessriedel
> This is no different than how math textbooks present the history of any
> particular mathematical abstraction or theorem: they don't go in depth on
> all the false starts and dead ends people explored before making real
> progress. That's certainly something interesting but it has no place—and no
> room—in textbooks dedicated to explaining abstractions. The simplified story
> of how something could have been developed is, I believe, more relevant to
> understanding the importance of an idea than the way it was actually
> developed.

Unfortunately, textbooks rarely do even this. That is, they don't supply
nearly enough justification/motivation for the complicated definition they
posit and the structures they develop. For the canonical (and accessible) bit
of philosophy of mathematics on this, I highly recommend Lakatos’ "Proofs and
Refutations" [1]. As a demonstration, it gives a very non-historical and
exhaustive explanation for the definition of an "honest" polyhedron with
respect to Euler characteristics [2], and is liberally footnoted with the
actual history. Unfortunately, textbooks have not improved much since it was
written in 1963.

[1]:
[https://math.berkeley.edu/~kpmann/Lakatos.pdf](https://math.berkeley.edu/~kpmann/Lakatos.pdf)
[2]:
[https://en.wikipedia.org/wiki/Euler_characteristic](https://en.wikipedia.org/wiki/Euler_characteristic)

~~~
tikhonj
Ah, that's a fair point. Most math textbooks are not that great
pedagogically—perhaps I was just projecting how I think mathematical ideas
_should_ be introduced :).

I'm definitely trying to do this myself: whenever I teach an abstract concept
(more from PL than math), I've started to explicitly note both the formal
definition _and_ why it's interesting or relevant. Many explanations of
abstract concepts I run into forget one or the other of these components, and
it never ends well...

------
agentultra
I love Wadler's papers. He did one on Monads that was approachable and
revealing for me. I also love logic and the predicate calculus.

Highly recommend reading this one. :)

~~~
michaelsbradley
Which paper?

~~~
agentultra
[http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/ba...](http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf)

------
jwtadvice
Anyone really familiar with this: are non-constructive proofs, things like
Cantor Diagonalization or Erdős's Probabilistic Method, programs? Are they
just non-terminating programs?

I can see Cantor Diagonalization as a non-terminating program really easily,
but I don't understand what the Probabilistic Method looks like as a program.

~~~
black_knight
Cantor's diagonalisation is certainly constructive. Given a sequence of
sequences of binary digits, it gives a way of constructing a sequence not
occurring in the sequence of sequences. The resulting sequence is as
computable as the input.
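
For instance (a minimal sketch of my own, not the commenter's), representing a
sequence of binary sequences as a function of two indices:

    -- Flip the diagonal: the result differs from the n-th input sequence at
    -- position n, and it is exactly as computable as the input.
    diagonal :: (Integer -> Integer -> Bool) -> (Integer -> Bool)
    diagonal seqs n = not (seqs n n)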

But to answer your question: Many mathematicians regard their (classical)
proofs as algorithms. They just allow themselves to use an oracle call for
_any_ well-formed question:

Say I want to prove B. Then, at some point in my proof, I formulate a sentence
A, and make a call to the oracle. My proof/algorithm then branches: in one
branch I assume A is true, in the other I assume A is false. If I can continue
such that each branch gives B, then my proof works (classically). [0]

The constructive critique is that such an algorithm cannot be executed by
humans, or Turing machines for that matter.

Also there is a whole continuum of stronger and stronger oracles below this
ultimate oracle allowed by classical logic — which is an interesting continuum
to study. Often one can locate an exact class of oracles needed for a given
classical proof to work out.

[0]: This is the «Church encoding» version of the law of excluded middle: For
all B, we have (A → B) → (¬A → B) → B, which is equivalent to the usual
formulation (A ∨ ¬A).
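
A minimal Haskell rendering of that footnote (my own sketch, writing "not A" as
a -> r, with r standing in for falsity):

    {-# LANGUAGE RankNTypes #-}

    -- The classical oracle is exactly a (postulated) value of this type;
    -- no total constructive program inhabits it for arbitrary a, b and r.
    type ExcludedMiddle = forall a b r. (a -> b) -> ((a -> r) -> b) -> b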

~~~
jwtadvice
Gotcha. So basically the "axiom" of the law of excluded middle would become an
oracle, and then with a call to that oracle a problem requiring the law of
excluded middle becomes a proof. Basically - the axiomatic decisions made by
mathematicians can be represented as programs by turning them into oracles.

In this case, with the axiom of choice implemented as an oracle, the program
for Cantor's Diagonalization can be written.

This makes a kind of ultimate sense. Rice's Theorem shows how a program with
an oracle about the behavior of a Turing Machine is contradictory, and this
implies that "even with a finite set of axioms" there are no proofs about
general classes of Turing Machines and thus there are statements independent
of mathematics (a kind of Gödelian statement).

Is that right?

If so, that's profoundly simple and also incredible to me. (Hopefully I
haven't excited myself into thinking I "got it".)

Could you help me understand what Erdős's Probabilistic Method looks like as a
program? Which oracles might be required and how you would call them?

~~~
kmill
I was thinking about LEM today. Without the oracle, you can prove that the
double negation of the law of the excluded middle is true, which can be
represented with the following type:

    negneg_lem :: ((Either a (a -> r)) -> r) -> r

I'm using a polymorphic r to represent the void type, and I'm using the idea
that "not a" is the same as "a -> r".

The proof of negneg_lem is just

    negneg_lem f = (f . Right) (f . Left)

No oracle there. If you had an oracle

    oracle :: ((a -> r) -> r) -> a

then you would get the normal LEM as

    lem = oracle negneg_lem

An interesting thing about the type (a -> r) -> r, i.e., double negation, is
that it is the type for continuation-passing style. This suggests that a
classical proof is something which is allowed to do backtracking search, but
I'm still trying to understand exactly how that is.

(Also, the oracle is somewhat absurd. Really, if you have a function which
takes functions, you can produce an element of the function's domain? It might
almost make more sense if r was a void type.)

~~~
vilhelm_s
Yes, classical propositional logic can be given a constructive interpretation
using call-with-current-continuation. This was first worked out by Timothy
Griffin [1], and there is a cute story by Philip Wadler about making a deal
with the devil [2, section 4].

[1]
[http://www.cl.cam.ac.uk/~tgg22/publications/popl90.pdf](http://www.cl.cam.ac.uk/~tgg22/publications/popl90.pdf)
[2]
[http://homepages.inf.ed.ac.uk/wadler/papers/dual/dual.pdf](http://homepages.inf.ed.ac.uk/wadler/papers/dual/dual.pdf)
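
A small illustration of the connection (my own sketch using Control.Monad.Cont
from the standard mtl package, not Griffin's original system): in the
continuation monad, callCC has exactly the type of Peirce's law,
((A → B) → A) → A, which, like excluded middle, holds classically but not
intuitionistically.

    import Control.Monad.Cont

    -- Peirce's law, inhabited by capturing the current continuation.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC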

------
cr0sh
This may be slightly OT - but the one thing I have always wondered about the
19th century is why Babbage didn't go down the route of Boolean logic/math for
computation. Furthermore, why didn't he incorporate and use relays (from
telegraphy)? For that matter, why didn't Hollerith?

Ultimately, to me it seems all connected to this "fixed" idea (that is,
thinking "inside the box") that computation was about numbers - that there was
only calculation and tabulation - counting, iow.

While Boole came up with the math/logic - few pursued the idea of making
machines to work with it (a few -did- exist, but were obscure, iirc);
embodying logic using electrical circuits would have to wait. Then there was
the "revelation" of computation being about symbols, not numbers - which
opened everything up.

I'm just kinda rambling here; my apologies. And, hindsight is always easier
than foresight, of course. It just bugs me, and makes me wonder what "obvious"
ideas we are missing today, that will change the world, that we won't know
until we look back. I guess I wonder why many of the most world changing ideas
(in the long run) are often very simple, and why did it take so long for
humanity to see/discover them? I wonder if anyone has studied this - or if it
is something that can be studied...?

~~~
jdmichal
I would think that computers were being made to be _useful_ first and
foremost. And binary computing was not really useful for logic and number
crunching until a number of advancements had been made.

Babbage analytical engine - 1833

Boolean algebra - 1847

two's complement - 1945

~~~
ZenoArrow
Your list is missing a very important advancement from 1937:

[https://en.m.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay...](https://en.m.wikipedia.org/wiki/A_Symbolic_Analysis_of_Relay_and_Switching_Circuits)

------
pron
That's certainly a very language-and-logic perspective on things. But one of
the reasons Church missed the general concept of computation whereas Turing
didn't was precisely because Church was stuck in the formalism of logic, and
couldn't recognize that computation extends well beyond it. So proofs are
programs, but so are, say, metabolic networks, neuron networks, and just about
_any_ discrete dynamical system. But are metabolic networks _proofs_? That's
not so clear, and therefore it's not so clear that there is a universal
equivalence here.

~~~
paulajohnson
It's the program itself that is the proof, not the computer running the
program.

In a sufficiently powerful type system you can express any proposition as a
function type. A proof of that proposition is also a program. The proof
checker is a type checker. If your proof is invalid, your program won't type-
check.

Coq ([https://coq.inria.fr/](https://coq.inria.fr/)) is a real-world example
of exactly this.
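
A tiny Haskell flavour of the same idea (my own sketch, not Coq): the type is
the proposition, the definition is its proof, and the compiler's type checker
is the proof checker.

    -- "A and B implies B and A"
    swapP :: (a, b) -> (b, a)
    swapP (x, y) = (y, x)

    -- "If A implies B and B implies C, then A implies C"
    compose :: (a -> b) -> (b -> c) -> (a -> c)
    compose f g = g . f

A bogus definition such as swapP (x, y) = (y, y) would be rejected by the type
checker, which is the "invalid proof won't type-check" part.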

~~~
misja111
But what if the program that runs the proof will never terminate? For example
I could write a 'proof' of some property of the natural numbers by just
iterating through all of them and validating the property each time.

Isn't a proof only valid if it can be validated within a reasonable amount of
time, either by some human or by a computer?

~~~
hiker
Quoting from Wikipedia: "Because of the possibility of writing non-terminating
programs, Turing-complete models of computation (such as languages with
arbitrary recursive functions) must be interpreted with care, as naive
application of the correspondence leads to an inconsistent logic. The best way
of dealing with arbitrary computation from a logical point of view is still an
actively debated research question, but one popular approach is based on using
monads to segregate provably terminating from potentially non-terminating code
(an approach that also generalizes to much richer models of computation,[6]
and is itself related to modal logic by a natural extension of the
Curry–Howard isomorphism[ext 1]). A more radical approach, advocated by total
functional programming, is to eliminate unrestricted recursion (and forgo
Turing completeness, although still retaining high computational complexity),
using more controlled corecursion wherever non-terminating behavior is
actually desired."
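
A minimal sketch (my own illustration, not from the quoted article) of the
"monads to segregate non-termination" idea in Haskell: possibly non-terminating
computations carry a marker in their type, while the unmarked fragment stays
total.

    -- A "partiality" / delay monad: a value now, or after one more step.
    data Delay a = Now a | Later (Delay a)

    instance Functor Delay where
      fmap f (Now x)   = Now (f x)
      fmap f (Later d) = Later (fmap f d)

    instance Applicative Delay where
      pure = Now
      Now f   <*> d = fmap f d
      Later f <*> d = Later (f <*> d)

    instance Monad Delay where
      Now x   >>= k = k x
      Later d >>= k = Later (d >>= k)

    -- A search that may never finish; its type says so honestly.
    findFrom :: (Integer -> Bool) -> Integer -> Delay Integer
    findFrom p n
      | p n       = Now n
      | otherwise = Later (findFrom p (n + 1))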

------
27182818284
I had math professors that would express this sentiment in undergrad. It is
very true.

However, problem sets generally aren't graded like they're programs, haha.

------
imode
curry-howard and Wadler's "Propositions as Types" talk at StrangeLoop* opened
my eyes pretty early on to the sweet, sweet utility of logic in my every day
programming. it sounds odd to start with but I never realized why these things
mattered prior to viewing this talk.

props to wadler. best ~40 minutes I've ever spent.

*[https://www.youtube.com/watch?v=IOiZatlZtGU](https://www.youtube.com/watch?v=IOiZatlZtGU)

~~~
protomikron
Thank you, that was a delightful talk, I did not know Wadler was so
enthusiastic (but I have to say, most logicians and theoretical computer
scientists are).

------
protomikron
Does there exist research that uses results from computation theory and
applies them to physics? Admittedly I am not sure what exactly I mean by that
or what I am searching for, but I believe there is some deep relationship
between limits in computation and limits on what one can say about the
physical universe (which could be the same).

~~~
hiker
"Physics, Topology, Logic and Computation: A Rosetta Stone"[1] might be a
starting point.

[1]
[http://math.ucr.edu/home/baez/rosetta.pdf](http://math.ucr.edu/home/baez/rosetta.pdf)

------
mrcactu5
I am having my doubts these days.

I would like to read my programs as proofs. Then I ask myself: what is the
Amazon.com front page proving?

We talk about proof writing in Haskell or Agda. What about Python? Can we
prove things there?

And finally I think about the statements I would actually like to prove. Much
of mathematics requires second-order logic or beyond, such as the Fundamental
Theorem of Calculus.

So for me the triad between Math and Logic and Computer Science is less than
air-tight and will remain that way.

~~~
User23
The code generating the Amazon.com front page is a proof relating some
properties of its output to its input, and that it halts. However, it's also basically
incomprehensible, so I doubt anyone alive could actually tell you precisely
what it's proving.

Your ability to prove something in any language is dependent on having a
formal semantics for that language, or at least for a cognitively manageable
subset that you're willing to restrict yourself to. The language semantics and
memory model serve as axioms.

Given how American programmers are trained, it's not realistic to expect them
to be able to think of programs in a logical semantic way. There is something
of an art to it and it's hard to learn a new way of thinking when you're
satisfied with a way you already have.

I agree that you can have an enjoyable hobby or make a living programming
without really understanding what you're doing beyond an intuitive level, and
that's totally fine. Computing time is cheap and most applications don't cause
any great harm when they behave in unintended ways. However, the isomorphism
between predicate calculus and computer programming isn't subjective. Programs
really are proofs; it's just that some of them are very sloppy.

------
didibus
Write a program to prove another one. It's pretty cool in concept, but is it
useful in practice?

~~~
ufo
This isn't necessarily about proving other programs. It is about showing that
at a very fundamental level, formal logic (as studied by logicians) and typed
programming languages (as studied by computer scientists) are actually the
same thing, but written in different ways.

The basic idea is that every typed programming language is analogous to a
mathematical logic system and that:

* types are analogous to logic statements (theorems)
* a program's source code is equivalent to a proof of the theorem
* type-checking a program is equivalent to checking if a proof is correct
* executing a program is equivalent to simplifying a proof
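
In Haskell terms, a minimal sketch of the first three points (my illustration,
not the commenter's):

    -- Theorem ("statement"): if A implies B and A holds, then B holds.
    -- The type is the statement; the definition below is its proof.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- GHC accepting this module is the proof check; evaluating modusPonens
    -- at concrete arguments corresponds to simplifying the proof.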

If we restrict ourselves to the sort of type systems in mainstream languages,
the logic statements that the types can express are somewhat limited and not
very interesting. The proofs as programs analogy really starts getting
interesting when you have a very powerful type system...

One place where you see these powerful type systems is in the proof assistants
for people who want to formalize mathematics. It turns out that writing proofs
in these proof assistants is surprisingly similar to programming in a typed
functional language! It is kind of interesting because all that you care about
is that the program you write (the proof) type checks (is valid), and you
don't actually ever run it.

Coming closer to the software development side of things, some of these proof
assistants can be used to produce formally verified software. Basically, what
they do is that instead of throwing away the programs (proofs) that you write,
they compile them down to executable code.

This is not something that is very applicable to most software developers,
though, since formal verification is a niche thing. It also doesn't
help that these tools are currently super clunky to use and have a steep
learning curve involving lots of difficult math.

~~~
didibus
Hum, admittedly, most of this is past my knowledge graph, since I know nothing
of mathematical logic.

Are you saying the practical relevancy here is actually to logicians? In that
it means they can now more easily come up with proofs for their theorems by
simply writing a program instead of whatever more complicated paper proof they
used to have to come up with?

And that for software, it's only helpful in special cases where you need to
guarantee a small piece of logic is 100% correct?

------
anentropic
shame about the misplaced apostrophes

