> Whig history is an approach to historiography that presents the past as an inevitable progression towards ever greater liberty and enlightenment, culminating in modern forms of liberal democracy and constitutional monarchy.
Just replace the "greater liberty" bit with what computer scientists (and the author) argue is the right way of doing computer science today.
I'd love to read a historical treatment of the topic written by someone who is a historian, rather than a computer scientist twisting history to support their worldview.
This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress. That's certainly something interesting but it has no place—and no room—in textbooks dedicated to explaining abstractions. The simplified story of how something could have been developed is, I believe, more relevant to understanding the importance of an idea than the way it was actually developed. The real history is full of noise, a function of the person who first happened upon the idea. It's not hard to imagine thousands of alternate worlds where the exact development of an idea was slightly different while reflecting the same fundamentals—those differences end up as irrelevant details due to chance.
Unfortunately, textbooks rarely do even this. That is, they don't supply nearly enough justification/motivation for the complicated definitions they posit and the structures they develop. For the canonical (and accessible) bit of philosophy of mathematics on this, I highly recommend Lakatos's "Proofs and Refutations". As a demonstration, it gives a very non-historical and exhaustive explanation for the definition of an "honest" polyhedron with respect to Euler characteristics, and is liberally footnoted with the actual history. Textbooks have not improved much since it was written in 1963.
I'm definitely trying to do this myself: whenever I teach an abstract concept (more from PL than math), I've started to explicitly note both the formal definition and why it's interesting or relevant. Many explanations of abstract concepts I run into forget one of these components or the other, and it never ends well...
That's true, but it is important to point out when the story is told in a specific way to promote a certain ideology. I'm not saying it's wrong to promote an ideology (on the contrary; ideology is often very important) or that it's rare, but readers should be aware that a certain ideology is being promoted. Wadler is trying to promote functional programming by presenting it as a natural result of logic, and in many ways the essence of his story is correct (from other, more neutral accounts that I've read). But what readers may not know is that logic has not been the only motivating force in theoretical computer science (nor the most powerful), that there are other useful analogies between computation and other phenomena, and that what's presented here is just one perspective on computation.
For example, Turing, the father of theoretical computer science, who only gets a brief mention in this account, turned his attention to neural networks, biological computation and possibly even quantum computation (although he died while working on some physics results). Much of the main thrust of theoretical computer science was similarly influenced by information theory and the search for the "physical laws" of computation, which led to complexity theory—something that is nearly entirely missing from the logical treatment.
: Both Wadler and Bob Harper often try to downplay Turing's role for ideological reasons (which, I think, is pretty funny considering the history of computer science), and neglect to mention that Church's relatively narrow perspective caused him to miss something very basic about computation, while Turing, who was opposed to logicism (i.e., reducing math to logic), was therefore able to grasp something much more fundamental about computation than Church. Interestingly, Brouwer, another figure who provides inspiration to proponents of FP, did not fall into this trap; he had the philosophical breadth that led him closer to Turing's more general view of computation than to Church's, by trying, like Turing, to think -- philosophically rather than formally -- about how the mind of a mathematician works regardless of any specific formalism.
: https://mdetlefsen.nd.edu/assets/201037/jf.turing.pdf Some interesting quotes:
> On 1 December 1933, the minutes in the Moral Science Club records: “A.M. Turing read a paper on ‘Mathematics and Logic’. He suggested that a purely logistic view of mathematics was inadequate; and that mathematical propositions possessed a variety of interpretations, of which the logistic was merely one.”
> He was engaged in a program of putting Russell’s new logic into place, critiquing, not only its position or office, but perspectives on mathematics being forwarded by the “purely logistic” views. Turing was insisting that mathematical discourse stand on its own autonomous feet. And he was rejecting the idea that Principia — or any other construal of mathematics as a whole in terms of symbolic logic — offers mathematics a “foundation” of any ultimate or privileged sort, or reduces it to logic. A “purely logistic view” is but one (philosophically tendentious) “interpretation” among others, and is therefore “inadequate”. This is thoroughly consonant with the opening of Wittgenstein’s 1932 Cambridge lectures “Philosophy for Mathematicians” we have quoted above: Principia is simply part of mathematics, and there is nothing wrong with mathematics before its foundations are laid.
> Turing developed an “anthropological” approach to the foundations of logic, influenced by Wittgenstein, in which “common sense” plays a foundational role.
But saying that proofs are programs is a more general statement. Computations can always be written down as programs, and by the Church-Turing thesis we know that we can pick any reasonable computational model (Turing machines, say, or the untyped lambda calculus) and know that we have captured all of them--we don't have to worry about "missing" some extra computations outside the formalism.
The flip side is that without a type system, you can't in general tell which programs are proofs and which are not. But that's as it should be---truth is always an undecidable notion. This means that if we want a foundation for mathematics, we should consider _realizability_, not _typeability/provability_, as the most basic notion.
Bob Harper wrote a blog post about this: https://existentialtype.wordpress.com/2012/08/11/extensional...
(Edit: ok, I see now that you already discussed this with fmap in another thread.)
See the discussion here about realizers vs proofs: https://news.ycombinator.com/item?id=13495547. Perhaps you can shed more light on the subject. Harper's post is interesting, but while it offers a less narrow view than Wadler's by using a different definition of proof (which I don't think deserves the name proof, but never mind), it is still too narrow a definition of computation (and his introduction of types seems to muddy the water and complicate matters further, partly because the use of types necessitates yet another computation, since types are certificates). Computation is a wider notion than constructive mathematics (as every construction is a computation but not every computation is a construction, and two different computations can yield the same construct).
As to the choice of a foundation of mathematics, I think this is largely an aesthetic matter. It is certainly a separate discussion from computation, in the sense that computation does not depend on the choice of mathematical foundation, nor must the choice of mathematical foundation be built around the notion of computation. I like Turing's view that there are no privileged foundations, and that a mathematical formalism should be chosen based on utility, on a case-by-case basis.
For the history of logic, Jean van Heijenoort's annotated anthology, From Frege to Gödel is fascinating.
And, BTW, I think your hunch (about Whig history) is correct.
Side note: an even better example than Whig history might be Marx's historical materialism. (Also an analysis I am sympathetic to but equally fraught with this peril.)
Highly recommend reading this one. :)
I can see Cantor Diagonalization as a non-terminating program really easily, but I don't understand what the Probabilistic Method looks like as a program.
But to answer your question: Many mathematicians regard their (classical) proofs as algorithms. They just allow themselves to use an oracle call for any well-formed question:
Say I want to prove B. Then, at some point in my proof, I formulate a sentence A, and make a call to the oracle. My proof/algorithm then branches: in one branch I assume A is true, in the other I assume A is false. If I can continue such that each branch yields B, then my proof works (classically).
The constructive critique is that such an algorithm cannot be executed by humans, or Turing machines for that matter.
Also there is a whole continuum of stronger and stronger oracles below this ultimate oracle allowed by classical logic — which is an interesting continuum to study. Often one can locate an exact class of oracles needed for a given classical proof to work out.
: This is the "Church encoding" version of the law of excluded middle: for all B, we have (A → B) → (¬A → B) → B, which is equivalent to the usual formulation (A ∨ ¬A).
In this case, with the axiom of choice implemented as an oracle, the program for Cantor's Diagonalization can be written.
This makes a kind of ultimate sense. Rice's Theorem shows that a program with an oracle about the behavior of a Turing Machine is contradictory, and this implies that "even with a finite set of axioms" there are no proofs about general classes of Turing Machines, and thus there are statements independent of mathematics (a kind of Gödelian statement).
Is that right?
If so, that's profoundly simple and also incredible to me. (Hopefully I haven't excited myself into thinking I "got it".)
Could you help me understand what Erdős's Probabilistic Method looks like as a program? Which oracles might be required and how you would call them?
negneg_lem :: ((Either a (a -> r)) -> r) -> r
The proof of negneg_lem is just
negneg_lem f = (f . Right) (f . Left)
oracle :: ((a -> r) -> r) -> a
lem :: Either a (a -> r)
lem = oracle negneg_lem
(Also, the oracle is somewhat absurd. Really, if you have a function which takes functions, you can produce an element of the function's domain? It might almost make more sense if r were a void type.)
I'm still trying to understand what the Probabilistic Method looks like as a program. Any thoughts on that?
A more down-to-earth example of a non-constructive proof would be proofs using non-constructive elements of classical logic like excluded middle (P ∨ ¬P), double negation elimination (¬¬P → P) or Peirce's law (((P → Q) → P) → P). For example, this proof.
There are some similarities between non-constructive logic and non-purely-functional programming. It is possible to see language features such as mutable assignment, exceptions and call-with-current-continuation as analogous to the non-constructive parts of classical logic. These features all make the language depend on the order of evaluation, which makes the program non-constructive.
Anyway, I had a really good reference for this but I can't find it right now :( Hopefully someone else can provide a better link explaining how call/cc is equivalent to Peirce's law.
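In the meantime, here's a minimal hand-rolled sketch of the connection. This is not mtl's actual `Control.Monad.Cont` module, just a stripped-down continuation type with the same shape, so you can see that `callCC`'s type is literally Peirce's law:

```haskell
-- A hand-rolled continuation type (mtl's Cont has the same shape;
-- this just avoids the dependency).
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- callCC's type is Peirce's law ((P -> Q) -> P) -> P, reading
-- "Cont r X" as the (classical) truth of proposition X.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\x -> Cont (\_ -> k x))) k)
```

Invoking the escape continuation jumps straight to the final continuation, e.g. `runCont (callCC (\exit -> exit 5)) id` evaluates to `5`.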
But most everyone here knows Dijkstra, so I'm mostly just replying to thank you for bringing up Peirce. He's wildly under-acknowledged and, more importantly, under-appreciated. Truly an American genius.
Ultimately, to me it seems all connected to this "fixed" idea (that is, thinking "inside the box") that computation was about numbers - that there was only calculation and tabulation - counting, iow.
While Boole came up with the math/logic - few pursued the idea of making machines to work with it (a few -did- exist, but were obscure, iirc); embodying logic using electrical circuits would have to wait. Then there was the "revelation" of computation being about symbols, not numbers - which opened everything up.
I'm just kinda rambling here; my apologies. And, hindsight is always easier than foresight, of course. It just bugs me, and makes me wonder what "obvious" ideas we are missing today, that will change the world, that we won't know until we look back. I guess I wonder why many of the most world changing ideas (in the long run) are often very simple, and why did it take so long for humanity to see/discover them? I wonder if anyone has studied this - of if it is something that can be studied...?
As for Hollerith, he did use relays. The 1890 Census gear was just counters, but he followed up, establishing the Tabulating Machine Company, which became the Computing-Tabulating-Recording company, which became International Business Machines Corporation, which is today's IBM. Tabulators gained more relays and electrical components over the decades, although they still used mechanical adding wheels.
19th century relays were bulky, unreliable, and needed manual adjustment and contact cleaning. With telephony, telegraph, railroad signaling, and IBM efforts, they were made much more reliable and somewhat smaller in the early 20th century.
Babbage analytical engine - 1833
Boolean algebra - 1847
two's complement - 1945
Lovelace was probably the first to think about using it for something other than numerical calculation.
 W. Zhao, Rendering the Illusion of Life: Art, Science and Politics in Automata from the Central European Princely Collections.
 D. Summers-Stay, Machinamenta: The Thousand Year Quest to Build a Creative Machine.
In a similar vein, the next big thing has probably been discovered, it just isn't big yet.
In a sufficiently powerful type system you can express any proposition as a function type. A proof of that proposition is also a program. The proof checker is a type checker. If your proof is invalid, your program won't type-check.
Coq (https://coq.inria.fr/) is a real-world example of exactly this.
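The same idea can be sketched in miniature even in plain Haskell, reading an ordinary polymorphic type as a propositional formula (the function names below are my own illustrations):

```haskell
-- A proof of "A and B implies B and A": swapping a pair.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A proof of "A implies A or B": the left injection.
orIntro :: a -> Either a b
orIntro = Left

-- A proof of "A implies (B implies A)": the constant function.
weaken :: a -> b -> a
weaken x _ = x
```

If you got a proposition wrong — say you claimed `(a, b) -> (b, b)` — no total definition would type-check, which is exactly the proof checker rejecting an invalid proof.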
Isn't a proof only valid if it can be validated within a reasonable amount of time, either by some human or by a computer?
This is part of the problem with proofs-as-programs: it's a beautiful theory, and inspires a lot of useful work, but if taken too literally it's wrong and useless.
Or what about ones with side effects?
It seems to assume away a lot of (what is called) programs in actual practice.
Side effects are a little harder to see proof-theoretically; for some effects there are reasonable stories (exceptions, for example, are a special case of continuations, which have connections to classical logic), but for many I think the story is still murky.
The Curry-Howard correspondence is more of a lens through which to approach PL theory, or an evolving research agenda, than it is a settled account of all programming practice.
For your example of a server, look at the Servant library in Haskell. It allows you to define type-safe routes and links, ensuring that your program exactly matches the API, and that no links are ever broken.
The reason is that "computation" is a very useful concept, but it has taken us ages to define what it means precisely. That didn't stop people from reinventing it in other areas.
Curry-Howard is only one of many isomorphisms between programs and other areas. See "Physics, Topology, Logic and Computation: A Rosetta Stone" and in particular Table 4 on pg. 66. The Curry-Howard correspondence is the last two columns of the table; the other three columns are other "correspondences" in topology, physics and category theory.
p |= A /\ B: if p evaluates to a pair of a proof of A and a proof of B.
p |= A -> B: if p implements a total function mapping proofs of A into proofs of B.
p |= exists n, A: if p evaluates to a pair of a number k and a proof of the statement A where n is replaced by k.
This is, for instance, implemented in NuPRL. It validates all laws of intuitionistic logic, but is also open ended. You can always add new axioms, so long as you can implement a realizer for them. E.g. if P is a boolean predicate on natural numbers, the statement "(~forall n, ~P n) -> exists n, P n" is true, since we can realize it using linear search, which terminates because of the assumption on P. Similarly, we have a realizer for the statement that "all functions on real numbers are continuous", so what you get here is truly distinct from classical logic.
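That linear-search realizer is short enough to write out in Haskell (the name is mine; this is a sketch, not NuPRL's actual implementation):

```haskell
-- Realizer for "(~forall n, ~P n) -> exists n, P n" when p is a
-- decidable predicate on the naturals: try 0, 1, 2, ... and return
-- the first witness. It diverges exactly when the premise fails,
-- i.e. when p is everywhere false.
searchWitness :: (Integer -> Bool) -> Integer
searchWitness p = head [n | n <- [0 ..], p n]
```

For example, `searchWitness (\n -> n * n > 10)` returns `4`, together with (implicitly) the evidence that the predicate holds there.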
Of course, realizability has the problem that it already presupposes that you know how to show properties of arbitrary programs! On the other hand, if you have a proof for a positive statement you can always check the truth of special instances by running a program and looking at the results.
Anyway, to get back to your question. Under this interpretation, you can implement a proof on a metabolic network, in the same way as you can implement it using sticks and stones. Whether or not that has anything to do with the "Curry-Howard" correspondence is another question, though.
And I didn't try to cast it as a debate between Church and Turing (which never existed, AFAIK), but to point out that equating programs with proofs is one rather narrow perspective of looking at programs. Arbitrary programs expressed in arbitrary ways are rarely proofs of anything (or anything nontrivial, that is), even if they happen to correspond to realizers. I mentioned Turing to point out how this narrow view may result in missing out on fundamental qualities of computation.
What I can say is that these days the Curry-Howard correspondence is typically taken to state that "Propositions are Types". And for typed languages, the difference between intrinsic types (Church style) and types as properties of programs (Curry style) are just two extremes on a very fluid spectrum.
You are absolutely right that this realizability interpretation as stated doesn't really help you define a complete notion of truth, since it doesn't solve the (unsolvable) problem of telling which proposition a program realizes. This is one of the reasons why Brouwer was so opposed to basing everything in mathematics on a system of inference rules. Without "intuition" about whether a particular program realizes a proposition you cannot really get off the ground.
Are they not equivalent to computers running some program? Equivalence is transitive.
But I don't think you will be able to get any useful proof from one of them. That's because they are hard to program, even though they are still running some proof.
Do you know whether anything like this been formalized and proven?
But let me give an example that would make it more concrete. Suppose you want to build a computer that simulates the behavior of n bodies in gravitational interaction in space. It's obvious that with a digital computer, there would eventually be discrepancies between the actual system and the simulation (which would only grow). But now consider this: suppose that instead you want to do the simulation by building a "computer" which is nothing more than a copy of the original system (i.e., the "simulation" is composed of another set of n bodies in space). Is it possible to have this analog "simulation" more accurate than a digital simulation? The answer is no, because even supposing a true continuum of positions and velocities exists in the first place, in order to reconstruct the system precisely, you would need to pass an infinite amount of information to the "simulation", which is impossible.
However, problem sets generally aren't graded like they're programs, haha.
props to wadler. best ~40 minutes I've ever spent.
i would like to read my programs as proofs. then I ask myself what is the Amazon.com front page proving?
we talk about proof writing in Haskell or Agda. What about Python? Can we prove things there?
And finally, I think about the statements I would actually like to prove. Much of mathematics requires second-order logic or beyond, such as the Fundamental Theorem of Calculus.
So for me the triad between Math and Logic and Computer Science is less than air-tight and will remain that way.
Your ability to prove something in any language is dependent on having formal semantics for that language, or at least for a cognitively manageable subset that you're willing to restrict yourself to. The language semantics and memory model serve as axioms.
Given how American programmers are trained, it's not realistic to expect them to be able to think of programs in a logical semantic way. There is something of an art to it and it's hard to learn a new way of thinking when you're satisfied with a way you already have.
I agree that you can have an enjoyable hobby or make a living programming without really understanding what you're doing beyond intuitively and that's totally fine. Computing time is cheap and most applications don't cause any great harm when they behave in unintended ways. However the isomorphism between predicate calculus and computer programming isn't subjective. Programs really are proofs, just some of them are very sloppy.
Python is dynamically typed, so it doesn't prove much either. But in a statically typed language, if you have a method that takes an integer and returns a string, then you have proven you can generate a string from an integer. If it runs, then you have proven it. Granted, that's also not particularly useful, but it's proven if it compiles/runs.
To prove, you need types. Typeless proofs/programs are just really general, not-very-useful proofs, same with dynamic typing. That's why static typing gets you a little closer to the correspondence/isomorphism.
Dependent types get you closer still: if you have a method that will only compile if it takes an array of numbers and returns only the subset of those numbers that are even, then the correspondence starts to get a bit useful. Then you know your output won't have numbers that weren't in the input.
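You need full dependent types to state that guarantee in the type itself, but even a plain Haskell smart constructor can approximate the weaker "only evens come out" half of it. The `Even` wrapper and function names here are my own illustration, not a library type:

```haskell
import Data.Maybe (mapMaybe)

newtype Even = Even Int deriving (Eq, Show)

-- In a real module you'd hide the Even constructor, so that this
-- check is the only way to build one: any value of type Even then
-- carries evidence that its payload really is even.
mkEven :: Int -> Maybe Even
mkEven n
  | even n    = Just (Even n)
  | otherwise = Nothing

-- The output type already rules out odd numbers; no runtime
-- check by the caller is needed.
keepEvens :: [Int] -> [Even]
keepEvens = mapMaybe mkEven
```

What this can't express, and dependent types can, is the other half: that every output was also an element of the input.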
I don't have much experience with this so I'm probably explaining it badly.
The basic idea is that every typed programming language is analogous to a mathematical logic system, and that
* a program's type is analogous to a logic statement (a theorem).
* a program's source code is equivalent to a proof of that theorem.
* type-checking a program is equivalent to checking whether a proof is correct.
* executing a program is equivalent to simplifying a proof.
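A tiny Haskell illustration of the last two points (the combinator names are the traditional S and K; the example is my own): type-checking `skk` checks a composite proof of "A implies A", and evaluating it normalizes that proof down to the identity.

```haskell
-- "A implies A" proved the long way round, as S K K.
skk :: a -> a
skk = s k k
  where
    s f g x = f x (g x)  -- S combinator: proof of (A->B->C) -> (A->B) -> A -> C
    k x _ = x            -- K combinator: proof of A -> B -> A
```

Running `skk 42` beta-reduces the composite proof step by step until only the identity proof remains, returning `42` unchanged.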
If we restrict ourselves to the sort of type systems in mainstream languages the logic statements that the types can express are somewhat limited and not very interesting. The proofs as programs analogy really starts getting interesting when you have a very powerful type system...
One place where you see these powerful type systems is in the proof assistants for people who want to formalize mathematics. It turns out that writing proofs on these proof assistants is surprisingly similar to programming in a typed functional language! It is kind of interesting because all that you care about is that the program you write (the proof) type checks (is valid), and you don't actually ever run it.
Coming closer to the software development side of things, some of these proof assistants can be used to produce formally verified software. Basically, what they do is that instead of throwing away the programs (proofs) that you write, they compile them down to executable code.
This is not something that is very applicable to the general software population though since formal verification is a niche thing. It also doesn't help that these tools are currently super clunky to use and have a steep learning curve involving lots of difficult math.
Are you saying the practical relevancy here is actually to logicians? In that it means they can now more easily come up with proofs for their theorems by simply writing a program instead of whatever more complicated paper proof they used to have to come up with?
And that for software, it's only helpful in special cases where you need to guarantee a small piece of logic is 100% correct?
i'd say yes
Formal verification hasn't really caught on yet. The extra effort needed to formally prove even a trivial program seems to be too high for the value it gives to most stakeholders.
I'm really hopeful for simpler methods of formalized proofs, or even automated testing. I think today, the practical cost is maybe still too high for people to care.
On the other hand, formal construction is an idea whose time is long overdue. Rather than asking if program text T satisfies predicate P, instead start with P and then construct a T mechanically such that it must satisfy P. The latter is a much easier task than the former. Sort of like how it's easy to know N is the product of two large primes if you just multiplied two large primes together to get it.
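A small Haskell sketch of that contrast (the names are mine): `isSorted` checks an arbitrary list after the fact, while building the list with `insert` makes sortedness hold by construction, so the checking step becomes unnecessary.

```haskell
-- Checking after the fact: does this T satisfy predicate P?
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- Constructing so that P holds: inserting into a sorted list
-- preserves sortedness, so folding insert over any input can
-- only ever produce sorted output.
insert :: Ord a => a -> [a] -> [a]
insert x [] = [x]
insert x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insert x ys

sortByConstruction :: Ord a => [a] -> [a]
sortByConstruction = foldr insert []
```

The invariant is established once, in `insert`, instead of being re-verified on every result, which is the same asymmetry as multiplying two primes versus factoring their product.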