
That's not a charitable interpretation ("twisting history to support their world view"), and your comment seems to be conflating the language-logic view itself with the history presented here. The relationship between programs and logic is compelling for many reasons and can stand independent of how the two happened to be developed; my take is that the historical story here—which is definitely too pat—is presented more as an illustration of the idea than anything else.

This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress. That's certainly something interesting but it has no place—and no room—in textbooks dedicated to explaining abstractions. The simplified story of how something could have been developed is, I believe, more relevant to understanding the importance of an idea than the way it was actually developed. The real history is full of noise, a function of the person who first happened upon the idea. It's not hard to imagine thousands of alternate worlds where the exact development of an idea was slightly different while reflecting the same fundamentals; those differences end up as irrelevant details, accidents of chance.




> This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress. That's certainly something interesting but it has no place—and no room—in textbooks dedicated to explaining abstractions. The simplified story of how something could have been developed is, I believe, more relevant to understanding the importance of an idea than the way it was actually developed.

Unfortunately, textbooks rarely do even this. That is, they don't supply nearly enough justification/motivation for the complicated definitions they posit and the structures they develop. For the canonical (and accessible) bit of philosophy of mathematics on this, I highly recommend Lakatos's "Proofs and Refutations" [1]. As a demonstration, it gives a very non-historical and exhaustive explanation for the definition of an "honest" polyhedron with respect to the Euler characteristic [2], and is liberally footnoted with the actual history. Textbooks have not improved much since it was written in 1963.

[1]: https://math.berkeley.edu/~kpmann/Lakatos.pdf

[2]: https://en.wikipedia.org/wiki/Euler_characteristic


Ah, that's a fair point. Most math textbooks are not that great pedagogically—perhaps I was just projecting how I think mathematical ideas should be introduced :).

I'm definitely trying to do this myself: whenever I teach an abstract concept (more from PL than math), I've started to explicitly present both the formal definition and why it's interesting or relevant. Many explanations of abstract concepts I run into omit one or the other of these components, and it never ends well...


> This is no different than how math textbooks present the history of any particular mathematical abstraction or theorem: they don't go in depth on all the false starts and dead ends people explored before making real progress.

That's true, but it is important to point out when the story is told in a specific way to promote a certain ideology. I'm not saying it's wrong to promote an ideology (on the contrary; ideology is often very important) or that it's rare, but readers should be aware that a certain ideology is being promoted. Wadler is trying to promote functional programming by presenting it as a natural result of logic, and in many ways the essence of his story is correct (judging from other, more neutral accounts I've read). But what readers may not know is that logic has been neither the only motivating force in theoretical computer science nor the most powerful one, that there are other useful analogies between computation and other phenomena, and that what's presented here is just one perspective on computation.

For example, Turing, the father of theoretical computer science, who gets only a brief mention in this account[1], turned his attention to neural networks, biological computation, and possibly even quantum computation (although he died while working on some physics results). Much of the main thrust of theoretical computer science was similarly influenced by information theory and the search for the "physical laws" of computation, which led to complexity theory, something that is almost entirely missing from the logical treatment.

[1]: Both Wadler and Bob Harper often try to downplay Turing's role for ideological reasons (which, I think, is pretty funny considering the history of computer science), and neglect to mention that Church's relatively narrow perspective caused him to miss something very basic about computation. Turing, who was opposed to logicism[2], i.e., reducing math to logic, was therefore able to grasp something much more fundamental about computation than Church. Interestingly, Brouwer, another figure who inspires proponents of FP, did not fall into this trap; he had the philosophical breadth that led him closer to Turing's general view of computation than to Church's, by trying, like Turing, to think -- philosophically rather than formally -- about how the mind of a mathematician works regardless of any specific formalism.

[2]: https://mdetlefsen.nd.edu/assets/201037/jf.turing.pdf

Some interesting quotes:

> On 1 December 1933, the minutes in the Moral Science Club records: “A.M. Turing read a paper on ‘Mathematics and Logic’. He suggested that a purely logistic view of mathematics was inadequate; and that mathematical propositions possessed a variety of interpretations, of which the logistic was merely one.”

> He was engaged in a program of putting Russell’s new logic into place, critiquing, not only its position or office, but perspectives on mathematics being forwarded by the “purely logistic” views. Turing was insisting that mathematical discourse stand on its own autonomous feet. And he was rejecting the idea that Principia — or any other construal of mathematics as a whole in terms of symbolic logic — offers mathematics a “foundation” of any ultimate or privileged sort, or reduces it to logic. A “purely logistic view” is but one (philosophically tendentious) “interpretation” among others, and is therefore “inadequate”. This is thoroughly consonant with the opening of Wittgenstein’s 1932 Cambridge lectures “Philosophy for Mathematicians” we have quoted above: Principia is simply part of mathematics, and there is nothing wrong with mathematics before its foundations are laid.

> Turing developed an “anthropological” approach to the foundations of logic, influenced by Wittgenstein, in which “common sense” plays a foundational role.


About Turing's critique of Russell's logicism: is it really applicable to proofs-as-programs? It seems to me that constructive mathematics is pretty far from logicism (not that I understand the subject very well, sorry if I'm talking nonsense).


I think it is applicable to "programs as proofs" (i.e., the opposite direction, a claim Wadler also makes), but more as an analogy. Once you say programs are proofs, you reduce programs to logic, while computation is a natural (as well as mathematical) phenomenon that exists outside of any formalism.


I'm not sure that's true. Certainly if you fix a particular programming language, that's equivalent to picking a particular logical formalism (and by Gödel's incompleteness theorem we know that can't be the whole story).
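
To make that concrete, here's a toy Haskell sketch of the correspondence (the function names are my own): reading a type as a proposition, a total well-typed term is a proof of it.

    -- "A implies ((A implies B) implies B)": modus ponens, flipped.
    modusPonens :: a -> (a -> b) -> b
    modusPonens x f = f x

    -- Conjunction as a pair: "(A and B) implies A".
    projLeft :: (a, b) -> a
    projLeft (x, _) = x

    -- Disjunction as Either: "A implies (A or B)".
    injLeft :: a -> Either a b
    injLeft = Left

Fixing Haskell here is exactly fixing one formalism; a different type system would give you a different logic.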

But saying that proofs are programs is a more general statement. Computations are always possible to write down as programs, and by the Church-Turing thesis we know that we can pick any reasonable computational model (Turing machines, say, or untyped lambda calculus) and know that we have captured all of them; we don't have to worry about "missing" some extra computations outside the formalism.
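
To sketch that generality (my own toy encoding, not anything from the article): the whole untyped lambda calculus fits inside a single recursive Haskell type, general recursion and all.

    -- Untyped lambda terms as values of one recursive type.
    newtype U = U { apply :: U -> U }

    -- Self-application, \x. x x: untypeable in plain Hindley-Milner,
    -- but expressible through the recursive wrapper.
    selfApply :: U -> U
    selfApply x = apply x x

    -- omega = (\x. x x) (\x. x x): a well-formed term that diverges.
    omega :: U
    omega = selfApply (U selfApply)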

The flip side is that without a type system, you can't in general tell which programs are proofs and which are not. But that's as it should be: truth is always an undecidable notion. This means that if we want a foundation for mathematics, we should consider _realizability_, not _typeability/provability_, as the most basic notion.
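
One concrete way to see that caveat (again, my own illustration): in a language with unrestricted recursion, every type is inhabited, so a term's type alone doesn't certify a proof; you also need termination, which is undecidable.

    -- A well-typed "proof" of any proposition whatsoever...
    bogus :: a
    bogus = bogus   -- ...that never actually produces evidence (it diverges)

A genuine proof of the same type must terminate with evidence, and telling the two apart in general is the halting problem.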

Bob Harper wrote a blog post about this: https://existentialtype.wordpress.com/2012/08/11/extensional...

(Edit: ok, I see now that you already discussed this with fmap in another thread.)


I have no problem with "proofs are programs", only with "programs are proofs", and I think that most of theoretical computer science refuses, almost on principle, to view computation merely as programs expressed in any particular formalism, just as physicists would say that a certain ODE is a description of a falling apple, not that a falling apple is an ODE, nor that an apple can only fall if we can describe it in some formalism.

See the discussion here about realizers vs. proofs: https://news.ycombinator.com/item?id=13495547. Perhaps you can shed more light on the subject. Harper's post is interesting, but while it does offer a less narrow view than Wadler's by using a different definition of proof (which I don't think deserves the name, but never mind), it is still too narrow a definition of computation (and his introduction of types seems to muddy the water and complicate matters further, partly because types are certificates, and checking them requires yet another computation). Computation is a wider notion than constructive mathematics: every construction is a computation, but not every computation is a construction, and two different computations can yield the same construct.
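
A toy illustration of that last point (my example): two syntactically different computations that construct the same object, so the construct underdetermines the computation.

    -- Two different computations, one construct: the reduction
    -- paths differ, yet both terms denote the same number.
    viaAddition :: Int
    viaAddition = 2 + 2

    viaFold :: Int
    viaFold = foldr (+) 0 (replicate 4 1)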

As to the choice of a foundation of mathematics, I think this is largely an aesthetic matter. It is certainly a separate discussion from computation, in the sense that computation does not depend on the choice of mathematical foundation, nor does the choice of mathematical foundation need to be built around the notion of computation. I like Turing's view that there are no privileged foundations, and that a mathematical formalism should be chosen based on utility, on a case-by-case basis.



