
The Holy Trinity: Logic, Languages, Categories (2011) - rpbertp13
https://existentialtype.wordpress.com/2011/03/27/the-holy-trinity/
======
orbifold
An interesting published review of this relationship is "Physics, Topology,
Logic and Computation: A Rosetta Stone", by John C. Baez, Mike Stay
[https://arxiv.org/abs/0903.0340](https://arxiv.org/abs/0903.0340). The first
author also has lots of interesting stuff related to (quantum) computation on
his website.

------
igravious
This link surfaced on HN before. :)

505 days ago, to be precise, during a discussion of the article “Relation
Between Type Theory, Category Theory and Logic” on
[https://ncatlab.org/nlab/show/relation+between+type+theory+a...](https://ncatlab.org/nlab/show/relation+between+type+theory+and+category+theory)

Fascinating topic, previous discussion:
[https://news.ycombinator.com/item?id=9867465](https://news.ycombinator.com/item?id=9867465)

~~~
posterboy
The second comment in the discussion is confusing. What's the significance of
the order in the tuple? An unordered set should be enough, so the tuples would
be obvious permutations. Is the order encoded in the elements, e.g. the
operator *?

~~~
lmm
You need to be able to write the group axioms down, so you need to distinguish
the three elements. E.g. one of the axioms is 1 ∈ G, which means something
very different from G ∈ 1. It all bottoms out in set theory, but (G, 1, X)
really is a different set from (G, X, 1) (e.g. they might be represented as
the sets {{G, 1, X}, {G, 1}, {G}} and {{G, 1, X}, {G, X}, {G}} respectively)
so you do need to decide which is the "canonical" representation of that
group, or else you need a rule that allows you to tell whether two different
sets are representations of the same group.
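
lmm's encoding can be checked mechanically. A hedged sketch in Haskell (my own
illustration, using Data.Set; not code from the thread) showing that the two
orderings really do produce different sets:

```haskell
import qualified Data.Set as S

-- Kuratowski-style encoding of an ordered triple (a, b, c) as the
-- set of sets {{a}, {a, b}, {a, b, c}}, as in the comment above.
encode :: (String, String, String) -> S.Set (S.Set String)
encode (a, b, c) =
  S.fromList [ S.fromList [a]
             , S.fromList [a, b]
             , S.fromList [a, b, c] ]

-- (G, 1, X) and (G, X, 1) encode to different sets, because the
-- two-element layers {G, 1} and {G, X} differ:
distinct :: Bool
distinct = encode ("G", "1", "X") /= encode ("G", "X", "1")
```

The two-element layer is what breaks the tie: both encodings share {G} and
{G, 1, X}, but {G, 1} and {G, X} are different sets.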

------
dvt
> Imagine a world in which logic, programming, and mathematics are unified, in
> which every proof corresponds to a program, every program to a mapping,
> every mapping to a proof!

For those of you who may not know, this is called the Curry-Howard(-Lambek)
Correspondence[1]. It establishes an isomorphism between well-formed computer
programs and formal logic. This isomorphism is fairly intuitive when doing
something like (typed) λ-calculus, but it also applies to things like Java and
C++.

[1]
[https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence)
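
As a hedged aside (my own sketch, not from the article): in a typed functional
language the correspondence is directly visible, because writing a term of a
type is exactly giving a proof of the corresponding proposition:

```haskell
-- Propositions-as-types: a value of a type is a proof of the
-- corresponding proposition, reading (,) as ∧ and (->) as →.

-- Proof of (A ∧ B) → A: projection.
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- Proof of A → (B → A): the K combinator.
konst :: a -> b -> a
konst x _ = x

-- Proof of ((A → B) ∧ (B → C)) → (A → C): composition.
compose :: (a -> b, b -> c) -> (a -> c)
compose (f, g) = g . f
```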

~~~
mafribe
Curry-Howard is not known to apply to Java and C++. That's because the typing
systems of these languages are not logics. For a start, in both languages, you
can write non-terminating programs, hence any type is inhabited, which means
inconsistency if you interpret types as truth-bearing propositions. Currently
Curry-Howard mostly only works for strongly normalising, pure functional
languages.
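
A minimal sketch of the inhabitation point (my own illustration, not from the
thread): even Haskell, which is pure but not strongly normalising, admits
general recursion, which "proves" every proposition:

```haskell
-- General recursion inhabits every type, so read as a logic the
-- type system proves every proposition -- i.e. it is inconsistent.
bottom :: a
bottom = bottom

-- A typechecking "proof" of the absurd A ∧ ¬A (negation
-- approximated here as a -> b):
contradiction :: (a, a -> b)
contradiction = bottom

-- Laziness even lets us mention such "proofs" without forcing them:
countProofs :: Int
countProofs = length [bottom, bottom, bottom]
```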

It's also not clear if Curry-Howard is really an isomorphism. For a map to be
an isomorphism, structure has to be preserved in both directions, but what is
the structure on proofs? That's a wide open question. We can't even agree on
what it means for proofs to be equal. (The structure of proofs is the content of
Hilbert's 24th problem, which he decided not to include in his famous 23
problems [1].) It's better to speak of the Curry-Howard correspondence.

[1] [https://en.wikipedia.org/wiki/Hilbert%27s_twenty-fourth_prob...](https://en.wikipedia.org/wiki/Hilbert%27s_twenty-fourth_problem)

~~~
chriswarbo
> Curry-Howard is not known to apply to Java and C++. That's because the
> typing systems of these languages are not logics. For a start, in both
> languages, you can write non-terminating programs, hence any type is
> inhabited, which means inconsistency if you interpret types as truth-bearing
> propositions. Currently Curry-Howard mostly only works for strongly
> normalising, pure functional languages.

As far as I'm aware, a complicated, inscrutable, unsound logic is still a
logic. It's debatable whether it's a _useful_ logic, but even that's not a
fatal flaw for these languages, seeing as their main design focus is making
normalisation of programs/proofs convenient.

I suppose the two perspectives of Curry-Howard aren't so much about
proving/programming as they are about caring-about-the-theorem/caring-about-
the-proof. The _existence_ of non-termination is a problem if you care about
theorems, since you can't trust proofs and must deal with them manually to see
if they're doing anything fishy.

Java and C++ care _so much_ about proofs that they don't really understand
what else you might want to do _other_ than dealing with proofs (programs)
manually. If you presented a C++ programmer with an automated theorem prover
(given a C++ type signature, return a C++ program fragment of that type) I
imagine they wouldn't be particularly impressed, since types are just one
small part of the concerns that a C++ programmer has.

~~~
mafribe

    unsound logic is still a logic

Hmmm ... If your logic is unsound, anything can be proven. Soundness is a key
requirement of any logic for this reason.

~~~
chriswarbo
I would hedge my bets, and say that soundness is _often_ a _useful_ property
of logics, especially when they're used for proving theorems/conjectures.

In the case of C++ and Java, they're not often (ever?) used for proving, yet
Curry-Howard tells us that they are logics.

Curry-Howard goes both ways, it's not a _reduction_ of programming to theorem
proving; if it were, then we could write optimising compilers for all
dynamically typed languages, which output "null" for every input.

Likewise, mathematics cannot be summarised as "getting stuff to compile".

Instead, the concerns and interests of the practitioners from both sides are
unchanged by Curry-Howard; it just tells us that both those groups are
actually working within the same context, and hence they may be able to share
ideas/tools/techniques/etc.

Whilst an unsound logic might be useless to a logician, a programmer (or even
computer scientist!) may not care too much, since they might be much more
concerned about, say, the computational complexity of an algorithm rather than
its types.

~~~
mafribe
Can you explain to me in what sense the Java and C++ typing systems are
logics? Because I don't see it.

~~~
dvt
Any typing system that has at least two types (functional and non-functional)
is a logic. In fact, _modus ponens_ follows axiomatically via →elimination:

Let α → H be a functional type where α is a Person type, and H is a boolean
type.

    
    
       Γ ⊢ α → H     Γ ⊢ α
      --------------------- (→e)
                Γ ⊢ H 
    

What this means is that you can have a function isAtHome(x) which takes a
Person and returns a boolean. This schema itself is a logic insofar as it
follows the same rules as the proposition A → B where A is true:

      A → B
      A
      ∴ B

(I wrote a simple and short paper[1] about typing λ-calculus in the context of
Frege's concept horse paradox a few years ago.)

[1] [http://dvt.name/logic/horse2.pdf](http://dvt.name/logic/horse2.pdf)
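
dvt's rule transcribes almost verbatim into code. A small sketch, where
isAtHome and its rule are hypothetical, as in the comment:

```haskell
-- The implication α → H: a function from Person to Bool.
newtype Person = Person String

isAtHome :: Person -> Bool
isAtHome (Person name) = name == "Alice"  -- hypothetical rule

-- Modus ponens (→e) is just function application: from a proof of
-- α → H and a proof of α, conclude H.
conclusion :: Bool
conclusion = isAtHome (Person "Alice")
```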

~~~
chriswarbo
Thanks, this is what I was getting at: the type systems of C++ and Java are
logics in the sense of being formal syntactic systems with deduction rules.

I was struggling to think of a concrete example less trivial than function
application, but I suppose my difficulty just shows how terrible these systems
are when viewed as logics (but they're still logics!)

~~~
dvt
> Thanks, this is what I was getting at: the type systems of C++ and Java are
> logics in the sense of being formal syntactic systems with deduction rules.

Yep, exactly. I think it's hard to come up with a more concrete example
because an untyped theory (like vanilla λ-calculus) is not deductive.

Let me clarify: once you add types to λ-calculus, it (amazingly) becomes
deductive! But the type theory itself is not deductive (because types are not
tractable, among other reasons).

------
cousin_it
Unfortunately the most beautiful form of that analogy is the one that's
crippled on both sides: computation (without Turing completeness) is
equivalent to logic (without excluded middle). Add Turing completeness and you
lose soundness. Add excluded middle and you lose confluence. Of course you can
put epicycles on top of the simple idea and make it do anything you want, but
as far as I know, no one has ever used the Curry-Howard toolbox to solve a
nontrivial algorithmic problem for the first time. Any exciting paper about
algorithms (like quicksort, Dijkstra's algorithm, or Primes in P) will usually
have a lot of math involving numbers, but no math involving types.

~~~
s-phi-nl
Could you point to a reference for the non-confluence of logic without
excluded middle? It sounds interesting and a quick Google search did not find
anything.

~~~
cousin_it
With excluded middle. This is the explanation that clicked for me:
[http://stackoverflow.com/questions/24711643/how-come-that-we...](http://stackoverflow.com/questions/24711643/how-come-that-we-can-implement-call-cc-but-the-classical-logic-intuitionistic)
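
The linked answer's point can be sketched in a few lines; this is my own
illustration with a hand-rolled continuation type, not code from the thread.
The type of callCC is exactly Peirce's law, ((A → B) → A) → A, a classical
axiom, with each proposition wrapped in Cont:

```haskell
-- A minimal continuation type, to keep this self-contained.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- callCC captures the current continuation k; invoking the captured
-- continuation discards the local one and jumps out.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f = Cont (\k -> runCont (f (\a -> Cont (\_ -> k a))) k)

-- Peirce's law ((A → B) → A) → A, not provable intuitionistically,
-- but inhabited once propositions are wrapped in Cont:
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC

-- Using the captured continuation to escape early:
escaped :: Int
escaped = runCont (peirce (\exit -> exit 42)) id
```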

~~~
s-phi-nl
Thanks! Yes, with excluded middle is what I meant.

------
pmoriarty
_" In this sense all three have ontological force; they codify what is, not
how to describe what is already given to us."_

What's the difference between "what is" and "what is already given to us"?
What's the difference between codifying and describing it?

~~~
danharaj
20th century mathematics takes first order classical logic and set theory as
foundational. All other mathematics then becomes the analysis of sets via
proofs in first order classical logic. The ugliness of this definition is that
it is unnatural. Mathematicians don't always think in sets, they think
directly about mathematical objects intuitively. No one cares which set
encodes the number _e_ , for example, but foundationally speaking the number
_e_ must be a set, and the properties of _e_ are the properties of that set.
This is unsatisfactory from both an aesthetic and practical point of view: the
Set foundations of modern mathematics obscure the fundamental, recurring
themes of mathematics.

This is true even in computer science. Turing machines are sets too! I know,
ridiculous.

Harper is saying that the correspondence between logic, language, and category
is a suitable foundation for mathematical/logical/computational activity and
they're all the same thing. This is not merely in the sense that the objects
we work with are given beforehand by some more fundamental theory and we are
merely analyzing them, but in the sense that logic/language/category is the
appropriate setting in which to define our objects, to do synthetic as well as
analytical reasoning. That is what makes a particular theory foundational.

~~~
AnimalMuppet
> All other mathematics then becomes the analysis of sets via proofs in first
> order classical logic.

That's like saying that home construction becomes an exercise in atomic
physics. Yes, in some sense it's true, but it's so far removed from what's
actually going on that it's completely uninteresting.

Take the person working on differentiable manifolds. Do they care about that
as problems in set theory? Not at all. Change it to problems in
logic/language/category theory? They still don't care. Axiomatize mathematics
how you will, they won't care, because they're as many layers away from it as
the home construction people are from atomic physics.

~~~
danharaj
[https://ncatlab.org/nlab/show/motivation+for+higher+differen...](https://ncatlab.org/nlab/show/motivation+for+higher+differential+geometry)

Without category theory, certain generalizations of differential geometry
would be virtually impossible to describe in any useful manner. These
generalizations are extremely important for applications to physics and
elucidate a lot of the fundamental structure of the classical theory.
Furthermore, you get a much nicer theory when you consider higher geometric
structures.

The power of category theory as foundations is that it lets you efficiently
study the structure of a theory and how to modify that structure in principled
ways.

Sure, the mathematician who is doing hard analysis probably does not care too
much about category theory, but every mathematician being educated today knows
at least a little category theory because it is the natural setting for
certain key mathematical ideas, like cohomology.

In contrast, Set Theory is powerful but it doesn't organize mathematics the
way logic/language/category theory does.

------
threepipeproblm
What a coincidence... just ran across this video --
[https://www.youtube.com/watch?v=YRu7Xi-mNK8](https://www.youtube.com/watch?v=YRu7Xi-mNK8) -- last night, where Prof.
Frank Pfenning makes a similar point in the first few minutes.

------
adamnemecek
I think that probability also deserves its place in this tetrad.

~~~
lambdasquirrel
Probability is an application of this aforementioned math, not foundational to
it.

~~~
adamnemecek
The empirical point of view is definitely worth mentioning.

~~~
SomeStupidPoint
Could you elaborate on what you mean here? I'm not sure I understand what
you're trying to say.

~~~
adamnemecek
Well, fundamentally probability is a framework for reasoning about the future
based on previous events. This framework is analogous to logic, but it's
separate, and even though it's not as clean, it's a lot more useful for dealing
with the real world.

~~~
danharaj
Probability and probabilistic reasoning can be described by this framework.
The theory is not yet as well developed as the classical theories. A
discussion is here:
[https://golem.ph.utexas.edu/category/2007/02/category_theore...](https://golem.ph.utexas.edu/category/2007/02/category_theoretic_probability.html)

~~~
aminorex
The framework itself needs description in probabilistic terms. Otherwise it
has only pragmatic utility, and degraded epistemic status.

------
zengid
I feel like this mystical mumbo-jumbo is counterproductive.

~~~
throwaway729
It's a metaphor and meant to be tongue-in-cheek. There's nothing mystical
about the actual content of the post.

------
xchaotic
How is this going to make my code better? As far as I remember, attempts to use
proofs in production were counterproductive. For most systems, knowing that
print "hello"; will do just that is good enough.

~~~
throwaway729
_> How is this going to make my code better?_

This blog post is describing a research agenda that has already provided
tremendous fruit to language designers. You can find the echoes of this
research agenda in the design and implementation of basically every modern
typed language, as well as a lot of the infrastructure for dynamic languages.

If you're a language designer and you're not aware of the research agenda
described in this blog post, learning more about programming language theory
will be immensely beneficial.

If you're not a language designer, then your attitude is kinda weird and
off-putting. Like demanding an explanation for how fluid mechanics will make your
code better. (As it turns out, understanding the theory behind the languages
you're using _will_ make you a better programmer, but that's really beside the
point.)

 _> As far as I remember attempts to use proofs in production were counter
productive_

You remember incorrectly.

 _> For most systems knowing that print "hello"; will do that is good enough._

Isn't this attitude exactly how we end up with vulnerability-riddled, brittle,
and barely maintainable code? I can't think of a single system I've designed
or implemented where all I needed to know was that STDOUT worked properly.
Most of my test cases capture _far_ more interesting properties. The more of
those test cases my tooling can rule out for me, the better.

~~~
posterboy
> > As far as I remember attempts to use proofs in production were counter
> productive

Wait, does that mean in development? Otherwise, as I understood _production so
far_ it would be dependent typing. Supposedly that infers and automates a lot
of otherwise handwritten safety checks.

> You remember incorrectly.

Indeed, Test Driven Development is huge and it's not even really formal.
Formalisms should facilitate it a lot.

~~~
hyperpape
At some level of abstraction, types of any kind are already proofs, and
they're tremendously important in many actual existing languages.

What you're probably glomming on to is that efforts to push the bounds of what
you can prove (higher kinded or dependent types) have had mixed results (mixed
in the sense that some people think they're great, others are unconvinced).

So we're in a weird situation where many many people are using a tool, but
relatively few are interested in new developments concerning how to use that
tool.

