
Curry-Howard, the Ontological Ultimate - kryptiskt
http://psnively.github.io/blog/2014/10/14/Curry-Howard-the-Ontological-Ultimate/
======
jules
This is very vague about what Curry-Howard means. Curry-Howard means that any
type can be interpreted as a theorem in some logical system, and any term can
be interpreted as a proof of its type.

This does _not_ mean that those theorems have anything to do with your
program. Take the following function:

    
    
        swap :: forall a b. (a, b) -> (b, a)
        swap pair = (snd pair, fst pair)
    

The type here is forall a b. (a, b) -> (b, a). The logical reading of this type
is (a and b) => (b and a). Note that this is a theorem in _logic_, not a
theorem _about your program_.

This is entirely different from what the types say about your program; those
properties come from the progress and preservation proofs for the type system.
Now it is certainly the case that in dependently typed languages you can use
the logic you get from Curry-Howard to express theorems about values in your
program; that's the whole point of dependent typing. But it's important to keep
in mind that statements such as this:

> Look again at the OCaml Set module example above. You already get a
> universally-quantified property just from using the type system to guarantee
> an invariant expressed in an ADT. That’s the import of the Curry-Howard
> Isomorphism, minus any appeals to more expressively powerful type systems.

are nonsense. The fact that the module enforces the properties

    
    
        P(n) === S1.mem n (S1.singleton n) = true
        Q(n) === forall m \in int. m /= n -> S1.mem m (S1.singleton n) = false
    

has nothing to do with Curry-Howard. That rather proves Kell's point about the
equivocation around type theory.

------
cousin_it
> _You express your specification as types (theorems), and your implementation
> proves those theorems. Then your code is correct by construction. This is a
> true statement; it isn’t open to debate, not being a matter of opinion._

That's technically correct, but a little bit divorced from reality. Let's say
I want to write a function that counts the primes below one million. Since
that's a function with no arguments that returns a single integer, the
corresponding theorem by Curry-Howard is that "at least one integer exists".
The function "return 0" is a proof of that theorem. That doesn't seem too
useful!
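
To make that concrete, here is a minimal Haskell sketch (countPrimesBelowMillion
is just an illustrative name): the type promises nothing more than that some Int
exists, so an obviously wrong body still type-checks.

    
    
        -- the type claims no more than "an Int exists", so this wrong
        -- implementation is nevertheless a perfectly good "proof" of it
        countPrimesBelowMillion :: Int
        countPrimesBelowMillion = 0
    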

You might argue that we should use a richer type system to express the
specification. Sure! We just need to express the idea of "primes below one
million" in the type system. But that's at least as difficult and bug-prone as
writing the program in the first place, so I have a hard time seeing the
benefit.

Most real world functions are the same way. Say you're programming a game and
you need a function to fire the gun. Play the sound, add some screenshake,
create a bullet object, etc. Go ahead, express that as a theorem. Don't forget
to come back and tell us what you gained by doing that.

~~~
codemac
Um, maybe you missed the point about dependent types? You can express a
function that takes no arguments and returns the primes below 1,000,000, with
that specification in its type! That's kind of the point of the talk, and it's
supposed to be possible in languages like Agda and Idris (I don't know enough
about them to give any syntax here). See jules' comment below about how
Curry-Howard does not mean your implementation is a full proof, as well.

Considering that code that flies planes and shoots guns is generally written
using proofs about control systems (if not required by law), your examples are
a bit weak.

~~~
kestert
The constructive nature of Idris makes it hard (at least for me) to express a
prime under 1,000,000 as a type. For example, it's trivial to express a
composite as a GADT:

    
    
      data Composite : Nat -> Type where
        factors : (x : Nat) -> (y : Nat) -> Composite ((S (S x)) * (S (S y)))
    

but there is no analogous construction of a prime type. Luckily most day to
day programming looks more like the composite case than the prime case.
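
For readers more comfortable in Haskell, a rough GHC rendering of the same
Composite type is sketched below, using promoted Peano naturals; the Nat, Add,
Mul, and SNat helpers are defined locally for illustration and are not taken
from any library.

    
    
        {-# LANGUAGE DataKinds, GADTs, TypeFamilies, UndecidableInstances #-}
        
        import Data.Kind (Type)
        
        -- Peano naturals, promoted to the type level by DataKinds
        data Nat = Z | S Nat
        
        type family Add (m :: Nat) (n :: Nat) :: Nat where
          Add 'Z     n = n
          Add ('S m) n = 'S (Add m n)
        
        type family Mul (m :: Nat) (n :: Nat) :: Nat where
          Mul 'Z     n = 'Z
          Mul ('S m) n = Add n (Mul m n)
        
        -- value-level witnesses of type-level naturals
        data SNat :: Nat -> Type where
          SZ :: SNat 'Z
          SS :: SNat n -> SNat ('S n)
        
        -- Composite n is only inhabited when n = (x + 2) * (y + 2)
        data Composite :: Nat -> Type where
          Factors :: SNat x -> SNat y -> Composite (Mul ('S ('S x)) ('S ('S y)))
    

As in the Idris version, the only way to build a Composite n is to exhibit two
factors that are each at least 2.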

~~~
gergoerdi
As an example, here's how primality is defined in the Agda standard library.
It is essentially the negation of "has a nontrivial divisor" (for numbers
greater than 1).

[http://agda.github.io/agda-stdlib/html/Data.Nat.Primality.html](http://agda.github.io/agda-stdlib/html/Data.Nat.Primality.html)

------
Confusion

      Again, how he gets this from Amanda’s and my presentation is beyond me
    

Yes, I don't doubt it is beyond him. Yet he is one of only two very smart
people with very interesting things to say who managed to get themselves
removed from my various feeds, because I just couldn't stand their idolatry of
static typing and never-ending condescension toward anything remotely critical
of it anymore.

I haven't seen the presentation, but I don't for a second doubt they implied
that:

    
    
      If somebody stays away from typed languages, it [means] they’re stupid,
      or lack education, or are afraid of maths. There are [no] valid practical
      reasons for staying away [from typed languages].
    

He _believes_ they were being very constructive, showing an understanding of
other opinions and taking a nuanced view of static typing, with an eye for
valid criticisms. I _know_, from the online reviews, that many did not
experience the talk that way.

~~~
insulttogingery
Can you recommend some even-handed reading on the opposing views here?

~~~
rntz
This comment by Matthias Felleisen is a very polite and well-considered
position from an academic on the dynamically-typed side of the spectrum:
[http://existentialtype.wordpress.com/2014/04/21/bellman-confirms-a-suspicion/#comment-1362](http://existentialtype.wordpress.com/2014/04/21/bellman-confirms-a-suspicion/#comment-1362)

Harper himself can't be relied on to be so cordial, unfortunately.

~~~
Confusion
Speaking of the culture around static typing: Bob Harper's blog, where
Matthias Felleisen gets 15 downvotes per courteous, thoughtful response...

~~~
jules
Seems like voting has now been turned off, or at least I don't see any votes
any more.

------
ddellacosta
The talk he's defending was tiresome and dripping with condescension (I agree
with Stephen Diehl:
[https://twitter.com/smdiehl/status/519160854517149696](https://twitter.com/smdiehl/status/519160854517149696)).
Typed-language folks don't need advocates who waste breath talking about
"incompetent Ruby developers moving to Clojure" and whatnot. There's enough
positive stuff to talk about! I think the folks who gave that talk should read
Gershom Bazerman's excellent post (he has no trouble giving talks filled with
excitement about type- and category-theoretic concepts, without condescension):
[http://comonad.com/reader/2014/letter-to-a-young-haskell-enthusiast/](http://comonad.com/reader/2014/letter-to-a-young-haskell-enthusiast/)

------
StefanKarpinski
It seems oddly defensive for Paul Snively to interpret Stephen Kell's "Seven
Sins" essay as being mostly about his (and Amanda Laucher's) Strange Loop talk
– sure, their talk was mentioned as an example in passing (along with another
talk about Rust), but in no way is the essay specifically about their talk.
The seven sins essay addresses the entire discourse about static and dynamic
typing; whether or not one particular talk exhibits a particular "sin" says
nothing about whether the broader problem is real.

------
nl
> > In a typed language, only polymorphism which can be proved correct is
> > admissible. In an untyped language, arbitrarily complex polymorphism is
> > expressible.
>
> This misses the point, which is that “arbitrary” has proven to be unlikely to
> be correct. Ultimately, this argument comes down to “you should trust human
> programmer intuition,” which I gladly admit to being an assumption I’m
> unwilling to make.

It's true that _“arbitrary” has proven to be unlikely to be correct_ in the
general case.

However, allowing untyped types(?) has also proven to be _useful_. I think
this is because there is yet to be a language that makes expressing complex
types _easy_.

We work in an environment where FizzBuzz is considered a good way of filtering
people who can program.

I'm generally a proponent of "strongly typed" languages. But I do worry about
the cognitive overhead the syntax causes, and I do wonder if there is a
"better way".

~~~
gcanti
> I'm generally a proponent of "strongly typed" languages. But I do worry
> about the cognitive overhead the syntax causes, and I do wonder if there is
> a "better way".

What about a dynamic language + runtime type checking + (nearly) 100% test
coverage? You get the flexibility of a dynamic language and the safety of a
statically typed language (in fact you get more, since runtime type checking is
more powerful).

~~~
agumonkey
I still don't understand the notion that dynamic types give you anything more
than static types. Are there really useful programs that can't be written in,
say, Haskell?

~~~
gcanti
Unfortunately I don't know Haskell well enough to give you an answer. I was
referring to this:

> I think this is because there is yet to be a language that makes expressing
> complex types easy.

Since I'm obliged to use a dynamic language (JavaScript) I'm looking for a
solution in it. And in JavaScript it is really easy to express a complex type,
say "the set of all the even integers between 2 and 14 except 10", using
runtime checking to enforce the constraints. Is it simple in Haskell to define
such a set in its type system?
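
For concreteness, the runtime-checked approach described here looks roughly
like the following Haskell smart-constructor sketch (Member and mkMember are
hypothetical names); the check is ordinary value-level code, with nothing
pushed into the type system yet:

    
    
        -- membership in the set is verified by an ordinary runtime check
        newtype Member = Member Int deriving Show

        mkMember :: Int -> Maybe Member
        mkMember n
          | even n && n >= 2 && n <= 14 && n /= 10 = Just (Member n)
          | otherwise                              = Nothing
    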

~~~
lmm
Simple? Yes. Pleasant? Not really. I'm not a Haskell expert but I can write
that type in Scala (the Haskell version probably has less boilerplate, but
still some):

    
    
        // a sketch assuming type-level naturals and less-or-equal evidence
        // (e.g. shapeless's Nat, _0, S[N] and LTEq)
        sealed trait Even[N <: Nat]
        object Even {
          implicit object zero extends Even[_0]
          implicit def evenSS[N <: Nat](implicit e: Even[N]): Even[S[S[N]]] =
            new Even[S[S[N]]] {}
        }
        sealed trait EvenAndBetweenTwoAndFourteenAndNotTen[N <: Nat]
        object EvenAndBetweenTwoAndFourteenAndNotTen {
          implicit def forN[N <: Nat](implicit e: Even[N], ge2: LTEq[_2, N], le14: LTEq[N, _14]): EvenAndBetweenTwoAndFourteenAndNotTen[N] =
            new EvenAndBetweenTwoAndFourteenAndNotTen[N] {}
          // a second, conflicting instance for _10 makes implicit resolution
          // ambiguous for 10, which excludes it from the set
          implicit object interfere10 extends EvenAndBetweenTwoAndFourteenAndNotTen[_10]
        }
    

Like I said, it's cumbersome, but there's nothing actually complicated about
writing this, it's just boilerplate to represent it in the type system.
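
For what it's worth, here is one way the same set might look in GHC Haskell
with type-level naturals. This is a sketch assuming GHC 8.4 or later (for the
type-level Mod family); Allowed, NotTen, InSet, and mkInSet are illustrative
names rather than library code.

    
    
        {-# LANGUAGE DataKinds, ConstraintKinds, TypeFamilies, TypeOperators,
                     UndecidableInstances #-}
        
        import Data.Kind  (Constraint)
        import Data.Proxy (Proxy (..))
        import GHC.TypeLits
        
        -- excludes exactly 10, with a custom compile-time error message
        type family NotTen (n :: Nat) :: Constraint where
          NotTen 10 = TypeError ('Text "10 is not in the set")
          NotTen n  = ()
        
        -- even, between 2 and 14, and not 10
        type Allowed (n :: Nat) = (Mod n 2 ~ 0, 2 <= n, n <= 14, NotTen n)
        
        -- a witness that n is in the set; it can only be built for allowed n
        data InSet (n :: Nat) = InSet
        
        mkInSet :: Allowed n => Proxy n -> InSet n
        mkInSet _ = InSet
        
        ok :: InSet 8
        ok = mkInSet Proxy
        
        -- rejected at compile time:
        -- bad :: InSet 10
        -- bad = mkInSet Proxy
    

mkInSet only compiles when the literal satisfies all four constraints, so
trying to build an InSet 10 (or an InSet 16) this way is a compile-time error.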

On the other hand, in Idris or the other languages mentioned in the article,
you should be able to write that kind of constraint at the type level just as
easily as you can at the value level. That's where it gets really exciting.

------
pohl
Regarding the referenced Strange Loop talk: I have yet to see a good talk done
in that style where two presenters oscillate back and forth, especially when
they're trying to elicit laughter. I did manage to enjoy the talk between the
cringeworthy moments, at least.

There is an interesting slide at around 21:00 where they show four quadrants
that represent:

What do you get when you pass a Term to a Term? (a function)

What do you get when you pass a Type to a Type? (templates, generics, type
variables)

What do you get when you pass a Term to a Type? (dependent types)

What do you get when you pass a Type to a Term? (inheritance, overloading)

It manages to make dependent types seem like an essential missing piece, if
only for the sake of symmetry. I'm not sure I understand the last one, though:
how is inheritance and/or overloading the act of passing a type to a term?

Does anybody know if this idea is explained better elsewhere?

~~~
bazzargh
That slide was a simplification of the Lambda Cube that they showed right at
the end. That's probably a good starting point for better references:
[http://en.wikipedia.org/wiki/Lambda_cube](http://en.wikipedia.org/wiki/Lambda_cube)
... the wiki page isn't particularly approachable, but there's more detail if
you follow the links.
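
To make the quadrants a bit more concrete, here is a tiny Haskell sketch of
three of the cube's corners under the usual reading; the fourth corner, types
depending on terms, is exactly what full dependent types add and has no direct
analogue in plain Haskell:

    
    
        {-# LANGUAGE ExplicitForAll #-}
        
        -- terms depending on terms: an ordinary function
        double :: Int -> Int
        double x = x + x
        
        -- terms depending on types: parametric polymorphism (the caller
        -- implicitly supplies a type argument)
        identity :: forall a. a -> a
        identity x = x
        
        -- types depending on types: a type constructor / type operator
        newtype Pair a = Pair (a, a)
        
        -- types depending on terms (dependent types): no direct analogue here;
        -- this is the corner that Idris and Agda add
    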

------
rbehrends
One problem with this talk (and many other talks about static typing) is that
it seems to take at face value the common claim that static typing is a net
benefit for program correctness because it prevents programs from compiling
that would otherwise create runtime errors (e.g., at around the 13-minute mark
and again after the 18-minute mark). [1]

This seems like simple common sense, but common-sense claims do not always hold
up under scientific scrutiny, and (as with many other claims about
software-engineering benefits that presumably derive from this or that language
feature) this one has rarely been tested.

This is not surprising: most computer science departments don't have the
infrastructure to do work that involves human test subjects (unlike, say,
psychology departments). Not only does an ethics review board have to sign off
on such tests (no matter how harmless they may seem), but creating meaningful
tests involving non-trivial programming tasks can be pretty involved (in a
talk, the author of the first paper I discuss below mentioned that one problem
with the study was that it had been very expensive).

But at least some such tests have been done, and the conclusions are not
necessarily what common sense would tell us.

Consider for example, the following series of papers:

Hanenberg, S. _An Experiment About Static and Dynamic Type Systems - Doubts
About the Positive Impact of Static Type Systems on Development Time._ In
Proceedings of OOPSLA/SPLASH 2010. ACM, Reno 2010. [2]

From the conclusion: "In this paper we presented results of an experiment that
explores the impact of static and dynamic type systems on the development of a
piece of software (a parser) with 49 subjects. We measured two different
points in the development: first, the development time until a minimal scanner
has been implemented, and second the quality of the resulting test cases
fulfilled by the parser. In none of these measured points the use of the static
type system turned out to have a significant positive impact. In the first
case, the use of the statically typed programming language had a significant
negative impact, in the latter one, no significant difference could be
measured."

Now, that is one data point that only relates to the initial development of a
piece of software, too (and as we'll see in a moment, it's not as though
dynamic typing is a free lunch, once we look at maintenance tasks), but it
does contradict common sense: The defect rate of the statically typed software
is not actually better (at least not in a statistically significant way).
Earlier studies had also shown that statically typed languages
(unsurprisingly) did have benefits over languages that performed neither
compile-time nor runtime type checks (such as ANSI C vs. K&R C w.r.t. function
arguments).

Now, a follow-up paper:

Kleinschmager, S.; Hanenberg, S.; Robbes, R.; Tanter, É. & Stefik, A. _Do
static type systems improve the maintainability of software systems? An
empirical study._ IEEE 20th International Conference on Program Comprehension,
ICPC 2012, Passau, Germany 2012, pp. 153-162. [3]

Three conclusions from this paper on the effect of static type systems on
software maintenance: (1) "Static type systems help humans use a new set of
classes." (2) "Static type systems make it easier for humans to fix type
errors." (3) "For fixing semantic errors, [...] no differences with respect to
human development times [was observed]."

The paper also references a number of other studies with related results that
may be interesting.

A second follow-up paper was the following:

Spiza, S. & Hanenberg, S. _Type Names without Static Type Checking already
Improve the Usability of APIs (As Long as the Type Names are Correct) - An
Empirical Study._ Proceedings of the ACM Conference on Aspect-Oriented
Software Development, AOSD '14, Lugano, Switzerland 2014. [4]

From the abstract: "This paper describes an experiment with 20 participants
that has been performed in order to check whether developers using an unknown
API already benefit (in terms of development time) from the pure syntactical
representation of type names without static type checking. The result of the
study is that developers do benefit from the type names in an API’s source
code. But already a single wrong type name has a measurable significant
negative impact on the development time in comparison to APIs without type
names."

What we're seeing is that while there do seem to be measurable benefits from
static typing, the actual nature of these benefits may not be what one may
expect on the basis of common sense alone (it's actually even more complicated
than I can convey in this brief summary).

[1] To be clear, the talk makes many other immensely sensible points, and I
don't think there's even enough material to conclusively say this one is right
or wrong; my point is that it's essentially unproven either way, and that
there is evidence that our intuition may be wrong here.

[2]
[https://courses.cs.washington.edu/courses/cse590n/10au/hanenberg-oopsla2010.pdf](https://courses.cs.washington.edu/courses/cse590n/10au/hanenberg-oopsla2010.pdf)

[3]
[http://pleiad.dcc.uchile.cl/papers/2012/kleinschmagerAl-icpc2012.pdf](http://pleiad.dcc.uchile.cl/papers/2012/kleinschmagerAl-icpc2012.pdf)

[4]
[http://dl.acm.org/citation.cfm?id=2577098](http://dl.acm.org/citation.cfm?id=2577098)

~~~
bkirwi
Just took a quick look at paper #2, which is quite fascinating -- to try and
isolate the effects of static / dynamic typing, they developed _two_ slightly
different programming languages which varied along only this axis.

I have some sympathy with this approach, but I'm not convinced you can
actually separate things in this way. The type systems I actually find
_useful_ are not just a static analysis layer, but have significant impacts on
the semantics of the language and the way libraries are designed. I'm not sure
you could just 'turn off' the type system in Haskell and have anything even
remotely sensible, for example.

I sometimes wonder if there are enough mature open-source projects in the wild
now to be able to draw some statistical conclusions about the long-term effects
of language choice, but that would be quite a project...

~~~
StefanKarpinski
> I'm not sure you could just 'turn off' the type system in Haskell and have
> anything even remotely sensible, for example.

You can actually do exactly this:

[https://ghc.haskell.org/trac/ghc/wiki/DeferErrorsToRuntime](https://ghc.haskell.org/trac/ghc/wiki/DeferErrorsToRuntime)

Of course, the type system doesn't vanish, but checking types is deferred until
runtime, turning Haskell into a dynamic language with an unusually elaborate
tag system.

~~~
jules
It's still not comparable to a dynamic language however. Simply constructing a
list [1, "Hello"] will already produce an error. Dynamic languages usually
allow this, and only signal errors when the values are used in an erroneous
manner, like 1 + "Hello". It does exactly what it says on the tin: it displays
type errors that were _found at compile time_ with the same type system that
regular Haskell uses, but the actual displaying is deferred until run time.
There is no actual tag checking going on at run-time.
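
As a small illustration, consider the module below (a sketch that assumes it is
compiled with GHC's -fdefer-type-errors flag, the mechanism behind the linked
page): the mismatch is found and reported against the list literal at compile
time, and the deferred error is only thrown at run time when the ill-typed
element is forced.

    
    
        {-# OPTIONS_GHC -fdefer-type-errors #-}
        module Demo where
        
        -- ill-typed: GHC still finds the error here at compile time,
        -- but with the flag on it is downgraded to a warning
        oops :: [Int]
        oops = [1, "Hello"]
        
        main :: IO ()
        main = print (oops !! 1)  -- the deferred type error is thrown here,
                                  -- when the ill-typed element is evaluated
    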

