Lambda Cube (wikipedia.org)
139 points by Cieplak 50 days ago | 53 comments




Reminds me a bit of the "Commutative diagram of harmonic wave" from wikipedia

https://commons.wikimedia.org/wiki/File:Commutative_diagram_...


Why is it that when I look at equations it's all just nonsense, but send me a gist example and all of a sudden it's the music of the universe?


Because the definitions (e.g. as seen in OP) describe how things may be synthesised, but not what one might want to synthesise, whereas a gist example gives you something to stare at while deciding whether you can understand how it fits together. If all one has is the definitions, one must build everything from scratch; with a preexisting example, one can start at the top level and build an understanding downwards without needing to plan ahead through all the gory details.

At first I wanted to finish this up dismissively by suggesting that maybe only the easier things end up in a gist, but having thought about it a tiny bit, I think there's a lot to gain from larger examples that won't fit in half a column in a paper.


Can anyone recommend an online course for learning about the lambda calculus and introductory type theory?

I've tried learning about them on my own using books, but I have trouble staying focused and motivated enough to get very far.


Language theory and lambda calculus classes were some of my favourite in college. I wonder how many serious bugs would be averted if programming languages came with more complete type systems.


I am currently checking in a fix due to a null reference exception. How I miss Option/Maybe when working in C#.
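For anyone unfamiliar with what the parent is missing: a rough sketch of the Option/Maybe pattern, in Python purely for illustration (the `Maybe` class and its method names are invented for this example):

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

class Maybe(Generic[T]):
    """Minimal Option/Maybe: the absent case must be handled explicitly,
    unlike a bare null reference which blows up when dereferenced."""
    def __init__(self, value: Optional[T], present: bool) -> None:
        self._value = value
        self._present = present

    @staticmethod
    def just(value: T) -> "Maybe[T]":
        return Maybe(value, True)

    @staticmethod
    def nothing() -> "Maybe[T]":
        return Maybe(None, False)

    def map(self, f: Callable[[T], U]) -> "Maybe[U]":
        # f is only applied when a value is present; no null check needed at call sites
        return Maybe.just(f(self._value)) if self._present else Maybe.nothing()

    def get_or_else(self, default: T) -> T:
        return self._value if self._present else default

print(Maybe.just(21).map(lambda x: x * 2).get_or_else(0))   # 42
print(Maybe.nothing().map(lambda x: x * 2).get_or_else(0))  # 0
```

The point is that "no value" becomes a normal value the type system sees, rather than an exception waiting to happen.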


Nullable reference types in C#8 will help.


Very true. I'm very excited about nullable reference types.


You might think that, but there is a major time and complexity cost to using more advanced type systems, which sometimes doesn't pay off for simpler applications that must ship fast!


This is a common meme, but I'd like to see some robust empirical evidence either way (the only studies I've seen so far are with e.g. a handful of undergraduates, implementing a toy problem, well-specified up-front, with no maintenance/feature-drift/etc.)

As a trivial counterargument, there's nothing preventing programmers from ignoring even the most advanced type system (see "stringly typed programming", for example). In fact, in my experience with "advanced type systems" (e.g. Agda, Idris and Coq), the types have never been a hindrance; it's their totality checking that can be painful (that's why Idris makes it easy to turn that off, either completely or for certain pieces of code).
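To spell out "stringly typed programming": even with a type checker available, nothing stops us from giving everything the same type, at which point the checker catches nothing. A toy Python sketch (the invoice functions are hypothetical):

```python
from dataclasses import dataclass

# "Stringly typed": every field is a string, so the checker can't object.
def send_invoice_stringly(customer_id: str, amount: str, currency: str) -> str:
    return customer_id + " owes " + amount + " " + currency

# Arguments accidentally swapped; no tool complains, all types match.
oops = send_invoice_stringly("9.99", "cust-42", "EUR")

# The same program with distinct types: the mix-up becomes a type error.
@dataclass(frozen=True)
class CustomerId:
    value: str

@dataclass(frozen=True)
class Money:
    amount: float
    currency: str

def send_invoice(customer: CustomerId, total: Money) -> str:
    return f"{customer.value} owes {total.amount} {total.currency}"

print(oops)  # 9.99 owes cust-42 EUR
print(send_invoice(CustomerId("cust-42"), Money(9.99, "EUR")))
```

The "advanced" features do nothing unless the programmer opts into encoding distinctions in types.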


> This is a common meme, but I'd like to see some robust empirical evidence either way (the only studies I've seen so far are with e.g. a handful of undergraduates, implementing a toy problem, well-specified up-front, with no maintenance/feature-drift/etc.)

> In fact, in my experience with "advanced type systems" (e.g. Agda, Idris and Coq), the types have never been a hindrance; it's their totality checking that can be painful (that's why Idris makes it easy to turn that off, either completely or for certain pieces of code).

A data point: if I'm reading this right, the seL4 formally verified kernel is 14,400 lines of code, which required 2.2 person-years to write and 20 person-years to verify (proofs are done with Isabelle, so it's not using types for proofs like e.g. Coq does, but you can't escape a similar verification effort):

https://www.sigops.org/s/conferences/sosp/2009/papers/klein-...

"The abstract spec took about 4 person months (pm) to develop. About 2 person years (py) went into the Haskell prototype (over all project phases), including design, documentation, coding, and testing. The executable spec only required setting up the translator; this took 3 pm. The initial C translation was done in 3 weeks, in total the C implementation took about 2 pm, for a total cost of 2.2 py including the Haskell effort."

"The cost of the proof is higher, in total about 20 py. This includes significant research and about 9 py invested in formal language frameworks, proof tools, proof automation, theorem prover extensions and libraries. The total effort for the seL4-specific proof was 11 py"

Why do you think function termination checking is a big factor? I find the majority of loops in apps are of the form "for everything in this collection" or a similar restriction that is simple to check.

Either way, if you start pushing complicated program properties into your types, you're very likely to have complicated proofs to solve by hand. What properties were you proving with types where you say they weren't getting in the way?


Thanks for the info about seL4, I hadn't seen those effort estimates before.

However, I don't think it's relevant to the point I was making. You seem to be talking about formally proving things. If you look through my comment that you're replying to, you'll see that I don't talk about proving anything, let alone "complicated program properties".

I was talking about writing programs in languages with 'advanced type systems', as opposed to dynamically typed languages; and that I've not seen compelling evidence to back up common claims like "dynamic languages are faster to program in".

You've given compelling evidence that "formally verifying complex properties of large systems is slow and hard", which I think many would already agree with. That doesn't seem relevant to the typed/untyped programming comparison, since formal verification is still a very uncommon thing to do (ignoring language-provided guarantees like memory safety or type safety).

> proofs are done with Isabelle so it's not using types for proofs like e.g. Coq does but you can't escape a similar verification effort

I don't understand what you mean by "you can't escape a similar verification effort". Not only was I not talking about proving anything, I even stated that the type system can often be ignored and gave "stringly typed programming" as an example (where a program is strongly typed, but almost everything has type `String`, which gives us something quite close to a dynamically typed language; including no guarantees that our variables even have meaningful contents, let alone for any "complicated program properties"!).

> Why do you think function termination checking is a big factor? I find the majority of loops in apps are of the form "for everything in this collection" or a similar restriction that is simple to check.

I tend to write in a functional programming style, which makes heavy use of recursion.

Termination checking (AFAIK) doesn't have the same sort of powerful, composable nature that type checking has, so languages like Coq and Agda (which require termination checking) are very conservative in whether to allow certain forms of recursion or not. It's easy to do "structural recursion", where recursive calls are run on a sub-expression of the argument, e.g.

    odd Zero     = False
    odd (Succ n) = not (odd n)
It's harder to do non-structural recursion, where the recursive call isn't acting directly on part of the argument, e.g. the greatest common divisor algorithm:

    gcd a 0 = a
    gcd a b = gcd b (mod a b)
This requires ad-hoc proofs to convince the system that the arguments are getting "smaller" by some measure (in this case the sum `a + b` always gets smaller). These can get very hairy, and don't really compose. In this case, for the system to accept the above definition of `gcd` we need to go off and prove theorems about `mod` (i.e. that `b > 0 -> mod a b < b`).

The hardest cases I tend to run into are co-termination, which Coq is especially bad at, both in practice (co-recursive calls must be manually unwrapped, and there doesn't seem to be any way to abstract over the raw `cofix` form) and in theory ( https://github.com/coq/coq/issues/5288 https://github.com/coq/coq/issues/6768 ). Agda's co-patterns are nicer to work with, but still require weird type-level hacks (like the "sharp" and "flat" operators, "sized types", etc.)

When languages require termination, these sorts of overly-conservative restrictions are inevitable, and we have to deal with them regardless of whether we actually care about termination (or type soundness). That's why I like programming in languages like Idris, where we can selectively ignore termination, and hence the restrictions on what forms we're allowed to write.

> What properties were you proving with types where you say they weren't getting in the way?

I wasn't talking about proving anything, using types or otherwise. I was specifically talking about not proving things, despite the type system allowing us to do so if we want to (which, if we're happy to consider dynamic types, we probably don't want to).

However, when I do happen to be using types to prove things (which, again, is absolutely not required at all, whether or not we're using an 'advanced type system') I find it hard to say that types are "getting in the way", since they're the system of logic for the proofs. We could get such types 'out of the way', but that would also throw away the point of the endeavour. After all, we can 'prove' all sorts of things if we don't allow logic to get in the way ;)


> I was talking about writing programs in languages with 'advanced type systems', as opposed to dynamically typed languages; and that I've not seen compelling evidence to back up common claims like "dynamic languages are faster to program in".

Well, I strongly agree with that if you meant strongly, statically typed programming languages. I think they're a good sweet spot and I'd pick them over dynamic languages even for small projects.

The reason I went down the route of software verification is you said "In fact, my experience with "advanced type systems" (e.g. Agda, Idris and Coq)". Wouldn't OCaml or Haskell have been better examples? Why use Agda, Idris and Coq at all if you're not going to prove advanced program properties?

I would say Coq is used solely for software verification with the help of dependent types, while Agda and Idris are an attempt to make software verification with dependent types more palatable. I'm not sure why you'd use them over OCaml or Haskell if you weren't going to use types that were more advanced.

> It's harder to do non-structural recursion, where the recursive call isn't acting directly on part of the argument, e.g. the greatest common divisor algorithm:

How often are you really doing non-structural recursion though? My point was that it's exceptionally rare to write complex loops like a GCD algorithm. Either way, I agree an escape hatch for proving properties can be a practical solution.


> Wouldn't OCaml or Haskell have been better examples? Why use Agda, Idris and Coq at all if you're not going to prove advanced program properties?

Why not? Just because they have advanced type systems doesn't mean that we need to make use of those features; the same way writing a Haskell program doesn't require us to use closed type families, multi-parameter type classes with functional dependencies, etc. or OCaml doesn't require everything to be an applicative functor over a row-polymorphic existential, etc.

As an aside, when I am doing fancy things with types, I personally find dependent types much simpler to think about and work with than the tower of cards that Haskell and GHC have been building via language extensions. The downside is limited type inference :(

In any case I was trying to highlight the common implicit assumption that programs written in languages with feature X (like powerful types) must therefore use feature X. There's no reason for that to be the case: programs in Haskell and OCaml can be stringly typed, so can those in Coq and Idris. Yet this clouds comparisons, where on one side we might have a formally verified microkernel, and on the other we might have a pile of bash string interpolations.

I will readily admit that Coq would be a pretty horrible choice for writing regular software, but mostly due to ergonomics like having multiple nested languages (vernacular, gallina and ltac) and having to "extract" programs to run them. On the other hand I think Idris is really nice, and would enjoy using it for projects which don't rely on some existing library infrastructure.

> How often are you really doing non-structural recursion though? My point was that it's exceptionally rare to write complex loops like a GCD algorithm.

It's definitely a small fraction of the number of `map` or `fold` calls I write, but it's not insignificant. I worked on a project recently that made heavy use of sets, so there was a lot of sorting going on which IIRC wasn't structurally recursive; it also calculated least common multiples (similar to greatest common divisor) as part of a random sampling function. That project was in Racket though :P

> Either way, I agree an escape hatch for proving properties can be a practical solution.

In Coq I often just stick on a decreasing "fuel" parameter, turn the result into an `option` and bail out with `none` if the fuel runs out :P
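That fuel trick can be sketched in any language. A rough Python version of a fuel-limited `gcd` (function and parameter names are my own):

```python
from typing import Optional

def gcd_fuel(a: int, b: int, fuel: int) -> Optional[int]:
    """gcd with an explicit, strictly decreasing 'fuel' argument.

    The recursion is now structural in `fuel`, so a termination checker
    accepts it without any proof about `mod`; the price is an optional
    result, returning None when the fuel runs out."""
    if fuel == 0:
        return None          # out of fuel: bail out with 'nothing'
    if b == 0:
        return a
    return gcd_fuel(b, a % b, fuel - 1)

print(gcd_fuel(48, 18, 100))  # 6
print(gcd_fuel(48, 18, 1))    # None: not enough fuel to finish
```

The caller then has to decide what a `None` means, which is the "contained" form of the termination obligation.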


> Why not? Just because they have advanced type systems doesn't mean that we need to make use of those features; the same way writing a Haskell program doesn't require us to use closed type families, multi-parameter type classes with functional dependencies, etc. or OCaml doesn't require everything to be an applicative functor over a row-polymorphic existential, etc.

What specific program properties have you tried proving that have required hand proofs you don't think are that difficult? The core issue is that once you try to capture a program property in one part of your program, you have to start capturing it everywhere and it's very difficult to contain the complexity. Simple looking properties can require very deep knowledge and expertise of writing proofs.

OCaml etc. are only likely to add features that give a good amount of power for a small amount of expertise, whereas Coq, Isabelle etc. don't need to hold back on the level of expertise expected from users (e.g. many of their users are researchers publishing journal papers, so that expertise can be assumed). Dependent types give a massive amount of power for a massive rise in the expertise expected of the user.


> What specific program properties have you tried proving that have required hand proofs you don't think are that difficult?

I don't do much proving by hand; I tend to sketch out possible encodings by hand, but work on actual proofs on a computer. Incidentally I find that such exploratory work is where CoqIDE, ProofGeneral, etc. shine; as opposed to Agda or Idris which seem to work best for "writing up" and "filling in the holes" of proofs we're already confident in.

Most non-difficult things I've formally proven have been toy examples. I did enjoy formalising a particular AI approach, which was pretty simple to prove properties about once I figured out a nice encoding https://github.com/warbo/powerplay (for theorems see https://github.com/Warbo/powerplay/blob/master/SimpleTests.v ). Though that's more of a toy theory, rather than a huge programming effort like seL4, CompCert, etc.

> The core issue is that once you try to capture a program property in one part of your program, you have to start capturing it everywhere and it's very difficult to contain the complexity. Simple looking properties can require very deep knowledge and expertise of writing proofs.

Very true. There are essentially two ways to do it:

- Use standard encodings (e.g. lists and peano naturals), then build up a mountain of lemmas about them, and plug those together in different combinations each time we want to prove a property (e.g. the elements are sorted).

- Use "correct by construction" encodings (e.g. a list of differences) which guarantee the properties we want, then build up a mountain of helper functions which act on those encodings and preserve the properties (e.g. `mapDiffs`, `foldDiffs`, `filterDiffs`, etc.)
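A tiny Python sketch of the second approach, where the only ways to build or extend the structure preserve the invariant (the `SortedList` class is invented for illustration):

```python
import bisect
from typing import Iterable, List

class SortedList:
    """'Correct by construction': every constructor and mutator preserves
    sortedness, so clients never need a separate sortedness proof."""
    def __init__(self, items: Iterable[int] = ()) -> None:
        self._items: List[int] = sorted(items)  # establish the invariant

    def insert(self, x: int) -> None:
        bisect.insort(self._items, x)           # maintain the invariant

    def items(self) -> List[int]:
        return list(self._items)                # copy: callers can't break it

s = SortedList([3, 1])
s.insert(2)
print(s.items())  # [1, 2, 3]
```

The "mountain of helper functions" in the parent's description is everything like `insert` here: each operation must be written (and, in a proof assistant, proven) to preserve the encoding's guarantee.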

I've seen a couple of potential improvements for this proposed; though I'm not sure whether they scale:

- Adam Chlipala likes to build up a custom proof tactic (in Certified Programming with Dependent Types this is called `crush`). Whenever that tactic doesn't work, he proves the statement manually, then alters the tactic's search algorithm so it can find that proof, then replaces the manual proof by a call to the improved tactic.

- Conor McBride proposed "ornaments" as a way to "add on" property-guaranteeing information to standard encodings, e.g. treating normal lists as lists-of-differences to preserve sortedness.

> OCaml etc. are only likely to add features that give a good amount of power for a small amount of expertise, whereas Coq, Isabelle etc. don't need to hold back on the level of expertise expected from users (e.g. many of their users are researchers publishing journal papers, so that expertise can be assumed). Dependent types give a massive amount of power for a massive rise in the expertise expected of the user.

I don't think dependent types themselves are particularly difficult to grok, although they can certainly be used for all sorts of elaborate shenanigans. I don't think that's any different from programming languages themselves though, e.g. some languages like Basic and Scheme are routinely taught to children, despite there presumably being some monstrously fiendish projects out there which use those languages. I've also encountered some bewilderingly complex spreadsheets in my time, but I wouldn't say that spreadsheets themselves require difficult expertise.

I certainly find the few, simple rules of dependent types much easier to understand than the mess that is type level programming in Haskell (data kinds, singletons, associated types, open/closed type families, etc.). I still don't fully understand that stuff, but I do know that:

- Data kinds in Haskell are just normal values in dependently typed languages

- (Closed) type families in Haskell are just normal functions in dependently typed languages

- Type classes and instances in Haskell are just normal record types and values in dependently typed languages

- Constraints in Haskell are just function arguments in dependently typed languages (this also generalises constraint solving to implicit arguments)

- Associated types in Haskell are just normal record fields in dependently typed languages

Most of these features are just a "shadow language" ( https://gbracha.blogspot.com/2014/09/a-domain-of-shadows.htm... ), which are completely unnecessary when types are first-class, and especially when dependent types are allowed.
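The "type classes are just records, constraints are just function arguments" point can be sketched even in Python, by passing the 'instance' around explicitly (the `Monoid` record and `mconcat` are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable, List

# A 'type class' as a plain record of operations...
@dataclass(frozen=True)
class Monoid:
    empty: object
    combine: Callable[[object, object], object]

# ...and a 'constraint' as an ordinary explicit argument, rather than
# something a dedicated constraint solver fills in behind the scenes.
def mconcat(instance: Monoid, xs: List[object]) -> object:
    result = instance.empty
    for x in xs:
        result = instance.combine(result, x)
    return result

sum_monoid = Monoid(empty=0, combine=lambda a, b: a + b)
string_monoid = Monoid(empty="", combine=lambda a, b: a + b)

print(mconcat(sum_monoid, [1, 2, 3]))      # 6
print(mconcat(string_monoid, ["a", "b"]))  # ab
```

In a dependently typed language the same move works with full static checking; Haskell's instance resolution is, in this view, just automation for picking which record to pass.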

Again, just because something is available doesn't mean that we need to use it. The only pressure to do so comes from existing libraries. As another example, many Haskell libraries, including the Prelude (built-ins) and "do notation", make use of monads (via the `Monad` type class). Yet we're free to ignore all of that and never touch monads at all; we could spend our entire programming career in the language, blissfully unaware that such a thing even exists.

The language does require that we define a `main` value as our program's entry point (just like Java and C), and that value has a type like `IO ()`; hence we're forced to use `IO`. Yet we can use `IO` without ever knowing or caring that it just-so-happens to implement the `Monad` type class: we can do everything we want via functions like `runIO :: IO (IO a) -> IO a` instead of `join :: (Monad m) => m (m a) -> m a`, and `sequenceIO :: [IO a] -> IO [a]` instead of `sequence :: (Monad m) => [m a] -> m [a]`, etc., in the same way that many programmers spend their entire careers making heavy use of lists (perhaps even using a LISt Processing language!) without having to know or care that `List` just-so-happens to be a monad.


> This is a common meme, but I'd like to see some robust empirical evidence either way

I wouldn't call that datapoint robust or empirical, but Scala developers have been complaining about the slowness of their builds for more than ten years now, and it's pretty much accepted that the Scala compiler will never get more than marginally faster than it is today. The Scala 3 compiler is not looking much better either.


What does that have to do with type systems? OCaml is one of the fastest compilers around, and yet it has a very rich type system. Julia also has a very rich type system, but is interpreted/JITed. Many compilers are very slow, even though their typecheckers barely do anything.

The speed of the compiler is related to the design of both the language and the compiler as a whole, not the presence or absence of a decent type system. I would even argue that one mark of a well-designed type system is that it's conceived in a way that admits fast typechecking.


Scala's builds from fresh are slow, but this is never a hindrance during development; you just leave `sbt ~compile` running in a terminal, which gives you constant feedback. I find in this case the type system of Scala allows you to quickly reach a sensible solution using the compiler and types to guide you. This allows faster shipping, not slower.


That sounds like the compiler vs interpreter flamewar, rather than the strong vs weak or static vs dynamic typing flamewars ;) Is there a Scala interpreter to compare performance against?

I've not used Scala much, but I know that while Haskell's de facto implementation is the Glasgow Haskell Compiler, there are also (unmaintained) interpreters like HUGS, so compilation isn't a necessity.

Plus there's the whole spectrum of Futamura projections, JITs, etc. ;)


My thought experiment is this: take a simple, single-page app that takes user information, runs some calculation, and returns the results. Add login/SSO/whatever third-party integration. It will be faster to bring value to your customers using a dynamic language (Ruby/Python/PHP) than using Haskell/Scala. I don't think you really see the benefits of Hindley-Milner type inference unless you are building complex services (think: compilers, real-time allocation services) that require multiple layers of domain-specific abstraction. The advantages of using strongly typed languages for a simple SaaS often don't hold, and there are very few, if any, successful SaaS start-ups that have taken this route. I just think not moving fast enough will kill your start-up before you ever get to the point of having to worry about 'tech debt'.

Note: This could also be true for programming tooling/environmental reasons, and strongly typed languages do help smaller teams handle larger codebases/reduce tests!


Again, I haven't seen robust evidence either way about development speed, and any such anecdata will have an equivalent from the other side.

One clear failure of dynamic types for Web programming is injection attacks, which are trivially solved by treating HTML, SQL, etc. as separate types (e.g. http://blog.moertel.com/posts/2006-10-18-a-type-based-soluti... )
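A rough sketch of that type-based approach, in Python rather than the Haskell of the linked post (the `Html`/`escape`/`tag` names are my own):

```python
import html
from dataclasses import dataclass

@dataclass(frozen=True)
class Html:
    """A distinct type for already-escaped markup. Raw user strings must
    pass through `escape` to become Html, so unescaped input can never be
    spliced into a page by accident."""
    content: str

def escape(raw: str) -> Html:
    return Html(html.escape(raw))  # the only door from str into Html

def tag(name: str, body: Html) -> Html:
    # Combining Html values stays within Html; no re-escaping, no injection.
    return Html(f"<{name}>{body.content}</{name}>")

user_input = "<script>alert('xss')</script>"
page = tag("p", escape(user_input))
print(page.content)
```

With a static checker, passing a bare `str` where `Html` is expected is rejected at compile time; the same pattern works for SQL query fragments.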


> It will be faster to bring value to your customers using a dynamic language (Ruby/Python/PHP) than using Haskell/Scala.

If we are talking about a first iteration/MVP then I can agree with you. On the other hand, once you have to maintain/extend/refactor this first iteration, you'll start seeing the (imo) enormous benefits of static typing.


Re: Diminishing Returns of Static Typing: https://blog.merovius.de/2017/09/12/diminishing-returns-of-s...


I don't think it's type checking that is bad per se, but classes specifically.


Can you say, with any objectivity, when this trade-off ought to go one way or the other?


But the question is why "must" it ship fast? That seems to be a poor, irrational agreement we've made as a tech society, for no better reason than that people are impatient.


I don’t think the argument is centered around impatience.

There is a (perhaps wrong) belief that being first to market greatly increases your chances of success if not outright dominance in an area.

People are unconcerned with the possibility that if successful they will be left with a mess to clean up and a system that is hard to evolve. They believe those are easy problems to fix.


Yeah, it's at least my thinking that having a tech-debt problem is better than a failed company that results from not moving fast enough, with constrained funding, to either find product-market fit or fully realize the product concept.


In my case it's because the paying customers only find any value in the product if it helps them remain compliant with controls that go into and out of effect on specified dates.

But it's mostly statically typed already anyway.


"Each dimension of the cube corresponds to a new way of making objects depend on other objects"

Is it normal to refer to data structures as objects? To my knowledge, most typed languages abhor the awesome potential of object-oriented programming, like CLOS.


In this context I think it refers to mathematical objects, since the dependencies can be between either values or types in either direction (values to values, types to types, types to values, values to types).

Also, lexical closures in a functional language are "classes" from the OO setting, so it is natural to implement classes if you want them; see e.g. OCaml. What you rarely see in the functional setting are common OO design patterns, because many of those patterns are unnecessary or awkward in functional languages.


I don't think you meant to imply this, but for anyone else - please note that the reverse can also be said - closures from an FP setting are objects from an OO setting, and it is natural to implement closures if you want them; see e.g. C++ "functors".

That is, objects and closures can both be implemented in terms of the other, if your language doesn't happen to offer both.


Lots of discussion and links on this at http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent



I think Scala does a good job of unifying OO and FP. And because everything is first class, you get to eliminate all the ridiculous design patterns and end up with a very nice FP system based upon objects and traits or typeclasses.


Thank you, I never knew how ignorant I was about that.


Could you explain what you mean by closures being classes? My only familiarity is from Emacs/Common Lisp and Clojure.


Define a set of functions that close over their lexical environment, then treat the closed-over variables as internal/private state, and treat those functions as you would methods in OOP (reading/mutating the aforementioned variables).
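For example, in Python (a hypothetical counter "object" built from closures):

```python
from typing import Callable, Tuple

def make_counter(start: int) -> Tuple[Callable[[], int], Callable[[], None]]:
    """A closure playing the role of an object: `count` is private state,
    and the two returned functions are its 'methods'."""
    count = start

    def value() -> int:
        return count

    def increment() -> None:
        nonlocal count  # mutate the closed-over variable
        count += 1

    return value, increment

value, increment = make_counter(10)
increment()
increment()
print(value())  # 12
```

There's no way to reach `count` except through the returned functions, which is exactly the encapsulation an object provides.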


That's basically the same level of OOP as how one can implement numbers and numeric operations as functions.


Classes are a combination of member variables, methods, and method dispatch (which is important for inheritance). So let's emulate that combination with a closure in Lisp:

  (defun constructor (x y)
    (list (lambda (a) (+ x a)) (lambda (a) (* y a))))
This is essentially a "v-table" as you would have in C++. Inheritance would be exactly what you expect: replacing v-table methods in child class constructors. Lisp is untyped but if you wanted to do this in a typed language then a class type would just be the types of the methods that can be applied and what index in the v-table each method type has, similar to duck typing (except that the methods are "named" by v-table index). Note that members are private, and can only be accessed if the interface (v-table type) allows it, and that in a pure implementation you would have immutable objects so "setters" would have to recursively call the constructor.


It's type theory, so the `objects' in question are systems of terms and/or types. See where it says:

    terms allowed to depend on types, corresponding to polymorphism.
    types depending on terms, corresponding to dependent types.
    types depending on types, corresponding to type operators.
So I think they're using the word `objects' either loosely (as in, they could have said `things' but maybe that's too informal?) or as shorthand for `type-theoretic objects', in which case they're talking about languages of a certain constrained expressive power: the language which features polymorphism, the language which features dependent types, the language which features type operators.
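For what it's worth, two of those three dependency directions can be illustrated even with Python's type hints (types depending on terms, i.e. dependent types, have no Python analogue; the `Stack` example is invented):

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

# Terms depending on types (polymorphism): one definition, any element type.
def first(xs: List[T]) -> T:
    return xs[0]

# Types depending on types (a type operator): Stack maps a type T
# to a new type Stack[T].
class Stack(Generic[T]):
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, x: T) -> None:
        self._items.append(x)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)
s.push(2)
print(s.pop())          # 2
print(first(["a", "b"]))  # a
```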

`object' here refers neither to the objects of object-oriented programming nor to object in common lisp object system.

edit: IANATT


I think this is the mathematical/logical distinction between "object level" and "meta level".

The meta level is about lambda terms, involving things like variable names, abstractions, judgements, derivations, equivalences, reductions, substitutions, etc.

The object level is what lambda terms are about, involving things like values, types, variable contents, etc.


What would be a good text book about language theory, type systems etc.?


I personally hope that I will eventually find the time to read Practical Foundations for Programming Languages by Robert Harper

https://www.cs.cmu.edu/~rwh/pfpl/index.html

direct link to a preview of the 2nd edition:

https://www.cs.cmu.edu/~rwh/pfpl/2nded.pdf

direct link to an unabridged version of the 1st edition:

http://profs.sci.univr.it/~merro/files/harper.pdf



Since TAPL has been mentioned already: Concepts, Techniques, and Models of Computer Programming [0] (CTM) is a great book on language theory.

The whole book is basically taking a core minimal language (Oz) and extending it in various ways to obtain different paradigms (functional programming, declarative concurrency, message passing concurrency, imperative/stateful programming, OOP, shared state concurrency, constraint programming, logical programming etc).

It's very fun and illuminating.

It also includes a part on formalizing semantics, of which I recall nothing.

[0] https://www.info.ucl.ac.be/~pvr/book.html


Software Foundations seems to be really good.

https://softwarefoundations.cis.upenn.edu/

And it's free.


Slightly disappointing. I wanted to learn about four simultaneous 24-hour days within a single rotation of the Earth.




I know about Gene Ray. But I can't understand the actual lambda cube.



