Algebraic Data Types (esper.com)
157 points by ALee on July 31, 2014 | 58 comments



If you haven't seen any category theory, you might be interested to know that the diagrams the author draws are valuable and important in modern pure math as well. Those last two diagrams totally capture the idea of what a product or variant (coproduct, in math terminology) is.

In general, there are a lot of algebraic structures whose essence is captured by some diagrammatic property like these, such as tensor products or fiber products. They're called universal properties. Unfortunately the Wikipedia page looks pretty poorly written, but you might find it interesting nonetheless.

http://en.wikipedia.org/wiki/Universal_property


Nice article, I never considered the duality of product and sum types before.

The importance of effective ADT usage is generally under-emphasized when discussing the benefits of statically typed FP languages* like OCaml and Haskell — even though it is the way to maximize the utility of such languages, IMO.

With the right data representation, composability improves and algorithms fall into place naturally. This is further enhanced by a language like OCaml — since the compiler catches type errors and non-exhaustive matches on ADTs, any bugs are likely holes in your application model.
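For example, in a quick OCaml sketch (the shape type here is just made up for illustration), the compiler will complain if a match forgets a constructor:

    type shape =
      | Circle of float          (* radius *)
      | Rect of float * float    (* width, height *)

    (* Dropping the Rect case below would trigger OCaml's
       non-exhaustive pattern match warning at compile time. *)
    let area s =
      match s with
      | Circle r -> 3.14159 *. r *. r
      | Rect (w, h) -> w *. h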

There are many benefits in using ADTs, modules and other type-system features to guide your program design. Others have described those benefits very well already — just read anything by Yaron Minsky or Jane Street.

* Data modeling is critical for any problem domain, and statically typed, non-FP languages (Java, C++, etc.) certainly have facilities to encourage effective modeling, but languages in the vein of OCaml and Haskell really push to make the type system do as much work as possible.


I wrote a post emphasizing the duality of products and sums. It's a really deep subject!

http://tel.github.io/2014/07/23/types_of_data/


I loved reading this the other day. Thanks again for writing it. I'll be coming to this post often, I think.


I'm glad you enjoyed it! I'm working on one or two more posts in this vein in order to introduce quantified types and recursive types. So: more to come!


ADTs are a bit like garbage collection: they originated in functional languages, they provide huge productivity gains, and they are (in theory) very easy to bolt onto any kind of language. You can imagine a variant of C that has ADTs---they'd just be called tagged unions.


ADTs and pattern matching go together. Having tagged unions is awful without a good way to discriminate upon them.

Switch statements are OK, but not type-safe, since nothing prevents you from desynchronizing the tag and the data.


If you think of ADT definitions as defining how you construct types, then you get a natural method for destructing them, or using them, automatically. This could be pattern matching or it could be explicit functions.

More concretely, even without pattern matching, so long as, for instance, these three functions are all defined

    pair : a -> b -> Pair a b
    pi1  : Pair a b -> a
    pi2  : Pair a b -> b
You'll be equivalently expressive.
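For instance, a sketch of those three functions in OCaml, using the built-in tuple type as Pair:

    (* pair : 'a -> 'b -> 'a * 'b *)
    let pair a b = (a, b)

    (* pi1 : 'a * 'b -> 'a *)
    let pi1 (a, _) = a

    (* pi2 : 'a * 'b -> 'b *)
    let pi2 (_, b) = b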


C already has product types, so it's sums that are more interesting, and indeed the destructors for sums are harder in C. The constructors

    in1 : a -> Sum a b
    in2 : b -> Sum a b
are no problem, but the destructor

    sum : Sum a b -> (a -> c) -> (b -> c) -> c
needs first-class functions.
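In an ML-family language the whole thing fits in a few lines; here's a sketch in OCaml (the type and constructor names are made up for illustration):

    type ('a, 'b) sum = In1 of 'a | In2 of 'b

    (* The destructor takes its two branch handlers as ordinary
       first-class functions. *)
    let sum_elim s f g =
      match s with
      | In1 a -> f a
      | In2 b -> g b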


There's a difference between "having a destructor" and having a first-class representation of it. For instance, basically every language has a destructor for booleans—it's just if.

The trick is that unless you have first-class functions you probably cannot reify such a destructor. It always just lives in your syntax.

On the flip side, that's really what it means for a language to, say, lack sum types. It's not that the language doesn't have the ability to have choice, it's that it lacks the ability to represent choice in data. In other words, you probably can eliminate choice using if/switch/case/what-have-you but you can't construct it.


Right. I interpreted your previous comment as meaning that you don't need pattern matching because you can just use the (reified) destructor. Conversely, you can avoid the higher-order destructor using pattern matching.


Yep! They are "the same".


C already has sum types, just slightly sloppy ones (as unions aren't required to have a tag).


Enjoyed this article. Another good read on ADTs: http://tel.github.io/2014/07/23/types_of_data/, and for a Swift-specific take: http://tel.github.io/2014/07/26/types_of_data_in_swift/


thank you for the link!


BTW: Union types ("tagged unions") predate OOP by some years and thus were there long before OCaml or Haskell. Algol 68 had them, and so did Pascal, for example. The innovation of ML was pattern matching on those data structures.


FWIW, Peter Landin sowed the seeds in "The Mechanical Evaluation of Expressions" (1964) in his description of a recursive discriminated union for lists:

A list is either null or else has a head (h) and a tail (t) which is a list.

Rod Burstall picked up on this in "Proving Properties of Programs by Structural Induction" (1968), in an extension to Landin's ISWIM that used pattern matching on these. If you look closely, the pattern matching was present in his if-then-else syntax, but he changed it to the more convenient "case".


Pascal's "tagged unions" don't offer anything more than C's though. It's still unsafe and always requires as much memory as the combination of largest cases. OCaml / Haskell / others don't allow type changes at least. That would be useful even without the pattern matching.


I wrote recently about the option type specifically (called Maybe in Haskell): http://asivitz.com/posts/learn_to_love_maybe_kick_null

Swift has it, but it lets you easily subvert its meaning ('trust me, it's not null'), so I don't think it's as useful as it is in the usual functional languages.


I can never get enough of the arrow theory.

Here is a malformed question that occurs to me just now:

In a language like Python I can form instances of a product type using tuples, but making instances of a sum (variant) type ... ?

How is it that the duality is broken when I no longer have variable declarations? Or is every object instance already a variant (of every possible subclass) ?


In object-oriented languages, I think the closest analogue to sum types are type hierarchies. It's more verbose and cumbersome, but you can get the same effect through dynamic polymorphism. Consider the example from the post:

  type AST = Expr of ...
           | Stmt of ...
           | Comment of ...

  let rec evaluate e =
    match e with
      | Expr # evaluate the ... of an Expr
      | Stmt # ditto
      | Comment # ditto
In the object-oriented world, I think this most naturally maps to (forgive me for not using Python syntax; I love Python, but I rarely do much OO in it):

  class AST {
  public:
    virtual bool evaluate() = 0; // again, forgive the C++ syntax
  };

  class Expr: public AST {
  public:
    virtual bool evaluate()
    {
      // Expr specific evaluate code
    }
  };

  class Stmt: public AST {
  public:
    virtual bool evaluate()
    {
      // Ditto
    }
  };

  class Comment: public AST {
  public:
    virtual bool evaluate()
    {
      // Ditto
    }
  };
As seanmcdirmid points out, this is not the same, as the discrimination happens at runtime. But, I think it's the closest analogue. Also note that in the algebraic data-type world, we normalize on functions. That is, what to do in order to evaluate any given AST exists in the one evaluate function. But, in the OO world, we normalize on the types; how we evaluate a given AST is spread out over the many evaluate methods.


"As seanmcdirmid points out, this is not the same, as the discrimination happens at runtime."

Doesn't the discrimination of which branch of a sum type your data represents generally happen at runtime?


Yes. I was thinking there was an important distinction about knowing the types statically, and being able to make sure that all options have been handled. But, if in the interface you say "everyone must implement this method" (as I did), and then you don't, the compiler will issue an error in much the same way.

What I was getting at is that when we use the algebraic version of AST, we know exhaustively what forms it can take. The definition tells us. In the OO version, this is not true. If we restrict our use to the polymorphic functions, then this is fine. But if we try to inspect the OO version of AST, ask its true type, and then act on it, then we may get into trouble.


In a language like Python you can consider there to be one type called, say, ThePythonType, of which all types are a variant. Then all objects have type ThePythonType.

A more useful type system would have "union types". They differ from sum types in that you can construct a union type from existing types in an ad-hoc manner. For example, if you have a function that returns a string or nil you can construct a union type "Union of string or nil". To do this with a sum type you'd have to have a common super type of string and nil and no other types.

There are also intersection types, about which I know very little.


> To do this with a sum type you'd have to have a common super type of string and nil and no other types.

I'm not sure you understand sum types. Sum types subsume normal unions; sum types and unions are essentially the same thing and capturing this kind of return type in Haskell or ML is extremely easy. "Supertypes" are completely unrelated and subtyping is not actually needed in languages with sum types (for example Haskell has sum types but doesn't have any real form of subtyping).

For example in Haskell you'd use the `Maybe` type constructor. If your type returns a `Maybe String` that means it either returns a string or nothing.
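The OCaml analogue, as a small sketch, uses `string option` in the role of `Maybe String` (the `find_user` function below is just a made-up example):

    let find_user (id : int) : string option =
      if id = 1 then Some "alice" else None

    let greeting id =
      match find_user id with
      | Some name -> "hello, " ^ name
      | None -> "no such user"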


You can implement an (open) sum type with supertypes, though, can't you?


Sort of. You can effectively do so in a language like Java, which allows you to inspect the types of objects at runtime. For example if you have a class `Foo` with two subclasses `SubA` and `SubB` you could do something like this:

    public static void test(Foo f) {
        if (f instanceof SubA) {
            SubA a = (SubA)f;
            /* ... */
        } else if (f instanceof SubB) {
            SubB b = (SubB)f;
        }
    }
This works okay but you have to implement all the machinery yourself, and for a statically typed language Java is notoriously type-unsafe so if you make a mistake with your casts you're basically not going to find out until you get an exception at runtime. And of course there might be other existing subclasses of `Foo` which means that you get no compile-time guarantee that you didn't miss a possible case.

Standard ML approaches subtyping in a very different way that renders this kind of approach impossible. It gives you less freedom in that it eliminates downcasting, but that ultimately results in a safer language. Of course Standard ML has proper sum types with all of their compile-time guarantees so there's little reason for it anyway.


The way I was envisioning it, there needn't be any down-casting at the language level. Particularly if you want to close the sum type (which you need anyway if you want exhaustiveness checks):

   public static void test(MaybeFoo mf) {
      switch(mf.type) {
         case MaybeFoo.NOTHING:
             /* ... */
             break;
         case MaybeFoo.JUST:
             Foo f = mf.getFoo();

             /* ... */
             break;
      }
   }

Still has exhaustiveness checking and no downcasts. Still can throw if you use the wrong get* under one of those branches, so I'm certainly not saying it's ideal, but I think it legitimately expresses a sum type and can be implemented in most any language with small adjustments.


This is sort of a different way of saying the same thing: this is effectively no different from the standard tagged-union approach, with the exact same danger of calling the wrong `get` function and triggering a runtime error that you have in C et al. Don't get me wrong, it's certainly a valid representation of a sum type, but when you hear fans of Haskell and ML talking about how useful sum types are, it's generally because the compiler provides some support to ensure that they're safe and usable.


I think we mostly agree.

I would note that when I'm dealing with tagged unions in C, as soon as the operation gets at all complicated (more than maybe 3 lines?) I try to pass it off to another function operating on the internal value which restores safety outside the boilerplate. I agree that support for a true tagged union is ideal, though. (Actually, this might make a good attribute in GCC...)


I'd never thought about this before in the context of Java. IIRC, Java cannot have type-parameterized enums (i.e. generic enums), so a Maybe<T> isn't possible. I hope I'm wrong — it's been a while since I did much Java.


Well, you certainly can by expanding the parameterization manually like I did above. I think you're otherwise correct, though - I don't think Java will presently do it for you.


This works fine; it just suffers from boolean blindness.


Could you elaborate? I think I understand "boolean blindness" but I'm not sure how it applies here. MaybeFoo.type is deliberately an enum.

(I could totally be missing/misunderstanding something, though...)


Checking whether mf.type is NOTHING or JUST is your boolean. It's linked by provenance to whether or not getFoo() is a runtime exception. With a true ADT, getFoo would not be accessible unless mf.type == JUST, but in this example it's up to the programmer to enforce that provenance.

So that's the blindness. The pre/post conditions around how mf.type relates to its surroundings cannot be expressed first-class in the code. You are required to do risky things like call potentially partial code.


So it's specifically the separation of the check and the action? I totally agree this is undesirable; I understood "boolean blindness" to be closer to "bare primitive types lack meaning", specialized to Bool.


I always think of it as your data types being too unrestricted so as to allow for invalid states. The eliminator for Maybe

    (a -> r) -> r -> Maybe a -> r
means you only access the `a` if it's there and the access/matching are inextricably linked.
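In OCaml terms, a sketch of that eliminator over `option` (the name `option_elim` is made up) looks like this:

    (* option_elim : ('a -> 'r) -> 'r -> 'a option -> 'r
       The value can only be reached through the success handler,
       so the check and the use can't drift apart. *)
    let option_elim f default = function
      | Some a -> f a
      | None -> default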

So, I think "bare primitive types lack meaning" is basically correct given that you interpret "meaning" as "meaningful uses/actions".


Yeah, given the nature of the types involved here, I think the two concepts I was trying to distinguish are not as far apart as I'd thought.


> In a language like Python you can consider there to be one type called, say, ThePythonType, of which all types are a variant. Then all objects have type ThePythonType.

This is only if you reject the notion of a dynamic type (as most type theorists do, but programmers do not). Otherwise, Python has a rich type system that is checked at run-time. It lacks sum types, but we could easily imagine a dynamically checked realization.


A good way of getting at this is to call things at runtime "classes" or "tags" to make it apparent that their discrimination happens at runtime, not compile-time. The problem is that if you call both things types, you quickly run into a situation where you are essentially talking about different concepts entirely.

Note that most sum-types are implemented by having a runtime tag on which you can discriminate.


Classes exist statically also. Tags are meaningless outside of implementation discussions.


I don't think type theorists reject that at all. They just don't call them "types". Which I don't know why anyone would care in the context of a technical argument.

If an "average programmer" here and a type theorist spent 1 minute at the beginning of the conversation normalizing terms then nobody would have any argument here. I don't think anyone tries to claim that "types" and "static types" (or, in the type theorist's parlance, "tags" and "types") are the same... they just try to argue whether or not it's "right" to call one or the other by the name "type".


Most type theorists reject that dynamic typing is meaningful and not a misnomer, hence Noel Welsh's assertion that Python has only one type. It doesn't start the conversation off very well.


I don't think anyone argues that Python has more than one (static) type. It however has a dynamic type system. Significantly, static types and dynamic types do not behave the same way and their corresponding theories have somewhat minor overlap.

In particular, many things I can state about static types do not carry over to dynamic types, and vice versa.

I believe many conversations get derailed early because among static type researchers, static types are known merely as "types" and dynamic types as "tags" or "classes". There are meaningful ways to compare these two kinds of things but somehow or another they ought to have different words since confusing the concepts will inhibit understanding quite severely.


Did you mean that to be a restatement of what tel said? It seems to be.


All but the "if we just listened to each other, we'd understand each other." The type theorist's and the programmer's different perspectives mean, basically, that we'd fight over the meaning of "type" for our entire conversation.


It wasn't "if we listened", but "if we normalized terms". I agree that getting people to normalize terms with the aim of successfully communicating can be difficult...


"To do this with a sum type you'd have to have a common super type of string and nil and no other types."

This seems wrong. You can make a new type that simply contains one of those, right?

In a sense that's what you're doing with a sum type in Haskell, though there you get to produce new types automatically from higher kinded types.


"Arrow theory" here is better known as Category Theory. I highly, highly recommend learning more about it if you're interested. I cannot understate its beauty.

As a starter guide, I highly recommend reading Lawvere's Conceptual Mathematics.


If they say "cross product", don't they actually mean "cartesian product"?


Beware here! There are a couple of different products you can sensibly form on data. So you have to make it very explicit which you mean when you talk about them. You can't generally interchange them in all contexts, but in some contexts you definitely can.

In type theory, the product is a product in the sense of category theory, which is a very abstract notion of product indeed. The idea is that given objects (here, types) A and B, you can form their product A * B, and that is also a valid object in the category. The diagram then universally identifies A * B as the object you can project out of via the 1st and 2nd projections such that the given diagram commutes, etc. And it is important to stress that there are many products for which this property holds.
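Concretely, if we take the category whose objects are OCaml types and whose arrows are functions (ignoring effects and non-termination), the mediating arrow into the product is just pairing the results (a sketch):

    (* Given f : 'c -> 'a and g : 'c -> 'b, the mediating map
       <f, g> : 'c -> 'a * 'b is: *)
    let pairing f g = fun c -> (f c, g c)

    (* and the projection equations hold by definition:
       fst (pairing f g c) = f c   and   snd (pairing f g c) = g c *)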

(Edited a bit to appease the automatic formatter at HN)


Any two of those many products, however, are isomorphic, and hence, while still distinguishable, they are “essentially” the same.


If I'm reading the OP right, this is incorrect. Oftentimes "products" in various contexts are "categorical products cut down by some sensible quotienting". In any category the categorical product will always be the largest (this is essentially what its definition entails), and only by careful choice of category will that correspond to some of the less formal "products" available.


No, although the wording might have been misleading—I was referring to the second paragraph, while you seem to have applied it to the first. I hope this clarifies the situation a bit:

For any category, the product A * B of two objects A, B (if it exists) is unique only up to isomorphism (this follows from the universal mapping property).

In general, products A * B and C * D need not be isomorphic (and seldom are), and neither need products from different categories be. Many of the commonly-used products actually arise from a product in a suitable category; the Cartesian product, for example, is a product in the category of sets.


Yeah I think I just misunderstood your statement: I agree with this extension upon it completely.


To mathematicians, the two are kind of interchangeable. I suppose "cartesian product" is a bit more specific, as "cross product" is also used to mean the vector cross product, which is different.


Sum types with compiler-enforced exhaustive pattern matching feel like a massive win and are the main thing I'd love to import into traditional procedural programming. Switch statements and their ad hoc nature pale in comparison.


so the monad tutorials stopped being cool?



