The Expression Problem and its solutions (thegreenplace.net)
73 points by ingve on May 12, 2016 | 49 comments



I consider the expression problem pretty much solved. There are two composable solutions: (1) multi-methods (described in this article), suitable mainly for dynamic languages (Lisp, Clojure, Julia), and (2) type classes/extension methods, suitable for static languages (Haskell, C#, Scala, Rust, Go (?)).
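For concreteness, here is a minimal Haskell sketch of the type-class flavour (the names Constant, Plus, Eval and ToString are just illustrative): each variant is its own type, each operation is a class, and both new variants and new operations can be added without editing existing code.

    -- Existing variant and operation (think: shipped in a library).
    data Constant = Constant Double
    class Eval e where
      eval :: e -> Double
    instance Eval Constant where
      eval (Constant x) = x

    -- A new variant, added without touching the code above.
    data Plus a b = Plus a b
    instance (Eval a, Eval b) => Eval (Plus a b) where
      eval (Plus l r) = eval l + eval r

    -- A new operation over old and new variants, also added separately.
    class ToString e where
      toString :: e -> String
    instance ToString Constant where
      toString (Constant x) = show x
    instance (ToString a, ToString b) => ToString (Plus a b) where
      toString (Plus l r) = toString l ++ " + " ++ toString r

Usage is just eval (Plus (Constant 1) (Constant 2)), with the new variant and the new operation each living in their own module.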

With expressivity solved, the only remaining problem seems to be performance; single-inheritance dispatch just seems much more efficient than multi-methods (time-wise) and quite a bit more efficient than type classes (space-wise and time-wise).


Rust's implementation of "type classes" (bounded generics) guarantees monomorphization and therefore static dispatch, so there's zero time overhead to using them. But they're also less expressive than type classes in other languages.


If you guarantee static dispatch, then you haven't solved the expression problem, which requires some sort of dynamic dispatch (even a switch/pattern match). The only way to address the expression problem and provide zero time overhead in practice (though it can't be guaranteed) is with a JIT/profile-guided optimization.

This, BTW, is why I find Rust's use of the term "zero-cost abstractions" confusing. Where Rust guarantees zero-cost it does so for weak abstractions only, and where it provides stronger abstractions, their cost may be higher than in other languages (because Rust's compiler is still behind on optimizations). It's a little like a restaurant advertising "free food", when they're really only offering free popcorn at the bar, while their steaks are rather pricey. It may be a great offering, but the copy is a little confusing.


Hey, I'm not the one claiming that Rust solves the expression problem. :)

  > where it provides stronger abstractions, their cost may 
  > be higher than in other languages
What is this referring to? I can't think of anything in Rust that matches this description, unless you're referring to something like e.g. devirtualization via JIT compilation. Rust's abstractions are actually supremely impressive IMO; just yesterday I was marvelling at how `(0..x).fold(0, |sum, i| sum + i)` (which involves an external iterator, a generic function, and a closure) gets compiled down to the closed-form `((x - 1) * (x - 2) / 2) + x - 1` (which admittedly slightly misses Gauss' famous `(x * (x + 1)) / 2`, but that might just be nitpicking :P ).


> unless you're referring to something like e.g. devirtualization via JIT compilation.

Yep.

> I was marvelling at how `(0..x).fold(0, |sum, i| sum + i)` (which involves an external iterator, a generic function, and a closure) gets compiled down to the closed-form `((x - 1) * (x - 2) / 2) + x - 1`

That is very cool, but once iterators are turned into loops (perhaps they're even treated specially) and the closure inlined (I would expect both from a decent compiler), it is a rather straightforward optimization (a loop with a constant step). OTOH, a JIT can do this: https://twitter.com/ChrisGSeaton/status/619885182104043520


  > Yep.
Unlike some other languages, the vast majority of calls in Rust are static rather than dynamic (even more so than C++), which averts the problem from the outset. And even then, LLVM is still capable of AOT devirtualization. It's very rare to reach for a trait object rather than a bounded generic, so I'm afraid that categorizing bounded generics as a "weak abstraction" is hyperbolic on your part. :)

  > a JIT can do this
Constant folding is hardly the exclusive purview of JIT. :P When coming up with the above example I had to repeatedly jump through hoops in order to keep LLVM from statically evaluating the entire function; even when I managed to force LLVM to stop inlining the function, it went behind my back and propagated the constant into the uninlined function, creating a symbol that did nothing other than return the integer 5050. :P


> I'm afraid that categorizing bounded generics as a "weak abstraction" is hyperbolic on your part.

Yeah, probably.

> Constant folding is hardly the exclusive purview of JIT.

Oh, sure, but in that example, an AOT compiler would have a hard time doing that (certainly without special rules).


Rust provides dynamic dispatch through trait objects: https://doc.rust-lang.org/book/trait-objects.html It implements it with a simple vtable, like other languages, with the same efficiency. And PGO is possible via LLVM, though not easily for now: https://github.com/Geal/pgo-rust


Does Rust at least make the stronger abstractions as low-cost as they can be with no JIT compiler?


Oh, it certainly could (I don't know whether it currently does, simply due to compiler maturity), but my "complaint" isn't about Rust's abilities but its use of the term zero cost abstractions.


The phrase was taken from C++, and refers to the implementation of said abstractions. If you built them yourself, you couldn't have built them better. This still presumes things like an AOT model, but that's what it's meant to call back to, not literally everything in every situation ever.


I looked into type classes in Haskell, but my understanding was that they provide static, not dynamic, dispatch. Is that wrong?

FWIW I've also been wondering about Go. On one hand in Go you don't have to pre-declare which interfaces your type is implementing. OTOH you can't just extend an existing type from a different package with new methods, AFAIU


Depends on what you mean by "static/dynamic dispatch". Strictly speaking, the only example of static dispatch is when the called function is known at compile time. Dynamic dispatch allows selection of the function being called at runtime: either via inheritance (the called method depends on the exact subclass of the object), or via typeclasses (the called method depends on the typeclass dictionary passed along with the values - e.g. the addition typeclass will be different for a list of ints and a list of matrices).
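To make the typeclass case concrete, here is a rough Haskell sketch (hypothetical names, and only an approximation of the dictionary passing a compiler actually does): the constraint effectively becomes an extra dictionary argument, and which add runs is decided by whichever dictionary gets passed in at runtime.

    class MyNum a where
      add :: a -> a -> a

    -- Roughly what the class desugars to: a record of methods.
    data MyNumDict a = MyNumDict { addD :: a -> a -> a }

    -- A constrained function conceptually receives the dictionary as an
    -- extra argument; the callee is chosen by the dictionary, not by the
    -- code at the call site.
    sumWith :: MyNumDict a -> a -> [a] -> a
    sumWith d z = foldr (addD d) z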


Nope, type classes use dynamic dispatch (I might be wrong, but I think higher-kinded types are impossible with pure static dispatch).


They can be implemented via method dictionaries but, sans type system extensions, polymorphic recursion and separate compilation, all uses of a type variable constrained by a type class could, in principle, be monomorphised and the dictionaries eliminated. This is not true of dynamic dispatch.
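A tiny illustration of the monomorphisation point (hypothetical function; the SPECIALIZE pragma just makes explicit the kind of specialisation being described): once a use site's type is known, the constraint can be resolved at compile time and no dictionary needs to survive to runtime.

    double :: Num a => a -> a
    double x = x + x
    {-# SPECIALIZE double :: Int -> Int #-}

    -- A monomorphic caller can then use the specialised,
    -- dictionary-free version directly.
    useIt :: Int -> Int
    useIt = double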


That's true, but I believe less relevant than most would think. Most of the "dynamic dispatching" people talk about could be similarly monomorphized. The only real dynamic dispatch occurring in most programs is due to dynamic code loading (separate compilation from your post).


I disagree, and since we are talking about the expression problem, we should not be conflating dynamic dispatching code which could be statically resolved with dynamic dispatching code which is necessarily dynamic.

A standard use-case of dynamic dispatch is for representing different types of term in a parse tree, where we want to dispatch on nodes according to type. This sort of dispatch cannot be monomorphised, since the exact callee depends on the shape of the trees being traversed, and this almost certainly cannot be figured out statically.

In Haskell, you do not do this sort of dispatch with typeclasses, but instead use ADTs.

When it comes to the expression problem, the tradeoff is that ADTs allow you to easily write new sorts of traversal but do not allow you to extend the tree type in a modular way. By contrast, single dispatch OO allows you to easily extend the tree type by providing new classes to implement the traversal methods, at the cost that you cannot easily define new traversals.
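A small Haskell sketch of the ADT side of that tradeoff (illustrative types): the match in eval genuinely depends on the runtime shape of the tree, new traversals are trivial to add, but a new node kind means editing the type and every existing match.

    data Expr = Lit Int
              | Add Expr Expr

    eval :: Expr -> Int               -- a new traversal: just write it
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b

    render :: Expr -> String          -- another traversal, nothing else touched
    render (Lit n)   = show n
    render (Add a b) = render a ++ " + " ++ render b

    -- But adding, say, a Mul constructor means changing Expr, eval and render.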


> we should not be conflating dynamic dispatching code which could be statically resolved with dynamic dispatching code which is necessarily dynamic.

This is exactly what I said: most of what people call "dynamic dispatching" is not truly dynamic, and the use of dispatching code is merely an implementation detail.

> In Haskell, you do not do this sort of dispatch with typeclasses, but instead use ADTs.

Except you can, and this is one way to solve the expression problem in Haskell, i.e. by lifting constructors to their own types with pattern matching functions lifted to type classes. At which point we come full circle: most uses of ADTs can also be monomorphized and so are not dynamically dispatched either.

Dynamic dispatches then occur only on input at runtime to select the type being instantiated (and so the type class being dispatched into).


Please explain how to lift the constructors of Maybe to typeclass instances where its use can be monomorphised.


I don't understand what's unclear: lifting values to the type level, and lifting pattern matching on those values to type classes, is a standard construction [1].

I only claimed that operations on Maybe can be lifted to type classes; you claimed that all uses of type classes can be monomorphized (barring "fancy types", of which Maybe is presumably not one).

Since all pattern matching functions on simple sums like Maybe can be lifted to type classes using the above translation, and given you said all such type classes can be monomorphized, therefore all uses of Maybe can be monomorphized. Alternately, your original claim could be wrong, or you can provide a counter-example to my claim that pattern matching on Maybe can be translated to type class dispatch.

[1] http://koerbitz.me/posts/Solving-the-Expression-Problem-in-H...
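For concreteness, here is a rough sketch of what that lifting might look like for Maybe (hypothetical names, and only the skeleton; [1] spells out the general construction): each constructor becomes its own type, and a pattern-matching function like fromMaybe becomes a class with one instance per constructor.

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    data Nothing_   = Nothing_
    newtype Just_ a = Just_ a

    class FromMaybe m a where
      fromMaybe_ :: a -> m -> a

    instance FromMaybe Nothing_ a where
      fromMaybe_ def _ = def

    instance FromMaybe (Just_ a) a where
      fromMaybe_ _ (Just_ x) = x

A use like fromMaybe_ (0 :: Int) (Just_ 5) resolves its instance from the static type of the argument, so dispatch only becomes dynamic when the concrete constructor type isn't known until runtime.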


Object Algebras are a nice solution to the expression problem for OO languages: https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf (LtU discussion: http://lambda-the-ultimate.org/node/4572 )

Operations are implemented in a class with a method for each data type. Data is created via factories. Adding data types is done by adding new factory methods, possibly by extending another factory. Adding new operations is done by adding operation methods, possibly by extending existing operation classes.

The paper is quite accessible and the patterns are simple and familiar. The paper goes into type safety and composing algebras.


Unfortunately, no OO languages save Scala implement the higher kinded types needed to truly exploit object algebras.


I don't know what you mean by "truly exploit", and maybe there are other, nicer solutions, but object algebras were specifically designed for Java/C# type systems. The entire idea is predicated on that:

> This paper presents a new solution to the expression problem that works in OO languages with simple generics (including Java or C#). A key novelty of this solution is that advanced typing features, including F-bounded quantification, wildcards and variance annotations, are not needed.


> I don't know what you mean by "truly exploit",

If the return type of one of your overloads is itself a generic type, you can't represent that in the type of the method. Consider the IntBoolFactory from the paper, and then add two operations for lambda abstraction and function application:

    public Exp lambda(Exp=>Exp body);
    public Exp apply(Exp lambda, Exp arg);
Where Exp=>Exp represents a Java lambda taking an Exp and returning an Exp. Notice how the type of the lambda and its argument must now be dynamically type checked to ensure that 'arg' has the same type as the parameter for the lambda. With higher kinded types, you could write something like this instead:

    public <T0, T1> Exp lambda(Exp<T0>=>Exp<T1> body);
    public <T0, T1> Exp<T1> apply(Exp<T0,T1> lambda, Exp<T0> arg);
And now everything is statically type checked by the Java compiler. Without higher kinded types, you gain extensibility in every direction using object algebras, but you lose some type checking.


That actually should read:

    public <T0, T1> Exp<T0, T1> lambda(Exp<T0>=>Exp<T1> body)
But you get the idea.


"...it can be easily implemented in mainstream languages like Java and C#." https://oleksandrmanzyuk.wordpress.com/2014/06/18/from-objec...


https://en.wikipedia.org/wiki/Expression_problem

> The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining /static type safety/

So, while there's an easy way to write something with lispy multiple dispatch, it is not a solution, sorry.


This was an interesting read for me, but I'm struggling to understand, from a design/organizational perspective, why solving the expression problem is a good thing.

If I don't try to solve the expression problem, I'm left with two common [OOP] implementation approaches:

1. A file for each type, within which all operations are also defined for that type.

2. A file for each type (POD), and a file for each operation which contains implementations for all types.

With either implementation, I achieve a good deal of consistency. The questions of "where should I put this new code", "what is the responsibility of this file", or "where could I find the implementation of operator <x> on type <y> within the code repository" are easily answered.

Solutions to the expression problem appear to mix these two approaches. The result seems to be much more loosely structured code. If I want to find the definition of ToString for type Constant, where do I look? It could either be in `src/types/Constant.code` or `src/operations/ToString.code` within my code repository. And as a result, it's no longer obvious what the full responsibilities of any file are within a structured repository without skimming the source.

I've heard some brief discussion of extending libraries with both types and operators - for example, adding a new image format to an image processing library and also adding a new operation. But at that point, doesn't your new code belong in the library (where you can implement it using approach 1 or 2), rather than in application code?


The Expression Problem is a non-problem if you can modify your whole codebase: just change the image processing library. You probably want a static type system so you can quickly find all the places where you have to change something.

The Expression Problem is ugly, if you cannot change the image processing library for some reason.


Sometimes you add a new protocol (or interface) and you really want some existing types to implement that protocol. Some languages make that impossible (or really really ugly) when you didn't define those existing types yourself.

What if the + operator didn't exist in Java? Think about how you would add it. Now think about how you would like to be able to add it. What language features would have to be added to Java to support that?


The thing that's really undesirable about the Clojure code is an unfortunate consequence of dynamic typing.

Note that the bottom right corner of the expression problem matrix isn't filled in. This means that newOp(newType) will fail at run time - you've simply replaced the difficult task of filling in the matrix with the easier task of partially filling it in and hoping for the best. Hope you've got a great test suite!

I must say that I strongly prefer the Haskell/Scala/etc approach where that missing square is an explicit compile time warning/error.


In my experience, if you write tests to see whether the function returns what it should (or just an integration test for a few use cases), you've got types covered.

YMMV


You can make use of core.typed or Schema.

Yes, they aren't the same as static typing, but they help nonetheless.


In OCaml, polymorphic variants solve the expression problem.

Solutions to the expression problem in Haskell feel heavyweight. I've personally never encountered the problem in serious or production code. I cannot say whether that's because it isn't that important in practice or because its difficulty causes me to avoid designs that run into it.


Slight digression, but: PLT Scheme (or Racket, or whatever it's called now) : CL :: Haskell : OCaml

Exploration of new ideas in Scheme and Haskell 'feels' more versatile. The representation of concepts is often more succinct and almost always has more of that je ne sais quoi one gets from an elegant[1] math proof. In day-to-day coding with a team, I've used the CL/OCaml side more than the Scheme/Haskell side, though, for reasons I'm not entirely sure about. OCaml's modules are strictly more powerful than Haskell 98's type classes.

[1] https://en.wikipedia.org/wiki/Mathematical_beauty Didn't know this existed until just now, but glad Russell and Erdos are both mentioned in the first paragraph, and even more amused considering their histories with formalism/ordered logic, haha. Where's the Frege tragedy and the Wittgenstein Viennese fanboys?!


To be fair, production (vs. library) code has to usually care much less about being extensible, since whoever needs to use it can also usually change it.

In servant, dealing with the expression problem (albeit in a pretty different way than the usual tagless final approach) was in my opinion a pretty big win - it means you can add terms to the API DSL as well as interpretations for it. I pretty regularly do the former when using it, and there are lots of libraries doing the latter. (Disclaimer: I'm one of the authors of servant.)


I'm not a C++ expert but can you solve this using something more like the functional approach in C++, by defining a templated function and then specializing it in the extension modules for the various classes? (I guess the specialized declarations would need to be #included somehow, but that's not too big of a hurdle.)


I'm having a hard time deciphering your idea - do you have a minimal snippet/gist to show what you mean?


It is well known that in the untyped (or dynamically typed, if you prefer that terminology) case the expression problem trivializes. In fact, if we sacrifice type safety, then the problem trivializes in a typed language too.


Not sure if I interpreted the problem correctly, but in C#, composing interfaces and using generic extension methods with type constraints:

    interface IFooAble { ... }
    interface IBarAble : IFooAble { ... }
    public static class FooProcessor {
        public static T ProcessFoo<T>(this T fooInst)
            where T : IFooAble { ... }
    }

This gives us something where, as long as the object implements IFooAble, we can use this extension for free. I.e. IBarAble can use ProcessFoo just fine. This works really well for writing fluent interfaces, as demoed above.


The question is: can you do this to sealed/final classes like String? Say you wanted a Canadian String that appended ",eh" every time it saw a question mark - could you do this? Could you make Money do the same?


Yeah, AFAIK, extension methods just have to work on POCOs, which string and money (or decimal) are.


I've been thinking about this problem lately, and I thought the article gave it a good treatment. However, the problem I saw is that unless someone has already used the right language primitives for their data, you run into the expression problem again anyway.

A pattern that I believe can help solve even this level of expression problem is the facade pattern (https://en.wikipedia.org/wiki/Facade_pattern). In other words, to expand upon what has existed before, one writes a layer on top of the third-party package that belongs to the current project. That layer implements, by default, everything in the external package, and it gives you the ability to extend it as you see fit. In addition, you get all the other benefits of the facade, like being able to swap out, fix, or extend specific implementation details without having to alter many places in code where you relied upon some external package's API decisions directly.

For me, it's still a thought experiment. I do not have a lot of personal experience trying this methodology out in the extreme in production. Has anyone else tried it and can comment?


The facade approach is discussed in the article, along with some of its issues. I call it "extending the visitor pattern" following the paper the article refers to several times.


I think I am misunderstanding what I am reading; I do not see the Facade pattern addressed in the paper or in the article. My understanding is...

Unlike the Visitor pattern, the Facade pattern does not require any help on the side of the thing being extended -- no interface required. That is the primary contributor to the problems described in the article, right? that someone must explicitly provide a backdoor? And in some cases, multiple things must be modified to support extension due to the nature of the backdoors?

With a Facade, one writes a new, additive layer, sometimes directly mimicking the API they are wrapping. To extend Expr, I make MyExpr, and I call upon Expr within MyExpr. The tradeoff is API repetition/delegation, which can be mitigated with compile-time or run-time code generation, depending on the language.

Generally, this approach is used to coerce external APIs into an API that is more appropriate for the domain. For example, suppose I have a choice of three different XML packages: one package has great features for parsing and validating XML but has no interface for building XML; another lets me build it but not read it; a third provides some obscure namespace feature that I wish the builder had, but it is otherwise the slow performer in the set. I can unify these three packages under one package that delegates to the three appropriately.

I would also use this approach for extending existing functionality. For example, suppose I am depending on some foo(V) in an ML-style language that dispatches on V of type X and Y; usually, adding dispatch for something of type Z would require modifying the original code, right? But I can instead write a my_foo(V) that handles Z and defers all other dispatching to foo(V). I may introduce whatever complexity I need for my specific problem without needing the original source code of the thing I am extending. Acknowledged tradeoff: complex dispatch order may require reinventing the wheel to some extent.
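A rough Haskell sketch of that wrap-and-defer idea (foo, myFoo and all the types here are hypothetical):

    data Known = X Int | Y String        -- the cases the original foo handles
    data Mine  = Old Known | Z Bool      -- my extension wraps the old cases

    foo :: Known -> String               -- the function I can't modify
    foo (X n) = "x: " ++ show n
    foo (Y s) = "y: " ++ s

    myFoo :: Mine -> String              -- handles Z, defers everything else
    myFoo (Z b)   = "z: " ++ show b
    myFoo (Old k) = foo k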

My intuition suggests that this approach works well for most languages without requiring language-enforced extensibility on all the things.

Again, great article! It really does cover a lot of ground and comparisons.


Martin Odersky illustrates this very same problem very well in his Scala course:

https://class.coursera.org/progfun-002

Lecture 4.5 Decomposition

Lecture 4.6 Pattern matching

(ps. I am not associated with Coursera or Martin in any way, just found his particular explanations very easy to understand)

Edit: formatting


I'd be remiss if I didn't mention the seminal StrangeLoop 2010 talk by Chris Houser on this[0] (whoa, it's been a while..)

There's also a piece by Stuart Sierra from around the same vintage[1]. (Again, whoa, time flies.)

[0] http://www.infoq.com/presentations/Clojure-Expression-Proble... [1] http://www.ibm.com/developerworks/java/library/j-clojure-pro...


I'm still bitter that undergrad school didn't tell us about this before starting the functional programming course(s).


Why the downvotes? Most curricula I'm aware of only teach FP at the beginner level and thus one hardly sees the benefits.

Or it's a tiny mid-/upper-level course taken by those already hooked.



