
Pattern matching without support for algebraic datatypes really misses the point. What you want is to be able to declare all the shapes of your data, with descriptive names, and then be able to say definitively "these are the only such shapes."

Pattern matching without exhaustiveness checking and algebraic datatypes is really just a disguise: a syntax-level change when the issue is a semantic (i.e., language-level) problem.




> Pattern matching without support for algebraic datatypes really misses the point.

That's really not true. Erlang has neither static types nor sum types (which I expect is what you mean by ADT, rather than including product types), yet several of its constructs (not least of which "assignment") make great use of pattern matching. Pattern matching is one of the most important tools in Erlang's toolbox.

> What you want is to be able to declare all the shapes of your data, with descriptive names, and then be able to say definitely "these are the only such shapes."

> Pattern matching without exhaustiveness checking and algebraic datatypes is really just a disguise

Also not necessarily true, closed sum types are probably the best default but OCaml's polymorphic and extensible variants are genuinely great tools.


'The point' that OP cares about is exhaustiveness. One can disagree, of course; but without exhaustiveness what you really have is destructuring (which is not what OP cares about). Erlang is a fine example.


Without exhaustiveness checking, you still get a more advanced typecase-like control flow construct, not just destructuring.

In practice (and non-trivial code), exhaustiveness checking plays much less of a role, especially if you already have an object-oriented language.

In basic OO languages, you get exhaustiveness through abstract methods AND get the ability to add subtypes to an existing sum type.

One advantage of pattern matching is that it's easier to add actions (instead of subtypes), a.k.a. the expression problem. But that's a modularity issue, has nothing to do with exhaustiveness, and can also go the other way (as pattern matching makes it hard to add new variants to an ADT in a modular fashion).

Another advantage is that pattern matching can be a poor person's tree parser, but for that, exhaustiveness doesn't buy you much, because ADT patterns also typically allow for patterns that aren't allowed by the underlying tree grammar, so you'll have an `_ -> assert false` fallback case often enough.
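A TypeScript sketch of that fallback situation (hypothetical example, not from the post): the surrounding code establishes an invariant that the type, and hence the pattern language, cannot express, so a throwing default case is still needed even with exhaustiveness checking.

```typescript
// Hypothetical tree of config values: the code that builds these
// guarantees every non-leaf node is a two-element pair, but the type
// itself cannot say so.
type Tree = string | Tree[];

function flatten(t: Tree): string[] {
  if (typeof t === "string") return [t];
  if (t.length === 2) return [...flatten(t[0]), ...flatten(t[1])];
  // The grammar excludes this shape, but the type system cannot,
  // so we still need the `_ -> assert false` style fallback:
  throw new Error("unreachable: malformed tree");
}
```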


> But that's a modularity issue, has nothing to do with exhaustiveness, and can also go the other way (as pattern matching makes it hard to add new variants to an ADT in a modular fashion).

I would disagree on this point. With exhaustiveness checks, you at least get errors in every place where you match against a data type you've just extended with a new variant. So the compiler can point you to the places you have to edit after adding new variants to your ADT. That's a helpful feature.


Note that I talked about doing it in a modular fashion. With an ADT, you have to rewrite every single pattern match in your code to support the new variant. (And you don't even get warnings if you have a general fallback case that matches everything.)

In an OO language, the equivalent to simple pattern matching would be to dispatch on an abstract method of a shared supertype. If you add a new type and forget to implement the abstract method, you get a compiler error. There's your exhaustiveness checking and it's entirely modular as you don't have to touch your original code.
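In TypeScript terms (a hypothetical `Shape` hierarchy, not taken from the post), the abstract-method version of exhaustiveness looks like this:

```typescript
// Dispatch happens through an abstract method, so every new subtype
// is forced to supply its own behavior.
abstract class Shape {
  abstract area(): number; // omitting this in a subclass is a compile error
}

class Circle extends Shape {
  constructor(private r: number) { super(); }
  area(): number { return Math.PI * this.r * this.r; }
}

class Square extends Shape {
  constructor(private side: number) { super(); }
  area(): number { return this.side * this.side; }
}

// Adding `class Triangle extends Shape { ... }` without an `area`
// implementation fails to compile: exhaustiveness, checked modularly,
// without touching Circle or Square.
```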

As I said, that's one side of the expression problem. OO languages conversely have a problem (unless they support multiple dispatch or methods external to a class) when they have to add new actions. With ADTs, that's just a new function. With inheritance, you have to add a new method to every single class that implements a variant.
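The ADT side of that trade-off, sketched in TypeScript with a hypothetical `Expr` type: a new action is just a new function, while a new variant would require editing every existing match.

```typescript
// ADT as a discriminated union.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

function evaluate(e: Expr): number {
  switch (e.kind) {
    case "num": return e.value;
    case "add": return evaluate(e.left) + evaluate(e.right);
  }
}

// A second action, added later, without touching Expr or evaluate:
function show(e: Expr): string {
  switch (e.kind) {
    case "num": return String(e.value);
    case "add": return `(${show(e.left)} + ${show(e.right)})`;
  }
}
```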


The expression problem has two sides, right. I'm not advocating that pattern matching on ADTs is "the better" solution. Neither is the OO approach.

There isn't usually a 100% modular solution, because you want to stay flexible in either dimension, to some extent. You have to choose between trade-offs (as always).

That's why I like hybrid languages like Scala, where I can actually make an informed choice about whether it's more likely that new variants or new functionality will be added in the future. But when I have to do the thing I didn't anticipate in the first place, I want help from the compiler to avoid errors. That's why an exhaustiveness checker for pattern matching is a very welcome and neat feature.


> There isn't usually a 100% modular solution, because you want to stay flexible in either dimension, to some extent. You have to choose between trade-offs (as always).

There are actually language mechanisms that can get you both: multimethods, for example, or their ultimate generalization, predicate dispatch. Few languages support them, though, because they are overkill for most application domains (though there are some, such as computer algebra, where they can be very useful).

Note also that while many OO languages constrain you in having methods only occur within a class definition, this is not a requirement. Multimethods invariably need to be more flexible (because there's no privileged "self" object), but the multimethod approach can also be applied to single dispatch. See also partial classes and related techniques.
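A toy multimethod sketch in TypeScript (all names hypothetical; no real library is assumed): dispatch is keyed on the runtime tags of *both* arguments, and methods are registered outside any class body, so other modules can add cases.

```typescript
type Tagged = { tag: string };

// Dispatch table keyed by the pair of argument tags.
const collideTable = new Map<string, (a: Tagged, b: Tagged) => string>();

function defCollide(ta: string, tb: string,
                    f: (a: Tagged, b: Tagged) => string): void {
  collideTable.set(`${ta}|${tb}`, f);
}

function collide(a: Tagged, b: Tagged): string {
  const f = collideTable.get(`${a.tag}|${b.tag}`);
  if (!f) throw new Error(`no method for (${a.tag}, ${b.tag})`);
  return f(a, b);
}

// Methods live outside any class, and further modules can add more:
defCollide("asteroid", "ship", () => "ship destroyed");
defCollide("ship", "ship", () => "bounce");
```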

Also, ADTs in their basic form are still a very specialized, limited form of sum types. Even if you want sum types, better extensibility is often needed (though ADTs remain a very useful special case), as well as the ability to actually describe subtypes of a sum type.

See how OCaml ended up with having polymorphic variants and extensible variant types and Scala with case classes, for example.


It is well known that the expression problem is trivial in dynamic (untyped) languages. However, static type safety is a requirement in the problem's very definition.


I didn't mention dynamic typing anywhere?


No, but multimethods are found in dynamic languages 99% of the time, right?


1. Multimethods simply are a rare feature in general.

2. Multiple dispatch and static typing are completely orthogonal features.

3. Strictly speaking, you don't even need multiple dispatch. The key feature of multimethods that you need is the ability to add a method outside the class body.


Under definition 3, extension methods would also be multimethods, which doesn't seem right. Even C# has extension methods.


Extension methods do not support dynamic dispatch and hence are not proper instance methods. Try MultiJava's open classes for an approach that works.


> without exhaustiveness what you really have is destructuring- erlang is a fine example

That's completely nonsensical. Erlang does not have exhaustiveness yet is not limited to destructuring.

In fact, mere destructuring is probably the least common of pattern matching's uses in Erlang; much more common are asserting, filtering and dispatching (all of which may make further use of destructuring internally, but there destructuring is a sub-component of the match rather than the principal).


Not just destructuring, see https://elixir-lang.org/getting-started/pattern-matching.htm... for example


The implementation in the blog post does handle exhaustiveness via the "default" case of the switch statement.


What we want from exhaustive pattern checks is the ability to check all cases without falling back on a "default" case. Having the compiler warn you about missing cases is a powerful thing to have.

This is easy to achieve if you're matching on a variant type, since there's only a finite number of them.
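For instance, in TypeScript (a sketch assuming strict compiler settings), a switch over a discriminated union with a declared return type needs no default case; deleting a case makes the function fail to typecheck because it could then fall through without returning.

```typescript
type Color =
  | { kind: "rgb"; r: number; g: number; b: number }
  | { kind: "named"; name: string };

function toCss(c: Color): string {
  // No default case needed: the union is closed, so the compiler can
  // see both cases are covered. Removing one is a compile-time error.
  switch (c.kind) {
    case "rgb": return `rgb(${c.r},${c.g},${c.b})`;
    case "named": return c.name;
  }
}
```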


> Pattern matching without support for algebraic datatypes really misses the point.

The idea that pattern matching has only one point misses the point.

> What you want is to be able to declare all the shapes of your data, with descriptive names, and then be able to say definitely "these are the only such shapes."

While that is useful, it's not always the most important thing I want from pattern matching; it's usually a nice-to-have. If I can match on the shapes I can meaningfully handle at a particular point, and use a default case with appropriate behavior (which may be to report an error condition) for any others, that's often enough.


The problem is that blog posts often start out with the phrase "pattern matching like Haskell or Scala" and then describe a mechanism that is not like those. It misrepresents and conflates what those languages provide.


You can get exhaustiveness checking in some cases in TS by adding a default case to a switch and assigning the value to a variable of type `never`. This is a bit cumbersome, of course, and the fact that it's opt-in partly defeats the purpose: you typically want exhaustiveness checking for extra safety, but here the safety only comes if you add the check manually, so everywhere you forget, you lose it.
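Concretely, the trick might look like this (hypothetical `UiEvent` type for illustration):

```typescript
type UiEvent =
  | { kind: "click"; x: number; y: number }
  | { kind: "key"; code: string };

// The `never` trick: if every case is handled, `e` is narrowed to
// `never` in the default branch, so this call typechecks. Add a new
// variant without a case and the call becomes a compile error.
function assertNever(x: never): never {
  throw new Error(`unexpected value: ${JSON.stringify(x)}`);
}

function describe(e: UiEvent): string {
  switch (e.kind) {
    case "click": return `click at ${e.x},${e.y}`;
    case "key": return `key ${e.code}`;
    default: return assertNever(e);
  }
}
```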


The `Payment` example in the post is effectively an implementation of algebraic datatypes. `PaymentPattern` is how the code specifies "these are the only such shapes", and exhaustiveness is checked because the typechecker won't let you provide a `PaymentPattern` that omits any cases. It's certainly not as flexible as the pattern matching you'd get in a serious functional language, but I think it handles the use case "condition over cases of an algebraic type" just fine.
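The post's actual code isn't reproduced in this thread, but the style being described is roughly this (names and fields hypothetical): a record with one handler per case, so omitting a case is a type error rather than a runtime fallback.

```typescript
type Payment =
  | { kind: "cash"; amount: number }
  | { kind: "card"; amount: number; last4: string };

// One handler per variant; leaving one out fails to typecheck.
type PaymentPattern<R> = {
  cash: (p: { kind: "cash"; amount: number }) => R;
  card: (p: { kind: "card"; amount: number; last4: string }) => R;
};

function matchPayment<R>(p: Payment, pat: PaymentPattern<R>): R {
  return p.kind === "cash" ? pat.cash(p) : pat.card(p);
}
```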


> The `Payment` example in the post is effectively an implementation of algebraic datatypes.

Algebraic data types are what we call "initial": they're characterized by how they're built up structurally. The 'PaymentMatcher' example, while using the terms "matcher" and "pattern" in the variable names, is actually closer to a "final" solution, since they're characterized by how they behave.

This is a bit abstract, but to put it simply, the key difference between what you and I are talking about is the same difference between algebraic datatypes and type classes. Algebraic datatypes are built up with constructors, and then exist structurally. On the other hand, type classes are more ephemeral. When declaring a type class, we say "these N methods are the ways you can 'poke' and 'prod' me to get a response," that is, it's the behavior of the type class that's most important.

So in fact, it's not that algebraic datatypes are more flexible than type classes or vice versa, but rather that they're duals of each other; they codify similar concepts, but approach the problem from two different sides.
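A compressed TypeScript illustration of the two sides (hypothetical expression-language example): the initial encoding builds data and matches on it; the final encoding never materializes the structure at all, only the behavior.

```typescript
// Initial encoding: data built from constructors, consumed by matching.
type ExprI =
  | { kind: "lit"; n: number }
  | { kind: "add"; l: ExprI; r: ExprI };

function evalI(e: ExprI): number {
  return e.kind === "lit" ? e.n : evalI(e.l) + evalI(e.r);
}

// Final encoding: "data" is whatever responds to lit/add. The
// interpreter's behavior comes first; the tree never exists.
interface ExprAlg<R> {
  lit(n: number): R;
  add(l: R, r: R): R;
}

const evalAlg: ExprAlg<number> = { lit: n => n, add: (l, r) => l + r };

// A term in the final style is a function over any interpreter:
const term = <R>(a: ExprAlg<R>): R => a.add(a.lit(1), a.lit(2));
```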

I find this sort of stuff fascinating! Some of my favorite articles dealing with these sorts of nuances:

- Typed Tagless Final Interpreters[1]

- Practical Foundations for Programming Languages, Chapter 15 "Inductive and Coinductive Types"[2]

[1]: http://okmij.org/ftp/tagless-final/course/lecture.pdf

[2]: http://www.cs.cmu.edu/~rwh/pfpl.html


This would be news for Prolog programmers, I think (or any other logic programming language).

Consider also tree parsers or languages with predicate dispatch (both of which are, functionality-wise, a strict superset of pattern matching).





