Pattern matching without exhaustiveness checking and algebraic datatypes is really just a disguise: a syntax-level change when the issue is a semantic (i.e. language-level) problem.
That's really not true, Erlang has neither static types nor sum types (which I expect is what you mean by ADT rather than including product types) yet several of its constructs (not least of which "assignment") make great use of pattern matching. Pattern matching is one of the most important tools in Erlang's toolbox.
> What you want is to be able to declare all the shapes of your data, with descriptive names, and then be able to say definitely "these are the only such shapes."
> Pattern matching without exhaustiveness checking and algebraic datatypes is really just a disguise
Also not necessarily true: closed sum types are probably the best default, but OCaml's polymorphic and extensible variants are genuinely great tools.
In practice (and non-trivial code), exhaustiveness checking plays much less of a role, especially if you already have an object-oriented language.
In basic OO languages, you get exhaustiveness through abstract methods AND get the ability to add subtypes to an existing sum type.
One advantage of pattern matching is that it's easier to add actions (instead of subtypes), a.k.a. the expression problem. But that's a modularity issue that has nothing to do with exhaustiveness, and it can also go the other way (as pattern matching makes it hard to add new variants to an ADT in a modular fashion).
Another advantage is that pattern matching can be a poor person's tree parser, but for that, exhaustiveness doesn't buy you much, because ADT patterns also typically allow for patterns that aren't allowed by the underlying tree grammar, so you'll have an `_ -> assert false` fallback case often enough.
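To make the "fallback case" point concrete, here is a minimal Rust sketch (the `Expr` type and its normalization invariant are invented for illustration): the type admits double negation, but suppose a normalizer guarantees it never occurs, so a match over normalized trees still needs an `unreachable!()` arm.

```rust
// Hypothetical AST: the *type* admits any nesting, but suppose our
// normalizer guarantees that `Not` is never applied to another `Not`.
#[derive(Debug)]
enum Expr {
    Lit(bool),
    Not(Box<Expr>),
    And(Box<Expr>, Box<Expr>),
}

// Evaluate a normalized expression. The type system cannot encode the
// "no double negation" invariant of the tree grammar, so a fallback
// arm is still needed for the shapes the grammar rules out.
fn eval_normalized(e: &Expr) -> bool {
    match e {
        Expr::Lit(b) => *b,
        Expr::Not(inner) => match inner.as_ref() {
            Expr::Not(_) => unreachable!("normalizer removed double negation"),
            other => !eval_normalized(other),
        },
        Expr::And(l, r) => eval_normalized(l) && eval_normalized(r),
    }
}
```

The inner `unreachable!()` arm is exactly the `_ -> assert false` case from the comment above: exhaustiveness checking is satisfied, but only because a dead arm was written by hand.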
I would disagree on this point. With exhaustiveness checks you get at least errors in every place you match against a data type you just extended with a new variant. So your compiler can point you to the places you have to edit after adding new variants to your ADT. That's a helpful feature.
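A small Rust sketch of that workflow (the `Shape` type is invented for illustration): because the `match` below has no wildcard arm, adding a variant to the enum turns every such match into a compile error, pointing you at each place that needs a new arm.

```rust
enum Shape {
    Circle(f64),
    Square(f64),
    // Adding e.g. `Triangle(f64, f64)` here would make the match in
    // `area` a compile error until a new arm is written for it.
}

fn area(s: &Shape) -> f64 {
    // No `_` arm: the compiler verifies this match covers every variant.
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Square(side) => side * side,
    }
}
```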
In an OO language, the equivalent to simple pattern matching would be to dispatch on an abstract method of a shared supertype. If you add a new type and forget to implement the abstract method, you get a compiler error. There's your exhaustiveness checking and it's entirely modular as you don't have to touch your original code.
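The OO-style equivalent can be sketched in Rust with a trait method that has no default body (the `Event` trait and its implementors are invented for illustration): every new type must supply the method or the program doesn't compile, and the original code is untouched.

```rust
// "Abstract method": no default body, so every implementor must provide it.
trait Event {
    fn describe(&self) -> String;
}

struct Click { x: i32, y: i32 }
struct KeyPress { key: char }

impl Event for Click {
    fn describe(&self) -> String { format!("click at ({}, {})", self.x, self.y) }
}

// A new type can be added without touching the code above; omitting
// `describe` from its impl is a compile error -- modular exhaustiveness.
impl Event for KeyPress {
    fn describe(&self) -> String { format!("key '{}'", self.key) }
}
```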
As I said, that's one side of the expression problem. OO languages conversely have a problem (unless they support multiple dispatch or methods external to a class) when they have to add new actions. With ADTs, that's just a new function. With inheritance, you have to add a new method to every single class that implements a variant.
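The ADT side of that trade-off can be sketched as follows (the `Figure` type and its operations are invented for illustration): a new operation is just a new function over the existing type, with no edits to existing code, whereas the inheritance approach would require touching every class.

```rust
enum Figure {
    Circle(f64),
    Rect(f64, f64),
}

// Existing operation.
fn area(f: &Figure) -> f64 {
    match f {
        Figure::Circle(r) => std::f64::consts::PI * r * r,
        Figure::Rect(w, h) => w * h,
    }
}

// A new operation is just a new function; nothing above changes.
// (With inheritance, this would mean a new method in every subclass.)
fn perimeter(f: &Figure) -> f64 {
    match f {
        Figure::Circle(r) => 2.0 * std::f64::consts::PI * r,
        Figure::Rect(w, h) => 2.0 * (w + h),
    }
}
```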
There isn't usually a 100% modular solution, because you want to stay flexible in either dimension to some extent. You have to choose between trade-offs (as always).
That's why I like hybrid languages like Scala where I can actually make an informed choice about whether it's more likely that new variants or new functionality will be added in the future. But when I have to do the thing I didn't anticipate in the first place, I want help from the compiler to avoid errors. That's why an exhaustiveness checker for pattern matching is a very welcome and neat feature.
There are actually language mechanisms that can get you both: multimethods, for example, or their ultimate generalization, predicate dispatch. Few languages support them, though, because they are overkill for most application domains (though there are some, such as computer algebra, where they can be very useful).
Note also that while many OO languages constrain you in having methods only occur within a class definition, this is not a requirement. Multimethods invariably need to be more flexible (because there's no privileged "self" object), but the multimethod approach can also be applied to single dispatch. See also partial classes and related techniques.
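Rust is one example of a language where methods need not occur within the type's definition; a trait can be declared and implemented for an existing type after the fact (the `Celsius` type and `ToFahrenheit` trait below are invented for illustration, and the impl could live in a different module from the type entirely).

```rust
struct Celsius(f64);

// A new "interface" declared separately from the type...
trait ToFahrenheit {
    fn to_fahrenheit(&self) -> f64;
}

// ...and implemented outside the type's definition, in the spirit of
// partial classes or external methods.
impl ToFahrenheit for Celsius {
    fn to_fahrenheit(&self) -> f64 { self.0 * 9.0 / 5.0 + 32.0 }
}
```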
Also, ADTs in their basic form are still a very specialized, limited form of sum types. Even if you want sum types, better extensibility is often needed (though ADTs remain a very useful special case), as well as the ability to actually describe subtypes of a sum type.
See how OCaml ended up with having polymorphic variants and extensible variant types and Scala with case classes, for example.
2. Multiple dispatch and static typing are completely orthogonal features.
3. Strictly speaking, you don't even need multiple dispatch. The key feature of multimethods that you need is the ability to add a method outside the class body.
That's completely nonsensical. Erlang does not have exhaustiveness checking yet is not limited to destructuring.
In fact, mere destructuring is probably the least common of pattern matching's uses in Erlang; much more common are asserting, filtering and dispatching (all of which may make further use of destructuring internally, but there destructuring is a sub-component of the match rather than the principal).
This is easy to achieve if you're matching on a variant type, since there's only a finite number of them.
The idea that pattern matching has only one point misses the point.
While that is useful, it's not always the most important thing I want with pattern matching; it's usually a nice-to-have. If I can match on the shapes I can meaningfully handle at a particular point and use a default case with appropriate behavior (which may be to report an error condition) for any others, that's often enough.
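That style can be sketched in Rust (the `Message` type and handler are invented for illustration): match the shapes this component cares about, and let everything else fall through to a deliberate default.

```rust
enum Message {
    Ping,
    Data(String),
    Shutdown,
    // ...a real protocol would have many more variants.
}

// Handle only the shapes this component can meaningfully act on;
// anything else falls through to a default with appropriate behavior.
fn handle(msg: &Message) -> &'static str {
    match msg {
        Message::Ping => "pong",
        Message::Data(_) => "stored",
        _ => "ignored", // deliberate catch-all: report or skip the rest
    }
}
```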
Algebraic data types are what we call "initial": they're characterized by how they're built up structurally. The 'PaymentMatcher' example, while using the terms "matcher" and "pattern" in the variable names, is actually closer to a "final" solution, since they're characterized by how they behave.
This is a bit abstract, but to put it simply, the key difference between what you and I are talking about is the same difference between algebraic datatypes and type classes. Algebraic datatypes are built up with constructors, and then exist structurally. On the other hand, type classes are more ephemeral. When declaring a type class, we say "these N methods are the ways you can 'poke' and 'prod' me to get a response," that is, it's the behavior of the type class that's most important.
So in fact, it's not that algebraic datatypes are more flexible than type classes or vice versa, but rather that they're duals of each other; they codify similar concepts, but approach the problem from two different sides.
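The two sides of that duality can be sketched in Rust (the expression language below is invented for illustration; the "final" half follows the tagless-final style): the initial encoding builds the term as data and interprets it with a function, while the final encoding declares the constructors as trait methods and lets each implementing type be one interpretation.

```rust
// Initial encoding: syntax is data, operations are functions over it.
enum ExprI {
    Lit(i64),
    Add(Box<ExprI>, Box<ExprI>),
}

fn eval_i(e: &ExprI) -> i64 {
    match e {
        ExprI::Lit(n) => *n,
        ExprI::Add(l, r) => eval_i(l) + eval_i(r),
    }
}

// Final (type-class-style) encoding: the "constructors" are methods of
// an interface, and each interpretation is a type implementing it.
trait ExprF {
    fn lit(n: i64) -> Self;
    fn add(l: Self, r: Self) -> Self;
}

// One interpretation: evaluate to an integer.
impl ExprF for i64 {
    fn lit(n: i64) -> Self { n }
    fn add(l: Self, r: Self) -> Self { l + r }
}

// The same term, 1 + 2, written against either encoding.
fn initial_term() -> i64 {
    eval_i(&ExprI::Add(Box::new(ExprI::Lit(1)), Box::new(ExprI::Lit(2))))
}

fn final_term<E: ExprF>() -> E {
    E::add(E::lit(1), E::lit(2))
}
```

Adding a variant is a local change in the final encoding (a new trait) but a global one in the initial encoding, and vice versa for adding an interpretation, which is the duality in a nutshell.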
I find this sort of stuff fascinating! Some of my favorite articles dealing with these sorts of nuances:
- Typed Tagless Final Interpreters
- Practical Foundations for Programming Languages, Chapter 15 "Inductive and Coinductive Types"
Consider also tree parsers or languages with predicate dispatch (both of which are, functionality-wise, a strict superset of pattern matching).