Hacker News

One of the reasons Haskell and Rust are so safe is that pattern matching in these languages is checked for exhaustiveness. If you don't cover every possible constructor of an enum or pattern, the compiler will throw an error.

For example, a Maybe value used with match must have both Nothing and Just handled during pattern matching. Precompile-time logic checking.

The below will throw a precompile time error:

  handleMaybeNum :: Maybe int -> int
  handleMaybeNum Just a = a
The below will not:

  handleMaybeNum :: Maybe int -> int
  handleMaybeNum (Just a) = a
  handleMaybeNum Nothing = -1
Could the same be said for this library? If so, when combined with mypy type checking and functional programming, this could transform Python into a language with Haskell-level safety.



    $ cat maybe.hs 
    handleMaybeNum :: Maybe int -> int
    handleMaybeNum Just a = a
    $ ghci maybe.hs 
    GHCi, version 7.10.3: http://www.haskell.org/ghc/  :? for help
    [1 of 1] Compiling Main             ( maybe.hs, interpreted )
    
    maybe.hs:2:16:
        Constructor ‘Just’ should have 1 argument, but has been given none
        In the pattern: Just
        In an equation for ‘handleMaybeNum’: handleMaybeNum Just a = a
    Failed, modules loaded: none.
You're right about the "precompile time error", although it's not what you claim...

Let's fix the syntax error:

    $ cat maybe_fixed.hs 
    handleMaybeNum :: Maybe int -> int
    handleMaybeNum (Just a) = a
    $ ghci maybe_fixed.hs 
    GHCi, version 7.10.3: http://www.haskell.org/ghc/  :? for help
    [1 of 1] Compiling Main             ( maybe_fixed.hs, interpreted )
    Ok, modules loaded: Main.
    *Main>
No error now. You do get a warning with -Wall, but that's not quite what you claimed. (Newer versions may be more strict, I don't know.)


Hmm my mistake. The match operator in rust is exhaustive though and will raise a compile time error rather than a warning. Elm does this as well. I'm just getting mixed up.


Yeah, I think it's the same for OCaml. I found it funny that Haskell would be the odd one out here.


Author here. Pampy and Pampy.js (https://github.com/santinic/pampy.js) are more similar to Lisp-style pattern matching. Of course they cannot provide static analysis, but they can improve understandability, and they are useful with nested structures, which are typical of how you solve problems in dynamic languages (fewer types, more nested dicts-and-lists).

An example:

    pampy.matchAll(json, {_: {age: Number}}, (key, age) => age)

It finds any object with a numeric age as the value of any key, in a JSON object whose structure we don't know.
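(A rough sketch of the same idea in plain Python, not pampy's actual implementation; `find_ages` is a made-up helper that hard-codes the `{_: {age: Number}}` pattern:)

```python
def find_ages(obj):
    """Recursively collect (key, age) pairs wherever a nested value
    is a dict with a numeric "age" field."""
    results = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if isinstance(value, dict) and isinstance(value.get("age"), (int, float)):
                results.append((key, value["age"]))
            results.extend(find_ages(value))  # keep descending
    elif isinstance(obj, list):
        for item in obj:
            results.extend(find_ages(item))
    return results

data = {"alice": {"age": 30}, "group": {"bob": {"age": 25, "city": "x"}}}
print(find_ages(data))  # [('alice', 30), ('bob', 25)]
```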


> Could the same be said for this library?

This is clearly answered on the page.

"By default match() is strict. If no pattern matches, it raises a MatchError"


That's different. Haskell and Rust pattern matching isn't just sugar for convenience. It doesn't just check whether a pattern matched. It checks whether your patterns cover every possible state.

Please note that my pseudocode in Haskell was not function calls. It's a function definition that uses pattern matching on its arguments.

Think of Maybe as an enum with two values. A Just(a) and a Nothing.

The definition below basically says: if the value enters the function as Just a (with a an int), return a (Just applied to a string would be a type error, caught by the signature); if it enters the function as Nothing, return -1.

  handleMaybeNum :: Maybe int -> int
  handleMaybeNum (Just a) = a
  handleMaybeNum Nothing = -1
If the last pattern (Nothing) isn't written, the compiler knows that your logic isn't handling a specific case, and it lets you know about this BEFORE doing any matching. The Haskell compiler will literally tell you that your function is missing logic to handle certain cases.

It is a simple mechanism that actually helps prove your programs correct. Type checking proves your program is type safe. Exhaustive pattern matching in Haskell proves that your program logic handles every possible form of that type entering a function.

It is a powerful mechanism that goes far beyond unit testing. Unit testing just says your program is correct for a specific test case. Haskell pattern matching and type checking provide a proof across every parameter permutation.

I realize Python is an interpreted language, so an error can only be thrown at runtime. What I expect from an exhaustive match check in Python is this:

  match(Just(1), 
        Just(a), lambda a: a)

  match(Just(1),
        Just(a), lambda a: a,
        Nothing, 1)
Both matches above produce a successful match, with Just(1) matching Just(a). However, with Haskell-style pattern matching the first match would STILL throw an error, because the Nothing case of the Maybe enum was not handled.

Judging from the examples in the docs, it clearly does not do this. You can match across arbitrary types, and it would be unreasonable for the match operator to throw an error just because you didn't cover every possible type that exists.
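(For a closed type like Maybe, such a runtime exhaustiveness check could still be built. A sketch with hypothetical `Just`/`Nothing`/`exhaustive_match` names, not pampy's API:)

```python
class Just:
    def __init__(self, value):
        self.value = value

class Nothing:
    pass

# The constructors of the (closed) Maybe type.
MAYBE_CONSTRUCTORS = {Just, Nothing}

def exhaustive_match(value, *cases):
    """cases are (constructor, handler) pairs; handlers receive the
    matched object. Raise up front if the cases don't cover every
    constructor, even when the value itself would match."""
    missing = MAYBE_CONSTRUCTORS - {ctor for ctor, _ in cases}
    if missing:
        raise TypeError("non-exhaustive match, missing: "
                        + ", ".join(c.__name__ for c in missing))
    for ctor, handler in cases:
        if isinstance(value, ctor):
            return handler(value)

print(exhaustive_match(Just(1),
                       (Just, lambda j: j.value),
                       (Nothing, lambda n: -1)))  # 1

try:
    exhaustive_match(Just(1), (Just, lambda j: j.value))
except TypeError as e:
    print(e)  # non-exhaustive match, missing: Nothing
```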


> That's different.

I didn't say it was the same. I said that if he bothered to read the link he would have seen the answer to his question.


Yes, while interesting this is pretty irrelevant for Python which does not have static checks.


You clearly aren't caught up with the latest changes in Python. The interpreter doesn't have static checks, but the language supports external tooling that does.

Look up type annotations for python 3 and the mypy external type checker.

Static type checking is now a very real thing with modern versions of Python.

Additionally it is possible to code the match function to implement the functionality I am talking about during runtime instead of compile time. Definitely not irrelevant.


> Additionally it is possible to code the match function to implement the functionality I am talking about during runtime instead of compile time.

Maybe, but I'm not sure there's more value in that.


Having match crash immediately helps you identify the branch error without explicitly writing a test case for it. It is equivalent to raising an exception.


That's misleading. It will only raise an exception if the actual value is not matched. Haskell generates a compile error before even running if that's something that might happen. It is the old distinction between static and dynamic typing, applied to pattern matching.


When exactly are you expecting Python, an interpreted language, to raise a compiler error?


You can raise the error as soon as the match operator is executed. Even if match finds a valid match for the expression, the function can still raise an error if the branches don't cover every case.

This allows for early catching of logic errors during unit testing and negates the need for additional tests.


I guess, but I think a possible solution here would be a pylint plugin to show a warning so you don't have to wait for a runtime exception.


> That's misleading.

It's not misleading. It answers the question: there is no compile-time check. The OP didn't have to "wonder"; they could have just read the link.


What's the theoretical difference between this kind of pattern matching and polymorphism? Instead of encoding the logic for each branch in a switch, do we get the same kind of exhaustive static guarantees if instead we encode each branch as a polymorphic implementation on each type?

For example, Java's Optional type seems to provide the same static guarantees without pattern matching (assuming it didn't have the unsafe get() method).


You are right in assuming the two approaches are more or less equivalent.

For example, you can encode Just as fJust :: a -> (a -> b) -> b and Nothing as fNothing :: b -> b. fJust receives a computation that accepts argument of Just constructor, while fNothing receives just a value to be returned. Or, alternatively, you can look at the type of maybe function which is pattern matching on Maybe in disguise: maybe :: b -> (a -> b) -> Maybe a -> b.

(I believe this method is called Church encoding)

But when you go from structure of types and patterns to functions, you lose the ability of analysis (of exhaustivity or anything else). You now deal with something that is Turing complete instead of fixed size structure.

And this is the dividing line between "possible" and "impossible": with the Church encoding, analysis of matching completeness is impossible, and even writing matchers for lists or trees is nigh impossible, actually.


I believe this is actually Scott Encoding.


i think they're considered two sides of the same coin - exhaustive vs extensible, each with its benefits and tradeoffs. this is known as "The Expression Problem"; what follows is a quick and messy intro.

sum types:

you can easily add new functions that accept your sum type, but adding a new "variant" to the type will require modifying every function you've written to support the new case

polymorphism (interfaces):

you can easily add new types that implement your interface, but if you want to add a new method/function to it, you're going to have to implement it for every type that implements your interface

essentially, we've got two "extensibility dimensions": one for adding new operations, and one for adding new types. sum types are good on the operation axis, interfaces are good on the type axis. (it's an open research question if there's a solution that's good on both.) i hope this makes sense!

as a side note, when discussing this, it's useful to distinguish ad-hoc polymorphism (subclassing, interfaces/typeclasses) and parametric polymorphism (generics) – here, we're talking about the "ad-hoc" variety. I mention this because i saw some confusion about this in a sibling thread but can't reply there.
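(the two axes can be made concrete in a small Python sketch, with made-up Circle/Square/Triangle shapes:)

```python
# Sum-type style: the set of variants is fixed, operations are easy to add.
class Circle:
    def __init__(self, r): self.r = r

class Square:
    def __init__(self, s): self.s = s

def area(shape):  # a new operation is just one new function like this
    if isinstance(shape, Circle):
        return 3.14159 * shape.r ** 2
    if isinstance(shape, Square):
        return shape.s ** 2
    raise TypeError(shape)  # ...but a new variant breaks every such function

# Interface style: the set of operations is fixed, variants are easy to add.
class Shape:
    def area(self):
        raise NotImplementedError

class Triangle(Shape):  # a new variant is just one new class
    def __init__(self, b, h): self.b, self.h = b, h
    def area(self): return self.b * self.h / 2
    # ...but a new operation would have to be implemented on every class

print(area(Square(3)))        # 9
print(Triangle(4, 6).area())  # 12.0
```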


Java's Optional type was directly inspired by Scala and Haskell. The concept is the same. However, because Java doesn't have pattern matching as a language feature, it is implemented as a design pattern.

Haskell's implementation of "match" is so elegant that it will take you a while to realize that the "exhaustive" pattern matching is a big part of what is contributing to Haskell's safety.


I've used pattern matching languages quite a bit (in my case, it's OCaml), so I know what it's like. I'm just curious if there's any fundamental difference between pattern matching and polymorphism. From a cursory glance, it seems like they can statically prove the same properties about programs and are equivalent from that perspective.

If that's the case, then the conversation shifts from "you should use pattern matching" to "when should I use pattern matching?" Which is a pretty interesting question. Logic locality seems to be an obvious criterion (does the logic live with the type, or with the consumer of the type?).


I'm not clear on what you're asking here. It sounds to me like you're asking the difference between a table and a chair.

Pattern matching is syntactic sugar for extracting values from a pattern. In a statically typed language the compiler has the option of forcing the user to handle every possible type permutation with a pattern, thus ensuring safety.

Polymorphism is just a generic type.

That's as far as I know... could you elaborate more on what you mean?


It seems match checking requires static typing, so that’ll be a job for a mypy extension.


or a pylint extension



