Intro To Pattern Matching – Covers C# 9 (c-sharpcorner.com)
60 points by alugili on Feb 19, 2020 | 62 comments


I don't agree that growing a language necessarily makes it harder to learn. Quite the opposite, I think it makes it easier to learn.

Look at how asynchronous code was handled in C# in the past versus today. In the past, you first had to define a delegate type for the signature of the procedure you wanted to run. Then you'd assign something to that delegate, probably a method sitting in a class somewhere. You can now call BeginInvoke on that reference, passing in a callback function. That callback function has to be another method sitting in a class somewhere. You also get this IAsyncResult object returned, with no immediate guidance as to what it's supposed to do. Is it different than the IAsyncResult that gets passed to the callback? No? There's this "AsyncState" object you're supposed to pass, with no clue what it's meant for. If you don't need it, you pass null, which is pretty much every case. Need to return a value? Call EndInvoke with the IAsyncResult object you received in the callback. But wait, where's my delegate instance on which I need to call EndInvoke? Guess I need to hoist it to be a field in my class now. I suppose you could pass it as that AsyncState parameter, but now you're having to do in your head the type-checking job that the compiler should be doing for you.

None of this is guessable from just reading the code. You have to read the documentation in multiple places to figure out what is going on. For added fun, now try composing multiple async operations in sequence.

Contrast that with today, where we just write "Task.Run(<some lambda expression>)". You tell me which is easier to learn.
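To make the contrast concrete, here is a minimal sketch of the modern style (the lambda bodies are made-up placeholders, not from any real API):

```csharp
using System;
using System.Threading.Tasks;

class Demo
{
    static async Task Main()
    {
        // One expression replaces the delegate type, BeginInvoke, the
        // callback, IAsyncResult, and EndInvoke:
        int result = await Task.Run(() => 21 * 2);
        Console.WriteLine(result); // 42

        // Composing async operations in sequence is just sequential awaits,
        // with the compiler doing the type-checking the old AsyncState
        // object left to you:
        int next = await Task.Run(() => result + 1);
        Console.WriteLine(next); // 43
    }
}
```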

You see "fewer keywords, fewer features" and read "simplicity". I see those things and read "more boilerplate for implementing code", which means "complexity".


Yes, I am very happy that I never have to deal with IAsyncResult and BeginOperation()/EndOperation() method pairs, with callbacks and waithandles to make sure things happen in the correct sequence, ever again.

It was just horrible.


If the old stuff is considered "deprecated" then it is OK. I love C# and I hope this language will not blow up like C++. Thank you for the tip!


Too bad pattern matching implementations in C# (possibly in JS [1] and other languages [2]) don't go beyond matching in switches and perhaps somewhat inside function bodies.

One of the greatest powers of pattern matching comes from being able to match in function definitions. The simplest example:

  fac(0) -> 1;
  fac(N) when N > 0 -> N*fac(N-1).
This makes a lot of code much more readable and more concise.
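For contrast, roughly the closest C# 9 gets (a sketch): the match is pushed inside the method body as a switch expression rather than appearing in the function definition itself:

```csharp
using System;

class Factorial
{
    // C# 9 relational patterns let us approximate the Erlang clauses,
    // but only inside a single method body, not across definitions.
    static int Fac(int n) => n switch
    {
        0   => 1,                // fac(0) -> 1
        > 0 => n * Fac(n - 1),   // fac(N) when N > 0 -> N*fac(N-1)
        _   => throw new ArgumentOutOfRangeException(nameof(n)),
    };

    static void Main() => Console.WriteLine(Fac(5)); // 120
}
```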

[1] https://github.com/tc39/proposal-pattern-matching

[2] http://cr.openjdk.java.net/~briangoetz/amber/pattern-match.h...


C#'s pattern matching does not include the ability to check for exhaustiveness, which is needed to make pattern matching comparable in utility and power to inheritance (i.e. so you can actually replace inheritance with pattern matching).

Much like how the compiler assists you when writing subclasses so that you've implemented all the required methods, real pattern matching (F#) gives you the same guarantees from the compiler that you've handled all the possible cases appropriately.

Exhaustiveness checking is what elevates pattern matching to the other side of the expression problem, which is about getting feedback from the compiler to help you write programs.

A glorified switch statement does not help you do that, and so you still have to write subclasses to get compiler feedback.
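A sketch of the gap being described, using a hypothetical Shape hierarchy: the compiler treats the set of subclasses as open, so it cannot tell you that a case is missing, and a catch-all arm is effectively mandatory:

```csharp
using System;

abstract class Shape { }
class Circle : Shape { public double R; }
class Square : Shape { public double Side; }

class Demo
{
    static double Area(Shape s) => s switch
    {
        Circle c => Math.PI * c.R * c.R,
        // Square is not handled, but the compiler cannot flag it: it has no
        // way to know Shape has exactly two subclasses. The only options are
        // a catch-all like this one, or a runtime SwitchExpressionException.
        _ => throw new NotSupportedException(s.GetType().Name),
    };

    static void Main() => Console.WriteLine(Area(new Circle { R = 1 }));
}
```

In F#, matching over a discriminated union with a missing case produces a compile-time incomplete-match warning, which is the feedback loop being asked for here.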


Nor should it. Assuming a fixed set of derived classes is bad OOP/SOLID practice, as it breaks the "objects should be open for extension" principle, and prior to the introduction of RTTI in C++ it wasn't even a design possibility - you either specified the concrete type or relied on abstraction and polymorphism.


That's not what 'fixed set' means here.

It's 'fixed' in the same sense as the compiler knowing what are the subclasses of a base class.

Just as you can always 'extend' the OOP base class by adding a subclass, you can always 'extend' an algebraic data type by adding another case.

Both can be extended, but more importantly they can be enumerated.


> It's 'fixed' in the same sense as the compiler knowing what are the subclasses of a base class.

... but the compiler does not know? New classes are loaded at run-time all the time, e.g. with plug-in systems, etc.


That's correct, and exhaustively enumerating derived classes at compile-time is usually bad design, whether via RTTI type-switching or pattern matching (which is really just syntactic sugar for the former). It prevents, for example, having derived types injected at run-time in a plugin setting. Explicitly matching a subset of well-known derived types, otoh, is a practical compromise in many cases.


> enumerating derived classes at compile-time is usually a bad design

Depends on what kinds of guarantees you want. At the extreme, you can make everything in the system late-bound, and then you simply have a dynamic language instead of a static one.


> enumerating derived classes at compile-time is usually a bad design

Did you deliberately omit the word 'exhaustively' here?

Late-bound != dynamically typed, that's exactly the point - you wish to preserve the ability to have types loaded at run time without knowing their exact type at compile time, while still enjoying strong type guarantees. We can do without run-time loading scenarios entirely and still see why this is bad: if one adds a derived class to the code base and then has to address it in a bunch of places, rather than having all the new functionality localized to the class, then it's a smell.


Erlang doesn't check for exhaustiveness either, and pattern matching is awesome :)

Without exhaustiveness it might not be feasible for a statically typed PL though, but I don't know nearly enough to even speculate on that.


I'm not implying exhaustiveness is always necessary. I'm saying that if you want feedback from the compiler to support a type-driven extend-refactor workflow commensurate with inheritance-based typechecking - which is one of the major arguments for inheritance and subclassing working the way they do in a static language - then you need exhaustiveness to deliver the same guarantees and maintainability experience if pattern matching is to replace inheritance + subclassing. Dynlangs here do not apply.


Do you know why they don't do exhaustiveness checking?

They don't in Scala either, and I had assumed that was to do with type erasure. Given that F# does, however, presumably there is no limitation preventing C# from doing it?


> Do you know why they don't do exhaustiveness checking?

Because then you would need to limit matching to some kind of closed algebraic data type (ADT) so that the compiler knows what is left to be matched on, or provide a catch-all clause.

I thought Scala does produce warnings for partial matches on case classes.


I think most of us agree that PM will make the code more readable and clean, bla bla bla, but the problem is in the C# language. Do you know how many ways there are to check for null in C#? (object == null) Try to collect them and tell me which one is good. I think the problem here is C# blowing up.
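A sampling (not exhaustive) of the null-check variants being alluded to:

```csharp
using System;

class Demo
{
    static void Main()
    {
        object o = null;

        Console.WriteLine(o == null);                // operator ==, overloadable by the type
        Console.WriteLine(o is null);                // constant pattern, ignores operator overloads
        Console.WriteLine(ReferenceEquals(o, null)); // raw reference comparison
        Console.WriteLine(object.Equals(o, null));   // static Equals helper

        // C# 9 adds the negated pattern for the opposite test:
        Console.WriteLine(o is not null);
    }
}
```

Which one "is good" depends on whether the type overloads `==`; `is null` / `is not null` sidestep overloads entirely, which is part of why the list keeps growing.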


I would love it if the not operator were usable instead of ! in places other than pattern matching. It's something I've wanted for a long time.


Do you mean the word 'not'?


I'm just going to leave this here, for all the "X > Y" talk, inserting either OO and FP into X and Y in whatever order you want: https://en.wikipedia.org/wiki/Expression_problem


Several languages on either side of the OOP vs. FP divide include both solutions to the problem -- for example, open subtyping vs. sealed case classes in Scala, and traits vs. sum types in Rust. There's more to both OOP and FP than the expression problem.


That's my point. There are people who claim "my-current-shiny is the only way". Yes, OOP and FP languages have different solutions to the problem, because it's a real problem. You need to have both OO and Functional features in your language. "OOP vs FP" is a false dichotomy.


I agree, but my point was slightly different. There are legitimate points to be made -- if not agreed with -- concerning the benefits of immutability, of single-ownership patterns, of equational reasoning, etc., which are central to the modern FP ethos.

You can make OOP and FP appear equally useful by framing the discussion with the expression problem, but I think that framing is myopic, and flattens a lot of the actually interesting distinctions and philosophical differences between the two.


Indeed; in fact most successful mainstream languages include both paradigms, they just differ on the default approach (FP first vs OOP first).


What about active patterns? Will they come to C# or not?


Oh my god, people still do class Car examples. So hard to use a real-world example with all this open source code...


They do. I have a side job as an external examiner for CS students and the first years are still taught the same Car/Animal examples when learning about polymorphism that I was decades ago.

I’ve never once used that knowledge professionally to any sort of success. The complexity of polymorphism always ended up biting us (any company/org I’ve worked at) in the ass when things eventually grew far further apart than was originally envisioned when the polymorphism made sense.

Kinda hilarious that they are still teaching it with those useless examples though. You’d have thought they could come up with something better by now. I mean, I get why you’d choose something obvious, but in my experience CS students seem to understand polymorphism perfectly well without making a duck quack and a dog go woof.

But what do I know, I can’t even make it work.


I doubt there's any program you've ever written that makes no use of polymorphism (the contextual resolution between multiple semantics for a term).

Even 1 + 1 is polymorphic; i.e., "1" + "1" selects a different behaviour.

Method overriding may rely on inheritance, and the problem is usually the inheritance not the polymorphism.


They did not specifically state it but I believe he/she was referring to subtype polymorphism. Parametric polymorphism as you describe is far more useful imho.


There are many types -- it is phrased generally and may lead readers to believe that there is some issue with polymorphism.

I still object on the subtype grounds though -- because the issue is not the polymorphism (that is useful and usually a good idea). It's the inheritance relation, which is not the subtype relation -- eg., you can do subtype polymorphism with just implementing interfaces and there shouldn't be any problems.

This comment is actually expressing issues which have arisen because of inheritance and confusing it with polymorphism.


I have far fewer problems with the Car than I do with the rest of the example.

> The problem comes when you want to add continually new methods all the time

This is not a real problem! Violating encapsulation to avoid dealing with the occasional merge conflict is probably the worst use of pattern matching I've ever seen. The tug-of-war between OO and FP is greatly exaggerated.


and the others are still using Foo and Bar :-/


One of the advantages of F# is the simplicity and ‘graspability’ it yields, but I have nothing against C# adopting more functional features. I just think this will confuse beginners quite a bit, so the ladder is getting sparser rungs in a way. Maybe it's a good thing, maybe it's not; only time will tell.


Exactly, this is the problem. I think a small language with limited features is better than one language which can do everything and needs a lot of expertise. Simple is beautiful. If developers need FP, then why not F# integrated with C#?


We’re all slowly moving to functional programming.

One day we wake up and decide we basically know Haskell, OCaml, etc. and take off the training wheels?


I don't think so. Pattern matching and the like are just window dressing. The two true distinguishing features of Haskell are that it's lazy, and the compiler enforces referential transparency. Neither of those is something you can get by simply adding features to an existing language.

And, if you aren't working within those strictures, you're still just not even in the same universe. No matter how many variables you make immutable, you'll still be able to log events by just executing a log statement, and you'll still be able to create a function that talks to a remote service, and call it from anywhere. So, you might use a few functional-inspired features when they're convenient, but the actual structure of your program is probably still imperative to the core.


By this standard OCaml and Lisp aren't functional either.


I wasn't intending to imply that they aren't. The standard being proposed when I originally wrote my comment was just "Haskell"; I think the "ocaml, etc" was a later edit.


It turns out that that "window dressing" is by far the most practically useful part of functional languages.


I keep wondering what it would be like to work in a language that gave some compiler support for ensuring that functions remain mostly pure. Like, perhaps mutating variables that are visible outside the function or getting input from external sources are disallowed, but logging and writing debug output are still permitted.


Haskell is like that with System.IO.Unsafe and Debug.Trace.


Haskell still doesn't have real referential transparency. It's all a matter of degrees and no language is pure, even Agda or Coq. The concept of IO as a world state you pass around is deeply flawed, especially with multiple cores. You cannot be in control of the world, so you don't really have equational reasoning. At any rate, there can be unsafePerformIOs lurking anywhere.

I really think delimited-continuation based algebraic effects are the way to go.


Well, F# is based on OCaml, and it has been recognized for some time that C# is adopting more and more features from F#. As the article alludes to, this is not without controversy. Some feel that C# is becoming too bloated, and that if you want to program in a functional style, you could just use F# in the first place. C# is much more widespread though, so this makes functional idioms more broadly available.

On the other hand, C# has also gotten deeper support for pass-by-reference and stack allocation, which I consider more low-level features. So I don't think the development is purely in a functional direction.


Every C# release, F# starts looking less and less compelling. There's already a level of friction, since so much of the underlying framework is C#-centric and littered with exceptions and nulls, and if you want to expose F# code to be consumed by C#, you have to dumb things down. And the tooling for F# is considerably less advanced than you get with ReSharper or even out-of-the-box in Visual Studio.

It's a bit of a shame; I suppose that is one of the reasons that some people in the F# community are really pivoting over to WASM and Fable for using F# in the browser.


I think you need less refactoring in F# in the first place. Recently, I've tried fsbolero and was really impressed. There's no way I'm going back to C#.


C# can only go so far down the road towards F# without introducing currying and partial application. I can't see those ever being introduced into C#.


You can sorta get partial application using lambdas, or whatever they're called in C#.

Like a poor man's hacky partial application, one argument at a time.

     namespace Bored
     {
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using static System.Console;
        internal static class Program
        {
           private static void Main()
           {
              var addFive = Add.Partial(5);
              addFive(6).Print(); // Prints 11
              Enumerable.Range(0, 5).Select(addFive).ForEach(WriteLine); // Prints 5 to 9
           }
           private static Func<int, int, int> Add = (int x, int y) => x + y;
        }

        internal static class Extensions
        {
             public static Func<T2, TR> Partial<T1, T2, TR>(this Func<T1, T2, TR> func, T1 t1) => t2 => func(t1, t2);
             public static void Print<T>(this T t) => WriteLine(t);
             public static void ForEach<T>(this IEnumerable<T> xs, Action<T> f) => xs.ToList().ForEach(f);
         }
     }
Define your Partial extension method for each arity of Func. I think it maxes out at 16?


Me too. You can solve the same problem in 100 ways, and each way can have its own tips and tricks and pros and cons! C# will be complex like C++ - not for new developers, only for experts!


While I don't entirely disagree with you, "default" C# style won't screw you up the way default C++ will, while masterful C# will give you a non-trivial amount of the benefits if they continue to make intelligent improvements.

Will it ever be perfect? Of course not. But I feel like the direction they are taking is a reasonable one.


This is the basic dilemma for any language. You can evolve and grow bigger, or you can stay simple and pure. I think history shows languages have to evolve to stay relevant - "small and elegant" languages like Scheme and Smalltalk are not widely used. But since you can't really remove existing features, evolving leads to more complex languages.


I think the gravity is towards a middle ground that looks like imperative, but is quite functional compared to C, Java, C++ or early C#.

Sum types, powerful type inference, and immutable-as-default.

Kotlin, Rust, and TypeScript are all recent-ish languages, and all fit into this category. Meanwhile F# approaches it from the other direction (start with OCaml and sprinkle on some imperative and OO).


Yes - did you see the Uncle Bob video about the future of programming from 11 years ago? It's happening today.


The ability to change values by reference is crucial to most real-world apps. This is not surprising, as those apps are modeled to represent a real, dynamic world.

Functional languages tend to disregard this, and demand composition over mutation. While this works for some domains, it's a burdensome complication for others.

Imperative languages are here to stay.


All the more reason to explicitly model time and not conflate it with the system runtime. As an example, banks have always added entries to a ledger such that the time history of events is explicit and preserved. Functional programming models this very well. An imperative programmer would probably write the final balance on a post-it note in pencil so that it could be destructively updated. Some OOP programmers have actually tried to model banking like this, because their language encourages it.

Another example might be a text editor buffer. Immutable persistent data structures allow one to get many features for free, such as versioning, undo, and lockless autosave. Memory is cheap now; there's less of a need to throw away some of my edits with destructive updates. Even filesystems and databases have realised this is a good idea. Destructive updates are bad for valuable data.

Imperative programming is only better for writing low-level system code, which is full of state that ultimately no one in the real world cares about.


Your points are not 'wrong' but I think you are being overly broad in the conclusion you are drawing.

Imperative programming is the dominant programming model that 99.99% of all real-world code for the last 60 years has been written in. It very well may be due to be replaced, but it certainly can't just be hand-waved away.


50 years ago 99.9% of computer programs were written in completely imperative assembly language programs, such that even arithmetic involved statements and temporary mutable variables. Fortran was a more mathematical higher-level language with boolean and arithmetic expressions. Functional programming just continues this trend, resulting in more mathematical, higher-level declarative languages, which as far as I am concerned, makes them more suitable for modelling real world business problems.


Turing destroyed everything Alonzo Church did ;-)


FP didn't invent immutable data, or mathematical functions. It just makes it easier to use them for some domains, while complicating others.

Imperative programs can have isolated, pure subsections of functionality which are based on composition and immutability, without having to deal with the "ideological" strictness of FP.

You're considering only a minute aspect of large systems. Any real banking system has much more than a ledger to deal with. And most of it involves huge amounts of state, which you can't just pile on top of RAM, or waste CPU on. Anything that doesn't serve a real business/user purpose is just waste.

And text editors don't need FP to support undo either. I'm not sure what point you are trying to make.

The most successful editors on the market, supporting all the features you mentioned, do quite fine with imperative code, and in fact, I'm not aware of any meaningful editor written in a functional language.

> Memory is cheap now

Not really. This comment strikes me as if it comes from someone who never dealt with significant amount of data.

FP can be extremely wasteful; take head-tail lists for example, which are the default container in many FPs. They waste object-container data, pointer data, and scatter the list items, causing cpu cache misses.

Claiming that imperative is only better for low-level is extremely disconnected from reality. Try writing a game with FP, and I assure you no one will care for code "beauty" when FPS sucks.

Sometimes, arguing with FP purists feels like debating socialists.

Also: go ahead and downvote me, as apparently FP zealots can't tolerate questioning their religion.


> Imperative programs can have isolated, pure subsections of functionality which are based on composition and immutability.

And functional programming languages can support imperative programming and mutation, even Haskell. It's a question of what should be the default, which depends on the domain you are working in.

> Any real banking system has much more than a ledger to deal with.

Most banking systems are cobbled together from third-party components and libraries. Low-level languages aren't really needed. Many banks use Python, which is orders of magnitude slower than Haskell.

> This comment strikes me as if it comes from someone who never dealt with significant amount of data.

Actually I have a background in distributed computing, where functional programming abstractions have made processing large amounts of data much easier (MapReduce, Spark).

> FP can be extremely wasteful; take head-tail lists for example, which are the default container in many FPs.

This is a strawman. I can write inefficient code in any language. C++ code can be littered with pointer indirection and memory barriers.

> Sometimes, arguing with FP purists feels like debating socialists.

I am a socialist.


This is rather heavily debunked already. You might want to investigate that.


You'd be more convincing if you provided a substantial response instead of referring me to "investigate". I've used FP languages extensively, and they all make mutation very hard, by design.

Do you disagree with this premise? Do you disagree with my claim that the world is dynamic, and that FP languages are limited in representing it?

Please address my points specifically if you have anything meaningful to contribute.


I would be more convincing, but the whole point of my comment was that you need to do the work to get up to date with where the debate is. Don't ask other people to do the work for you.

Anyway, FYI, that's why you've been downvoted multiple times - not because there's some FP cabal who downvotes pro-imperative comments.


Didn't expect otherwise. You'd use a logical response if you had one.

My comment is a direct response to the OP who implied FPs are going to "take over".

I'm being downvoted probably by FP zealots who can't deal with the idea that their solutions aren't perfect or universal.

BTW, I'm not "against" FP in general at all. I have used it in the past, and it has useful idioms to contribute to imperative languages, and it's perhaps useful on its own in some domains; I'm just pointing out its limitations.



