C# 8: Switch expressions (alexatnet.com)
139 points by azhenley 38 days ago | 120 comments




To me, all these IIFEs make a good case for adding a block-as-expression construct. Either take from Scala, where every block evaluates to the last expression executed in that block (and every statement is an expression); or maybe introduce some new syntax (‘do { .... }’ maybe?) if the prior would be a breaking change.
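
For context, the kind of IIFE being referred to (the article's own example is quoted further down this thread) has roughly this shape; the variable names below are made up:

    // requires: using System;
    int a = 2, b = 3;
    var result = ((Func<int>)(() =>
    {
        Console.WriteLine("side effect");   // arbitrary statements run here...
        return a + b;                       // ...and the last value becomes the expression's value
    }))();

A block-as-expression construct would let the two inner statements stand on their own, without the cast-and-invoke ceremony.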


There's a bytecode representation already - you can use it when compiling to IL: https://docs.microsoft.com/en-us/dotnet/api/system.linq.expr...

What's missing is a way to express an anonymous block expression in C#. I'm not sure that's a gap worth filling, though - if the logic is worth a block, isn't it worth a name?
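
For the curious, the Expression API linked above already models exactly this: Expression.Block takes the value of its last child expression. A rough sketch (not taken from the linked docs page):

    // requires: using System; using System.Linq.Expressions;
    var x = Expression.Parameter(typeof(int), "x");
    var body = Expression.Block(
        Expression.Call(typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }),
                        Expression.Constant("side effect")),
        Expression.Add(x, Expression.Constant(1)));      // the block's value is its last expression
    var addOneLoudly = Expression.Lambda<Func<int, int>>(body, x).Compile();
    Console.WriteLine(addOneLoudly(41));                 // prints "side effect", then 42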


“If” statements and loops already make use of anonymous blocks, and I think serve as ample demonstration that, no, not all blocks are better off named. What’s missing is the ability to use those blocks wherever an expression is expected.


This might be better titled "C# 8: switch expressions" (edit: it was previously "statement", as the author's post is titled).

Or actually, to bait a few more hn clicks (and provide a fuller description): "Pattern matching in C# 8 with switch expressions"


Yeah, this is a feature ported from F#, but somewhat watered down to play nice with C#'s existing architectural choices.


The lineage can be further traced back to the first ML languages such as SML, and even further back to LISP and RegEx-based languages.

https://en.m.wikipedia.org/wiki/Pattern_matching


Agreed. I was wondering what they could have possibly changed in ye olde switch statements we’ve all been using for decades.

Edit: ye, iOS, ye!


This might be better titled "changing things for the sake of changing things and appealing to the programming language geeks".

The new syntax is not intuitive, not more readable, just a bit more terse.


Using switch as an expression makes sense, just like using if as an expression (often written as a?b:c) does. It makes it obvious that all you are doing is changing one variable, and it makes initialising constants a lot cleaner.

Once you start making switch expressions, the break keyword makes no sense. And with pattern matching, the case keyword technically wouldn't be necessary for normal switch statements anyway, so they might as well drop it for the new syntax.

I find this completely reasonable, and I think it's very readable, especially compared to the closest equivalents: putting a single switch in a function and calling that, or using nested ternary operators:

    var sqlOp = op switch {
        "&&" => "AND",
        "||" => "OR",
        _ => throw new NotSupportedException()
    };
vs

    string GetSqlOp(string op) {
        switch (op) {
            case "&&": return "AND";
            case "||": return "OR";
            default: throw new NotSupportedException();
        }
    }
    
    // ... far away
    var sqlOp = GetSqlOp(op);
vs

    var sqlOp = op == "&&" ? "AND" : op == "||" ? "OR" : throw new NotSupportedException();


The break keyword in switch statements makes sense only if the programmer considers switch as a syntax sugar for goto, which are the semantics in C. Once you have a language that doesn't use goto but that still requires breaking each switch case, the only rationale is inertia for people familiar with C. (That said, C# does have goto, right?)


the break keyword in switch statements also allows fallthrough, allowing for a terser way to write state machines, at the cost of making the whole construct less intuitive and readable.

In a switch expression of course that is moot since break is a statement, and statements can't appear in expressions.


The thrust of my above comment is largely about how the idea of "enabling fallthrough" demonstrates how poorly the switch statement is usually taught: most people have an intuitive grasp of it as an alternative to if/else if, when in truth it's sugar for a jump table. Break exists to prevent fallthrough rather than enable it! And indeed fallthrough isn't useless, though decades of experience have demonstrated that it is the wrong default, and newer languages like Go do better by making break the default and having a dedicated fallthrough keyword for when you want it.


Of course, C# doesn't actually support non-trivial fall-through. You can have

x: y: DoX()

but you can't have

x: DoX() y: DoY()

at all.


But one can do

    case x:
        DoX(); goto case y;
    case y:
        DoY(); break;


C# doesn't allow fallthrough. Every branch in a case clause must exit the case clause; fallthrough can only be achieved with a goto at the end of the case clause. The `break;` requirement is completely superficial; there's no reason why it couldn't be a classical block/statement choice:

    case "foo" { /* ... */ }
    case "bar" /* ... */;


Yes, C# does have goto. Ayende made a blog post [1] about some interesting uses of goto for performance reasons. Also likely for performance reasons, async/await state machines in .NET essentially use goto (br.s in CIL). VB.NET has GoTo as well.

1: https://ayende.com/blog/183553-A/using-goto-in-c


Actually, I think it is rather intuitive and readable - but then again I programmed in lisp for years.


For some time, I've thought that the direction of C#'s evolution makes a lot of sense if you look at as the designers' effort to cope with Haskell envy (pattern matching, type inference (which is really just class inference), and monad comprehensions).

But, it makes almost complete sense if you instead look at it as coping with Lisp envy. I'm still amazed that they managed to sneak in function literals and FEXPRs (Linq), a metaobject system (dynamic + DLR), a limited form of call/cc (async/await), first-class metaprogramming (Roslyn -- albeit more in the style of Smalltalk than Lisp), and a REPL.


I don't think it's Lisp envy. They're porting stuff from F#, which as far as I know is from the ML family.


This change was definitely inspired by F# (or ML ultimately). It's far from the first feature to be brought over. Look at async/await, usable tuples, lambda functions, range syntax, etc.


AFAICT, this is kind of true for a lot of F#: F# is where they do the experimentation, C# is where they do the more heavily engineered "fast" version.


That seems to be true at least to some extent. However, there are some noteworthy parts of the two languages that seem to be intentionally different. For example, F# optimizes tail recursion while C# doesn't.


LINQ actually came from Haskell.

"Confessions of a Used Programming Language Salesman, Getting the Masses Hooked on Haskell"

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....


Well, to be clear, I was being tongue-in-cheek when calling it "envy".

I just mean that C# looks messy from the point-of-view of an ML-wannabe, but pretty elegant as an aspiring Lisp. And I say this as someone that is fluent in F# and prefers functional programming in the style of ML rather than that of Lisp.


This actually seems to be somewhat of a trend in C# as of late. Ever since they introduced lambdas, really.

While I use them every day, my favorite is still “replacing” String.Format with string literals. Shorter, sure, but less intuitive particularly for people coming from another language. You can do roughly the same thing in PHP with backticks, but most people avoid them for the same reasons.

I’m not sure how I feel about the trend as a whole. As long as I can continue using the old methods as well I suppose I don’t have much room to complain, but sometimes it does feel like we’re just changing for the sake of change and that’s usually a bad indicator.


String interpolation isn't simply a wrapper around String.Format(); it also offers compile-time checking, which addresses a real limitation of String.Format(), while offering better performance than concatenation.

For example:

     var h = "Hello ";  
     var w = "world";     
     var output = String.Format("{1} {2}", h, w);    
     output = $"{h} {w}";   
Would compile fine but throw on the String.Format() line due to the typo.


I wasn't necessarily saying it doesn't offer any benefit, more that it's not a benefit that matters to a lot of people. Between code intelligence tools and actually running the code you write (not even in an actual QA process, but simply executing each line), that just doesn't seem like a big deal to me.

There are also instances, such as with Console.Write, where you are now formatting a string in order to pass it into... a method that formats strings. Yes, the same benefit is still provided, but with a fairly obvious decrease in clarity and increase in redundancy.


> fairly obvious decrease in clarity

You feel that "{0} {1} {2}" provides better clarity than $"{FirstName} {MiddleInitial} {LastName}"?


>a fairly obvious decrease in clarity

In what ways specifically? Using String.Format you have to keep your indices correct, and 'compiling' the string in your head requires jumping back and forth between the format string and the args.

Interpolation wins on every count as far as I'm concerned.


String interpolation is more intuitive than trying to match a placeholder in the string with a list outside of the string.


I suppose that depends upon where you’re coming from. Many, many languages have an sprintf-style construct of one form or another, so I think it’s actually much more familiar to most people to have a placeholder with an optional formatter. To each their own, though.


> Many, many languages have an sprintf-style construct of one form or another, so I think it’s actually much more familiar to most people to have a placeholder with an optional formatter

Many languages also have format strings with interpolation: JavaScript, Python, Ruby, Swift, PHP, etc.


Well, let's take the TIOBE index as a rough indicator of popularity: https://www.tiobe.com/tiobe-index/

Java, C/C++/Objective-C/Go (I'm lumping them together because they're basically C + C NextGens), Matlab, SQL, and ASM don't have them.

Everything else has them.

Of the languages that don't have it, Java is planning to add it, I don't see the C family changing because it's very conservative, and the others are domain-specific and don't care that much about string operations.


C++ has string streams which actually work more like string interpolation than like sprintf just without adding custom syntax.


I have a hard time believing any programmer with more than a couple of months of experience would have a moment of difficulty figuring out what an interpolated string is doing.


Sprintf is very error prone. Maybe people are used to it but string interpolation is clearly superior because it removes a lot of potential errors. With the trend towards safer languages sprintf and similar techniques need to go away.


String.Format("Is {0} to read {2} {1} of the {3} out of place", "harder", "half", "because", "string is").


Ironically that’s actually also one of the key advantages of it. If you have a long string with many placeholders in it and need to add a new one in the middle... you just add it and don’t have to figure out where in the list of params to insert the new arg.


This is not much of an advantage with respect to string interpolation as you could just as easily put your new value in a (nicely named) variable and insert it wherever.


Not so nice for the next guy, though.


Yes, God forbid the variable appear more than once.


I think string interpolation is one of the best things they did to C#. String.Format is pretty error prone.


I'm on your side. I've found very little use for most improvements since C# 4, but I always use literals now. Things like value tuples and string literals are huge improvements to my code's readability and flow.


Value tuples are good too. Somehow I don't seem to get too friendly with the syntax but they are very useful.


Local functions can be a serious win over the alternatives of far away functions with no closure semantics or huge lambdas.
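
A minimal sketch of the point (names invented): a local function has a name, sits next to its only call site, and can close over the enclosing parameters and locals, unlike a helper defined far away.

    // requires: using System; using System.Collections.Generic;
    int CountWithPrefix(IEnumerable<string> items, string prefix)
    {
        int count = 0;

        // Local function capturing `prefix` from the enclosing method.
        bool Matches(string s) => s.StartsWith(prefix, StringComparison.Ordinal);

        foreach (var item in items)
        {
            if (Matches(item)) count++;
        }
        return count;
    }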


Very true.


> but less intuitive particularly for people coming from another language

Maybe for people coming from C++, but string interpolation is a feature in Python, JavaScript, PHP, etc. It's not exactly an unusual concept (like it was back when PHP started doing it).


It's not simply a matter of "uncomfortable" syntactic sugar. It's a fundamental change from a statement to an expression.

Also, there is an elegance to the functional style of pattern matching. And C# has been incorporating a lot of functional paradigms for more than a decade now which may initially feel "out of place" but ultimately benefited the language greatly.


More like the trend of “bolting everything they’ve ever heard on to the language spec”.

This is a half-assed, syntactically ugly implementation of case statements in functional languages like Elm or Haskell - the point of which is to cover all possible executions of a branch explicitly, to prevent runtime errors.


    "+" => ((Func<int>)(() => {
        Log("addition");
        return a + b;
    }))(),
This is incredibly ugly. Why couldn’t it be “+” => {stuff}


That is ugly.

What he's doing is creating an anonymous function which takes no parameters and returns an int (that's what a Func<int> is), and then executing that function; that's the () bit after })).

He could write it nicer as:

   "+" => Add(),
 
Where he defines Add as

   int Add() => a + b;
He's just doing too much inline; 99% of C# developers would not write code as he has done there.

It's the blogger's code style that is the problem here, not the new switch syntax. Although he probably did it to make the example more self-contained and terse.


> He's just doing too much inline

It's two short lines.

The problem is that he's writing a small but more than one line bit of code, so the overhead of the anonymous function is just as big as the code itself, and arguably more complex.


I agree with your conclusion, but let's keep the conversation civil.


The conversation is quite civil already.


Ah, too late to edit/delete my comment. They edited their post, perhaps in response, which is now much more considerate.


Rather than adding syntax for it, the sane approach would just be to generalize the functionality.

    T Log<T>(string msg, Func<T> func) {
        Log(msg);
        return func();
    }

    var result = operation switch {
        "+" => Log("addition", () => a + b), 
        "-" => Log("subtraction", () => a - b), 
        "/" => Log("division", () => a / b), 
        _ => throw new NotSupportedException()
    };
This is shorter, cleaner, and how most folks would actually write something like this, I imagine.


I don't know a great deal of C#, but couldn't you just do:

    T Log<T>(string msg, T result) {
        Log(msg);
        return result;
    }
    
    var result = operation switch {
        "+" => Log("addition", a + b), 
        "-" => Log("subtraction", a - b), 
        "/" => Log("division", a / b), 
        _ => throw new NotSupportedException()
    };
You already have a lambda for calling Log(), so no need for a second one to call the inner function. This does change the order of evaluation, though.

I just mention it for academic interest. To be honest, I find the idea of passing a parameter to Log() (whether it's the value or the function) that you don't actually want logged, just so you can shoehorn two statements into an expression, abhorrent. Just write the two statements out! I think an old-fashioned switch statement is the right tool here.


I think the order of operations here is important. Take, for example, the case of operation == "/" and b == 0. You might find it valuable to log that you're doing a division, before it catches fire due to the divide by zero. Obviously a contrived example, but that was my thinking with the inner function. Either way, pretty straightforward.


>To be honest, I find the idea of passing a parameter to Log() (whether it's the value or the function) that your don't actually want logged, so that you can shoehorn two statements into an expression, abhorrent.

You've basically reimplemented Haskell's Debug.Trace[0] :)

The difference is of course that Haskell doesn't really have statements in the same way that C# does, so in Haskell it's necessary to turn the log/trace function into an identity function after you've applied the message string.

I agree that it is ugly, and in any serious project I would use a proper logging setup, but I almost feel like the ugliness is a feature not a bug. It's like an extra cost when doing debug-by-print, which at least seems to keep me more mindful about cleaning up after myself.

[0] http://hackage.haskell.org/package/base-4.12.0.0/docs/Debug-...


The correct answer to "how to run multiple commands in the statement" would have been "don't", instead of this abomination. If you want to do multiple things, don't use a syntax designed for expressions; use a syntax that supports blocks (like, you know, regular if/else or switch statements).


I was also wondering why simply opening curly braces after the => was not allowed. It would be consistent with lambda syntax generally.


I couldn't agree more, {} already denotes a block of code in C#. This is more than ugly and will soon become a nightmare for maintaining code.


I am not sure I like the path c# is going down.

For years the language has been drifting away from being aesthetic and easy to understand and read.

With the functional implementations and patterns I believe the language is trying to embrace too much.

This is not specific to the switch expression; it is a general trend.

You can write C# in too many forms and I don't believe that's really good.


I would not take pattern matching as the right example. I learned pattern matching at university and always missed it in C#. It is easy and straightforward. If you love lambdas and LINQ, i.e. C# 3.0, pattern matching is easy.

However, when it comes to modern C# and memory management, I also think that readability suffers a lot. Memory management at that level is well known to professional C/C++ folks, but most of us are just overwhelmed by it.


I agree. It has the flexibility of Scala without the extra power/abstractions Scala gives you. I don't love Scala's flexibility, but I put up with it because of its power; the trade-off isn't worth it for C# to me.


> You can write C# in too many forms and I don't believe that's really good.

Why not? And what, exactly, defines "too many"?


I know people meme a lot about Go and "not having generics," but I really feel that a lot of the things left out of Go have benefitted it. I've never really been able to jump into a project without a lot of mental overhead. With Go, most code everywhere is written pretty idiomatically, making it easy to approach and read. I've heard similar things about C#, but my experience hasn't been as great.

This is a big problem in the JavaScript world in my opinion. So many different ways to solve common problems that it's difficult to approach. OP might be worried this could be happening to C#.


We could go with a language with just if statements, while statements, function calls and "return" and be extremely easy to jump into. :)

While a bit snarky, I do think it is one of the fundamental problems with programming language innovation. No matter how smart we are, learning is effortful. And we frequently judge a language not by how productive it is, but by how quickly we can start feeling productive in it. Which often favors familiar concepts over more powerful concepts.

There is a very long feedback loop in properly evaluating a language. At least 1-2 years. You need not just figure out how to do what you already know in the language, you need to learn the new way to think in the language. And that takes time and solving lots of problems. This is only sped up if the language's concepts are kept very similar to a language you already know.

We are the bottleneck that slows down language improvement. There is no way anyone can convince me that the hodgepodge of Algol derived languages is one of the best ways out there of writing code. But most new languages will be similar because the hurdles of trying to teach people anything else alongside the actual new interesting concepts of your language are too much of a blocker for any reasonable adoption.


I suppose, but that seems like a natural consequence of C#'s desire to move the language "forward" in terms of implementing new features (such as switch expressions or non-nullable reference types) while also maintaining backwards compatibility (for example, you can still use the non-generic "ICollection" interface if you want to). Just a different set of priorities.


>I know people meme a lot about Go and "not having generics," but I really feel that a lot of the things left out of Go have benefitted it.

Yes, that's why we program in Brainf__k here: It only has the bare essentials and it's so easy to learn.


Perhaps he means something like Python's motto of "There should be one--and preferably only one--obvious way to do it."

https://www.python.org/dev/peps/pep-0020/


I think all the new features are improvements, but I agree the language is getting big with many ways to do the same thing. I don't think MS will ever do a 'Python 3' and cut backwards compatibility.


> Why not? And what, exactly, defines "too many"?

I think Scala was innovating in this respect before Kotlin ate its lunch (it may still be).

You could use Scala as everything between Java++ and Haskell--. Developers proficient in Java++ often had a difficult time understanding code written in Haskell--.


Language designers are not immune to the "code must change constantly otherwise it's abandonware" mentality of today's developers.


I think I agree. I used to write a lot of C# v2 and loved it. There didn't seem to be anything 'missing'. Each version since then has added nice things but are they really necessary? Now the language looks huge and complicated and because of that, I'm not sure I'd ever want to use it again.


C# 2? The freshness of C# 3! The comfort improvements of C# 4, C# 5 and C# 6! And now the new freshness in C# 7.0.

I agree with other posters that this introduces a new style, but when I look at C# 2.0 I see a language I do not want to use anymore. It is old and clumsy. Like Java :)


C# 2 did not have Linq. Linq is the best feature ever.


Actually, adding LINQ to C# did not introduce any new functionality at all. Everything you can do in LINQ you could already accomplish in C# without LINQ. Yes, I suppose someone could say that the older code was uglier, but it already had the same capabilities as LINQ.

That's been the case for the majority of supposed "new features" in C# for at least the past 8 years or so -- LINQ, inline variables, anonymous functions, string literals etc. It's all just syntactic sugar on top of existing features.


That's the general nature of a Turing-complete language. The purpose of every feature is making things less ugly.


I'd say it's more the general nature of Microsoft in this particular case. "There should be three different ways to do that" has been the case since Windows 3.1.


There is no way you can implement Linq in C# 2.0. C# 2 does not have lambdas and expression trees, so you cannot write an expression which can either be compiled into regular code or into an expression tree depending on context.
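
A small illustration of that duality, which only became expressible in C# 3 (a sketch):

    // requires: using System; using System.Linq.Expressions;
    Func<int, int> asDelegate = x => x + 1;          // compiled to IL you can call
    Expression<Func<int, int>> asTree = x => x + 1;  // compiled to a data structure you can inspect

    Console.WriteLine(asDelegate(41));               // 42
    Console.WriteLine(asTree);                       // x => (x + 1)
    Console.WriteLine(asTree.Compile()(41));         // 42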


IIRC, LINQ can be translated to C# 3.0 without LINQ, so I guess it is syntactic sugar, but for C# 3.0. To implement it in C# 2.0 one would need to replace expression trees with “home made” query specification classes. It would be a lot of work, it would not be standardized, and it would not be integrated in the language.

So the real new feature in C# 3.0 is expression trees. LINQ is only one place where expression trees are used.


LINQ is surprisingly powerful when you start overloading SelectMany, as that allows you to implement Monads nicely.
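
A rough sketch of what's meant (the Option type below is invented for illustration, not a BCL type): defining a SelectMany overload with the right shape is enough for LINQ query syntax to act as monadic bind.

    // requires: using System;
    public readonly struct Option<T>
    {
        public bool HasValue { get; }
        public T Value { get; }
        private Option(T value) { HasValue = true; Value = value; }
        public static Option<T> Some(T value) => new Option<T>(value);
        public static Option<T> None => default;
    }

    public static class OptionLinq
    {
        // This is the overload the compiler looks for when desugaring
        // "from a in ... from b in ... select ..." -- i.e. monadic bind.
        public static Option<TResult> SelectMany<T, TMid, TResult>(
            this Option<T> source,
            Func<T, Option<TMid>> bind,
            Func<T, TMid, TResult> project)
        {
            if (!source.HasValue) return Option<TResult>.None;
            var mid = bind(source.Value);
            return mid.HasValue
                ? Option<TResult>.Some(project(source.Value, mid.Value))
                : Option<TResult>.None;
        }
    }

    // Usage, assuming some hypothetical Option<int>-returning ParseInt helper:
    //   var sum = from a in ParseInt("2")
    //             from b in ParseInt("3")
    //             select a + b;   // Some(5), or None if either parse fails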


Do you know what "expression trees" are and how they relate to LINQ?


Java is also getting switch expressions next month (as a preview feature in JDK 12): https://openjdk.java.net/jeps/325


It is exactly the same, isn't it?


It's not. C#'s switch can pattern match on value and type: https://blogs.msdn.microsoft.com/dotnet/2019/01/24/do-more-w...

Java's switch still operates only on String, int, short, byte, char, and their wrapper types: https://blog.codefx.org/java/switch-expressions/
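
For example, a single C# 8 switch expression can mix constant, type, and null patterns (a sketch; names invented):

    static string Describe(object o) => o switch
    {
        0 => "the int zero",
        int n when n < 0 => "a negative int",
        int n => $"the int {n}",
        string s => $"a string of length {s.Length}",
        null => "null",
        _ => "something else"
    };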


    // now you kind of enforced to
    // handle all values, otherwise
    // it will not compile
I fail to see how it will not compile if all values are not handled. C# switch is not traditional pattern matching, because you don't get compile time feedback.

Claiming that it does is disingenuous.


This page[1] (which was linked elsewhere in the thread) provides the following info:

> Since an expression needs to either have a value or throw an exception, a switch expression that reaches the end without a match will throw an exception. The compiler does a great job of warning you when this may be the case, but will not force you to end all switch expressions with a catch-all: you may know better!

I haven't used the feature yet myself, so I can't say it definitively - but it sounds like it does provide compile time feedback while still allowing you to compile.

[1]: https://blogs.msdn.microsoft.com/dotnet/2019/01/24/do-more-w...


Specifically, it will throw SwitchExpressionException[1]

[1] https://github.com/dotnet/corefx/issues/33284#issuecomment-4...
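
A small sketch of that behaviour (the enum and names below are invented): a non-exhaustive switch expression compiles with a warning about unhandled values, and throws SwitchExpressionException if an unmatched value arrives at runtime.

    enum Direction { North, South, East, West }

    static string Label(Direction d) => d switch
    {
        Direction.North => "up",
        Direction.South => "down"
        // East/West are unhandled and there is no discard arm: the compiler
        // warns here, and Label(Direction.East) throws SwitchExpressionException.
    };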


It sounds like it does a "best effort" and then gives up; unfortunately, the article doesn't describe exactly to what lengths it goes here.


In TypeScript, this can be achieved using an assertion to the never type. I don't think it exists in C#, though.


They aren't real guarantees in TypeScript, because the type system doesn't guarantee anything about the actual values that are flowing through the system. i.e. you have to code everything in TypeScript and make sure everyone follows the best practices.


Most statically-typed languages don't verify types at runtime either. As long as you enforce types at the boundaries between typed and untyped code, you should be fine. And enforcing invariants at the boundaries of your code is pretty important for all languages.


the point is that in FP languages, pattern matching uses closed ADTs, so you always know at compile time if you've covered all the cases.


Huh, you have to do something with returning from a switch in TypeScript if you want it to make sure all cases are covered. So it's close but not quite there.


Maybe this type of switch works differently, because there's no good default default case.


The tuple patterns strike me as the most interesting way to use switch expressions. But (and maybe I'm stuck in my ways) I'm kind of on the fence in terms of using this. On the one hand, it shortens some code, but on the other, it's more difficult to read than an if statement (in the case of tuples). I guess I'm not completely sold on this yet. I know the big argument in the article was that this is a way to ensure your variable has a value when you go to use it. When I hear things like that, my first reaction is "then write better code if you're running into that problem."
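
For reference, the tuple-pattern form being discussed looks roughly like this (a sketch along the lines of the usual rock-paper-scissors example):

    static string Play(string first, string second) => (first, second) switch
    {
        ("rock", "paper") => "paper wins",
        ("rock", "scissors") => "rock wins",
        ("paper", "scissors") => "scissors wins",
        (var a, var b) when a == b => "tie",
        _ => "swap the arguments"   // symmetric cases omitted for brevity
    };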


This is perhaps the most obvious example yet of what PG described as "taking features from Lisp and gluing them to C". That's not necessarily a bad thing -- this is a great and useful feature, and one I frequently miss when using C-family languages.

I wonder, though, at what point a programming language collapses on itself, from too much syntax. I haven't used C# since the 4.x days, and I recall back then it was already on the verge of being too complex for me to fully understand. Admittedly my personal threshold of complexity is lower than most programmers', but this is a really big language. Do they plan to keep adding syntax indefinitely?

At some point, it's going to have to get eclipsed by something else. The next generation will be as effective a tool, for the kinds of things most people want to make, and not too complex for most people to understand. Perhaps part of that would consist of just using expressions for everything from the start.


This is my feeling as well. I suggested that a c# lite or a rethinking of c# would be awesome. I got downvoted to oblivion though. But I do think that there's too much mental capacity used just reading c# if the code really uses all the features of it. I like go lang cause of this, but go is a bit too far on the other side of the spectrum for me.


> I like go lang cause of this

I'm a huge fan of C#, but the lack of features makes Go so damned pleasant to read. As an extreme example, the new C# record type really breaks the neurons I have dedicated to the language:

    public struct Pair(object First, object Second);
As soon as I see parentheses, it's a method. I want record types, but that syntax is foreign. I wish Anders would spend a few months back on the C# team; he has a great knack for keeping things as consistent and minimal as possible.


    public struct Pair(object First, object Second);

Scala, Kotlin, Rust among others use the same or similar syntax.


Having something more akin to typescript's strict compilation might be more reasonable.

You'd mark out older syntaxes/APIs as restricted or deprecated and compilations would yield warnings or errors (when those features were used) depending on the compiler level.

This would allow new projects to be developed using only the newer (standard) syntax, while allowing a transitional vector for older projects.


> I suggested that a c# lite or a rethinking of c# would be awesome.

"a rethinking of C#" is a good description of F#.


> I haven't used C# since the 4.x days, and I recall back then it was already on the verge of being too complex for me to fully understand.

I can't say it's complex. If anything, the complexity has been reduced with local type inference and lambdas.

I think you wouldn't be able to tell C# from Javascript these days :)


Though I'm sad that a lot of developers dislike Scala, I'm certainly glad to see some of its* good parts being incorporated into current and new languages: pattern matching to various degrees in Kotlin, now C#, and soon Java 12.

* I acknowledge my ignorance in thinking that Scala was the first one to the table with this.


If you are curious about where it came from look into Caml Light, ML and Miranda.


It's fair to call it the first substantive attempt to integrate a mainstream OO language with traditionally FP ideas.

That's a big part of the story here.


I'm not sure what this accomplishes over a normal ugly switch.


The new switch expression is to the old switch statement what the ?: operator is to the if/else statement.

Sometimes you want to provide one of two different values based on a condition. If you use if/else, then each statement body has to contain either an assignment (if the value is to be used later in the same function), or a return (if the value is to be immediately returned from the enclosing function). It is often more clear and concise to use a ?: expression at the site where the value is to be used.

Now consider the case where you need to choose among three or more different values. You can use chained if/else statements, which have the same drawback of needing to contain either assignments or returns. Or you can use chained ?: expressions, but there's really no good way to format this in a way that remains clean and easy to understand.

The new switch expression provides a cleaner alternative to chained ?: expressions.
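
A tiny worked example of the progression described above (names invented):

    int code = 2;

    // if/else: needs a mutable local and repeated assignments
    string viaIf;
    if (code == 1) viaIf = "one";
    else if (code == 2) viaIf = "two";
    else viaIf = "many";

    // chained ?: stays an expression, but gets hard to format past a few cases
    string viaTernary = code == 1 ? "one" : code == 2 ? "two" : "many";

    // C# 8 switch expression: still a single expression yielding one value
    string viaSwitch = code switch
    {
        1 => "one",
        2 => "two",
        _ => "many"
    };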


For various reasons, probably due to linguistics, programmers will often make dumb compromises just to avoid writing an extra few lines of code. Many of these constructs (see especially Python's new walrus operator) are trying to let people express things in the way they want so they'll write better code.


You could use it as an argument to a chained base() constructor.
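
For instance (a sketch; the types and names here are invented):

    // requires: using System;
    class Shape
    {
        protected Shape(int sides) { /* ... */ }
    }

    class NamedShape : Shape
    {
        // A switch expression is just an expression, so it can sit in a
        // base(...) argument list, where statements are not allowed.
        public NamedShape(string kind)
            : base(kind switch
            {
                "triangle" => 3,
                "square" => 4,
                _ => throw new ArgumentException("unknown shape", nameof(kind))
            })
        { }
    }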


it's not ugly...


"The new switch expression is quite simple. Anyone familiar with the switch statement can say what the following code is doing"

Uhmmmm....I'm a professional C++ programmer who uses switch statements quite a lot and no, I couldn't instinctively tell what it was doing straight away. Am I being dumb?


This is clearly referring to C# switch statements. Not that they are very different from switch statements in most languages anyway.


This makes sense together with algebraic data types, but I'm not sure if that's implemented.


They’re saving ADTs for 9.0, at which point they’ll beat their chests and loudly proclaim them to be the Best Thing Ever of All Time.


It's OK, it's a good direction, so I won't complain.


I wouldn’t hate it. Especially since my day job has me doing a bunch of C#, anyway.



