New in C# 10: Easier Lambda Expressions (dontcodetired.com)
133 points by FairDune on Nov 26, 2021 | 105 comments



A couple of decades ago everyone was on static types. But then people got sick of the boilerplate, and in what I think was a backlash, dynamic languages like JavaScript, Python, Ruby, etc. took the world by storm. With the raised bar of developer expectations when it comes to agility, static type systems were forced to innovate, and now type inference and related features are coming to all static languages and bringing us back around to a best-of-both-worlds situation. Exciting times.


Static type systems with global type inference never had this problem (OCaml, for example, appeared right around the time Java did). However, for some obscure reasons the shittier the technology, the more chances it has at becoming popular.

Try Elm as a simple example (can be done in a weekend), it'll probably blow your mind. You don't have to write type annotations at all, but the compiler complains at build time if the same function is called with two different types in two places.


Local inference is a gift sent from heaven, global inference not so much. Reading OCaml (and F#) is exhausting because you have to look into the implementation of each function (or into a separate interface file) to figure out how it's supposed to be called and what it is going to return.


My IDE puts annotations on every function and I can hover over values. Typing them out seems pointless with the right tooling.


You can hover over only one thing at a time, but you can skim a page with your eyes very quickly. Making important information cumbersome to access is a terrible idea.


One could argue that this is better covered by IDE/tooling showing you inferred types / suggestions.


If the function is printed in a journal article, or you just use something like 'less' to look at the file, all that information is not available.

I find it kind of weird that we've normalized not being able to read the code outside of the proper program for it. Since those IDEs often cost money it starts to feel a bit like steps towards a walled garden to me, and I'm not sure it's good for actual computer science.


I think development containers are a better way to deliver code with a paper.

Besides, the paper itself should optimise for clarity and global type inference languages allow type annotation where it might help. They simply let you skip type annotations which only add noise.


Not sure I'd say the IDEs "often cost money" these days


Wait for VSCode Enterprise.


That's what your IDE or LSP is for.

That's also the difference in 'philosophy' between OCaml, F# and Haskell. Haskell has the 'same' global type inference, but the community sees type annotations as (often the only ;) documentation.


> However, for some obscure reasons the shittier the technology, the more chances it has at becoming popular.

The reasons are simple: it's promoted by a big company. Same reason why C# and Go are popular.


Exactly my experience. Coming from C# around 2014 to PHP and then Node.js (due to company setup), TypeScript is the best of both worlds.

Won't go back to static typing for a while unless I need a higher-precision, higher-reliability module.


Damn, I’m missing Union Types in C# so much! TypeScript makes it so simple to compose types.


Indeed. Union and intersection types are a godsend. Furthermore, arguments don't need to be class instances of an explicit type; an object with the required properties is enough. It's much, much easier to work with, especially during templating / developing an engine.


What you see as innovation is just slow diffusion into mainstream, the last decade is basically the 80s SML family.

The funny .. or sad.. part is that the C# 9 explicit ~verbose style would have been the only one accepted before. If you wrote implicitly typed variables people would get angry (I think there are many online articles about how Java's `var` was bad)
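For anyone who hasn't followed the change being discussed, a minimal before/after sketch (made-up names, written as a .NET 6 top-level program):

    using System;

    // C# 9: a lambda has no type of its own, so the delegate type must be spelled out.
    Func<int, int> square9 = x => x * x;

    // C# 10: the lambda now has a "natural" delegate type (Func<int, int> here),
    // so var is enough as long as the parameter types are explicit.
    var square10 = (int x) => x * x;

    Console.WriteLine(square9(4));  // 16
    Console.WriteLine(square10(5)); // 25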


> What you see as innovation is just slow diffusion into mainstream

It can be both. Type inference isn't a brand-new idea, but it still takes work to diffuse it into mainstream, practical languages, especially retrofitting it onto existing languages that weren't designed for it. That still counts as innovation in my book.


Yeah fair point. I'm just a bit salty that a lot of people only see the late stage effect onto their language and might assume that it came out of a vacuum you know. Then they look at you weird with your scheme, sml and prologs. Alas


I love type inference. Strong typing makes maintenance and refactoring much easier and with type inference the code looks very good.

I still remember how ugly and tedious STL iterators were in C++. Now it’s just “auto”. LINQ also wouldn’t work without type inference.


Some form of type inference was proposed for C: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2735.pdf


Haskell has been quietly doing this since the 90s though. I guess being a research-first language it isn’t held back by trying to have mass appeal (eg to be C-like to be familiar)


Haskell is held back by pushing category theory into their tutorials. Want to do IO? Great, first learn about monads.


That's not true and never has been. Want to do I/O?

    main = do
        putStrLn "Who are you?"
        name <- getLine
        putStrLn ("Hello, " ++ name)
There. Do you really need to know something about monads to understand that example? No.

Do you need to know how do syntax and the assignment operator <- work? Sure. But that has nothing to do with category theory. That's just syntax.


To be fair, once you get a compiler error you’ll at least need to know:

a. how do notation is converted to plain Haskell
b. what the bind and return functions do, for IO

So you can figure out:

c. Why your types are not lining up

To understand Haskell in general you need to realise that do, bind and return are generic and can be used not just for IO but also for, say, Maybe, List, etc.

Basically you need to know most practical things about monads!

I had a bad time writing Haskell do notation until I understood monads. I used to write imperative code, try =, try <-, and keep undoing my typing back to known working states, etc., to try to magic the code into compiling.


That's a good point. It was long enough since I learned this stuff that I mentally translate "Monad m => m a" into "IO String" for example in type errors, to the point of not even noticing how cryptic the generic types can be!


Cunningham's law in action, everyone!


I’ve not seen much category theory in tutorials. I have seen it in some conference talks but they are aimed at people who want that, and you don’t need to watch those.

I’m doing a take 2 now of getting into Haskell again but ignoring advanced language features (unless forced on me by a library) and ignoring category theory. The goal this time is using Haskell to just build stuff.


C# has had type inference for a very long time.


With this the language gets a little more beautiful. On a related note, I just left a C# job at Microsoft (microservices) and moved to a Java shop where we're all really just starting to learn Kotlin and migrate our work there. I'm shocked how many times we've learned a fancy feature and I get to say, "actually, C# has something just like this too" (e.g. operator overloading). It is a surprisingly modern language keeping up with the other ones.


Err.. I'm too drunk to check, but I'm fairly sure C# has had operator overloading for like ten years?

Having worked recently in it, I found Java to be relatively (as a language, not for tooling) backwards compared to C#.


C# has supported operator overloading since C# 1.0 in 2002. See section 7.2.2 of the original specification.


I think that's what the poster you're replying to is saying.


No, the reply raises a valid issue. I was implying that op overloading was a modern thing, but it's apparently been around for a very long time, and doesn't seem to be the best example of a "modern" feature of languages.


C++ had operator overloading in the 90s. Mostly we decided it was a bad idea.

Every generation must learn this for themselves though.


Operator overloading has been traditionally overused, especially in C++ (shift operators for iostreams, C++ iterators). Java was the peak of the push-back against that. C# has operator overloading, but forbids many things that were possible in C++:

  * `operator=` and `operator ,`
  * `operator new` and `operator delete`
  * `operator +=` overloaded differently from `operator +`
  * `operator &&` and `operator ||` overloading works differently in C#, so that it preserves the short-circuiting behavior
  * `operator ++` only can be overloaded once in C#, with the compiler automatically handling the difference between pre- and post-increment
But more crucially, the C# standard library uses operator overloading only for types like `decimal` and `BigInteger`. C# programmers can go years without ever overloading an operator, while still profiting from it whenever they use `BigInteger`. It's very different from the C++ culture where

  * everyone needs to learn about how to overload `operator=` (for memory management)
  * the standard library encourages abuses of operator overloading such as shift operators for iostreams
It should be unsurprising that novice programmers abuse operator overloading when the C++ language teaches them exactly that.
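A small sketch of what that looks like in practice; Money is a made-up type:

    using System;

    public readonly struct Money
    {
        public decimal Amount { get; }
        public Money(decimal amount) => Amount = amount;

        // Overloading + is allowed; the compiler derives += from it,
        // so += can never behave differently from + (unlike in C++).
        public static Money operator +(Money a, Money b) => new Money(a.Amount + b.Amount);

        // Only one ++ overload exists; the compiler itself handles pre- vs post-increment.
        public static Money operator ++(Money a) => new Money(a.Amount + 1m);

        public override string ToString() => Amount.ToString("0.00");
    }

    class Demo
    {
        static void Main()
        {
            var total = new Money(10m);
            total += new Money(2.50m); // uses operator +
            total++;                   // uses operator ++
            Console.WriteLine(total);  // 13.50
        }
    }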


Except C++ didn't learn that operator overloading is bad. C++ learned that operator overloading for everything was bad.

The language kind of put itself into a corner when what C# calls `MoveNext` is called `++` (with two variants of course) and what C# calls `Current` is called `*`. Taking what mathematically worked for iterating an array and taking it piecemeal to define your syntax is the kind of operator overloading that is a bad idea.


People don't appreciate how much of an impact C++ had on C#. People consider it a Java clone, but the first version had many features to support a large segment of C++ (COM) developers that Microsoft wanted to move to .NET. This brought a visible performance hubbub at the time which was often instigated by the folks more comfortable with C++, manual memory management, etc.


You are missing the part where Microsoft's Java clone, J++, already had those features, and it was the first project Anders worked on at Microsoft; hence it also has the property/event ideas from Delphi, and a framework that was the predecessor of Windows Forms.

These were the major pain points in the Java lawsuit that Sun brought against Microsoft.

The irony with all these attempts to make COM easier to use is that they always fall flat when the Windows dev team has their turn, and then they undo everything and we are back in C++ and IDL land (the latest version of it being C++/WinRT).


Python has operator overloading now. (And as far as I know, always has.) Avoiding operator overloading was one of several weird, idiosyncratic decisions Java made.


Operator overloading in a language in which memory management is all manual would be more challenging. That doesn't really tell us about its suitability in a language with automatic memory management. We've had about 20 years of C# with operator overloading, and it's been totally fine.


>"Mostly we decided it was a bad idea."

We who? From where I stand people are fine with operator overloading where it makes sense and C++ is far from being the only language with this feature.

Problem comes when shitty programmers get orgasmic about some language feature and try to use it everywhere to everyone's detriment.


You have to have real good taste to use operator overloading. Often it just leads to obfuscated code where the operators have weird behavior. Extension methods are also way overused. I have had numerous occasions where I thought the .NET framework was buggy only to find out that it was somebody’s badly written extension method causing problems.


>"You have to have real good taste"

Gross exaggeration. Just a bit of plain common sense. Do override operators to perform matrix ops. Do not override the + operator to do multiplication instead. And do not subtract apples from cars.

As for your further example: as I said, let a shitty programmer write code in any language and they will manage to fuck it up.


I think you and the parent are saying basically the same thing. I've been working in Scala for the past 10ish years and we have definitely gone through the "hype cycle" of operator overloading. Early Scala libs went a little nuts with creating an incredible array of symbolic operators (most famously this atrocity http://www.flotsam.nl/dispatch-periodic-table.html). We collectively realized that this was overboard and now Scala libraries typically provide a more modest set of symbolic operators (and generally the symbolic operators are just aliases for named methods so you can decide which ones you want to use in any particular codebase).

I would also disagree with the characterization as shitty code. The operators are actually really nice if you manage to learn them all well. It can make the code both very concise and readable. But it optimized for people who know the libraries extremely well over people who don't, which usually is not a great tradeoff for widely used libraries.


>"The operators are actually really nice if you manage to learn them all well. It can make the code both very concise and readable."

This sounds suspiciously regex-like. Concise it is, readable it is not. But regex is a unique case, and harassing people with similar patterns in every line of code should be punishable by at least 10 years of maintaining complex software written in Brainfuck.


I thought we just decided C++ was a bad idea? :) (shhh, nobody tell the games guys)


I only know two or three game devs but they all use C++ spitefully because there aren’t any better options for them at the moment. They’re all itching for something better, and there are lots of options becoming viable, so maybe C++ won’t be the only game in town.


You’re going to love Kotlin, it really is a fantastic pragmatic language.

You’re also likely to discover that while C# the language is amazing, the CLR runtime looks frankly minor league when compared to the JVM.


> the CLR runtime looks frankly minor league when compared to the JVM.

This is comically backwards from my experience. I often see near-native optimized performance on C#/CLR, where I see similar code (SIMD friendly loops) literally hundreds if not thousands of times slower than it should be in Java.


See my longer reply but I think this basically comes down to it being much easier to write fast C#. Generally if you are experienced enough with Java you can make it do what you want and get to the same performance (or higher) with Java but it's definitely not as easy and I do think this is a serious downside to Java.


Very true, Java was late in the game in terms of SIMD loop optimizations compared to C#.

Oracle engineers did the first implementation for Java 7, more recently it's mostly contributions from Intel, AMD and ARM [1].

[1] https://cr.openjdk.java.net/~vlivanov/talks/2019_CodeOne_MTE...


Which Java?

OpenJDK, GraalVM, Open J9, Aicas, PTC, microEJ, Azul, Amazon,....


What is better in the JVM?


I think both have different pros.

Java has higher peak JIT performance, though after Roslyn was introduced C# has made strides at closing that gap on many metrics. GC algorithms are more advanced, namely ZGC and Shenandoah. I would elaborate but I don't want to turn this post into an actual essay.

The JVM is more configurable. This is a double-edged sword: it heavily favors users that have spent the time to really understand the platform, but it has created a reputation of the JVM being hard to operate. JVM class loaders and the new(ish) Jigsaw module system are very powerful, especially for shipping a customized/cut-down JVM + stdlib when you need to do distribution.

The JVM has better portability. .NET Core has obviously changed the game but other platforms are still treated second class to Windows in many ways.

GraalVM is reviving the guest language ecosystem of the JVM. Clojure, Scala and Kotlin have all remained reasonably strong, but there was a dark period there for Jython/JRuby and friends. GraalVM is changing the tide here and I have already been able to leverage this over Nashorn to embed essentially native-speed JS code into one of my Java programs.

The CLR has better primitive types; the JVM's Project Valhalla may resolve this. The CLR/.NET tends to introduce features at a quicker pace than the JVM/Java, which I feel is the CLR's double-edged sword. Async/await being the key example. It's been in the CLR for what feels like forever now vs JVM which is only now just getting close to landing Loom. However I feel like Loom is an infinitely better solution to the problem that leverages the unique advantages of JIT compiled VM languages that rely on relatively little unmanaged code.

The experience on the CLR is much more integrated assuming you are on the well-trodden path, i.e. VS + Windows 10. MSIL is much, much more pleasant than JVM bytecode in my experience, and the community-developed tooling for working with it is great. Optimizing programs on the CLR is easier than on the JVM; it takes comparatively less effort to write allocation-free code and get great memory layout using primitive arrays etc. for high-performance code. The CLR interacts better with native code; JNI is poor in comparison. Project Panama could make this better on the JVM side.

This is by no means a complete list of ways the two differ, this is just my experiences having written a decent chunk of code for both.

FWIW I prefer the JVM but if I am forced not to use the JVM the CLR is the very next thing I propose.


>.NET Core has obviously changed the game but other platforms are still treated second class to Windows in many ways.

Not really. Maybe a tiny little bit, but not really. Microsoft has discovered that the "other platforms" are very important, because they want to sell Azure, and people there will use linux predominantly. Same as MS invested heavily in WSL to support "linux" workflows.

dotnet on linux might not be completely at parity with dotnet on windows when it comes to platform integration, but it's getting there. dotnet on macos is somewhat similar in this regard, and maybe a little bit slower to catch up because it may be a little lower in priority than linux.

Anyway, it's not my impression that platforms other than Windows are treated second class. It's just that Microsoft has already invested decades in the Windows implementation, while the linux and macos implementations are rather fresh (although dotnet using mono as a base certainly helped) and therefore less optimized yet in certain regards.

>Java has higher peak JIT performance, though after Roslyn was introduced C# has made strides at closing that gap on many metrics.

That isn't really true. JIT (peak) performance is pretty much comparable between Java and dotnet, maybe even a little better in dotnet/RyuJIT. There are certain areas where one is better than the other.

Roslyn is the C# to MSIL compiler, btw, while the JIT compiler for MSIL is RyuJIT. While Roslyn helps produce MSIL that is (hopefully) great to JIT compile, the actual JIT compilation work happens in RyuJIT.

> GC algorithms are more advanced, namely ZGC and Shenandoah.

Yes, GC is where the JVM really shines - if you go through the trouble of finetuning it to your particular need. At the same time the CLR GC seems to work better by default, but it's a lot less configurable too. In the CLR you basically have only 3 modes of operation: interactive GC (concurrent, trying to avoid GC pauses, with the trade off of less performance, for things like GUI applications), non-concurrent GC (which works well for a lot of CLI type tools and some services, where small GC pauses are less noticeable), and "server" (which is e.g. a lot less eager to "return" memory to the OS, aimed at long running processes that are then usually the only major thing running), and that's basically it, where on java you have tons of available schemes/algorithms and each one is usually heavily configurable as well.
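For what it's worth, you can check at runtime which of those CLR modes you actually got; a minimal sketch:

    using System;
    using System.Runtime;

    class GcInfo
    {
        static void Main()
        {
            // True when the "server" flavour of the GC is enabled, e.g. via
            // <ServerGarbageCollection>true</ServerGarbageCollection> in the project file.
            Console.WriteLine($"Server GC:    {GCSettings.IsServerGC}");

            // Interactive (concurrent) vs. Batch roughly maps to the trade-off
            // described above; the default depends on the kind of application.
            Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
        }
    }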

As you already hinted at, dotnet made great strides in the last years to avoid a lot of heap allocations and the GC with value types, Span<> and friends, avoiding boxing a lot of times and bringing stackalloc to managed code, and their use in the standard library. So peak GC performance is a little less important in the CLR than in the JVM (but still very important, of course).

You say "may" a lot, referring to proposed java changes e.g. to add value types, make native code interaction less painful, add some form of async-await. Great. And I hope they will finish up and ship these things eventually, and it will be great implementations. The thing is dotnet already has all these things shipped and in production already, often for quite a while. And while java is still figuring out the design, dotnet can already improve their shipped implementations based on real world experience and data in every new version, and usually does so. Java/JVM is playing catch up here.

>Async/await being the key example. It's been in the CLR for what feels like forever now vs JVM which is only now just getting close to landing Loom. However I feel like Loom is an infinitely better solution to the problem that leverages the unique advantages of JIT compiled VM languages that rely on relatively little unmanaged code.

From what I have seen of Loom so far (which is admittedly not that much) my impression is the direct opposite. To me it so far looks worse conceptually and API-wise than what MS did, while still borrowing a lot of general async-await concepts and even C# and CLR specific concepts, but worse. Don't get me wrong, C# async-await and the CLR support for it has a lot of problematic areas too and is far from perfect and easy, aside from general async-await conceptual problems, but it doesn't seem to me like Loom is avoiding or solving those things, instead adding quite a few of their own warts on top. But as I said, my exposure to Loom is very limited, and it's not a final product yet anyways, so I may be very wrong here.

Loom reminds me of when Java tried to copy a lot of collections-based LINQ stuff but did a terrible job doing so in my humble opinion (they since fixed some of the worst mistakes). Or maybe I am just biased, which is certainly a possibility.


> Microsoft has discovered that the "other platforms" are very important

Very important to be able to run applications, not to develop them. I spent a few years doing dotnet development on Linux and have had enough of it (so I'm moving away to the JVM, which looks more and more interesting as long-term research projects and some language improvements land). It is very much a second class citizen. They still want to force developers to use Windows and their tooling.


This hasn't been my experience tbh. Sure, VS doesn't exist on Linux, but VS isn't the best .NET dev environment (even with Resharper), Jetbrains' Rider is, and I haven't felt like the experience is any different on Linux, compared to Windows. The command line tooling introduced with .NET Core is excellent, I usually prefer it to clicking around in a GUI, and it works perfectly on Linux. VS Code also has full .NET support, and uses the same autocomplete engine as VS does. Imo it's not as good as Rider, but not worse than the support for other languages, like Go where it's my go-to editor.


Would you mind going more into details? Jetbrain’s IDE + dotnet CLI offers a very compelling experience on both macOS and Linux.


The ecosystem.


That’s very important. This is the reason why I would use Java over .NET for new code. MS is making a big mistake not providing means to call Java from .NET. There was a library called IKVM but I think it has been abandoned.


What is missing in the .Net world?


Here's a link to the official summary.

https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

Lambdas can also have attributes now, too.
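A quick sketch of the attribute syntax (the attribute itself is just a made-up marker):

    using System;

    // Hypothetical marker attribute, only here to show where attributes can go.
    [AttributeUsage(AttributeTargets.Method | AttributeTargets.Parameter)]
    class TracedAttribute : Attribute { }

    class Demo
    {
        static void Main()
        {
            // C# 10: attributes can be applied to the lambda itself and to its parameters.
            var parse = [Traced] ([Traced] string s) => int.Parse(s);
            Console.WriteLine(parse("42"));
        }
    }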


F# is really, really bleeding into C#. There'll be a convergence, for all intents and purposes, by about C# 15 at this rate.


Ehhh, not really. It really depends on what you mean by convergence. You could add every single one of F#'s features into C#, and I still wouldn't consider them to be the same language or the other to be irrelevant. The strength of F# is the primary coding style: mostly functional, mostly immutable, expression-based, strongly typed with global type inference. The way most C# is written is almost the polar opposite: mostly OOP/imperative, mostly mutable, statement-based, statically typed but with a less expressive type system (no sum types, exceptions and nulls over Result and Option) and very limited local type inference.

The F# style is enabled by a set of features - some of which would be really hard to add to C# (such as currying and global type inference) - but even if they were added, the millions(?) of C# developers would be unlikely to adopt the functional style just because it was possible. A language is not just a list of features; each language has its own culture and "idiomatic" way of doing things.


Look at JavaScript though. A few years back it was all about loose typing, mutation, OOP (with prototypes), etc., and now TypeScript is king, and React and Rx are promoting functional idioms (immutability, purity). So I think that teaches us that when a language can be typed and functional, there can be a vibrant community that uses it that way while another doesn’t.


Is F# still moving forward? I am tempted to try it but I don’t want to bet on another tool MS may abandon soon like they did with the .NET frameworks.


Here's the blog post JUST for the changes in the newest release. It is surprisingly meaty for being something other than C#.

https://devblogs.microsoft.com/dotnet/whats-new-in-fsharp-6/


I love how detailed those blog posts are. Always a (very long) delight to read :)


And F# is the one piece of the core .NET stack that is truly community-driven. And they have a vibrant community.


Yes. F# 6 was just released with .NET 6 and Visual Studio 2022. It's an awesome language with a vibrant community. Plus, it's fully open source and cross-platform, so if MS were to abandon it, I think it could carry on anyway.


I think the real question to ask is how it would fare without Don :)


There's something clearly wrong with the code samples: look at the </string> "closing tags" which somehow sneaked in in line 17 of the first sample


The markdown parser of the "Reddit Sync" app adds closing tags to things as well. Hasn't been annoying enough for me to switch.


Yeah, I tried those out in a scratch pad. They don't compile.


Seems the renderer has "helpfully" added closing tags for the `<string ...>` generic parameter lists since they look like *ml...


I love watching C# slowly grow from a boring MS Java clone into something elegant but still useful


Ever since LINQ, maybe even before that, I feel Java is a more boring C# clone. C# keeps innovating as a language (borrowing a lot from other languages, of course), and then Java seems to look and wait and do a half-assed implementation of a lot of tried-and-true C# language features some years later.

Java had two tech advantages over C#: It officially ran on platforms other than Windows and ran well, and their JVM was far ahead of the MS CLR.

Both of these things went away/are going away.

What really remains is Java's reputational advantage over "Embrace, Extend, Extinguish" Microsoft (tho Oracle has been busy trying to destroy Java's reputation ever since they bought Sun) and the larger ecosystem/community/existing code bases and sunk costs.


JVM's garbage collectors are at least a decade's worth of effort ahead of the CLR's. I am talking out of my ass here, but from reading posts by people working on large projects, the JVM has no problem chewing through multi-terabyte heaps. The recently introduced collectors (Shenandoah and ZGC) even keep the same pause times (something like 99th percentile within 1 ms and 99.9th within 10 ms). The largest heap you might be able to use under the CLR without ridiculous GC pauses and overhead is around 200 GB. Yes, value types and Span<T>/whatever lower the GC pressure, but not by a factor of 10×, and value types are coming soon™ anyway.

That's all of no use to me (I don't work on projects like that), but let's not just shove that aside.

I also don't really agree with this:

  > look and wait and do a half-assed implementation of a lot of tried-and-true C# language features some years later
Loom looks like a much more interesting approach compared to async/await hell (I've done it enough to know). I like records and upcoming "withers" much more than { get; set; } (which is easy to write but promotes shit code) because of their focus on immutability. (Yes, I am aware of { get; } and recently introduced { get; init; }, it doesn't really help with the thing withers are trying to solve — copying most fields while replacing one or two values, keeping both the old and the new value immutable). LINQ? maybe, I miss expression trees in Java.


You know C# has records too right? "With" and all.
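For anyone following along, the C# equivalent of the "withers" mentioned above looks roughly like this:

    using System;

    // A positional record: init-only properties, value equality and ToString for free.
    public record Person(string Name, int Age);

    class Demo
    {
        static void Main()
        {
            var alice = new Person("Alice", 30);

            // Non-destructive mutation: copies alice, replacing only Age;
            // both the old and the new value stay immutable.
            var older = alice with { Age = 31 };

            Console.WriteLine(alice); // Person { Name = Alice, Age = 30 }
            Console.WriteLine(older); // Person { Name = Alice, Age = 31 }
        }
    }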


That should be enough to know he/she hasn't done "enough" to know anything.


Had Oracle not bought Sun, Java would have died at version 6, and Maxine VM would never have become GraalVM.

There weren't other buyers in town besides IBM, which probably would have managed it just as Oracle has, given that the trio have been around in the Java world since the early days.


Maybe, maybe not. Google was already heavily invested in Java with Android. And lots of other places were too. If Oracle hadn't bought it, Google or one of the others might have bought Java specifically from Sun. Or might have just let Java die and be reborn under an "open source foundation", as is en vogue now (which to some degree happened anyway, with the proliferation of the OpenJDK).


OpenJDK is developed 90% by Oracle employees; others just take it from there. There is no proliferation.


I think it's been elegant for a number of years now. Pretty much ever since Linq became usable.


Yeah, lambdas and reflection really.


Personally I think reflection is the devil and the reason why we can’t abandon VMs for native compilers.


Reflection does away with the need to manually implement serializers. In a growing codebase where data models transform and references to those models begin to rot, that is incredibly useful.


That can be done with code generation at compile time. We don’t need to reflect at runtime for that.
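One concrete example of that is the .NET 6 System.Text.Json source generator: the serialization metadata is generated at build time from a partial context class, so no runtime reflection is needed. A rough sketch, with WeatherForecast as a made-up model:

    using System;
    using System.Text.Json;
    using System.Text.Json.Serialization;

    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
    }

    // The source generator fills in this partial class at compile time.
    [JsonSerializable(typeof(WeatherForecast))]
    internal partial class AppJsonContext : JsonSerializerContext { }

    class Demo
    {
        static void Main()
        {
            var forecast = new WeatherForecast { Date = DateTime.UtcNow, TemperatureC = 21 };

            // Uses the generated metadata instead of reflecting over the type at runtime.
            string json = JsonSerializer.Serialize(forecast, AppJsonContext.Default.WeatherForecast);
            Console.WriteLine(json);
        }
    }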


Usability is worse, IMO.

Serialization libraries need to be decoupled from the records being serialized. These two things are compiled into different assemblies, and often written by different people.

Still possible to replace with compile-time codegen, but the implementation is gonna be complicated and fragile.


Type inference with `var` as well, which was necessary for LINQ anyway IIRC.


Why is it useful for LINQ? I feel like it’s mostly dependent on lambdas and generics.


There are anonymous types for intermediate results in the LINQ stream. And since they are anonymous types, you need the C# var keyword when working with the outcome.
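Something along these lines; the intermediate shape has no name you could write on the left-hand side:

    using System;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            var words = new[] { "lambda", "var", "linq" };

            // The Select in the middle produces an anonymous type; there is no
            // type name to declare, so var is the only way to hold the result.
            var longest = words
                .Select(w => new { Word = w, Length = w.Length })
                .OrderByDescending(x => x.Length)
                .First();

            Console.WriteLine($"{longest.Word} ({longest.Length})"); // lambda (6)
        }
    }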


….All these years and I’ve never known about Anonymous types. I know about records and dynamic objects. I’m guessing they aren’t used often because I have yet to find them.


LINQ GroupBy() is the most common use I can think of. I would definitely look into that as it's a nice tool to have. Otherwise, keep them in mind if you're ever pulling data from an external source (SQL, JSON, GQL, Redis, etc) and you're annoyed about having to make a class for some trivial operation.

If a library doesn't have an API for using them with generics, you can paper over that with:

    T fn<T>(T throwAway) => Library.Deserialize<T>();


Another example of Anonymous types would be a simple .Select(x=> new {x.ID, x.Name});

Sometimes people lump these in with dynamic objects but those are a very different class of thing indeed.

Now, what would be -really- nice is if we had a way to specify that an anonymous type implements an interface. That could potentially simplify a -lot- of DTO modeling.


C# 10 didn't introduce many new features.

I hate that they delayed the introduction of Algebraic Data Types again. Maybe they will appear in C# 11.

Using ADT in F# makes it a breeze to do domain modeling.


Record classes and pattern matching get you 90% of the way there, thankfully. Not perfect, but good enough. It's funny how much of a difference it makes to be able to write each case in a single line and have equality and all that stuff automatically taken care of. Honestly that's more important than the exhaustive pattern matching you get with ADTs. I can live with "default: throw".
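A rough sketch of that style (made-up Shape hierarchy):

    using System;

    // Each "case" is a one-line record; equality and ToString come for free.
    public abstract record Shape;
    public record Circle(double Radius) : Shape;
    public record Rectangle(double Width, double Height) : Shape;

    class Demo
    {
        static double Area(Shape shape) => shape switch
        {
            Circle c    => Math.PI * c.Radius * c.Radius,
            Rectangle r => r.Width * r.Height,
            // No compiler-enforced exhaustiveness as with real ADTs, hence the fallback.
            _           => throw new ArgumentOutOfRangeException(nameof(shape))
        };

        static void Main()
        {
            Console.WriteLine(Area(new Circle(1)));       // 3.14159...
            Console.WriteLine(Area(new Rectangle(2, 3))); // 6
        }
    }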


C# 10 brings some nice quality-of-life improvements, which are always welcome. Not all releases have to be about huge changes.

For example, file-scoped namespaces are a very simple but great change, as are the improvements for lambdas and for structs; all of this makes it nicer and simpler to write readable code.

Edit: I would also love to see ADT in C#!
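File-scoped namespaces in particular remove a whole level of indentation; a tiny sketch with made-up names:

    // C# 10 file-scoped namespace: one line, no braces; everything below it
    // belongs to MyApp.Orders. Previously the whole file had to be wrapped
    // in "namespace MyApp.Orders { ... }", costing a level of indentation.
    namespace MyApp.Orders;

    public class OrderService
    {
        public void Place() { /* ... */ }
    }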


I love how this makes MapGet/MapPost/MapPut and such useless, and we can just have one MapAction to run them all. I hope .NET 6 gets more and more attention and becomes competitive against Java. Java's language and syntax are so bad right now.
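For context, the route-mapping overloads in the shipped .NET 6 minimal APIs take a plain Delegate, which only works this smoothly because C# 10 lambdas now have a natural type. A rough sketch of a Program.cs in a web project (made-up route):

    // Program.cs in a .NET 6 ASP.NET Core project (top-level statements, implicit usings).
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // MapGet's handler parameter is just Delegate; the bare lambda works
    // because C# 10 gives it a natural delegate type.
    app.MapGet("/hello/{name}", (string name) => $"Hello, {name}!");

    app.Run();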


If you haven’t yet, check out the C#10 blog post, it has details covering all the changes: https://devblogs.microsoft.com/dotnet/welcome-to-csharp-10/.

This version brings quality of life changes that make it simpler and nicer to write readable code, which is great to see IMHO.


I love how the second to last code sample has three closing </string> tags, and the blog name is "don't code tired." (Also, really looking forward to using the new type inference.)


Nice, I like it.

I do wonder what was limiting them before


Every feature starts out with -100 points. So there's always a trade-off between doing everything you can imagine to make every last corner case nice and shipping something in a reasonable timeframe. When lambda expressions were introduced they were designed to work well together with LINQ, and that whole part of the language isn't a small one either. Storing a lambda expression in a local variable is not such a common occurrence that you really need type inference or the ability to add types to lambda expressions.

Heck, I'd say, by now with local functions most lambdas that previously would have been a local could now just be a local function.


Yeah, while I run into the sharp edges that this feature addresses on a regular basis, it typically only costs me a small amount of typing to fix, and it's not that frequent. So it's nice to have but totally reasonable that they never got around to it before.


They added an explicit return type, which allows for the full delegate type to be inferred.
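Concretely, with C# 10 you can now write something like this (made-up example); without the explicit object return type the two branches have no common natural type and inference fails:

    using System;

    class Demo
    {
        static void Main()
        {
            // The declared return type makes the natural type Func<bool, object>.
            var choose = object (bool b) => b ? 1 : "one";

            Console.WriteLine(choose(true));  // 1
            Console.WriteLine(choose(false)); // one
        }
    }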





