> Native AOT is different. It’s an evolution of CoreRT, which itself was an evolution of .NET Native, and it’s entirely free of a JIT. The binary that results from publishing a build is a completely standalone executable in the target platform’s platform-specific file format (e.g. COFF on Windows, ELF on Linux, Mach-O on macOS) with no external dependencies other than ones standard to that platform (e.g. libc). And it’s entirely native: no IL in sight, no JIT, no nothing. All required code is compiled and/or linked in to the executable, including the same GC that’s used with standard .NET apps and services, and a minimal runtime that provides services around threading and the like.
Of course it does have some downsides:
> It also brings limitations: no JIT means no dynamic loading of arbitrary assemblies (e.g. Assembly.LoadFile) and no reflection emit (e.g. DynamicMethod), everything compiled and linked in to the app means the more functionality that’s used (or might be used) the larger is your deployment, etc. Even with those limitations, for a certain class of application, Native AOT is an incredibly exciting and welcome addition to .NET 7.
I wonder how things like ASP.NET will run with Native AOT in the future.
We've been experimenting with NativeAOT for years with ASP.NET Core (which does runtime code generation all over the place). The most promising prototypes thus far are:
They are source-generated versions of what ASP.NET does today (in both MVC and minimal APIs). There are some ergonomic challenges with source generators that we'll likely be working through over the coming years, so don't expect magic. Also, it's highly unlikely that ASP.NET Core will ever be free of all forms of reflection. Luckily, "statically described" reflection generally works fine with NativeAOT.
Things like configuration binding, DI, logging, MVC, and JSON serialization all rely on some form of reflection today, and it will be non-trivial to remove all of it, but we can get pretty far with NativeAOT if we accept some of the constraints.
Thanks for telling me :). On a serious note though, anything can work with source generators, but that doesn't match the style of coding that we'd like (moving everything to be declarative isn't the path we want to go down for certain APIs). Also, source generators don't compose, so any source generation that we would want to use would need to take advantage of the JSON source generator (if we wanted to keep things NativeAOT-safe). Right now most of the APIs you use in ASP.NET Core are imperative, and source generators cannot change the call site, so you need to resort to method declarations and attributes everywhere.
That's not an optimization, that's a programming model change.
Well, in System.Text.Json the API stays the same and you "only" need to pass an auto-generated metadata object. It's basically the same API; you just pass another object instead of a generic parameter. But yeah, it's a change.
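For illustration, here is a minimal sketch of the source-generated System.Text.Json pattern being described; the WeatherForecast type and AppJsonContext name are hypothetical stand-ins:

    using System;
    using System.Text.Json;
    using System.Text.Json.Serialization;

    // A hypothetical payload type.
    public record WeatherForecast(DateTime Date, int TemperatureC);

    // The source generator fills in this partial class at compile time
    // with serialization metadata for the listed types.
    [JsonSerializable(typeof(WeatherForecast))]
    public partial class AppJsonContext : JsonSerializerContext
    {
    }

    public static class Demo
    {
        public static string Serialize(WeatherForecast forecast) =>
            // Same API shape as before, but the generated metadata object
            // is passed instead of relying on reflection over the type.
            JsonSerializer.Serialize(forecast, AppJsonContext.Default.WeatherForecast);
    }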
Java is doing something similar with its GraalVM Native Image. In a way it is funny: a few years back, so many people were claiming that the heavy CPU/memory usage of JIT-based platforms would be even less of an issue in the cloud because one can scale as much as needed on demand.
And now I see AOT, slimmed runtimes, and compact packaging happening largely because of cloud deployments.
Seems cloud bills are making an impact in places where enterprise cloud architects were running amok with micro-service deployments of everything.
Actually I think it is more the competitive pressure of languages like Go and Rust than anything else.
I have been using Java and .NET languages for distributed computing for the last two decades, and JIT has always been good enough.
By the way, Google rolled back the AOT compiler introduced in Android 5; since Android 7 it uses a mix of a highly optimized interpreter written in assembly, a JIT compiler with PGO feedback, and, when the device is idle, those PGO profiles are used to AOT-compile only the application flows that matter. On more recent Android versions, those PGO profiles are shared across devices via the Play Store.
On the .NET front, I think the team has finally decided to confront the whole "C++ rulez" attitude of WinDev, especially after the Singularity and Midori projects failed to change their minds.
Nowadays Go is working on a PGO solution, so there is that.
I don't disagree, but I kinda doubt that new upstart languages with single-digit market share and mindshare in the enterprise space would force the .NET/JVM behemoths to do anything. Forget Go; the places I work would not know a single new thing beyond Java 1.7, or at latest Java 1.8. But they have stood up a dozen cloud teams doing every buzzword you can hear about cloud.
So unless the finance guys ask tough questions about the rising Amazon bill, IT wouldn't care if their Spring Boot crapola takes 2GB of RAM or 32GB.
The new upstart languages are where the younger generations are; that is why you see .NET making all those changes to make rolling a Hello World website as easy as doing it in Go, with global usings, single-file code, simplified namespaces and, naturally, AOT.
I do extensive profiling of managed apps and while the JIT does eat a measurable amount of CPU time, it's really not much. And at least on .NET, you can manually ask the runtime to JIT methods, so you can go 'okay, it's startup time, I'm going to spin up a pool of threads and use them to JIT all my code' without blocking execution - for a game I'm working on it takes around 2 seconds to warm ~4000 methods while the other threads are loading textures and compiling shaders. At the end of that the CPU usage isn't a problem and you're mostly dealing with the memory used by your jitcode. I don't know how much of a problem jitcode memory usage actually is in production these days, but I can't imagine it would be that bad unless you're spinning up 64 entirely separate processes to handle work for some reason (in which case I would ask: Why? That's not how managed runtimes are meant to be used.)
I mostly do JIT warming to avoid having 1-5ms pauses on first use of a complex method (which is something that could be addressed via an interpreter + background JIT if warming isn't feasible.)
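A minimal sketch of that warm-up technique, assuming RuntimeHelpers.PrepareMethod as the pre-JIT entry point (the filtering here is simplified; real code would also skip more exotic methods):

    using System;
    using System.Reflection;
    using System.Runtime.CompilerServices;
    using System.Threading.Tasks;

    static class JitWarmer
    {
        // Pre-JIT every concrete method in an assembly.
        public static void Warm(Assembly asm)
        {
            foreach (Type type in asm.GetTypes())
            foreach (MethodInfo method in type.GetMethods(
                BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.NonPublic |
                BindingFlags.Instance | BindingFlags.Static))
            {
                // Generic definitions and abstract methods have no
                // standalone native code to prepare.
                if (method.IsAbstract || method.ContainsGenericParameters)
                    continue;
                RuntimeHelpers.PrepareMethod(method.MethodHandle);
            }
        }
    }

    // e.g. fire-and-forget during startup while other threads load assets:
    // _ = Task.Run(() => JitWarmer.Warm(typeof(Program).Assembly));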
The slimmed runtime and compact packaging stuff is definitely attractive in terms of faster deploys and lower storage requirements, however, because the cost of needing to deploy 1gb+ worth of stuff to your machines multiple times a day is still a pain. I don't actually know if AOT is an improvement there though, since in some cases the IL that feeds the JIT can be smaller than the AOT output (especially if the AOT compiler is having to pre-generate lots of generic instances that may never get used.) You also have to ship debug information with AOT that the JIT could generate on demand instead.
I've never understood the point of JITing. Just compile the thing once for your target architecture and you're done. No more spawning threads to do the same thing over and over again. I'm glad that languages like Go and Rust are bringing back the lost simplicity of yesteryear's dinosaur languages. Life can be so easy.
Reflection is a thing in C#; it isn't in either Go or Rust. I've read that some kind of compiler shenanigans can get you something that resembles reflection in C++.
I love reflection, the fact that libraries can look at your types is really really cool.
Not saying it can't be done with a compiled language but I don't see it anywhere.
Being able to load a shared library, search it for classes implementing interfaces, instantiate them and call their methods is pretty slick too.
Does that cover all the mentioned use cases though? They're discussing source generation in this thread, which is not even close to the same thing (for certain use cases).
IBM OS/400 (nowadays IBM i) binaries from 1988 can be executed on a completely different architecture without any changes, even when the source code possibly doesn't exist anymore, while taking full advantage of IBM Power10 in its latest iteration.
Same applies to any other bytecode format making use of dynamic compilers.
Jitting is useful for things like dynamic loading/dispatch and simplifying distribution of libraries around it.
For better or worse, it's worth remembering the 'cruft' and legacy around .NET's architectural designs and choices, even if some of the assumptions were wrong.
1. .NET seems to have been originally built under a heavy assumption that the framework was installed on a device. This could be a huge advantage for smaller/embedded devices of the time: if you were working 'close to framework' (as was the style at the time; NuGet wasn't a thing for the better part of a decade after .NET was introduced), you could get a surprisingly compact deployment object. For many of the 'in-house' apps I wrote with ClickOnce, the cert validation seemed to take longer than the network transfer on upgrade.
1.1: VERY worth noting the idea that .NET browser plugins would compete with Java plugins; JIT is probably important to do this sanely.
2. Speaking of assumptions, I'm willing to bet that around the design time of many .NET particulars, they were still reeling from the pains of the Win16->Win32 breakages (Win95, Win98), Win16->NT breakages (NT 4.0, 2K), Win32/WinNT breakages (XP), and x86/x64 breakages (Server 2003). JIT, alongside a framework API with strongly contracted public members [^0], allows problems to be fixed without vendors necessarily having to provide updated libraries [^1]
3. CAS (Code Access Security) and 'Partially Trusted code'. This somewhat ties back into 1.1, because the idea was that code had to have a certain level of 'verification' around what it was doing, what libraries it was calling, and whether that code -should- be allowed to execute such on a box. I'm thinking of scenarios where an 'intranet' app could have access to a local MSSQL database and open/write files, but an 'internet' app could not. Implementing such via a JIT is far simpler.
4. The combination of generics and reflection complicates things. To be more specific, with the way .NET handles generic methods, basically any reference types (objects) will share the code, as the size of the generic parameter(s) is going to be the size of a reference on the target platform. But in the case of a value type (struct), I'm 99% sure the runtime requires specific code for every different struct[^2] type that is slotted into one of the generic types of a class/method.
5. Simpler dynamic-ish code generation. One of the most lovely (if unloved/ignored) APIs in .NET is the Expressions namespace.
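For anyone who hasn't used it, a small sketch of what the Expressions namespace enables: building code as data at runtime and compiling it into a delegate, without dropping down to Reflection.Emit:

    using System;
    using System.Linq.Expressions;

    class ExpressionDemo
    {
        static void Main()
        {
            // Build the expression tree for x => x * x by hand.
            ParameterExpression x = Expression.Parameter(typeof(int), "x");
            Expression<Func<int, int>> square =
                Expression.Lambda<Func<int, int>>(Expression.Multiply(x, x), x);

            // Compile it into a real, JIT-compiled delegate and invoke it.
            Func<int, int> compiled = square.Compile();
            Console.WriteLine(compiled(12)); // 144
        }
    }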
[^0] By this I mean there is a -strong- slant towards .NET APIs retaining existing behavior even if it is wrong/subpar, if that behavior is part of the accepted contract.
[^1] How well this worked out in practice? Probably not so much. I remember my first experience with .NET 1.0/1.1 versioning/deployment hell and going back to C++ for a few years. It wasn't until my apps had semi-complex UIs (i.e. not console and multi-window), alongside the renaissance of 3.5, that .NET became a more efficient workflow than the sometimes daunting MFC/ATL C++ windowing workflow.
[^2] AFAIK the runtime cannot share struct implementations between two structs with the same layout (I -think- Go generics can, based on GC shapes, but maybe not; I'm not a gopher), but I would be delighted to hear otherwise.
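Tying back to point 4, a small sketch of which instantiations share code on the CLR, as I understand it (the comments describe runtime behavior, not anything the code itself verifies):

    using System;

    static class GenericsDemo
    {
        static void Print<T>(T value) => Console.WriteLine(value);

        static void Main()
        {
            // Reference types: these share one "canonical" native body,
            // since every T is just a pointer-sized reference.
            Print("hello");
            Print(new object());

            // Value types: each struct gets its own specialized native
            // code, because sizes and layouts differ.
            Print(42);           // Print<int>
            Print(3.14);         // Print<double>
            Print(DateTime.Now); // Print<DateTime>
        }
    }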
I think typical ASP.NET applications likely won't use AOT in the near future. There is quite a lot of "magic" in ASP.NET that I suspect won't work out of the box with AOT because it is heavily reflection-based. And while there are solutions for cases like JSON serialization, as far as I understand they require writing different code right now, so it's not automatic.
AOT is much, much more interesting for cases where startup time matters, and ASP.NET usually isn't such a case. But I can imagine that it will be much more useful and easier to implement for smaller, focused tools that don't use that much reflection-based framework code. CLI tools might be the ideal case for trying this out.
As someone writing a .NET-based API hosted on AWS Lambda, I assure you, startup time matters a ton. I spent a bunch of time getting cold start time down to a semi-reasonable number, but there's still a ton of room for improvement. I for one am very much looking forward to progress on Native AOT.
The "typical" in my comment was meant to take care of that part, I probably should have mentioned this explicitly. AWS Lambda is of course a case where startup time matters a lot, but it still leaves the common problem of ASP.NET that it was designed around a lot of reflection. My understanding is that this simply won't work with AOT out of the box unless you adapt all the places where you use reflection-based code. For example you'd need to use the source generator versions of JSON serialization and the EF Core DbContext in this case, with all limitations those have. But I'm not sure my understanding here is entirely correct or complete.
I think you are correct. The DI container and JSON serialisation are two bigger areas that I suspect would need attention, but as you say, source generation solves the latter, and I think non-reflection DI is on the way if not here already. I guess you would need to wait for any of the third-party assemblies you depend on to allow for no reflection as well... still a bit of a wait, I think, for my use case.
I don't know about ASP.NET, but the post says that Reflection.Emit is not available. Reflection.Emit is a namespace used for runtime code generation; I would assume most other parts of reflection are available, like with previous iterations of AOT support in .NET.
It is theoretically possible to have Reflection.Emit and DynamicMethod for less critical cases by using an interpreter - this happens in certain scenarios already.
As someone writing a .NET-based API hosted on AWS Lambda, your case is quite different to a "typical ASP.NET hosted application", that can run for days between restarts.
I would like to host a .NET-based API on Google Cloud Run and Azure Container Apps, which often go to sleep after about 10 minutes without a request, so cold start times do matter quite a bit. I've done pretty extensive testing and see ~4-7 second cold starts with classic non-AOT MVC versus ~500ms with natively compiled Go/.NET.
The reality of the cloud is that sleeping and cold starts are very common, and in fact necessary to run efficient systems. Half a second is acceptable but 4+ seconds is not, and so AOT is very important for future apps.
FWIW the problems with cold starts could also be solved at the platform layer. App startup is what typically dominates cold start times, and there are creative ways to potentially frontload that, serialize the state, and treat all invocations as "warm".
The good news is that .NET 7 AOT supports console apps, so Lambdas can already be Native AOT. JSON serialization was a concern of mine, but it was surprisingly easy to get working with very minor changes to the call. If I were building Lambdas I would move to it ASAP.
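For reference, publishing a console app as Native AOT in .NET 7 comes down to one publish-time property (here targeting Linux x64, as you might for a custom Lambda runtime):

    dotnet publish -c Release -r linux-x64 -p:PublishAot=true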
> I wonder how things like ASP.NET will run with Native AOT in the future.
Let me give an example of an ASP.NET app lifecycle: an instance is launched, goes through startup code once. When it reports itself healthy, it is put into the Load balancer and then starts handling requests. Code in these paths is executed anywhere from occasionally to 1000s of times per second. After around 24 hours of this, it is shut down and restarted automatically.
So a compiler micro-optimising startup code to shave milliseconds off of it is not interesting at all; it's only run once a day. Startup can take whole seconds and it makes little difference: the instance is ready when it's done. AOT in general isn't that important, but automatic tiered compilation based on usage data is very nice.
Running an ASP.NET app in AWS Lambda is just a few lines of code. However, all of a sudden startup time becomes important for both performance and cost.
These investments by Microsoft and others[1] allow .NET to remain relevant and viable for modern use cases.
Serverless is great, but if you want to go that route you should be aware that it is in no way a typical hosted ASP.NET app, and while you can "run an ASP.NET app in AWS Lambda" with little code, there are better ways to design a Lambda.
Can you elaborate on “is in no way a typical hosted ASP.NET app”?
I get that the machinery under the hood is different (i.e. the Kestrel web server may not get used). However, we typically don't care about those details. Our ASP.NET code runs in 3 separate places (containers, servers, Lambda) and the only difference between all 3 is a single entry-point file.
Do you mean because Lambda is only serving one request at a time and has a more ephemeral host process lifetime?
Anyone have any interest in a small C# AOT micro web framework that cold starts fast (~500ms, vs ~4s for non-AOT MVC) on serverless containers like Google Cloud Run?
For people curious how much impact the .NET performance improvements over the last 5 years have on real, large scale web applications: At work we have a core piece of our online ordering system that has been running on .NET Framework 4.8. We run around 8 web servers and handle around 4,000 orders per minute, 300 requests per second per server.
We have been working on porting everything over to .NET 6 by making all the code compatible so we can build both from the same code base, and we just started putting the .NET 6 version live on one of the webservers for a few hours at a time to test it out.
CPU usage on that .NET 6 machine is 1/2 of the Framework machines! Impressive improvement, and .NET 7 should be even better.
"Framework" is such a poor differentiator to contrast with modern .NET (owing to the original poor naming decision in the first place--it's not like ".NET" itself is amazing) that it's hard, from a casual reading, to pick up on the fact that "Framework" is even being used a differentiator.
I propose that when people want to differentiate between the modern .NET Core vs the legacy closed source .NET implementation, the latter is referred to as "OG" .NET (in casual and business-casual contexts, at least).
I get where people are coming from, but I honestly don't think it's that hard. If someone is new to .NET, all they need to care about is .NET 6 and later, so it doesn't matter much.
I think everyone is honestly too harsh on Microsoft. They are the best at evolving software (not perfect) while keeping backwards compatibility. Apple and Google just throw things away or suddenly change things and say "deal with it". Of course it's easy to keep naming simple when you do that. There's no way Oracle, Apple, Google, etc. could have managed the transition from .NET Framework to .NET 6 like Microsoft has. Apple would have just changed to something completely different and thus named it something new, just like they did with Swift.
Whether new developers need to worry about developing for OG .NET or not (I agree that they shouldn't) doesn't really have anything to do with what people who talk about OG .NET should call it in order to make it clear that OG .NET is what they're talking about.
> If someone is new to .NET, all they need to care about is .NET 6 and later, so it doesn't matter much.
That is an optimistic viewpoint in some ways; lots of shops still have Framework projects around, possibly ones older than they should be, but the workflows have enough differences that I'd say you should know .NET 6 and the gotchas between it and Framework.
Unfortunately, anything related to UWP and WinUI seems to have plenty of folks on the teams used to the Apple and Google ways.
Also there are plenty of .NET Framework libraries on the enterprise space that are yet to work properly on .NET Core infrastructure, or outside Windows, as they are mere wrappers to native Windows APIs.
Since Java 6, Oracle has managed to evolve Java with less breakage (there is some, especially in the Java 9 transition) than .NET Framework => .NET Core, or .NET Native => .NET Core.
IMHO, they should have just named the new one .CORE and called it a day.
It would have made it immeasurably easier to google, and could have been be incorporated into other names to differentiate them from old versions: ASP.CORE, ADO.CORE, WinForms.CORE…
It feels like it must be a company policy. When needing to compete with other major tools, choose a name so generic and so easily confused with other things that people who have no knowledge of the product will assume they need it. I can imagine some IT conversations 20 years ago along the lines of ".Net? I must need that for internet access!", just like I'm sure there are discussions today where managers assume they need Azure DevOps to get this new-fangled DevOps thing.
Nuh-uh! I had to fight that term at Microsoft for a few years because it was obvious that .NET Framework was legacy but some people were afraid to say it for fear that some customers would leave (as if leaving is somehow easy to do).
A suggestion I liked was to call ".NET Framework" ".NET Classic", a la "ASP Classic": it nicely honors the original .NET while properly suggesting it is not the "new" .NET.
On the one hand it's cool that they are improving but how do people keep up with all these additions? I find this really hard.
Seems a lot of the changes are new stuff which you have to evaluate and see how they could actually be used productively. For example a while ago I tried the new nullable stuff and while in theory it looks straightforward it turned out to be very difficult to use nullable with existing APIs and code in a productive way. The same applies to a lot of the other new features.
I feel .NET Core after a good start is falling into the typical Microsoft trap of constantly cranking out new stuff to do the same thing and leaving it to developers to keep up. That's how we ended up with several .NET desktop UI frameworks that are more or less in maintenance mode without a real upgrade path to the currently fashionable framework (which will most likely be abandoned soon too).
Can you name other development platforms of similar scale/depth that don’t have this issue?
Compared to the whirring treadmill of frontend web development, I find .NET to be pretty easy to keep up with. Good IDEs (ReSharper or Rider) really help with the new language features (which are opt-in).
Outside of that, the ecosystem is large enough that there are quality blogs/articles/podcasts to stay current without sinking a ton of time into doing so.
Front end development may have a lot of frameworks, etc., but so much less magic than .Net.
What makes .Net so much harder (especially ASP.Net) are the new features but also all the magic that makes discoverability so hard.
Front end frameworks tend to follow similar patterns, and where they differ, they usually advertise the difference between the standards heavily.
But also, since they are genuinely different frameworks, it's easier to search the differences etc. So, if I open a Vue application codebase, I know almost immediately that it's a Vue application and not a React one. I can google vue and figure out (most likely from their getting started page) exactly all I need to know to get started.
If I enter an ASP.Net codebase, on the other hand, I have no idea whether I'm looking at ASP.Net Pages, .Net WebAPI, .Net WebServices, or more often than not, some combination of all of them. Throw in some Dependency Injection, with a combination of IOC containers used in the same project, and it's a massive lift to know what even to Google for.
Magic means “hidden complexity”, AKA “abstraction”. The negative connotation is that sometimes you can’t figure out why your software is behaving a certain way and what you need to do about it.
The positive connotation is that you write less code with less cognitive overhead.
Generally I prefer ASP.NET over things like Spring and Ruby on Rails because it has less magic, despite being clearly inspired by both of those.
Here’s a concrete ASP.NET example:
You can put an [ApiController] attribute on your controllers and it can change the structure of error responses, among other things. [1]
I don’t agree with some of the other parent points. For example, the comments on “multiple dependency injection containers”: ASP.NET is pretty prescriptive on DI patterns. That sounds like someone made a decision to add complexity, which is on them.
So "non magic" means to write everything then? Because the example you showed you have to understand a language feature, attributes. Then see the source code, which is open, and how it's being used.
So, are Rust macros, C macros, or C++ templates magic, or does it just mean "I don't want to know how this works, therefore it's magic"?
It seems you had a bad time learning ASP.NET, but in my experience what you're asking for is not difficult at all.
You have to:
1. [Optional] Define a DTO that deserializes { "foo" : { "bar" : "baz" }} to an object
2. Write a class that subclasses Controller
3. Write a method, call it whatever you want, that accepts a JObject or one of your DTO types.
4. Add the attribute [HttpPut("ping/pong/yolo")] to tell the framework this method should be bound to PUTs for that path.
5. Write your business logic. (A minimal sketch of these steps follows below.)
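A minimal ASP.NET Core sketch of those steps; the route and JSON shape are from the list above, the type names are made up, and ControllerBase stands in as the API flavor of Controller:

    using Microsoft.AspNetCore.Mvc;

    // Step 1: a DTO matching { "foo": { "bar": "baz" } }
    public class PingDto
    {
        public FooDto? Foo { get; set; }
    }

    public class FooDto
    {
        public string? Bar { get; set; }
    }

    // Steps 2-4: a controller whose method is bound to PUT ping/pong/yolo
    [ApiController]
    public class PingController : ControllerBase
    {
        [HttpPut("ping/pong/yolo")]
        public IActionResult Put([FromBody] PingDto body)
        {
            // Step 5: business logic goes here.
            return Ok(new { received = body.Foo?.Bar });
        }
    }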
No one in their right mind would use ASP.NET for new projects nowadays (it is legacy and a business risk), so it's reasonable to assume that everyone means ASP.NET Core given the .NET 7 context.
You really don't need to use any new features to get the benefit of improvements in many cases, because the standard library is updated to use them. So things like string and array operations just get faster.
I've started using 'ref readonly' and stuff like that in bits and pieces of my application code, but the rest of it is basically written in a .NET 4.0 style and it's fine, there's no obligation to use the new stuff unless you want to. I've started using some of the newer syntax sugar to make things more concise, like local functions, value tuples, and short property definitions - but again that's totally optional.
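A quick sketch of that optional sugar in one place, for anyone who hasn't seen it (a made-up example):

    using System;

    class Stats
    {
        private readonly int[] _values = { 3, 1, 4, 1, 5 };

        // short (expression-bodied) property definition
        public int Count => _values.Length;

        // value tuple as a lightweight return type
        public (int Min, int Max) MinMax()
        {
            // local function, visible only inside MinMax
            static bool IsEmpty(int[] xs) => xs.Length == 0;

            if (IsEmpty(_values)) return (0, 0);

            int min = _values[0], max = _values[0];
            // 'ref readonly' iteration avoids copying each element
            // (it matters for large structs; shown here for the syntax)
            foreach (ref readonly int v in _values.AsSpan())
            {
                if (v < min) min = v;
                if (v > max) max = v;
            }
            return (min, max);
        }
    }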
A lot of these things you will get the benefit of just by upgrading. Also see the last section of the post about analyzers: you can enable these, and Visual Studio will identify, and potentially automatically apply, performance improvements by using different APIs.
There are a ton of performance knobs you can turn, both at development time and deployment time. There are 4 ways to run regexes in .NET 7 (interpreted, compiled, the new non-backtracking engine, the new compile-time source generator)! I do agree with the assessment that it's hard to keep up with.
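For the curious, the four side by side (a sketch; the pattern is arbitrary, and the source-generated variant requires a partial class):

    using System.Text.RegularExpressions;

    public static partial class Patterns
    {
        // 1. Interpreted (the default)
        public static readonly Regex Interpreted = new(@"\d+");

        // 2. Compiled to IL at runtime
        public static readonly Regex Compiled = new(@"\d+", RegexOptions.Compiled);

        // 3. New in .NET 7: the non-backtracking engine, linear-time matching
        public static readonly Regex NonBacktracking =
            new(@"\d+", RegexOptions.NonBacktracking);

        // 4. New in .NET 7: the compile-time source generator
        [GeneratedRegex(@"\d+")]
        public static partial Regex Generated();
    }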
Most .NET devs only need to be peripherally aware of these changes, as most businesses using .NET will move slower than Core's new pacing. It's even less important for lower seniority devs as you typically need project changes to utilize new features, which is a call a senior would make, which involves approval/testing/deployment, so a slow process which gives you time to read up on the new features as they're needed.
You're better at your "craft" the more tools you know about and how to apply them, but if your day job prevents you from following the new stuff, I wouldn't worry too much about it.
>I feel .NET Core after a good start is falling into the typical Microsoft trap of constantly cranking out new stuff to do the same thing and leaving it to developers to keep up.
Yeah, the pacing has increased dramatically from the .NET Framework days, but that's probably a good thing. I would just stick to learning about what you use in your day job. .NET has a huge ecosystem compared to other languages, so it's going to be very hard to keep up with everything MAUI is doing if you're doing regular ASP.NET Core APIs.
I very much like the ethos of Golang for this reason. Still not had a reason to use it but like the idea of mastering the fundamentals in a weekend. Even if I lose the flexibility of LINQ or Java streams.
It feels like on a scale of language conservativeness, Go is all the way at the top, Java somewhat in the middle (a little above), and C# at the bottom. It is going the route of lots of features and complexity, which can be a great thing, but not for grug devs https://grugbrain.dev like me.
But saying all that, Blazor looks really good for web dev; I just worry it gets abandoned. It feels like everything does in the C# space.
C# also made a big mistake imo by going with async/await instead of lightweight threads, which will add a ton of complexity in the future if they decide to go the green-thread route like goroutines/Project Loom.
> C# also made a big mistake imo by going with async/await instead of lightweight threads, which will add a ton of complexity in the future if they decide to go the green-thread route like goroutines/Project Loom.
Could you expand on this? Async/await is just syntax magic for Task continuations (in other words, Promises [0]), which have very little to do with the underlying threading model. This statement is equivalent to saying "Completable Futures add a ton of complexity to Project Loom."
Yes I understand the function coloring "problem" (oh no functions need to specify in their signature whether they return results immediately or eventually). Regardless, I still don't understand how this prevents green threads a la Project Loom, if you have a function that returns a `CompletableFuture` in Java, it also needs to change its signature.
IIRC, the statement was from some Java blog post about Loom. The idea is that with lightweight threads you can make everything sync and still be performant, while C# has gone ahead with making everything async.
I understand the difference in approaches. However, the parent stated that this decision makes green threads harder in C#, which is what I don't understand.
I don't try to keep up with absolutely everything. I've been to this particular buffet for a really long time and have developed tastes for certain things.
For instance, I recognize AOT is elegant in theory, but I also notice it's a difficult thing to get right and it has certain tradeoffs, especially for a complex application that may need to use legacy APIs. As a consequence, I have mostly avoided it in favor of focusing on other areas that do seem to deliver additional value for our product (and my side projects). Examples of these being intrinsics (SIMD), UTF8 text/serializers, file I/O and more advanced GC techniques.
Ultimately, it's a matter of experience to know when you should and should not chase a particular rabbit. My general policy right now is to observe the new shiny while it's in the odd-version, and then wait to see how it shakes down by the time the LTS comes around. You almost never want to try to grab a Microsoft-branded rabbit upon the very first sighting.
Nullables have been around for at least a decade, since before .NET Core existed. Are you thinking of the NotNull attribute argument checking instead (likely the wrong term here)? Any change in a data type in an existing API or code will always be tough and a lot of work. Maybe an Int32 to an Int64 being the exception.
Library developers are looking at these changes and following them to make enhancements. Developers working on business apps get the automatic performance improvements of these changes, plus the library devs' changes to improve performance and allocations. I personally skim these quickly so that when I get into a performance issue I know roughly where to go find more information.
Fortunately the upgrade path has not been too bad once you get to .NET Core. Usually one-time changes to your Program.cs and Startup.cs files. The top-level statements addition is pretty jarring at first, but a cool addition. I did run into a breaking change from .NET 5 to 6 with an encryption library not working right, but most everything else just works when upgrading. If it were just .NET additions, life would be fine. It is the front-end churn that causes real stress: AngularJS, TypeScript, Angular, React, Blazor, WebPack, Vite, etc.
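For context, the one-time Program.cs/Startup.cs change mentioned above is the move to the .NET 6 minimal hosting model, which looks roughly like this (assuming the web SDK's implicit usings):

    // .NET 6 style: Program.cs and Startup.cs collapse into one file
    // of top-level statements.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddControllers();

    var app = builder.Build();
    app.MapControllers();
    app.Run();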
This is a really interesting, or rather terrifying, part of that blog post. I thought I had reasonably good knowledge of typical performance mistakes in .NET, but I had never heard of this particular one.
If I understand it right, initializing a fresh JsonSerializerOptions object before each call to Serialize/Deserialize in System.Text.Json is incredibly expensive. It essentially disables the cache and means the serializer fully re-analyzes the type every time. And this is more on the order of 100x as expensive as using the cached version, so not a trivial amount.
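In code, the pitfall and the fix look roughly like this (a sketch; WriteIndented stands in for whatever settings you actually need):

    using System.Text.Json;

    static class JsonHelper
    {
        // Pitfall: a fresh JsonSerializerOptions per call defeats the
        // per-options metadata cache, so every call re-analyzes the type.
        public static string SerializeSlow<T>(T value) =>
            JsonSerializer.Serialize(value,
                new JsonSerializerOptions { WriteIndented = true });

        // Fix: create the options once and reuse them everywhere.
        private static readonly JsonSerializerOptions s_cached =
            new() { WriteIndented = true };

        public static string SerializeFast<T>(T value) =>
            JsonSerializer.Serialize(value, s_cached);
    }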
A polite way to describe most serialization tooling in the .NET ecosystem would be "bad". It's unfortunate, but it's good to see improvements happening. I still try to avoid JSON in general on .NET since I've had so many bad experiences with the various JSON libraries out there.
Sadly if you want to ship well-performing software in .NET it's still mandatory to run it under a profiler on a regular basis, both to spot frequent unnecessary GC allocations and to spot CPU bottlenecks. Either one would probably catch this particular problem, thankfully!
I use the built in VS profilers sometimes, and other times I use Superluminal's excellent CPU profiling.
Most pitfalls in .NET are luckily along the lines of "punishing people who redo all the steps on each method call or write code Java-style", which might even be a good thing. I wonder where your bad experience with JSON libraries comes from? Even back in the days of .NET Framework, there was a variety of quality libraries such as Newtonsoft.Json, Utf8Json, or other more interesting but less feature-rich packages.
Nowadays, STJ provides quite good defaults, and in places where they differ from Newtonsoft, there usually is a reason for that. I'm not saying all APIs are successful; namely, I consider the out-of-box API for source-generated serialization to be really user-unfriendly.
However, other than that, just don't do unnecessary work and cache stateless objects like serializer settings. This translates to most programming languages and isn't something C# specific.
I had a terrible experience with Newtonsoft every time I used it:
* Compatibility-breaking changes happening without proper versioning, which meant that if someone happened to install their copy in the GAC, my app would break. I had to start shipping custom builds of Newtonsoft with the version 9.9.9.9 to stop the GAC from breaking me, and I never had this problem with any other managed library
* Bad performance
* The necessity to do awkward things to solve basic serialization problems, like writing a custom contract resolver just so that it wouldn't try to write to read-only properties
The standard JSON libraries in the BCL at the time were much worse. The new STJ looks alright but after so many bad experiences I can't bring myself to trust it.
For read-only properties/fields, the useful pattern in Newtonsoft is to just provide a parameterized constructor. There are multiple ways to implement this, and all of them are well documented.
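A sketch of that pattern (hypothetical type; Json.NET matches constructor parameters to JSON property names, so no contract resolver is needed):

    using Newtonsoft.Json;

    public class Order
    {
        // Read-only properties: no setters anywhere.
        public string Id { get; }
        public decimal Total { get; }

        // Newtonsoft binds JSON properties to these parameters by name
        // (case-insensitively) during deserialization.
        public Order(string id, decimal total)
        {
            Id = id;
            Total = total;
        }
    }

    // var order = JsonConvert.DeserializeObject<Order>(
    //     @"{ ""id"": ""A-1"", ""total"": 9.99 }");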
When it comes to performance: are you sure your code doesn't do anything that pessimizes it? The regular JsonConvert.Serialize/.Deserialize work well and are reasonably fast. Obviously, if you use a lot of custom contract resolvers with heavy uncached reflection and large allocations, it will destroy the performance, but almost no library can fix bad user-provided implementations; the best case is to design the API in a way that nudges towards efficient patterns, but that's it.
Keep in mind that assumptions, especially brought from different languages, may cause a user to unnecessarily shoot themselves in the foot, especially when they are solved by spending 15 minutes to go over suggested usage patterns in the documentation.
In addition, I think it is counter-productive to judge a completely unrelated JSON library in the BCL by your experience in a limited scenario with Newtonsoft, where its defaults were simply different from your expectations. The criticism listed is applicable to software that was released about a decade ago, on a platform with features (GAC) that have long become legacy. (E.g. the GAC is a very old feature, .NET Framework 4.6.2 was released 8 years ago, Newtonsoft 9.* 6 years ago, etc.)
I'm not saying all libraries are perfect, but experimenting with implementations in Rust, Java and Swift left me with the impression that C# packages are solid and relatively easy to use, even the less popular ones, and the DOM-like JObject/JToken were quite nice to use too for untyped payloads.
Last but not least, during the days of .NET Framework, the fastest implementation was Utf8Json, losing only to optimized binary serializers which had to perform less work. It was nowhere near as popular or as feature-rich as Newtonsoft, but in performance-sensitive scenarios it was very nice due to pooling internal buffers, emitting efficient IL stubs for serialization, and using many other tricks to reduce overhead as much as possible.
Anyway, Utf8Json is sadly deprecated, but STJ provides very good performance out of the box, and if you need even more, SpanJson and SimdJson can give you even better numbers, enjoying newer perf-focused features of the language and research done on JSON serialization. .NET today is much, much better and very different from the .NET Framework of 10 years ago. Saying otherwise is not much different from the people who keep yelling "msft bad, closed source windows only garbage!" for whatever reason.
Yeah, it should be in a static property/field or a singleton. I've seen measurable improvements from fixing this. System.Text.Json really has many unnecessary pitfalls. Maybe they should have fixed the memory problems with Newtonsoft instead; a much more practical lib.
Edit: with that said, in most apps JSON speed is really not important. I've never seen it consume more than a couple of percent of execution time for APIs since .NET 1.1.
I wish benchmarks with sample means in the tens of nanoseconds weren't reported and compared by the arithmetic mean. These are not normal distributions, and a 10% improvement could just mean you improved the 99th percentile, which is quite typical in my experience given how skewed the distributions tend to be.
Looks like the author is explicitly excluding "Error", "StdDev", "Median", and "RatioSD" from the output. It's in the setup[0]. I'm guessing the author omitted them for brevity.
... that awkward moment when video game tech reviewers like Digital Foundry provide better benchmarks of video game performance than a tiny company like Microsoft does for its biggest in-house programming ecosystem.
Video game performance is generally measured in milliseconds, not nanoseconds. The comparison would be if you were benchmarking things like physics sim updates on a per-object basis, or rendering on a per-triangle basis. Those would also be in nanoseconds.
As a game dev you're typically operating in terms of things like 'rendering our UI is taking 0.5ms, can we get it lower so we have more time for the terrain?'
There was a (very long) post recently talking about the Regex performance improvements in .NET 7[0]. It's comprehensive and well written and I found it very interesting. I love the new Regex source generator not just because of the performance but because you can step into the C# code and understand pretty well what the regex is doing.
I don't follow the .NET ecosystem closely enough, but releasing a major version every year feels kind of unnecessary? Or if they intend to keep doing this, they should release LTS versions of some kind.
It feels kind of scary committing to .NET 7 right now, when in a year, it will be replaced by yet another major version promising ever increasing performance gains... Or is the difference between major versions not that big?
I have an issue with calling three years "long-term" support. Most companies won't update to the new LTS for 9 to 12 months in order for any issues to be ironed out, so that means you get only 24 to 27 months on the LTS. Between CVEs and upgrading to .NET 6 before the end of life of .NET Core 3.1, most of my coworkers and I have gotten almost no new coding done on our applications this summer (and this was after a three-month blitz to get all of our code up to 3.1 ahead of the 2.1 end of life less than a year ago). As long as the LTS doesn't introduce a lot of breaking changes it's not that big of a deal, but when you have to refactor and rewrite non-trivial chunks of code it gets to be really wasteful.
What did you have to rewrite when upgrading from .NET 3.1 to 6? If all you had to touch was ASP.NET startup changes, that's hardly 1 day's worth of work.
I think this is part of the intent. Letting software get painfully out of date is something that wrecks code bases and companies, yet that's the way they naturally want to operate. Since they've always operated that way, they're not used to budgeting the resources to continually stay up to date. I prefer to upgrade on a regular cadence and even think LTS is an anti-pattern. The regular cadence means upgrades are as small as possible when they do happen, and people experienced with upgrading are more present on the engineering team. The alternative is shit gets 5 years out of date and then 10 years out of date.
I don't comment on Java or Python platform versioning policies because, like you with .NET, "I don't follow the ecosystem closely enough" so I don't have anything meaningful to say about it.
> It feels kind of scary committing to .NET 7 right now .. Or is the difference between major versions not that big?
One of the weirder HN tropes is being upset that new software continues to be released, or that languages get updated with new features.
I've so rarely seen truly major breaking changes that require substantial effort in updating your own code. Like, the Python 2->3 transition is notable because it's unusual.
What I find the weirder trope is the pattern of: "I don't know anything about X; anyway, why doesn't X do (thing that X has been doing for ages)?"
Ignorance is no problem; I am ignorant of the details of most popular programming languages, and I know that. It's another thing to show awareness of ignorance, and then lose that awareness before the end of the paragraph.
Whatever happened to "Whereof one cannot speak, thereof one must be silent."
They do have a tick-tock-style LTS approach. Our strategy is to stick with the LTS versions, since we sell software to banks.
The path we are on:
Framework 4.x => .NET Core 2.x => .NET Core 3.1 => .NET 6 (we are here today) => .NET 8
The migration from .NET Core 3.1 to .NET 6 was a total non-event. It was substantially harder for us to go from 2.x to 3.1.
Based upon the current proposals and available documentation, we anticipate our migration from .NET 6 => .NET 8 will occur with absolutely zero ceremony sometime around Q2 2024.
> The migration from .NET Core 3.1 to .NET 6 was a total non-event.
It was a rather big event for WinForms and WPF developers, and there are still issues with the designer and 3rd-party component libraries, and with C++/CLI as well, which happens to be used by them.
> It feels kind of scary committing to .NET 7 right now, when in a year, it will be replaced by yet another major version promising ever increasing performance gains... Or is the difference between major versions not that big?
Switching between major versions these days is incredibly easy so it's not like you pick a version for life. They're also not changing THAT much.
They have an excellent track record of not breaking compatibility, most updates are drop-in & forget (unless you want to utilize new lang/runtime features)
I recall a few bumpy bits in the various upgrades from 1.0 to 2.2 - they reworked quite a few areas such that updating the .NET Core version required changing your code. They provided good documentation and migration guides, but they did definitely break things along the way.
To be honest we didn't migrate .NET Framework apps; new builds were done in .NET Core, and thankfully there were a few smaller projects in the pipeline that allowed us to fiddle with it a bit through the transition 'til it was a bit more stable.
The updates from .NET Core 2 to .NET Core 6 have taken me maybe an hour of combined work across many years. Each major version has barely any breaking changes if any.
.NET Core 3.1 and .NET 6 are LTS (though 3.1 ends this year). Also in general things don't break that much. There are cases like nullable checks whose default changes and so if you go with that default you will have to update, but it's tweaks more than overhauls.
I have to agree that it's excessive and I don't understand why they're doing it. Major versions imply breaking changes, which contradicts the assertions of other people here replying that the differences between each version are minor; in that case why not minor version bumps? What's the benefit? The only thing I can think of is that it forces upgrades to new versions of Visual Studio.
One immediate problem it creates is with recruitment - between 2019 and 2021 we went from .NET Core 3.0, through 3.1, .NET 5, and now .NET 6, and suddenly in the space of one job I went from up-to-date to 'legacy CV'.
The annual cadence simplifies our planning: we can preview features a year ahead and prepare for the LTS release. It also shortens the feedback cycle for Microsoft, which has to be useful.
From a recruiting standpoint, it allows me to start conversations. I don’t take points off if candidates haven’t worked on the latest version, but one of my go-to questions is “What features in .NET X are you looking forward to using?”.
In an industry of constant change, it’s a negative signal if you don’t know anything about the current version of the platforms you use. Many, many candidates don’t.
The .NET framework has changed a great deal between versions 1 and 6, so I don't agree that it's sufficient just to say that I am experienced with .NET and leave it at that, nor is it equivalent to listing library versions. Besides I'm a contractor; potential customers are usually looking for someone who can hit the ground running, not spend a month skilling up.
If a recruiter denies you because you have .NET 6 on your resume instead of .NET 7 then that probably isn't a very lucrative job in the first place. You don't need a month of skill up to go from 6 to 7, maybe like 10 minutes.
Personally I'd just say .NET in general and leave the specifics to the interview.
I guess it's useful to know whether or not a candidate is 15 years out of date. We had a guy come through who knew classic ASP and I'm told it was a painful interview.
Major versions do include breaking changes in some libraries. You usually keep all Microsoft libraries aligned on the same major version to ensure compatibility, and it allows Microsoft to introduce these breaking changes on a regular basis without causing any confusion. If you're on .NET 6, most of your Microsoft nugets are on 6.xx.xx.
It gets really weird when you have one dependency including .NET 6 version, and another including .NET 5, since they sometimes rework nuget packages with how many Microsoft.Extensions they have now.
> "Arguably the biggest improvement around UTF8 in .NET 7 is the new C# 11 support for UTF8 literals."
> "UTF8 literals enables the compiler to perform the UTF8 encoding into bytes at compile-time. Rather than writing a normal string, e.g. "hello", a developer simply appends the new u8 suffix onto the string literal, e.g. "hello"u8. At that point, this is no longer a string. Rather, the natural type of this expression is a ReadOnlySpan<byte>. If you write:"
> public static ReadOnlySpan<byte> Text => "hello"u8;
> public static ReadOnlySpan<byte> Text =>
>     new ReadOnlySpan<byte>(new byte[] { (byte)'h', (byte)'e', (byte)'l', (byte)'l', (byte)'o', (byte)'\0' }, 0, 5);
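A small usage sketch of the u8 suffix (requires C# 11 / .NET 7):

    using System;

    // "GET "u8 is encoded to UTF-8 bytes at compile time: no runtime
    // transcoding and no string allocation.
    static bool IsGet(ReadOnlySpan<byte> requestLine) =>
        requestLine.StartsWith("GET "u8);

    Console.WriteLine(IsGet("GET /index.html"u8)); // True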
No, not unless you can pass a ReadOnlySpan into every API that expects a String. The change you referenced lets folks work around the fact that String is UTF-16. It doesn't transparently handle ASCII with one byte like other languages do.
Doing magic with the internal string format was considered by the .NET team and rejected.
There are multiple operations that right now are O(1) no-allocation, that would become sometimes O(n) allocating if they did that sort of magic.
This includes: P/Invoking Windows APIs that expect WCHARs, converting a string to a ReadOnlySpan<char>, and using `unsafe` to pin a string and access its contents as a `char*`.
Making code that users may have been relying on being O(1) and non-allocating into possibly O(n) and allocating was deemed too disruptive.
Plus, because strings store their contents within themselves, the on-demand conversion of a string to UTF-16 would require allocating a new object and updating all pointers to the old object to refer to the new one, which currently is an operation only done by the garbage collector. If expanding the string on demand required running a full garbage collection to handle this... yeah, going from O(1) non-allocating to O(n) plus a full garbage collection is a total non-starter.
I think Java, on the other hand, did not expose very many places where a string could be observed to be a UTF-16 character array under the hood (perhaps only as part of JNI marshaling?), making the ASCII-only string optimization more feasible. JavaScript never made the string's internal encoding visible, simply requiring that the indexer be able to return a UTF-16 value in O(1), so the ASCII optimization is simple there.
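To make the parent's point concrete, two of the operations that are O(1) and allocation-free today precisely because strings are stored as UTF-16 (the unsafe block needs AllowUnsafeBlocks enabled to compile):

    using System;

    string s = "hello";

    // O(1), allocation-free: a span view straight over the UTF-16 buffer.
    ReadOnlySpan<char> chars = s.AsSpan();

    // Also O(1): pin the string and read its raw UTF-16 code units.
    unsafe
    {
        fixed (char* p = s)
        {
            char first = *p; // 'h'
        }
    }

    Console.WriteLine(chars.Length); // 5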
> A huge amount of effort in .NET 7 went into making code gen for Arm64 as good or better than its x64 counterpart
Awesome. I was using an LSP server for F# (in Sublime Text) on an M2 Mac and it was always running at 500%+ CPU. I had to turn off the LSP server. Hopefully this version fixes it.
Assuming you're talking about FsAutoComplete and this was recently, that's nothing to do with the .NET Runtime and entirely a coding mistake that I made that we've released a fix for :)
For me, the Feedback button anchored to the viewport right does not work, and I'm not interested in signing up to leave a comment on the page, but in the event the author or anyone associated with them stumbles across this:
This article reliably causes Chrome to hang on my Samsung Android. No suggestions re. the length here, just feedback.