Some serious projects have started adopting CoreRT, despite Microsoft's neglect of its own runtime. CoreRT really seems to deliver on the single-file, fast-startup, small-size .NET promise. Getting this project folded into the LTS .NET 6 release should be a priority.
The source generators that are coming to .NET 5 easily cover 99% of our Reflection.Emit use cases, so the JIT is going to be more of a legacy burden once .NET 5 comes out.
I want Go's small size, fast compile times, fast startup, without Go's bs (explicit error checking, seriously?).
Pieces of our software get deployed over a satellite link, so yeah, megabytes matter (and that's why I don't even dare to propose using .NET for those parts). Sharing code with the rest of our .NET stack is a PITA though, so people are getting itchy to rewrite the rest of our .NET stuff in Go for better sharing (the Reflection.Emit parts would be replaced with "go generate", which is... a source generator). It would be good to get some clarity on the static compilation roadmap in .NET, because I like my job, but I also don't want to become a full-time Go developer.
You could abandon validation and pre-compile, but then running applications is no safer than C. You could modify the IR to do bad things like leak memory and crash the VM.
JS has the same problem. Parsing and validating JS is a large part of page load times these days.
I'm not sure source generators will be the solution you hope for. Java has supported build time code generation for ages but everybody still reaches for reflection. A few popular libraries like MapStruct use build time generation instead of reflection, but again there's like 20 other popular libraries that do the same thing with reflection.
Java has attempted to fix the reflection hole with modules: you're supposed to pre-declare the classes you're reflecting over in your module descriptor. But at current adoption rates it's going to be another 20 years before you can count on all your dependencies using the module system correctly.
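For anyone who hasn't seen it, that pre-declaration looks roughly like this in a module descriptor (module and package names here are made up for illustration):

```java
// module-info.java: grant a serialization library reflective access up front
module com.example.app {
    requires com.fasterxml.jackson.databind;
    // "opens" permits deep reflection over this package at runtime;
    // without it, reflective access fails under the module system
    opens com.example.app.dto to com.fasterxml.jackson.databind;
}
```

The catch, as noted, is that every dependency in the graph has to play along before a trimmer can actually rely on these declarations.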
Trimming unused code is easy without reflection. It's been standard practice on Android for a long time. But every time I've tried to use it on servers I run into random crashes caused by runtime reflection and classloading.
Pretty unfortunate shortcomings in Java and C#. I don't think they'll ever be truly fixed unless somebody uses the nuclear option of disabling reflection completely.
As long as you have a compile-time option I don't think it's nuclear, it just makes sense.
Didn't know that, interesting. I remember when invokedynamic was added specifically to make VM conversion of dynamic languages easier; I guess the JDK maintainers are somewhat guilty of the same laziness as the rest of us.
As an aside: from reading about the JVM targets of Haxe, I learned that MethodHandles have truly terrible performance, especially on Android.
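For anyone who hasn't used them: a MethodHandle is resolved once and then invoked roughly like a typed function pointer, in contrast to classic reflection which re-checks access and boxes arguments on every call. A minimal sketch of the two styles side by side:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class HandleDemo {
    public static int square(int x) { return x * x; }

    public static void main(String[] args) throws Throwable {
        // Classic reflection: arguments are boxed, access is checked per call
        Method m = HandleDemo.class.getMethod("square", int.class);
        int viaReflection = (Integer) m.invoke(null, 7);

        // MethodHandle: access is checked once at lookup time; invokeExact
        // avoids boxing, but the call-site signature must match exactly
        MethodHandle mh = MethodHandles.lookup().findStatic(
                HandleDemo.class, "square",
                MethodType.methodType(int.class, int.class));
        int viaHandle = (int) mh.invokeExact(7);

        System.out.println(viaReflection + " " + viaHandle);
    }
}
```

In principle the handle path should win once warmed up; the Haxe experience above suggests that on Android's runtime the theory doesn't hold.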
I don't think the potential performance advantages of build-time generation are overblown; it's just that everyone, even language designers, wants to avoid the added complexity.
I could be wrong of course, but because of this I think the C# idea of generators will end up underutilized as well.
I just wish they'd see that most users would gladly trade some compatibility for smaller size and faster execution.
I use Reflection, Reflection.Emit, and Linq.Expressions not just for serialization.
Building code at runtime from something else can be a win, especially for performance but sometimes for development time as well. That "something else" is not necessarily other code: https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/Tex...
Constructors are the only odd duck for which there is no efficient alternative. You either generate a delegate at runtime to call the right constructor, or you have to use reflection/Activator.CreateInstance.
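Java has the same gap, for what it's worth. The "generate a delegate at runtime" trick there is usually spelled with LambdaMetafactory: bind the constructor into a Supplier once, and every later call is plain interface dispatch instead of Constructor.newInstance. A rough sketch (class names made up):

```java
import java.lang.invoke.CallSite;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.function.Supplier;

public class CtorDemo {
    public static class Widget {
        @Override public String toString() { return "widget"; }
    }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // Resolve the no-arg constructor once, up front
        MethodHandle ctor = lookup.findConstructor(
                Widget.class, MethodType.methodType(void.class));

        // Spin up a Supplier bound to that constructor; after this setup,
        // factory.get() doesn't touch reflection at all
        CallSite site = LambdaMetafactory.metafactory(
                lookup, "get",
                MethodType.methodType(Supplier.class),  // factory signature
                MethodType.methodType(Object.class),    // erased Supplier.get
                ctor,                                   // what actually runs
                MethodType.methodType(Widget.class));   // specialized get
        @SuppressWarnings("unchecked")
        Supplier<Widget> factory = (Supplier<Widget>) site.getTarget().invoke();

        System.out.println(factory.get());
    }
}
```

Same shape as compiling a Linq.Expressions `Expression.New` into a delegate: pay the setup cost once, then call through a statically typed interface.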
A couple of days ago I found this, though; it works using source generators, so it doesn't require reflection to work: https://github.com/TomaszRewak/C-sharp-stack-only-json-parse...
Reflection is just easier. Unless you go out of your way to use libraries that use code generation, Java is just as bad as C# in this regard.
There are a few bright spots, like MapStruct for object-to-object mappings and Dagger 2 for dependency injection, that are built entirely on code generation. But these are exceptions to the norm.
Megabytes are a one-time thing, unless your app is HUGE. Once you've deployed the runtime and dependencies, you need only redeploy your application DLLs.
We looked into that but we can't run with an unpatched runtime. And servicing the runtime involves installing a huge package several times a year.
Edit: It seems cross-platform is still difficult, particularly on iOS, at least compared to using Delphi for iOS?
This "PublishTrimmed=true" is demo'd in this video: https://channel9.msdn.com/Events/Build/2020/BOD106
Starting around 00:46:30
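For anyone who'd rather skip the video, the gist is a couple of MSBuild properties (spellings as of the .NET Core 3.x SDK; check your SDK version):

```xml
<!-- csproj excerpt: self-contained, trimmed, single-file publish -->
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <PublishTrimmed>true</PublishTrimmed>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
```

Or equivalently on the command line: `dotnet publish -r win-x64 -p:PublishSingleFile=true -p:PublishTrimmed=true`.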
You'll have several GUI apps installed on the user's system...
UWP already handled that years ago, though.
.msix/.appx supports dependencies properly. If the app targets a new .NET version that isn't already downloaded, it just... downloads automatically on install. UWP .NET on Windows also lives inside .appx packages.
You basically have to directly fiddle with the flags to the IL linker to really get the size down. It's a pain. They are working on designs to make it better:
I think these changes can make self contained .NET apps compare more favorably to Go apps, at least for larger applications. It would probably take something more like CoreRT to get app size to be competitive with Rust and C.
That's 3 orders of magnitude difference. Obviously it won't be as much with more complex applications, but it's still funny to see others here considering a dozen MB or so for doing something trivial to be small, when that's the size of a full installation of Windows 3.11 complete with all its built-in apps.
JS is also exposed to this. Some libraries can be tree-shaken, but I've always run into random things breaking when trying it.
> We're talking about saving how much disk space anyway?
Surely this is relevant to how the customer values disk space, not the developer.
Tons. Serialization, for one. And plugin systems are commonplace.
> You have zero grounds to dictate the value of resources.
Nonsense and worse words. We know how much disk costs. It is a rounding error for the overwhelming majority of people and the overwhelming majority of apps. Prioritize what matters.
We know how much disk costs, but at any given moment a certain non-negligible share of businesses will be working with very old equipment in some of their branch offices, self-service terminals, factory floors, labs, etc.
I agree that 17 MB will rarely be a show stopper in isolation. But resource consumption is critical if Microsoft wants .NET to be a universal solution for business computing needs. Memory is far more likely to be the bottleneck in my experience.
If a company has modern equipment in 95% of their locations, but the remaining 5% are difficult to upgrade for some reason (cost of disk is unlikely to be that reason), then those 5% will determine which technologies are even taken into consideration.
Surely this is doable statically. What is the advantage of doing this at runtime rather than at compile time?
> Nonsense and worse words. We know how much disk costs. It is a rounding error for the overwhelming majority of people and the overwhelming majority of apps. Prioritize what matters.
Got a citation? Disk space is the only variable that people even know to complain about... Not sure what you're drawing this from but it stinks to high heaven of corporate propaganda.
I'm not sure where "Disk space is the only variable that people even know to complain about" came from, but games use 100-150 GB nowadays, so things like 17 MB are basically irrelevant unless you do verrrrrrrrry specific stuff.
If you have a phone with 8G of space like I do, obviously games which require 150G are beyond my means. How does this work towards disk space being irrelevant?
If you're talking about the server side....well I think you'd save a lot more on less vCPU than a little more attached storage.
So let me put it back on you: except maybe IoT, where do you run into problems where your app takes up too much disk space?
EDIT: Maybe it's an update/bandwidth thing? Or a Docker pull time in CI? I'm trying to play devil's advocate here...
What does this have to do with C#? Surely you hold it to a higher standard than Electron of all things; C# has been around for 18 years. Electron is just repackaging a browser as an app. Is this the standard to which Microsoft holds itself? Might as well sell scripts for Google Docs...
ASP.NET, Entity Framework, any time you use an attribute, binding, or auto-generated columns based on an object.
On windows the dotnet runtime is so ubiquitous I'm a bit surprised this is needed.
E.g. I have one app that has 4 assemblies (the program + 3 internal libraries), 5 more "internal" general assemblies/libraries of mine, and 12 dependencies from NuGet, for a grand total of 21 (+ 1 wrapper .exe) files. Now double that number if you want the PDBs (debug/symbol files) too. Without the runtime.
Shipping 21 files isn't horrible (I have seen node_modules directories in the hundreds of thousands of files), but shipping one single self-contained .exe would be even nicer.
You can kinda use ILMerge (and some hacking) to achieve a similar result, but it can break in subtle ways (especially when you try to debug something) and does not work with .NET Core as far as I know. And it's not officially supported either.
You can also use the PublishReadyToRun flag to do AOT compilation so that startup times are faster (https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotne...). It still keeps the IL (intermediate language) around for certain things so the file sizes can be a bit larger, but if startup times are a concern, it's an option.
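If it helps anyone, enabling it is a one-liner in the csproj (combinable with the single-file/trimming flags, with the size tradeoff mentioned above):

```xml
<PropertyGroup>
  <!-- pre-compile IL to native code at publish time; IL is kept for fallback -->
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```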
The full .NET Framework had ILMerge, which did exactly that. But it isn't compatible with .NET Core.
It doesn't bloat the executable with binaries that are already installed on the machine, and the executable benefits from future updates to the framework.
On Unix we will have single executables with the runtime bundled inside. This mode is being referred to as "SuperHost".
Windows however will have some files left on disk:
> Ideally, we should use the SuperHost for self-contained single-file apps on Windows too. However, due to certain limitations in debugging experience and the ability to collect Watson dumps, etc., the CoreCLR libraries are left on disk beside the app.
I guess for some scenarios, such as deploying webapps, it might be very handy to copy ONE file and run it. But for client apps, is there a visible improvement in having 1 exe instead of 1 exe and 5 DLLs that together are the same size?
I see the whole thing as more of an issue of your target demographic and the scale of your application. Things that run as background services would make sense to distribute as an installable package. Larger applications like Office or Visual Studio are much too large to throw in a single executable. Something else, though, like Acrobat Reader or FileZilla, I think would make sense to distribute as a single executable. Most times I just don't want to install anything. I use FTP so seldom that I'd rather just download a portable FTP client than keep something installed or even have to extract a multi-file archive somewhere.
On most Windows machines where the .NET Framework is installed you can find a compiler.
My motivation for building this tool was to have access to some basic tools in a locked-down enterprise environment *ducks*
I ended up being so annoyed by the experience that I wrote the server backend in Python, and I'm trying out the Python "compilers" for the next client.
When you have something that is going to be scattered across maybe ten or twenty machines tops for a small project, for whatever reason, people just like the .exe and I can't say I blame them.
Project two was a different problem, with a compiled Python client and a Python server. I dimly recall using something like "freeze" for Python to generate something with many fewer files. I changed my methodology partially based on my dislike of the IDE that Visual Basic had at the time and partially due to the criticisms of the VB.NET client I had produced.
The thing go has though is that single-binary cross-compilation is _much_ easier to grok. It took a lot of reading .net docs to understand what incantations I needed to pass to the compiler.
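To make the comparison concrete, here's roughly what each side looks like (the .NET flag spellings are from the .NET Core 3.x era; adjust for your SDK):

```sh
# Go: two environment variables, no extra toolchain setup for pure-Go code
GOOS=linux GOARCH=arm64 go build -o app .

# .NET: a runtime identifier plus a pile of MSBuild properties
dotnet publish -c Release -r linux-arm64 --self-contained true \
  -p:PublishSingleFile=true -p:PublishTrimmed=true
```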
One of my main goals was to have a single binary hosted on a Heroku free dyno (512 MB RAM), and this was using ~1 GB vs Go, which idled at ~50 MB.
 - https://github.com/J-Swift/GamesDbMirror-go/blob/master/pkg/...
The other large remaining point is startup time, for e.g. CLIs and AWS Lambdas. .NET Core has been making great progress there, but I think a true native executable will always beat it.
Go will likely continue to have a bit of an advantage for short-running processes, but .NET and Java are both likely to start getting into this space. Micronaut is advertising "startup in tens of milliseconds with GraalVM". Microsoft is working on integrating all the Xamarin/Mono/Framework/Core work into one .NET and there's a lot of great stuff there. CoreRT isn't going to be productized, but it's likely that a lot of the ideas will become part of .NET in the future. We've seen announcements about compile-time code generation which will help .NET avoid reflection in AOT-compiled scenarios.
Go is very successful for a reason, but Java and .NET aren't ignoring Go's advantages. I think there was a bit of complacency for a while. Java was the open-source statically-typed platform and C# was the Microsoft one. As new languages like Scala, Kotlin, and Go came on the scene, there was renewed interest in pushing Java forward and as Microsoft pivoted away from a Windows-first-and-only state of mind, there were a lot of areas to push C# into (with help from the Mono/Xamarin folk). AOT compilation is going to be important for things like iOS development and WebAssembly which both look like they're going to be big emphases for Microsoft going forward (they've announced how they're going to unify iOS/Android/Windows/Mac development with .NET 6 and Blazor seems like one of the more exciting WASM attempts).
Again, not taking anything away from Go, but I think Java and .NET will both be making strides in startup times and AOT compilation.
It's not clear to me how the .NET 5 feature is different.
Most of this is already true because of .NET Standard 2.0, which the latest Mono, Framework, and Core all implement. Large swaths of the extant OSS libraries for .NET have already converted to .NET Standard 2.0. There are mostly only niche use cases left today that you can't build into .NET Standard libraries and import into programs compiled for any of the extant CLRs.
If you don't have daily familiarity with these tools, they are absolutely inscrutable.
You can't pick a language based on one characteristic. Show me a language that doesn't have a long series of stupid issues. Usually you have to pick the lesser evil.
They have their own set of issues...