I worked on Mono a lot back in the early 2000s (back in the SVN days, before it moved to Git, even). This move makes a lot of sense. Things evolved a lot over the years, and Mono's legacy goal of being a portable CLR (.NET) runtime for platforms that Microsoft didn't care about doesn't make much sense today.
Mono made a lot of sense for running in places where full .NET didn't: full-AOT environments like the iPhone, where you can't JIT, or random architectures that once mattered for Linux but no longer do (Alpha, Itanium, PPC, MIPS, etc.). When Microsoft bought Xamarin (which itself was born out of the ashes of Novell's shutdown of the Mono effort) and started the .NET Core effort to make .NET itself more portable and less of a system-provided framework, merging in a lot of what Mono did, a single more focused project made more sense.
Mono was still left out there to support the edge cases where .NET Core didn't make sense: mostly being a backend for Wine in some cases, some GNOME desktop stuff (via GTK#, which is pretty dead now), and older niche use cases (Second Life and Unity still embed Mono as a runtime for their systems). The project was limping along, though, sharing a standard library with .NET but keeping a different runtime after much merging. Mono's runtime was always a little more portable (C instead of C++) and more accessible to experiment with; we need that less and less, but it's still perfect for Wine. So having it live on in Wine makes sense. It's a natural fit.
Is there somewhere where someone new to the ecosystem can get a simple introduction to all of these different terms and which ones are still relevant today? I looked into .NET somewhat recently and came away with the apparently mistaken impression that Mono was how .NET did cross-platform. I guess I must have been reading old docs, but I'm pretty sure they were at least semi-official.
Is there good documentation somewhere for getting set up to develop with modern .NET on Linux?
For modern .NET, you don't need to know anything about the legacy terms Mono, .NET Core, .NET Framework, .NET Standard, etc. All you need is the .NET 8 SDK. It's fully cross-platform and installs support for both C# and F#.
For example, just download the .NET 8 SDK on whatever platform (it's usually very easy to install), and then run `dotnet fsi` to get into an F# REPL.
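A sketch of what a first session looks like (REPL output approximate):

    $ dotnet fsi

    > let square x = x * x;;
    val square: x: int -> int

    > square 7;;
    val it: int = 49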
This is from Mint 22. MS does have its own PPA though.
$ apt search dotnet
p dotnet-apphost-pack-6.0 - Internal - targeting pack for Microsoft.NET
p dotnet-apphost-pack-7.0 - Internal - targeting pack for Microsoft.NET
p dotnet-apphost-pack-8.0 - Internal - targeting pack for Microsoft.NET
p dotnet-host - dotNET host command line
p dotnet-host-7.0 - dotNET host command line
p dotnet-host-8.0 - .NET host command line
p dotnet-hostfxr-6.0 - dotNET host resolver
p dotnet-hostfxr-7.0 - dotNET host resolver
p dotnet-hostfxr-8.0 - .NET host resolver
p dotnet-runtime-6.0 - dotNET runtime
p dotnet-runtime-7.0 - dotNET runtime
p dotnet-runtime-8.0 - .NET runtime
p dotnet-runtime-dbg-8.0 - .NET Runtime debug symbols.
p dotnet-sdk-6.0 - dotNET 6.0 Software Development Kit
p dotnet-sdk-6.0-source-built-arti - Internal package for building dotNet 6.0 So
p dotnet-sdk-7.0 - dotNET 7.0 Software Development Kit
p dotnet-sdk-7.0-source-built-arti - Internal package for building dotNet 7.0 So
p dotnet-sdk-8.0 - .NET 8.0 Software Development Kit
p dotnet-sdk-8.0-source-built-arti - Internal package for building the .NET 8.0
p dotnet-sdk-dbg-8.0 - .NET SDK debug symbols.
p dotnet-targeting-pack-6.0 - Internal - targeting pack for Microsoft.NET
p dotnet-targeting-pack-7.0 - Internal - targeting pack for Microsoft.NET
p dotnet-targeting-pack-8.0 - Internal - targeting pack for Microsoft.NET
p dotnet-templates-6.0 - dotNET 6.0 templates
p dotnet-templates-7.0 - dotNET 7.0 templates
p dotnet-templates-8.0 - .NET 8.0 templates
p dotnet6 - dotNET CLI tools and runtime
p dotnet7 - dotNET CLI tools and runtime
p dotnet8 - .NET CLI tools and runtime
p libgtk-dotnet3.0-cil - GTK.NET library
p libgtk-dotnet3.0-cil-dev - GTK.NET library - development files
dotnet-sdk-8.0 should have the rest of what you need downstream from there. For other libraries and versions, you should be able to use NuGet with your project directly.
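For example, from the project directory (the package ID here is just an illustration):

    dotnet add package Newtonsoft.Json

That records the reference in the .csproj and restores the package from nuget.org.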
I've been using the script installer intended for CI/CD, as I actually like that installer more; it's the only one that really supports multiple versions correctly.
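For reference, that flow looks roughly like this (from memory; the script and its flags are documented on the .NET site):

    curl -sSLO https://dot.net/v1/dotnet-install.sh
    chmod +x dotnet-install.sh
    ./dotnet-install.sh --channel 8.0    # latest SDK of a channel
    ./dotnet-install.sh --channel 6.0    # older SDKs install side by side under ~/.dotnet
    ~/.dotnet/dotnet --list-sdks         # shows every installed version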
What's unfriendly about just clicking through the options? Anytime I want to install .NET, I just go to that exact documentation, click on the distribution I want (usually Ubuntu), and then just click on the version (https://learn.microsoft.com/en-us/dotnet/core/install/linux-...). I almost always use Microsoft's feeds though, so as to not rely on the middleman of the Ubuntu package manager feeds.
Ubuntu is a subpar package maintainer, but in well-run distros that middleman who does the packaging makes an effort to ensure you are getting a stable, performant package, and tries to catch errata or abusive practices that upstream starts pushing (say, Microsoft opening Edge when you run wget or curl in the terminal, rather than calling the real wget or curl).
To a point. Making cross-platform native desktop apps is still in the hands of third-party vendors such as Avalonia and Uno. MAUI was supposed to fix that oversight, with less than stellar results.
If there were an old version of C that only worked on one platform but had a graphical toolkit in its standard library, and a new version of C that was cross-platform but whose graphical toolkit is now only ambiguously part of the standard library and still not cross-platform (with no realistic alternative), then yes, it would be valid to object that C is not really cross-platform.
Back when .NET was first launched, it was advertised as the new way of making desktop applications on Windows.
Visual C# made it very easy to design GUI interfaces.
So this "it's all for backend now" notion is surprising.
.Net is "Microsoft Java". Like Java it was designed to do everything, but as desktop development died (and mobile development was locked down by Apple and Google, limiting it to their corporate languages), it pivoted towards networked applications.
They were legally forbidden from going the Embrace-Extend-Extinguish route there, so they had to build their own version from scratch. C# exists because J++ couldn't.
Is Kotlin the most "active", "hot", or "up-and-coming" competitor? Possibly. But the "largest"? Its deployed footprint and popularity are nowhere close to Java's at this point in time.
No, and it's not even close. Kotlin only has JetBrains Compose (I presume Kotlin Multiplatform is the same thing). It is also subject to the quirks and specifics of JVM implementations, build systems, and package management. Kotlin/Native partially bypasses this, but its performance is a factor of 0.1-0.01x vs. OpenJDK (if there is newer data, please let me know). This is very unlike NativeAOT, which is on average within 90% of CoreCLR JIT and is even a performance improvement in a variety of scenarios.
C# and F# get to enjoy integration that is "much closer to the metal", as well as a much richer cross-platform GUI framework ecosystem with a longer history.
There are more than 10 sibling and gp comments that exhaustively address the GUI and other questions :)
> That's a massive advantage over the arcane package management and build systems of .NET.
Very few languages ever achieve a build and package management system as mature and usable as the Java ecosystem.
I've been waiting for 12 years for .NET to match Java's ecosystem, and it's still not there yet.
If you want to sell me on "advantages" of invoking Gradle or Maven over
dotnet new web
dotnet run
curl localhost:port
or
dotnet new console --aot
echo 'Console.WriteLine($"Right now is {DateTime.Now}");' > Program.cs
dotnet publish -o {here goes the executable}
or
dotnet add package {my favourite package}
I suppose you would actually need 12 years of improvements, given how slowly, if ever, these things get resolved in Java land.
Also, what's up with Oracle suing companies for using incorrect JDK distribution that happens to come with hidden license strings attached?
Well, that's where the problem lies, isn't it? The ecosystem for .NET is extremely limited compared to what's available for the JVM
And the way JVM packages are distributed, with native libraries, BOMs, and platforms, allows more versatility than any other platform.
The build system may be better in dotnet, but that only really matters for the first 10 minutes. Afterwards, the other tradeoffs become much more important.
I don't think "JVM is more popular" argument does justice to Java (and Kotlin) strengths. With this reasoning, you could also say "C++ is more popular for systems programming" but it doesn't stop developers from switching to Rust, Zig or even C# as a wider scope and easier to use language that has gotten good at it.
Nonetheless, you could make this argument for select Apache products, but that's Apache for you. It does not hold true for the larger ecosystem and, at the end of the day, quantity is not quality, otherwise we would've all been swept by Node.js :)
Same applies to "packages that bundle native libraries".
First, they are always maintenance-heavy to manage, with an ever-growing matrix of platforms and architectures. Just x86 alone is problem enough, as all kinds of codecs perform wildly differently depending on whether AVX2 or AVX-512 is available vs. SSE4.2, or even SSE2 without EVEX. Now add ARM64 with and without SVE2 to the mix. Multiply this by 2 or 3 (if you care about macOS or FreeBSD). Multiply the Linux targets again by musl and glibc. You get the idea. This is a worst-case scenario, but it's something Java is not going to help you with; it will only make your life more difficult, for the reason below.
There is also the exercise of writing JNI bindings. Or maybe using Java FFM now, which still requires you to go through separate tooling and a build stage, deal with the off-heap memory management API, and still does not change the performance profile significantly. There's a reason it is recommended to avoid native dependencies in Java and port them instead (even with performance sacrifices).* Green threads will only exacerbate this problem.
Meanwhile
using System.Runtime.InteropServices;

// bind libc's putchar via P/Invoke and call it like any other static method
[DllImport("libc", EntryPoint = "putchar")]
static extern int PutChar(int c);

var text = "Hello, World!\n";
foreach (var c in text) PutChar(c);
since C# 2 or maybe 1? No setup required. You can echo this snippet into Program.cs and it will work as is.
(I'm not sure if the binding process on ole Mono was any different? In any case, the above has worked on Linux since at least 8 years ago)
* Now applies to C# too, but for a completely different reason - you can usually replace data-crunching C++ code with a portable pure C# implementation that retains 95% of the original performance while reducing LOC count and complexity. Huge maintenance burden reduction and "it just works", without having to ship extra binaries or require users to pull extra dependencies.
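For illustration, a toy sketch of such portable data crunching using the cross-platform SIMD types (a hand-rolled example; Vector.Sum requires .NET 6+):

    using System;
    using System.Numerics;

    Console.WriteLine(SumAll(new[] { 1, 2, 3, 4, 5 })); // prints 15

    static long SumAll(ReadOnlySpan<int> data)
    {
        var acc = Vector<int>.Zero;
        int i = 0;
        // Vector<int>.Count adapts to the hardware at run time (SSE2/AVX2 on x86, NEON on ARM)
        for (; i <= data.Length - Vector<int>.Count; i += Vector<int>.Count)
            acc += new Vector<int>(data.Slice(i, Vector<int>.Count));
        long sum = Vector.Sum(acc);
        for (; i < data.Length; i++) sum += data[i]; // scalar tail
        return sum;
    }

The same binary then uses whatever vector width the machine offers, which is exactly the platform/architecture matrix problem the native-package approach has to solve by hand.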
> There is also the exercise of writing JNI bindings. Or maybe using Java FFM now, which still requires you to go through separate tooling and a build stage, deal with the off-heap memory management API, and still does not change the performance profile significantly. There's a reason it is recommended to avoid native dependencies in Java and port them instead (even with performance sacrifices).* Green threads will only exacerbate this problem.
// JNA (Java Native Access) example; needs the net.java.dev.jna:jna dependency on the classpath
import com.sun.jna.Library;
import com.sun.jna.Native;

interface MSVCRT extends Library {
    MSVCRT Instance = (MSVCRT) Native.load("msvcrt", MSVCRT.class);
    void printf(String format, Object... args);
}

public class HelloWorld {
    public static void main(String[] args) {
        MSVCRT.Instance.printf("Hello, World\n");
        for (int i = 0; i < args.length; i++) {
            MSVCRT.Instance.printf("Argument %d: %s\n", i, args[i]);
        }
    }
}
> "C++ is more popular for systems programming"
Sure, and it's got many great libraries – but actually using those is horrible.
You're absolutely right about Rust though. crates.io and cargo are amazing tools with a great ecosystem.
The primary issue I've got with the .NET ecosystem is actually closely related to that. Because it's so easy to import native libraries, often there's no .NET version of a library and everyone uses the native one instead. But if I actually want to build the native one I've got to work with ancient C++ build systems and all the arcane trouble they bring with them.
> Same applies to "packages that bundle native libraries".
You seem to have misunderstood. The fun part of the maven ecosystem is that a dependency doesn't have to be a jar, it can also be an XML that resolves to one or multiple dependencies depending on the environment.
> The primary issue I've got with the .NET ecosystem is actually closely related to that. Because it's so easy to import native libraries, often there's no .NET version of a library and everyone uses the native one instead. But if I actually want to build the native one I've got to work with ancient C++ build systems and all the arcane trouble they bring with them.
What is the reason to continue making statements like this one? Surely we could discuss this without making accusations out of thin air? As the previous conversation indicates, you are not familiar with C# and its toolchain, and were wrong on previous points as demonstrated. It's nice to have back-and-forth banter on HN; I get to learn about all kinds of cool things! But that happens through looking into the details, verifying whether prior assumptions are still relevant, reading documentation, and actually trying out and dissecting the tools being discussed to understand how they work - Golang, Elixir, Swift, Clojure, etc.
> You seem to have misunderstood. The fun part of the maven ecosystem is that a dependency doesn't have to be a jar, it can also be an XML that resolves to one or multiple dependencies depending on the environment.
Same as above.
> JNA
I was not aware of it, thanks. It looks like the closest (even if a bit more involved) alternative to .NET's P/Invoke. Quick search indicates that it comes at an explicit huge performance tradeoff however.
This uses the Win32 API. I will post numbers in a bit. .NET interop overhead in this scenario usually comes at 0.3-2ns (i.e. the single CPU cycle it takes to retire the call and branch instructions), depending on the presence or absence of a GC frame transition, which library loader was chosen, and dynamic vs. static linking (albeit with JIT and dynamic linking, the static address can be baked into codegen once the code reaches Tier 1 compilation). Of course the numbers can be presented in a much more .NET-favored way by including the allocations that Java has to do in the absence of structs and other C primitives.
> Quick search indicates that it comes at an explicit huge performance tradeoff however.
That's definitely true, but it should be possible to reimplement JNA on top of the new FFM APIs for convenient imports and high performance at the same time.
> Of course the numbers can be presented in a much more .NET-favored way by including the allocations that Java has to do in the absence of structs and other C primitives.
Hopefully Project Valhalla will allow fixing that, the current workarounds aren't pretty.
I fully agree though that .NET is far superior in terms of native interop.
> As the previous conversation indicates, you are not familiar with C# and its toolchain,
I've been using .NET for far over a decade now. I even was at one of the hackathons for Windows Phone developers back in the day.
Sure, I haven't kept up with all the changes in the last 2-3 years because I've been so busy with work (which is Kotlin & Typescript).
That said, it doesn't seem like most of these changes have made it that far into real world projects either. Most of the .NET projects I see in the real world are years behind, a handful even still targeting .NET Framework.
> were wrong on previous points as demonstrated.
So far all we've got is a back and forth argument over the same few points, you haven't actually shown any of my points to be "wrong".
> I've been using .NET for far over a decade now. I even was at one of the hackathons for Windows Phone developers back in the day.
This conversation comes up from time to time. It is sometimes difficult to talk to developers whose perception of .NET predates .NET Core 3.1 or so; Windows Phone and its tooling are older still. I am sad UWP has died: the ecosystem needs something better than what we have today, and the way Apple does portability with Mac Catalyst is absolutely pathetic. In a better timeline there exists an open and multi-platform UWP-like abstraction adopted by everything. But these were other times, and I digress.
The package distribution did not change significantly, besides small things like not having to write a .nuspec by hand in most situations. NuGet was already good and far ahead of the industry when it was introduced.
The main change was the switch to SDK-style project files. Kind of like Cargo.toml, but XML.
Adding a file to a NuGet package (or anything else you build) is just adding a <Content ... /> item to an <ItemGroup>.
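Roughly like this, in the .csproj (a sketch; the file name, condition, and package path are placeholders):

    <ItemGroup>
      <!-- ship a native binary inside the package, but only when packing on Linux -->
      <Content Include="native/libfoo.so"
               Condition="$([MSBuild]::IsOSPlatform('Linux'))"
               Pack="true"
               PackagePath="runtimes/linux-x64/native/" />
    </ItemGroup>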
As you can see, it is possible to make definitions conditional and use arbitrary information provided by the build system. It is very powerful. I don't know what made you think that I assume anything about .jar files.
Together with the <PublishAot> property, invoking 'dotnet publish -o .' calls into cargo to build a static library from Rust, then compiles the C# project, then compiles the produced .NET assemblies to native object files with ILC (the IL AOT compiler), and then calls the system linker to statically link the .NET object files and the Rust object file into a final native binary. The calls across interop, as annotated, become direct C ABI calls plus a GC poll (a boolean check; multiple checks may be merged, so less than a branch per call).
This produces just a single executable that you can ship to users. If you open it with Ghidra, it will look like weird C++. This is a new feature (.NET 7+), but even without NativeAOT it was already possible to trim and bundle CIL assemblies into a single executable together with the JIT and GC. As far as I'm aware, the closest thing Java has is GraalVM Native Image, which is even more limited than NativeAOT at the present moment (the IL linker has improved a lot and needs far fewer annotations, most of which can be added as attributes in code, and the analyzer will guide you so you don't need trial and error). And the project that allows embedding bytecode in the .NET trimmed single-file style in Java is still very far from completion (if I understood it right).
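In project-file terms, that setup is roughly (a sketch; the library names are placeholders, and the cargo step is wired in as a pre-build target not shown here):

    <PropertyGroup>
      <PublishAot>true</PublishAot>
    </PropertyGroup>
    <ItemGroup>
      <!-- statically link the Rust staticlib into the final native binary -->
      <DirectPInvoke Include="my_rust_lib" />
      <NativeLibrary Include="target/release/libmy_rust_lib.a" />
    </ItemGroup>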
I think https://two-wrongs.com/dotnet-on-linux-update is more or less representative of unbiased conclusions one makes when judging .NET by its merits today. You can always say "it used to be bad". Sure. It does not mean it still is, and the argument is irrelevant for greenfield projects, which is what I advocate C# is the better choice for anyway.
> I fully agree though that .NET is far superior in terms of native interop.
This is not limited to native interop. At its design inception, C# was supposed to replace C++ components at MS. Then, in C# 2, a focus group including Don Syme if I'm not mistaken pushed for generics and other features. Someone posted a history bit here on HN.
This, along with influence from projects like Midori (spans, struct improvements) and subsequent evolution (including the existence of Mono), especially after it stopped being .NET Framework and became .NET, resulted in a language that has a much wider scope of application than most other GC-based languages, including Java, particularly around low-level tasks (which is also why it's popular in the gaming industry).
Unfortunately, the perception of "another Java" hurts the ecosystem and discourse significantly, as the language and the platform are very unlike this claim.
> .NET Multi-platform App UI (.NET MAUI) apps can be written for the following platforms:
> - Android 5.0 (API 21) or higher is required.
> - iOS 11 or higher is required
> - macOS 11 or higher, using Mac Catalyst.
> - Windows 11 and Windows 10 version 1809 or higher, using Windows UI Library (WinUI) 3.
Okay, where's Linux? That's what Mono was originally made for and where Mono really shines.
Also, the development experience isn't great either:
> - If you are working on Linux, you can build and deploy Android apps only
> - You need a valid Visual Studio or IntelliCode subscription
The getting started guide only exists for Windows and macOS and the forum post announcing experimental Linux support is full of caveats.
I don't think you and I would agree on what "cross-platform" means, especially in the context of Mono being donated to Wine, which is a heavily linux-centric discussion topic.
> - If you are working on Linux, you can build and deploy Android apps only
> - You need a valid Visual Studio or IntelliCode subscription
You don't: https://marketplace.visualstudio.com/items?itemName=ms-dotne... (DevKit, which is the licensed one, is completely optional; it gives you a VS-style solution explorer. You can already get that with e.g. F#'s Ionide, which works for any .NET file in the solution, though I use neither)
While I don't have much direct experience with it, as it was easy to migrate my personal projects, the idea seemed sound: a way to encourage people to write libraries against the new .NET Core (at the time) while still allowing those libraries to be used in .NET Framework, a sort of bridge for people stuck on .NET Framework.
Any support for open source or cross-platform stuff was a bulwark against claims of monopoly abuse, but none of it worked well enough to be a true replacement. Mono worked for some purposes, but it was far from the first-party support cross-platform .NET gets today. Nowadays it sounds like .NET Core + third-party GUI libraries is the way to go.
> Nowadays it sounds like .NET Core + third-party GUI libraries is the way to go.
For reference for those unfamiliar with the terms:
.NET Core was the name given to the cross-platform fork of the .NET runtime.
It was forked out of .NET 4.x and dropped support for a lot of things in the first versions.
It ran on various distributions of Linux and on macOS.
At the same time there were forks of other libraries/frameworks in the .NET ecosystem to create 'Core' variants. Often these dropped support for legacy parts of their code so that they could run on Core.
Later versions of .NET Core brought over support for many of the things that had been dropped.
.NET Core and .NET Framework had stand-alone versions until .NET Core was renamed to .NET and became .NET 5.
So, if you want to do the most modern cross-platform C# you would use .NET 9.
More or less: any version of .NET >= 5 is cross-platform and is a direct descendant of the "Core" side of the fork, and so has no "full framework, Windows-only" variant.
It is "Core" in a lineage sense, but there's no need to make that distinction any more. The term "Core" is out of date, because the experimental "Core" fork succeeded, and became the mainstream.
I've been a long way from Windows development for a while, so missed that shift. I knew it was coming since moving functionality to the open source thing seemed to be Microsoft's target (with some skeptics doubting it, understandably). I didn't know it already happened.
The shift is slow, but it has been ongoing for years, and is pretty much wrapping up now. .NET 5 was released in November 2020, and that was the "beginning of the end" of the shift.
For what I do, it's not really "Windows development" in any meaningful way. It is business functionality with HTTP, message queues etc, developed on mostly Windows laptops, and deployed to mostly Linux instances on the cloud. Not that the host OS is something that we have to think about often.
For this, .NET Framework 3.x ("the full framework, Windows-only version") services are regarded as very much legacy, and I wouldn't go near one without a plan to migrate to a modern .NET version.
However, YMMV and people are also making windows desktop apps and everything else.
Quantity of languages might be less important than: how many needs are served by those languages, whether the ecosystem is dynamic enough to keep expanding served niches, and whether the culture and community is likely to produce language support for a niche that matters to you ever or on a realistic timeline. The JVM does appear to have a lot more niches covered, but you can still do all the things those languages do in what's available for the CLI.
I don't know much about the current state of CLI and .NET beyond what I've read here, but it sounds like it's dynamic enough to keep expanding. I also don't know enough about the long tail of niche languages supported by each to know which direction they're headed.
That's the situation with the tools used for music production. In theory, any DAW (Digital Audio Workstation) can make any kind of music. In practice, they all move toward different kinds of music, and you'll run into increasing friction as you do weirder or more complex stuff if you pick the wrong DAW. Cubase can do electronic music, but you're better off with FL Studio or Live. Live and FL Studio can do orchestral, but you're better off with Cubase.
And I'd guess there's a similar dynamic with CLI and JVM and the languages that target them.
It's a fork with a lot of modifications (mostly removing deprecated stuff and making it cross-platform). You can still see a lot of ancient stuff in the sources such as referring to the base Object class as "COM+ object" (.NET was originally envisioned as a successor to COM).
>An early name for the .NET platform, back when it was envisioned as a successor to the COM platform (hence, "COM+"). Used in various places in the CLR infrastructure, most prominently as a common prefix for the names of internal configuration settings. Note that this is different from the product that eventually ended up being named COM+.
Correct, the bytecode wasn't even 1:1 compatible. They then brought over missing pieces, and consolidated .NET Framework features into .NET Core, thus becoming just .NET to end the dumb naming war, since everyone calls it .NET anyway...
Good write-up that wonderfully encapsulates how stupid Microsoft’s naming is; you didn’t even mention .NET Standard.
I love .NET. It’s a great stack, especially for backend web apps. Blazor is a great SPA framework too. But I loathe how Microsoft continue to handle just about everything that isn’t the framework and C# / F#. It’s laughable.
Oh don’t get me wrong - I wasn’t criticising your write up. It was concise and still relevant.
It’s just funny for newcomers to peel back the onion more. Writing a source generator? Target .NET Standard 2.0 (not even 2.1), for a whole host of reasons.
The ".NET" label was applied to a bunch of things at Microsoft.
It was also an early name given to their social networking / IM things.
But for the last 20-ish years it's really only been applied to things related to the .NET Framework.
So, yes - Visual Basic.NET is a language - it's the language that replaced Visual Basic 6. It compiles to the Intermediate Language (IL) that the Common Language Runtime (CLR) executes. There are other languages that compile to IL, too like C#, F#.
The .NET Framework is really a bunch of libraries and tools that are packaged together.
The .NET Standard is a standard that allows you to build a library against a known set of supported libraries and IL / CLR features.
So, yes, depending on which specific part you're referring to - it's all of those.
The "Xbox Series X" is such a nonsensical name that only a marketing department could come with it. And this entire line of names exists solely because someone thought that nobody would buy a "Xbox 2" instead of a "PlayStation 3".
Because X's mean moar marketing power... Like the Extreme X870E X motherboard... There's multiple X's and Extremes and the X's mean extreme... so it's moar extreme!!!
Among the other small nits in your otherwise concise post... the Windows-only versions of .NET (1-4) were known as .NET Framework. So, Framework is the only Windows-only variant, followed by Core, which had a limited feature set but was cross-platform, and then .NET 5 (no suffix), a full-featured version that is cross-platform.
I'd argue that the dominance of Linux on cloud and Azure growing business is what's causing Microsoft to have an ongoing interest in linux support.
A factoid that's shared sometimes (no idea if true) is that Microsoft now employs more Linux kernel engineers than Windows kernel engineers due to Azure.
That came after. Linux wasn't even on 2.6 with its famous stability yet when this kicked off. What you see now is a result. They softened on open source as they realized it actually has some benefits for a company like Microsoft.
The Microsoft of the Halloween Documents[0] is a different Microsoft from the one we see today that understands open source as something good rather than as a threat, and it started with Microsoft being forced to play nice.
After having gouged Red Hat and SUSE for years with their bogus Linux patent racket and bankrolling the infamous SCO Unix lawsuit. Make no mistake: M$ coming over all We Love Linux was like Donald Trump turning up at the DNC.
I do remain suspicious that the node on the Microsoft org chart that usually strangles anything good the company does is waiting to strike. It used to be the Windows node, but now it seems like the ad node comes in for the kill most of the time. The company is slowly morphing into Google as Google morphs into Amazon, while Amazon is morphing into UPS.
Off-topic, but to join in the general good vibes this announcement emanates: I have to say that my experience using the Azure cloud has been stellar. Their Copilot integration works well, IME. Azure shell is simple and good. The dashboard UI is always good.
Bona fides: I have used GCP for 3 years, AWS for 3 years, and Azure for ~1 year, as well as the more "bare-metal" types of cloud providers like Linode/Akamai and Vultr, all of the latter being great for self-managing your infra.
I also really find the ability to spin up Windows Server and Windows 10/11 etc super useful for builds, testing, Hyper-V.
I really like Azure for huge projects with many moving parts.
More like it was shoring up support for developers who use and/or target Mac and Linux. Many devs are using Macs and targeting Linux for deployments. MS wants Azure to be a first-class option for developers, and that is the focus for making money going forward. It makes sense for their developer tools to offer that.
Azure didn't exist. OS X had just come out and almost no one took Macs seriously as a development target yet. Windows was the only user-facing thing anyone developed for aside from little Java games on flip phones. The Web 2.0 takeover was still years off and Internet Explorer ran the show.
Is "historical context" not as clear as I thought? You're the second person to challenge this by pointing out the current situation when I'm talking about how we got here.
Then you're not talking about what I was talking about in the post you replied to with a framing that suggested you were disagreeing. Did you click the wrong reply link?
Mono implemented the GUI stuff like Windows Forms; does the latest cross-platform .NET support that? Can you run a .NET GUI Windows program on Linux without Mono, using the latest .NET? I know it was not possible in the past.
The whole point of .NET Core was to remove all the (largely desktop-oriented) platform-specific dependencies that tied it to Windows, so you could run server-oriented .NET programs on Linux. So no, AFAIK you can't simply run GUI apps built with .NET on Linux desktops. That's the reason Mono wasn't simply killed: it covers that niche (which wouldn't even exist were it not for Mono/Xamarin's efforts back then, but I digress...). Nowadays there are a few other attempts at providing that UI layer.
.NET Core still has Windows Forms though? At least I (for kicks) migrated one of my old .NET 4.something projects to .NET Core and it still works and shows the classic Windows Forms GUI.
.NET Core on Windows has support for loading assemblies that reference COM interfaces and the Win32 API, along with other things that aren’t supported elsewhere, like C++/CLI.
That’s why loading System.Windows.Forms still works: it’s not part of .NET 5+, but it can still load the assemblies on Windows (they still use GDI, etc. under the hood).
Sure, nobody wants to write new WinForms applications today. My point is about running existing applications on Linux: there are still issues with running .NET GUI stuff under Wine, and Mono was not a perfect implementation. I read in other comments that the newer cross-platform .NET is not a replacement for Mono for running these old applications (nobody will rewrite them to use the current GUI stuff from MS, since they are old apps).
No, Microsoft's .NET only supports WinForms on Windows. They do have an official cross platform GUI toolkit in MAUI, but it strangely does not support Linux.
Last I knew, it is also considered pretty lackluster. Every time I read up on it, it feels like people just don't care for it, even beyond the lack of Linux support.
If I was building a cross platform native app with .NET I'd probably use Avalonia right now.
Yeah, they took an age delivering it; then it came out and most of the early reports were "It's still not ready", and then I think Microsoft just gave up.
I think not supporting Linux was a tactical error, though. Some people will put up with a lot for Linux GUI support, and some of those people are the types who can resolve problems with your half-baked GUI framework.
Does it really need help? I struggle to imagine a scenario where one would consider MAUI not supporting Linux to be an issue (if we discard superficial bad faith concern) when Avalonia, Uno or, if you care about Linux as the main target, Gir.Core exist.
And, at the end of the day, you have a tool with an extremely rich FFI capability so whatever is available from C you can use as well.
Sorry, I clearly was not clear enough: I mean specifically an issue with MAUI itself. I agree dotnet/C# has some solid cross-platform UI options at this point. MAUI, however, seems to be at best a mess and at worst dead in the water.
> "The future is already here – it's just not evenly distributed."
Where I live and work (IT and consulting in central south-east Norway), it has been the year of the Linux Desktop on and off since 2009.
That was the first time I worked full time at a place that deployed Linux for everyone and everything that didn't have a verified reason for needing Windows.
I think we had one 3rd party trading software running on a Windows machine and maybe the CEO and someone in accounting got Windows.
Everyone else was upgraded to Linux and it worked beautifully. It was my job to support the sales department with desktop-related issues, and it was absolutely no problem to do that while also being a productive developer.
Since then I have not worked at a place that required Linux, but most of the places I have worked since have had Linux as an option as long as you supported it yourself, and some have also been very active writing how-tos and working with me to troubleshoot issues related to Linux, since many of them were also Linux users.
At the moment I use Mac, but at my current job I'm also allowed to use Linux.
Open Source Support reasons. If Linux developers want better MAUI support there is a "Community Repo" to contribute to and help move things further along. The impression is that if things were further along it might get formally "adopted" (by the Dotnet Foundation) for "official" out-of-the-box "support", but it isn't far enough along and doesn't seem to have enough contributors with enough momentum. It currently seems that the Venn Diagram of "Developers that say they want MAUI support for Linux" and "Developers that would contribute to Linux support for MAUI" has too small of an intersection.
Sure, Microsoft could pay more employees to work on it faster, but Linux loves and prefers open source from Linux devs "untainted by Microsoft", right?
Contribute to the MAUI backend for GTK and/or Qt; nothing is stopping you.
Alternatively, just because you're on .NET doesn't mean you need to use Microsoft-sanctioned UI toolkits, just as C++ has no "official" UI toolkit. You're free to pick up some GTK or Qt bindings if you want a native feel and your application is already architected correctly. Alternatively, throw ImGui at it if you just need dev tooling, or try other cross-platform toolkits in the ecosystem like Avalonia or Uno.
It is not perfect; there are issues depending on whether you need 32- or 64-bit, or .NET 4 or greater. Games work, but I have issues running tools made with .NET, like mod managers and game-save cleaners. In my case, Sims 3 works fine but not the Sims 3 launcher (this tool has more features than just launching the game, like importing custom content/mods).
Sadly, some Java tools stopped working if you run the latest Java runtime, because for some reason some crap was removed from Java and nobody made an easy way to add it back with some package install.
With commercial applications that want to just take their existing code and have it run on Linux with only a couple of lines changed, Avalonia XPF will do that.
You are expected to use Avalonia or Uno for multi-platform targeting or Gir.Core (GTK4) or one of the many other binding libraries for Linux-specific GUI.
Also very easy to throw something together on top of SDL2 with Silk.NET.
Practically speaking, it is in a much better place than many languages considered by parts of the Linux community to be more """linux-oriented""".
My personal use case is running old GUI apps; I am not planning on writing GUI apps with .NET. MS had the opportunity to open source .NET/Silverlight and make money from tools, but they bet on Windows, and today most apps are Node and JavaScript, a much inferior platform. MS opened things up too late.
No, they pretty much gave up on WinForms when .NET Core morphed into "the" .NET that is cross-platform. There are some nice cross-platform GUI libs now, though.
If true, this would be huge. I got burned on the whole Silverlight, Universal Windows Platform, WPF, etc. parade. All these new and improved solutions had all sorts of issues: no designer, no or weaker accessibility stories, bloated, slow, etc.
C# + WinForms would be appealing. Some of the performance with larger datasets (tables etc.) in the new solutions was just surprising. I really feel like Microsoft got so distracted chasing phones, tablets, touch, etc. that they forgot the basic line-of-business application development they could and should have owned.
.NET Core doesn't supply WinForms, but WPF is the far more common paradigm for Windows apps now. WPF is supported by projects like Avalonia on Linux. There are also a few other major alternative UI toolkits, more commonly used by cross-platform (vs. Windows-exclusive) developers.
This is the "virtual monorepo", if you want to clone one repo and build the entire SDK product then this is the correct thing to checkout - but development work right now still happens in the separate project repos, of which there are ~20
No, it's way better than Flutter. Avalonia really works on desktop :). Also, the model is WPF, so anyone who knows a little bit of the legacy .NET Framework will be able to write Avalonia apps in no time.
I don't know any .net, and have never heard of this until now. Only stories with comments on HN are from eight years ago. Although I liked the screenshots on the linked site, it doesn't seem to have much buzz around it.
And unfortunately, the only stench I can't stand more than Google's is Microsoft's.
I do not follow buzz. I am an engineer by education and attitude, and I always try to investigate my options based on my needs and requirements; I use buzz only to guide me through my investigations. In my case, I had a desktop application that had to run on Windows and macOS and needed support for rich text format and rendering of custom graphs.
Following the buzz, I started a prototype with Flutter and stopped after a few days, as I found that most of the open source controls I was using had bugs on Windows desktop. Then I moved to MAUI and discovered that in order to have decent rich text support my only option was Blazor Hybrid; needless to say, I found bugs that prevented my prototype from working correctly. Then I moved to Uno and found that it doesn't have full rich text format support. I was able to find some .NET open source libraries for doing text layout on Skia, and with those I put together a partial solution that was, however, pretty complicated. Out of curiosity I investigated Avalonia and found that everything I needed was fully supported. Being fluent in WPF, I built the prototype in 3 days and never looked back.
Your experience might vary depending on your fluency with WPF, but I found that, considering Windows desktop as a target platform, Flutter and MAUI are absolutely the worst options.
In my opinion Uno is better than Avalonia when considering web application support, but Avalonia has more coverage of the WPF API than Uno has of WinUI. And for sure marketing is the worst part of Avalonia, while it is the BEST part of MAUI and Flutter.
BUT
That's now officially unsupported, as all of Xamarin.Forms is no longer supported, and the MAUI replacement doesn't cover Linux, nor does that look likely (MAUI is mired deep in problems due to over-ambition, a failure to resource it properly, and what seems to be a significant push within MS to use MAUI Hybrid, a.k.a. web UIs within native apps).
Yes. There are multiple UI projects that build on the WinUI 3 components in the Win App SDK.
There's the first-party MAUI, which is an updated version of Xamarin.Forms. The two best-known third-party implementations are AvaloniaUI and Uno. I prefer Uno; it has more cross-platform targets.
Which lets you run Blazor (a web framework) like a desktop UI across all major desktop platforms. Microsoft has MAUI/Blazor as a thing, but it only targets Mac and Windows ATM, so Photino bridges the gap for Linux.
Photino lets you use things other than just .NET, but it has pretty decent .NET support.
(i hardly know what i'm talking about so somebody else may have a better idea, but i'm here now so)
mingw is GNU's header/library environment (tools too, maybe?) for creating Windows-compatible applications. So I'd look into searching mingw .net and/or mingw mono.
also, ask your favorite AI, they're good at this type of question so long as it's not up to the minute news
> I looked into .NET somewhat recently and came away with the apparently mistaken impression that Mono was how .NET did cross-platform. I guess I must have been reading old docs,
.NET Core 1.0 (2016) was the first cross platform prototype. It got good in a release in 2018 or 2019, I even forgot which now. And took over steadily after that.
We don't even think about it any more. "which OS is the prod env on" isn't a factor that causes any support worries at all.
I would say I’m not ‘new’; I even developed on .NET 4.5 for a number of years. I’m just as stumped by the naming mess that Microsoft made across the board in that space.
Edit: I say 4.5 because I mean the original thick .NET, which is not dotnet core; I think that’s the way to differentiate between the versions. Also, all the sub-libraries, like the ORM, were IIRC named the same but did different things.
They should have rebadged everything with a new name that didn’t involve a word that is fairly painful to google (‘core’), one used generically in development as well as being the name of a framework.
It's even worse, since they dropped the core now and just call it .NET.
So searching has become even more of a pain.
It's also pretty much a mess, because many things were different between the versions.
So let's say you google how to do something, and the result could be for .NET Framework, .NET Core, or modern .NET, each potentially doing it differently.
I think Microsoft is completely allergic to naming anything with a unique name or term; in fact, it's almost like they pick names that will be hardest to find with a google search.
If you just want to get into .NET (C# or F#) on non-Windows platforms, the latest .NET release (at the time of writing, 8.0) is what you want. The development experience is good these days.
Aside from following the default 'start here' documentation, there are various timelines made for fun and profit that visualize the full history, for example:
This is quite overwhelming, but it can still be useful when reading an article about .NET that is either older or refers to history as you can quickly see where in time it is located.
> Is there somewhere where someone new to the ecosystem can get a simple introduction to all of these different terms and which ones are still relevant today?
Not really. It's legacy cruft all the way down.
But the good news is that if you stay on the beaten path, using the latest SDK and targeting the latest runtime, everything Just Works™.
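For instance, the entire project file for a console app these days is just (a minimal sketch):

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>net8.0</TargetFramework>
      </PropertyGroup>
    </Project>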
i want to love dotnet-core, especially since godot switched from mono in godot 3 to dotnet-core in godot 4, but so far i haven't been able to
currently debian has a mono package but no dotnet-core package. i'm not sure why this is; usually when debian lacks a popular nominally open-source package like this, it's either because it fails to build from source, or because it has some kind of tricky licensing pitfall that most people haven't noticed, but diligent debian developers have
does anyone know why this problem exists for dotnet-core?
also, does dotnet-core have a reasonable aot story for things like esp32 and ch32v003?
.NET Core is available for Debian, you just have to add Microsoft's APT source [1].
Fedora [2], Ubuntu [3], and FreeBSD [4] build .NET from source themselves. A lot of work has been done to make it possible to build .NET from source [5] without closed source components, so it might just be a matter of someone being motivated to create the package for Debian.
When using Microsoft's repositories, you need to explicitly opt out of telemetry collection.
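For reference, the opt-out is an environment variable, set before running the CLI:

    export DOTNET_CLI_TELEMETRY_OPTOUT=1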
I think telemetry collection alone should be a good reason for Debian to consider repackaging it. I don't want telemetry to be collected on my GNU/Linux machine; thanks, Microsoft, but you already have so much telemetry from my Windows machine, please leave my other machines alone.
In any case, Debian would use https://github.com/dotnet/source-build and dotnet/dotnet, and could easily include the argument or a patch for this. It's unlikely to be an issue. My bet is that it's not in Debian because no one has taken the initiative yet, or someone did but faced a backlash from people in Debian similar to the vocal minority here that posts FUD because of their little personal crusade.
it doesn't seem to have come up on debian-legal in the last year or so https://lists.debian.org/debian-legal/ but debian-legal is also kind of a shadow of its former self
you could easily imagine fedora distributing their own build of software whose licensing fails to comply with the debian free software guidelines; bundling proprietary software used to be common in linux distributions in fact
Does Debian require packages to work on all of its architectures? If so, that could be the issue. .NET Core only supports x86, x64, and Arm64 (I think Arm32 has been discontinued and RISC-V is experimental at this point).
It's possible that they object to .NET Core having certain license restrictions on the Windows port (https://github.com/dotnet/core/blob/main/license-information...). .NET Core is mostly MIT or Apache licensed, but the Windows SDK has some additional terms. Skimming the third party licenses, that doesn't seem like an issue (mostly MIT/BSD/Apache or similar).
I think the licensing situation is an interesting question: if you have software that's 100% open source when compiled for your OS, but requires non-free stuff to run on Windows, is it ok to include in Debian? It looks like none of the non-free stuff (like WPF) gets distributed with the non-Windows SDK builds. Binaries created from your code only depend on MIT-licensed stuff on macOS and Linux, but might depend on something closed-source when targeting Windows - though it looks like almost all of that stuff is either WPF (so you wouldn't be able to develop on Linux/Mac anyway since those libraries wouldn't be in the SDK on those platforms) or were removed as a runtime dependency in .NET 7. It looks like `Microsoft.DiaSymReader.Native` might be the only thing left. Maybe that's what is holding it back?
> also, does dotnet-core have a reasonable aot story for things like esp32 and ch32v003?
"Reasonable" can be a lot of things to a lot of different people. People have been working on RISC-V support. Samsung seems interested in it. But I probably wouldn't recommend it at the moment - and Mono doesn't really have RISC-V support either.
to be clear, my question about debian is not about whether i can install dotnet-core in debian; it's about why it isn't in debian's repositories rather than microsoft's. microsoft, to understate the case somewhat, doesn't provide the stringent protections for users that debian does
> Specifying a specific list of architectures indicates that the source will build an architecture-dependent package only on architectures included in the list. Specifying a list of architecture wildcards indicates that the source will build an architecture-dependent package on only those architectures that match any of the specified architecture wildcards. Specifying a list of architectures or architecture wildcards other than any is for the minority of cases where a program is not portable or is not useful on some architectures. Where possible, the program should be made portable instead.
i don't think the license you link to would be a problem in itself, because it only applies to certain files which are not useful for running dotnet-core on debian anyway. debian has lots of packages from which non-free-software files have been removed. i don't know anything about diasymreader?
with respect to esp32 and ch32v003, what i meant to point to was not the risc-v architecture (some esp32s are tensilica!) but the limited memory space; jit compilation is not a good fit for 2 kibibytes of ram or even 520 kilobytes of ram
if you want your package to be in debian, you are going to have to find a debian developer who is willing to take responsibility for maintaining it. microsoft is already providing .deb packages on their website, at least binaries
getting one of your people to become a debian developer is similar in difficulty to getting one of your people to become a senator or a citizen of switzerland
It sure wouldn't hurt if they hired a Debian Developer to do it right, or maybe work through the process of turning an employee into a Debian Developer.
Debian developers can do it right because they're not affiliated with the vendor, so they can disable user-hostile features and settings that the vendor enables by default.
i don't think debian developers are actually prohibited from becoming employees of the vendor, but i think that if they get caught pushing malware, their dd status is likely to be revoked, and the process that allowed them to become dds is likely to be reviewed. any dd can generally push a change to any debian package to the archive; it's a major level of trust. that's why it's generally not realistic to try to get one of your employees to become a dd
There's a large segment of Debian Developers that are also the upstream maintainers/owners of various projects. I can't think of any paid examples, but volunteer ones are plentiful.
Perhaps the Debian project would force a .NET package to come with telemetry disabled by default, but for as long as said employee can abide by the rules of Debian, I don't really see any reason it can't be done.
Even with AOT compilation, as someone who loves C# and also does embedded development in C I would personally say a garbage collected language like C# has no place there.
not everything running on a 20-mips 32-bit microcontroller with 2 kibibytes of sram needs to be hard real time and failure-free, and of course the esp32 has hundreds of kibibytes
and, correct me if i'm wrong here, but doesn't c# allow you to statically allocate structs just as much as c does? i'd think you'd be able to avoid garbage collection about as much as you want, but i've never written much beyond 'hello, world' in c#
C# has the concept of value types (which structs are), which can live on the stack. Generics have seen more and more value-type counterparts, like ValueTask for stack-allocated async objects. But if you add a class as a member of the struct, that member is going straight to the heap, with all the GC stuff that entails.
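A tiny sketch of that split (valid as a whole Program.cs; the comments are my reading of it):

    using System;

    Point local = new Point { X = 1, Y = 2 }; // value type: no GC allocation of its own
    Point[] buffer = new Point[1000];         // one array allocation; the Points are stored inline
    Node node = new Node();                   // heap object; its Point field is embedded in it
    Console.WriteLine(local.X + buffer.Length + node.P.Y);

    struct Point { public int X, Y; }
    class Node { public Point P; }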
what about global or static variables of value types? i mean in theory you could stack-allocate whatever you want in your main() method and pass pointers to everything, but that sounds unusably clumsy. but with global variables and/or class variables there would be no problem except for things that inherently require heap allocation by the nature of the problem
Static fields may be placed on the Frozen Object Heap. The values of static readonly fields may not exist at all if ILC's static constructor interpreter can pre-initialize them at compile time and bake the value into the binary or the codegen. Tiered Compilation does a similar optimization, but for all cases; that's with JIT, though, which is not usable in such an environment.
Otherwise, statics are placed in a static-values array "rooted" by the respective assembly. I believe each value will be contained in a respective box if it's not an object. This will usually be located in the Gen2 GC heap. My memory is a bit hazy on this specific part.
There is no concept of globals in .NET the way you describe it - you simply access static properties and fields.
In practice, you will not be running .NET on microcontrollers with existing mainline runtime flavours - very different tradeoffs, much like no-std in Rust. As mentioned, there is NanoFramework. Another one is Meadow: https://www.wildernesslabs.co which my friend is using for an automated lab for his PhD thesis.
Last mention goes to https://github.com/bflattened/bflat which supports a few interesting targets like UEFI. From the same author there's an example of completely runtime-less C# as well: https://github.com/MichalStrehovsky/zerosharp. It remains a usable language because C# has a large subset of C and features for manual memory management so writing code that completely bypasses allocations is very doable, unlike with other GC-based alternatives.
There are ways (byref, I think?) to pass references to stack variables around. And statics depend: static const, even with stuff like strings, would just compile directly into the binary, while regular statics still have to end up on the heap.
I believe you would use .NET nanoFramework for something like that. I used it (or some previous version of it) once many years ago and was very impressed with the productivity and ease of use it offered. Ultimately the lack of a community surrounding it drove me to other technologies. Might have changed since then though, who knows!
It never made any sense, and it never had any future. We told Miguel he would be playing a chase game with Microsoft, always behind, never sure whether MS would use the patent card if Mono actually became dangerous (and they can get quite nasty when pissed off; see the accusations against ReactOS).
But he was in love with COM/DCOM, registry, and many other things that MS shipped. Some of these things made Gnome much slower than it could be.
I always assumed Microsoft did not condone Wine or other re-implementations of their APIs (like ReactOS), but that they were protected by DMCA reverse engineering provisions and anyway too insignificant to send the legal team after.
Wikipedia says,
> Until 2020, Microsoft had not made any public statements about Wine. ... On 16 February 2005, Ivan Leo Puoti discovered that Microsoft had started checking the Windows Registry for the Wine configuration key and would block the Windows Update for any component. As Puoti noted: "It's also the first time Microsoft acknowledges the existence of Wine."
> In January 2020, Microsoft cited Wine as a positive consequence of being able to reimplement APIs, in its amicus curiae brief for Google LLC v. Oracle America, Inc.
I think Microsoft has finally realized that its animus toward projects like Wine and pre-acquisition Mono was ultimately unproductive, and a net negative for Microsoft itself.
I still don't trust MS's motives in general, but I think they at least recognize that Wine/Proton helps make the Win32 and DirectX APIs a sort of de-facto cross-platform standard when it comes to things like desktop gaming, and that this is a good thing for them.
On the server side, MS knows that Linux is by far the most popular server OS, and official support for running .NET backend apps on Linux from MS themselves is a win for them as well.
>that Wine/Proton helps make the Win32 and DirectX APIs a sort of de-facto cross-platform standard when it comes to things like desktop gaming, and that this is a good thing for them.
I'm not sure it benefits Microsoft in the long term, because the "backwards compatibility" features of Wine need to be implemented in Windows already as a part of the system. So in the long run wine/proton/mono will implement Windows features on Linux in an optional/replaceable/modular way in user space, while keeping backwards compatibility for older Windows software; Windows, meanwhile, is forced to implement (and distribute) these features with the OS and has to sacrifice backwards compatibility if they want to simplify their OS.
I would say that the adoption of wine/proton helps the linux ecosystem a lot more, because there wasn't a standard executable format for linux beforehand (static? tarball of program and dynamic libraries? .deb file? AppImage? Flatpak? higher-level language like Java?). How do you reliably link to libraries like mesa or even glibc? Now there is a solution: just distribute a windows program and test it to confirm it works in wine/proton. Perhaps it is better for DirectX adoption, but it seems like Vulkan/OpenGL/WebGPU are still superior in terms of cross-compatibility, regardless of whether you use wine or not.
> there wasn't a standard executable format for linux beforehand (static? tarball of program and dynamic libraries? .deb file? AppImage? Flatpak? Higher-level language like java?).
By this logic there wasn't a standard executable format for Windows, either (static? zip archive of program and dynamic libraries? .msi file? installer program? UWP? higher-level language like C#?).
Windows NT (2000, XP, etc.) used to include an emulator allowing it to run DOS and win16 apps. I don't see why running older / obsolete win32 APIs through an emulation layer wouldn't be a good approach. Maybe even by adopting and running Wine.
> I'm not sure if it benefits microsoft in the long term, because the "backwards compatibility" features of Wine need to be implemented in Windows already as a part of the system.
Sometimes running old software atop Wine on Windows is the easiest - or even only - option to have said old software work on new Windows.
I disagree. MS was completely successful in their goals. They kept a ton of developers busy learning useless Xamarin, thus keeping them from developing products that could actually compete with Microsoft products.
Next they killed off an open-source competitor (Mono) of their product, stole the useful bits to put in .Net, and now they dump the leftover project (that's not competing with them anymore) back into the open-source world.
I don't think Microsoft viewed Mono as a competitor. Even before Microsoft acquired Xamarin for hundreds of millions of dollars, they already had a history of collaboration on .NET, including sharing test cases in order to help with compatibility, and co-developing integrations into Microsoft products such as Azure and Office 365.
The "keeping [developers] from developing products that can actually compete" assertion is frankly absurd. .NET's real competitor is and has always been Java. Java, possibly the world's most-used platform that isn't JavaScript, has always had heaps more people working on it than .NET's entire ecosystem, let alone just the Mono project.
> kept a ton of developers busy learning useless Xamarin...
What kind of moustache-twirly stupidity is this? Yeah, Microsoft maintained a shitty cross-platform SDK so that developers would make worse software, because that's somehow helping any of their main product verticals. By the way, those are (broadly speaking) cloud, client software, and games.
Do you have any evidence to suggest that there was a Xamarin-based application that would have directly competed with Office? How about Fallout? Now, do you have any evidence that Microsoft tried to make Xamarin worse at doing the thing that application was trying to do?
> Next they killed of an open source competitor (Mono) of their product
Sure. Mono is only useful for legacy purposes. Microsoft's own design was always the reference implementation of .NET, regardless of whether it was open-source. Mono existed for the sole purpose of being an open, cross-platform reimplementation. Now that the reference design is itself open-source and cross-platform, Mono is mostly redundant.
Microsoft as a company is extremely myopic. Budgets are scrutinized down to the penny every few months at very senior levels. This drives a culture of immediacy. Wine was a threat until Microsoft realized everyone in tech had moved to service-based business models (aka "cloud"). Only afterwards did they "realize" that Linux, as a threat to their long-term viability, no longer mattered.
They finally started to admit where they're losing and stopped trying to fight those battles.
Dumping endless piles of cash into projects nobody cares about, and pretending you're the dominant player when you control some dwindling 2% of the market, is stupid; more companies should learn that lesson.
I think it's more because individual PC instances literally don't matter anymore. Operating systems and programming languages that lock you into them are irrelevant from a revenue standpoint.
I think it's just such a clear business-razor because of the cloud: can I take my app and spin up a bajillion cheapo servers with no licensing costs using that stack?
If the answer for .Net was 'no', then there are meaningful domains where people would just jump ship in a second. Research, academia, teaching, and certain government areas spring to mind. Keeping Linux support, because of that server dominance, is a core concern for them.
I think it's the same for any global enterprise: profit.
In that regard, "trusting" something like MS is like evaluating their stock: what do they make money off, and what is a threat to that? Which makes it rather easy to "trust" them: if they can make money off SomeOpenSourceProject they'll help it along; if it neither helps nor threatens, they'll ignore it. If it's a threat, they'll put (some) money towards fighting it.
For me the difficult part, and why I still don't fully trust MS, even with Github or VScode, lies in their internal competition: MS has projects that directly compete with each other. Business-wise it makes no sense to me (and is the primary reason I'll stay away from investing in MSFT). But there's also their internal competition between profit now and delayed profit. MS has often done things (or not done things) that increase the bottom line this quarter but harm them over years. In that regard too, MS makes no sense to me business-wise.
I guess having a cash-cow-"monopoly" for decades kinda absolves them of the responsibility to run the entire company in a way that makes sense business-wise.
> Wine/Proton helps make the Win32 and DirectX APIs a sort of de-facto cross-platform standard
There are perfectly fine _actual_ cross-platform standards like Vulkan and OpenGL. If your goal is cross-platform, making a Windows app that you hope will be converted well enough is a strange way to approach it.
And yet, win32 is the only one that is confirmed to work.
Example: the game Neo Scavenger is available for Linux, with native binaries. They don't work on any modern Linux because (I believe) they were compiled for a 32-bit version of Linux.
Do you know how you can play the game on Linux?
Yes: using the Windows version with Lutris, which is 32-bit too.
That doesn't help; most Linux distributions do not maintain ABI (library-program linkage) compatibility between major releases, and in the case of rolling distros half the system has to be recompiled when things such as libcurl, openssl, libc etc. change. If these change, it's possible that anything compiled against the system version of them will no longer work without being recompiled.
Windows goes above and beyond for compatibility with existing compiled software, and Wine inherits that. This is partly why Windows versions of games under Wine often have a higher chance of running than the native versions (ARK is a great example).
Projects like Flatpak attempt to solve this by the use of runtimes.
Game studios get a lot more mileage out of testing their games on Linux+Wine than bothering to build a Linux native version that will stop working within two years.
Steam has incentivized this with their Steam Deck Verified program. The Steam Deck being so popular means a lot of studios want their games to be verified on Deck, and if they're verified on Deck then they work in the Linux desktop version of Steam out of the box.
Maybe they feel the same about Wine as I do about WSL.
You can argue that the my-thing-wrapped-inside-your-thing increases exposure to my-thing, and that's a net good outweighing any other factors; but you can just as validly argue it diverts from actual adoption of my-thing and facilitates never moving from your-thing. Since no one has any actual empirical study, it's all just feelings and beliefs.
Maybe one logical argument that might have some meat: maybe WSL/Wine just means that the exposure and crutch aspects cancel each other out (for every user who is exposed to foreign-thing and maybe decides to adopt it, there is another user who, thanks to the swallowed version, does not ever have to move). If that's true, then any imbalance in effects comes down to the innate virtues of the two things. Both groups of people are equally exposed to both platforms and have equally good-enough use of both platforms, and neither has to actually change to get the benefits of the other, so the user will choose whichever actually seems to serve their needs best as their native platform.
I wonder if it's possible to make a desktop backed by WSL that would be a better experience than the current ad/spying-riddled Windows native desktop? Then MS would be forced to try to enshittify WSL so that it doesn't provide an escape and superior experience from the current Windows experience. Is WSL a good thing THEN?
At least for now, WSL has absolute crap access to hardware, not even just like gpus for gaming but even simple things like access to a usb-serial adapter. So, it's probably not possible to make a functional WSL desktop yet. Maybe such things will intentionally never be fixed in WSL just for this reason, so you can only ever use it for pure web app development no different from a cloud instance.
Microsoft in 2024 feels like a different beast. All the MS devs I know seem fully on board with totally cross-platform support. Half of them are coding on MacBooks and I would hazard a guess that a good proportion of .NET web sites being built are being deployed onto Linux boxen.
.Net SRE here, all our .Net REST APIs are deployed on Kubernetes. Devs are still mostly on Windows because Visual Studio.
I've worked with the Azure team; all greenfield work they do for Azure goes on Linux as well. Windows Server is pretty much dead to Microsoft, though it will continue to be supported and released because $$$.
It's been very hard to explain to my organisation that Windows Server is dead. We haven't deployed a system to it in two years, everything is some kind of dockerised linux thing but the "we're a Microsoft shop" idea prevails.
In the end we decided to just let the management think what they want. I've been more of a Unix person for 40 years but I kind of miss the "good" versions of Microsoft Server - in the 1990s / early 2000s it was a real contender. If they hadn't doubled down on weird things like Powershell it might still be a contender.
It hasn't. Powershell is probably one of the great things to come out of Windows Server. I still use it with *nix machines, and it powers some sidecars at work. If you are stuck with Windows Server, it's the only thing that gives you a fighting chance of being able to do anything NotClickOps (tm).
Sure, it's got some unique characteristics that more traditional shell users dislike, but that's just a matter of taste.
It irks me that the default for servers still seems to be 5.1, which is anemic and has really weird quirks and syntax differences from later versions. As if the silent default JSON depth thing was not enough, ConvertFrom-Json hash tables have case-insensitive keys. Really?
Someone wrote some automation code that handles JSON payloads using powershell. When we tried to migrate to Azure Functions, which uses 7.x by default, things broke because users never cared to check the case sensitivity of key names.
It’s also slow even for interpreted language standards.
I’ll seriously never use powershell for anything serious ever again even though I admit syntax and design feels kinda nice.
Because backwards compatibility. I've run into stuff that doesn't work in 7.x without a rewrite.
It's best to think about Powershell 5.1 vs Powershell Core 7.x like Py2 -> Py3. Most code works as-is, some doesn't, and you should use the latest when you can.
>Someone wrote some automation code that handles JSON payloads using powershell. When we tried to migrate to Azure Functions, which uses 7.x by default, things broke because users never cared to check the case sensitivity of key names.
Azure Functions are a nightmare of their own. Not sure how much of that is Powershell's fault vs Azure Functions'.
>It’s also slow even for interpreted language standards.
Actually, it's blown Python out of the water at work. Its startup time can be painful, as it's interpreting everything, but once it gets going, it really moves. We use it to churn through a 4 GB CSV at work, replacing a Python script; it's much, much faster.
>I’ll seriously never use powershell for anything serious ever again even though I admit syntax and design feels kinda nice.
Your loss. Despite the few problems I run into, I really like it and wish more *nix people gave it a try. It's much better than the bash nightmares I've seen.
Not really. More command-line admin programs and traditional shells like yori or bash would have been fine. Good enough for every other Network OS to date. Netware had great TUIs.
NT also had OS/2 and Posix subsystems ((checks calendar)) about thirty years ago, now that I think of it.
Not sure what you've used it for but Powershell is about the best thing to come out of Microsoft in the last decade. Very useful and extendable - useful anywhere bash is. It's also the scripting engine of choice for Azure and Entra ID too which is far from Windows Server Land.
I'm sure the vast majority of individuals, especially those doing the technical work, are normal people wanting good things and pushing to do good work. It's the organisation as a whole that turns into a different beast making business decisions.
If Windows Update replaced components of Wine, that would (a) break people's Wine installs, and (b) give those users a way to legally get Microsoft's versions of those components for use outside of Windows.
Even if Wine had become a drop-in replacement for the modern Windows distribution, I doubt it would hurt much. Businesses would likely still buy Windows because of support and security patches. Consumers would get their Windows preinstalled. Some manufacturers would probably do Wine installations - but then it would depend on support. You don't want to sell machines that aren't getting updates to people who are not tech savvy; that's a potential for returns and massive cost and headache.
AWS supports one of the tools for porting out of AWS. Supporting something that looks like an escape valve (whether it works or not) keeps the antitrust people off your neck.
I can’t remember, sorry. I met them at a tech meetup a couple years before I was using AWS and it didn’t stick in my brain. Except the funding source bit.
I mean, if Wine is a problem, they did basically the same thing with WSL, especially version 1 (version 2 is just a VM, but the concept of running unmodified Linux binaries on Windows as if they were native applications is the same).
I think they don't care about going against the open source community, given that Microsoft uses a lot of open source software in their products (and probably violates the terms of the GPL license of some of that software).
Fun fact: Second Life, the virtual world, has an in-world scripting language called LSL, and it gets compiled to bytecode that gets run on a virtual machine. Initially, it got compiled to bytecode that ran on an in-house virtual machine, but in 2008, they switched over to compiling LSL to Mono bytecode to run on the Mono virtual machine. I wonder if that's still how it works. (I haven't been involved with SL for a long time.)
We're hard at work adding Luau (https://luau.org) as a supported language for both in-world scripting as well as client/viewer-side scripting. As a handy byproduct of that, LSL will also gain the ability to be compiled to Luau bytecode, allowing us to eventually (someday, at least) shed any need for our custom-patched version of Mono 2.6. More juicy details here: https://wiki.secondlife.com/wiki/Lua_FAQ
Always nice to see that SL is still going. I'll probably never remember my login to my old 2006 era account but the years of weird virtual world memories remain.
Cool! LSL is such an interesting language. Having an explicit state with entry and exit functions is quite unique I think, and seems like it could be useful outside of SL. Given that scripts are isolated and communicate via messaging over channels (IIRC), was there ever any interest in executing it on the BEAM virtual machine?
Microsoft's own FOSS multiplatform implementation of the .NET runtime is now much more performant and feature complete than Mono.
However Mono is easier to embed into other applications and easier to port to new platforms. That is for example why it's used for the .NET/Blazor WebAssembly stuff. Microsoft still maintains their own fork of Mono for this specific use case.
Mono also implements some of the legacy Windows Desktop GUI frameworks like WinForms and WPF that Microsoft never bothered to port to their new .NET runtime. This is probably why the Wine developers might be interested in Mono.
Mono also supports WinForms. I don't think it's supported in dotnet (but there are libraries for Gtk, although you could also use Vala with a bit of extra effort).
A shoutout goes to a project that aims to simplify CoreCLR embedding UX to prevent the issues stemming from embedding legacy Mono: https://github.com/StudioCherno/Coral
I was pleased to see WinForms got some updates in .NET9. I really thought they'd left it. I still use it every day when I need to spin up a new tool to do some little task that needs a GUI.
Free software is typically described as "free as in freedom" or "free as in free beer". (This is probably a limitation of English, though; my language has 2 different words for permissions and costlessness.) GP above proposes the "free as in puppy" variant, which means that it is a burden of maintenance. I can't recall any real examples of this.
If you want to be pedantic, English does have distinct words for the two connotations of free--"liberal" and "gratuitous". Although it should also be immediately obvious why those words aren't preferred either: "liberal" also has several other connotations (to the point that a "Liberal Party" could be almost anywhere on the political spectrum), while "gratuitous" tends to lean more towards "unnecessary" than "free of charge" in common parlance.
> English does have distinct words for the two connotations of free--"liberal" and "gratuitous".
Sorry but no it doesn’t. These words have the other meanings you mentioned, but they don’t include either of the meanings of “free”.
If you said you were giving away “gratuitous software”, native English speakers wouldn’t know what you were talking about. The only way to understand it would be to realize that those words are etymologically cognate to words in European languages that do have those meanings.
The word "liberal" definitely has the same meaning as "libre" - ever hear of the term "liberal democracy"? That's exactly the same kind of free they're talking about.
> 2. given, done, bestowed, or obtained without charge or payment; free; complimentary.
It's more of a stretch there, because the primary definition of gratuitous has a connotation of unnecessary, even undesirable. If you didn't have at least some hint of disapproval of a service, you'd reach for the word "free" long before "gratuitous".
The “liberal” in “liberal democracy” doesn’t mean the same thing as the “free” in “free software”. It’s the license that is liberal, not the software itself, so at most I’d admit that “liberally licensed software” means the same thing.
Similarly you would say someone who’s gotten out of prison is now “free” (or libre in French or Spanish) but you wouldn’t say they’re “liberal”.
"Gratuitous software" would be excessive and unnecessary software. Which I think a lot of commercial (particularly "news") websites qualify for, and "modern" websites in general. NPM makes it easy to just install something, which requires all kinds of other things, which duplicate each other, etc.
Strangely enough, I think the LaTeX distribution qualifies, too. I tried to install it recently, and it wanted 1 GB of disk space! That's multiple times the size of the entire system disk from when LaTeX was created...
Sooner or later a lot of the web is going to run on WASM, at which point we'll have a virtual machine running in a user program running on an OS which incompletely virtualizes the bare machine (hence why we've ended up with WASM). Extra gratuitousity if the browser is an Intel binary being run on an M* processor via Rosetta translation... Maybe eventually we'll realize that the OS needs to provide a full virtual machine, complete with a window to draw in, filesystem isolation like Plan 9, etc. But inertia will probably make it take a while.
> Strangely enough, I think the LaTeX distribution qualifies, too. I tried to install it recently, and it wanted 1 GB of disk space! That's multiple times the size of the entire system disk from when LaTeX was created...
There is the TinyTeX distribution, which is smaller. (Despite its name, it isn't tiny, or even small or medium in size; it is still large, just smaller than the default LaTeX distribution with all the possible packages, source code and documents.)
Well, here in the project readme there is a table with the sizes: https://github.com/rstudio/tinytex-releases (a bit outdated; they haven't refreshed it). I downloaded TinyTeX-1 for Windows; it's 338 MB uncompressed. That is _huge_ in my book.
Free software releases often include the "free as in puppy" implication as a disclaimer of responsibility for the effort you may need to expend to make use of it - "if it breaks, you get to keep both pieces".
To be fair, "free as in beer" doesn't work for a lot of people who don't drink (or do drink, but don't like beer). I don't think we're going to come up with a one-size-fits-all slogan...
Wine has (or used to have, anyway; not sure if it still does) a version of Mono it used to run .NET stuff within Wine. I'd assume this has to do with that: they were relatively alone in having a continuing interest in the Mono codebase vs. the dotnet core stuff.
I think it's just hurting someone at Microsoft less if they give it a home that isn't /dev/null.
Edit: quick hat tip to Mono.Cecil which I've used a couple of times to crack .Net components to bypass licensing code. It's not that we didn't pay for them but we couldn't be bothered to deal with license deployment and maintenance.
Mono was very useful in university. It must have been 2005 when I got asked whether I wanted to use Java or C# for the programming course. Being bored with Java, I picked C#. We were a very small group of two students.
But as I just had a Powerbook, I used Mono to run it on OS X. At the end of the course someone from Microsoft came to the university to answer any of our questions about upcoming features in .NET and C#. And as we were a small group, I sat directly in front of him with the shiny Apple logo pointing at him.
It was a very interesting language at that time; .NET not so much. I also still remember that we were tasked to implement 3 sorting algorithms of our choice. One of mine was bogosort, and with Mono on PPC it could sort up to 7 elements before becoming really slow.
8! is 40320. Even if it took 10 times as many iterations to find the correct order, it would still only be less than 4 million swaps. Just how slow was that computer?
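A quick back-of-the-envelope in C# of those numbers (a fragment assuming implicit usings, one uniform random shuffle per attempt, and ~n-1 swaps per shuffle):

long permutations = 1;
for (int n = 2; n <= 8; n++) permutations *= n;    // 8! = 40320
long attempts = permutations * 10;                 // the pessimistic 10x factor
long swaps = attempts * 7;                         // ~7 swaps per shuffle of 8 elements
Console.WriteLine($"{permutations} permutations, ~{swaps} swaps"); // 40320, ~2822400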
So it was a PowerPC, for which not much software was optimized. Then it was a new language written to be used on Windows. And that was run via Mono, a 3rd party actually writing it for Linux and not for a BSD derivative, on a CPU that no one was using, with a kernel that's different.
It might not have been that horrible, but it was just a quick presentation. Nothing that should even run for a minute.
A bit off-topic, but this makes me wonder about the relationship between Microsoft and Wine. Do they consider it a threat? An ally? Both?
This is my first time seeing Microsoft acknowledge Wine's existence, and in this case, it was at least in a friendly manner? Or could there be bad faith behind this 'donation'?
Another poster quoted Wikipedia somewhere here; MS implicitly acknowledged Wine's existence back in 2005 when they added a check for some of Wine's registry keys which would disable Windows Update if it found them. And in 2020 MS filed an Amicus brief in that Google/Oracle lawsuit in support of free re-implementations of APIs, citing Wine as a positive example.
While I am still wary of Microsoft after their previous anti-competitive behaviors, I think they've taken a more pragmatic view of late, and realize that projects like Wine are actually good for their platform as a whole. I expect if Wine/Proton did not exist, we'd see more (for example) Windows-only games ported to macOS or Linux. With Wine/Proton, those ports are mostly not necessary, and Microsoft gets to say that Win32/DirectX is something of a cross-platform gaming "standard".
What could WINE possibly do to them? Rob them of all kinds of enterprise and cloud business? WINE is a single LED on a nuclear powerplant control panel.
The best Wine environment is still a Windows install. You need to do a lot of things to run some run-of-the-mill Win32 app, so Wine is not a direct threat to MS in any foreseeable future.
I know it's a long-standing empirical truth that anyone involved with Mono is required to prefer doing just about anything besides thinking about or touching what's on the Mono project website, but this announcement really deserves to be put on a page unto itself with a URL all its own, rather than shoehorned into an anonymous div on the Mono landing page and at the top of /news.
It seems like the link we got (https://www.mono-project.com) might be the URL for the announcement - that's to say, this is the last update on that website, and will stay there indefinitely.
Thank you, I’ve been looking for an explanation of this. So Mono is useful to Wine because its users care more about licensing and running legacy software: Mono is free software and an acceptable runtime for pre-.NET 5.0 stuff.
I like the strategic approach. Pay attention, software publishers and hardware manufacturers! You can gain some significant public accolades.
When a publisher or manufacturer wants to end a product line, instead of shutting it down, spin it out as F/LOSS, and give it some seed money. If the thing is good, people will pick it up and it will survive. If not, the company still gains public appreciation.
This dovetails well, as a potential solution, with the problem we are discussing in the Smart TV, smart home, and smart vehicle articles.
For everyone who is confused by what is going on, here's the explanation:
Today, there are 2.5 Monos:
Mono that lives in https://github.com/mono/mono. This is the original Mono codebase that was written back then and was the .NET Framework for Linux, with corresponding compat and such, pioneered by Miguel de Icaza, who now seems to be happier in Swift land. As of today it was receiving very little maintenance, and I don't believe it was actively used. Please correct me if I'm wrong.
Mono that lives in https://github.com/dotnet/runtime/tree/main/src/mono. This is the Mono that got merged into .NET, becoming the building block for multiple components and one of the official runtime flavours. It is actively maintained and is at relative feature parity with CoreCLR, predominantly serving mobile targets (iOS, Android) and WASM as well as exotic or legacy targets like ARMv6, LA64, s390x(?), ppc64. It is also useful for initial stages of new platform bring-up process. Note that you are not expected to use it for targets that support CoreCLR due to a massive rift in performance between the two. When you are using it, you do so as a part of standard .NET toolchain - it is picked automatically for appropriate targets, or can be opted into with some configuration.
Mono that lives in https://gitlab.winehq.org/wine-mono/mono which is a Mono fork actively maintained by Wine for its own usage. Going forward, any possible ambiguities regarding ownership and stewardship are considered resolved and the ownership of mono/mono and everything related to it is transferred to WineHQ.
Honorable mention also goes to the private Mono fork used by Unity, which they are (painfully) trying to migrate away from.
The comparison we discussed was for unrepresentative code that used none of the features that make .NET fast (generics, SIMD, the expected forms of inheritance and abstraction and the devirtualization they enable, CoreLib APIs). The closest case in there was JSON serialization, which CoreCLR was 385% faster at. It is unfortunate that you feel a need to say this, knowing that it doesn't even show the tip of the iceberg.
Please do not mislead casual readers here with such comments.
They will have a bad time running basic programs. The original Mono is outdated and cannot execute assemblies that target non-legacy versions, and the Mono that lives in dotnet/runtime (which you have to go out of your way to use on CoreCLR platforms) tends to have all kinds of regressions on user-provided code that isn't guarded by the runtime checks that would keep Mono from accidentally going down the paths it has especially bad regressions on. Even CoreLib code nowadays uses more and more struct generics assuming monomorphization, which performs poorly on Mono. There is very little work being done to improve performance on Mono, with effort invested mostly in the WASM area and in ensuring it does not regress further. Major platforms like Android and iOS are making slow but steady progress migrating to CoreCLR/NativeAOT (there are other reasons too, not least much smaller binary size). And for WASM there is the NativeAOT-LLVM experiment, which is likely to make Mono obsolete for that target too.
The workloads that matter and are representative are the ones produced by C#, F# and VB.NET compilers as well as projects that care about exercising the standard library and/or produce recommended CIL forms (like https://github.com/FractalFir/rustc_codegen_clr).
Why so arrogant? The CIL is good enough. It's a promise of ECMA-335 to cope even with unoptimized CIL, and Mono indeed includes many optimization steps. Your arguments - especially concerning SIMD and other features supported by CoreCLR - are absolutely not relevant in this context. CIL is always the same (regardless of whether the CIL was generated by your big C# compiler or my small Oberon compiler), and if I feed unoptimized CIL to CoreCLR, it still has the opportunity to make use of the SIMD features of the given CPU if need be. As already discussed, it's even more interesting to base the performance comparison on unoptimized CIL, because at the end of the day we all want to know how good the optimizers of Mono and CoreCLR are.
And you didn't answer my question, so I assume you're working for Microsoft or some of their affiliates, and your claims are obviously biased by this.
In the comparison, the Oberon+ string primitives allocate a new char array every time. Other operations allocated one just to null-terminate a string (string constants are already null-terminated, for example, or the compiler could null-terminate them explicitly instead; in any case this is an incorrect design). Somehow, it failed the basic task of modeling C behaviors on the one and only high-level bytecode target that comes closest to modeling C. This was the very first thing I saw when I opened the compilation artifacts with ILSpy.
In any case, my goal was to post a disclaimer and it is fulfilled.
> string primitives allocate a new char array every time.
So what? What do you think the dotnet string or marshalling classes do internally? And how should that affect the performance comparison if we feed the same CIL to both Mono and CoreCLR?
But we can leave it at this; people can read the arguments at the given link, we don't have to repeat everything again.
If we compare the last major release of Mono back in 2019, where there was a real improvement to the CLR (not just bug and security fixes), with the CoreCLR versions of that time, the factor is more like 1.1 (see e.g. https://www.quora.com/Is-the-Mono-CLR-really-slower-than-Cor...).
I'm genuinely curious, for someone who develops web application backends and larger distributed systems & infrastructure, predominantly using Go and Python, exclusively targeting Linux, is there anything in the .NET ecosystem that anyone would recommend I take a look at? Many thanks.
.NET Core is my favorite way to quickly implement an app to run on a Raspberry Pi. Just basically copy & paste into a folder, chmod the executable and off you go.
I have a number of these devices running in the house doing various things.
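If anyone wants to replicate that workflow, it's roughly this (the RID and app name here are assumptions; use linux-arm for a 32-bit Pi OS):

dotnet publish -r linux-arm64 --self-contained -o ./publish
# copy ./publish to the Pi, then: chmod +x MyApp && ./MyApp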
Definitely C#. You’ll find tons more resources. F# is fantastic, but it’s not a good *first* programming language.
A lot of what you’ll learn when you first learn programming is going to be applicable in any language though. Once you’re comfortable with C#, and can understand the difference between imperative, object-oriented, and functional programming, you’ll be in a good place to check out F# (or any other language, really).
It doesn't matter; if you want to "actually" use .Net you have to at least be able to read C#. And I guess some files still - as it was 3 years ago - need to be C#, for example in mobile apps.
It's an interesting question. I've personally found that people with previous imperative/functional language experience (e.g. JS/Go/etc.) have picked up F# quicker, and people with OO knowledge (C++, Java, etc.) have picked up C# quicker. There's a lot of implied/conventional knowledge with OO that many C# devs forget they have (i.e. it's all sunk cost to them). If you just want to cut and paste code, however, C# has more Microsoft-provided doco, so there's that.
Honestly this is such an interesting question. Conventional wisdom would definitely say C#, but I’ve always wondered if that’s because imperative programming is easier than functional for a beginner, or because basically everyone starts with imperative. I’d be curious to see what would happen if someone started functional first.
Modern .NET on Linux is lovely: you can initialize a project, pull in the S3 client and write a 1-3 line C# program that AOT-compiles to a single binary with none of the perf issues or GIL hand-wringing that plagues life in Python.
Given that modern Python means type annotations everywhere, the convenience edge between it and modern C# (which dispenses with much of the javaesque boilerplate) is surprisingly thin, and the capabilities of the .NET runtime are far superior in many ways, making it quite an appealing alternative, especially for perf-sensitive stuff.
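To illustrate, a hedged sketch of such a program using top-level statements (the AWSSDK.S3 package and the default template's implicit usings are assumed):

// List bucket names; this is the whole program, no Main() boilerplate.
using Amazon.S3;

var s3 = new AmazonS3Client();
var response = await s3.ListBucketsAsync();
foreach (var b in response.Buckets)
    Console.WriteLine(b.BucketName);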
> for someone who develops web application backends and larger distributed systems
Blazor: It's Microsoft's way of doing in-browser C#. It can do quick-and-dirty server-side HTML, and professional-grade, in-browser WASM.
Why is this useful "for someone who develops web application backends"?
The nice thing about server-side Blazor is that you can make a management console, or otherwise port ops scripts, into a self-service page. Because you can choose to render on the server, you don't have to write an API, serialize your response, etc. You can do a SQL-ish query (with LINQ and Entity Framework) in the middle of HTML; see the sketch below.
(Granted, for production-grade pages Blazor can run in the browser as WASM and use industrial-strength APIs.)
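To make that concrete, a hypothetical server-rendered page (AppDbContext, Users, and the field names are all made-up; the C# query sits directly inside the markup):

@page "/users"
@inject AppDbContext Db

<h2>Recently active users</h2>
<ul>
    @foreach (var name in Db.Users
        .Where(u => u.LastSeen > DateTime.UtcNow.AddDays(-7))
        .Select(u => u.Name))
    {
        <li>@name</li>
    }
</ul>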
As someone in the same spot, I'll say that .NET looks more than interesting after so many years of using 6-8 languages daily. And I'm more the "make it work, not shine" type.
Why .NET > Go in my opinion?
- performance-wise the gap is not big, and .NET can probably even be quicker
- development time can be reduced; tooling is great for .NET, and even the funny-not-funny error handling is cleaner
- it's still much easier to find .NET people than Go people where I live and work
Now it's time to verify those assumptions - I'm going to implement the next real project in .NET and see how it goes. Hobby projects or "trials" in .NET resulted in fun and speed, but that often happens on a first date :)
Is AOT compiling your binaries [1] an option for you? The starting size of AOT compiled C# can beat Go in size [2] and from there it really depends on what you do and how you do it. Some simple ASP.NET server with https and routing can comfortably fit under 10 MB and there are compilation options that can help optimize further [3].
Although the SQL-like form isn't always favoured, and quite a lot of the time I use the plain OO one.
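For anyone unfamiliar, the two forms side by side (assuming a `people` collection with Age and Name members):

// SQL-like query syntax
var adults = from p in people
             where p.Age >= 18
             select p.Name;

// Method ("plain OO") syntax - compiles to exactly the same calls
var adults2 = people.Where(p => p.Age >= 18).Select(p => p.Name);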
Oh yes, extension methods: do you want object X to support method Y, but can't change object X? Well, provided you don't need access to anything private, you can just add a method and do X.Y()
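A minimal sketch (Shout is a made-up method, just to show the shape):

public static class StringExtensions
{
    // 'this' on the first parameter is what makes it an extension method
    public static string Shout(this string s) => s.ToUpperInvariant() + "!";
}

// usage, as if it were an instance method:
// "hello".Shout()  ->  "HELLO!"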
EF ("entity framework") is the ORM. LINQ lets you write queries against any collection, such as a Dictionary or a List. So I write lots of "listOfFoo.Select(x => x.Name).ToArray()" style code with it, which compiles down efficiently.
These days "pipeline oriented programming" (which is what LINQ is) is seeping into many modern programming languages like Rust, although array programming languages are still (unreadable) kings at it.
LINQ is just the way .NET calls iterator expressions that are a staple in any language that claims to be good and modern.
There are two main interfaces in .NET that have different behavior:
IEnumerable<T>, which is a sequence monad, much like Seq types in FP languages or IntoIterator and Iterator (IEnumerator<T>) in Rust. This is what you use whenever you write `var even = nums.Where(n => n % 2 is 0);`.
IQueryable<T>, which is what EF Core uses for SQL query compilation. It looks the same as the first one and has the same methods, but it is based on something called "Expression Trees", which allow runtime introspection, modification and compilation of the AST of the expressions passed to Select, Where, etc. This has existed in .NET for ages and really was ahead of its time when it was introduced. You can write a handler for such expression trees to use LINQ as a sort of DSL for an arbitrary back-end, which is how EF and now EF Core work.
You can also compile expression trees back to IL, which is what some of the libraries that offer fast reflection historically relied on. Of course this needs JIT capabilities and runtime reflection, which makes it AOT-incompatible: calling .Compile() on such a query in a JIT-less application will be a no-op and it will be executed in interpreter mode. It is also difficult for the linker to see the exact types that are reflected on, which means you have to annotate the types you want to keep and AOT-compile code for. This is why the mechanism is largely being replaced by source generation instead, closer to how it happens in C++, Rust, etc. An example of this is Dapper AOT.
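A small sketch of the delegate vs. expression tree distinction (top-level statements assumed):

using System;
using System.Linq.Expressions;

Func<int, bool> asDelegate = n => n % 2 == 0;          // compiled to IL, opaque at runtime
Expression<Func<int, bool>> asTree = n => n % 2 == 0;  // an inspectable AST

Console.WriteLine(asTree.Body);  // prints: ((n % 2) == 0)
var fn = asTree.Compile();       // compiles back to IL (interpreted under AOT)
Console.WriteLine(fn(4));        // True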
You mean `dotnet publish -r linux-x64 --self-contained` ? This will embed the runtime in the executable. You can also do trimming so it removes anything that's not used. Also, there's AOT but it's got a ways to go.
The "fully static binary" only works because Go ships cryptography and most other usually host-provided features that other languages instead rely on the host's libc for, at the cost of performance, limited feature support, and the requirement to recompile everything in order to ship (inevitable) security fixes, which has happened in the past.
.NET native compilation toolchain supports this mode but it's not a default for a reason (causes binary size bloat too, musl is rather small, but ICU is very much not).
(just to be accurate - all C# and runtime code becomes a single static executable, but cross-compilation is only possible between CPU architectures within the same OS, with additional options enabled by the 'PublishAotCross' nuget package, which switches to the Zig toolchain's linker so you can AOT-compile for Linux targets under Windows; for "self-contained trimmed JIT executables" you can target any OS/ISA regardless of what you use)
Anyway:
dotnet new console --aot #or 'grpc --aot', or 'webapiaot'
dotnet publish -o .
Notes: gRPC tooling is a bit heavy, and the webapiaot template could be improved, in my opinion
As of today, ILC has become better at binary size baseline and scalability due to more advanced trimming (tree-shaking) analysis, metadata compression and pointer-rich binary section dehydration (you don't need to pay for embedding full-sized pointers if you can hydrate them at startup from small offsets). You can additionally verify this by referencing more dependencies, observing binary size change and then maybe looking at disassembly with Ghidra.
Also better capability for true static linking - you can make .NET NativeAOT toolchain produce static libraries with C exports that you link into C/C++/Rust compilations, or you can link static libraries produced by the latter in NAOT-compiled executables[0][1]. It is a niche and advanced scenario that implies understanding of native linkers but it is something you can do if you need to.
Binaries compiled in such a way will have their interop become plain direct calls into another section of the binary (like in C). There will be a helper call or a flag check to cooperate with the GC, but it's practically free - it costs about 0.5-2ns.
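For reference, a minimal sketch of the export side of that (assuming NativeAOT's NativeLib publish mode, which produces a library exposing plain C symbols):

using System.Runtime.InteropServices;

public static class Exports
{
    // Exposed as the C symbol "managed_add"; callable from C/C++/Rust after linking.
    [UnmanagedCallersOnly(EntryPoint = "managed_add")]
    public static int Add(int a, int b) => a + b;
}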
Can anybody speak to the accounting implications of "donating" software to a foundation/501(c)3? Can there be any kind of tax write-off? (It looks like this might already have been owned by a foundation, but I'm still generally curious)
This will sound pretty dumb, but with all the amazing cross-platform games written in Unity - which I thought was Mono, or some form of cross-platform library with .NET as one of the primary languages - I always wondered why there was not a more 'business app version' of this. After using Xamarin, Appcelerator, and dozens of other 'cross-platform tools', only to be let down by ALL of them in the end and/or have support dropped... Having to support multiple platforms, esp. iOS vs. Android, still seems to be stuck in the stone ages, esp. for small dev teams that can't allocate massive resources to multi-platform...
If you want a consistent UI (non-native look), your best bet may be Blazor Hybrid currently. Yes, it's web technology (with the overhead that comes with that), but at least it uses the native browser components, so it's not nearly as "heavyweight" as something like Electron. My main concern has always been the lack of Linux support, but maybe that's not an issue for you.
I have only used mono a couple times, but I am a bit confused by the wording here and it is likely because I don't know the full story of Mono.
But:
> Microsoft maintains a modern fork of Mono runtime in the dotnet/runtime repo and has been progressively moving workloads to that fork.
Does that mean that this mono project and its associated repo and what is within the dotnet repo are not the same and could (if they have not already) diverge?
Mono has no reason to live anymore, hence the lack of commits and contributions
It is a dead project, I wonder what winehq has in mind here
edit: as pointed out in the comments, mono supports the .net runtime from before the newer ".net core" (which is not compatible). Because wine wants to be able to run older windows code, they probably still use this.
This isn't really true. Mono functions as a complete replacement for the ".NET Framework" - something that can be used to run any .NET app, including "legacy" apps targeting old ".NET Framework" versions, on any supported platform, even when the app was built to target Windows.
dotnet/runtime is intended to run more modern applications that target ".NET Core" - basically, stuff that's cross-platform on purpose.
There are tons of subtle differences relating to these goals but also some glaringly obvious ones, like mono having an implementation of Windows.Forms.
> hence the lack of commits and contributions
Microsoft have been actively forcing contributors out of mono/mono and into the dotnet/runtime repo for several years now, while Wine kept a weird halfway fork at https://gitlab.winehq.org/wine-mono/mono . Formally transferring `mono/mono` and the Mono name over to Wine will in theory allow `mono` to more effectively accept code which works to improve legacy .NET Framework support for compatibility reasons, while dotnet/runtime can continue to evolve as the way to run intentionally targeted .NET Core code.
Won't most apps use way more .net stuff than core covers? Mono was a way to run dotnet apps on Linux; didn't killing it mean killing cross-platform support for modern dotnet desktop apps?
Not really. Best to think of .net “core” as just .net.
Anything that was in the old .net that isn’t in core today won’t ever be.
Then there’s stuff that was missing in the earlier versions of core that existed in old dotnet. Some of it they later realised was useful for newer apps or apps migrated to core. These pieces were ported over by Microsoft or replaced by 3rd party implementations (e.g. avalonia for xplat ui).
(.net core is actually officially just .net, they dropped the core from the name)
I know little of dotnet beyond trying various semirandom things to make some .net apps work on linux. With that out of the way, my understanding is that
- Originally there was .Net Framework, by microsoft, for windows only. Versions 1.0 -> 4.8 were released.
- Then mono came along as a somewhat clean-room reimplementation of .Net framework, focusing on making it run on Linux. Though mono does not implement windows gui widgets, so for that there's stuff like Gtk#. And you cannot run windows GUI applications on mono for this reason, even though the core parts might be portable. Eventually Microsoft acquihired the Mono team.
- Later on Microsoft made the core of .net open source and portable, creating .Net Core. Or .Net Runtime, linked above, which is apparently the same thing (not sure when they dropped the "Core" part of the name). Applications written for .Net Framework can't just be recompiled for .Net Core/Runtime, there is porting work that needs to be done. And similarly as for .Net framework, even though the core is portable and open source, the windows gui libraries are not. So again windows GUI applications written using .Net Runtime cannot run on Linux. Not sure if there exists anything like Gtk# for .Net Runtime, allowing creating native Linux GUI applications with .Net Runtime?
- Finally, we have wine which is an implementation of the Windows API on Linux. And in a wine environment you can install e.g. .Net Framework including GUI libraries, so you can run .Net GUI applications that way.
They dropped the "Core" suffix with v5 in 2020, since at that point there was no longer naming confusion.
While Microsoft doesn't have their own framework supporting Linux GUI apps on the modern .NET runtime (MAUI does Mac/iOS/Android but not Linux), there are third-party ones like Avalonia.
Dropping the "Core" suffix introduced more naming confusion. Before that, ".NET" was often used as a shorthand for the (now legacy) .NET Framework. Which makes googling for Core-specific things much harder than it needs to be.
It is harmful to write new code that targets .NET Framework, and existing actively maintained applications have all migrated to .NET. The ones that did not either have poor maintenance or authors who lack time, as they don't owe anyone the extra effort unless they want to put it in (or sometimes it is a skill issue, unfortunately).
This must be the reason. Wine seeks to be compatible with a bunch of legacy software, some of which will want to use the equivalent of .NET 1, 2, 3, and 4.x Framework and not just "dotnet core". (Or whatever the new thing is called in Microsoftese this week.)
Edit: maybe this means WPF can be the best way to write Linux applications. After all, Win32 is the stable Linux API... nudge nudge, wink wink. :-D
More targets, much leaner (< 10 MB CLR + mscorlib), less than a factor-of-two performance difference from current CoreCLR, written in C, easier to compile than CoreCLR, etc.
> Does that mean that this mono project and its associated repo and what is within the dotnet repo are not the same and could (if they have not already) diverge?
Yes, they have diverged. Just as Microsoft forked the CLR to create CoreCLR, so too has mono been forked. Features like multiple AppDomains have been removed from this fork. Here is an example pull request:
The thing the .NET team maintains is (a fork of) the Mono Runtime/JIT. Mono's implementation of the .NET Framework BCL (= stdlib) isn't part of modern .NET.
> We want to recognize that the Mono Project was the first .NET implementation on Android, iOS, Linux, and other operating systems.
Is this true? The pre-releases and version 1 of .Net came with the source for a reference implementation of the CLR that ran on Linux or BSD. I can't remember what license it had and I thought Mono was a separate project, but maybe Mono was based on it. Not that it matters now.
You are thinking of Rotor. FWIW, I also feel as if Portable.NET (which was rebranded at some point to DotGNU, when I think it was even donated to the FSF) had predated Mono in functioning?
The Mono website has an archive of an old mailing list post which at the time talks about even-older origin of the project. It is (of course) heavily biased for Mono, and hilariously gives me an awkward shout out ;P.
So it ran on Windows, FreeBSD and Mac OS X, making it the first non-Windows implementation of .Net, but it didn't run on Linux. It also had a fairly useless licence, so Mono was separate.
Mono is still really the only way to run older .NET (pre FOSS runtime/Core .NET) on non-Windows platforms.
So Wine has historically kept a fork of mono for use within Wine for supporting .NET apps.
Modern .NET can be built for Linux, etc so this is less relevant now but there are still a lot of apps that depend on old .NET and Wine still gets value out of that.
There are a bunch of downstreams that get used for various purposes (Microsoft uses mono for WebAssembly-embedded .NET, for example), so it makes sense to give ownership of Mono over to the Wine community, as they are best aligned with the original upstream's intended use case (as a full replacement for .NET).
So yes it's on life support but arguably more in the sense that it has since specialized into a bunch of downstream projects. The upstream will probably mainly be used for coordinating common improvements that all of the downstream forks care about (which are mainly Wine and Microsoft).
Look at the release history and you'll see it was already on life support. MS stopped adding new features to .NET Framework with 4.8 but Mono has yet to reach parity with that.
What does 'donate' mean? Does it essentially mean that they're abandoning it and pull all resources, while the Wine team is welcome to continue maintaining it if they want to?
Also, I'm not sure how relevant Mono is in the context of Wine. .NET Core is no longer an OS component, but just a runtime that ships with software. Imo their focus should be on getting said runtime working, rather than maintaining a .NET fork.
I'm not a gamer, so forgive me if I see connections that aren't there. Does this in any way impact game emulation? Isn't wine part of Proton, or Steam's attempts to run windows games on Linux? I suppose .net and the CLR play some role in win32; how is that usually emulated?
I think it makes sense. Considering that they are two competing technologies which more or less try to accomplish the same thing - make Microsoft technologies compatible with other platforms.
Hotmail was built on top of Unix by someone else, and then Microsoft acquired it.
Yes, Microsoft does do Linux these days; they even have their own distro. But this still does not answer the question of why they would replace real Windows with Wine and risk compatibility issues, if they don't have to pay anything for licenses.
Is this a "dropped on the community" project like the Borg/Kubernetes fiasco, with most PRs ending up with the following response, and just corpo-sponsored changes and patches getting through?
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
I kept thinking Microsoft needs a Windows port for their handheld, doesn't want to use SteamOS, but also needs to work with 'interesting' hardware. Their answer would be a Linux port, imo, but having too much there could annoy trust regulators, so they divested from Mono. But I have zero proof for any of these hunches.
This is the thing about software: even if you aren't looking to improve it, the world around it will subtly shift so it'll need to be updated or it'll stop working.
For example, Windows XP can't access the modern internet because it doesn't support TLS 1.2 or 1.3 and most of the web is now secure. The software still exists, but the world around it has shifted so it doesn't really work. If 95% of people end up owning electric cars, gas stations are going to become scarce. Maybe there will be workarounds, but the world will have shifted around the product. Let's say that all gas pumps were changed to wider-nozzle pumps. Sure, you could make an adapter, but that's the point: changes in the world around you end up necessitating changes, workarounds, etc.
It might be mostly read-only, but there's always little possible things that come up requiring work to be done on it.
.NET Framework isn't EOL and is probably going to be supported forever pretty much. There are still regular updates to .NET Framework distributed through Windows Update.
"Supported"... Kind of. Showstopper stuff is fixed. Other stuff is not. My last company had two open bugs with .Net Framework on more recent versions of Windows Server that were year and half old.
.Net Framework will be supported as long as the Windows Server OSes it runs on support it. If the Windows Server team ever casts it out, it will die.
wine-mono for one. It's also used for some desktop apps, crucially for those built with the WinForms framework, since the newer, .NET Core versions of that are Windows-only.
Based on how Xamarin performed prior to the MS acquisition, I'd guess dead.
The license cost was high, and the MS acquisition came right around the time React Native and Flutter started to reach v1. I think they'd have been blown out of the water pretty quickly. At least Microsoft allowed Xamarin to get into enterprise .NET shops pretty quickly. There are a lot of B2B form-based apps written in Xamarin. I worked on a pretty big one that made (and continues to make) a lot of money.
I've long assumed the point of the acquisition was because Xamarin did basically all the hard work of allowing .NET to be cross platform.
What happened to Xamarin looks like Microsoft took whatever IP was relevant and let everything else go, and this decision is a confirmation thereof.
It is kind of interesting to see Miguel's feedback, now that he is allowed to talk about how things went down.
I’m a big fan of Miguel’s work. His comments have been pretty interesting. You also don’t have to read between the lines much to know how he feels about what’s happened to his tech.
I assume he’s got fuck you money now though. I’m very excited to see what he does with Swift and Godot, Swift is a great language for gamedev.
Perhaps us C programmers should be telling the Rust programmers to stop shitting up the industry for everyone else because "C" is too hard to get right?
Really, point aside, there is no place for zealots and plenty of room for rational analysis in deciding what tools to use. Absolutes and extremism are both bad.
Every program I've seen like that is terribly inaccessible. "Doesn't expose headings to screen readers" levels of inaccessible. And yes, you can do a whole lot of working around that, but you know what I'd do?
`<h2>Section heading</h2>`
I like Rust, but please don't write a website in it.
Oh wow, so you write fancy CRUD apps on the web? I've already got 300 people who can do that in C# (.NET Core). Why do I need any Rust people?
What I need is people who can write very complex rule and constraint engines, plus mathematicians and statisticians to deliver some business value, not CRUD monkeys.
Given the style of writing and argumentation you have presented here, no one with leadership experience in the software industry is going to actually believe you. Maybe they're lateral to you.
And, even then, arguing quantity of engineers over quality of engineers is a pretty bad argument, especially on this site. So, yeah.
> If Rust is "too hard", find some other profession and stop shitting up the industry for everyone else.
If you really like Rust you should promote it by using it to write great tools that inspire others to use it, instead of shitting on people who use other tools to do actual work.
That's the old code base, which has been in maintenance mode for 5 years and which Microsoft doesn't want to maintain anymore. New development still happens in a fork which remains under the stewardship of Microsoft.
Second paragraph of the article by the way, just saying.
The Wine project apparently decided they wanted to keep alive an old version of a piece of software Microsoft has no interest in, and Microsoft gave them the official repo instead of throwing it out.
Mostly interesting in that it is a token of goodwill from Microsoft to Wine, something which is in line with the current Microsoft view of the OS market but would have been very surprising not that long ago.
Another perfect execution of embrace (Microsoft became the steward of the Mono Project when it acquired Xamarin), extend (Microsoft maintains a modern fork of the Mono runtime in the dotnet/runtime repo and has been progressively moving workloads to that fork), extinguish (we recommend that active Mono users and maintainers of Mono-based app frameworks migrate to .NET) for anyone who thought MS had actually changed since the bad old days.
The only wrinkle is that Mono was originally a .NET runtime for Linux, so they weren't embracing an external standard but a knock-off of their own. I still agree with elements of your statement in principle. However, giving Mono back to open source is an interesting development, and I don't know how it fits into your narrative.
That's not what EEE is. For starters, the term applies to standards, not to implementations. The standard here is .NET, which Microsoft controlled from the start.