It felt like it was being forced on me when various distributions switched to it. And it was something new and buggy to learn that I didn't want to deal with.
The general idea behind it is good, in my opinion. But the problem is how confusing and slow the god damn thing is.
Try overriding a service definition? You must know to set ExecStart to an empty string before you are allowed to override it. Really: the first line sets it to "", the second to the value you want. Just weird.
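For anyone who hasn't hit this, a minimal sketch of the drop-in you'd create with "systemctl edit foo.service" (the unit name is just an example):

    # /etc/systemd/system/foo.service.d/override.conf
    [Service]
    # the first, empty assignment clears the ExecStart inherited from the
    # original unit; without it systemd refuses the override and complains
    # about multiple ExecStart lines
    ExecStart=
    ExecStart=/usr/local/bin/foo --my-flags

followed by a "systemctl daemon-reload" (if you wrote the file by hand) and a restart of the unit.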
Try getting the logs of the service you just edited and restarted? Wait 10s on a flagship computer for the 5 lines of log to show up.
Systemd stores logs in a binary format. That takes more storage space than gzipped text, while being about 100x or more slower to read than zcat and co.
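If you're stuck with the binary journal, a few stock journalctl invocations at least make it bearable (the unit name is hypothetical, and timings obviously vary):

    # how much disk the journal is eating
    journalctl --disk-usage
    # last 5 lines for one unit, skipping the pager
    journalctl -u foo.service -n 5 --no-pager
    # dump a unit's log to plain text once, then grep/zcat-era tools are fast again
    journalctl -u foo.service -o short > /tmp/foo.log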
Linux has always been developed with companies, and with their self-interests. It's the same with the BSDs too.
Yes, we also add what we like and what we want to see, but companies employ these developers to implement what they need inside Linux.
We have kernel-bypass networking because of Cloudflare. BSD has extremely fast networking because Netflix and other companies needed it, Linux has better power management because servers needed it, etc., etc.
I also have points I don't like about systemd, but companies are neither new, nor are they gonna leave Linux alone (and they shouldn't).
I have zero to do with Mariner, but I would just like to point out that nobody seems to have actually read the distro docs. I see it as a way to have end-to-end supply chain management for critical components that are used across multiple MS workloads, and not as something directly targeted at customers, so all the conspiracy theories in this HN thread are incredibly amusing.
Everyone’s free to use it, though. The entire thing is up on GitHub someplace (on mobile, so can’t dig out the repo easily), but some assembly might be required for your use case.
As an anecdote, when I was playing around with it (built a VM, some containers, etc.) I also moved my personal laptop to Fedora because I realized I was out of touch with modern RPM-based distros.
> so all the conspiracy theories in this HN thread are incredibly amusing.
Microsoft may be just doing something for their needs, and putting it in the open like a good citizen of the FOSS world, but the older people who remember what happened back in the day have every right to be paranoid about MS.
"Eh, this is the old Microsoft, today's one is different" is just a sentence. They still don't (and probably never will) inspire confidence when it comes to Linux and FOSS.
We see what they have done with GitHub, for a start.
Well, I have been in the industry for over a quarter of a century (at least) and joined Microsoft around the time Satya presented his “MS Loves Linux” slide, and those people would be amazed at the internal culture around OSS contributions. But having worked at several tech companies and with direct insight into a few others, I’m used to cognitive dissonance between outside views and reality.
(It was a little annoying to get vaguely threatening, but overtly insulting e-mails at my taoofmac.com address the first couple of months, though. Oh, and I’m a Mac user, too. Try to factor that in …)
Everybody is saying the same thing, but with the whole Secure Boot debacle, the Halloween Documents, GPL violations with Copilot, and other things, it's very difficult to believe and feel this.
When these two views merge, Microsoft looks like a cozy home with a fireplace inside, but built like a spiky poisonous ball outside, designed to crush anything and everything.
I don't want to act like a bone-headed old man yelling at the cloud, but from outside, none of this is visible.
Oh, and Microsoft even screwed its own supporters with the hot-reload incident in .NET Core.
Whenever I tell myself, "OK, past is past. No need to be this rigid", Microsoft does another thing to make me return to my old stance. Maybe Microsoft needs to be able to see itself from outside?
Well, maybe you should listen to “everybody” a bit more, the way you’re piling on more dislikes and grim figures of speech into what wasn’t an argument is exactly why I never replied to any of those e-mails.
Well, seeing the actions of "Greater Microsoft" drowns out the actions and words of people who work inside the machine (aka "everybody").
Seems like we're not so different in the amount of history we have accumulated about these things called computers, so we have possibly seen many (if not all) of the big events that shaped the industry.
I'm not grim or angry, but I have generous reservations about the company and its motives, that's all. I see Microsoft as an elephant in a china shop, contemplating the most efficient maneuver to break everything at once.
In my 40-year petrochemical career I have most enjoyed working with the outstanding individuals who are employed by oil & chemical companies but who recognize they can afford to not operate as toxically as they used to.
They are working from within to make it more sensible for the rest of the world, but progress is hindered by the overwhelming momentum in the opposite direction remaining from those who are not on the same page at all.
> and provide the resulting source code to anybody regardless of the source and target license w/o consent via Copilot.
That is one opinion. Mine is they used the code as allowed under the Github terms of use and/or the principles of fair use. Ultimately it doesn't matter what either of us think. A court will decide.
If this is going to be a desktop offering from Microsoft, I think it will be for remote cloud and potentially WSL2 use only. The future of desktop Linux is a VM under Windows.
> The future of desktop Linux is a VM under Windows.
Is there really any demand for this though? What Linux gui apps exist that windows users want/need? Enough that Microsoft sees an addressable market large enough to get roi?
The inverse seems to have way more practical use cases that could actually drive revenue - games and legacy business applications (as mentioned in the esr prose).
I got quite excited when I heard about WSLg and rushed to install it.
Never used it since. I did have the Linux version of DBeaver installed that way, but there is little to no difference from just running the native Windows install for that.
The only use I can really think of is doing cross-platform GUI development, but even then MS will say "hey look, native Linux windowing support in WSL" and also "Not yours, no linux version of MAUI for .NET"
I use it for running the automated browser tests with my frontend stuff at work. The code is in WSL, and running Cypress or whatever with browsers in Windows with code in WSL seemed to not work. But install Chrome/Firefox in WSL, and it works great with WSLg. Chrome on Linux also attaches to the debugger in VSCode, which doesn’t attach to the Windows version of Edge.
I fully disagree with this. From my perspective, WSL has only given Linux Desktop a bad name due to many poor choices and poor performance. It has gotten better, but you still need some way to deploy Windows, and Windows deployment automation sucks and is bloated. Why would a user want to run a lightweight desktop inside a legacy spyware one?
Because Windows is super easy for the vast majority of people to use and desktop Linux isn’t.
Windows deployment isn’t bloated, it just has a ton of actual functionality that real users want and that doesn’t come for free. The UX for the admin is also super nice, with tons of online articles and communities with people who are willing to help you do what you’re trying to do, as opposed to tell you why what you want is bad.
> Windows deployment isn’t bloated, it just has a ton of actual functionality that real users want and that doesn't come for free
A base Windows installation takes up a lot of space mainly because of Microsoft's compatibility commitments and the implementation strategy it uses to maintain them, and to some extent the way libraries are typically distributed. Neither OS features, nor hardware compatibility, nor the selection of included applications has much to do with that. Microsoft delivers some real value through their compatibility commitment, but it's a different thing than functionality.
> The UX for the admin is also super nice, with tons of online articles and communities with people who are willing to help you do what you’re trying to do
This is frankly a stunning claim. From local configuration being a hodgepodge of many generations of GUIs, to the incompleteness and painful slowness of completing a task through visual imitation, to the fundamental hostility to automation, to the slowness and incompleteness of PowerShell, to the absolutely anarchic nature of software management on the platform, administering Windows machines is a cumbersome, manual mess.
> communities with people who are willing to help you do what you’re trying to do, as opposed to tell you why what you want is bad.
This is an illusion afforded to those who come to both operating systems with Windows-centric expectations. In this very forum, you can find Windows users who respond to posters who complain about defects that come up with Microsoft's official Windows port of OpenSSH by telling them they never should have tried to use rsync to transfer files between Windows machines.
With any operating system, there will always be people who respond to questions involving solutions that are at odds with the paradigm, strengths, or customs of the operating system with advice to re-think the problem. (And sometimes, they'll be right!)
> The UX for the admin is also super nice, with tons of online articles and communities with people who are willing to help you do what you’re trying to do, as opposed to tell you why what you want is bad.
> You mean the tons of online articles and communities trying to get you to purchase spyware?
Linux has a substantial number of communities and plenty of support, and with systemd things are getting pretty routine; there aren't a lot of ways to mess things up, unlike people in Windows communities telling you to reset various Windows components by deleting important system files.
Microsoft already has their way. Any time you boot an alternate OS, it's only because Microsoft deigns to allow it. They're the CA for Secure Boot, and at any time they can forbid disabling Secure Boot in order to qualify for Windows.
I have always found it funny that Microsoft provides a free service to the Linux community which makes them significantly safer, and for their trouble they get no end of shade from that community.
That won’t happen; they might get told to cut it out, but they won’t get broken up. Microsoft is so unbelievably too big to fail that their existence is a strategic asset for US national security. Secure Boot isn’t some big ole scandal to cut out Linux, it’s a feature to protect the massive fleet of Windows boxes provisioned by various government agencies that got spun into a consumer feature.
>While there is no official GUI desktop, there are some interesting GNOME packages landing in the repository
Since it's likely that some developers at Microsoft have some prior experience with Windows development, I wonder if they find the GNOME and GTK philosophy closer to the Windows programming experience than KDE, Qt, etc. I also wonder if we will see a GTK backend for .NET MAUI.
The article says Delridge is Debian based but appears to be using RPMs? Never seen that. I would have thought the primary advantage of being Debian based is the large repository of packages?
You can install RPMs on Debian, but most probably you will run into filesystem-level conflicts with DEB packages. But if you are rebuilding your own Debian-based distro, nothing prevents you from packaging everything into RPMs instead of DEBs, and that's it. Not that I see the point of that, though.
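For the curious, a rough sketch of what installing an RPM on a Debian box looks like (the package name is made up); dpkg knows nothing about the files rpm drops, which is exactly where the conflicts come from:

    # Debian ships the rpm and alien tools as regular packages
    sudo apt install rpm alien
    # install a foreign package directly, ignoring rpm's own (empty) dependency DB
    sudo rpm -ivh --nodeps example-tool-1.0-1.x86_64.rpm
    # or convert it to a .deb first so dpkg at least tracks the files
    sudo alien --to-deb example-tool-1.0-1.x86_64.rpm
    sudo dpkg -i example-tool_1.0-2_amd64.deb   # alien bumps the release number by default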
From what I understand, most of the package-level tools (dpkg, rpm, etc.) will technically work on any Linux system; you just wouldn't be able to share dependency management with whatever the native package management system is, so as mentioned elsewhere in the thread, dealing with file conflicts would be one annoying issue. I think using the repository-level tooling (apt, yum/dnf, etc.) on distros they're not the default on might be harder to do, although I wouldn't be surprised if it also fell into the "technically possible but probably not worth the effort" bucket.
My point is that using non-native tooling might conflict with the native tooling which presumably still exists and is managing some non-trivial portion of the system (at the very least, the base installation, but probably a bunch of other packages as well), so whether or not that conflicts with an external package manager is likely to vary by distro.
Yeah but Debian’s package repository is all deb based. It would be difficult to translate precisely into RPM - they have different tooling and semantics.
The whole point would be to rebuild Debian (or the subset you need) and package everything as RPMs, i.e. not use apt, dpkg, etc. but yum/dnf, rpm, etc.
The idea of building a whole distro on alien is pretty laughable. Besides, for making a whole distro, what you want to convert between is the source packages, not the binary packages they produce, which is typically (always?) what alien is used for.
Imagine writing Linux From Scratch and someone from Microsoft reads it and uses it to make a Linux distribution that goes on to power dozens of that trillion-dollar company's products.
To be clear, I'm not saying that "you should be outraged," necessarily. It's just an interesting thing.
Imagine being ok with trillion dollar companies using opensource code you wrote, for free.
Imagine if the people who work at those companies aren't your enemy. But instead are potential customers and potential collaborators, who see value in your work and want it to succeed.
Imagine if there was a good implementation of some idea, expressed as code. And we can all share and contribute to that single good implementation to avoid pointless work and avoid fragmentation. Imagine people from all across the world contributing both financially and via bug reports and pull requests to making that implementation succeed.
This fear of being exploited by big companies seems to motivate a lot of people in the free software world, and personally I find it pretty odious. Why shouldn't big businesses use my code? The better opensource companies (Google, new Microsoft, Facebook, etc) contribute back to the community in all sorts of ways; and I consider that a net positive on the world.
I've seen this fear show up for decades in all sorts of contexts, and I think I disagree with it every time. For example:
- A few years ago I was involved in a big argument over the licence for OpenStreetMaps. There were essentially two camps: the "Oh no what if (big evil corp) uses our community mapping data to make their map better" camp, who wanted a restrictive anti-company license. And people like me who just wanted to give a gift of mapping data to the world.
- This is the gap between GNU GPL (especially GPL3) and BSD/MIT licenses.
As an opensource maintainer, I do have a problem with people who work at big companies expecting me to volunteer my time responding to issues on github. I'd like big companies who file issues to contribute financially in exchange for my attention.
But if they don't pay, I'm more than happy for them to use my code. I made it for everyone. I'd really like it if fewer people needed to reinvent the wheel for silly licensing reasons.
>Why shouldn't big businesses use my code? The better opensource companies (Google, new Microsoft, Facebook, etc) contribute back to the community in all sorts of ways; and I consider that a net positive on the world.
Because free software is in a constant battle for relevance, and they will use your code to make their proprietary offerings strictly superior to the free ones. What do you think would happen if Autodesk were permitted to fork Blender into a paid product, extend it in all sorts of proprietary ways, and push it heavily through advertising campaigns and educational subsidies? Blender would die, is what.
You know, I don’t think it would? If autodesk depended on blender, and the core of autodesk was blender, I think autodesk would throw gobs of money at making sure blender flourished. They’d probably try and hire lots of blender devs too - just to get them to work on core blender features so they could make autodesk better.
There’s plenty of examples of this happening in the wild. Is FreeBSD dead because macos is built on top of a lot of its code? Is redis dead because of redislabs’ proprietary offerings? No.
Is SQLite hurt by all the projects built on top of it? No. The opposite - it’s strengthened by being used.
As someone who contributes to BSD licensed code, please don’t tell me my users are “appropriating” my work. I’m happy my code is being used, and the more the merrier.
>Imagine if the people who work at those companies aren't your enemy. But instead are potential customers and potential collaborators, who see value in your work and want it to succeed.
Yes, but we'd have to imagine it isn't microsoft doing it.
You'd think people would learn the dozenth time it happens, but no, here we are again with Copilot and VS Code.
Visual Studio Code is a massive gift to the community. If we take it in a different direction than Microsoft wants, that would be us doing what you are accusing them of. We can do what we want though. So can they. That’s Open Source!
It's not a tiny detail. ‘VSCode proper’ includes integration with the extension marketplace that everyone expects to use, as well as exclusive access to proprietary, Microsoft-backed extensions which are widely used throughout the VSCode userbase for essential functionality.
Microsoft has arranged things so that actual open-source builds of VSCodium will always be lacking in comparison to their proprietary product.
Oh, my apologies! I'm not tuned properly to pick up irony about that on HN since I've read so many posts overlooking the real difference. Hopefully my comment will be useful as a starting point to someone who hasn't really looked into the issue before.
Probably presenting a code editor with IDE like extensions that are all Open Source, and then gradually killing the Open Source extensions and switching them out for proprietary ones.
Well Github is struggling to not abuse its Free Software users, VS Code has failed to remain Open Source, I'm not sure about LinkedIn because that never pretended to heart Linux so that gets a pass, and MS Linux isn't really a thing.
Although, if you are to believe the terrible kernel commits they were attempting to make to allow them to do an Nvidia closed-source shim, then it wouldn't shock me.
> I'd like to license my code under the GPL, but I'd also like to make it clear that it can't be used for military and/or commercial uses. Can I do this?
> No, because those two goals contradict each other. The GNU GPL is designed specifically to prevent the addition of further restrictions. GPLv3 allows a very limited set of them, in section 7, but any other added restriction can be removed by the user.
> More generally, a license that limits who can use a program, or for what, is not a free software license.
I think the point is only that companies will often not use GPL licensed code in their own products, as that forces them to GPL their own code as well.
So BSD/MIT is less restrictive in that sense. There is no obligation to change the license of any code that uses it.
PRs to improve zstd's embarrassingly slow decoder require you to assign your firstborn to Zuck and new Microsoft is selling me a subscription to use my laundered open source code to generate autocomplete suggestions in their closed-source IDE that pegs 2 cores and needs 4 gigs of memory (on the remote machine only) to edit 50k text files over ssh
If microsoft's expensive software is bad, why are you using it?
And you don't need anyone's permission to make zstd better. You just need their permission to merge your changes into their (upstream) branch. If you think you can maintain zstd better than Facebook, fork their code and compete with them. It's BSD licensed after all! And if your zstd changes are sufficiently compelling, maybe everyone will start using your version instead.
As a general rule, if I opensource some software I've written, I'm under no obligation to accept pull requests in any form. My time is my own. Facebook is free to run their zstd project however they like. They're already doing good in my books by sharing it with the world, for free, under a BSD license. They don't owe you anything more.
Zstd's decoder is very, very fast for what it does; what ideas do you have to improve it?
It's unlikely to beat the lzo family on decompression speed, but that's a different class of compressor entirely. And probably there are still some more tricks zstd can learn from the closed-source Oodle Leviathan, but I don't think much more can be learned.
I expect any "perform a conversion of some sort on a byte stream" implementation that uses 0 SIMD instructions and is not memory bound is leaving a lot of performance on the table, especially if one is permitted to mess with the design and layout of the input to make it more amenable to vectorized consumption. I cannot confidently claim that we're missing a >2x speedup in this case, though; it may be as low as ~1.3x or something.
I checked the zstd source and I'm surprised to see you're right -
There's a little x86_64 assembly, but I don't see a single SIMD instruction anywhere, and no intrinsics either. Seems like brotli is the same. I assume zstd still gains something from SIMD autovectorization in the compiler; that might be interesting to benchmark with and without such a flag.
Since the zstd bytestream got frozen in RFC8478, messing with the layout too much will require a zstd2 and moving the whole world again to use it (linux kernel compression, rpm binary format, etc)
LFS wouldn't be the ideal distro to create a Linux distribution from. It might be good for training people in how to make one, but Yocto and other projects are better suited for creating a distro. The use of RPM, though, is IMO a bad choice and one that lets me know it is very unlikely this would ever become a major distro for Microsoft.
To be honest, my first thought was "of course the people who insist on \r\n would pick RPM". Then I realized I haven't used RPM since Red Hat 6, and maybe it's gotten better in the last twenty years...
I've probably just gotten inured to apt, but do you say RPM is the least horrible because the interface is more logical/convenient, or just better at managing dependencies?
You should take a look at xbps, used by Void Linux. It's extremely similar in speed and interface, but tracks shared library dependencies and so supports partial updates. xbps-src, roughly equivalent to Arch's makepkg, statelessly builds packages in a chroot which, when writing "templates" (PKGBUILDs), helps make sure you haven't missed any build dependencies simply because they happen to be installed on your system.
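For a feel of the workflow (the standard void-packages setup; the package name is just an example):

    # clone the templates tree and bootstrap the clean build chroot
    git clone https://github.com/void-linux/void-packages
    cd void-packages
    ./xbps-src binary-bootstrap
    # build a package from its template inside that chroot
    ./xbps-src pkg somepackage
    # install the result from the local repository it produced
    sudo xbps-install --repository=hostdir/binpkgs somepackage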
Same here... I think that is what mingw or some of the initial "Windows" Linux distros are based on. A PKGCONF file would really make creating and maintaining packages a breeze.
> The use of RPM though is IMO a bad choice and one that lets me know that it is very unlikely that this would ever become a major distro for Microsoft.
RPM is antiquated. New package managers like apk & pacman have had great success due to their simplicity and attracting a lot of maintainers that are able to maintain more stable packages than other distros.
This is totally false, and compares package managers in incomparable categories (pacman's counterparts in the RPM-based package management world are Zypper and dnf, not rpm).
Arch Linux itself has very few packages— ~10k, around half of what you find in Fedora, an RPM-based distro. The DEB packaging format and the tools related to it are way more convoluted, and still Debian has several times as many packages as Arch and many, many more maintainers than Arch does.
The only area where the Arch community sees more contributors for packages is the AUR, where contributors are decidedly not maintainers, package quality is low, and packages are not stable. (And users face this in addition to the whole basic integration problem with source-based packages and packages installed via binary repos, of which only the latter is ever considered by pacman's dep solver during upgrades.)
Users like pacman because it's fast. It's not nearly robust enough for use at serious scale. Cutting corners is how it gets that speed.
Alpine's apk is just another lightweight cousin of dpkg that incorporates some higher-level dependency resolution in a barebones way, like ipkg and opkg, as used on embedded systems or OpenWRT or whatever. It's not some maintainer magnet which has multiplied the (pretty tiny) Alpine repos (which don't even include, for example, a JVM).
False... Arch Linux and Alpine maintain significantly more stable packages (meaning up-to-date) than other DEB- and RPM-based distros. Zypper and dnf still use RPM, BTW.
Also, Alpine certainly does have a JVM, so that statement is flat out false, and I'm not sure where you're getting your misinformation from.
You're right, I misremembered some detail about Java packaging on Alpine. There's an Alpine-based Docker image I've set up to build for a Java application at work where I have to manually pull down some binary that I expected to be packaged, but I don't recall all of the details.
As for the substance of your reply:
> Zypper and dnf still use RPM BTW.
Yes, this is what I said. Zypper and dnf are high level package management tools comparable to Pacman. RPM is not.
> stable packages (meaning up-to-date)
That's not at all what the word 'stable' means, but
> Arch Linux and Alpine maintains significantly more [up to date packages] than other DEB and RPM-based distros
Debian Unstable has > 18k up-to-date packages. Fedora Rawhide has > 14k. openSUSE Tumbleweed has > 9k. Arch has fewer than 9k up-to-date packages.
Arch has fewer up-to-date packages than any of the most prominent RPM-based or DEB-based rolling release distros. It does have a larger percentage of its (relatively very small) total package collection at the latest versions from upstream: https://repology.org/repositories/statistics/pnewest
On that front, the difference between Arch and Alpine is greater than, e.g., the difference between Alpine and openSUSE. Debian Unstable about matches up with the AUR on that metric.
Not to mention that Nixpkgs, whose tooling is pretty much the polar opposite of Pacman's KISS philosophy, has more packages than Arch and the AUR combined and has more packages which are up-to-date than both combined.
I am sure that the simplicity of the PKGBUILD format has been important in the personal journeys of many package maintainers for Arch Linux. But the notion that this means Arch has attracted so many maintainers that it is capable of keeping a greater number of packages up-to-date is absolutely unsupported by the facts. The further claim that this (fictional) superiority reflects fundamental technical virtues in its package management system that make it a better base for enterprise Linux distros than something like Fedora or Debian or openSUSE is a huge leap and a non-sequitur.
It’s used as the “system distro” for running support services and stuff before/between when distros are loaded. The distro userland sits on top of the VM running CBL-mariner (and I’m almost certain that when you run multiple wsl distros they are all in one VM, though that may be wrong).
WSL distributions (instances) are not VMs. They are best described as "containers" running inside the WSL2 VM. (You can verify this from inside the instances; see the sketch after the lists.) Each WSL2 distribution/instance has its own isolated:
PID namespace
Mount namespace
User namespace
(I believe) Cgroup namespace
(And probably) IPC and UTS namespaces
WSLg System Distribution (Windows 11 only)
init process
However, they all share the parent VM's:
Network namespace
Device tree (other than /dev/pts)
CPU/Kernel/Memory/Swap (obviously)
/init binary (and some other binaries that are injected in)
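A rough way to check this from inside two running distributions (assuming stock WSL2 with its default NAT networking) is to compare namespace inode numbers; the isolated ones differ per instance, the shared ones match:

    # run in each WSL2 distribution and compare the inode numbers
    ls -l /proc/self/ns/
    readlink /proc/self/ns/pid /proc/self/ns/mnt /proc/self/ns/net
    # lsns (util-linux) gives a friendlier overview of what the instance can see
    lsns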
I must be the only one in the world not intrigued and excited by an incredibly boring server distro, barely different from Debian, that happens to be made by the Microsoft cloud Linux server department. Am I really missing something?
No. Microsoft will buy Canonical for Ubuntu and that sweet, sweet developer mindshare. Ubuntu is the developer distro after all. Ubuntu will become Microsoft Linux.
I have defaulted to Ubuntu for familiarity reasons, but recently I got to learn
- the procedure for upgrading to a newer release of ubuntu is: you run sed as root to fix your sources.list, then you do-release-upgrade, but it doesn't work because it has an undocumented dependency on pciutils
- at this point i should be used to this pciutils thing since ~all software has undocumented dependencies on autoconf, automake, autotools, libtool, cmake, ninja, rust, python2, python3, calling the binary called python in your path and expecting it to be a particular version that is 2 or 3 but some stuff needs 2 and other stuff needs 3, perl, npm but not the latest npm, clang but not the clang in the repositories, gcc, nvm, ruby, lua but not the latest lua, bazel, make, openssl, libressl, libcrypto, a different version of libc that will break almost all software on your machine when you install it, a bunch of optional obscurely named C header packages that install the headers into folders that the build won't search for header files and that turn out not to be optional if you want the thing to work at all, and a hundred different javascript build automation and test automation tools that exist mainly to call each other
- snap will just install broken package updates for you and is incapable of undoing this operation by design
- systemd-resolved will randomly just start taking 250+ms to return cached DNS entries, so if you call connect() many times per second you need a much bigger threadpool than you may have expected (and this is maybe an architectural change to your software if your previous threadpool size for connect() offloading was 1, or if you were naively calling connect() on the thread where you do actual work)
- if you thought you knew how to set the open file limit for a user, systemd has rearranged your furniture, and by the way this operation now requires you to reboot your computer (see the sketch below)
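A rough sketch of where that setting lives now (stock systemd paths; a reboot, or at least a daemon-reexec plus a re-login, is what makes it stick):

    # /etc/systemd/system.conf.d/limits.conf   - limits for system services
    [Manager]
    DefaultLimitNOFILE=65535

    # /etc/systemd/user.conf.d/limits.conf     - limits for user sessions
    [Manager]
    DefaultLimitNOFILE=65535

    # or per service, via "systemctl edit myservice.service":
    [Service]
    LimitNOFILE=65535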
so i'm not sure ubuntu has been a great fit, but i'm also not switching to anything else
>snap will just install broken package updates for you and is incapable of undoing this operation by design
I no longer recommend Ubuntu for this reason to people looking to get into Linux for the first time for developer-adjacent reasons. My recommendation is now Mint or Pop_OS, as they are effectively Ubuntu without Snap.
Edit: I am not blindly against snap, or against its idea outright. I understand what it is for, and if it delivered on that without issue, I would welcome it. My avoidance of Ubuntu is due to having to repeatedly (and increasingly) help Linux newbies through problems, where lately all of those problems have wound up being based in snap.
Cracked me up the other day when Ubuntu in WSL2 told me that I needed to install Firefox via Snap if I wanted to use it, but Snap doesn’t work in WSL2 because systemd isn’t being used.
I found that switching to debian for WSL 2 has made my life in WSL 2 much better.
you can set up systemd in WSL 2 now, and it's supported, but it gave my Ubuntu distro a lot of problems so I disabled it there. in Debian it works great so I left it on.
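For reference, turning it on is just a per-distro config file plus a restart of WSL (this is the documented switch; it needs a reasonably recent WSL build):

    # /etc/wsl.conf inside the distribution
    [boot]
    systemd=true

    # then from Windows: wsl --shutdown, and start the distro again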
The only supported (=tested) upgrade from non-LTS to LTS is from the last version before the LTS, and you should not change the sources yourself when you do that.
When you skip releases, no distro will officially support such upgrades (except for Ubuntu LTS-to-LTS, without skipping an LTS), and things might break.
Also, when you really want to skip releases, it is often better (IME) to change the sources manually and not use do-release-upgrade, but use aptitude and/or synaptic and/or other such tools to fix the dependencies manually. In any case, you might have to fix some breakage afterwards (or mid-upgrade…), as it is untested & unsupported.
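For completeness, the supported path looks roughly like this; the Prompt value decides whether you're offered LTS-to-LTS jumps or every interim release (do-release-upgrade rewrites sources.list itself, which is why hand-editing it first isn't part of the procedure):

    # /etc/update-manager/release-upgrades
    [DEFAULT]
    Prompt=lts            # or "normal" to step through each interim release

    # fully update the current release first, then upgrade
    sudo apt update && sudo apt full-upgrade
    sudo do-release-upgrade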
The next version of Ubuntu after the purchase will contain MS Buntu, a female gnome in the shape of a paperclip who assists with all sorts of programming tasks - logging into hotmail, using copilot, signing up for github enterprise, and so forth.
I'm a huge Arch (and NixOS) user. But one of the things that I think is difficult about Arch in an enterprise is that, by design, it is difficult to standardize enterprise tooling around. Ubuntu, on the other hand, has a bunch of defaults that can more reliably be predicted for in an enterprise.
I'm a security engineer and I've thought long and hard about why enterprise applications often lack Linux support -- it would be difficult to have something that works across the board (there are so many choices of desktop environments, notification daemons, network subsystems, etc.). If I were to write enterprise applications that targeted Linux I would probably target the default config of the current LTS release of Ubuntu (Gnome as the desktop environment, etc.).
I feel like Ubuntu is the desktop developer environment for the enterprise that is forward enough to support desktop Linux.
I think I'm smelling what you're stepping in. What made me assume arch was the ease at which everything is 'there' without a snap or ppa via the AUR. Thanks for taking a moment to share a different perspective with me.
I've been trying to have a similar strategy to yours with regards to enterprise long-uptime application deployments. I target RHEL and Debian Sid because they move at an almost glacial pace and have quite a bit of documentation.
Do you ever wonder what the trajectory of Microsoft and Canonical looks like with regards to an M&A perspective? It would be a good contender to counter Red Hat's dominance in the enterprise space.
My daily driver for dev work is Fedora. It gets a bad rap but I think it's superior to most other distros if you want a "work-focused" distribution with relatively little tinkering required.
WSL on my office laptop takes 10s to boot, and if you CTRL+C before it gives the prompt, it shows a Python stacktrace. I can only hope that by the time the people who have only ever used Windows retire, Windows will be as relevant as COBOL is today, which is probably an insult to COBOL, as it's not known to put blue screens of death on ATM and airport screens.
As everyone else you're living in a bubble. In your bubble people probably do hate Windows. Given HN origin in the Silicon Valley - where macOS and Linux are very popular - probably the majority of HN users also hate Windows.
But that doesn't say anything about the majority of the developers out there. From my experience Python and Ruby guys usually do use Linux and macOS, while .NET and Java devs run almost always Windows on their PCs and laptops.
The overwhelming majority of developers I interact with these days are on MacOS, and it's not even close. Probably a 80-20 MacOS to Windows ratio. I'm not in Silicon Valley, and I work with people from lots of different companies. Every company that offers the option for devs to have a Macbook, that seems to be what the "average" dev is choosing for a while now. This is obviously an anecdotal sample size but I'm looking at a group of about 30 companies over the past year ranging in size from < 15 employees to up to 500, and everything in between.
Ask a C#/.NET developer and he'll tell you 90% of the developers he knows use Windows laptops (the real Visual Studio works only under Windows). There are more than 10M professional .NET developers out there.
Same... a lot of it has to do with just the convenience of Mac shell being more native than the janky WSL implementation that does take time to startup and is noticeable, and not having to worry about a translation layer for volumes.
Microsoft actually bought a Gentoo-based distro in the container space - Flatcar Linux (which is a direct continuation of CoreOS, which was bought and killed by Red Hat).
Because Azure is the new OS, and for better or worse, UNIX won the server room wars, with Linux being the most used variant nowadays.
However, with containers running directly on top of hypervisors, and with serverless, it is only a matter of time until the actual OS of the server room is irrelevant.
Until then, Linux and BSD (which they also support) are business relevant.
I recall when novell bought suse: since everybody considered novell basically a proxy for msft (maybe a member of the blackrock/vanguard galaxy), it was presumed that suse gnu/linux was basically msft gnu/linux.
This comment made me LOL. Thank you, that's pretty funny.
I was curious if this was a saying I'd never heard of or if you just came up with it. Seems you just came up with this?
What's amazing is that Google already has this HN post indexed and you're already the first hit for the phrase. And even weirder, Google says that they (currently) indexed it 56 minutes ago, when HN reports you posted it 23 minutes ago. Google is now apparently predicting the future 33 minutes in advance.
Get a grip old chap and think through the old saw: "... when Hell freezes over", meaning [it] will never happen. There are plenty of riffs on that meme, including devilish hand protection strategies.
Google is a boring old replayer of trite old stuff and quite right too - that's what it does! I suspect it also reads HN on a tight loop - many of the locals there are also from this parish.
I suspect even the eye of Sauron doom scrolls HN ...
> even weirder, Google says that they (currently) indexed it 56 minutes ago, when HN reports you posted it 23 minutes ago. Google is now apparently predicting the future 33 minutes in advance.
One interpretation is that Google is lying about the indexing time.
Yeah, they don't seem to make any guarantees at all about the date being displayed next to the search result... I'm too lazy to check the page source here right now on which meta fields are updated with comments, but this is what Google says about how they determine the date:
> Google determines a date using a variety of factors, including but not limited to: any prominent date listed on the page itself or dates provided by the publisher through structured markup.
> Google doesn't depend on one single factor because all of them can be prone to issues. Publishers may not always provide a clear visible date. Sometimes, structured data may be lacking or may not be adjusted to the correct time zone. That's why our systems look at several factors to come up with what we consider to be our best estimate of when a page was published or significantly updated.
Wouldn't that just be because OP's comment appeared on the article, which was posted ~1h ago, and the Google result link is to the article, not the comment directly, hence has the earlier timestamp?
Google's indexing defers JavaScript execution. If comments are loaded via JS, they'll appear in the index with the timestamp the page was fetched, not the time its script was executed.
With Microsoft Linux Enterprise Edition, you can create
scalable multi-tier applications using our new Graphical
User Interface command-Line Technology (GUILT)(TM). Extend
your productivity with optimized support for Internet
Active-XWindows(TM) Technology and built-in Internet Xplorer
web browser.
...
Hot Topics
Microsoft Invades Cuba
Microsoft Monkey Colony on Mars
MS Linux to have Start Button
MS Linux Faces Competition
I do miss the time when whimsical stuff like that was commonplace. Even if it wasn't always original or super funny, some of it was, and it was nice that the Internet was a place for those things.
Still Microsoft, although after a change of tactics. They don't want to kill Linux anymore, but to do something even more dangerous: control it by making theirs the accepted corporate standard ("Nobody was ever fired for using MS Linux", etc.).
The day they push some proprietary or oddly licensed technology to be integrated into their Linux is the day a thousand alerts should be raised in the community. Then will come software that runs only, or runs better, on their Linux.
WSL was the first step: making something with both Microsoft and Linux in its name accepted by the community.
Mariner isn't being pushed for others, it's a way for Microsoft to mitigate risk of supply chain attacks in their services currently using other distros
exactly what has been extended or extinguished, here? What kernel versions belong to Microsoft?
has Dell, or anyone else, been told to stop selling laptops with Linux in favor of Windows with WSL under threat that they won't be able to sell Windows laptops anymore?
exactly what world are you seeing, because it isn't the one I live on.
> The day they push some proprietary or oddly licensed technology to be integrated into their Linux is the day a thousand alerts should be raised in the community.
UEFI was created by an alliance of 12 companies, 11 of whom are not Microsoft, and Intel began the work. Again, not Microsoft.
TianoCore is an open source reference implementation of UEFI maintained by Intel; there's no proprietary Microsoft stuff in there.
Microsoft seek to ease development for users who either choose Windows or who are forced to use Windows by an employer. there's no "play" here, no extending or extinguishing. They want, like any reasonable developer would, a better world for software developers.
Yeah, we're just now reaching the embracing. First Windows+WSL becomes the ideal dev environment, then Linux apps run natively without WSL, and then... why run a Linux kernel? After that they'll have their sights on the real prize: being the dominant browser again.
> First Windows+WSL becomes the ideal dev environment, then Linux apps run natively without WSL, and then... why run a Linux kernel?
I think it's more like, "Why run a NT Kernel?" or from an MS internal perspective, "Why fund development and testing of an NT Kernel? We already made it so all legacy Windows apps run natively on Linux anyways"
I find these kind of comments fascinating, with the juxtaposition of the late 90s OS wars from the Linux side and the complete pivot to aaS and meet people where they are from Microsoft.
Be as watchful as you want, be aware that the enemy isn’t coming though.
Windows Server has a vast ecosystem of skilled technicians, vendors, and MSPs who maintain the installation in the closet of every barber shop, high school, auto parts distributor, and dental clinic in North America. These institutions don't have & wouldn't be able to get the expertise to run their own Linux servers in the same way. That's what Windows Server is for. But as applications migrate to SaaS and the cloud, the operational capabilities of random small customers stop being relevant. Microsoft is trying to catch up to this with Azure, and Azure is embracing Linux, because that's how the internet-scale backend game is played.
You almost had it, until the very end. There are services in Azure that run on Linux, sure, but it’s not in danger of displacing Windows any time soon.
Those closet ADs will end up dirsync’d to AAD, and their inventory/CRM/etc app will be whichever SaaS figures that out fastest.
I don't deny that Microsoft is using Windows Server for internet-scale workloads. My contention is they are trying to capture the business of small software vendors, who previously built shrink-wrap products for Windows Server and are now reaching for something like AWS to host their SaaS. This type of business favors Linux, so Azure has to be a good place to run Linux.
I suspect Microsoft's vision is for you to use Azure AD with site-local Read Only Domain Controllers running headless on Server Core in case of internet or Azure AD outages. Server Core is more competition to headful Windows Server, rather than Linux in particular.
The only great argument for running Windows Server is Exchange and Active Directory (or IIS for legacy .Net Framework applications). That being said though, Exchange is so janky you're better off outsourcing that if at all possible.
No. You would have to be a maniac to use MSSQL on Linux in production. It’s so expensive anyway that the Windows Server licenses pale in comparison.
This kind of thinking is the equivalent of putting no-name retread tyres on an F1 car. Like okay, it’ll function, technically. It will save money. Is it a good idea?
Microsoft seems to have basically given up on windows except for legacy support. I suppose satya’s master plan is to eventually compete with Amazon on the price of Linux VMs
On the contrary, they’ve been adding new features which bring it closer to POSIX in capabilities - for example, various Windows 10 builds introduced Unix domain sockets, a pseudoterminal API, proper UTF-8 support, POSIX file delete semantics, etc. The pain of porting software from Linux/macOS to Windows has been reduced (significantly for some types of apps), and here's hoping they reduce it even further.
SQL Server has been able to do that on Windows for about a decade now. Just grant the “Lock pages in memory” privilege to the service account and it’ll do it automatically if the server has 16 GB of memory or more.
I use it as a free 15% go-faster button in my consulting gigs.
The current implementation is ok for programs that start soon after boot, run as root, and never exit, but even for server workloads there is a lot of other software out there.
> Today they "emulate" Linux on Windows (WSL2), tomorrow they will "emulate" Windows on Linux (i.e. all the Win32, etc APIs on Linux - think WINE++).
No. Microsoft tried to bridge the Linux APIs to NT APIs in WSL1 just like WINE does for Windows APIs to Linux APIs. But they ended up running into issues and limitations that made them change their approach.
Now, with WSL2, they just have a virtualized instance of Linux running. This isn't even an odd choice as Windows has already been running on top of a hypervisor for anyone who has Hyper-V enabled (new installs of Windows 11 have it enabled by default to support Virtualization Based Security).
> Microsoft tried to bridge the Linux APIs to NT APIs in WSL1 just like WINE does for Windows APIs to Linux APIs
WSL1 doesn’t implement the Linux syscall interface on top of the user-mode Win32 or NT APIs. Rather, it runs in-kernel; it calls kernel-mode NT APIs (only some of which directly correspond to NT syscalls), and (I assume) also implements some aspects of the Linux APIs internally to itself. Its approach is rather different from that of Wine or Cygwin, both of which translate one user-space API to another; and also from the legacy Windows OS/2 and POSIX/Interix/SFU/SUA subsystems, which implement those APIs mostly in user space, on top of the NT user-space API. WSL1 is essentially unique; the closest analog is probably the Linux syscall emulation in FreeBSD/NetBSD, Solaris 10/Illumos LX branded zones, and the old Linux iBCS2 personality - although those are all far easier, because there is much less mapping involved in emulating the Linux API on another POSIX OS than on one that is fundamentally non-POSIX.
> But they ended up running into issues and limitations that made them change their approach.
I don’t think those issues were inherently insurmountable, and solving them in the context of WSL1 would have made Windows a better platform, since many of those issues (e.g. poor filesystem performance) also impact native Win32 apps. But, they made a decision on where to invest their resources. WSL2’s implementation strategy requires less engineering work overall to reach a given outcome.
I specifically said NT APIs to differentiate from the Windows APIs. And my "just like WINE" comment was more to differentiate what they were doing with WSL1 from WSL2. My use of "just like" is too strong there.
And I also don't think any of the problems were insurmountable. I'm still disappointed that they gave up on WSL1. (At least partially because I thought the whole Windows Subsystem concept was neat design)
> I specifically said NT APIs to differentiate from the Windows APIs.
It depends on what you mean by "NT APIs". If you mean the NT API exposed to user-mode (NT syscalls), no, that is insufficient to implement WSL1 (or anything else like it). If you mean to include kernel-mode-only NT APIs (including undocumented/private APIs which MS doesn't expose to third parties, even new APIs added specifically for WSL1 to consume), then yes that is.
> the whole Windows Subsystem concept was neat design
Also worth keeping in mind, that WSL1 is not an "environment subsystem" in the sense that Win32 is – or OS/2 and POSIX/etc were. It has a radically different architecture from a classic Windows NT environment subsystem.
Ah my mistake about the subsystem implementation. My memory was fuzzy given the 5+ years since I read Windows Internals 7th Edition. Thanks for the correction!
> No. Microsoft tried to bridge the Linux APIs to NT APIs in WSL1 just like WINE does for Windows APIs to Linux APIs.
Uhh, I think you misread my post. I agree with you that WSL1 was a dead end. It tried to put Linux on top of NT.
I believe MS will try to put Win32, etc. on top of native Linux APIs (removing NT entirely). WINE and Proton have effectively done 90% of it, and MSFT will have a way simpler time of getting it to 100% given they don't have to worry about their own copyright.
MSFT also has the financial motivation to stop developing the NT Kernel, so it’s valuable for them to invest into this “WINE++”.
I think it is unlikely even Microsoft would succeed in replacing the NT kernel with the Linux kernel.
NT has features which Linux kernel devs have actively opposed including – such as alternate data stream support in its VFS/IFS layer, and a stable device driver ABI. So even supposing – and I'd be rather surprised if it were to ever actually happen – Microsoft were to release the NT kernel under a GPL-compatible license (thereby eliminating many of the legal issues), they still might find it a great struggle to get the necessary features upstreamed.
Important parts of Windows – e.g. synchronisation primitives – cannot be emulated on top of the Linux syscall API as it currently exists (with accuracy and high performance), due to gaps in functionality. See https://lore.kernel.org/lkml/f4cc1a38-1441-62f8-47e4-0c67f5a... for some of the gory details. I note that LKML message appears to have received zero replies, which is not an encouraging sign for any progress on that issue.
> MSFT also has the financial motivation to stop developing the NT Kernel
It would be a huge investment to extend the Linux kernel to be able to support a 100% correct and equal performance emulation of the NT API, and to port that API to run on top of it. Even with such a huge investment, there would be no guarantee of success (what if Linus refuses to upstream the required features?). And, if it legally requires open-sourcing large parts of the existing Windows kernel (due to GPL compatibility), that would greatly improve the viability of Wine/ReactOS/etc, threatening Microsoft's existing Windows revenue stream. The financial case for what you are proposing is far less clear than you think it is.
Love your response. I'm sure you're right about all of those details if a strict translation were needed. I just don't believe it needs to be so strict, and as such I think it is not that difficult these days.
Apple’s Rosetta 2 and Valve’s Proton have shown that these kind of emulations can be extremely effective. When you control the OS code, and are willing to be extremely hacky for the medium term, anything is possible.
If I were to lead this effort I’d do:
1. Try my best porting the Win32 APIs, Direct3d, to Linux. I’d convert many of them. Use Office, the Windows window manager, Visual Studio, etc as test suites. This can be deployed to users where there is no performance regression.
2. Update any non-legacy applications to use more of WSL2, especially where they are incompatible with the new Win32 APIs, e.g. Office, Electron, Chrome, Visual Studio.
3. Build a binary parser like Rosetta that detects use of any incompatible Windows APIs. Mark these binaries as "legacy" (this parsing only needs to be run once per binary). Run legacy binary processes through a hypervisor and a slimmed-down snapshot of Windows.
4. Investigate more transpiling of binary code like Rosetta, but for Windows APIs to WSL APIs.
5. Announce it and state the intention to migrate.
Many modern apps are Electron anyway. Actively supported apps will slowly migrate to more performant APIs. Legacy apps were not designed for today's hardware anyway, and they would work just fine with a little performance degradation, overall still a performance win compared to when the app was released.
> Apple’s Rosetta 2 and Valve’s Proton have shown that these kind of emulations can be extremely effective.
Rosetta and Rosetta 2 are CPU emulation with the same underlying OS – so quite a different problem from what WINE/Proton/etc address. And something Microsoft actually already has anyway – their implementation is less impressive than Apple's in terms of performance, although it can do some interesting things which Apple's can't – in particular, mix emulated x86-64 and native ARM code in a single binary image and process (Arm64EC) – Apple fat binaries are two completely separate binaries concatenated together into a single file, only one of which is actually loaded; Windows Arm64EC binaries mix x86-64 and Aarch64 code in a single binary image, so one function can be native and the other emulated. A program can be native Aarch64, but still be able to load legacy x86-64 plugin DLLs, with some performance penalty incurred for the latter [0] – something Rosetta or Rosetta 2 can't do, although Classic MacOS had similar support for mixing emulated 68k and native PPC code in a single process.
Valve's Proton is just a fork of Wine, and it works great for a carefully curated subset of games. However, games are a relatively narrow category of applications, there are lots of things other categories of applications will do which games never will.
> 3. Build a binary parser like Rosetta that detects use of any incompatible Windows APIs
Sometimes, the problem is not with the APIs themselves, but particular patterns of using them. A straightforward emulation of a Win32 API under Linux will behave the same for the vast majority of cases, but in rare edge cases will behave differently. Inspecting a binary to detect what APIs it uses won't tell you whether it is one of the minority of APIs which depends on one of those edge cases. Sometimes the developers themselves won't even know, because it is not unheard of for developers to do weird stupid things by accident rather than intention, yet it just so happens they work, and they don't even realise they've done something weird and stupid.
The biggest problem with your proposal is: will the huge dollar investment it would require of Microsoft actually be worth it for them? Even if it (maybe) saves money in the long run, it will cost a lot more in the short-to-medium term. And the fact is, Windows licensing revenue is still massive enough to more than pay for what Microsoft is spending on Windows development, so there is no financial pressure for them to save money in the long run – and migrating Windows to Linux would make it easier for their customers to migrate away from Windows, hence risking that revenue stream. Why take such a big risk with one of their cash cows for such an unclear benefit for them?
Even though it is possible to run an AD with Samba, it requires more than just clickety-clicking on a bunch of boxes, and MS AD is quite a large, very functional product.
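A rough sketch of the non-clickety version (realm/domain names are placeholders; a real deployment also means getting DNS, Kerberos and time sync right):

    # provision a new AD domain controller with Samba
    sudo samba-tool domain provision \
        --use-rfc2307 \
        --realm=EXAMPLE.LAN \
        --domain=EXAMPLE \
        --server-role=dc \
        --dns-backend=SAMBA_INTERNAL
    # later, joining another box to the domain as a member server
    sudo samba-tool domain join example.lan MEMBER -U Administrator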
I’ve been meaning to get around to trying Samba’s AD emulation for several years now.
AD has been very solid in my experience. Samba has big shoes to fill in my mind. Is it really workable as an AD replacement for a real production AD environment today?
Having said that, as a former directory engineer (iPlanet/Sun/Oracle/ForgeRock) I don't think any of the Azure AD workarounds, including samba's (who really needs CIFS in a world where files can be served up over https with secure OAuth?), are worth all the extra effort. If you need an enterprise directory, you should deploy one. The good news is that both Ubuntu and Red Hat now support Azure AD, so you're not stuck with half measures.
Of course not every shop _needs_ a system/network directory, and both those Linux ecosystems support a range of user and system management options that can do the job. Even if you finally find yourself in need of something more, AWS and GCP offer competitive identity services that can work just as well with non-legacy systems as Azure AD (so long as you don't have any Microsoft PaaS or SaaS dependencies).
(correction: gregkh works for Linux Foundation, Sasha Levin works for MS).
Azure Sphere OS for embedded is based on the Linux kernel, https://static.sched.com/hosted_files/ossna19/91/Crossover_E...
There's also their NOS for merchant silicon whitebox networking, https://www.linuxfoundation.org/press/press-release/software...
> SONiC is an open source network operating system (NOS) based on Linux that runs on over 100 different switches from multiple vendors and ASICs.. members including Alibaba, Broadcom, Dell, Google, Intel, Microsoft, NVIDIA.. Microsoft founded SONiC.. as open source so the entire networking ecosystem would grow stronger. SONiC already runs on millions of ports in the networks of cloud scalers, enterprises, and fintechs.