As I read this, my homelab toilet servers in the next room are running x86 64-bit Ubuntu. My Framework 16 laptop is running x86 64-bit Ubuntu. My RPi 5 is running 64-bit Ubuntu (though that one is ARM). I used to have half a dozen dual-boot Linux distros on my fun laptop, and now it's just Ubuntu. I know those aren't the same as having other forms of Unix, but I think it echoes that even within the Linux ecosystem, Ubuntu has become the de facto standard for a lot of people.
I have never been a fan of CentOS, and SunOS was so long ago I barely remember it, and I've never tried FreeBSD, but I do want the FOSS-OS ecosystem to thrive and not collapse into Canonical alone (especially given that I don't entirely love some of their recent moves).
I believe the overwhelming popularity of Ubuntu derives from its similarities to Windows, and therefore its adoption by Windows refugees.
A non-technical aspect of this is the widespread epidemic of Windows brain damage: that is, the usually lifelong shaping of a user's perspective of what is "normal" for an OS that comes from being weaned on a Windows OS. This has many, many embodiments. One particularly egregious one is the idea that you should choose the single "most popular" option instead of evaluating the technical merits of the many available options.
Thus, just use Ubuntu.
For servers especially, Ubuntu is often inappropriate. For instance, a GUI is rarely desirable on a server-only machine.
snap is another example. This non-native package manager breaks the basic security model of a Linux distribution. While distro packagers don't routinely audit every package, packages do at least pass through a distro-local build process, whereas snap imports a binary package from a remote third-party repo. This again is a result of Windows brain damage, as the standard mechanism for installing non-Microsoft software on a Windows machine is to download a binary from some third party.
A diverse ecosystem is most healthy. In this light the recent years of corporate consolidation of the free software ecosystem are concerning in many ways...
I think there's probably a small but strong market for "Linux MacBooks": laptops with the same or better level of fit and finish, the same attention paid to things like battery life, standby time, and reliable sleep/wake, and that are designed with Linux in mind.
All the current options, even those geared towards Linux specifically, make significant tradeoffs in one or more of these areas, and I think this significantly curbs Linux adoption.
At MPOW [1] I have a Mac because it can be properly sealed and instrumented by certified commercial tools. Inside it, I do all development in a Linux VM that runs my editor, language servers and linters, source control, local build pipelines, local DBs, a local emulator of many AWS services, and the system under development.
The native macOS apps I run are Firefox, Zoom, Outlook, a VPN client, and, of course, a VM runner, and WezTerm to interact with the VMs.
I slightly miss the GUI niceties of Emacs and Vim, but both are very much usable via a terminal session: true color, mouse support, shared clipboard, blazing display speed. I could run VSCode if I wanted. (I of course also somehow miss sane window management which I could get under X or Wayland, but frankly 3-4 native windows are not such a pain to manage manually, and Firefox has Tree Style Tabs.)
With new Macs being ARM-based and most production servers being x64 Linux, there's not much choice left, mostly just which kind of emulator you pick. Yes, you can mount the source tree via NFS. Or you can develop 100% without native code locally under macOS; that works with Java and, to a degree, with Node. If you've got Python, though, you suddenly depend on a significant amount of native-code extensions (from greenlet to numpy to Pillow to ML libraries), and developing on exactly the same OS you run in prod saves you a lot of trouble.
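A quick way to see the mismatch is to compare what your laptop and your prod hosts report; a minimal sketch using only standard tools (the example outputs in comments are illustrative):

```shell
# What the host itself is:
uname -m    # CPU architecture, e.g. x86_64 or arm64/aarch64
uname -s    # kernel, e.g. Linux or Darwin

# What platform tag Python resolves native-extension wheels against:
python3 -c 'import sysconfig; print(sysconfig.get_platform())'
# e.g. linux-x86_64 on the prod box, macosx-14.0-arm64 on a new Mac
```

If the two machines print different platform tags, every package that ships compiled extensions is exercising different binaries locally than in prod.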
After using Ubuntu (with a few detours here and there) since 2006, I switched everything to Debian with the latest release. I'm very happy. It feels like Ubuntu used to feel - a distro made for users first.
Because I regularly need to change configurations in snap applications, and it's a huge pain. Or I want the application to make use of files in a location, and they can't access them because they're snaps.
I found trying to work with them atrocious for anything that needed to be configured. You have to find the product's documentation to see what you want to configure, then you search the system and it's not there, so you have to find the (likely non-existent) documentation of where the snap moved the configuration to.
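For what it's worth, there is a pattern to where snap relocates things, it's just not where the product docs point. A sketch, using a hypothetical snap name `myapp`:

```shell
# Hypothetical snap name, for illustration only.
app=myapp

# Snap confinement moves app state away from /etc and traditional
# dotfiles; inside the sandbox these paths show up as $SNAP_USER_DATA,
# $SNAP_USER_COMMON and $SNAP_DATA respectively.
echo "per-revision user data: $HOME/snap/$app/current"
echo "shared user data:       $HOME/snap/$app/common"
echo "system-wide data:       /var/snap/$app/common"

# And some settings are not files at all, but snapd key/value pairs:
echo "try: snap get $app; snap set $app some.key=value"
```

None of this replaces the upstream docs, but it narrows down where to look before giving up.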
That's in addition to the normal gripes about application startup time, or how hard they make it to stop snaps auto-updating out from under you.
And then there's the closed nature of the snap ecosystem (a single storefront controlled by Canonical), which makes many feel that Canonical is trying to turn Ubuntu into a more iPhone-style closed system, as opposed to apt, where you can add whatever source you want.
I think this is less of a problem with FOSS. If the current iteration is acceptable, and the next generation generates pushback, then the community can branch off in multiple directions that each faction believes to be best.
Going back to the biology reference of monoculture... in evolution you do have times where the population is mostly homogeneous, until you encounter events that promote/favor diversity. Eventually events settle on one main "winner" configuration and the cycle repeats. Similar here in my opinion.
I've thought about this quite a bit. Ultimately there's a lot of benefit to a monoculture, but there's also some pretty scary downsides. At the end of the day, a monoculture is cheap interoperability and a crappy alternative to standardized protocols.
Still, I already log into an ARM machine at home. No other choice at this point. And ARM at work was at the top of my todo list when I left my last job.
That makes sense, but I'd love to know why it wouldn't work for them (especially in a post about how things are becoming a monoculture OS and architecture wise).
Going off my experience, it might be about not wanting to mix replacement part pools, or just biases about ARM being underpowered among management. It sounds like they're running everything on-prem, whereas a "cloud-native" org probably doesn't care as much.
I think there's something to be said for the diversity benefits of having the BSDs in the ecosystem, but in the Linux distribution world there's basically been a Cambrian explosion with incredible amounts of duplication.
Ubuntu is not going to be the only thing people run, but consolidation in itself is not bad, as there have also been a lot of half-cooked, abandoned, and redundant projects around.
I think the efficiency of the market, for lack of a better term, is showing its hand here as it is in everything else.
You used to go to an airport and see maybe ten different airplane types, and about twice that many tail liveries and airline colors. Now you see roughly three of each.
Systemd is a huge threat this way. Who cares what distros there are when they use the same base capabilities?
I'm a systemd fan actually, but I recognize the threat. I don't like there being a total success, a monoculture. But alas, it feels like most people either live with and learn to love what systemd offers (many-dimensional excellence and capabilities exposed consistently) while putting up with a couple of hiccups, or they have a retro, backwards-looking, anti-bigness reaction.
So there's something like a centrist force and an anti-force. There's no one trying for bigger and better. No one's trying to outdo systemd, to do a better job: they're working tirelessly to try to recreate a past where we did much less and had much smaller ambitions. There's no progressive force.
This directly mirrors Molly White's recent XOXO talk (video not yet available), where she talks about how the better new web we want isn't just a replay of the past. We need to build on, not be afraid of, what's newly possible, chase new targets, evolve. https://bsky.app/profile/knowtheory.net/post/3l2ib4kfrjm2r
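To make the earlier point about "capabilities exposed consistently" concrete: a hypothetical unit file (service name and binary path invented) gets the same declarative sandboxing knobs on every distro that ships systemd:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=Example sandboxed service

[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          # ephemeral UID, no account management needed
ProtectSystem=strict     # read-only /usr, /boot, /etc
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Before systemd, getting the equivalent of each of those lines meant distro-specific init-script work, which is a large part of the consolidation pull.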
The main issue today is hardware support and power saving: illumos/OmniOS and FreeBSD can barely run on modern hardware, and even when they do run they tend to consume much more electricity than GNU/Linux.
Besides that, instead of fooling around with Ubuntu I'd rather choose NixOS or, if I can, depending on the infra needs, Guix System, to get a more manageable system configuration language. They are MUCH more manageable and robust than Ubuntu or other GNU/Linux distros by nature; not as stable as old Sun Solaris, but still stable enough, and they teach you how to manage modern systems and infrastructure, which is damn needed these days.
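A minimal sketch of what that "more manageable system language" looks like on NixOS; the services, packages, and user name here are illustrative, not a recommendation:

```nix
# /etc/nixos/configuration.nix (hypothetical minimal server)
{ config, pkgs, ... }:
{
  services.openssh.enable = true;
  services.nginx.enable = true;

  environment.systemPackages = with pkgs; [ git htop ];

  users.users.deploy = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };

  system.stateVersion = "24.05";
}
```

The whole machine is declared in one file; `nixos-rebuild switch` converges the running system to it, and previous generations remain available for rollback.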
Perhaps having an OS polyculture is a luxury afforded to organisations with lots of time and money to wear the inefficiency cost of doing things 3 or 4 different ways, or who want to intentionally prop up a competitor to ensure there is market competition (it's fabled that IBM bought so much SLES to provide a competitor to Red Hat), and maybe some other reasons.
An Ubuntu monoculture at least gives you an easy migration path to Debian.
MirageOS is for unikernels that can run on Xen or KVM, and I think would be very interesting for all sorts of network services. Each would be essentially its own VM, but with only the parts that are compiled into the service.
https://mirageos.org/
Only Linux can run Linux containers, so you either want Linux to be a microkernel, or you want a microkernel that can run a Linux kernel. The latter exists; it's called WSL2.
WSL2 emulates a full machine via Hyper-V. Any OS that happens to have a microkernel and reasonable hypervisor implementation would meet that definition.
The Linux kernel in a minimal VM has been done in other projects too. Firecracker came to mind first, but there must be others.
I'd imagine this would allow you to run whatever kernel you wanted to start whatever "container image" you had planned. Quotes because I'm not sure if that's still really a container.
Bryan Cantrill talked about how they got Docker running on Joyent. They had to make shims for many system calls, and write somewhat more fiddly code for a handful of others.
No idea how much skill that takes, but it’s been done once.
It's the same thing as making WINE: you implement the ABI in userspace. It's a pain in the butt and it suffers from compatibility and performance issues. Programs run with WINE often run into problems not found on their native Windows environment. It's better to have a standard that each operating system implements so that apps don't need a weird portability layer in-between.
Sadly, Linux doesn't give a shit about other systems, so any effort to standardize containers will happen outside Linux, and Linux probably won't pick up its changes because they already have Linux containers. So a WINE like thing would be the most workable option.
The current state of affairs is actually a much better solution. Just make all containers OS-specific, and ship the OS kernel and dependencies along with a shim, and run them with QEMU. Apps run way more reliably this way with no porting required. The devs do less work, the users do less work, there's less bugs, and you can run any app in any OS.
I don't really understand this post. My take is that, if anything, the host OS is becoming less and less important (not that I would choose ubuntu for... well... anything these days).
There is an explosion of viable alternatives that are all running in containers (nix, alpine, debian, hell - lots of folks are just using distroless/scratch containers these days with static binaries). For folks on managed clusters, they don't even care what the underlying hosts are at all (and again - they aren't ubuntu).
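A sketch of the distroless/scratch pattern mentioned above, assuming a Go service (module layout and names invented):

```dockerfile
# Build stage: compile a fully static binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: no distro at all, just the binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image has no shell, no package manager, and no host-distro opinion in it at all, which is the sense in which the underlying OS stops mattering.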
Kernel consolidation is real, but I definitely don't see an Ubuntu monoculture. I see a Linux monoculture. My guess is that won't change until Linus steps back from running that show, and then I expect it to fracture again (it's always the people that end up mattering).
I also don't agree that ARM isn't coming. It is (I'm already running several services on rpis, and I get issued ARM macs for work). From a power perspective... hard to beat - The performance/watt is just too good to ignore. Graviton/TauT2A are already here, and they're economical on the hosted side.
Basically - this feels like the "I'm getting on up there and just want a stable thing I know" type nostalgia post. Ubuntu fits that bill for a lot of folks, but it's not where I see the next generation going.
If you're only seeing ubuntu... I suspect you've been doing the same thing for too long. Time to jump out of the rut. Go somewhere new, see the cool new-fangled things the kids are playing with (it's not ubuntu).
Ubuntu is a perfectly fine solution (and frankly, Debian more so), but the view the author has from the context of a university setting is... as outdated as my professor who still insisted on Smalltalk in 2007. It's not wrong, it's just aged.
> I feel that as system administrators, there's something we gain by having exposure to different Unixes that make different choices and have different tools than Ubuntu Linux.
You could try Windows! The water is warm and quite pleasant. There’s many things that are worse in Windows, but there’s also many things that are better!
It was still available as a download for Server 2012. It was listed as deprecated and they recommended various migration paths, but they didn't stop shipping it.
You could replace it with a fuller 'Interix', which was basically a full Unix with X and Motif. I ported a large Motif GUI app but never used it. Microsoft later bought Interix.
Yes but only as a technicality for sales. Nobody ever actually depended on or used the Windows POSIX system. Cygwin was created as a response to wanting a real working POSIX system on Windows.