Ish. Photon is optimised for virtualisation, but it also runs on bare metal. (I don’t know anything about CBL Mariner).
The article is talking about stripping down to the bare minimum, down to the level of even removing filesystem drivers, because they’re not necessarily needed!
I was thinking about this yesterday; other than POSIX compliance, why do operating systems choose a filesystem hierarchy as the datastore they provide? Why not a structured document store, a blob store, a key-value store, or some mixture of all of these?
I'm very bitter that computing went into turtle stacking mode and took its 50 years of baggage with it.
- I don't think that the suggestion was to remove the filesystem outright, just that if you run everything over NFS or 9P then you don't need drivers for ext/xfs/fat/...
- To the broader "why do we even use filesystems" - because those other options are less powerful while offering marginal benefit. Also because backwards-compatibility and interoperability with other systems is useful.
CBL (not sure about Mariner specifically) had builds targeting hardware, AFAIK.
I recall ~2016 that Azure was running their own Linux images on various parts of the gear, like "Smart NICs" in VM hosts that cooperated with SDN driver in Hyper-V.
I spent a few hours building the smallest kernel that'd boot in Firecracker ( https://blog.davidv.dev/minimizing-linux-boot-times.html ), got it booting in 6ms with network support, which opens up some interesting use-cases.
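For a sense of what "smallest kernel" means in a setup like that: a Firecracker-style guest is almost entirely virtio, with no disk, USB, or graphics drivers at all. A hedged, illustrative `.config` fragment (these are real kconfig symbols, but this is not the exact config from the linked post):

```
# Illustrative kernel config fragment for a Firecracker-style microVM guest.
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MMIO=y             # Firecracker exposes devices over virtio-mmio
CONFIG_VIRTIO_NET=y              # network support
CONFIG_SERIAL_8250=y             # serial boot console
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_BLK_DEV_INITRD=y          # rootfs from initramfs; no real disks
# CONFIG_USB_SUPPORT is not set
# CONFIG_DRM is not set
# CONFIG_SOUND is not set
```

With no hardware probing to do, most of a normal kernel's boot time simply isn't there.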
That’s super impressive! I have spent many hours optimizing embedded boot times, but it’s amazing how much faster it can be when the hardware is virtualized!
>"Although CMS was originally designed to run on bare metal, the version shipped as part of IBM CP/CMS was dedicated to running inside a VM.
As a thought experiment, now let's think about what a Linux system would look like if it was designed with this in mind. It will only ever be a guest, running under a parent OS. (To make life easier, we can restrict any specific edition to one individual host hypervisor.)
A lot of issues ordinary distros face just… disappear. It doesn't need an installer, because a VM image is just a file. It doesn't need an initrd, because we know the host hardware in advance: it's virtual, so it's always identical. It doesn't need to boot from disk, because it won't have disks: it will never drive any real hardware, meaning no real disks of its own. That also means no disk filesystem is needed."
It also wouldn't need most drivers -- and possibly might not need any drivers, depending on how things were configured!
I wonder why regular distros even need installers.
I mean yes, it's friendly to new users. Yes, some steps must be run that cannot be expressed as one-size-fits-all files in a tarball.
But could a power user not untar a filesystem and run a script and be good to go? I know that's a de facto installer - I just wish the tarball were easier to get at, I guess. Seems like a lot of data (deb packages) tied up inside code (one big 'install' function).
It depends how you want the "install" to go; Gentoo stage3 tarballs are exactly like that (and, unsurprisingly, the BSDs at least used to favor it). On the other hand, I'm personally quite fond of the method where you point the package manager at a root directory and it creates the system there by (more or less) just downloading and unpacking packages into that directory; off the top of my head, NixOS defaults to this approach, Arch at least used to, yum/dnf distros were quite happy to do it with a little prodding, and Debian does it with debootstrap. Anyways, the point is that this is in some ways more flexible: rather than starting with a root tarball and then adding packages, you just... install packages (almost incidentally including core/base packages).
Yup, all doable. 15 years ago, I'd build custom Xen PV images for Debian and Ubuntu VMs that would start off by running debootstrap into a chroot. It was basically just that: a directory you'd customise files in and then turn into a tarball.
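The "untar and go" install described above boils down to very little. A toy sketch (the stage contents here are fake; in practice you'd populate the directory with a real package manager, which needs root and network):

```shell
# Toy demonstration of install-as-unpacking: build a (fake) root
# tarball, then "install" it by extracting into a target directory.
# Real-world equivalents (paths are placeholders):
#   debootstrap stable "$target" http://deb.debian.org/debian
#   pacstrap "$target" base linux
#   dnf --installroot="$target" --releasever=40 install -y @core
mkdir -p stage/etc stage/usr/bin
echo 'demo' > stage/etc/hostname
tar -C stage -cf rootfs.tar .

mkdir -p target
tar -C target -xf rootfs.tar
cat target/etc/hostname
```

Then you chroot in, set a root password, and install a bootloader - or skip the bootloader entirely if the target is a container or VM image.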
It doesn't seem like the author is aware of the irony of talking about minimizing bloat while advocating for running things in virtual machines instead of natively on a single OS. I know there's still a point to minimizing bloat in the business/institutional environments where VMs *do* make sense, but it's still funny in most use cases.
I'm reading this as using the VM for software compatibility (Linux software on Plan 9). You absolutely can do that without a VM (see: WSL1, FreeBSD/NetBSD/illumos Linux ABI compat layers, WINE, darling, vx9, ...) but it's hard - there's a reason MS eventually created WSL2. Using a VM doesn't have very much overhead these days and gives you ~perfect compatibility with minimal effort.
> There is an urgent need for smaller, simpler software. When something is too big to understand, then you can't take it apart and make something smaller out of it.
And the first step is stopping the practice of adding 6 extra layers of container/vm/etc abstraction on top of everything making it infeasible to debug or understand.
Yeah nothing like having databases running on the print server. Gotta avoid that bloat. I smash all my applications into one server, it's so much leaner.
Seriously though, I have no idea what your point is.
I wonder how much emulator it takes to run a Linux that's made to run in a VM.
I might be more enticed to build a hobby OS if I knew I could do a port of qemu's core (It's gotta be just SDL, file I/O, and keyboard/mouse plus the other 90% of the owl, right) to host Linux apps somehow.
It's not that hard to emulate the basic Linux system calls. If you have the user mode compiler and libc running, then you can get simple console applications up and running in a day or two.
Having looked at qemu a bit, it feels like a bigger undertaking to understand the qemu internals enough to start modifying it.
But if you can run qemu as-is, the core kernel is not that big if you have a very limited set of HW support and don't implement the more complicated system calls.
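As a toy illustration of that first step - not qemu's actual code - a user-mode syscall emulator is at heart a dispatch table from the guest's Linux syscall numbers to host operations. A minimal x86-64-flavoured sketch (syscall numbers are the real x86-64 ABI values; everything else is hypothetical):

```c
#include <stdint.h>
#include <unistd.h>

/* Toy sketch of Linux syscall emulation: map the guest's syscall
 * numbers (x86-64 ABI) onto host operations. Real emulators such as
 * qemu-user do this per-architecture, with argument translation. */

#define LINUX_SYS_write   1
#define LINUX_SYS_getpid 39

int64_t emulate_syscall(int64_t nr, int64_t a0, int64_t a1, int64_t a2)
{
    switch (nr) {
    case LINUX_SYS_write:
        /* Pass straight through to the host's write(2). */
        return write((int)a0, (const void *)(intptr_t)a1, (size_t)a2);
    case LINUX_SYS_getpid:
        return (int64_t)getpid();
    default:
        return -38; /* -ENOSYS: not implemented yet */
    }
}
```

Fill in open/read/close/mmap/exit and you can already run small static console binaries; the long tail (futexes, signals, clone) is where the real effort goes.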
But you know what, in fairness, it's closer to "what the author thinks should be implemented".
I reckon I might be able to have a pretty good stab at building a Linux-for-VMs like this, but it's nothing to do with my actual day job and it's not the sort of thing I do for fun. I'm unlikely to try unless someone paid me to.
(My actual "vanishingly little time outside of $DAYJOB" project at the moment concerns DOS and running DOS on modern-ish hardware. It's nothing to do with Linux at all.)
I'm much more interested in trying to turn 9front into something that can run modern Linux apps, transparently, without making 9front much bigger or more complicated than it is and without the considerable maintenance overhead of emulating the moving target that is Linux.
But that is something I am not even thinking about, for two reasons:
[1] I definitely can't do it.
[2] I am more interested in non-Unix-like OSes such as Oberon or Smalltalk.
The key enabler of market share is running unmodified software. Few can be bothered to recompile their software for a unikernel, or to switch to a unikernel-specific runtime provided by a third party (compatibility concerns, CVE reaction time, etc.).
I suspect large cloud providers may be running their custom internal stuff using unikernels and reaping resource economy and security benefits, without us knowing much about that.
People bothered to containerize their software because the benefits offset the work. Unikernels shouldn't be much harder but they haven't successfully made that argument.
The management interfaces for VMs suck compared to all the advances made for containers, so unikernels have to be quite compelling for people to suffer the worse UX to adopt them.
Lots of interesting ideas, but I'd like to know how much faster a purpose-built Linux-for-VMs would be in practice, and how much less RAM it would use. I'm guessing not as much as one would initially think.
Because this article is part #4 of a series. It is not intended to run on a Linux host machine. The real reason for doing it is to confer Linux binary compatibility to a non-Linux OS.
https://github.com/vmware/photon
https://github.com/microsoft/cbl-mariner