The 'bootable container' / 'native container' space is getting really exciting, even (and especially) for desktop use cases. Atomic Fedora has had support for so-called OSTree Native Containers for a while now, and it will eventually adopt `bootc` as the base layer for building and booting containers (though as of now that isn't totally ready yet). VanillaOS is also working on similar things, but I don't think it'll use `bootc`.
Some awesome community projects have also been born out of this space:
- https://universal-blue.org/ provides some neat Fedora images, which have one of the best Nvidia driver experiences on Linux IME, and are overall solid and dependable
- https://blue-build.org/ makes it pretty easy to build images like Universal Blue's for personal use
The best part here is really the composability; you can take a solid atomic Linux base, add whatever you like, and ship it over the air to client computers through container registries. All the clients see is a diff each day, which they pull and 'apply' so it takes effect on the next boot.
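For a concrete sense of what that looks like, here's a minimal sketch assuming a Universal Blue-style rpm-ostree base; the image names and packages are placeholders, not part of anything above:

```
# Containerfile: start from an atomic base and layer on whatever you like
# (base image and packages are placeholders)
cat > Containerfile <<'EOF'
FROM ghcr.io/ublue-os/base-main:latest
RUN rpm-ostree install htop tmux && \
    ostree container commit
EOF

# build the customized image and push it to a registry
podman build -t ghcr.io/example/my-desktop:latest .
podman push ghcr.io/example/my-desktop:latest

# a client points itself at the image once; from then on each update is
# just a pulled diff that takes effect on the next boot
rpm-ostree rebase ostree-unverified-registry:ghcr.io/example/my-desktop:latest
```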
It depends on what you're trying to do, but I was essentially following this guide: https://rancher.github.io/elemental-toolkit/docs/examples/em... updated to ghcr.io/rancher/elemental-toolkit/elemental-cli:v1.3.0 / registry.suse.com/suse/sle-micro-rancher/5.4
The whole project is in major flux right now though, with v1.3 -> v2.1 being pre-release and the docs not yet updated, so I'm waiting for the dust to settle before picking it back up. But basically `docker build` -> `elemental build-disk` -> qcow2/iso -> deploy / `elemental upgrade` to update via an OCI registry, or deploy a vanilla image and then just update that via the registry.
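Roughly, the flow looked like this (a sketch from memory against v1.3; I've left out the arguments where I'm not confident they still match the current docs):

```
# build a derived OS image from the SLE Micro base and push it to a registry
docker build -t registry.example.com/my-os:v1 .
docker push registry.example.com/my-os:v1

# turn the container image into a bootable disk image (qcow2/iso/raw);
# exact flags depend on the elemental-toolkit version, see the linked guide
elemental build-disk ...

# on a deployed system, pull the newer image from the OCI registry and
# upgrade in place; the new image is used after the next boot
elemental upgrade ...
```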
One of my biggest complaints with distros has been the lack of documentation on how to actually build the distro itself: not just an ISO, but building all the packages from source as well, as if you were following an LFS book. I have seen VERY few distros that provide this.
GNU Guix does an awesome job with this. The documentation is one of the main reasons I left NixOS, and some time later I landed on Guix. I have stuck with the latter for a few years now.
Out of personal interest, do you also make use of the non-free parts of Guix? And if so, how well do they work, and how well are they documented compared to the "core" part? The orthodoxy of free software purity is nice, but I unfortunately also need to get CUDA working.
I tried Nix a few times, but the documentation was so lacking and/or outdated that I couldn't figure out how to get the setup I needed working, and I had to drop the idea as I couldn't justify the time investment that would be needed to prod around in the dark until everything worked.
Can't speak to CUDA as all of my systems run AMD or Intel at this point, but I use the nonguix channel for the mainline Linux kernel. I even built a custom ISO using the mainline kernel, since my server's NIC requires non-free blobs. The process for doing that was surprisingly easy.
Between the nonguix README and other resources like System Crafters, you're in pretty good shape as far as documentation for the non-free parts goes, too.
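For reference, the channel setup itself is tiny; a minimal sketch following the nonguix README (the README also covers channel authentication, which I'm skipping here):

```
# write ~/.config/guix/channels.scm (done from the shell for brevity)
cat > ~/.config/guix/channels.scm <<'EOF'
(cons* (channel
         (name 'nonguix)
         (url "https://gitlab.com/nonguix/nonguix"))
       %default-channels)
EOF

# fetch the channel, then reconfigure with a system config that uses the
# mainline kernel and linux-firmware from nonguix
guix pull
```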
Edit: less related but I still wanted to mention:
Guix also makes extensive use of the info[0][1] system for documentation. There is essentially a textbook's worth of information locally on the machine, which is generally what I use instead of turning to the web.
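If you haven't used it, everything is available offline right from the shell (or inside Emacs):

```
# browse the full Guix manual locally, no network needed
info guix
# or jump straight to a node, e.g. the section on defining packages
info guix "Defining Packages"
```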
Universal Blue’s build system (not Blue-Build) is pretty clear and self-documenting. I maintained a personal fork of Bluefin for a while, and it was easy to understand!
Honestly? Immersing myself headfirst into Nix/NixOS, which has been fun and worthwhile... but I’m not convinced it’s really “better,” at least for my needs.
And the fact that it’s ever so slightly easier to build and deploy a NixOS VM from scratch on a Proxmox VE server than to build and deploy a CoreOS VM using Ignition (also on Proxmox).
But it’s probably worth it for me to switch back, at least for now. It takes maybe 10-15 minutes to build a bootable Bluefin fork native container image with GitHub Actions, but a relatively basic NixOS + Plasma 6 image took closer to 60 minutes and came out to over 8 GB compressed…
Interesting, I _just_ went from NixOS to Bluefin. I took home-manager with me, though, which gives me just enough Nix without the NixOS headaches (mainly around processes daring to bring "foreign" binaries into the system). My honeymoon period with Nix was pretty much over after about six months. I stuck with NixOS for about 18 months only out of laziness, not wanting to set something else up. I really like this new stack, though. Time will tell if it's actually better.
Did I guess right that it basically processes a Containerfile and, instead of producing a .tar artifact (which is what container images usually are), it produces a .qcow2/.ami/.raw/.iso/.vmdk file, which in the case of .qcow2/.raw/.vmdk can be used by virtualization software to start up a VM with a disk mounted from that file?
Will changes made inside a session with such a VM persist, or will they get lost (which is the default behavior with containers)?
A container's filesystem may be as minimal as a single binary file; surely a VM with such a filesystem won't be able to boot. Where will it get the OS (the kernel, drivers, and other stuff) from?
1) you create a container image based on an upstream image that supports bootc, using a Containerfile that serves whatever purpose you want.
2) you push that container image to some registry
3) you use the bootc image builder container to create a qcow2 file from the image you have built (or you install the image on a bare-metal system)
4) you boot up the virtual machine or bare-metal system, which now includes the "bootc" utilities too
5) from this point on you can update the container image you created in step 1), and the booted virtual machine or bare-metal system automatically rolls forward to the latest image you have released (or rolls back, if your updated image breaks stuff) using the included bootc utility (a rough sketch of the whole flow is below)
Currently the base images that support this seem to be limited to centos:stream9 or rhel9.
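A rough sketch of those five steps, assuming the centos-bootc base image and the bootc-image-builder container; image names and flags are illustrative and may not match the current docs exactly:

```
# 1) derive an image from a bootc-enabled base
cat > Containerfile <<'EOF'
FROM quay.io/centos-bootc/centos-bootc:stream9
RUN dnf -y install nginx && dnf clean all
EOF
podman build -t ghcr.io/example/my-server:latest .

# 2) push it to a registry
podman push ghcr.io/example/my-server:latest

# 3) turn the image into a qcow2 disk with bootc-image-builder
podman run --rm -it --privileged -v ./output:/output \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type qcow2 ghcr.io/example/my-server:latest

# 4) boot the qcow2 in a VM, then 5) roll forward or back from inside it
bootc upgrade     # pull and stage the latest image you pushed
bootc rollback    # revert to the previous deployment if the update breaks
```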
So only CentOS? Would it be possible to run that with Firecracker? If that is the case, then wouldn't it be better to just run a Dockerfile/Containerfile in a Firecracker VM? That would give more isolation, and easier scripting and networking.
Something like this is what I wanted when briefly experimenting with Fedora CoreOS and having to build layered images for ZFS support. I was new to CoreOS and was stuck right after I finished building an OCI image. Eventually I learned that the only way forward was to boot with a base image, then layer what I had built and run `rpm-ostree commit`.
I wonder if this project would've served my use case. The OCI images you build when layering FCOS images all build atop the base FCOS image. So I would expect them to be "bootable" in some sense.
On a tangential note, does anyone here remember Erlang on Xen [0]? It's a project from a decade ago, allowing you to package your code to run directly on the hypervisor without an OS. I really liked that approach and am wondering why it seems to have hit a dead end.
Keeping an eye on this. I've been wanting something like this to manage an air-gapped system. I don't want to worry about keeping an offline apt repository (or what have you) synced; I just want to boot a full new image and mount my home folder.
I haven’t set it up myself yet, but at least in theory all you need to do is build and push (and sign) images to a self-hosted container registry, and then have your air-gapped systems update from that machine.
I have used GitHub Actions and GitHub Container Registry the way Bluefin uses them to build and push images there. You might even be able to just mirror them from there if you're willing to punch a hole in your air gap.
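If you go the mirroring route, skopeo can do the copy; a sketch with placeholder registry names:

```
# copy a published image from GHCR into a registry that the air-gapped
# machines can reach (names are placeholders)
skopeo copy \
  docker://ghcr.io/ublue-os/bluefin:latest \
  docker://registry.airgap.example:5000/bluefin:latest
```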
Somewhat related: a project of mine (https://github.com/queer/peckish) allows converting Docker images to ext4 images, among other formats. A way to turn one straight into a bootable image is very cool though, I’ll have to give this a try later!
I build Ubuntu OVAs offline using debootstrap -> systemd-nspawn. All you really need to do is install the kernel and initramfs packages, then mount the filesystem and install GRUB to it.
Key differences: I like the chrootyness instead of having the complete filesystem available (similar to what orbstack does). I'll have a look this weekend.
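A very rough sketch of the debootstrap flow mentioned above; partitioning, loop devices, and bind mounts are all glossed over, and the suite/mirror and names like `/dev/sdX` are placeholders:

```
# bootstrap a minimal Ubuntu rootfs onto the target filesystem
debootstrap noble /mnt/rootfs http://archive.ubuntu.com/ubuntu

# install the kernel, initramfs, and bootloader packages inside the rootfs
systemd-nspawn -D /mnt/rootfs \
    bash -c 'apt-get update && apt-get install -y linux-image-generic grub-pc'

# with /dev, /proc, and /sys bind-mounted into /mnt/rootfs, install GRUB
chroot /mnt/rootfs grub-install /dev/sdX
chroot /mnt/rootfs update-grub
```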
Really interesting. I'm guessing this would be used when you want a container experience with VM level security. Would hopefully make it easier to create bespoke VM images to do fun stuff.
This turns a Linux container into a bootable Linux OS.
If your container starts a GUI environment and launches a browser pointed at your React app, I see no reason to think that wouldn't work after making it bootable.
It sounds like you're expecting it to be some kind of OS of its own though, that it would automatically drop straight into code you give it somehow. That's not what it is.
If you're looking to build a bootable kiosk sort of thing, the old-school way was just a service file that ran `xinit -- chromium --kiosk http://localhost/whatever`. (No idea how to express that in Wayland.) It's not locked down as much as you'd hope and there are a lot of details, but starting with that and letting the error messages guide your way is a workable approach...
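A hypothetical sketch of that old-school approach (unit name, user, and the exact xinit invocation are illustrative; autologin, X permissions, and actual lockdown are all glossed over):

```
# /etc/systemd/system/kiosk.service, written from the shell for brevity
cat > /etc/systemd/system/kiosk.service <<'EOF'
[Unit]
Description=Chromium kiosk
After=network-online.target

[Service]
User=kiosk
ExecStart=/usr/bin/xinit /usr/bin/chromium --kiosk http://localhost/whatever -- :0 vt1
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl enable --now kiosk.service
```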