NixOS OCI containers are a powerful way to run apps that are not packaged for Nix or NixOS. And because they’re ultimately systemd units, you can customize virtually everything without having to fiddle with the container runtime.
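For example, since each container ends up as a `<backend>-<name>.service` unit, you can layer ordinary systemd options on top of it. A minimal sketch, assuming a container named `myapp` declared elsewhere (the name and overrides are illustrative):

```nix
{ lib, ... }:
{
  # assumes virtualisation.oci-containers.containers.myapp is defined
  # elsewhere; the generated unit is named "docker-myapp" (or
  # "podman-myapp" if you use the podman backend)
  systemd.services.docker-myapp = {
    after = [ "network-online.target" ];
    wants = [ "network-online.target" ];
    serviceConfig.Restart = lib.mkForce "always";
  };
}
```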
I've used docker-compose, k8s, and NixOS myself, and I come from a similar technical background as the author, but I find myself disagreeing with some of the author's opinions on the technologies. They're not wrong, of course, but I've had different experiences.
k8s: Installing and using k8s can indeed be a nightmare. In my job, we use Azure, so it's not so bad since launching a cluster is mostly handled by Azure. Setting it up for personal use is less fun. The mountains of YAML you can end up using to deploy even semi-complex services are even less fun. That being said, I've been wanting to use it for a personal project (a distributed cluster using cloud VPSs and bare metal at home, connected using WireGuard). I just wish it was smaller and faster. Most guides recommend 2 GB of RAM and 2 CPUs for the smallest of small deployments.
docker-compose: I actually love docker-compose for my personal stuff. I have an Intel NUC hosting homeassistant, pihole, caddy, deluge, jellyfin, and a handful of other stuff. Everything lives in a series of folders, one per service. Backups (both data and code via git), disaster recovery, and just general reasoning about it are so easy. The docker-compose files are small and easy to read. I also find docker-compose to be about as immutable as you'd like it - version control your docker-compose directories, pin your image SHAs, and you're in a good place. Or don't, and it will still work pretty well.
NixOS: I've done it. I installed it on my Framework Laptop since it was all the rage at the time. I lived with it for about a year, and it was okay - for day-to-day use - AFTER I had spent weeks learning how to use NixOS. I will freely admit it's an awesome technology in some respects. But the documentation just was not there. It was way too hard to learn how to do even basic tasks. I thought nix flakes might be the "aha" moment I was looking for, but I gave up trying to get that to work after a couple of days of troubleshooting. Don't even get me started on trying to package up something from scratch. As a random example, I googled "packaging python for nix" and the top result [1] is just way too complex for something that should be pretty simple. The example includes some abomination of a .nix file with inline bash and python scripts.
I don't really know where I'm going with this. I really do like the idea of NixOS. I just wish it was much, much easier to reason about. Curious to hear what others make of this.
Nix has a dedicated following among some of the DevOps members in my company. Luckily, it stays mostly segregated, but I had an interesting conversation with some of them the other day about how awful troubleshooting Nix is. Now, mind you, these are pretty OG Nix contributors, and the response was basically: you have to want it enough.
I had my own falling out with Nix/NixOS over a year ago. I guess I didn't want it enough shrug.
For clarity, I still use home-manager because it's fairly painless, but anything above that is pretty much a no-go for me.
That price doesn't need to be paid, though; it's just the result of poor design and poor documentation. It's the only OS where I feel like I'm both 20 years in the future and 20 years in the past.
I think it's also an issue with the flexibility paradox. There isn't a good single way to package Python for Nix because before you even start there are several key questions which come up that most other distros can't even begin reasoning about:
- Are we packaging just one Python package for Nix, or a thing and all its dependencies?
- Is the package already on PyPI, and are we packaging the source from there, or is it the source from some upstream?
- Are any of the dependencies coming from source, either their upstreams or forks?
- For dependencies that already exist in the nixpkgs SHA that is supplying the Python interpreter, do we want to use those versions, or package newer versions? Is it the same decision for everything, or does it vary?
- Is there already a Python level dependency locking scheme in place such as poetry/pipenv/uv, or is it a plain setuptools package? Is it acceptable for Nix to become the locking mechanism, or will that mess things up for non-Nix consumers of this?
- Are we looking to supply a development environment here too, or is this purely about deployment?
To be clear, none of this is an excuse— it's horrifying that there can't be a single "Tool X is the singular Python-on-Nix story, follow the one page tutorial and you're all set". But I think the massive amount of choice and flexibility is the crux of why new methods and tools are still being rapidly invented and proposed.
For myself, I would choose poetry2nix as how I'd ship a Python project to Nix hosts, but that immediately implies some answers to a bunch of the above questions, mandates poetry for your top level project, and once you look closer there turn out to be some truly horrifying implementation details [1] that are what make poetry2nix appear as seamless and friendly as it does.
I can't think of a single external packaging system that actually does Python non-painfully. Then again I can't think of an internal Python packaging system (there are like 5 now) that does Python non-painfully. But since one of the selling points of Nix is "it's like virtualenv but for everything", it's a little disappointing that it doesn't mesh better.
I'm not a Python developer, though I use applications that require Python libraries. I don't want a separate numpy for every application I use tucked away in my home folder somewhere; I want a system numpy provided by my distro that can't get out of sync because there's only one copy of it anywhere.
I think the Debian model is okay, it just falls down in terms of being able to have any kind of meaningful cooperation between the debs and what pip does, and of course there's the massive velocity mismatch between what's on PyPI and what's in your distro.... and of course no way to install more than one instance of something so heaven help you if you have two down-tree packages that depend on a different version of a thing.
And this isn't new: it was true for gems with Ruby, it was true for pear with PHP, it was true for CPAN with Perl. I don't do javascript at all, but I assume it's doubly true for npm, just because that place seems to make even PyPI look conservative and glacial. I've mostly come around to the belief that if a piece of software has a package manager, you should just use it instead of your system package manager. But man, do I miss the days of CPAN packages that might update once a year.
Generally I'm in agreement, but the one place that starts to fall apart is with bindings— if there's a native code library, then I really would prefer that library come in from the native package manager rather than be smuggled in by pip... but that's not generally how it works these days, and outside of Nix there isn't a great way to resolve this case.
At the same time, Nix implements Python packaging particularly badly[1] when it doesn't have to. And that's on top of the already insane state of Python packaging. And poetry2nix becoming deprecated really bit me recently :/
Similar issues (minus the insane packaging) exist with Common Lisp packages, where there's an annoying focus on compiling libraries into binaries, which gives absolutely no benefit (it would have been fine to have a bit of patching to provide things like correct ASDF and CFFI search paths, but no...).
I will say, there is one element of Nix's core design that makes almost all of these integrations considerably worse than they have to be, and that's the whole IFD fiasco— if it were possible to ingest an ecosystem-specific lockfile, run a build on it, and evaluate that result, then a lot of the drama and workarounds would melt away. This is sort of supported today, but it's basically banned from nixpkgs because of how the current implementation pauses evaluation any time it needs to run a build that evaluation is dependent upon... and it can only run one such blocking build at a time.
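To illustrate the blocking behavior, here's a minimal IFD sketch (the derivation name is illustrative); the `import` cannot proceed until the `runCommand` build has produced the file being imported, so evaluation pauses mid-flight:

```nix
let
  pkgs = import <nixpkgs> { };

  # a build step that generates a Nix expression as its output
  generated = pkgs.runCommand "generated-expr" { } ''
    echo '{ answer = 6 * 7; }' > $out
  '';
in
# "import from derivation": evaluation blocks here until the build finishes
(import generated).answer
```

This pattern is exactly what nixpkgs forbids, because every such `import` serializes evaluation against a build.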
The new rust implementation (tvix) is addressing this as one of its core design goals, allowing evaluation and builds to run in parallel. But even once it's complete and usable, it's unclear what the relationship will be between tvix and Eelco's Nix.
Another issue is that Nix as a language is yet another case of "functional programming sounds fun, let's do a bare-minimum pure language and forget about all the niceties that proper ones figured out".
I think my argument is that most of those issues are relatively shallow— they're problems that could be addressed with tweaks to the syntax or additions to the standard library.
The IFD issue is fundamental. It's why poetry2nix has to be implemented entirely in Nix code in order to run at evaluation time without blocking, whereas in a world where eval and build could properly interleave, this would never be done— the core logic of poetry2nix would be implemented in something sane.
there is a very real sense in which nix has been riding on a good idea while ergonomics elsewhere in the field have advanced, yes. make no mistake, the technical work (getting programs to behave themselves in a fairly alien environment, etc.) has been impressive, but it's like, in the marathon to get there, the idea of making that process somewhat nicer has been subject to constant procrastination. which sucks, since in some sense the lay packager is working with the exact same tools as the people bringing the whole system together. not just nixpkgs dx has fallen down the list of priorities; technical debt in nix itself has accrued as well. the tvix effort emerged from dissatisfaction with instability (the same dissatisfaction that kept the actual version of nix used in a typical nixos install well behind master) well before any administrative/sponsorship struggle, and i still maintain that an effort more conservative in scope, such as lix, would have emerged even politics aside. again, what the end user sees switching to that is mostly "oh hey, this random thing that repeated segfaults conditioned me out of attempting just... works now lol?" or various ux papercuts just ceasing to be. nix-at-large has a ways to go, but i am optimistic
Thanks for the perspective. I mainly use NixOS to run my servers, not personal machines. I can see why it can be a frustrating experience on a machine that you just want to run personal stuff on.
In my case of creating server machines (cattle), the configurations are pretty light, and what's important is the reproducibility aspect. If I need to take one down and rebuild another, it takes about 10 minutes. All the upfront work of configuring Nix for that one machine has paid off.
I'm a big NixOS fan myself, and I appreciate your post. I don't do any of the socials listed on your site, and I noticed the word 'serice' if you want some backseat editing. Apologies if an HN comment is the wrong place for this feedback.
> As a random example, I googled "packaging python for nix" and the top result [1] is just way too complex for something that should be pretty simple.
I'm aware that this is more of a critique of the documentation situation than of the Python packaging situation. However, there is poetry2nix[1], which makes packaging look something like this:
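A hedged sketch based on the poetry2nix README, assuming a project with a `pyproject.toml` and `poetry.lock` checked in, and the poetry2nix overlay/flake available as `pkgs.poetry2nix`:

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.poetry2nix.mkPoetryApplication {
  # poetry2nix reads pyproject.toml and poetry.lock from here and
  # generates a Nix derivation per locked dependency
  projectDir = ./.;
}
```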
poetry2nix has actually been deprecated[1], and is in my experience subtly broken, as is the entire python packaging mechanism in Nix[2].
So we're now getting yet another attempt, this time called pyproject-nix[3].
I'm now considering taking a similar stab at Common Lisp packaging, because the amount of time I've lost fighting Nix while trying to run a development environment is making me reconsider using Nix at all.
I think it's great to document this, and some people are going to prefer working with containers no matter what. That said, personally I've moved away from it and these days I just use nixos modules and run all of the services on my home server directly on the host. You don't get the same isolation that you might get with proper containers, and that might be an issue for production machines, but I find the simplicity is a win for a home server.
If you're after simplicity, it's hard to beat docker compose for self-hosting stuff. I run NixOS on my laptop and routinely run into things that aren't yet packaged for NixOS, but I've yet to come across a project where I had to write my own Dockerfile.
Personally I’ve always found working with docker to be pretty frustrating, especially dealing with docker compose. Most of what I run on my server is at least in nixpkgs already if not already a NixOS module, and I just find it less frustrating to write nix than to deal with Docker.
That said, I know nix pretty well at this point and I would probably have a different opinion if I hadn’t spent so much time learning nix.
> That said, I know nix pretty well at this point and I would probably have a different opinion if I hadn’t spent so much time learning nix.
Yeah, I'm in exactly the opposite boat—I've been using docker professionally and at home for 5+ years now and know it very well, while nix is still very new to me!
There's probably no way to objectively tell which one we'd have preferred if we started in the opposite order.
Minor nit: configuration is not the hard part. The hard part is getting "/root/registry-password.txt" onto the NixOS machine in the first place. I mean, you could just scp it I guess, but why spend hours tuning a NixOS config that requires you to manually do stuff in the end?
I'm aware of all of the NixOS "secret management" methods out there but I found none of them satisfying back when I was still using NixOS.
If you're on e.g. AWS or GCP, you can pull them from the cloud's IAM service. If you're on kubernetes, you can use k8s secrets. If you have e.g. vault you can use that.
It's really only deploying on unmanaged servers where this comes around, but it's also somewhat of a hard problem. Like you don't (or shouldn't) bake secrets into disk/VM/container images, so once you're no longer building on some managed layer then you do have to figure out bootstrapping yourself.
These are a little chicken-and-egg, as you need the system's host key for that. If you want to use a signed host key, you need to deploy that; otherwise, if you just let it generate a host key, you're in TOFU territory.
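For the unmanaged-server case, the common pattern is to encrypt secrets against the host's SSH host key and decrypt them at activation time. A hedged sketch in the style of agenix (assumes the agenix module is imported; the secret name, registry, and `myapp` container are illustrative):

```nix
{ config, ... }:
{
  # decrypted at activation time using the host's SSH host key
  age.secrets.registry-password.file = ./secrets/registry-password.age;

  virtualisation.oci-containers.containers.myapp = {
    image = "registry.example.com/myapp:1.0";
    login = {
      registry = "https://registry.example.com";
      username = "deploy";
      # path to the decrypted secret, e.g. /run/agenix/registry-password
      passwordFile = config.age.secrets.registry-password.path;
    };
  };
}
```

This still leaves the bootstrapping step: the host key has to exist before the first deploy, which is exactly the chicken-and-egg above.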
It’s finally happened, people are typo’ing closure as clojure, instead of the other way around!
For the topic at hand isn’t this always a problem with deployment systems? You need to have the secret somewhere after all. In my case I only ever use nix for personal systems, so feel totally justified just storing my ssh key as a secret in yadm.
Yeah, I agree it's manual, but it takes about 5 minutes to scp the password onto the machine.
I have some playbooks I set up for creating a new machine. All in all, it takes about 10 minutes to get it up and running. Maybe not instant, but at the moment I don't need anything else.
In this context, I think the prime advantage would be that instead of:
- Managing/setting up Ubuntu/$distro for the host
- Installing Docker compose on host
- Writing a docker-compose.yaml file to declare your container architecture
- (potentially) Writing a systemd service to bring docker-compose up with the host boot
You just:
- Manage/set up NixOS
- Add container architecture definition to nixos config
The containers, being systemd units, would have all the normal systemd log management like the rest of your system, instead of having to dig through docker-compose logs with a different mechanism than "normal" systemd service logs.
You'd also get all the normal benefits of a NixOS system: the config file can be placed on a new system to completely reproduce the system state (modulo databases et al.), rollbacks are trivial, etc.
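Concretely, the second list collapses to something like this in `configuration.nix` (a hedged sketch; option names are from the NixOS manual, and the `homeassistant` container is illustrative):

```nix
{
  virtualisation.oci-containers = {
    backend = "docker";  # "podman" is the default backend
    containers.homeassistant = {
      image = "ghcr.io/home-assistant/home-assistant:stable";
      volumes = [ "/var/lib/hass:/config" ];
      extraOptions = [ "--network=host" ];
    };
  };
}
```

Logs then flow through journald like any other unit, e.g. `journalctl -u docker-homeassistant`.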
Nix is a language, package manager, and OS. This post discusses NixOS.
While docker-compose allows you to compose your containers with yaml/Dockerfiles, NixOS allows you to compose the system that all of your containers run on (from userspace down to kernel selection/configs, file system, etc.), as well as your containers - all in a declarative .nix file. That .nix file can be used to spin up any number of identically configured systems.
It's also reproducible, in that you can specify the sources (refined to a specific commit if you prefer) for any and all packages on the system - and build them with Nix within a sandboxed environment protecting dependencies and env configurations (Nix is also a powerful build system).
1. There are many services that are already "implemented" in NixOS, with sane default configurations that are easy to customize (because the contributors have designed good abstractions, and because of the flexibility of the Nix language). One good example is `nginx`. Btw, `paperless-ngx` and `jellyfin` are also already implemented. In this case you do not need to use Docker at all.
2. Because of the good abstraction in the service implementation, I usually do not need to go very deep to understand the common configurable options for each of the services.
3. All those services become systemd services once up. As long as you are familiar with how to manage systemd services at runtime, you know how to work with them.
4. Even for the ones that do not exist in NixOS, as the author suggests, you can still start them as Docker-based systemd services, with very simple and intuitive Nix configurations.
5. NixOS configurations are mostly deterministic and modular. I can use git to manage all the configurations for different servers. There are occasions where I need to migrate services to a different machine (e.g. upgrade, replication, ...). With the NixOS configuration of those services, I can simply re-use the configuration code and have very high confidence that they will work as expected on the new machine.
6. The above also makes it very easy to revert my deployment to any previous successful version. Without having to worry about breaking anything, it also gives me the confidence to quickly try out different ideas.
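As a sketch of point 1, enabling a couple of those pre-packaged services looks something like this (option names from the NixOS manual; the domain and reverse-proxy wiring are illustrative):

```nix
{
  services.jellyfin.enable = true;
  services.paperless.enable = true;

  # front them with the nginx module
  services.nginx = {
    enable = true;
    virtualHosts."media.example.com".locations."/".proxyPass =
      "http://127.0.0.1:8096";  # Jellyfin's default HTTP port
  };
}
```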
The contributors to nixpkgs, for the most part; the whole thing is on GitHub, and it's one of the largest (if not the largest) Linux package repos of any distro. You can override defaults easily. Security updates are handled by updating your nix channel (or your flake) and rebuilding the system, assuming the maintainer has released a more recent version; if they haven't, you can write an overlay to bump the version or add your own patches to the build (the 'derivation'). Rollbacks are baked in until you remove them from the Nix store. You can configure all the things!
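An overlay for that bump-the-version case looks roughly like this (a hedged sketch; `somepkg`, the owner/repo, the version, and the patch file are placeholders):

```nix
final: prev: {
  somepkg = prev.somepkg.overrideAttrs (old: rec {
    version = "1.2.3";
    src = prev.fetchFromGitHub {
      owner = "example";
      repo = "somepkg";
      rev = "v${version}";
      # lib.fakeHash makes the first build fail and print the real hash
      hash = prev.lib.fakeHash;
    };
    # extra patches on top of upstream
    patches = (old.patches or [ ]) ++ [ ./fix-build.patch ];
  });
}
```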
I run several services on my home server. For those that are well packaged for Nix, I just use those. For ones that are poorly packaged or not available, I use oci-containers. They all run as systemd services and operate the same way, so it's a consistent interface.
Nix is more about package management (and an OS), while Docker is more container-focused.
But because the individual software on nix is so well separated / encapsulated it carries similar benefits to containers. So they’re different but with overlap
I suppose this is fine for a local machine setup, but I would rather set up a handful of VMs in a k8s cluster.
Currently, I run my own k8s cluster with 20 worker nodes (basically just VMs on a few computers). I'm able to not only containerize my workloads but also evacuate workloads to different workers when I need to take down a server for maintenance (OS updates, moving, kubectl upgrades).
I had actually planned to set up another remote cluster in my parents' home (800 miles away), but ended up 86'ing that because their residential internet is the absolute worst.
Currently the cluster is only accessible from behind a VPN or on the local network. I haven't set up proper authN/authZ controls yet.
To each their own. When k8s is managed, it's awesome.
I would like to do without the headache of dealing with a k8s installation or some orchestrator layer. I can "schedule" my own application instances at the size I'm working with.
Also, I run this setup on cloud VMs (multiple, actually), so it's not restricted to a single machine running in a closet.
Mostly for easier segregation of workloads. Some of the IoT shit that runs on this cluster I segment off through k8s and network policies. Also, most workloads wouldn't need all the cores or memory on one machine (one machine has 128G).
Mostly trying to get rid of Longhorn. I've found it to be a continuous source of trouble with etcd sync and IO issues. The current iteration of the cluster is all Optane, so I might give Longhorn another go, but I still want to move storage off the cluster.
The source repo especially needs to live on some sort of striped ZFS array. I ordered one of those quad-NVMe NAS things [0], so that's probably going to be the storage. Either Nix or Proxmox... not decided.
I'm also still a bit fuzzy on what the best game plan for PVs is. Minio/S3 or NFS appear to be options.
DB... just the usual suspects: mainly Postgres for Gitea, I think. Mongo for dev stuff. I'm used to vanilla Proxmox/Docker/LXC, so this is all uncharted territory for me.
I could never get my head around Longhorn or any of the other "lightweight" k8s distros. I guess that's why I spent the time setting up NixOS and writing this post.
Okay, nice - sounds like various storage services for a home-lab setup. Seems like a cool project, especially if you can distribute it across all those NASes.
If you want to take this a step further and migrate or run a Compose project on NixOS, I maintain a tool that makes this pretty easy to do :)
https://github.com/aksiksi/compose2nix