What Is Nix? (shopify.com)
598 points by elsewhen on May 20, 2020 | hide | past | favorite | 333 comments



This article is a good explanation of how nix works at a high level, and I'm excited to see nix getting some really prominent support, but for some reason it never tells you what the point of all of this is, so I think many folks might feel turned off by it. In other words, I don't believe it ever compellingly answers the question that constitutes its title. The word "package" doesn't even appear until near the end of the article! This is probably a second or third article to read about nix rather than the very first thing that exposes you to it.

The point of nix is just to create completely reproducible builds and package management, including support for multiple versions of packages side-by-side with no issues. It's sort of a next-generation package management system that tries to avoid most of the pitfalls that OS package managers have fumbled with up to this point. It's really that simple.

"nix" as a term refers to a system of multiple components that work together to achieve that goal. There's the package manager itself (called Nix), the language that build instructions are written in (also called Nix), an existing ecosystem of predefined packages (called nixpkgs), and an optional Linux distro that uses Nix the package manager as its package manager (called NixOS).


The PhD thesis that goes with Nix is a really great intro to it. It's very accessible and well-structured.

It motivates the need for a new take on package management by analogy to memory management in programming languages.

It contrasts how software deployment works today (“Fuck it, I’m sure /usr/bin/local/python is a reference to a python 2.7.12 install with PIL available”) to how we used to write software (“Fuck it, I’m sure 0x7FEA6072 is the address of the array of files; so clearly, assuming a 32-bit address space, the seventh file is at 0x7FEA608A.”)

In both cases, as long as your assumption was correct, things go swimmingly. But it could be easier and rely less on hoping your assumptions are correct, and more on things that are verifiably true. And that's what Nix offers: a way to build software that is insulated against assumptions and "it works on my machine".

Thesis: https://edolstra.github.io/pubs/phd-thesis.pdf


The downside is that the approach works best when packages are aware of the Nix approach. But most packages have not been written with Nix in mind, and some per-package work is needed to adapt them to the approach.


Nixpkgs has already done much of that work for you, thankfully.


It's a true wonder, but much like the AUR (the Arch Linux User Repository), it's great 80% of the time, when projects have reasonable popularity or committed (even 'niche') support.

It's honestly a treat for personal use if you have the mind of a tinkerer, but it's a difficult proposition to sustain in business (you basically end up vendoring yourself if big enough).

Nix has a very different proposition that solves from the "inside" a problem that is currently usually solved from the "outside" using e.g. Ansible in ops/infra, or Vagrant in dev.

In my perspective, the outside/black-box solution is ultimately brute force, plain and simple. It works because it scales well and because copying data is cheap enough (deduplication, delta sync, etc.). But you hit limits when systems grow older and complexity creeps in no matter what; it's an approach that requires a blank slate every now and then.

But the inside package approach is elegant in that it's absolute, it's not conjuring a black box that "should just work" if Ubuntu18.04-387lts-32-1.989-2a and my_fav_package.1.1.1.9.3.5-f work fine together, this time around. Sure, there is testing but we're back to the popularity/support limit.

In the end, in a world where some of the stack is basically nailed down and can be modularized with clear forever-true expectations of I/O (the content, not the hardware), Nix eventually prevails. But we really have to evolve what "LTS" means (like we'd do in construction, electricity, plumbing...), and not treat it as a half-trendy windmill / rat race. We have to think of systems that could transfer almost as-is from now to 2050 or more, not just 2025-2030. It's not impossible; it's what COBOL did, and does as we speak.

I think something like Nix could help shift perception in the right direction, but I expect mindsets to take a good part of the decade to change deeply, if it happens.


Yes that's true. But some things remain difficult, for example installing the latest version of CUDA.

https://discourse.nixos.org/t/cuda-setup-on-nixos/1118/9


Yes, absolutely. Although to be fair, CUDA is a royal pain even on more standard distributions (although way easier there): supporting multiple versions seamlessly is hours and hours of fun, breaks all too easily, and accounts for the great majority of time “wasted”. NVIDIA deserves a lot more flak for this than they receive.


Very true.

Part of the reason is probably that you need the proprietary drivers to use CUDA efficiently and as they are not in the (upstream) kernel you have to use some other tools, like unofficial repositories or the native installer (which doesn't always play well with the OS package manager, system upgrades, etc.). It's a real PITA.


Funny, I was able to install it (for use with pytorch) just a few days ago without a hitch.


Did you use conda? It does a lot of the leg work for you.


No, I tried conda once, but it downloaded gigs and gigs of data, and I have enough trouble trying to keep nix under control.


That's really a flaw with how those packages have been designed rather than a flaw with Nix. We've known that it is a terrible idea to put everything into global directories, edit `PATH` and so on for literally decades.


I gotta use pointers? Ugh, sounds hard, I think I'll just stick to DOS. /s


What I'd really like to see is a realistic, end-to-end tutorial for either 1) deploying a relatively straightforward web application (like Dokuwiki or ZNC), or 2) setting up a basic desktop for day-to-day use. I feel like I've seen a lot of "snippets", I feel like I understand how Nix works and what it's supposed to be good for, but I don't have a coherent sense of the steps involved in actually using it for mundane things.


Nix contributor here. You are completely right, that is missing. Unfortunately the documentation is somewhat fragmented and its structure makes it quite hard to find relevant information, especially to newcomers.

We started to work on making official guides for common Nix tasks: how to get a development environment set up, how to build a Docker image… The focus is on the DevOps side at the moment rather than the desktop user, as we see that as the most valuable use case. This is part of the work of the NixOS marketing team to facilitate adoption of Nix into the mainstream.

Have a look at https://nix.dev/ for the first guides being worked on – pretty barebones so far, but we are aware and working on it.


For usage of "setting up a basic desktop for day-to-day use", to use nix to just mundanely install a list of packages:

What I have is a "myPackages.nix" that I symlink into ~/.config/nixpkgs/overlays https://github.com/rgoulter/dotfiles/blob/22ebe20a820a1adf64...

After installing nixpkgs tool, I can then run "nix-env --install --attr nixpkgs.myPackages" to install that.
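For anyone curious, a minimal sketch of what such an overlay might look like (the package names are illustrative, not the contents of the linked file):

```nix
# ~/.config/nixpkgs/overlays/myPackages.nix (illustrative sketch)
self: super: {
  # buildEnv merges a list of packages into one installable "environment"
  myPackages = super.buildEnv {
    name = "my-packages";
    paths = with self; [
      git
      ripgrep
      htop
    ];
  };
}
```

With that in place, `nix-env --install --attr nixpkgs.myPackages` installs the whole set as one unit, and editing the list and re-running the command updates it atomically.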


Yeah I was trying to do Linux things on my Chromebook and some blog or other recommended Nix as package manager. I was like cool this is like apt-get but trendy I'll try it.

I then proceeded for like an hour to try and figure out how you say apt-get install in Nix. There was all this documentation but none of it said "here's how you install emacs and stop thinking about Nix"


`nix-env -iA nixos.emacs`

But I do agree with you, it's not as straightforward as it should be yet. But I absolutely love how I can be certain that no garbage is accumulating on my computer, like: I needed this program one time, I don't even know what it does, yet I have its complete dependency graph installed, which the package manager can barely uninstall. In nix I just create a `nix-shell -p package` for one-time use, do my work, and then forget about it. At the next `nix-collect-garbage` it will be removed from my computer completely.


It's a bit old now, but not that much has changed. All my commercial projects use essentially this method of deployment.

https://jezenthomas.com/deploying-a-haskell-web-service-with...


Look for nix users' "this is not a dotfiles repo" repo on GitHub.


Care to clarify and/or add a link? I'm not sure what this means. (Edit: I did not downvote you)


https://github.com/ihebchagra/dotfiles I guess sometimes they are called dotfiles haha.

https://www.google.com/search?q=personal+config+github+nix seem to be a good search

Perhaps I misremembered how many of them throw shade on dotfiles repos, or perhaps it's just that Google isn't good at finding such things ("not dotfiles" won't work).


> for some reason it never tells you what the point of all of this is, so I think many folks might feel turned off by it

I’m attempting to write some hands-on pragmatic no-bullshit articles tackling that.

Why? I’ve been hearing a lot about Nix over the years: how it’s an experiment, yet so good it’s actually usable. But every article I found obscures the good parts, either because of too much detail and theory, or because it's just apt->nix-env mappings without the whys.

And the good parts are really good. So good that as soon as I was enlightened I simply dropped my lonesome 13 years ArchMac project on the floor.

But it needs a pragmatic approach to explaining what it is and how it can be of value to you day to day. Everything is actually in the manual, but it lacks some ties with existing knowledge for people to make the jump, unless they’re really really curious enough to piece things together.

That’s what I’m writing.


That sounds great. I remember when I started, it was very difficult; I don't think I would have been able to start without the nix-pills articles.

Nix needs more tutorials, especially in areas that could showcase it (which you're planning to do).

Nix is often referred to as a package manager, but I believe that characterization does it a bit of a disservice, because it can do much more than that. IMO the area where it excels most for me is as a build system. The ability to have a language define the exact environment a developer has is IMO awesome.
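As a concrete sketch of that idea: a project can declare its entire development environment in a `shell.nix` (the packages listed here are just examples):

```nix
# shell.nix – declares every tool a developer needs for this project
let
  pkgs = import <nixpkgs> { };
in
pkgs.mkShell {
  # everything listed here is on PATH inside the shell, nothing else leaks in
  buildInputs = with pkgs; [
    nodejs
    python3
    postgresql
  ];
  shellHook = ''
    echo "dev environment ready"
  '';
}
```

Running `nix-shell` in the project directory then drops you into exactly that environment, on any machine.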


Isn’t that the same goal as Docker? I’m surprised there’s still no Docker base image for NixOS...


"Same execution environment everywhere" is one way which developers use Docker. Docker gets this by copying the layers of a built image. Unlike nix, the image building itself doesn't need to be reproducible. So you can have a Dockerfile which works now but will fail to build in however many months.

"Reproducible builds" do get you "same execution environment everywhere". But they have the stronger guarantee that for the same inputs, outputs will be the same.

IMO/IME, I don't think that aspect of nix is a strong selling point for use of nix on developer workstations, probably because less-elegant solutions like "<language> Version Manager" etc. already cover it.

But I think the nix language makes for a nicer way of describing a package of software you're developing in terms of dependencies and outputs than Dockerfile.


I do have to commend Docker for providing and managing an agreed-upon VM for non-Linux users to host all their containers. It's the "killer feature" that has made it as successful as it is. But underneath it requires a VM (libcontainer, LXC, VirtualBox, HyperKit, etc.) on non-Linux machines.

This helps developers work together and quickly get small projects up and running. I'd contend that after a while, a mess of containers/sidecars ends up becoming just as difficult to manage as a mess of native binaries; hence the growth of so many container management systems. Because they are re-inventions of service managers, we get the benefit of designing them from scratch for modern needs, but we also lose many of the benefits of the well-understood semantics of native processes.

Looking for feedback: I've been playing with an idea (and have a system in production using it to try out the concept) where the Dockerfile only contains busybox+nix, and when you run it you specify an environment as a Nix path, with a binary cache specified via env vars. Using "nix run", this will download all deps and run your program; with bind mounts, all containers can share the host cache. Put a RUN into the Dockerfile and you can prefetch all the deps. Basically it's a Docker container that uses Nix at build or run time for all the heavy lifting, instead of the Docker layers mechanism.


How much overlap is there between your idea and Nixery?


Have you checked out the official container system in NixOS?


Yes, but it requires NixOS. This “docker compatibility layer” is about being able to use nix style packaging in environments that expect Docker. Eg: ECS. https://github.com/tomberek/nix-runner


There's some conceptual overlap but I don't think the two tools are redundant.

Using nix for development is sort of like having dedicated handcrafted development Docker containers for every single project... without having to ever use Docker or containers. You just get the sandboxing and safe reproducibility for free. It's kind of like having a build tool like cargo or stack, but for everything, all the time. You can fire up nix-shell for a project and just magically have the dependencies for that project available. There are tools like direnv and lorri that make this even easier and more powerful. Then, if you want to package up your project into a Docker image for deployment, you get that for free too.

With all of that said, the magic is blunted a bit by some rough edges, missing packages here and there, etc. I wouldn't jump into nix expecting to have a completely polished and flawless experience like you can get with Docker, which is a much more mainstream project at this point. But I do think this will rapidly improve with nix, especially with large and well-known companies like Shopify using it.


Using nix, you typically build from scratch and only include the binaries that are needed in the Docker container. It's quite elegant, and it uses the nix cache too, so you aren't dependent on the order of layers.

https://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools
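A minimal sketch of what that looks like with dockerTools (the image name and packaged program are illustrative):

```nix
# image.nix – build a Docker image containing only `hello` and its closure,
# no base OS, no Dockerfile, no layer ordering to worry about
let
  pkgs = import <nixpkgs> { };
in
pkgs.dockerTools.buildImage {
  name = "hello-image";
  tag = "latest";
  # referencing the store path pulls hello and its dependencies into the image
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

`nix-build image.nix` produces a tarball that you can feed straight into `docker load`.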


A Docker base image with NixOS doesn't really make sense, since with Nix you wouldn't use Docker for building Docker images, but let Nix make images from scratch.

That's the approach my team is taking, anyways.

https://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools

(and as others have noted, you don't need an OS in your Docker image)


https://hub.docker.com/r/nixos/nix/ seems to be a thing. It's apparently not a nixos image, but you probably don't want nixos with all the service configuration and so on, just nix, for most docker use-cases?


So... statically compiled executables?

Wasn't that tried long ago and it was determined that the user should be able to choose when to upgrade dependencies, such as if a dependency needs an out-of-band update to work on the localhost OS?


Your criticism misses the mark because nix users have the ability to update a dependency and rebuild all of the dependees. With nix, I can update openssl in one place and be sure that everything that depends on it gets re-evaluated. How can I be confident that everything is linking the patched openssl I want when I'm using aptitude, pip, npm, docker, etc?


But what if they're using different versions of OpenSSL?


It'll only rebuild the packages that depended on that particular openssl. This is an area where Nix shines: because all packages are explicitly bound to their dependencies, it no longer matters what file happens to be occupying `/usr/lib/libssl.so`, or even `/lib/x86_64-linux-gnu/libc.so.6`, so you can be running different apps that rely on totally different glibc versions alongside each other with no problems.


I mean if there's a bug, it's not enough to patch one particular OpenSSL. You have to audit manually, so Nix won't make much of a difference.


Well, by default it uses the same library everywhere, but it gives you the option of having two apps that rely on e.g. different versions of openssl. If you do that, it is on you to make sure both dependencies are updated. There is a bit of a benefit as well: instead of creating a new derivation, you can override an existing one (kind of like extending a class in OO). If, for example, you make another version of openssl based on the existing one but with changed compile flags, nix will be smart enough to recompile both.
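A sketch of that override mechanism as an overlay (the configure flag here is illustrative):

```nix
# Overlay sketch: swap openssl for a variant built with extra flags.
# Every package in this package set that depends on openssl will be
# rebuilt against the overridden derivation automatically.
self: super: {
  openssl = super.openssl.overrideAttrs (old: {
    configureFlags = (old.configureFlags or [ ]) ++ [ "no-zlib" ];
  });
}
```

Because the override produces a new store path, Nix can tell exactly which downstream packages need rebuilding and which can stay untouched.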


Except the auditing doesn't have to be manual: https://github.com/flyingcircusio/vulnix


Oh, that sounds like it could solve my issue. E.g. I compiled my program against my glibc, which is 2.24, but the target was running 2.23. I didn't even use any "new" 2.24 features, but it was missing a symbol, so it wouldn't run. Then, to compile against 2.23 locally, I had to get an older version of GCC (not sure why), and everything needed to build the older GCC, and so on. But then I had some old versions of libraries on my system, so I had to delete them. I ended up compiling it all inside a docker container because I didn't want to pollute my environment with older versions.


Maybe, but don't underestimate the effort. I needed to do something similar and tried to use Nix, but I failed: the docs weren't very good, and there weren't 32-bit packages..

So I just built gcc myself, which I found very easy.


Good point: statically compiled binaries are a big problem for security updates.

This applies to Linux distributions as well as large organizations that have their internal distribution (like Amazon).


The user should use nix to do that.


Thank you for the explanation! I'd never heard of nix and I was confused by the article.


We're using Nix where I work as well, and while I can say it was not the most user-friendly to set up, once we got it in place it has reliably worked for the last two years.

We used to have a long and flaky shell script which set up the development environment for new engineers and CI machines (for iOS development on macOS, specifically). Now we have a very simple script which just installs nix and runs all builds through nix-shell. This makes builds on local machines and CI very easily reproducible. It also means that when dependencies change, we just update our nix config, and the new dependencies will automatically be fetched for all of our engineers and CI machines — no need to manually re-run the setup script.

This is my favorite part of Nix — you can do `nix-shell --run "<some command>"`, and it will make sure you have all of the correct dependencies (downloading if necessary), then run the command with those exact dependencies. This is especially magical when engineers working on other platforms have conflicting dependencies — a problem we no longer need to worry about.


You can use Nix on MacOS X and with nix you don't need brew.

I used it for a few years that way.

If you use nix-darwin + home-manager you can also configure your mac the way you would NixOS.

Amazing tools that I discovered not long ago (they aren't specific to OS X) are:

niv - makes pinning to specific repos/versions in repos much easier which helps with reproducibility, especially pinning of nixpkgs version which is now controlled in your source code.

direnv + lorri - you no longer need to invoke nix-shell; just cd into the directory and all the tooling you need for your project is suddenly available, which is AWESOME.

Here's an example of using them together to package a python project[1]. If you have lorri and direnv installed and cd into the directory, not only do you have the right python version available, but also all dependencies, and even the "hello" command becomes available, as if you had installed it through `pip install -e .`

[1] https://github.com/takeda/example_python_project
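The general shape of such a project's `shell.nix`, with niv-style pinning, might look like this (a sketch with illustrative names, not the linked repo's contents):

```nix
# shell.nix – pin nixpkgs via niv's nix/sources.nix and build a python env.
# With direnv + lorri installed, an .envrc containing `eval "$(lorri direnv)"`
# loads this environment automatically whenever you cd into the directory.
let
  sources = import ./nix/sources.nix;  # written and updated by niv
  pkgs = import sources.nixpkgs { };
  pythonEnv = pkgs.python3.withPackages (ps: [
    ps.requests  # illustrative dependency
  ]);
in
pkgs.mkShell {
  buildInputs = [ pythonEnv ];
}
```

Because nixpkgs itself is pinned in `nix/sources.nix`, every developer gets byte-identical tooling until the pin is deliberately bumped.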


I've had luck with using the approach in here https://nixos.wiki/wiki/Python#Emulating_virtualenv_with_nix... because it allows the escape hatch of using pip if something isn't packaged and I want to quickly try something out. I have also been meaning to try mach-nix which may obviate the need for pip at all. Having a good standard skeleton for a python/nix project would be super helpful to add to the wiki, etc.


I tried multiple solutions myself, and this one is the closest to what I want, because:

- it overlays the packages on top of nixpkgs
- it understands setup.cfg and can get dependencies from it

It is still not perfect, but when I find some time I plan to submit a PR to nixpkgs to improve setupcfg2nix. In particular, it doesn't understand tests_require, in which you can list the dependencies needed to run unit tests. Nix has an equivalent, checkInputs.


Unfortunately, the security updates introduced in MacOS Catalina have made installing Nix on the latest version of MacOS a fairly involved process.[1] I looked at using it when I rebuilt my computer recently and decided to wait until it was more baked.

[1] Catalina restricts which directories can be added to the root dir. Nix uses /nix.


There is a way to deal with the latest round of developer-hostile changes from Apple:

https://github.com/NixOS/nix/issues/2925#issuecomment-604501... documents the workaround but if you don't have a direct link it's almost impossible to find because GitHub's UI is terrible.


The problem here really seems to be with nix. The /nix path shouldn't be hard coded. (Security concerns aside, I don't want a package manager to clutter up my root directory.)


It's necessary to hardcode paths in order to ensure that the exact version of a dependency is linked into the executable. Without this mechanism, nix packages would not be reproducible and self-contained.

Also, the path isn't really hardcoded. You can use a custom path, but that means you can't use binary caches.


I'm sure there are reasons for it, but it still sucks.

It ought to be possible to use binary caches and still swap out the path. (Modifying a string in an ELF executable isn't that hard.)


It's not just ELF executables that hardcode the path. There's a ton of different ways to run software which all use hardcoded paths. Which is good! That's the entire value proposition of Nix, in a way, that it's now safe to use hardcoded paths. But you can't edit all those different hardcoded paths in all those different filetypes.


Where else are paths hardcoded? This seems like a huge mess.


Just picking one example off the top of my head, pkg-config files.


Modifying a string in an ELF is hard. As long as the replacement has the same length as the original, it has a good chance of working (but it still fails for compressed data). But if you want an arbitrary directory, you need to replace it with a path of a different length, and that will probably break a lot of things.


The nix author created a utility called patchelf, which nix uses all the time during builds (to rewrite library paths).

As far as I know it creates a new ELF each time it is called (it doesn't modify the file in place), so the length of the library name is not an issue.


It's not that hard if you leave some spare space for string constants in the .rodata section. Nix already patches ELF executables as things stand.


The problem is how it affects all the hashes, which aren't recalculable in such an easy way.


Could you not check the hashes before doing the substitution?



This is only sort of true: I’ve found that Nix on a Mac is great except that about 10% of the derivations just don’t work: some are excluded explicitly as broken and some just fail in strange ways and need lots of debugging.


For me, most packages worked, and the ones that didn't were actually justified in not working; definitely far fewer than 10% failed (it was more like 90%+ working for me). The stuff that didn't work was justifiably Linux-specific, except one thing, which now escapes my memory.


By this do you mean developers doing local builds still run their builds at the command-line via nix-shell and don’t ever build from Xcode directly? That sounds very awkward.


Not at all — you can control what commands are actually executed when developers hit the play button or do ⌘+B/⌘+R in Xcode. Xcode is fundamentally just an IDE, even though we forget that sometimes since it's traditionally coupled so tightly with xcodebuild.


Ok so developers would normally just have an external build target selected as the active target? I assume this means you lose all of Xcode's scheme editor capabilities though.


A lot of misunderstanding and overexplaining in these comments. The reality is that Nix is a complex system of many parts, under active development, and yet is perhaps the best infrastructure we have for creating deterministic, hermetic, reproducible builds. I've worked on one of the largest Nix systems and it saves our entire dev team hours. Everything is only built once, on a large set of specialized remote builders, then shared among all of our devs, CI, and deployment infrastructure. Perhaps this is common in FAANG, but Nix enables this even for small teams and individual developers.

If that's something you would like to learn more about but are having trouble getting started, I'm happy to help. Email is in my bio. We could set up a call or chat session or whatever.


This is what I'm personally looking for. Are there any tutorials on how to configure nix in an organization following CI/CD best practices, etc.?


I'm still confused. What is Nix? Is it an OS, or a package manager? From the looks of it, it's a package manager that I should be able to use on any POSIX system, but I doubt that's the case?


It's kind of a mess.

Nix is a collection of tools and systems that together form a highly reproducible build system.

Nix is also a declarative, largely pure and lazy programming language that you use to design and specify the different build outputs for the Nix build system.

Nixpkgs is, more or less, the only project written using Nix (and a lot of shell). It's a collection of many thousands of "derivations", many of the things you'd expect to be able to fetch from brew or apt. Each derivation encodes the steps to use Nix (the build system) to construct that output.

NixOS is a Linux distribution built atop Nix, Nix, and Nixpkgs. The basic claim to fame here is "totally declarative system setup" where you write all of the configuration for your system in Nix, using Nixpkgs and some other NixOS specific tooling and libraries, and then "build" the entire system with high repeatability.

Then there are a few other related projects such as Hydra (a CI server and binary cache builder—most of the time you don't build with Nix, just download the proper thing from the cache) and NixOps (an extension to NixOS which provisions whole fleets of servers declaratively).

Finally, you don't need to run NixOS in order to benefit from Nix. Nix and Nixpkgs are (largely) compatible with macOS and Linux. That means you can install Nix onto your existing macOS or Linux system and use it a lot like you'd use brew or apt. That said, the premier Nix experience is always NixOS.
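To make the NixOS part concrete, a minimal `configuration.nix` sketch (the options shown are real NixOS options; the values are illustrative):

```nix
# /etc/nixos/configuration.nix – the entire system is declared here and
# realized with `nixos-rebuild switch`; previous generations stay bootable
{ config, pkgs, ... }:
{
  boot.loader.systemd-boot.enable = true;
  networking.hostName = "example";

  services.openssh.enable = true;

  environment.systemPackages = with pkgs; [ git vim ];

  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];  # allow sudo
  };
}
```

Changing anything (a service, a package, a user) means editing this file and rebuilding; the running system always matches the declaration.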


I've had the (dis)pleasure of working with several projects that have been built by developers that have religion around Nix. These projects were contract work where the client paid a significant amount of money, and the final product was really poor quality. One of them is a financial application that has strict security requirements, so having a reproducible build system and some of the other qualities of Nix sounds great, however, the auditor, which is one of the top auditors in this industry had this to say in their report:

> Overall, we found the dashboard code to be difficult to review and build. In particular, the manner in which the frontend JavaScript is compiled made it difficult to comprehensively evaluate. Given that this includes the driver code for Ledger devices and handling of user input, we believe this to be an area of concern and poses a significant risk. Building the dashboard took hours on an AWS EC2 VM (with a four-core CPU and 32 GB RAM) and while much of that build time is attributable to Nix's way of building dependencies, the code complexity adds unnecessary risk. We encourage a rework of this area of the project to follow more idiomatic web development practices and patterns. This would make the code more comprehensible and conducive to security evaluations, therefore reducing the risk of vulnerabilities that go unnoticed.


It sounds like "this is the first time I've seen this build system and don't really see how it works, would be better to change it to a more often used one".

Which is fair for a security auditor, but it stems from the newness of the project and not from anything inherently bad with it. I doubt they know what actually happens inside whatever build system others use, but since they empirically know it poses no threat, they are okay with it. If you ask me, this is absolutely no reason not to use Nix - well, maybe not for a bank (though in the long term they would definitely win with it).


Ah yes.. cryptocurrency, where bizarre programming practices secure large sums of money


LOL. Reading that audit report made me immediately wonder who in the heck is blowing money on bringing new tech to their critical finance systems. And then I saw your post and, yes, that answers that.


I am aware of a few trading firms that are phasing in nix to handle their dependency clusterfucks.


Woah, I didn't know there was an entire ecosystem of contract jobs and auditors for big projects. How do companies usually hire contractors (outsourced HR, Upwork, Accenture)? And how do they hire auditors?


This used to be called "quality assurance". Web and .com blew that apart. We ain't got time for that.


This sounds like a security audit, and auditors found the build system difficult to work with when attempting to audit code.


Sounds like their main concern wasn't Nix but the complexity of the code itself.


The hours long build system appears to be attributed to Nix.

Is Nix really that slow?


If you're applying idiomatic Nix to modern JS, I wouldn't be surprised - modern JS tends to involve installing thousands of packages by just combining them into a directory, but Nix doesn't want you to edit existing directories. So your dependency graph turns into a build-dependency graph, with each JS package requiring a full build of everything it depends on, and the Nix build system is presumably not optimized for fast turnaround times on five-line JS modules that don't even have a compilation step.

(Personally, I wouldn't try to apply idiomatic Nix to modern JS - I'd apply it to the major components like "my web server" and "my database library" and have "all the JS I depend on" as one big Nix package. That's not really a claim about Nix, I wouldn't try to apply idiomatic, say, Debian packaging to modern JS either. In both cases I'd still get about 90% of the benefit of using Nix/Debian as a delivery mechanism.)

The other thing that could be slow is if you start compiling all your dependencies, including gcc and node.js, from scratch. While there's some security benefit in doing so, the reality is that just about nobody actually does that. You'd want to set things up to use the precompiled packages, or at least set up your own cache server and have compiled binaries you trust but only do it once.


Hi, JS developer here who uses Nix and who liberally makes use of single-responsibility modules.

No, Nix does not need "hours" to build a modern JS project. Something else is going on here. Nix is slow, but not that slow.

(With the limited information provided here, my first guess would be "someone turned off the binary cache because they don't trust prebuilt binaries, and now it recompiles the world from scratch because there are no prebuilt binaries anymore".)


I suspect that someone being the auditors who are suspicious of opaque binary caches and insist on building everything from scratch.


I don't know. If that were the case, I'd imagine they would know why their build was taking hours and that it wasn't a Nix problem.


That was my reading, based on the fact that they glossed over Nix itself and attributed most of the issues to code complexity.


It explicitly says this, though:

> while much of that build time is attributable to Nix's way of building dependencies

And that part just doesn't make sense to me. I can't see any way in which specifically Nix's way of building dependencies would contribute to this. The compiling of the universe, sure, but then why mention Nix?


It does say that, but it's off in its own clause, then shifts focus away from it towards what seems like the focus of the auditors.


[flagged]


Part of the removed code:

    test("guy", function() {
      assert.equal(typeof babel.guy, "string");
    });
Facepalming so hard right now


In case you hadn't noticed, this was a joke commit. The code in question was never actually a real part of Babel.


I assure you, I understand what an easter egg is. I was facepalming at the test, which was doing the equivalent of:

    expect(true).toBe(true)


[flagged]


Thank you for heavily derailing the productive, technically meaningful conversation I was having. Next time I won't say anything since apparently productive and technically meaningful conversations aren't what this site is for.


Please take your badly-informed language bashing elsewhere.

I actually audit dependencies for a living, and the dependency model that JS uses is far, far preferable for auditability over the "monolithic tightly-coupled framework and a mountain of custom badly-maintained copy-pasted business logic" approach that is widely used elsewhere.


[flagged]


Seriously, piss off. You clearly don't understand the tools you're loudly complaining about, and it's interfering with what was otherwise a productive discussion about Nix and build times.


Please don't break the site guidelines yourself, regardless of how wrong or annoying other comments are or you feel they are. It only makes the thread even worse.

I've responded to the other user elsewhere, but actually your replies were worse than what you were replying to, even if you're right on the fundamentals. "Please take your badly-informed $thing elsewhere" and "Seriously, piss off" are well out of bounds here.

If you feel like another user is derailing the discussion, flag the comments (https://news.ycombinator.com/newsfaq.html). If you feel like more is needed, email hn@ycombinator.com so we can take a look. We frequently moderate off-topic subthreads.


The term “modern JS” should be used only as a pejorative.


Would you please not post in the flamewar style to HN? It damages the container here (i.e. the capacity of this place to sustain curious conversation), and you unfortunately did it repeatedly in this thread.

https://news.ycombinator.com/newsguidelines.html


I've built a combined Python 2/3, C, C++, and JavaScript (VueJS) deployment, along with hardware-in-the-loop integration testing and compiling for ARM on a Raspberry Pi, with Nix. It should not take hours.

Something else is wrong. Perhaps the binary cache is not being used? Or something is invalidating the builds, like a "src = ./.;" which tells the build system to include every single file in the directory in the build, rather than using a .gitignore or git hash for reproducibility. Or someone was using npm2nix (deprecated)?
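
(For the curious, the difference looks something like this - `cleanSource` is in nixpkgs' lib:)

    # Any stray file (editor swap files, build output) changes the source
    # hash and forces a rebuild:
    src = ./.;

    # Filtering the source keeps the hash stable:
    src = pkgs.lib.cleanSource ./.;

    # Or pin the source to exactly what's committed in git:
    src = builtins.fetchGit ./.;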


No, nix can cache already built binaries with hydra, so on a normal system it simply downloads from there - same speed as apt or the like. During the audit they likely disabled it, so nix had to build everything from scratch (like even system libraries, the build tools, everything).


Dependencies can take a while if you can't use cached versions of the binaries. Building some Haskell tooling from scratch took ~3 hours on my MBP.


There are few compilers that are slower than GHC. I suspect this issue has nothing to do with Nix, and the tools would have taken ages to build from scratch without Nix.


So the auditors said that the system was hard to audit?


Considering that “difficult to audit” overlaps with: difficult to understand, difficult to deploy, difficult to onboard, and a bunch of other things that are also very important to non-auditors, that seems like an entirely reasonable and informative complaint.

If you hired a financial auditor and their report was basically a nicer version of "'books' were written in pencil on napkins, many food-stained, some illegible. Attempting to make sense of them took forever. Fix this shit to meet minimum standards for business accounting; it's entirely unsuitable for its purpose", that'd be great info, if you didn't already know it.


"books written in pencil on napkins" is analogous to how most web application projects are organised today. Nix would be the formalisation of this.


Never used nix, but their stated goal is to make implicit dependencies explicit (and reproducible). Isn't it only natural for this to sometimes bring a dependency hell to light that otherwise could have been swept under the rug?

Or phrased differently: maybe the problem here wasn't nix, but the way developers chose their dependencies


Putting it this way misses the point of that statement, but yes, that does appear to be what they said.


I haven't used Nix, but I would have thought that builds would be fast, due to how cache-able the dependencies should be.


We've been using Nix for deploying a Rails app for an enterprise customer for quite a few years now. One area where it shines for us is the ability to build it on a relatively recent version of Ubuntu and deploy to an (almost EOL) RHEL6 box. Bundling, assets and various other tasks take just a few minutes. We also have ~20 Go services that are also deployed via Nix, and building takes seconds.

However, it can be quite cumbersome to get a Nix expression to the point where it builds reliably for something that takes multiple steps like a Rails app, especially if you're building on macOS and deploying to Linux. It's come a _long_ way in recent years, but with Enterprise customers now embracing containerization we migrated everything to that and haven't looked back.


I'm having a hard time parsing -- you migrated everything from nix to containers (containers removed the need for nix) or to nix+containers (containers solved the "having to build for multiple platforms" issue)?


Ah my apologies - we use Bazel to build our services, and the output artifacts were then pulled into Nix and deployed as Nix packages. Bazel has _excellent_ support for taking the same application code and creating Docker images from it (https://github.com/bazelbuild/rules_docker#language-rules), and the tools available for deploying containers are orders of magnitude more feature-full and higher quality than what you get with Nix today. So we no longer have Nix anywhere in our pipeline, and all of our artifacts are now deployed inside containers.


It is; perhaps they didn't use a cache to store previously built packages and built everything from scratch. With no cache, Nix will start by building compilers and glibc until it has everything needed to build the actual application.


While this is true if you are building new versions of dependencies on an existing system, in many cases in high security environments you want to start with a clean build environment every time. Cached dependencies might be considered to be a security risk. Bootstrapping an entire Nix environment can take a lot of time.


Sure, but in most systems you wouldn’t have much of a choice; if you’re using Debian or Ubuntu, you’re probably using binary packages built by someone else. It takes a lot more work to build everything you want from source in a traditional distro.

It seems like a pretty straightforward tradeoff: do you want to rely on prebuilt packages for speed, or do you want to rebuild it all (and accept the extra time that takes)? Nix and Guix at least make the latter pretty straightforward to do.
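
(A sketch of what that looks like in practice - option names vary a bit between Nix versions, so treat this as illustrative:)

    # Build with the binary cache disabled, forcing a from-source build
    # (older Nix versions call this option "binary-caches"):
    $ nix-build '<nixpkgs>' -A hello --option substituters ''

    # Rebuild an already-built derivation and compare the outputs, as a
    # spot-check that the build really is reproducible:
    $ nix-build '<nixpkgs>' -A hello --check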


Sounds like they were building everything from scratch, for some reason.


That’s how nix guarantees reproducibility. The compiled artifacts can be cached, but that didn't quite work out of the box


Nix guarantees reproducibility, which means anything can be rebuilt from scratch, but that's a very abnormal use case. If it didn't work out of the box, it's a problem with the package scripts (the "derivations"). That said, all of our software tends to bottom out in a bunch of shitty C libraries that are all delicately cobbled together with autotools and cmake, so anything that aspires to reproduce these things is going to have issues. This tends to make Nix difficult to use, because it doesn't (yet) have the same investment/manpower as other package ecosystems, which is necessary to wrangle these dependencies into a stable foundation that doesn't leak its underlying havoc to higher levels of the stack.


This comment is really, really confusing. If it builds once with Nix, it will build again with Nix... If you can find a derivation that builds on one machine and not on another, there's usually a fundamental difference - either in CPU arch, your nixpkgs config differs (overlays), etc. Or I guess you could write a build script that non-deterministically fails, but that has nothing to do with Nix's maturity.

Nix ensures the tooling is called the same. I can understand having a non-autotools project that is more difficult to write a derivation for, but "Nix can't wrangle messy libraries" makes absolutely no sense, either from a "make compiling reliable" or anything at use-time.

Do you have a specific example in mind that is less hand-wavy?


> If it builds once with Nix, it will build again with Nix...

That's the aspiration, but it doesn't always pan out. The rate at which you run into problems depends a lot on the packages you use, how much attention is given to them, and how hard it is to reproducibly package them. I've noticed in particular that the Python ecosystem is really fragile.

> Do you have a specific example in mind that is less hand-wavy?

A specific example that comes to mind was the psycopg2 Python package, which would build on some developers' machines but not on others. This sort of thing happens all the time, usually on macOS, and usually with C packages (sadly, so much of the Python ecosystem is built on C and its shoddy build tooling). I've also found quite a few packages in nixpkgs that simply don't build on macOS, but which presumably build on Linux; however, I forget which ones specifically.


But the build time dependencies would still be there. Just implicit and undocumented.


So do I understand correctly from this that Nix has no binary packages? If so, why not? To me, compiling from source is not strictly necessary to get reproducibility guarantees. It might actually be harmful if you're not carefully checksumming the build products (e.g. cosmic rays messing with complex builds).


You do not understand correctly.

Nix users use binary caches[0] instead of building everything from source. You can use the binary cache provided by NixOS, or Cachix[1], or something you set up and host yourself, or all of the above.

[0]: https://nixos.wiki/wiki/Binary_Cache

[1]: https://cachix.org/
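
(Configuration is just a couple of lines in nix.conf; cache.example.com and its key are placeholders here:)

    # /etc/nix/nix.conf - substitutes are only accepted if signed by a
    # trusted key:
    substituters = https://cache.nixos.org https://cache.example.com
    trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= cache.example.com-1:<your-cache-public-key>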


Then why would a build take so long as GP describes?


I can't imagine why some unknown people encountered difficulty building some unknown project.

We use Nix to build and deploy our big Haskell monolith along with the operating system itself, and all system dependencies, e.g., PostgreSQL, Redis, Grafana, collectd, influxdb, openssh, ejabberd, Elm applications, etc, and it works fabulously.


GP here. My guess is that they intentionally disabled binary caching because this is a financial application and they wanted to make sure no untrusted binaries got into the build pipeline.


Thanks! That makes some sense. Though, I work for a financial service provider as well and have never found compile-from-source to be a solution to that problem. If you do have dependencies, whether you trust the sources or the binaries shouldn't really matter, as long as you trust the repositories where they come from. And if you don't trust them, I wanna see the army of developers actually checking all the sources down to the system level.


I can't entirely speak for Nix, but I know they're fairly similar, so: Guix (and I'm pretty sure Nix) have optional binary packages. Realistically most users will use those, but the option to build from source is always there. So, while you say:

>To me, compiling from source is not strictly necessary to get reproduceability guarantees.

...I somewhat agree, but you do of course need to be able to build from source. And, in a system like Nix or Guix, you have an easy way to verify that a build and all its transitive dependencies are reproducible, if you have someone else's build to compare it against. See `guix challenge` for example: https://guix.gnu.org/manual/en/html_node/Invoking-guix-chall...


Unfortunately there's a pretty annoying bug with macOS which resulted from Apple making /nix non-writable by default. And since /nix is hard-coded in all the cached packages, it's not easy to fix.

This is one big thing that's preventing us from adopting nix

https://github.com/NixOS/nix/issues/2925


It's not a bug in macOS, it's a security feature.

However, you can use synthetic firmlinks to define arbitrary root-level paths that _are_ writable.

See `man synthetic.conf` and check out https://derflounder.wordpress.com/2020/01/18/creating-root-l... for an example usage.

PS: It looks like nix has a PR open to test that solution here: https://github.com/NixOS/nix/pull/3212


> However, you can use synthetic firmlinks to define arbitrary root-level paths that _are_ writable.

> See `man synthetic.conf`

No, synthetic.conf only creates directories or symlinks. If it actually created firmlinks, the problem would probably be much smaller.

The problem with symlinks is that some builds use realpath. If /nix was a symlink to e.g. /usr/local/nix, this would result in /usr/local/nix being used in these builds. This leads to various build problems [1] and can result in the wrong paths ending up in binaries. If this happens on a builder, the resulting binaries are not portable between macOS + Nix machines.

So, instead, the only way to get it currently working is to use synthetic.conf to create a new top-level directory and mount an APFS volume with the Nix store there.

If Apple actually allowed creating firmlinks with synthetic.conf, the Nix installation would be drastically simplified. /nix could just be a firmlink to e.g. /usr/local/nix, since realpath does not report the actual path on firmlinks.

[1] https://github.com/NixOS/nix/issues/2926#issuecomment-559989...
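
(For reference, the workaround boils down to something like the following; adjust the disk identifier for your machine:)

    # Tell macOS to create an empty /nix mount point at boot:
    $ echo nix | sudo tee -a /etc/synthetic.conf

    # After a reboot, create a dedicated APFS volume and mount it there:
    $ sudo diskutil apfs addVolume disk1 APFS 'Nix Store' -mountpoint /nix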


I stand corrected, I must have misunderstood the doc.


I'm trying to understand why macOS can't use a different path (since it's a different OS anyway, and you can't run Linux Nix binaries on macOS). From the end of the thread, it sounds like

> The main consequence of using a separate prefix for macOS is that you can't have Hydra jobsets anymore containing jobs for macOS and Linux. It would also make it harder to deploy from macOS to Linux.

i.e., if the same package builds for both Linux and macOS, you can't write a single shell script that runs on both Linux Nix and macOS Nix and calls that package? Maybe I'm misunderstanding how people deploy things, but is that a huge problem in practice?

(I don't quite follow the thing about Hydra jobsets. Aren't you building once for Linux and once for macOS anyway? Figuring out how to make the build do that seems like it would subsume any complexity of using /nix vs. /opt/nix.)


You can use a different path, but then you don't have access to the binary cache, because of the mismatch between your path and the path found in the cached binaries. That's how I understand it.


Perhaps they hardcode full paths into whatever Nix calls its formula files (which are then, if I understand correctly, referred to by their cryptographic hashes, meaning that a file with a different path would have a different hash, and so break the whole hierarchy of formula-to-formula dependency references above it.)


From looking at the PR, it seems that it will use some extra space (for derivations that aren't really OS-specific) and there might be problems with deploying from macOS to Linux, but it feels like they are planning to go that route ultimately.


Buried in that issue is a workaround, although GitHub's UI hides it from you: https://github.com/NixOS/nix/issues/2925#issuecomment-604501...


Several people in my company use the separate volume workaround and it's been totally painless.

Nix has sharp edges but this one is less of a concern than you'd think ime


Hardcoding /nix is a very big downside of Nix for me. I understand why this makes other things easier, but it's a huge compromise on flexibility.


This comment was better than the article at explaining what Nix is, thanks!


Thanks! So let's say I start installing Nix packages. What distro's packages would they end up most similar to? Say, maybe Arch (which mostly leaves things unmodified)? Does that mean you basically end up with Arch no matter which distro you're on?


In addition to other answers, I think it's important to note that Nix is comparatively very small (but growing!). To that end, the Nixpkgs policies are still in flux and defined somewhat culturally (at least compared to older, larger distros).

There's a release schedule for NixPkgs, which is being continuously updated; you mostly subscribe to a fixed "channel" which gives you the default set of derivations. If you need something fresher, then you either modify a derivation yourself (hot-patching the distro) or pull something temporarily from master on NixPkgs.

This is hugely facilitated by the way Nix allows you to install multiple package versions in parallel without conflict.

Finally, Nix is still sometimes a bit of a research project. The "best" way to manage a distro that's being built using this technology is still being uncovered. For instance, there's an upcoming feature (flakes) which hopes to seriously change the way you talk about and consider versions of NixPkgs and NixOS.

Using Nix definitely requires some patience with living on the edge of tech. It can be frustrating and less reliable than other repos due both to the bigger challenge of building a repo using Nix's technology and the smaller contributor base.

That said, the returns on using this weird technology are really high. The short pitch is something like "zero runtime cost, highly repeatable Docker containers for everything". Of course, the technology works nothing like that, but it really hits some of the big position-independence value props of Docker in a way that's lightweight enough to use it for everything.


> That said, the returns on using this weird technology are really high. The short pitch is something like "zero runtime cost, highly repeatable Docker containers for everything". Of course, the technology works nothing like that, but it really hits some of the big position-independence value props of Docker in a way that's lightweight enough to use it for everything.

It simply delivers what docker was promising to deliver.


Trying to put it politely, but your question seems a bit confused.

Arch is not the only distro that strives for minimal modifications to upstream packages. In fact, most distributions work that way - it's Debian and its derivatives that are the outliers, and these days, even Debian rarely patches upstream packages.

There is much more to distributions than what patches they apply to their packages. The package manager they use, their default configurations, their compiler toolchain, their C library, their init system, their stance on software freedom, etc etc - much more than just what patches they apply to their packages.

But, yes, Nix packages are mostly close to upstream, with occasional patches to improve determinism and reproducibility.


Thanks for explaining. I'm (obviously, I thought) not suggesting distros are only about the patches, but patches are a very significant aspect of their raisons d'être to me. I thought most distro maintainers do things like backporting changes though, for security/stability/etc.? Including RedHat, Canonical, the Debian maintainers, etc.? Arch (and I guess Gentoo, and maybe their derivatives like Manjaro) are the only ones I know that minimize upstream package changes. With other popular distros like Fedora, Ubuntu, SUSE, etc. I fully expect they've made custom modifications to some popular packages, but it's been years since I've touched other flavors. You're saying this is wrong and only Debian derivatives tend to do this?


Fedora stays close to upstream: https://fedoraproject.org/wiki/Staying_close_to_upstream_pro...

Ubuntu is a Debian derivative, they patch things heavily, just like Debian.

SUSE, I don't know about their policy. I assume they're like RHEL, which starts from Fedora, which is close to upstream, and then backports bugfixes from later versions on to stable branches.

Many distros maintain stable branches and just update the packages on that branch when they have security issues. Backporting security fixes has (somewhat) gone out of favor, because backports executed by package maintainers that weren't familiar with the code have caused serious security issues in the past. Of course, distros like RHEL and (I guess) SUSE still do backport security fixes to their stable branches. But they start from a base which is close to upstream - it just diverges more and more as their stable branch gets older and older.


GuixSD? Only because Guix was derived from Nix.

Frankly, I don't think there's anything else like it. I don't know Arch well enough to compare, but there's something similar to Gentoo, except Nix knows that if source + dependencies + architecture + configuration are the same, it can pull a compiled version from the cache; if something changes, it will recompile.

The killer NixOS feature is that it has a single, declarative configuration file which you can use to describe your OS, so it has something like salt/ansible/chef/puppet built in, and unlike them it also is truly declarative. For example, if you have a package installed, to remove it you just remove it from the list, whereas in those tools you need to create a state that ensures the package must not be present.
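
(A minimal example of what that file looks like - the package names here are just illustrative:)

    # /etc/nixos/configuration.nix - delete a package from this list and
    # "nixos-rebuild switch" removes it from the system.
    { config, pkgs, ... }:
    {
      environment.systemPackages = with pkgs; [ git htop postgresql ];
      services.openssh.enable = true;
    }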

Edit: from other comments I see that you meant that packages in Arch are not modified. I guess Nix does follow that and only adds patches if it absolutely needs them to make an application work correctly. But unlike other distros, Nix also allows a user to trivially modify a derivation (ignoring the steep curve to learn Nix :) - for example applying patches, changing dependencies, changing ./configure flags - similarly to how you would extend a class in an OO language. In any other OS, to do such customization you would need to generate a new package, place it in a package repo, and worry about your modified package breaking other parts of the system. NixOS will then recompile that package and use it (if you use caching it will pull from cache)
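
(For example, applying a local patch to an existing package looks roughly like this - the patch file is obviously hypothetical:)

    # Extend the existing "hello" derivation instead of forking the package:
    myHello = pkgs.hello.overrideAttrs (old: {
      patches = (old.patches or []) ++ [ ./my-fix.patch ];
    });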


In my experience, nixpkgs packages are modified with the bare minimum to get the upstream code to work.


Thank you for this thoughtful comment. I was involved in establishing a bunch of build and deployment infrastructure at my org that's currently based around large (300mb+) "bundle" Debian packages, but that bundle package is composed of hundreds of small sub-packages, most of which don't change day to day (it's extremely convenient to ship them all as one big unit for versioning sanity and ABI consistency reasons).

Nix seems like it would be a great fit for this use case, but there are a number of things about it which give me pause, particularly when it may be possible to get at least some of its advantages by applying the lessons learned to a system built out of boring, old-school packaging tools.


"Nix is a collection of tools and systems that together form a highly reproducible build system."

Wouldn't it have been nice to have that statement at the start of the article?


What is Nix?

Nix is a powerful package manager for Linux and other Unix systems that makes package management reliable and reproducible. It provides atomic upgrades and rollbacks, side-by-side installation of multiple versions of a package, multi-user package management and easy setup of build environments.

What is NixOS?

NixOS is a Linux distribution with a unique approach to package and configuration management. Built on top of the Nix package manager, it is completely declarative, makes upgrading systems reliable, and has many other advantages.

[1]: https://nixos.org/


Thanks, but that makes me wonder how that's even possible given that distros make their own modifications? Do people port each distro's packages to Nix then? Are they kept up-to-date? Or does it automatically translate from apt/pacman/etc. databases somehow? Or does it just basically install vanilla packages on all distro?


Yeah, there is a giant repo of package definitions at https://github.com/NixOS/nixpkgs. Those definitions tell nix how to build everything from the ground up.


> how that's even possible given that distros make their own modifications?

Own modifications to what? All the packages in nixpkgs depend only on other packages in nixpkgs. If you install nix on an ubuntu system and then install a package from nixpkgs, then that package won't use any ubuntu libraries.


> Own modifications to what?

To the packages. e.g. I believe Ubuntu modifies Python so that sudo pip install uses /usr/local instead of /usr. Lots of other patches and backports I'm not necessarily aware of. That's basically what makes Ubuntu Ubuntu, otherwise it'd be more like Arch. So how does Nix deal with this? Do you get the value-add from your distro or do you basically end up with pseudo-Arch wherever you start?


The Nix packages are independent of the OS, this is actually one of the advantages of using Nix. It means that similar to when using Python virtual environments user packages are not mixed in with system packages. Nix also versions package changes. When switching between versions, Nix just updates the paths in your environment. If you wanted to stop using the packages all together you only need to remove the environment path.

Another advantage is that you define installs as part of a configuration file, similar to Ansible/Chef/etc, so things become repeatable.

The difference between Arch and Ubuntu is not so much that patches are applied to packages, though; it's that packages are precompiled for Ubuntu, whereas packages for Arch are often compiled from source.

Nix can compile packages from source, or use a binary from cache if it is available.


If you install python through nix, you get the nix version.

If you install python through apt, you get the ubuntu version.


I get that part. I'm asking what the Nix version is like. Is it like the Ubuntu version with all the patches and backports and everything, or is it like the Arch (i.e. basically original unmodified) version?


Nix packages are typically close to upstream, but low-level packages sometimes have patches to make them more reproducible and deterministic, so that they work better with Nix's efforts for determinism and purely-functional packaging.

Nix packages are created from scratch, not copied from another distro. Nix is typically one of the most up-to-date distros: https://repology.org/


Okay thanks, so it sounds like I'll (roughly) end up with Arch (i.e. mostly-unmodified) whether I start on Ubuntu, Fedora, or whatever. I have another question on that front: what about things like kernels? Don't those conflict with the OS?


If you install the "linux" package using nix on Debian, you get a directory in your nix store (the collection of "installed" packages) containing a bzImage, a System.map, and a `lib` directory containing all of the kernel modules. It would then be up to you to build an initrd and wire it into your bootloader, if that's what you wanted to do.

In other words, Nix packages are just files in their own special place on the disk (under /nix). If you want to configure binaries from those packages to run as daemons, or otherwise be wired into the system globally, that's up to you.

In fact, this is also true for the packages that you've "installed"! The nix package manager creates a package that represents your "user environment", and that package just merges all of the packages you installed using symlinks, and a symlink to that package is added to your $PATH.
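
(You can see this for yourself - the store hash below is a placeholder:)

    # ~/.nix-profile is a symlink into the store, and every binary in it
    # resolves to an immutable store path:
    $ readlink -f ~/.nix-profile/bin/hello
    /nix/store/<hash>-hello-2.10/bin/hello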


"Just files" doesn't quite capture the complexity of the situation to me though. Say I happen to install package X via apt and Y via nix, and both of them depend on Z (in apt and nix respectively), and Z needs to bind to a port, then I imagine both will install but one of them will break, possibly including their dependents. Or if I install a package on Nix that expects a certain syscall that's not in the Ubuntu kernel yet (maybe like the recent /usr/bin/sleep issue with WSL), then that either breaks Nix or my ability to keep using an Ubuntu kernel. Right? And there are probably other things I'm just lacking the imagination for right now. But it seems to me there are all sorts of conflicts that can come up in practice. I've seen enough trainwrecks when upgrading even across OS versions that I have a hard time seeing how running 2 package managers can work on a single OS without breaking something?


> "Just files" doesn't quite capture the complexity of the situation to me though.

I think that's the root of the issue.

You seem to be somewhat confused about how Linux executes programs. An executable is just a file on the disk. It might be entirely self-sufficient, or it might use libraries. Libraries are just files on the disk too (they end in .so.number, where number is a version number). To load the libraries, when it starts a program, Linux first runs another executable called the dynamic linker (usually ld.so). Where the dynamic linker looks for the library files depends on its configuration. ld.so will notably look in the paths listed in the LD_LIBRARY_PATH variable.

All of this has nothing to do with distributions. It's just how Linux works and will always be true. As long as you have all the .so files your executable needs in a place where the dynamic linker can find them, it will run.

Now, what a package manager does when you install a package is just put all the files contained within it in the correct locations, ensuring you have the necessary shared libraries installed for the executable you want to run, and setting up everything that might need to be set up.

You can manually add executables and .so files as much as you like. The only issue you might encounter is using a location which is also used by a package. Depending on the package manager you use, that might either overwrite your modification or make the package installation fail.

While I have never used it, apparently Nix is well behaved and only installs things into /nix, so that shouldn't be an issue.


> Say I happen to install package X via apt and Y via nix, and both of them depend on Z (in apt and nix respectively)

That actually is impossible, Nix will only depend on packages in nix, and nothing else. So whatever you have installed won't affect it.

The important part of Nix on Linux is patchelf[1]: binaries generated by Nix are processed by it, rewriting the ELFs to link against libraries in /nix/store.

Regarding syscalls, if you use NixOS then you're tied to a specific state of nixpkgs, which also dictates the kernel installed, so you shouldn't run into it. You might run into it if you install Nix on Ubuntu, but I don't remember ever hitting this, and it should be rare since the Linux kernel ABI is not supposed to break compatibility.

[1] https://github.com/NixOS/patchelf


>> Say I happen to install package X via apt and Y via nix, and both of them depend on Z (in apt and nix respectively)

> That actually is impossible, Nix will only depend on packages in nix, and nothing else. So whatever you have installed won't affect it.

You misunderstood the scenario. X, Y, Z are package names here. Like Z might be openssh, and it might have dependent X in apt, and Y in nix. You'd get an apt installation of Z and a nix installation of Z.


> Say I happen to install package X via apt and Y via nix, and both of them depend on Z (in apt and nix respectively), and Z needs to bind to a port, then I imagine both will install but one of them will break.

When you install a package with apt, rpm, etc., it has the ability to run code at install time, create init scripts/systemd unit files, etc. When you install a package with Nix, it does not have that ability. So if you install Y via Nix, it will install Z via Nix, and Nix's Z will not be started automatically. You'll need to start it yourself (or Y will need to trigger it, when you run it, etc.)

If you also install X via apt and Z via apt, apt's Z will be started automatically, and will prevent Nix's Z from starting at the same time. You'll need to stop apt's Z.

Note that there's a spectrum here - Red Hat-based distros conventionally don't start software automatically when you install it, although they do leave the configuration in place for you to enable it (with chkconfig or something). Debian-based distros do. (While I am generally a Debian fan, this is one of the things I don't like about Debian ... possibly because I lost some mail once when I did "apt-get install exim" on a production server that had died, and a bunch of email that had been in other people's queues hit the default exim and then got bounces because I hadn't configured it yet.)

Another way of thinking about it is that there are two meanings of "install GRUB". One is that the GRUB commands are available for you to use. One is that your disk has GRUB in its boot sector. Arch, Debian, Red Hat, etc. all interpret "install GRUB" as doing both of these, but doing just the first one is valid, and then it's up to you to use the GRUB commands to put GRUB on the boot sector of whatever disk you want.

> Or if I install a package on Nix that expects a certain syscall that's not in the Ubuntu kernel yet (maybe like the recent /usr/bin/sleep issue with WSL), then that either breaks Nix or my ability to keep using an Ubuntu kernel. Right?

Yes. Nix won't install a new kernel for you, unless you're using NixOS.

(Technically, I suppose you can put a new kernel down on disk with Nix, but it won't install it in the sense of changing systemwide configuration.)

> I've seen enough trainwrecks when upgrading even across OS versions that I have a hard time seeing how running 2 package managers can work on a single OS without breaking something?

This is generally due to one of two things, both of which Nix sidesteps:

- There are incompatible updates to a file, e.g., you install "mawk," which provides /usr/bin/awk, but some other package expected that to be "gawk" and uses gawk-specific features. Nix doesn't have an equivalent to /usr/bin; there are no paths that are shared across packages. Each file is in a directory which corresponds to the identity of the one package (at one specific version) that provides it, and other packages bake in those dependencies. This is the fundamental cool thing about Nix.

- Systemwide changes when you install/remove stuff, like uninstalling syslogd shutting down your syslog daemon, installing a kernel changing the default kernel you'll boot into, etc. Installing a package in Nix doesn't have any effects beyond putting down files. The most Nix will do is keep a pointer (a "profile") to some set of stuff you're interested in (again, by exact identity). You can change that pointer, but there's nothing special about your particular pointer. If you point to a set of fewer things, other people can keep pointing to the things you no longer see. If you point to more things, that doesn't cause any code to run automatically.
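To make the "profile is just a pointer" idea concrete, the day-to-day commands look roughly like this (a sketch; requires Nix to be installed):

```shell
nix-env -iA nixpkgs.hello   # build/fetch hello and point your profile at it
nix-env --rollback          # flip the profile pointer back to the previous generation
nix-env -e hello            # drop it from your profile (the store path stays on disk)
nix-collect-garbage         # delete store paths that no profile references anymore
```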

Combined, this means that Nix stays out of the way of a traditional package manager. (Also, it means that you can set up your system so non-root users can install/remove things with Nix - you need to do some setup as root to install Nix in the first place, but once it's there, users can't mess with each other.)

If you're familiar with virtualenvs etc., you can sort of think of it like that (although it's not a perfect metaphor) - you can install Python packages into a specific environment, but they don't get installed systemwide, and if you install, say, a web server (gunicorn, Flask, Django, etc.) into multiple virtualenvs, none of them are started automatically. You can't meaningfully install a kernel into a virtualenv (nothing stops someone from releasing a kernel on PyPI, but "installing" it will just download the file and put it somewhere and nothing else).

This does mean that the experience of using Nix is different from using a traditional package manager - you'll need to take care of starting the services you need, etc. Basically Nix reduces the scope of a package manager to say that running services isn't part of it, and then it does a better job of implementing that reduced scope. If you're a company like Shopify, which is unlikely to be happy with the built-in web server that starts automatically on "apt-get install apache2" and shows "Congratulations, you've installed a web server" to your customers, that's totally fine: you were going to do your own service management anyway.


Thank you! Your comments are always great. :) Especially this bit was I think the insight I was looking for:

> This does mean that the experience of using Nix is different from using a traditional package manager - you'll need to take care of starting the services you need, etc.

I was trying to figure out where exactly the trade-offs would be -- so that's one of them: it seems like (in some sense, to simplify) Nix takes responsibility for the data & programs, and leaves you with the responsibility of handling the control flow (at least externally).

If you don't mind, this leaves me with one more question. A typical headache that comes up during upgrades (on pretty much any distro) is config changes -- i.e., where the user and the package manager are both responsible for handling data. (Contrast this with the data/control-flow responsibility split we discussed above.) Like you modify /etc/ssh/sshd_config or any other one of the countless /etc config files that you can't configure with a .d folder, then the system upgrades and leaves you to handle the merge conflict. How does Nix deal with these? Presumably "never modify /nix/etc/..." is neither a realistic expectation for the user nor for the package manager, so there are bound to be merge conflicts, right? But that goes squarely against having a deterministic build configuration, so what happens?


So, there is no /nix/etc - anything in /nix is immutable. I haven't done this and hopefully someone will correct me if I get the details wrong, but I think the way you'd make this work within nix is you'd make your own package, let's call it "my-sshd-config," and it depends on a certain version of sshd. Then you'd have /nix/store/abcd1234-my-sshd-config-1.0/etc/sshd_config. (where "abcd1234" is a hash of everything in your package, including the specific version of ssh you depend on)

If you want to upgrade sshd and the config changes, you'd make my-sshd-config 2.0, and nix would put its files in /nix/store/efgh5678-my-sshd-config-2.0/etc/sshd_config. You could have both of these installed at the same time.

Then, it's up to you to stop the sshd running out of my-sshd-config 1.0 and to start the one running out of my-sshd-config 2.0. Once you're confident of the upgrade, you can then tell Nix to clean up my-sshd-config 1.0, but that's just removing files, since you've already stopped the service.

(We're about to upgrade sshd at work next week and I wish we had something like this, honestly, both so I could distribute the files to machines well before the upgrade and so that it's easy to roll the upgrade back - just stop the new sshd and start the old one. Then there wouldn't be concerns about config drift, forgetting to revert files, etc. The only configuration that I'd have to manage is which one is active.)

Most people who use Nix without NixOS aren't running services like sshd out of it, but NixOS will do basically this for handling service upgrades. Conceptually NixOS gives you one mutable configuration knob, which is "what is all the systemwide stuff for this system." You can depend on some particular sshd and its config, some particular init system and its config, some particular syslogd and its config, etc. If you want to change any config, you make a new package with the changed config, and then flip the one mutable thing to point to it instead. (This does mean that it's easy to roll back your system to an old state if you realize you made a mistake!)

There is a limitation here, which is that Nix can't do a lot about config in your home directory, like ~/.ssh/ssh_config (for the SSH client). You'll have to be careful with that just like you would with upgrades in general. Alternatively, you could make your own Nix packages that include client config, and avoid storing config in your home directory.


Oh dear... so you have to write a Nix package (and learn how the language and package management work) just to modify global config files, instead of just editing them? That sounds like an absolute nightmare for a local machine, though possibly a great tool for automated systems.

Also, if there's no /nix/etc... then what is Nix modifying? sshd (or any other more common program; I'm just using sshd as an example to understand the rest of the system) won't magically know to look at /nix/store/efgh5678-my-sshd-config-2.0/etc/sshd_config, right? I thought you'd at least need a symlink like /nix/etc/ssh/sshd_config to link to it. Unless you're saying Nix modifies your system's /etc/ssh/sshd_config directly and makes it a symlink to the immutable Nix store? In which case, wouldn't it just trample everything your own distro's package manager does in that folder?


NixOS manages configuration in a really elegant way with modules. Modules are a structured way of combining different sources of configuration.

E.g. you might need to configure a list of users on the system. You might manually configure your user in your top-level module:

  users = [ "dataflow" ];
But if you have postgres enabled, then it might configure a user in its own module.

  users = [ "postgres" ];
When the modules are evaluated and combined, you can end up with a list that contains both:

  users = [ "postgres" "dataflow" ];
Compare this to conventional distros. If you edit your distro's default config file, and the distro updates the default config file, how will your modified config be updated? E.g. Arch just puts the new config in a .pacnew file, and you will just keep using your old config. pacman prints a line out, but it often scrolls off the top of the screen and it's easy to miss. It's up to you to manually merge those configs now. E.g. my arch system has 22 .pacnew files in /etc, and occasionally things break or don't work as well because updates aren't applied to the configs. Debian has debconf, but that always seemed more like a set of ad-hoc workarounds than a solution.
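A self-contained sketch of that merging, using `lib.evalModules` from nixpkgs (the `users` option here is a made-up stand-in for NixOS's real options):

```nix
let
  lib = (import <nixpkgs> { }).lib;
  result = lib.evalModules {
    modules = [
      # Declare a hypothetical list-valued option...
      { options.users = lib.mkOption {
          type = lib.types.listOf lib.types.str;
          default = [ ];
        };
      }
      { users = [ "dataflow" ]; }   # ...set in your top-level module,
      { users = [ "postgres" ]; }   # ...and also by the postgres module.
    ];
  };
in
  result.config.users   # evaluates to a list containing both names
```

Evaluating this with `nix-instantiate --eval --strict` shows the merged list.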


It's not done that way.

If you install Nix on Linux or OS X, you just get packages installed; any configuration you edit manually, like you would on another Linux distro.

If you use NixOS, then in addition to packages it also uses modules. Modules take care of setting up services, adding users/groups, etc., and placing the configuration in the right place. You use module options to configure the application, and Nix generates the actual config.

Here[1] is an example of how PostgreSQL is configured.

[1] https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/s...
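For a flavour of what that looks like from the user's side (option names as listed in the NixOS options reference):

```nix
services.postgresql = {
  enable = true;        # the module creates the postgres user, data dir, unit file...
  enableTCPIP = true;   # ...and this toggles listen_addresses in the generated config
  authentication = ''
    host all all 10.0.0.0/8 md5
  '';
};
```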


I think most people who use Nix on Redhat, Debian, Ubuntu and so on probably don't use it to provide system services like `sshd`.

A more common use would be to install a package in a version that isn't available, or where you would need to link to a different version of the library than is available on your system. Nix makes the dreaded autotools dance much more bearable, even for someone like me who doesn't really know any C++.

Another common situation is when you are a software developer working on many projects with overlapping but slightly different dependencies where slightly different sometimes turns out to be incompatible. Ideally you could handle this with `virtualenv` or your language's equivalent, but I often find I end up with multi-language projects where this just doesn't work. For example, if one of my Python libraries also wants some C libraries in a particular version. Nix is ideal for such a situation.
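For that multi-language situation, the usual approach is a project-local shell.nix; a minimal sketch (attribute names as they appear in nixpkgs) pinning a Python library together with the C library it links against:

```nix
with import <nixpkgs> { };

mkShell {
  buildInputs = [
    python3
    python3Packages.pillow   # the Python imaging library...
    libjpeg                  # ...and the C library it needs, in a matching version
  ];
}
```

Running `nix-shell` in that directory drops you into an environment with exactly those versions on your paths, without touching the rest of the system.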

Docker and docker-compose can work too, but, eh, it's kind of slow, and it'll probably break a few months down the line when all the JavaScript library authors decide that building things with `plink` is gauche and everyone should move to `smooch` instead, or when CRAN decides it doesn't want to keep these old versions of libraries any more.

And, what do you do about your linting and formatting and code completion tools? Do those go inside Docker too, because then the Docker slowness really shows up.

(If you don't have these sorts of problems often, then Nix may not help you all that much. I guess reducing the amount of stuff you install and uninstall with `apt` or `rpm` probably still reduces the rate at which your system rots though.)

Having said all that, if you decided that you did want to use Nix to provide packages for your system services, then there's no difficulty in specifying a mutable config file in /etc/wherever. That part would be handled in your init system config (say, a systemd unit) instead of inside Nix.

It's for NixOS that you would use the Nix language to configure everything, and I've found that experience to vary quite a lot.

I've used some obscure packages which didn't expose enough of the configuration to do what I needed by default, so I had to make my own version. That can be pretty annoying and time-consuming.

Nixpkgs' maintainers get a lot of work done individually because of the tooling, but there just aren't enough of them yet.

On the other hand, I've found that configuring popular things like Nginx and Postgres is often much easier in the Nix language.

It's much harder to make a small syntax error, you have access to some helpful templating constructs, and you can re-use the same constants for different config files in different packages.

Similarly, for NixOS, configuring your systemd services is really well handled.

Wow, that ended up long winded. Sorry.


Interestingly, sshd WILL magically know to look at /nix/store/efgh5678-my-sshd-config-2.0/etc/sshd_config. Usually you would run sshd as a systemd service (possible to do and manage with Nix on Ubuntu, but why would you?). If so, you would write the systemd configuration using Nix, and the service file deployed to NixOS (or to Ubuntu; imagine deploying some custom/proprietary service you want isolated from whatever else the client has on the machine) would have all the long hashes auto-magically inserted to produce this (this is on my machine; all I had to write was "services.openssh.enable = true;", but additional configuration is available, see https://nixos.org/nixos/options.html#services.openssh):

  [Unit]
  After=network.target
  Description=SSH Daemon
  X-Restart-Triggers=/nix/store/d4ys2c8kzzcp3g4fv3ivy7a5nkayg7w2-sshd.conf-validated

  [Service]
  Environment="LD_LIBRARY_PATH=/nix/store/71mr6yjmia7y8lw4g5ghk5ag9yq5ir2i-nss-mdns-0.10/lib:/nix/store/zbxfs37qjj6ddrfnzrdnxnkrvvm1ddsf-systemd-245.3/lib"
  Environment="LOCALE_ARCHIVE=/nix/store/9b725cly2a6a61vb8bgz7cyr0xr8y2av-glibc-locales-2.30/lib/locale/locale-archive"
  Environment="PATH=/nix/store/5yx7mv7md9c9nldj69inrnr7rjdkzqq3-openssh-8.2p1/bin:/nix/store/miwvn81sgbbcq5bfglr6v3pwchgsd00c-gawk-5.0.1/bin:/nix/store/ca9mkrf8sa8md8pv61jslhcnfk9mmg4p-coreutils-8.31/bin:/nix/store/hg3albf7g05ljfqrfjhd58rblimrp6ph-findutils-4.7.0/bin:/nix/store/8pajzfyqx1v7dz1znrnrc4pqj5rmnx24-gnugrep-3.4/bin:/nix/store/jpqlmf3wqg281j8fdz50kjl525pfsxjc-gnused-4.8/bin:/nix/store/zbxfs37qjj6ddrfnzrdnxnkrvvm1ddsf-systemd-245.3/bin:/nix/store/5yx7mv7md9c9nldj69inrnr7rjdkzqq3-openssh-8.2p1/sbin:/nix/store/miwvn81sgbbcq5bfglr6v3pwchgsd00c-gawk-5.0.1/sbin:/nix/store/ca9mkrf8sa8md8pv61jslhcnfk9mmg4p-coreutils-8.31/sbin:/nix/store/hg3albf7g05ljfqrfjhd58rblimrp6ph-findutils-4.7.0/sbin:/nix/store/8pajzfyqx1v7dz1znrnrc4pqj5rmnx24-gnugrep-3.4/sbin:/nix/store/jpqlmf3wqg281j8fdz50kjl525pfsxjc-gnused-4.8/sbin:/nix/store/zbxfs37qjj6ddrfnzrdnxnkrvvm1ddsf-systemd-245.3/sbin"
  Environment="TZDIR=/nix/store/wmry9mqmimq8ib8ijli4g1yx92gxjli5-tzdata-2019c/share/zoneinfo"
  
  
  X-StopIfChanged=false
  ExecStart=/nix/store/5yx7mv7md9c9nldj69inrnr7rjdkzqq3-openssh-8.2p1/bin/sshd -f /etc/ssh/sshd_config
  ExecStartPre=/nix/store/1mzzy0dwjzy6kcwad7q79pvc444yn288-unit-script-sshd-pre-start
  KillMode=process
  Restart=always
  Type=simple
No symlinks to /etc/sshd. This service would be independent of other software on the host system, other than PID 1 managing it.


Interesting, thank you! So that means programs are sometimes patched to look for configs in nonstandard locations generated by Nix during installation time. But then if I wish to change any of those configs (maybe to change one of the defaults)... I have to copy them, make my modifications, generate my own package for them, and install them to wire them in as substitutes for the existing packages. Then when the upstream package changes the config file, I have to generate a new package with all the conflicts manually resolved, right? It seems a bit of an arduous process, though I do see the appeal.


> programs are sometimes patched to look for configs in nonstandard locations generated by Nix

This may be true, but typically the NixOS module will specify the generated config file via the command line [0] or symlink the generated config to the default location in /etc [1]. I don't believe it's terribly common to patch programs to have different config file paths in nixpkgs.

If you're just using plain Nix on a foreign distro, and not, say, home-manager or similar, it's up to you to provide your own configuration including service units. Presumably you could use Nix for this as well, but I'm not terribly familiar with using Nix on foreign distros.

[0]: https://github.com/NixOS/nixpkgs/blob/de493bd74921139860624e... [1]: https://github.com/NixOS/nixpkgs/blob/de493bd74921139860624e...


Woah, I'm lost here. The idea is that it uses /etc/ssh/sshd_config as its input? (How do you handle upgrades, then?) What is d4ys2c8kzzcp3g4fv3ivy7a5nkayg7w2?


In NixOS, typically the config files will be generated from scratch using the Nix language, see for example sshd [0] or bind [1].

`d4ys2c8kzzcp3g4fv3ivy7a5nkayg7w2` is a hash of the inputs to a derivation (a package in Nix terms).

In a sense, at its lowest level, a derivation is a function `f(x) -> y` where `x` is a set of Nix expressions (including the inputs and how to build it, often in bash) and `y` is a Nix store path. The store path includes the hash, which is a hash of `x`.

For bind, the config file itself is a derivation; it just passes a plain string (interpolated with variables via Nix) into the writeText wrapper.
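A minimal example of that pattern with nixpkgs' writeText (the file name and contents are arbitrary):

```nix
with import <nixpkgs> { };

# Building this expression produces a store path like
# /nix/store/<hash>-example.conf, where <hash> covers the contents;
# change the string and you get a different path.
writeText "example.conf" ''
  listen-port = 5353
''
```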

[0]: https://github.com/NixOS/nixpkgs/blob/de493bd74921139860624e... [1]: https://github.com/NixOS/nixpkgs/blob/de493bd74921139860624e...


I haven't used NixOS, so take me with a grain of salt - it looks like the actual way of doing this in NixOS is that you have a systemwide configuration file that you can edit, and running "nixos-rebuild" will pick up your changes and automatically make the packages you need. See "Changing the Configuration" in the manual: https://nixos.org/nixos/manual/index.html So, at the end of the day, there is a Nix package, but you don't interact with it by using the packaging tools, you interact with it by editing a file and then running a command that snapshots the current version of the file and does everything for you.

If you're running your own services, you don't have to go through Nix packaging, you can handle this yourself if you have a way you prefer. For example, if you're running WordPress out of your home directory, you can have a git repo with some config files and a script that runs a particular version of Apache, MySQL, PHP, WordPress, etc. out of Nix. If you want to upgrade, edit the versions in the script and also the config files, then tell them all to restart. You can't rely on having a single systemwide version of Apache like you can with a traditional distro, but on the other hand, you aren't tied to whatever version the system wants to give you, you can keep running the current version until you're ready to upgrade.

I'm looking forward to Shopify's part 2 blog post to see what they do exactly. :)

> sshd (or any other more common program; I'm just using sshd as an example to understand the rest of the system) won't magically know to look at /nix/store/efgh5678-my-sshd-config-2.0/etc/ssh_config, right?

Conceptually, my-sshd-config includes a script (or systemd unit, or whatever) that has a reference to a particular version of sshd and also has your config, and so it would run "/nix/store/aaaa1111-openssh-9.0/bin/sshd -f /nix/store/efgh5678-my-sshd-config-2.0/etc/sshd_config". The openssh package doesn't know about you, and you can't change it, but you know about it. (In other words, the inputs that resulted in the hash efgh5678 include "aaaa1111-openssh-9.0".)

It looks like the actual way you do this in NixOS is that the sshd package provides a function in the Nix language which takes some config as input and spits out a package as output. So your systemwide config file loads the sshd package and calls a function, which returns a systemd unit with the right filenames. https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/s...


Wow interesting, okay. I think I'll probably have to give it a shot at some point to try it out. Hopefully it'll live up to the expectations :-) thanks a ton for all the explanations!



Those are handled by NixOS - that's the purpose of NixOS, it was built on top of Nix explicitly to manage such things. You can read some about NixOS at https://nixos.org/nixos/about.html


And in particular, NixOS is a full OS, like Arch/Ubuntu/etc. You can't install NixOS side-by-side with Arch any more than you can install Arch side-by-side with Ubuntu. Nix-sans-NixOS is only the stuff that can be installed side-by-side with an existing OS - no service management, no kernels, etc.


Ah I see. Yes, as someone who contributes to nixpkgs: there are patches to use the /nix paths rather than the standard POSIX layout.

NixOS is not POSIX compliant and does not try to be.


Not a Nix user, but my understanding is that it's a standalone package manager with its own repositories. On a non-Nix distro, installing a package with Nix is akin to installing a Python module with pip, instead of the distro's package manager. It would not be managed at all by the distro's package manager. On NixOS, Nix is the distro's package manager.


Thanks, but I already got that much. It doesn't answer my question though. If Ubuntu has made a modification to a package (that's basically the entire point of most distros, otherwise they'd be Arch), should I expect those changes in whatever Nix installs?

It also leaves so many other questions unanswered, like what happens if I install GRUB or a new kernel or something else that's supposed to modify the system globally... but that's secondary.


Nix packages cannot modify the system globally, by design. Not even on NixOS. This is why Nix allows unprivileged users to install anything.

When you install a package with Nix, all you are doing is dropping a symlink in your ~/.nix-profile pointing to some /nix/store/<unique-identifier> item.

When you build a package with Nix (also does not require root privileges), it happens inside a container that can only write to /nix/store/<unique-identifier>.

The <unique-identifier> is a cryptographic hash based on all the inputs (dependencies) to the package (also /nix/store/<hash> items) as well as the build script.
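An illustrative shell session of what that looks like on disk (hashes are placeholders; the exact chain of symlinks varies by setup):

```shell
$ readlink -f ~/.nix-profile
/nix/store/<hash>-user-environment
$ readlink -f ~/.nix-profile/bin/hello
/nix/store/<hash>-hello-2.10/bin/hello
```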


This is a really good explanation, thank you!


NixOS only has nixpkgs modifications, not Ubuntu's, as much as Fedora doesn't have Ubuntu's modifications. My understanding is that packages available for Nix are patched to work within its context, but otherwise kept as original as possible, with optional configuration switches.


The documentation [1] mentions the possible configuration flags for a package, including Grub (which is not the default bootloader, btw).

[1] https://nixos.org/learn.html


No, you should not expect Ubuntu's modifications in packages installed through Nix.

Things like GRUB or the kernel version are handled at the level of NixOS, because, as you say, they affect the system globally.


It is pretty easy to be confused. Especially since it is also a language, yet all the other replies at the time of writing have failed to mention that.

While I like the project as a whole, their naming is horrible. Just wait until you get into `nix-env` vs `nix-shell` etc...


> It is pretty easy to be confused. Especially since it is also a language, yet all the other replies at the time of writing have failed to mention that.

The OP mentioned it. It helps to read the OP if one wants to avoid confusion.


Yes, I really wish they didn't name it nix, it is so hard to search for things, because people also use *nix in place of Unix.


Thanks, yeah. Conceptually I'm having a hard time wrapping my head around it since package managers are so tightly tied to an OS... I feel like surely I can't just use it in place of (say) apt or pacman without causing problems? And I haven't even gotten to the language/shell/etc. as you mention...


You're right that using 2 traditional package managers in the same environment will cause problems almost immediately, since both will want to control the same resources.

This is less so with Nix, as it mostly operates within its own folder, the Nix store, and the binaries there are then symbolically linked into your profile. It is actually designed to be able to be used this way. I know people are using it on Mac OS, and I intend to install more and more packages through it to later possibly transition to NixOS.


You use the language to define the configuration, packages, and so on. NixOS isn't so much an OS configured by a package manager as an OS plus package manager that you configure with the Nix language.

Then there is also a lot of tooling to hide all of the symlink magic. It is a very deep rabbit hole. Most of the quick-touch examples will basically look like magic.


Package managers are not tied to the OS. Essentially your fundamental misunderstanding here, from reading your comments, is that you are expecting a package manager on Ubuntu to manage "Ubuntu's packages", when the packages belong to a repository as accessed by a package manager, and "Ubuntu" doesn't really exist: "Ubuntu bionic" is merely "a bunch of deb packages that you can install from a particular repository using apt", while "CentOS 7" is merely "a bunch of rpm packages that you can install from a particular repository using yum".

You can add Ubuntu eoan or even Debian stretch repositories to your APT configuration on Ubuntu bionic, and install those packages... now, the naming and compatibility of dependencies might be different as you get further and further from what you were "supposed" to be using, but they will download and install (at worst an installation script will be missing a dependency and the package will be left "half installed", which is a technical APT term).

You can also install rpm on Ubuntu: it is even in the Ubuntu bionic repository. Now, what happens if you install an rpm file using it? It might very well overwrite something from a package you installed by apt, and neither might notice that happened. But you can do it. You can install yum (this will require manual install as I don't think yum is available from any common Ubuntu repositories) and then give it a CentOS 7 repository and install all the packages and what you will be left with will be approximately CentOS 7 (with some detritus from Ubuntu).

So on any system apt is managing deb packages and yum is managing rpm packages and where those packages come from is just some configuration, so you can install apt on CentOS 7 and use it to install packages from Debian stretch. You can't use apt on CentOS to manage CentOS packages as apt doesn't do that (though maybe you could teach it an "rpm method" or something but rpm's version numbering and dependency management aren't really the same so this is now just a confusing digression to avoid "technically you could" responses: yes, but then you are coding that yourself).

The only real exception to all of this is the kernel, but not even always, as the kernel is kind of a file, though the way it is configured in the master boot record could be different... but like, you might have some distribution-local kernel patches that their tools rely on for some reason, and upgrading libc on a running system is fraught with peril even when you aren't doing something insane like this, so: don't do this, but know it works.

I routinely thereby create a little folder on a Debian/Ubuntu box in which I can chroot to install a little world of an entirely different set of packages from a different package manager maintained by a different vendor as provided for some random specific version of their distribution. (Alternatively, rpm supports an argument to set the system root.) In that world there are no conflicts, as you made a folder for it.
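A sketch of that chroot trick with Debian's standard tooling (the target release and directory are arbitrary; needs root and network access):

```shell
# Populate a directory with a minimal Debian stretch userland
debootstrap stretch /srv/stretch http://deb.debian.org/debian

# apt inside that folder now manages its own little world
chroot /srv/stretch apt-get update
chroot /srv/stretch apt-get install -y curl
```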

When you install packages from apt or yum, usually they assume they could go in /. That is going to maximize conflicts unless you do the chroot. Nix has chosen to package their stuff not like that, and so you can install their packages on any other environment without it conflicting or overwriting.

So no, you don't "use it in place of apt" to manage Ubuntu packages, but maybe Nix and Ubuntu have both chosen to package the same thing you want (this is hopefully common, as both are hopefully quite complete), and so you might "use it in place of apt" because you don't want to install an Ubuntu package, you want to install a Nix package instead. And as Nix packages go to special folders automatically, it doesn't damage your Ubuntu install.


POSIX (e.g. Linux), Mac, and Windows. Yes, it is magnificent.

Immutable, deterministic software on all 3 major platforms. Combined with cached build artifacts, builds and their outputs (including complete target system updates/upgrades) are blisteringly fast, cryptographically secure and completely deterministic.

EDIT: Sorry, yes; Windows only via WSL, not natively on Windows proper.


Windows? You're kidding me! Where do I see Windows support? On their website I only see Mac and Linux.


https://github.com/NixOS/nix/pull/3185 please somebody throw github.com/volth or me money to finish this


Would it be possible for you guys to team up and do an opencollective campaign for nix on windows?

The static-haskell-nix[1] project was successful with this approach.

Nix on Windows would be amazing!

[1]: https://opencollective.com/static-haskell-nix


I cannot speak for Volth, but I am far more pressed for time than money, and largely paid to work on open source as it is, so it's not like that would be dramatically more fun than what I normally do or something.

Also I think in the short term this needs a burst of effort rather than trickle of paid maintenance costs: we're firmly in capex not opex territory.


You can use it on WSL1 and WSL2 but you can't use it natively on Windows. I used it on WSL2 just this afternoon and it was seamless but that makes sense because WSL2 is basically just a Linux VM anyway.


Although I don't see why it wouldn't work on Windows natively. Sure, it would probably need a complete reimplementation to replace all the POSIX stuff with Windows stuff, but fundamentally it ought to be possible?


The biggest issue IIRC is that Bash doesn't play well with Windows. Things like paths are handled differently enough to cause lots of breakage. We could rewrite all of the Bash in Perl or Python, which have better cross-platform support, but that's a lot of work.

That said, there have been attempts to get native Windows support:

https://github.com/volth/nix-windows/commits/windows


Why rely on bash? Windows has Powershell and symlinks, what else is needed to create a nix-style package manager for Windows?

I'm not saying it's a good idea, I don't think anyone would really want that, but it could be done.


Sure in theory but there are a few open bugs:

https://github.com/NixOS/nix/issues/2651 https://github.com/NixOS/nix/issues/2292 https://github.com/NixOS/nix/issues/1203

There's a locking issue with SQLite and the Sandbox has some namespace issues where Linux and Windows behave differently. I don't know how eager they are to make it work on Windows either, but at least WSL works :)


There's also a work in progress for a Nix FreeBSD port[1].

[1]: https://reviews.freebsd.org/D17766


Nix really is a purely functional, lazily evaluated language that is great at expressing dependencies in your project. You can think of it as a make on steroids, but it is much more than that.

Because the language expresses what to build from a known state (i.e. nothing is installed), combined with its purely functional properties, it is supposed to always generate the same output for the same input (source code, dependencies, configuration options, system architecture, etc).

Nix can be thought of as a package manager, and its nix-env command (which, BTW, Nix purists discourage using[1]) behaves like a package manager, but IMO it's more of a build system.

After the author of Nix wrote his thesis, he mentioned that the language can describe an entire operating system. That's how NixOS happened. You can have a single configuration file that describes what needs to be installed on your OS, how it's configured, etc. The great thing is that you can take such a configuration file and recreate another machine with it, configured exactly as you want it. No need for chef/salt/ansible etc. It actually has an edge over these tools: for example, if you say that a specific package needs to be installed and later remove that package from the config, nix will remove it from the system as well (with chef/salt/ansible, you would need to add a state that uninstalls the package). Another benefit is that all changes are atomic: you either have your changes applied or nothing is changed, there's nothing in between (this makes things like replacing X11 with Wayland, or KDE with Gnome, just another change that you can always revert).

It's a very powerful tool, and the more I use the more amazing it seems. It shows that if you target specific problems the right way a lot of common headaches are eliminated.

[1] it's advisable to never install packages by hand using nix-env, since that adds mutability to your system. Instead you should use nix-shell (another great tool) to temporarily make a specific application available, use the global configuration.nix to install packages globally on the system, or use the home-manager extension, which lets you configure each home directory (that covers dot files, but also each user can have their own individual packages, and it goes further: for example, you can even control which extensions Firefox should have installed)
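To make the nix-shell route concrete, here's a minimal sketch (the package names are just examples): a shell.nix that, when you run `nix-shell` in the same directory, drops you into a shell with those tools on PATH without installing anything into your profile.

```nix
# Minimal shell.nix sketch; running `nix-shell` here gives you a
# shell with git and jq available, leaving your user profile untouched.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [ pkgs.git pkgs.jq ];
}
```

For one-offs you don't even need the file: `nix-shell -p git jq` does the same thing ad hoc.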


home-manager is wonderful. I haven't used nix-env in years.


> From the looks of it it's a package manager that I should be able to use it on any POSIX system, but I doubt that's the case?

No, that's basically true. It's fully supported on Linux and Darwin, assorted BSD support is underdeveloped but in scope.


Nix is a package manager, and nixpkgs is its "official package repository." Both of those pieces should work on any (decently popular) POSIX system.

NixOS is a Linux distribution that uses Nix in place of apt or yum or what-have-you, and uses Nix to allow the entire system to be configured declaratively.


One really cool thing about NixOS is you declare the OS just like you might a DockerFile, and you can choose which configuration file to use at boot time. This lets you install applications just for the current session (to try out a new tool), add them to the configuration file if you like them, and roll back to the previous version if something breaks. Versions are declarative too (PSql == X.X.X) and upgrades/downgrades are set there as well. If I want to share an environment with someone, I can just share the configuration file minus the secrets.
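For illustration, a fragment of what such a configuration file might look like (a hedged sketch; the package and service names here are just examples, the attribute paths come from the NixOS options system):

```nix
# Hypothetical slice of /etc/nixos/configuration.nix: the package list
# and the postgresql version are pinned declaratively in one place.
{ config, pkgs, ... }:

{
  environment.systemPackages = with pkgs; [ git htop ];

  services.postgresql = {
    enable = true;
    package = pkgs.postgresql_11;  # the "PSql == X.X.X" part
  };
}
```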


> One really cool thing about NixOS is you declare the OS just like you might a DockerFile,

But DockerFiles have a really terse, self-evident, trivial to read language. It is obvious what a given DockerFile means even if it's the first one you see.

For nix; well, I have seen a few nix files and each one leaves me more confused than the last. Can anybody point me to good examples of beautiful, single-file, complete, self-contained terse nix examples that describe a few simple systems? Everything that I seem to find are small fragments of stuff.


I don't know if it's beautiful, but here is one of mine.

It sets up me as a user: https://github.com/cswank/dotfiles/blob/master/nix/configura...

Sets up a hobby app I built via systemd: https://github.com/cswank/dotfiles/blob/master/nix/configura...

Defines a cron job that backs up my music files: https://github.com/cswank/dotfiles/blob/master/nix/configura...

Opens up ports in the firewall (they are all closed by default): https://github.com/cswank/dotfiles/blob/master/nix/configura...

The rest is pretty much boilerplate from the default configuration you get when starting from scratch.


This is because Nix needs to express a lot more so builds are reproducible and incremental. Docker doesn’t care about these things.


Even if Docker doesn't really care about this, the DockerFiles themselves can also describe a reproducible process, and they are certainly "incremental" due to caching (not sure what you mean by "incremental"). Does nix power merit such a big compromise against simplicity and readability?


> the DockerFiles themselves can also describe a reproducible process

This is true, but Docker does almost nothing to support reproducibility. As soon as you do an apt-get, reproducibility goes out the window.

> they are certainly "incremental" due to caching

Caching is layer-based. Docker has no awareness of whether or not a particular dependency has changed or what is necessary to rebuild it. Docker only understands layers, which are inherently linear--you can only try in vain to force your dependency graph into that linear layer structure. An incremental build tool only rebuilds things that have changed (or whose dependencies have changed).

> Does nix power merit such a big compromise against simplicity and readability?

This is a good question. First of all, understand that Nix isn't intended to be a build tool, it's a package toolkit. It's intended to replace all of the stuff that the debian and centos people use to build and manage packages in apt and yum repos. That said, I personally think Docker's build system is sooooo bad that under many circumstances, Nix does a better job.

For example, if your whole repo is Go or Rust or some other language with a sane build and deployment story, then they already have incremental, reproducible build tools and it's fine for Docker to call into them (or to invoke them outside of Docker and just have Docker copy in the static artifact). But if you have a heterogeneous tree including C or Python or other languages which lack sane build tooling, then you need something to stitch all of that together in some (somewhat) reproducible, incremental fashion if you hope to be able to build reliably and in some timely fashion.

That said, there are a few other tools that are purpose-built for this problem; however, they all tend to be poorly designed, buggy, and hard to extend. I'm thinking of Bazel, Pants, Buck, etc--all of which are clones of Google's internal build system, Blaze. So far, Nix seems to be the best in class, even though it doesn't aspire to be in the class at all.


Great description, thanks. Question: do you know how Nix deals with language package managers (like pip)? If I do sudo pip install (leaving aside the usual debate as to its merits), does that then wreck my Nix install, especially if it happens to contain another version of the same package? And does it instantly make things non-reproducible again (just like the apt-get issue you mentioned)? Or does Nix somehow get around it?


It's totally possible to introduce mutable, non-reproducible state on a system that uses Nix, even if that system is NixOS. However, you can't do so in your Nix store, which is where all the Nix-installed things live (at least, not without a lot of hackery - Nix tries very hard to make the entire store immutable).

Basically this works out to: anything you install or configure with Nix, is immutable and reproducible. Anything you don't is at your own risk. Unfortunately there are still some "pockets of state" that haven't been solved yet, most notably applications storing state in ~.

For most every language dependency management system, there's a Nix equivalent of some sort that allows you to manage those dependencies without invoking the non-reproducible mutable tooling.


Nix doesn't pay any attention to the rest of your system. It keeps everything it needs (including its own Python interpreter) in its own "nix store" directory, at /nix/store. You can crudely think of it as having its own virtualenv for every distinct Python target you use.


Bazel is buggy?! Bazel is not a clone, it's the refactoring of the internal build system, basically. I am 99% certain that Blaze currently has Bazel at the core.

And it's pretty damn robust.


Bazel is definitely buggy. One of the silliest bugs is this one (and as far as this one goes, I do not understand how this issue exists when Blaze has been used at Google for such a long time): https://github.com/bazelbuild/bazel/issues/9419

One of the more fundamental bugs is this one: https://github.com/bazelbuild/bazel/issues/4558

I've even had to clean --expunge and clear my disk cache to fix some build errors before. Bazel paints a rosy picture of the world, but the real world is messy, and Bazel builds aren't as reproducible as it makes you want to think. (Which BTW is one thing that's had me asking so many questions about Nix.) At least the fact that it does not track changes outside the workspace means you can update your system compilers (or Python, or whatever) and end up with build outputs that will not be reproduced from scratch.

It takes a lot of time to track the issues down, so I don't necessarily know what goes wrong every time; these are just two issues I remember off the top of my head. But if you haven't run into bugs with Bazel then you haven't used it seriously enough.

And then there's the Python support which is even more half-baked right now. Their own comments readily mention that the Python support is not idiomatic. (Not just in terms of syntax, but some of it goes against Bazel's underlying design.)

Of course I say this with the full realization that "fixing" these things can be a huge undertaking (up to and including hooking the compiler programs manually), but that doesn't mean they're not bugs.


As far as bazel bugs, it's safe to say that if you stick to the ways that Google uses blaze, then you are unlikely to run into bugs. E.g. Google doesn't run blaze on windows or use system toolchains, so it makes some sense that those are where the issues are. Unfortunately, Google also uses their own rules, so the rule quality for open source rules varies somewhat since they haven't all been battle tested in the same way. blaze and the Google codebase have also co-evolved for like 15 years to work nicely together, whereas anything adopting bazel today might be an awkward 3rd wheel for a while.

Disclosure: Ex-Googler


Yeah, I've figured as much in hindsight. It'd have been nice if they were more forthcoming about this than leaving it as a surprise for everyone to figure out (including, now, for the parent to whom I replied).


For a long time, Python3 support was advertised and there were lots of Python3 flags, but it was patently broken and there were many tickets that acknowledged as much. Allegedly that has changed now, but my experience was so bad I ran from it.


It's probably better than when you used it now, but it might depend on what you're trying to do (e.g. Cython has more room for improvement).


It's worth noting that the bugs aren't evenly distributed across language plugins. Bazel might be solid if you're writing Java or something.


Dockerfiles which just pull packages from distribution repositories are not reproducible in the same way that Nix expressions are.

Rebuilding the Dockerfile will give you different results if the packages in the distribution repositories change.

A Nix expression specifies the entire tree of dependencies, and can be built from scratch anywhere at any time and get the same result.


Because Dockerfiles delegate all the complexity to a package manager (apt, apk, ...), which has its own languages (eg. debian/rules for apt).

Using Docker without any package manager doing this job would be a completely different story.

And btw, you can also use Nix to build docker images. For example, here is what would be the equivalent of a Dockerfile but with Nix: https://nixos.org/nixpkgs/manual/#ex-dockerTools-buildImage
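Roughly, such an expression looks like this (a simplified sketch modeled on the manual's example, not a copy of it):

```nix
# Sketch: build a Docker image containing redis with dockerTools,
# no Dockerfile involved; `docker load` the resulting tarball.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildImage {
  name = "redis";
  tag = "latest";
  contents = [ pkgs.redis ];
  config = {
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
  };
}
```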

> the DockerFiles themselves can also describe a reproducible process

It's trickier. You need to pin all versions of all packages you use, even indirect dependencies.


Dockerfiles are as reproducible as a bash script (and that's not a hyperbole). Yes, they are incremental but as you pointed out it is per statement.

This requires:

1) it is on you to specify the correct order

2) you are limited to 42 layers

3) it doesn't work well even if you know what to do and care about it:

- in most setups that I've seen, once you trigger the build system it still starts from scratch (for example, docker image builds pull all dependencies again and again, every build). nix build won't rebuild stuff that was already done, unless you explicitly instruct it to with the --check option

- typically people use `COPY . .` because it is less work, but in reality certain files need to be pulled in earlier than others; doing that properly is more work and more prone to errors (like typos, forgetting about a file, etc.), and you're more likely to run into the layer limit. With declarative configuration, nix will only rebuild derivations whose input changed, and it doesn't matter in what order you specify dependencies.

> Does nix power merit such a big compromise against simplicity and readability?

The simplicity causes complexity everywhere else. Is this really that complex? Here, for example, is everything that's needed to build pgbouncer[1] (you can ignore the meta section, that's informational only). I'm using that example since I worked with it recently.

Nix actually isn't a complicated language, just a different one. I believe the big problem people have is just the different programming paradigm, but the paradigm used by nix is really what is needed to solve the problem correctly (purely functional -> for the same inputs (source code, dependencies, system, etc.) you always get the same output; lazily evaluated -> build only things you actually depend on). The properties of the language allow you to define how to build a specific program in a declarative way.

[1] https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/sq...
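For readers who don't want to click through, the linked expression is roughly this shape (a simplified sketch from memory, hash elided; don't treat the details as authoritative):

```nix
# Approximate shape of a nixpkgs package expression like pgbouncer:
# declare inputs, pin the source by hash, list build dependencies;
# stdenv.mkDerivation supplies the standard configure/make/install.
{ stdenv, fetchurl, libevent, openssl }:

stdenv.mkDerivation rec {
  pname = "pgbouncer";
  version = "1.12.0";

  src = fetchurl {
    url = "https://www.pgbouncer.org/downloads/files/${version}/${pname}-${version}.tar.gz";
    sha256 = "...";  # content hash pins the exact source; elided here
  };

  buildInputs = [ libevent openssl ];
}
```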


> But DockerFiles have a really terse, self-evident, trivial to read language. It is obvious what a given DockerFile means even if it's the first one you see.

Is it, though? It's obvious what commands it runs, yes, but that doesn't translate to actually understanding what the end state is, and that's precisely the bit that matters.

Yet that's something you have to infer from a pile of operations that mutate global state, which is precisely the sort of thing that's very difficult to do reliably.

Sure, the result is that a Dockerfile is less code - because you're outsourcing a lot of the "understanding what this does" work to the reader's brain rather than to the code. That's not a good thing.

> Can anybody point me to good examples of beautiful, single-file, complete, self-contained terse nix examples that describe a few simple systems?

It's not completely self-contained (eg. the hardware configuration is in separate files for infrastructure migration reasons and some custom abstractions are used), but here's an example of the actual configuration of two of my servers, warts and all: https://git.cryto.net/joepie91/morph-rc/src/master/configura...

In practice, "self-contained" is something you're not likely to see in real-world usages of Nix.

Sure, people start out with a self-contained configuration, but they tend to discover pretty quickly that configuration is code, and that means that you can abstract out the repetitive and messy bits to separate modules, and now it is no longer self-contained.

Basically, asking for a self-contained Nix configuration is going to yield similar results to asking for a self-contained source file for a piece of software. You'll either get a) non-real-world code, b) a big mess of stuff dumped into a single file, or c) multiple files.


This file explains how to configure the system when you first set it up [0]. I've just added the one I currently have set up on my dev box to Github - it's not complex by any means but has what I need :) [1]

[0] https://nixos.org/nixos/manual/index.html#sec-configuration-...

[1] https://github.com/oneEdoubleD/nixos/blob/master/config.nix


Nix is a package builder & manager that creates reproducible build environments. Using the Nix language, you specify what the build environment should look like ($PATH, what should be in the build folder, etc). Nix then calls a program of your choice to do the actual building.

Just like Nix lets you specify a build environment using the Nix language, NixOS lets you specify, using the Nix language, what you want your operating system to look like (systemd services, $PATH, etc).

You should be able to use Nix, the package builder & manager, on any POSIX system afaik.


Nix is a package manager. NixOS is an OS which includes the Nix package manager. My understanding is that a feature of Nix is you can declare an OS (I.e. declare the collection of configurations and software etc for an OS) which Nix will process to provide you with an OS called NixOS.


So it processes the config file (and thus all needed packages) one time or does it process it every time you boot?


You type

   nixos-rebuild switch
and it builds you an entire OS and drops you into it. It also makes the new OS the default on startup.

Because it's all purely declarative, your old OS is still there. If things go wrong then you can just pop back to it, either in the command line with:

    nixos-rebuild switch --rollback
Or when you reboot, the previous 'generations' of your operating system show up in GRUB, so just choose an older one to boot.


Thanks for your comment. This is very interesting for my Raspberry Pi. The ability to kit it out according to function is great.


Whenever you run `nixos-rebuild switch`, a symlink is created at `/run/current-system` pointing to the current system. That's what is used on every boot until another change (you also have an option in the boot loader to boot into prior generations; each is just a symlink).


> From the looks of it it's a package manager that I should be able to use it on any POSIX system, but I doubt that's the case?

That's exactly the case. It works by installing apps per user and not globally, so they're put into your home directory.


I started reading this thinking Nix was a programming language, became very confused when it started talking about files and an object database.


Smalltalk has object databases, as images.


It’s a package manager that underlies NixOS.

It can also be run under any other Linux distro, and macOS.


And of course Windows Subsystem for Linux


> What is Nix?

It's everything you could ever ask for and more. On a more serious note, I get the question though. This article isn't very compelling.

NixOS is a Linux distribution that allows you to specify your system in a declarative way. Instead of running imperative commands like "apt-get install git", you declare that you want git installed with something like environment.systemPackages = [ git ... ]; This may not sound that interesting to you, but it makes your system reproducible. You have some plain text files that describe your entire system, and you can check these into git or whatever. If you reinstall and use the same configuration as before, you will be back at the same state, so it makes no sense to do a reinstall of the entire OS; likewise, if you want to upgrade NixOS versions it doesn't really make sense to do a complete reinstall either. Rollbacks are also built in: if you make a mistake you can quickly go back, and your previous "generations" are shown in the boot menu if you restart your computer.

NixOS can also run virtualized environments like docker, VMs, etc. It also has its own concept of containers, which can be thought of like docker containers except they run NixOS, so you specify them the exact same way as any other NixOS configuration. Unlike docker, it allows you to share packages between containers and the host system while still having isolation.

Nix packages are packages like in most other package managers, and it's relatively easy to create your own. You can use NixOS configuration to install packages, or use Nix on its own as a non-root user on other OSes, for example to replace homebrew on Mac OSX. It also allows you to install multiple versions of the same package. You can also use it to build docker images, so you don't actually need a Dockerfile, and you can install the docker images, which get saved to the nix store and managed just like any other package. That's right: this means you can both build and run docker images without actually using docker.

NixOps is a tool for deploying NixOS to multiple computers. It builds everything on the host and copies it to the nodes. If two of your nodes use the same package it's only downloaded once. This means no configuring an apt-cache or something like that if you don't want all your machines downloading their own copies of everything. It also allows you to provision and deploy to the cloud. It basically replaces terraform, cloudformation, etc.

Nix is the language used to configure NixOS and NixOps, as well as to create packages that can be installed. Nix files themselves don't do anything; they just build a configuration object. You can almost think of them as glorified JSON generators that generate the configuration which the respective nix tool uses to get whatever job done it needs to.

Now don't get me wrong, the learning curve is very steep, but once you learn it you "get it" and realize how it all ties together, and it starts to make perfect sense. I left a lot out here (like development), but it basically solves every need I have in programming, infrastructure, and computers in general, and I won't go back to anything else.


Nix is a powerful package manager. I can confidently compile and run multiple incompatible versions of software simultaneously. I can build projects from years ago. I can package large projects from different ecosystems (python 2/3, c/c++, go, javascript) and be confident they will not interfere with each other. I can try bleeding edge software with no risk to it interfering with my system. It is faster and less hassle than juggling various Docker containers and VMs. It protects me from dependency hell.

But overall: it makes me more productive. It is my secret weapon to manage the complexity of software development.


As an example, what's minimally required to run two different versions of program-x from the command line. Can I do something like this easily?

    cat-8.22 /etc/passwd | cat-8.3 -A


Minimal example:

  export v824=$(nix-build 'channel:nixos-15.09-small' -A coreutils)
  export v830=$(nix-build 'channel:nixos-19.09-small' -A coreutils)
  $v824/bin/cat /etc/passwd | $v830/bin/cat -A

You can get better specificity by using a nixpkgs git repo/hash instead of channel or adding `--no-out-link`, but this is minimal.


Even more concise:

nix run -f channel:nixos-19.03 hello -c hello | nix run -f channel:nixos-20.03 ripgrep -c rg Hello


First: It depends on what you mean by "easily"

General installs from Nixpkgs aren't going to be highly-idiomatic. You can install two different versions of coreutils, but one of them is still always going to be first on your PATH. So, if you want to specify the version in each command like this, you'll have to override the expressions to write a version number into the name of each binary. (But, yes, they're both on your PATH. If by "easily" you mean you could run `type -a` and pick the correct full path for each and then run them in a pipeline like this--yes.)

I guess you could also build the path of each into an environment variable, or maybe create a wrapper for each with the version name included.

Second: This is an awkward example because people rarely use multiple versions of coreutils concurrently, so there's just one version at a time in a current version of Nixpkgs.

It'd be easy to put more than one python or ruby or postgresql in the same expression--things people are more likely to need (and thus multiple recent versions are all available at once)--but you'll (probably) have to work a little harder to do something like coreutils (i.e., copy in a complete expression for each, or pull each version from two different commits in Nixpkgs).

Nix makes it fairly simple to override an existing expression to change something like the version/commit used, but it breaks down if there are substantive changes in what's required to build them. Given the nearly 4 years between the early 2010 release of 8.3 and the late 2013 release of 8.22, I'd be surprised if they'll build with the same expression.


tomberek gave you an example of how to do it, although I think in real life you probably won't need something like this. A much more common situation is having two programs that depend on conflicting versions of a library. You often get this if you start including custom repos in your system because you need a specific version of something. Sometimes you make that application work, but then you break something else.

Docker is often pitched to solve this, you get one application in one docker, and another in another. Since docker isolates and pretends to have 2 different operating systems it kind of allows this to happen, but that's hacky.

Nix solves this correctly. Instead of having some central place listing what libraries are installed on the system, it actually installs every component at a unique path. This way application A can use library X and application B can use library Y, even when X and Y are different versions of the same library. What's more, derivations in nix can be extended like objects in an OO class: you can easily modify application A to use library Y and B to use library X. And if such a build is not available in the cache, Nix will know how to rebuild it (no need to worry about having the compiler and libraries installed).
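That kind of swap looks roughly like this (a hypothetical sketch: `myApp` is an invented package name, and this assumes its expression takes `openssl` as an argument):

```nix
# Hypothetical sketch: rebuild the same package expression against a
# different library version by overriding one of its arguments.
let
  pkgs = import <nixpkgs> {};

  # myApp normally receives `openssl` as an input; hand it another one.
  myAppOldSsl = pkgs.myApp.override {
    openssl = pkgs.openssl_1_0_2;
  };
in
  myAppOldSsl
```

Both builds can then coexist in the store under different hashed paths.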


Yes, that's probably the more typical case. Thanks!


Stumbled upon NixOS yesterday, and today it's on the frontpage, 1st link. Same thing happened with OpenBSD the day before. What's tomorrow, Qubes? I wonder what the chances of coincidence are if I investigate one new OS per day. https://www.foxypossibilities.com/2018/02/04/running-matrix-... https://en.wikipedia.org/wiki/NixOS


No, tomorrow could be BSD and Nix: https://github.com/NixOS/nixpkgs/pull/82131


Oh nice, thanks for doing the work.


Post a link about it tomorrow and maybe your prediction will come true ;)


Baader–Meinhof strikes again:

>Once something has been noticed then every instance of that thing is noticed

https://en.m.wikipedia.org/wiki/List_of_cognitive_biases#Fre...


We're doing a project right now to try to migrate to using nix for our python code that runs on a raspberry pi. It's been great in many areas, but pretty annoying when something doesn't work. You have to dig deep to fix errors.

Of course we're trying to get numpy to cross compile from x86 to armv7 which involves cross compiling gfortran to get blas... so I guess a few bugs [1] are expected.

Otherwise I've been really impressed.

https://github.com/NixOS/nixpkgs/issues/88449
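For context, asking nixpkgs to cross-compile looks deceptively simple; the hard part is that every dependency in the closure (blas, gfortran, ...) has to cooperate. A hedged sketch of the entry point (the target triple is an example, not necessarily what we use):

```nix
# Sketch: import nixpkgs configured for an armv7 cross target, then
# request numpy from that package set; nix will then attempt to
# cross-build the whole dependency chain.
let
  pkgs = import <nixpkgs> {
    crossSystem = { config = "armv7l-unknown-linux-gnueabihf"; };
  };
in
  pkgs.python3Packages.numpy
```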


Cross-compiling is finicky in any ecosystem, things like numpy especially so. When working on armv7 projects, I got a few TinkerBoards and ran the builds natively. The first one took a while, but I just kept my binary cache up to date and normal CI/CD builds were fast. I'd also push core builds to https://arm.cachix.org. I haven't updated it recently, but I am considering doing that again.


A company I worked at embraced nix. Beforehand I was a big docker fan and had few issues with it (yes, there were occasional caching issues, and damn it docker, figure out the issue with hyperkit on Mac OS, but largely it's a productive tool). From an outside perspective nix just felt like a lot more work, and nearly no one (except the people that set it up) could ever get it to work. So basically the entire build process for the core of the system was something that literally no one in the company wanted to touch. I'm sure if I were an insider on the tool I would appreciate the added stability you get on the backend, but at face value it seemed like it just made builds unapproachable on the front end. I'm happily back to using docker and haven't looked back.


Out of curiosity, any reason why you didn't want to understand it? You spent time learning docker, why this was different?


I haven’t worked through the entire text but I absolutely love the manner of explaining.

I often feel when I’m reading an explanation the author is reluctant to really break something down to the essentials and ELI5 ... which ironically can be conceptualized as the most intellectual way to understand something. Please explain it from as close to first principles as possible. Assume as little as possible.

Maybe it is because the author thinks it is an intrinsically hard concept (as per the intro).

Or maybe this person is just kinda awesome at explaining stuff.


The only takeaway I get from this article is...

Why is Nix?

And from skimming along the comments, both on HN and on Disqus, there's a lot of confused people trying to understand/describe the difference between Nix and Docker, because although the article described how Nix works in a very technical way, it didn't explain what it can be actually used for.


It's a package manager written in such a way as to properly solve the problems with existing package managers which docker patches over. It uses deterministic builds and careful isolation of dependencies to ensure that the environment it creates is the same each time, and that you can have packages which depend on conflicting versions of another package installed at the same time.

The entire OS is accurately described by a config file, and this can be reproduced exactly using just that file.

In contrast to traditional package managers: it handles conflicting dependencies, and state is tracked through editing the config file, not a series of install/uninstall commands which mutate the state of the system. Config files of installed packages are also controlled through nix config.

In contrast to docker: properly reproducible (Docker will re-run the same commands in the Dockerfile, but there's no guarantee you'll get the same result. For example, basically any package installation from a traditional package manager you run in the Dockerfile will not reproduce when run later, because newer versions of packages will be installed). It's also more efficient in terms of space usage. However, AFAIK it does not namespace networking and so on (NixOS has its own container system which does do this, however).
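To make the "entire OS described by a config file" point concrete, here's a sketch of a minimal NixOS configuration.nix (package names and the user are just illustrative):

```nix
# /etc/nixos/configuration.nix -- a sketch; contents are illustrative
{ config, pkgs, ... }:

{
  # Everything installed system-wide is declared here...
  environment.systemPackages = with pkgs; [ git htop ];

  # ...including services, which are just more config options.
  services.openssh.enable = true;

  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Running `sudo nixos-rebuild switch` builds the whole system from this file and switches to it atomically; copying the file to another machine reproduces the same system.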


Here are my favourite use-cases:

* automatically installing all project libraries and dependencies so you can build your project without having to apt install a bunch of stuff first. This is done through direnv/lorri, so when you cd into the project directory, direnv uses nix to install everything automatically. Very quick onboarding and also no need to keep up with the various company projects' deps.

* building docker images with intentional layering so deltas remain small is a breeze: buildLayeredImage.

* with home-manager I can ensure my dotfiles have all their dependencies so vim plugins and whatnot just work. It also means I get the same nice home environment on _any_ Linux distro I choose.

* Same as above except for my entire system with NixOS.

* Shipping packages with specific config defaults _only_ for your project directory so you don't have to worry about making everyone configure something is also a breeze with Nix package overlays and overrides.

* You can even have Nix modules for your VPN connection and the like so co-workers can import that into their home-manager or NixOS configuration, and if it changes so does your system/home config next time you build it.
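As a sketch of that first use-case: the project carries a shell.nix declaring its dependencies (package names here are just examples), plus a one-line .envrc for direnv:

```nix
# shell.nix -- declares the project's build inputs
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    nodejs
    postgresql
    imagemagick
  ];
}
```

With `use nix` in the project's .envrc (or lorri wired up instead), direnv drops you into this environment automatically when you cd into the directory, and out of it when you leave.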


I have always wanted to try Nix, but not sure how installing software from source outside the package system works. With Arch Linux, it's pretty easy to create an AUR package if it doesn't exist, or simply install it outside the package system. Is it easy to create Nix packages (based on the features of Nix, I'm guessing no)? What about if you don't feel like creating a package?


Creating nix packages is actually pretty easy once you get the hang of it. Maybe a tiny bit more involved than AUR packages since it's a new language and all, but it's not that much more difficult. When I was using nixos on my desktop machine, I made packages for my own personal tools, so it integrated in the system.

It's also very easy to customize a package to suit your needs, and in fact it makes for a very nice system to hack on open source projects. You can mostly set up an override in your $HOME to make a package's source come from your local disk, and when you install the package it will compile locally instead of getting a prebuilt binary.

If you don't go through their packaging system... It's really easy to end up in a mess. There are ways to simulate a "normal" linux from within nixos, essentially making a chroot in which the various libraries are in their normal locations (achieved through an overlay filesystem). I believe that's what they do to run the Steam client. I assume (though never tried) you could use this to run other binaries without packaging. But then... what's the point?
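For a sense of scale, a from-source package is often just a small derivation like this (the project name, repo, and hash are all hypothetical placeholders):

```nix
# default.nix -- a minimal derivation for a hypothetical autotools-based C tool
{ stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
  pname = "mytool";
  version = "1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "mytool";
    rev = "v${version}";
    # placeholder; on the first build attempt nix reports the real hash
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # stdenv's default phases run ./configure, make, and make install
}
```

The local-source hack mentioned above is then roughly `mytool.overrideAttrs (old: { src = /home/me/src/mytool; })` in an overlay.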


I think it's about as easy as creating AUR packages most of the time, especially after the first one or two. You can create them in a more ad-hoc way than you probably expect. For example, this video shows an example of creating a totally-legitimately-real package: https://www.youtube.com/watch?v=1nU_hR2kod4 -- some of the follow-ups in the channel also show a little more work with that simple derivation ("package" in de-nixified terms).


You can always find examples to learn from in the nixpkgs git repo. If the package can build from source, look at other packages with similar dependencies. If the package is a binary distribution (e.g. closed source), follow this guide: https://nixos.wiki/wiki/Packaging/Binaries


You can use Nix on your existing OS with no obligation to install all packages with it. On macOS, I use Nix for most things, but I still use homebrew for a few things and occasionally download+install.


Really it's kind of a minimalist linux distro with the capacity to coexist harmonically and/or parasitically with another operating system. Don't try to say 'oh it's like arch' or 'yeah it's a bit like Ubuntu', because no, it isn't, and by design. It seems that as much as design meant to go a particular way, there it went.

It does pretty well directly on the hardware, but seems to want to live (like it is) in a container, though that might just be its style ;) it certainly seems to be convenient for containers.

It has a certain otaku appeal and ninjatsu stealth. I'll probably like it a lot some day soon but at the moment it just screams 'exotic' in my ears while I'm asleep at night.

This probably isn't helping so I'll just stop now.


> It has a certain otaku appeal and ninjatsu stealth. I'll probably like it a lot some day soon but at the moment it just screams 'exotic' in my ears while I'm asleep at night.

I’m trying to refute this characterization, but I am sitting in front of my desktop with an anime background in SwayWM on NixOS. Is there something I don’t know about correlations between seemingly unrelated hobbies?


NixOS, Emacs, Haskell, tiling window managers.

They're a clique in the tech world. I think a unifying theme is to have a small foundation that the user can develop within.

Going a bit further on my soapbox, you might also be into philosophy, anime, election reform, baduk, and math. Is the underlying personality trait systemizing/stubbornness, wanting to break everything down, and rebuild it according to defined principles?

http://dreamsongs.com/RiseOfWorseIsBetter.html would call this approach the MIT style of design.

@nntaleb might suggest they overestimate their principles and their ability to enact their principles, and call them IYIs. :)

(This comment itself is my admission of being an IYI.)


For me nix(OS) began from much simpler upside: now I never forget that handmade one line tweak somewhere out there - all Linux system goes in one nice config.


This sounds similar to BuildXL[1] (originally called Domino), which Microsoft uses to build Windows and which has been in use for 6 years. It does this sandboxing using Detours on Windows, which intercepts system calls, and it allows you to describe packages in a language called DScript. I think you need something like this if you don't use a system like Bazel that keeps build dependencies well isolated. Even then, the system you use can leak into the build.

1. https://github.com/microsoft/BuildXL


This article suffers from the same problem every nix article I read suffers from: it dives in too deep from the get go. When you sit somebody down in front of a computer for the first time in their life, you're not going to explain what it is composed of, what happens when your mouse button is clicked, and why Windows vs Mac vs Linux is a thing.

Please, first tell me what nix is, provide me with a few commands to get me started, make a few comparisons to what the audience should reasonably know, and then make this your second or even third article on the subject.


That's kind of the approach I took in this playlist: https://www.youtube.com/playlist?list=PLRGI9KQ3_HP_OFRG6R-p4...

However, there is really an important and subtle set of concepts to grasp in order to actually understand Nix, and not just mess around with a package manager tool incidentally built with it. It's fine to not understand them, but they are what this article tries to explain.


FWIW... I read this article as someone who doesn't know what Nix is. Like no idea at all besides the name sounding familiar. After reading the first few paragraphs I feel I'm not the target audience for the article, which it totally fine, but based on the title of the post, I assumed I was.


Yeah, it was a last-minute title change that I now regret.


Yeah, that's tough :(

On the bright side... sounds like the community (or at least me) would really be into another article as a primer.


That's a nice playlist. Thank you.

I'll try running those commands in a nix docker to follow along.

Does this playlist exist by any chance on peertube?


We use Nix for all our developer tooling at CircuitHub. It's not the easiest thing to get into - but once you do - it solves so many problems! Great to see a large company like Shopify get in on this.


I feel someone should say something about guix here. I don’t have anything worth saying about either though


Anecdote about the subjective difference between using both:

I've used nix and nixos on and off since the early versions, but it never really clicked with me, and every time I want to write or just read nix expressions I have to reread lots of documentation like I'm starting from scratch again, whereas I would probably still be able to write a debian package from memory alone despite many years of lack of practice. This leaves me with the impression that nix has to be a full-time job.

On the other hand, I gave guix a try recently and was really surprised by how easily all the pieces fell together "naturally".

Also guix being a smaller community, it might counter intuitively be moving faster (due to less care for not breaking things or less bureaucracy, depending on your views). Indeed, another issue I have with nix is how jammed their input queue of PRs is.


I'd say the PR queue is less bureaucracy and more a high number of packages with not enough hands to process them all.


I spent a couple weeks casually studying about both, and finally hitched my wagon to Guix. The careful separation of free and non-free software is incredibly refreshing. I appreciate the emphasis on building packages from source. The docs are great and so is the Guile language.

It didn't take me long to get a bootable USB with proprietary drivers and mainline kernel, even though I thought this would be much more difficult.


Nix and Guix are the next generation of package management.

Nix is more idiosyncratic, since Guix uses S-expression syntax.

Also, Guix is free software, with everything that entails.


Wondering if folks would prefer spack (https://spack.io/) given its heavy adoption in the HPC community.


After reading the article, I am still asking the same question: What is Nix?

Edit: from their website https://nixos.org/:

> Nix is a powerful package manager for Linux and other Unix systems that makes package management reliable and reproducible. It provides atomic upgrades and rollbacks, side-by-side installation of multiple versions of a package, multi-user package management and easy setup of build environments.


Really looking forward to learning how Shopify uses nix and what problems they find with it.

I was advocating for it in my organisation but eventually we decided on a different technology...


Burke Libbey has a few talks that are available on YouTube including a presentation at NixCon about the work he is doing at Shopify. I would highly recommend checking them out if you are interested in Nix.


Hi! Yes, and here's the link to my recent Nixology playlist: https://www.youtube.com/playlist?list=PLRGI9KQ3_HP_OFRG6R-p4...


This is great, thank you.

Can you provide more information about setting up build infrastructure in an organization? Particularly how to set up a shared cache so that if one user builds something it is available to others?


Hey! At the end of your article you mentioned you're hiring. Are the positions remote? If yes, how can I get in touch?


Yes, we’re hiring! You may have seen the news today that we’re all in on remote now. Send an email to burke.libbey@shopify.com, I’m as good a point of contact as any for Nix stuff.



I like the idea of Nix. It's definitely the future of package management and build systems. But it's solving a problem we knew how to solve in the 80s: dependency hell is solved by statically linking everything. In fact in the Windows world it's still like this. If you need OpenSSL, for example, it should be in your source tree and compiled in the same build pipeline as your application.


Static linking is unfortunately broken for many projects though, primarily because every C/C++ dependency has a slightly different incantation to make it work. It also doesn't address the problem of languages which expect dependencies to be files on the filesystem, like python or ruby.


Nix is a cool ecosystem, but builds get quite expensive on conventional CI/CD systems. One can reduce costs by running a custom Hydra CI instance, but I would love it to be a bit better. Working with private stuff is a mess and requires deep knowledge of the system; it's hard for newcomers to pick up. Also, NixOps is a super cool extension to manage a fleet of instances.


Is Nix only handling the build part (with configure flags etc.) or is it also used to configure services, for example?

I'm not versed in devops/provisioning etc., but say, I had a NixOS system with some server software (mail server, DNS server, whatever).

Would I still want to use Chef/Puppet/Ansible, or is that included in the Nix ecosystem?


NixOS exposes config options as variables you can set in your configuration.nix file. Also, external config management tools generally interact poorly with NixOS, because they assume a system which is managed imperatively.
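For instance, instead of a Chef/Puppet recipe for a web server, you set module options in configuration.nix; a sketch (the host and path are illustrative):

```nix
# Fragment of configuration.nix: a declaratively managed nginx instance.
# NixOS generates the nginx.conf and the systemd unit from these options.
services.nginx = {
  enable = true;
  virtualHosts."example.com".root = "/var/www/example";
};
```

Rolling this out is the same `nixos-rebuild switch` that applies any other change, so service config and package state never drift apart.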


Yes, it's included. Nix config files will automatically generate and run services. There's also NixOps, which is a tool to push these configs to remote machines, i.e. cloud VMs.


It looks similar to "npm with ipfs/docker (layer-fs)". My initial guess is it may use more disk space, like snap, because it requires exact versions of dependencies, unlike npm/pnpm, which accept a wide range of dependency versions as long as they satisfy the semver range.


Nix seems wonderful, but I have one question: If nixos.org and github.com disappeared tomorrow, are users sufficiently enabled to keep the project going, or are those two single points of failure?


is there still relevance for this way of building in the world of Docker - especially with Packer and LinuxKit ?

Packer allows us to create highly repeatable OS builds with all the right configuration and packaging. For example, this is AWS official Packer build for their EKS AMIs - https://github.com/awslabs/amazon-eks-ami


Docker kind of sucks for development though, because you need to reinstall all your devtools into the container and have to wire up your IDE to support docker. With nix, you just write a shell.nix, run nix-shell, and can then use your regular tools in that environment.

Also, nix builds are far more reproducible than packer builds. With nix, you get an environment that is guaranteed to be identical down to the hash of every single file as long as you use the same version of nixpkgs. Packer and similar imperative tools typically install whatever version is the latest in the distro repo, and building the image 2 months apart will grant you different artifacts without investing a lot of effort into pinning everything.
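The "same version of nixpkgs" part is usually done by pinning a specific revision instead of the mutable channel; a sketch, where the commit and hash are placeholders you'd fill in for your project:

```nix
# pinned.nix -- import a fixed nixpkgs revision so every build sees
# exactly the same package set, regardless of when it runs.
let
  nixpkgs = fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz";
    sha256 = "<hash of the tarball, reported by nix on first fetch>";
  };
in
import nixpkgs {}
```

A shell.nix or build expression then imports this file instead of `<nixpkgs>`, and two builds months apart resolve to identical dependencies.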


I really wish nix was the solution, but I'm not sure it is. I currently don't install anything on my base system, but do it via docker. Looking through my dockerfiles, currently I have stuff installed via:

* apt

* apt preceded by apt-key add

* wget/curl

* pip & pip3

* git clone

* go get

* npm install

* snap

And that's even though I try to avoid esoteric package managers as much as possible. (For example, there was some package recently where the recommended method was brew for Linux, and another that wanted to be installed via conda.)

There's just so much stuff out there that insists on using some random packaging system. docker, even though it's a bit flaky, abstracts that away.


nix is the solution. It's totally different from any random packager you've seen. It goes in the reverse direction from abstracting dependencies away: it makes dependencies explicit (and reproducible!)


I get it, but my point is that unless it can cope with packages that aren't in the ecosystem, it will just add another variant rather than solving the problem, as per https://xkcd.com/927/

It reminds me a lot of http://www.vestasys.org/


Nix at this point is almost 20 years old, and we have 10,000s of packages.

If that meme wants you to blame something, it should be Docker.

(First of all, it's younger. There have been tons of effort poured into Docker and yet nothing is reproducible, and dpkg, APT, and all their ilk can't even be deprecated but instead are used awkwardly within the Docker ecosystem.)


Hearing about Nix couldn’t have come at a better time. Having just got a new Mac, seems like the perfect time to try a fresh Nix-only setup.


From now on, every new machine can be easily replicated with configs from older machines: copy the config file over, type one command, and let nix do the rest.


"All software exists in a graph of dependencies."

And some of these graphs consist of a single node. Junk the jank!


I mean, if you count the compiler/interpreter (which nix does), realistically that's a bare minimum of two.


Nix comes with its own glibc, so that's 3, although I guess while it is not the default you could compile statically.


If you compile statically, you've got musl, which is still a (build-time) dependency.


There's a confusion between build and runtime dependencies here. From Nix's point of view, even if you compile statically your code still depends on libc; it's just that you include the libc in the resulting binary. With Nix, all dependencies are listed for your application, because if no compiled version is available in the binary cache, Nix will get a compiler and build one. This is actually awesome: imagine there's a program in nixpkgs that you want to use, but you want to have it linked against different libraries.

For example, pgbouncer[1] can be linked with different DNS resolvers, which have their strengths and weaknesses. (Currently it is linked with c-ares, which is probably what most people want, but until version 1.10 that wasn't true.) You can override the derivation and use a different dependency. Nix will build the new version on the fly to do what you want, and this is awesome.

[1] https://www.pgbouncer.org/install.html
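A sketch of what that kind of override looks like as a nixpkgs overlay; the exact configure flag depends on pgbouncer's build system, so treat it as illustrative:

```nix
# overlay.nix -- rebuild pgbouncer against a different DNS resolver.
# The flag name is illustrative; check the derivation for the real one.
self: super: {
  pgbouncer = super.pgbouncer.overrideAttrs (old: {
    configureFlags = (old.configureFlags or []) ++ [ "--with-udns" ];
  });
}
```

Everything that depends on pgbouncer through this package set now gets the rebuilt variant, while the original stays untouched in the store.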


I use C but I never use the C standard library. It blows.


Maybe I'm missing the joke, but even if you only use printf() in C you are using the C standard library.


It's not a joke. I don't use printf, scanf, malloc, etc...


What do you do?

I don't think you can make syscalls without embedding assembly into your C code.


Too bad that Nix doesn't work with the fish shell.


I'm using Nix and Fish and have been for a year or two now. I remember some issues initially, but NixOS and home-manager have boolean options to enable Fish and it Just Works nowadays.


I don't understand what problem it solves


I loathe to be so critical, but I have no idea what this is.

I'm sorry to say but this is a bad article - either that - or nix itself is a mishmash of things impossible to explain.

I just see red flags everywhere.

If it's 'A' - then maybe take some time to do a high-level introduction and some context.

- 'Everything depends on something else' - really? Most 'installed software' doesn't depend on a whole lot else. Does the author really mean server software? Development environment?

- "The easiest place to start is the Nix Store. Once you've installed Nix, you'll wind up with a directory at /nix/store,"

No! That's not easy, because what the hell is Nix, why and how would I install it, and what is the 'nix store'? Starting to explain something by introducing a whole bunch of other unknowns without description or context is going in the wrong direction.

- "This directory, /nix/store, is a kind of Graph Database. Each entry (each file or directory directly under /nix/store) is a Node in that Graph Database, and the relationships between them constitute Edges."

WHAT? We are now 3 layers deep in a rabbit hole of unexplained things.

- " Nix guarantees that the contents of a Node doesn't change after it's been created."

What is a Node?

If the issue is 'B' - i.e. the inherent overlap of technologies that Nix makes it hard to explain ... well maybe it could be broken up.

Take a minute to give some context for what Nix is, the kinds of people that use it, what problems it solves, and then have some exceedingly basic introductory examples so that people have a frame of reference for what you are talking about.

Then you can step by step, go into the weeds.


> Most 'installed software' doesn't depend on a whole lot else.

Try this: Pick some random binaries on your system and run 'ldd' on them.


> - 'Everything depends on something else' - really?

Yes. Really.

> Most 'installed software' doesn't depend on a whole lot else.

Yes it does. Your ignorance of this fact does not make this any less a fact.

> Does the author really mean server software? Development environment?

All of it.


I thought this was satire when I first read it.

But no, people really do have serious challenges with language sometimes.

Here's some help:

The vast majority of software that users install on their computers depends only on what the user would perceive as the 'OS' itself. For example, if one were to install 'some app' that works for Mac OSX version X.X and later, there wouldn't be any contemplation of 'dependencies', to the point wherein the vast majority of users wouldn't even grasp the concept.

Granular package or system-level dependencies are something that only developers are aware of.

Of course, the fact that nobody but developers would contemplate 'dependencies' should be obvious to anyone who interacts with regular people.


> the fact that nobody but developers would contemplate 'dependencies' should be obvious

Is it perhaps lost on you that the vast majority of people discussing this technology are developers?

And you accuse me of having serious challenges with language. Yikes.

> The vast majority of software that users install on their computers depends only on what the user would perceive as the 'OS' itself

Yes. And those dependencies are implicit. You know, exactly as the article describes.

> to anyone who interacts with regular people

This is a thinly-veiled ad-hominem argument. Does this approach of snootiness and condescension normally work for you when interacting with “regular people”?


[flagged]


Read the article and you might find out.


Probably not.


A messy pile of unnecessary, redundant abstractions which "solves" nonexistent problem.

Stable interfaces (the main principle behind literally everything in nature) and semantic versioning will do the job.

systemd-like cancer. And, of course, look, ma, I am so smart.


Talks to the Raccoon server, I believe.



