The Curse of NixOS (wesleyac.com)
583 points by nemoniac on Jan 24, 2022 | hide | past | favorite | 353 comments



I've been using Nix and NixOS for a while and I've seen four core advantages:

1. Nix store: this is what the article covers - multiple versions of the same package, "virtualenv for everything", and so on.

2. Reproducibility: because package definitions are self-contained, I can pin and reliably reproduce everything for my packages. And I do mean everything.

3. Flexibility: Nix works for system packages on NixOS as well as user packages on any Linux or macOS. I've been sharing like 70% of my system config between Linux and macOS, and I don't know of any other tool that can do that.

4. Programmability: package definitions and config are extensible and composable. Yes, learning an ad hoc language to do this is a pain, but once I learned it I got a higher-level and better-abstracted way to define and combine packages than any other tool I've used. It's like going from C to OCaml. At this point I've started hating Dockerfiles because they can't easily be composed and don't provide any real facilities for abstraction.

Ultimately, Nix took system management from feeling like an operational activity—maintaining and changing a bunch of mutable state to run the system—to declarative programming. Learning curve aside, making system management a reproducible programming activity with no system state to worry about has been a massive improvement.
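To make the "declarative" point concrete, a tiny configuration.nix fragment looks something like this (the packages and options here are just illustrative):

    { config, pkgs, ... }: {
      # packages and services for this machine, declared in one place
      environment.systemPackages = with pkgs; [ git ripgrep ];
      services.openssh.enable = true;
      time.timeZone = "UTC";
    }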


I've tried to use Nix on and off for years. I always run into issues where I need to write some package and

(1) I get mired in packaging low level transitive dependencies and things which seem like they should be easy end up being nearly impossible

(2) Basic things like figuring out the argument types for a particular nixpkgs function take an insanely long time. It always involves grepping around the whole repo (nixpkgs is huge for those who don't know) for invocations to find the variable that gets passed into that function and then grepping around the repo for the function that produces that variable and so on until you get to the source type. Things have awful names and documentation is sparse.

I'm a big believer in Nix as an abstract concept, but the execution has been a miserable experience for me to the extent that I can't make sense of people who report such positive experiences. I would think I'm doing something wrong, but so many people have corroborated my experiences and when I bring specific issues to experienced Nix users they also get stuck.

Ultimately, I've reluctantly gone back to dealing with Docker and system package managers because their failure modes are rarer, more predictable, and more easily worked around.

EDIT: and I haven’t touched on the problems associated with using Nix in an organization. It seems like invariably one or two people become “the Nix experts” and they become the bottleneck for the whole organization, every team needs to coordinate virtually every interesting activity with them, etc.


As someone who has been using NixOS for 8+ years, I do agree with both of your problems.

(1) I find that it's very rare to have to implement a low-level transitive dependency from scratch (most low-level system dependencies are already there, and for dependencies of a specific programming-language ecosystem you'd usually auto-generate them with tools like `yarn2nix` or `poetry2nix`). However, I agree that once you need to do that for a non-trivial library it quickly becomes painful and requires intimate knowledge of both the build system and the Nix environment.

(2) This is my main issue. My workflow is also to keep a local `nixpkgs` clone and go find the function and the parameters I can use every time I don't remember how to use it. I don't think that's sustainable. There are many exciting improvements going on with the new CLI, so hopefully we will get nicer error messages and better tooling support in the near future.

> but the execution has been a miserable experience for me to the extent that I can't make sense of people who report such positive experiences.

There are pain points, but for me the advantages are indispensable. Being able to pin dependencies accurately, reproduce an exact development environment deterministically, and use a single language to define applications, configurations, and even entire system images is a great benefit; otherwise I would need 10+ tools, each with its own quirks.


Have you looked into Guix? In some ways it's a bit simpler and cleaner because it is a much more recent effort and Scheme is somewhat easier. It might help with the learning process in case you disliked Nix. GuixSD does have some other drawbacks. I'm personally running NixOS but both are very cool.

Nix certainly has a steep curve, but simple things are not that hard. I committed to migrating to Nix one Monday. I spent the whole morning reading about it. In the afternoon, my workstation was already up and running. On the same evening, I packaged two exotic things I need which weren't on NixPkgs. Next day, I fixed my favorite window manager, which was broken on NixPkgs. I maintain all these on NixPkgs now.

Complicated things are not that easy, and I would not have been able to do all this if I had encountered non-trivial issues in the process. I also totally understand that the learning curve is steep. Nix has too much legacy stuff and cruft built in. There's the classic Nix command, Nix 2.0 and Nix flakes. They are all coexisting and not very well documented. Nix needs more manpower, funding and tooling.

With that said, declarative tooling is bound to be harder for non-trivial stuff. Just like Haskell is harder than C, NixOS is harder than ArchLinux or Alpine. I still very much find it worth the effort for simple workflows. I can now update remote machines without fearing breakdowns and state is very explicit.


> Nix certainly has a steep curve, but simple things are not that hard. I committed to migrating to Nix one Monday. I spent the whole morning reading about it. In the afternoon, my workstation was already up and running. On the same evening, I packaged two exotic things I need which weren't on NixPkgs. Next day, I fixed my favorite window manager, which was broken on NixPkgs. I maintain all these on NixPkgs now.

I mean... it's great that you did all that, but you do realize that going that far makes you a power user, right?

Most users are going to give up if they can't figure out how to install their window manager or other essential software (eg installing Docker or Steam is ridiculously hard if you don't know that they have special hard-coded config values; it's almost like a game you can only beat if you find the cheat codes).

> With that said, declarative tooling is bound to be harder for non-trivial stuff. Just like Haskell is harder than C, NixOS is harder than ArchLinux or Alpine. I still very much find it worth the effort for simple workflows. I can now update remote machines without fearing breakdowns and state is very explicit.

Hard disagree. I could totally see a world where NixOS is as easy to install as any other distribution. But the developers are going to have to put a lot of work into smoothing out wrinkles before it gets there.


I've tried a bootable vm image but couldn't figure out how to configure it; the docs only cover the installer, with no examples, so I gave up. It seems like it's a Guile recipe for an OS. I want the OS with docs, examples, and maybe a simple installer to get everything bootstrapped. Then I could learn to write recipes. Other than that, Guile is attractive and so is GNU Shepherd, which is why I tried it in the first place.


Yeah, I've had that similar experience too, and I'd love to see an approach that can fundamentally improve on this axis.

The way I see it, it's the foundational ideas that are important—but there are a few key principles beyond the one covered in the blog post.

Separately, today, I think the foundational advantages Nix has more than justify the learning curve—but that's a less obvious idea and I understand why it's hard to convince other people about it :).


The trouble is that it's the foundational ideas that people find weird and difficult. Functional programming is still niche and so both immutable package installations and nix-the-language don't build on what people already know.

I hope that'll change over time, because yeah, the foundations are /so/ powerful, even today.


I learned F# before ever touching Nix. And it's not the functional nature that's the issue; rather, it's the terse syntax, the sparse documentation of the many special functions you need, and so on.

It's hard to find out how to do things, and often you find two examples with a partial solution that each use a very different incompatible way to get there.

I think the core idea of Nix is good, but we need a decent programming language designer to have a go at a new language for packaging.


Well, if you're comfortable with functional programming, that's great. Seriously, awesome! But you're in the minority.

Hard disagree on the "decent programming language designer", though. Nix-the-language is beautiful. It's tiny, with a minimum of concepts to learn, and very simple, regular syntax. At the same time, it's not so spare as say, Lisp or Scheme. It's really good. The implementation of that language could use some work - interpreter speed and debugging tools come to mind rather quickly.

I think the problem you've encountered is nixpkgs. It's a huge, sprawling code base with a lot of legacy code and a number of paradigms that were enthusiastically adopted and then later supplanted. In contrast to the language and tools which are well documented, there's basically no documentation, and what little there is hasn't been kept up-to-date as things change.

Still though, I think the biggest issue facing nix users is that the rest of the computing world works on the assumption of mutable software installations. The way pip assumes that the python interpreter and the python packages that have been installed are locked in a death grip drives me nuts. Even if you don't go full nix-store, could we at least separate those things? Even node gets that right.
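That separation is basically what a throwaway nix-shell gives you; something like this (the package names are just an example):

    # interpreter and packages resolved together, nothing written to a global site-packages
    nix-shell -p "python3.withPackages (ps: with ps; [ requests ])"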


> The way pip assumes that the python interpreter and the python packages that have been installed are locked in a death grip drives me nuts. Even if you don't go full nix-store, could we at least separate those things? Even node gets that right.

On this I definitely agree. Like I said, I do think Nix is the right thing to do; I'm just saying the how could be better.


Exactly what I've found. The concept is brilliant. The execution is user-hostile. I wish someone brilliant would take a good look at it for a long time, then head off into the tropics for a month while meditating on how to simplify it and/or make it more user-friendly, and return with something that is equal to Nix in concept but superior in execution and UX.


> (1) I get mired in packaging low level transitive dependencies and things which seem like they should be easy end up being nearly impossible

this is the most painful part, because a lot of packages do tons of tricks that make it very difficult to get reproducibility: downloading dependencies at build time, installing things in non-standard places, hardcoded paths, etc.

If you have ever tried to cross-compile arbitrary packages, this might be familiar. A lot of projects don't know how to do things correctly, or don't care and just want their code to work.

On top of that, other environments have their own quirks. For example, the way most nodejs applications work, they expect their dependencies in the current working directory or a parent directory.

All those things require some patching. If you can get the patch upstream that's awesome, otherwise you'll have to maintain it.

What is great about other distros, is that if you try to use some exotic library, you can just try things randomly until you get it compiled and installed in /usr/local :)

Imagine if you had RedHat installed, and it imposed that you can't run anything unless it's packaged as an rpm, so you would have to prepare a .spec file and run the rpmbuild command. I bet the experience wouldn't be far off.

Unlike RedHat, Nix does force this, and it is a double-edged sword. It creates this pain, but on the other hand it forces some kind of order.

BTW: if you want it hard enough you can still install packages the traditional way (but you'd likely have to specify --prefix to somewhere inside /home rather than /usr/local).


The problems are twofold:

1. The language is a significant problem. If you get everything right, it will do what you want. That's the only nice thing I can say about it. Almost every error it will give you will be misleading. I haven't had such a hard time figuring out errors since early C++ template metaprogramming. And frankly, that was easier.

It needs real diagnostics, and frankly, that means it needs real types. But that won't happen. Too much of the informal type system of nix is "whatever keys that library function uses, go look in the source."

2. Half-implemented packages. Try walking out of the golden path on a package, without having to modify its nixpkgs entry.

Between the two, it's more nerd-bait than usable platform.


Looks like these things are being addressed, although from past experience it will probably take several years :( (when I started using Nix 3 years ago there was a talk about this new thing called flakes; now version 2.4 already contains it, but it is still locked behind experimental features)

1. https://github.com/tweag/nickel - a typed language that might in the future replace the Nix language

2. this looks like it is likely addressed by flakes



That's interesting; it looks like a transpiler that could be used immediately if someone doesn't mind the extra step. Thank you for sharing it.

BTW: Nickel has a chance of ending up replacing the Nix language, as I saw that the author of Nix (Eelco) is also contributing to it. I don't think it can transpile to Nix code, though.


"... with no system state to worry about"

I don't disagree with you, but one of my skepticisms about this is that Docker and Dockerfiles made the same promise like eight years ago, and at least as a small time operator (eg, homelab), it never really felt like the system was as stateless as it was telling me it was. There were always logs, indexes, caches, database content, and all the rest of it, and all of this had to be carefully volumed-out with those data containers being protected and versioned and carried forward independent of the throwaway Dockerfile-defined "application" containers.

So far my experience of NixOS is that it doesn't particularly offer a new solution here, but at least it doesn't pretend anything. The configuration.nix "defines" the whole system, but obviously doesn't touch anything in /home or /var, so that's on you to back up, migrate, whatever.


> The configuration.nix "defines" the whole system, but obviously doesn't touch anything in /home

https://github.com/nix-community/home-manager

> or /var, so that's on you to back up, migrate, whatever.

https://grahamc.com/blog/erase-your-darlings

https://github.com/nix-community/impermanence


You can't get rid of all state. The pro/con of the Nix approach is that it is far more explicit about what is pure/impure. The experience is similar to going from a dynamically typed language to a stronger type system; it can be frustrating to be told that something is not allowed. Nix's usability difficulties often come from trying to build or use software that violates purity in some way (reading HOME directories during compilation, using the network in some way, assuming a dependency will "just be there"); it prevents these mistakes at the cost of forcing the person doing the packaging to fix these issues. This is a huge benefit in the long term, but can annoy short-term users.


I'm so confused-- there's a user upstream reporting something about having to grep for a variable name, then having to grep for the function that created the variable just to figure out the type.

Are they using Nix wrong? Or is this seriously a limitation of the current state of the tooling?

If it's a limitation of the tooling, that almost sounds like satire. I.e., someone builds a powerful functional DSL, uses it to painstakingly construct a monstrous package graph, then gets tired and wonders, "Wouldn't it be great if there were a language and data structure that were both custom suited to automating the grep'ing of argument types for my DSL's package graph..."


Many of the necessary fixes do end up getting pushed upstream, though, thinking especially of stuff like converting CMake projects to use the GNUInstallDirs standard, so there is a benefit to the strictness work that extends beyond even just the Nix community itself.

Of course, sometimes those kinds of changes just never get merged, for a variety of well-understood open-sourcey reasons, eg: https://github.com/pocoproject/poco/pull/3105


What are the well-understood reasons? It seems weird to me, as someone who hasn’t worked on open source, that the PR would just sit there untouched and not even commented on


I'm not seeing an easy reference right now, but I feel like a number of articles bemoaning the state of open source maintainership have come up over the years, particularly in conjunction with high profile security issues like the ones in the OpenSSL and log4j projects.

But in short, the maintainers are mostly uncompensated, and therefore have no sense of urgency or obligation to a faceless stranger making a drive-by contribution. Every accepted contribution is something that has to be understood, supported, and even justified by the project's maintainers going forward, so each one is an opportunity to take on more work and risk, and obviously by definition almost all them are things the maintainers didn't themselves "need" or they would have done it themselves. So each new branch or bugfix is relevant to a codepath that no one involved in the planning, testing, or release-preparation for that project has a first-class stake in. Each new capability added is an opportunity for there to be shrieks following a release because it was inadvertently broken in the course of some other bit of development.


And now that PR is merged...


Well yes. I wasn't intentionally shaming the poco project with the link, but having a bunch of attention drawn all at once does have a way of moving these things along.


That's true. I guess I see it as the difference between the system and what I'm doing with the system—it's like having a database system where I'd want the config for the database to be totally reproducible and immutable, but I'd expect to manage the data itself separately.

And, as a practical benefit, I found setting up borg backups for /home on NixOS a lot easier than I expected :)


I think your reasons are focused on Nix for package management. For NixOS specifically, I would add:

5. It's essentially a source distribution with caching. If you want to add, say, a kernel patch, or any other type of patch, it's straightforward. Whereas when running Debian, if the solution to some problem was building a modified package, my usual conclusion was just to bear the problem until it was fixed upstream rather than suffer the maintenance burden.

6. The top-level config is front and center, and stateless. No messing about with update-initramfs, wondering what the heck the initrd is actually doing etc.

7. Taming the complexity of system software's configuration state. For example, there is no mucking with the systemd mess of /etc/systemd vs /lib/systemd and its assorted symlinks, as this configuration is handled at a higher level.

And for Nix I'd add

8. Dead simple cross (or emulated) compilation toolchains. Building A Cross Compiler used to be a Big Deal; with Nix, building an entire cross/emulated environment is basically like any other declaration.
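To give a rough idea of (8), with a recent nixpkgs this is all it takes to cross-build a package (aarch64 chosen arbitrarily):

    # build GNU hello for aarch64; the cross toolchain is assembled automatically
    nix-build '<nixpkgs>' -A pkgsCross.aarch64-multiplatform.hello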


Building packages from scratch for Debian can be a hairy endeavour: there's lots of tooling and it can be hard to know which one to use.

But building packages for Debian is dead simple, it is literally one command. Adding a patch to an existing package, and then rebuilding it, is a common operation.


I've run Debian for two decades, so I've definitely ended up going down that path more than a few times. Yes, the actual building process wasn't so hard. It was the maintenance burden afterwards. Maybe I just didn't commit hard enough with my own apt repo, pinning, staying ahead of what was coming through backports, etc. But the point is that it effectively never stuck long term; the end result was always going back to vanilla upstream. Whereas in NixOS such patches are some lines in the larger config file, which feels a lot more persistently manageable.


Whenever Nix/NixOS gets discussed I always see the same points being brought up again and again. And again. I've never seen anything discussing solutions or the roadmap (however distant) to solve those issues.

Can it be that nothing is happening in this regard, or is it really the case that people just list the same cons over and over again? :)


It's like complaining about C++. It will always be there. Ignore it, it's just people venting about the fact that solving hard problems is hard.


Did they ever fix the need to disable SIP for nix to work on macOS? That made trying it out a nonstarter when I looked into it 2 or 3 years ago.


I started using Nix around 3 years ago, and I actually didn't encounter this issue. The last issue was when Catalina made the root filesystem read-only; there was some hacking needed to keep /nix in the root.

Edit: after searching about it, looks like this was the same issue with /nix in the root, and disabling SIP was one solution at the time. Currently the solution appears to be creating a separate volume and using /etc/fstab and /etc/synthetic.conf to set it up[1]. The installation script does it automatically.

https://nixos.org/manual/nix/stable/installation/installing-...


I hit similar issues a while ago, and I think they have this figured out now. The installer Just Works (tested on a MBP2015) as of a few months ago.


This mirrors my (very limited) experiences with Nix. I've so far only dipped my toes in home manager, but being able to share shell config and cli tools declaratively between WSL, linux and MacOS with very little branching has been great!


> Reproducibility: because package definitions are self-contained, I can pin and reliably reproduce everything for my packages. And I do mean everything.

Does that mean we get downloadable precompiled binary packages too? If not, why not?


Yep. The cool thing is that installing a binary version of a source package is totally transparent from Nix's point of view—it's not a different entity, it's solely a matter of caching. If you're building a package that somebody else built and uploaded to a cache that you can access, you'll download the binary instead. There's a public cache, so when you use a standard package set as your base, you'll be getting most things as binaries. But if you make some changes or something else breaks and binaries aren't available, it falls back to building from source transparently—sometimes this can take a loooong time, but it usually doesn't, and I'd rather have the option to do that than not :).

The other bonus is that it takes very little effort to share your own binary cache—either privately (so that you or your company only need to build something once) or publicly (so that people installing your open source tool don't have to build from source). There's even a really well-designed commercial service called [Cachix][1] that provides a totally turn-key binary cache, including a free tier for open source projects.

[1]: https://www.cachix.org/
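On the consuming side, pointing a machine at an extra cache is just a couple of lines in nix.conf; roughly this (the extra cache name is made up, and the keys come from whoever runs the cache):

    # /etc/nix/nix.conf
    substituters = https://cache.nixos.org https://mycache.cachix.org
    trusted-public-keys = cache.nixos.org-1:... mycache.cachix.org-1:...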


I use this for building systems. Instead of Cachix I just set it up as an S3 bucket (it's a bit involved but also works). I needed to tweak some things but ultimately arrived at satisfying results.

The nice thing is that it sped up building by 50%. An extra bonus is that with merges, if files didn't change, nix finishes building instantaneously.

It's similar with nix-shell, where for example with Python you enter a "virtualenv"-like environment almost instantly, without downloading dependencies over and over. Yes, you could get close to that in CI by using some caching, but this works out of the box because nix understands dependencies.
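A minimal shell.nix for that kind of Python environment looks roughly like this (the packages are just examples):

    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      # enter with `nix-shell`; interpreter and packages come from the pinned nixpkgs
      buildInputs = [
        (pkgs.python3.withPackages (ps: with ps; [ flask pytest ]))
      ];
    }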

Another great thing is that so far I've never needed to purge the cache, which often happens with normal CI caching.


my understanding is that most nix/guix users don't compile (most) packages themselves but download them from a binary cache. the answer to "why not" would be obvious security reasons, but at least on the guix side there's a subcommand to compare build hashes from various mirrors you trust: https://guix.gnu.org/manual/en/html_node/Invoking-guix-chall...


I tried using NixOS on a laptop for a month and switched back to Arch a couple of months ago. It was an interesting experience - I had never used a functional programming language before, but nix was pretty ok as a configuration language. It looks like a funny version of JSON with functions.

Debugging it was a real pain though - I was trying to write (or copy from someone else's config) a function to recursively import all modules from a folder, and the error messages were hard to understand, with stack traces ending somewhere deep in the standard library.

The other problem was with nixpkgs, i.e. I haven't found a clear list of packages that were going to be updated. JetBrains IDEA was updating very slowly, and when I wanted to pin a version of it, simply overriding the version variable in the overlay was not enough - there were also some other linked variables. The Emacs package from a custom overlay constantly tried to compile itself, requiring me to check the build status on the Hydra build system.

In the end I decided that it's not really worth it. I don't have many systems and services to manage and for a raspberry pi and a vps multiple docker-compose.yaml files in git repo is enough. I think Henrik Lissner (author of Doom Emacs) wrote a really good piece about why someone should choose to use NixOS - https://github.com/hlissner/dotfiles#frequently-asked-questi...


Same here. I tried NixOS for ~3 months and tried to go all-in too: I spent time really reading the docs and working out an intricate configuration that leverages features like flakes and covers my system setup and dotfiles (inspired by Lissner's NixOS configuration), and I was ready to handle multiple systems in preparation for rolling it out to my 2nd and 3rd machines.

Ultimately there were warts that I was able to solve but that still annoyed me greatly, felt ugly, and forced me to step outside the convenience of my NixOS config system (like wifi with iwd, or btrfs + swap files). At one point I was no longer able to `nixos-rebuild switch --upgrade`; the command kept failing even on versions of my config that I was able to confirm working. I spent some time trying to find the problem but eventually came to the conclusion that it might not be worth it in the long run.

In the end I went back to my previous distribution of choice. I still find the whole concept interesting and I will give Guix a try next.

Fixing NixOS problems that the community hasn't encountered or solved yet really fills me with a special kind of dread compared to regular distributions where finding a solution to a rare problem is usually just a mix of taking a look at upstream and writing a patch or adapting general solutions from other distributions.


I honestly think that Nix is one of those things where "going all in" is one of the worst things you can do. Running a desktop system on NixOS is quite a tricky prospect for someone with no experience of Nix and you'll likely get frustrated with having to learn some potentially alien concepts to achieve simple things.

I tend to recommend people start in the shallow end, using Nix on a regular Linux distribution (or even macos) for a while. Use it to manage development environments and for ephemerally accessing tools as you need them.

For me, the place that NixOS itself really shines is on servers.


Being able to spin up an entire network of qemu+kvm VMs easily to do proper integration testing of my system configs was a revelation.
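For anyone who hasn't seen them, such a test is roughly this shape (service and names are illustrative, using the stock nixosTest helper):

    pkgs.nixosTest {
      name = "ssh-smoke-test";
      # one declarative VM definition per node in the test network
      nodes.server = { ... }: { services.openssh.enable = true; };
      testScript = ''
        server.wait_for_unit("sshd.service")
        server.wait_for_open_port(22)
      '';
    }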


I found that building my configuration on a virtual machine was the best way to go. This way I was not desperate to get things working and simply moved when I felt like all my essential programs were in place.


> I haven't found a clear list of packages that were going to be updated.

The ideal here, IMO, is to use flakes and nix-diff to compare the build before switching. E.g.

    nixos-rebuild --flake '.#myhost' build
    nix-diff /nix/var/nix/profiles/system result
    sudo nixos-rebuild --flake '.#myhost' switch
Could do the same without flakes after the fact by running nix-diff on different generations in /nix/var/nix/profiles (there may be a way before switch). It’s unfortunate that how to do the above is non-obvious.
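The after-the-fact variant is just pointing nix-diff at two generation links, e.g. (generation numbers made up):

    nix-diff /nix/var/nix/profiles/system-41-link /nix/var/nix/profiles/system-42-link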


There's nothing there that needs flakes (an experimental feature which people should not enable without understanding the implications). You could build a system derivation and run a diff against /run/current-system on it.

For what it's worth, nix-diff has very verbose output (it literally diffs everything that is different in the inputs & outputs). A slightly nicer way to diff systems is nvd[0] (example output[1]) which only shows version changes and added/removed packages.

[0]: https://gitlab.com/khumba/nvd

[1]: https://deploys.tvl.fyi/diff/4xmyvkr9nw0cwkn5q38p0cfc58x3jdy...


Flakes are "experimental" in the sense that gmail was in "beta" for however many years. I feel like it's important to mention that you're attempting to rewrite nix[1] and explicitly ignoring flakes[2]. Many people using nix are coming to the conclusion that flakes make things much easier to use and understand, and they obviate the need for a lot of secondary tooling that has become standard. There's an easy way to gauge support for flakes: look at how many high-profile nix repos have an up-to-date flakes.nix file.

[1] https://tvl.fyi/blog/rewriting-nix

[2] https://lobste.rs/s/ypwgwp/tvix_we_are_rewriting_nix#c_zvtze...


> experimental

Haven't flakes been non-experimental since 2.4?


No, using flakes still requires enabling the “experimental feature”. The difference in 2.4 is that you no longer need to use a prerelease version of Nix to have the experimental feature available.


I was using flakes, yes. I remember seeing nix-diff, but most of the time nixos-rebuild dry-build was enough. Sometimes a dry build failed with an error, but switching or activating ran without errors. Never figured out what was wrong :(


> JetBrains IDEA was updating very slowly, and when I wanted to pin a version of it, simply overriding the version variable in the overlay was not enough - there were also some other linked variables. The Emacs package from a custom overlay constantly tried to compile itself, requiring me to check the build status on the Hydra build system.

The way I solve this is to import multiple versions of Nixpkgs, since they're all self-contained.

For example, if I'm upgrading from Nixpkgs/NixOS 21.05 to 21.11, and I find that package 'foo' has broken, or been removed, I'll do this in an overlay:

    self: super: {
      foo = self.nixpkgs2105.foo;
    }
Where nixpkgs2105 is a pinned revision of the Nixpkgs repo, defined in another overlay. My current Nix config has pinned Nixpkgs versions going back to 2016. For example, here's a bunch of such overrides:

https://github.com/Warbo/nix-config/blob/master/overrides/fi...

At the moment I'm using niv to manage the pinned Nixpkgs versions (the 'repoXXXX' entries):

https://github.com/Warbo/nix-helpers/blob/master/nix/sources...
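For anyone curious what the pinning itself looks like without niv, it's roughly an overlay like this (in practice you'd pin an exact commit and sha256 rather than a moving branch tarball):

    self: super: {
      # a pinned copy of the 21.05 package set, usable alongside the current one
      nixpkgs2105 = import (builtins.fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/nixos-21.05.tar.gz") { };
    }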


Another problem I ran into is trying to figure out how each package wants me to define plugins. There are common examples for Python environments (Python packages are like plugins into Python) but when it came to other software, I was at a loss when digging into things, even after reading all of the general documentation.


"""The first is relatively simple: they developed their own programming language to do configuration, which is not very good and is extremely difficult to learn. The vast majority of people using NixOS do not understand the language, and simply copy/paste example configurations, which mostly works until you need to do something complicated, at which point you're completely high and dry."""

Maybe this is nitpicking, but: the Nix language is about as straightforward as "JSON plus functions". Maybe that is difficult for people who haven't had an experience with pure/functional programming?
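To illustrate, this toy expression is more or less the whole language in one screenful (try it in `nix repl`):

    let
      greet = name: "Hello, ${name}!";   # a function: argument, colon, body
    in {
      message = greet "nix";             # an attribute set: the "JSON" part
      numbers = [ 1 2 3 ];
    }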

Though, yes, the way Nix is used can be quite complicated/sophisticated. -- Even if all of Nixpkgs were translated to JavaScript, I think it'd still be complicated to understand. (Though I suspect using a more general purpose language would allow for more footguns).

+1 to the sentiment that most of the time you can get by just fine, but when you can't, NixOS is more difficult to make progress with compared to other Linux distributions.


It's been a while since I used NixOS, but I never really had an issue with the language. What was frustrating was that in package solutions there were all these special functions which, as far as I could tell, were undocumented, and I couldn't figure out what they could do. So there was sort of this gap, in terms of documentation, between theoretical understanding and tooling/functional understanding which I struggled to cross.


As someone 8 months into a major Nix packaging effort (1200 package definitions, mostly auto-generated), this is 1000% the most significant issue. Reading the pills can get you the first little bit, like the difference between derivation and mkDerivation, but so much later stuff is just completely undiscoverable and inconsistent, like the fact that overrideAttrs may subtly not do what you want if it's a Python package since you have to use overridePythonAttrs instead, but the corresponding function for other builders (eg overrideBazelAttrs) doesn't exist.


This is indeed not very good. Note there is a reason that overridePythonAttrs exists: it overrides the call to buildPythonPackage instead of mkDerivation. There is/was an RFC on standardizing overriding in Nixpkgs but it got stuck. I think for these things to improve what is really necessary is funding to improve Nixpkgs. These kind of issues are fairly hard to solve as they span multiple ecosystems and require coordination.


Oh, of course, and in my case it was an important discovery, because my custom packages were all also defined in terms of my own mkDerivation-wrapping function, so I happily copy-pasted the definition of overridePythonAttrs to make my own overrideMyAttrs version of it.

But it was brutal that I had to just stumble across this. I think the article correctly identifies that there is decent enough documentation in Nix for:

- Beginners to the nix language/concept (Nix Pills)

- Beginner users of NixOS (the NixOS manual)

- Beginners to packaging for Nix (wiki articles, blogs)

But there is a huge, huge gap for the user who is trying to cross the chasm and become a true expert in it all— who wants to understand at a deep level things like how overlays and namespaces work, how bootstrapping works and what is the division of responsibilities between nix and nixpkgs, plus the inner workings of things like buildEnv and stdenv.


I mentioned it already below, but just to emphasize: I think Burke Libbey's playlist on Nix [1] is that missing piece. It lays down a solid foundation of understanding of what Nix is, both conceptually and syntactically. You are then able to derive the rest of the needed understanding by clicking through the nix source of whatever it is you are trying to figure out.

[1] - https://www.youtube.com/playlist?list=PLRGI9KQ3_HP_OFRG6R-p4...


As someone who really likes the idea of declarative system configuration and management, shares the vision of nix and guix, and has been dipping my toes in by using the Nix package manager on macOS (and sharing most of the experiences mentioned upthread):

I do thank you for the pointer, and I think I've come across these videos before, _but_ for me, video is probably the last format in which I can assimilate this type of information. Usually, in my experience, well designed systems have a way of making sense so that documentation and understanding flows naturally from base abstractions, and this is something I miss in nix. I would like to try guix to see whether that makes more sense to me, but I am hesitant given that nix seems to have so much more of the mindshare.

One day I'll try to give the video tutorials a chance; maybe they will clear things up. Thanks again, and thanks to Mr. Libbey as well.


> well designed systems have a way of making sense so that documentation and understanding flows naturally from base abstractions

Honestly, that's the main revelation I had when going through that playlist. Basically Nix is just attribute sets (which are sort of just JSON objects). The entire thing. Nix files are just some preamble and then they return an attribute set at the end. Understanding that will get you 90% of the way there in terms of understanding the syntax and being able to cut through the "weirdness".

At that point it's about understanding actual concepts like overlays, derivations, etc., but it's much easier once you understand the attrset concept.


I hear you, I think I understand what you mean and I definitely want to believe you, but how come this can't be explained in succinct written documentation, instead of a series of video tutorials by example?

I do realize it's a work of passion done by numerous volunteers, to whom I feel gratitude, but at this point, why can't this be adequately defined, in a written form, in a way that is understandable? To me, the concept of a series of projections of declarative configuration statements onto a final state is not hard to understand in itself; yet, I am somewhat struggling to create and maintain such a configuration without feeling that I am just copypasting bits and pieces from random people.

I guess I just have to bite the bullet and spend a few days watching YouTube to see if I reach enlightenment - at this point, I barely understand the function names.


Yeah, like I said, I struggled for a couple of _years_ with a feeling of complete confusion when copying a shell.nix into every project I spun up. I was never able to understand _any_ of the written documentation. After that playlist I can easily click through the actual nixpkgs repo itself and find almost everything I need to hand-roll my own derivations.

It seems like the community is just a bunch of people who have had that lightbulb go on already, so it's hard to go back and view things without the lightbulb on.


The language reference worked out fine for me but the chasm between "basically running" and "knowing what I'm actually doing" has been substantially painful and I'm still not on the other side.

Though I say this while continuing to slog through said pain as I find time because once I've learned a particular thing it's always felt like it was worth the effort to get there.


100% this. The language is designed for its use case which is packaging and configuration (nothing more or less). It has a learning curve due to being lazy and functional but works great once you get the hang of it. But the documentation of all its functions is so annoying. You have builtins and the nixpkgs functions[1]. There is learning the language, and then learning how to use it. Then there is the entire ecosystem of custom packaging functions that have their own pros/cons [2]. The issue isn’t with the language but the difficulty with trying to make existing tooling work the Nix way. That part is where I agree with the curse of nix. But the effort is worth it because once the packaging is complete it just works (forever).

1: Best resource I’ve found is this: https://teu5us.github.io/nix-lib.html

2: The status of lang2nix: https://discourse.nixos.org/t/status-of-lang2nix-approaches/...


However, until this works, nixpkgs should provide a wrapper around FHSUserEnv which allows developers to develop without the curse.


Convenient (version-addressed!) FHSUserEnv is exactly what I want out of Nix. Land me in an environment that has a list of deps (at specific versions, not hashes!), let me go mess with it.
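Nixpkgs does ship buildFHSUserEnv today; a rough sketch of using it (deps are illustrative, and versions still come from whichever nixpkgs you import rather than from version strings):

    # shell.nix - enter with `nix-shell`
    { pkgs ? import <nixpkgs> {} }:
    (pkgs.buildFHSUserEnv {
      name = "fhs-dev";
      targetPkgs = pkgs: with pkgs; [ zlib openssl ];
      runScript = "bash";
    }).env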


These look pretty promising. Maybe I'll give nixOS another shot because I really was a big fan.


The best way I’ve found around this is to always keep a local clone of nixpkgs so that I can grep for where functions are defined and hope like hell that there are some comments in the vicinity. Definitely not ideal.


If you're using Nix on your system, you can also find nixpkgs symlinked under ~/.nix-defexpr/channels/nixpkgs/pkgs/ or so.


And if you have flakes and nix-command enabled, you can run `nix edit nixpkgs#lib.sort` to get the definition of the sort function.

(This will just tell you it's imported from lists. `nix edit nixpkgs#lib.lists.sort` will get you the actual definition.)


Nice, I did not know this. Thanks!


What really helped me were the videos of Jon Ringer: https://www.youtube.com/user/elitespartan117j27


Glad someone found them useful :)


I cannot thank you enough for these. It is so useful to see someone doing a task.


I believe what Nix really, really needs is for all the lib, builders, and language frameworks to be separated out of it (probably into a separate repo, maybe even several repos) and treated like a stdlib is treated in any other language.

That means it can't just be modified by anyone on a whim; all changes need to go through a proper RFC mechanism. And anything in it must be documented.

Having all that done would greatly improve learning time, and it would probably make it easier to maintain other packages as well.

It made sense that it was done this way originally, because initially no one knew what would be needed, but right now many things have solidified and don't change that much.


Fully agree. It is not the language, but the functions around nixpkgs.


> Maybe that is difficult for people who haven't had an experience with pure/functional programming?

I found my understanding of the Nix language became a lot better once I started learning some Haskell. Specifically once I understood what currying was about.

My real gripe with Nix is the lack of a complete, easy to find, documentation of its "standard library". Nix Pills & a few wiki pages are far from enough.


All builtins functions can now render their docs with `:doc` in the `nix repl`. I think there's work underway to make this possible for any arbitrary function.
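For example, with a Nix 2.4+ repl:

    $ nix repl
    nix-repl> :doc builtins.readFile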


> My real gripe with Nix is the lack of a complete, easy to find, documentation of its "standard library". Nix Pills & a few wiki pages are far from enough.

How hard have you tried? It's literally two clicks and one scroll away from the official home page: https://nixos.org/ -> click "Learn" -> Scroll down -> click "Full Nix Manual":

https://nixos.org/manual/nix/stable/expressions/builtins.htm...


The other problem is the condescending replies from the higher tier nix folks while everyone else struggles. I almost think they take a sadistic pleasure in watching everyone fail.

After months of "It's obviously this" and "you're not even trying" and "have you even looked?" and "there are plenty of examples out there already" and "you should start out by reading the introduction docs" and "you obviously don't know how functional languages work", I just stopped bothering.


I think the impedance mismatch is because some things are really simple, but are not documented in a way that is discoverable by someone who doesn't know, and is searching in the paradigm they do know.

I was trying to figure out how to install new machines from a central build host. Neither the straightforward install CD process, nor system.build.qcow2 made sense. I popped into the IRC channel to ask, and it turns out a 'nix build' 'nix copy', 'nix-env --store --set' and 'switch-to-configuration boot' did the trick. That's an extreme amount of power and simplicity that just kind of obviates a whole set of heavyweight processes (eg Debian automated-install). But if you are searching around for the usual type of automated install process, you just come up empty handed looking for it.


Maybe you should look within.

I'm still a long way from understanding everything about Nix/NixOS completely, but I've never gotten these kinds of comments apart from the extremely reasonable "you should start out by reading the introduction docs" - which the directions lead with.

And if you have a problem with that, I can see how you might have provoked that kind of response that seems so out of character to me.

I ask questions when I have them, and sometimes I get answers from them, and sometimes I answer them myself.

But I started off reading as much of the docs as I could tolerate before and while getting started, and always trying to self-help before asking for help.


This is a pretty good example of the kind of victim blaming I'm talking about. It's a kind of insidious toxicity that permeates certain communities, although I've not been able to put my finger on what exactly brings it out, or why it's so tightly clung to as acceptable behavior in some places.

I actually tried to help for awhile, and even started a help guide https://github.com/kstenerud/nixos-beginners-handbook/

But eventually one gets ground down and demoralized enough to just give up. It's a real shame, because I LOVE what NixOS is trying to accomplish.


I'm not trying to judge you harshly. I'm trying to understand.

In your first post in this thread, "high tier" NixOS users were called condescending sadists. This is a libel as far as I can tell based on my own experience.

Then there was a list of disliked phrases, and second to last in that list was the instruction to read the manual. I believe in reading manuals, as difficult and time-consuming as the effort may be. This made me think the expectations for using the project were wrong.

Now you're linking to a repo you made to attempt to give new users a guide to getting started.

You clearly made an effort to be part of the solution. This is admirable.

I'm trying to connect dots here - what I think happened was, in the course of trying to get help filling in the blanks, you probably exhausted the patience of others trying to help you.

I question if what you've been trying to do works at this point in the project's lifecycle. What you're trying to do is hard enough in a completely mature project. In an environment where we acknowledge the incompleteness of documentation, new command line APIs that are evolving, and new ways of doing things (I'm currently thinking of containers after looking at your guide, and flakes as well), communicating a true North is extremely difficult especially if you aren't one of the core developers immersed in the current state of change.

I don't think the project is mature enough to declare a "right way" to do all the things. I don't think we have enough mature users to support filling in all the blanks, yet, either.

I'm sorry you've had the experience you've had. Maybe after taking a break, you can resume your efforts, but with tempered expectations. We're all asking a lot of people who volunteer their time to build what should be acknowledged as a strategically important approach to computing.

We need more experts. The experts we do have are over-worked and under-appreciated. Perhaps you can help us there one day too.


Yes what you say absolutely all makes sense. My main gripe is that of culture, not substance.

It's understood and well communicated that the project is not mature enough to be an easy experience for a new user, and that's perfectly fair. But unfortunately, there's a degree of gatekeeperism going on in this community that is making the environment toxic for newcomers.

I've only seen it a few times before, but in every case, the longtimers seem unable to see it, and dismiss complaints by newcomers out of hand. Of course it's to be expected that a certain cohort of newcomers will be toxic and entitled, in which case one would absolutely be justified in dismissing them.

But that is not the case here, especially judging from the sheer number of upvotes my comments on this matter have received, and the number of blog posts in the wild about this very topic.

This is not a problem of maturity, but one of culture. And this is what exhausted me and made me give up on my attempt to make things better.


Right, it's unpleasant to see snark.

I hope my original comment doesn't read that way. I mean more "the Nix language isn't much different from something you're familiar with".

> "you should start out by reading the introduction docs"

This one I kinda buy, though.

With programming, with many things, you can get quite far without having to understand all the details. And many tools do quite well by being intuitive to use.

Nix is quite weird. And can be quite difficult. And it's got a small community, so it's under-documented and you won't always be able to find someone who's run into the same problem with a good explanation as to how to fix it.


This right here is enough for me to not even bother trying this OS. I'm always interested in operating systems (and the adjacent config systems that live on top of the Linux kernel; Arch Linux is one of those, Gentoo is another). But I'm not so interested in adding vitriol to my life, no matter the technical excellence. I'd rather live with a mundane kludge than deal with knowledge elitism.


In my experience the Nix community is extremely welcoming and refreshingly vitriol-free. This person isn't representative of the community.


>I'd rather live with a mundane kludge than deal with knowledge elitism.

But I would put up with a lot, if the payoff is more knowledge.


Lol cause one guy was cranky on the internet?


> The other problem is the condescending replies from the higher tier nix folks while everyone else struggles.

You may not like the reply, but the fact remains: the comment I replied to was a lie. It says there's hardly anything but wiki pages and Nix Pills and that there's no "easy to find" documentation of its "standard library". And the exact document has been two clicks away from them for a long time.

> I just stopped bothering.

Another problem is your impatience and inability to search for and research the necessary information without anyone's help. The lack of focus and motivation to spend a little extra time to discover, learn, and build an intuition around a new toolchain is on you. How much time have you spent on useless media content on youtube or netflix lately instead of having another try with the tool?


It’s obviously anecdotal but I’ve found the Nix community extremely supportive and haven’t really had that experience when asking questions. I typically stick to IRC but I have asked my fair share of dumb questions without any replies I’d call snarky…


Nix is wonderful, but absolutely is under-documented. (In part due to a small community, etc.).

A couple of examples I've run into:

https://github.com/NixOS/nix/issues/2259 showed people trying to make use of the "hello world" package which was given in the NixPkgs manual, but couldn't quite figure out how to get it to build.

e.g. if you want a package that's a simple script, "writeScriptBin" seems like what you'd want. -- But it's really not clear how to use it from the manual; you'd have to read the nixpkgs source. https://nixos.org/manual/nixpkgs/stable/#trivial-builder-wri...
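For reference, the usage you eventually dig out of the source is roughly this (the name and script body are just an example):

    pkgs.writeScriptBin "hello-script" ''
      #!${pkgs.runtimeShell}
      echo "hello from a tiny packaged script"
    ''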


> you'd have to read the nixpkgs source. https://nixos.org/manual/nixpkgs/stable/#trivial-builder-wri...

Is it too much to ask from developers in 2022? In the source, the function is well documented and comes with an example [1]. The only missing part is rendered HTML with the same information.

[1] https://github.com/NixOS/nixpkgs/blob/master/pkgs/build-supp...


It's one thing to look at the source, it's another thing to know that you have to look inside `pkgs/build-support/trivial-builders.nix` to find the documentation. One of the main advantages of reference documentation is a central place to look up what things do.


`writeScriptBin` is the second topmost link on Google search that leads directly to that file in a central repository of nixpkgs. Unique function names have the advantage of being transparent references that you can find either via google, or github search, or grep on your local checkout of nixpkgs.


In 2022, I think it's a reasonable expectation that API documentation can be accessed as rendered HTML. -- e.g. Haskell's or Rust's documentation even links to the source for the packages.

On the other hand, I think it's fair to say NixOS will sometimes require a high level of involvement from its users. When something goes wrong on another OS, it's easy to search for the answer. With NixOS, you have to understand both the Linux part, and the NixOS part.


> Is it too much to ask from developers in 2022?

In 2022 it's not too much to ask from the developers of the tools to provide:

- consistent and up-to-date documentation (that's built and updated together with the source code)

- consistent, readable and understandable error messages


> Is it too much to ask from developers in 2022?

It's not too much to ask for non-production ready software. If nixos is supposed to be production ready, then yes.


NixOS is production ready, productive teams that care about reproducibility use it in a variety of production environments already.


I was definitely in the "I just don't get it" camp for a couple of years before coming across a comment on HackerNews pointing to Burke Libbey's fantastic playlist on Nix [1]. It was a revelation and was what I needed to finally have the DSL click for me. Can't recommend it enough.

[1] - https://www.youtube.com/playlist?list=PLRGI9KQ3_HP_OFRG6R-p4...


Is there anything that explains the why and how a bit deeper?

As an example, in the video "Packaging a Gem as a Nix derivation", he basically just copies and pastes another default.nix, changes some strings, and mentions he doesn't understand what 'passthrough.updateScript' does, but just changes a string there too.

This doesn't really help me understand Nix.


I would watch the entire thing in order to get the "full experience" of his tutorial. This touches on things like the CLI vs just syntax.

If you are coming from the perspective of non-NixOS (e.g. HomeManager or just nixpkgs) then maybe start at his rundown of what NixOS is vs HomeManager [1].

Or if you really just want to understand the nix language itself then probably start at 'Demystifying nixpkgs' [2].

[1] - https://www.youtube.com/watch?v=IUsQt4NRCnc&list=PLRGI9KQ3_H...

[2] - https://www.youtube.com/watch?v=oWJaTb5uoT0&list=PLRGI9KQ3_H...


For me, nix the language itself is not that bad. It's all the magic happening in nixpkgs and the tight coupling there with the user interface (configuration.nix, shell.nix, etc.).

In NixOS/nix, you are essentially extending the nixpkgs source code with your own configuration to build a system, which requires good familiarity with the source. This would be like having to write C code to install packages on another OS. It's a questionable design choice but it does offer a lot of power.


> Maybe this is nitpicking, but: the Nix language is about as straightforward as "JSON plus functions". Maybe that is difficult for people who haven't had an experience with pure/functional programming?

I've programmed professionally in Lisp (SBCL) and Clojure, and done a lot of hobby work in Haskell and Rust. I found Nix the language utterly incomprehensible largely because of documentation and tooling reasons.


I've programmed professionally in Haskell, I find Nix the Language to be one of the nicest DSLs for defining build pipelines. It's lazy, pure, and it's sandboxed. Whenever you need types on top of it you just pick Dhall and dhall-nix and continue with a productivity gain.


Wow, I just commented that Dhall would probably be a fruitful language to consider for nix! And it seems to exist, though according to the readme, general recursion, non-strongly-typed code, and row polymorphism are not supported by Dhall but are used heavily by nix :/

Do you by chance know whether a non-Turing complete language like Dhall would be enough to “replace” nix?


> Do you by chance know whether a non-Turing complete language like Dhall would be enough to “replace” nix?

Realistically, I'd say Dhall should be viewed as a complementary typed DSL for Nix instead of "a replacement". Dhall shines when it's used as a code-generator for recursive Nix expressions from typed configs that capture the intent of a build pipeline. You can see how it can be generated from this Haskell example, for instance [1]. Reference [2] shows you the available constructs, which do support recursive calls and recursive Nix sets.

[1] https://hackage.haskell.org/package/dhall-nix-1.1.23/docs/Dh...

[2] https://hackage.haskell.org/package/hnix-0.14.0.2/docs/Nix-E...


The semantics are simple and necessary, yes. The syntax is kind of crap in some ways (in others it's awesome and I wish other languages were as lightweight). Syntax in general is a bikeshed, but that doesn't mean we can't objectively critique things like having function definition be a mere ":". That one on its own is probably responsible for much of the indecipherability.

Also some of the higher-level constructs really aren't clear (like overrideDerivation/overrideAttrs/override, which I still don't fully get). And mkDefault/mkForce/etc could be a bit more front and center rather than in "lib".

Also every time I run nixos-rebuild and it takes several+ seconds to evaluate my Nix expression I've got to wonder how much the implementation being this cute functional fixpoint is holding performance back.


> overrideDerivation/overrideAttrs/override, I still don't fully get

Packages are implemented as functions that take dependencies as arguments, do some stuff and ultimately output the result of calling mkDerivation (i.e. a derivation).

`override` allows you to change the arguments of the package, while `overrideAttrs` allows you to change the arguments of mkDerivation: the implementation of the package itself. `overrideDerivation` is essentially a deprecated `overrideAttrs` version.

See https://nixos.org/manual/nixpkgs/stable/#chap-overrides for a better explanation.
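
A rough sketch of the difference, assuming `pkgs` is in scope and using a hypothetical package `foo` that takes an `enableBar` argument:

    let
      # change an argument the package function was called with
      fooWithBar = pkgs.foo.override { enableBar = true; };

      # change the attributes passed to mkDerivation (here: add a patch)
      fooPatched = pkgs.foo.overrideAttrs (oldAttrs: {
        patches = (oldAttrs.patches or []) ++ [ ./fix-bar.patch ];
      });
    in [ fooWithBar fooPatched ]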


Ah, I had forgotten `override` is easy. Rereading that page I still don't get the difference between `overrideAttrs` and `overrideDerivation`. `overrideAttrs` is newer and preferred, yet the page doesn't spell out where you cannot use it and have to fall back to `overrideDerivation`. And looking through my Nix code I've got various uses of `overrideDerivation`, which most likely means I tried `overrideAttrs` only to have it not work, so I tried the "deprecated" version and it did.


I think the manual is intentionally glossing over `overrideDerivation` because it shouldn't be used. To understand the difference you'd have to know that `stdenv.mkDerivation` is a library function that wraps the Nix primitive `builtins.derivation`, which is what actually makes a derivation.

`overrideAttrs` changes the arguments of the former, while `overrideDerivation` changes the arguments of the latter. The reason it's "deprecated" is that it forces a full evaluation of `stdenv.mkDerivation` before it can override the lower-level derivation, so some niceties that Nixpkgs provides don't work with it.
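
For reference, the primitive that `stdenv.mkDerivation` ultimately lowers to looks roughly like this; a minimal sketch that only writes a file to its output:

    builtins.derivation {
      name = "example";
      system = builtins.currentSystem;
      builder = "/bin/sh";                  # any executable can serve as the builder
      args = [ "-c" "echo hello > $out" ];  # $out is the store path Nix assigns
    }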


Thank you for explaining the difference! I think the manual would be better served by an explanation like that, rather than just implying that `overrideDerivation` may still need to be used for some unspecified reason. I'll have to revisit my uses and see why `overrideAttrs` didn't work for me. Perhaps I was using the wrong argument name.


> the Nix language is about as straightforward as "JSON plus functions".

Except that they concoct weird custom terminology for everything.

It's not a hashmap, or a map, or an object. No. It's an attrset. Riiiiight.

It's not a build script, it's a derivation.

Etc, etc.


> It's not a hashmap, or a map, or an object. No. It's an attrset. Riiiiight.

This was intentional. Most hashing algorithms don't guarantee a stable ordering of keys. Nix was inspired by a research project called ATerm, which did have this property. Also, you can do things like reference other keys in an attrset, which many other dictionary data structures don't support.
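
For example, with the `rec` keyword the keys of an attrset can refer to each other, which a plain JSON object or hashmap can't express:

    rec {
      version = "1.2.3";
      name = "example-${version}";   # refers to the sibling key above
    }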

> It's not a build script, it's a derivation.

A derivation encapsulates everything that goes into building something. Dependencies, env vars, flags, sources, patches, the build script, package version, and package name all get packed into a derivation. These parameters get hashed, which is how the hash for the store path gets determined.

Derivations can be thought of as an unambiguous build "recipe" which has already been resolved based upon all configurable inputs (dependencies, build platform, host platform, target platform).

https://book.divnix.com/ch04-01-create-a-derivation.html#ins...

Nix is optimized for creating reproducible builds.


> > It's not a hashmap, or a map, or an object. No. It's an attrset. Riiiiight.

> This was intentional. Most hashing algorithms don't guarantee a stable ordering of keys.

Then it's an ordered map.

Since you're creating a new language, you can say right up front "in Nix, all maps are ordered maps" and then just refer to them as maps.

There is existing terminology for all of this stuff, and it should have been reused.


Isn't a stable ordering just an n-tuple of pairs? Or is it more like a hashmap you access in O(1)?


> It's not a build script, it's a derivation.

Yes, because derivations aren't build scripts. If you look in a .drv file (or better yet, pipe it through pretty-derivation), you'll see that derivations have a reference to a builder (i.e. the path to a binary), but derivations themselves aren't build scripts; they define the arguments and env vars for the builder (plus references to dependencies, output paths, and OS/architecture).

In practice, pretty much everything in Nixpkgs uses Bash as its builder; and almost all of those use the same 'default-builder.sh' script. However, Nix itself is independent of Nixpkgs, and we can give it any executable we like.


> It's not a build script, it's a derivation.

I think this one's a fair example of a specific terminology, though? The idea of building a package in a certain way, and storing it in a certain way, is foundational to the Nix packaging system.


Yeah, naming a key-value data structure `set` was a really poor choice. A derivation isn't really a build script though.


Fair enough, but prepending an adjective to "build script" would still have been a much better choice.

Like "elaborated build script". Or "transitively-hashed build script".


I think derivation as a new term of art is reasonable, because it's something that did not previously exist.

In particular it's not an "elaborate build script". The actual derivation is basically just a json file¹ with a path-name that's a function of its contents; it specifies everything that goes into the build (env-var settings and arguments, the input sources, other required derivations, the build script, the standard environment -- all with transitively hashed paths) as well as what comes out of it.

They could maybe have called it build-spec, or something like that, but I think giving a novel thing a novel name is in fact less confusing than trying to repurpose some existing terminology and trick people into thinking it's something they are already familiar with when actually it isn't.

¹ It's not actually json, but that's just some historical accident; `nix show-derivation ...` will actually output the derivation as pretty-printed json. But the key thing is that it's just simple data, not an executable script.


"elaborated", not "elaborate"

The former means "with all the details spelled out explicitly". The latter means "intricate".

And, I've yet to see a .drv file that contains absolutely 0% shell script. Let's be honest, nix is used mainly as a metaprogramming language for bash.


Seeing that the nix language was influenced heavily by haskell, perhaps Dhall would be an interesting “port”/change for the project - it is a deliberately non-Turing complete configuration language with types (but heavy type inference) and functions.

Because let’s be honest, types are user documentation as well, and the project would benefit greatly from the latter.


Can’t edit anymore but this more or less exists already: dhall-nix can convert from dhall to nix, but unfortunately not everything done in nixpkgs can be converted clearly into dhall (due to nixpkgs’s use of some non-strongly typed parts and general recursion)


We already had JSON plus functions. We called it 'JavaScript' :)


Ha, nice!

Seriously though, javascript is not a great language if you are looking for robust determinism. I can understand their decision to go elsewhere, given the domain and design.


People have tried: https://github.com/jkcfg/jk

But yeah I agree. The thing is, if all you need is robust determinism why do you need a full functional language with currying and other complex concepts?

Google had the same problem for Bazel, and their solution (Starlark) is way easier to understand.


> The thing is, if all you need is robust determinism why do you need a full functional language with currying and other complex concepts?

Yeah, that's a good point.

I don't see any reason determinism requires a functional paradigm. That said, the functional language communities probably value determinism more than the others, currently.

Choosing a language whose community values align with your needs can be a good thing.


JavaScript also includes bad footgun-features like procedures, aka functions with effects. Neither Nix nor JSON have this feature, making Nix in this aspect closer to JSON than JavaScript is.


> Maybe this is nitpicking, but: the Nix language is about as straightforward as "JSON plus functions".

It's too simple and straightforward. Writing Nix expressions is a bit like writing Javascript without frameworks or libraries.

Some sort of opinionated framework that compiles to Nix would be great for maintaining large Nix projects. (The Javascript world already went though this phase.)


The Nix community itself keeps talking about something similar as a first class citizen, like a YAML or TOML-type package definition scheme to be used for "simple" packages.

I don't agree with this direction, though. Basic packages can already be described pretty simply using native nixlang, and the complexity scales well— you can add patches, fixups, flags on dependencies, extra fetcher args, whatever and it's a pretty smooth ramp all the way up. Whereas it would be considerably more jarring if you could use the "simple" thing up until the point where you abruptly hit a wall and then suddenly had to start over in real Nix when you hit that one thing that happens to require it.
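
To make the "smooth ramp" concrete, a fairly typical simple package looks something like this (project name, URL, and hash are placeholders):

    { lib, stdenv, fetchurl, zlib }:

    stdenv.mkDerivation rec {
      pname = "foo";
      version = "1.0";
      src = fetchurl {
        url = "https://example.org/foo-${version}.tar.gz";
        sha256 = lib.fakeSha256;         # placeholder hash
      };
      buildInputs = [ zlib ];
      patches = [ ./fix-build.patch ];   # the ramp: add patches, flags, fixups as needed
    }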

Quite apart from that side of things, if a significant portion of nixpkgs were converted to the new simplified package definition style, it would majorly shrink the potential pool of example package definitions for novices to examine.


> The Nix community itself keeps talking about something similar as a first class citizen, like a YAML or TOML-type package definition scheme to be used for "simple" packages.

No, I'm talking about things like classes, abstract data types, modules, type checking and docstrings.

"Programming in the large".

Nixpkgs and NixOS do their own hand-rolled versions of these for each subproject. Eventually (a long time from now in a galaxy far away) we'll have some sort of standard Nix++ language and standard library.


Okay, yes, I agree with all of that. Flakes are kind of a module system, but having clearer semantics around functions and classes would definitely help with both code readability and producing better stack traces.

And a proper type system would be awesome.

Regarding docs, it is interesting that portions of the nixpkgs source do seem to have docblock-like comments above the functions, but AFAICT there's no formalized process for extracting or rendering those. Given that https://github.com/nix-community/rnix-parser exists, I wonder how big of a leap it would be to actually extract those, render them into rST pages, and generate a searchable Sphinx manual.


> Some sort of opinionated framework that compiles to Nix would be great for maintaining large Nix projects. (The Javascript world already went though this phase.)

Out of curiosity, are you talking about Typescript? Not really a framework, but I can't really think of a framework that 'won' in this sense for JS, and also compiles to JS.

On the front-end there's Svelte, which is actually compiled with Javascript, and React, which has the biggest market share of opinionated Javascript frameworks, but at best you could say JSX is compiled to HTML.

On the back-end, there's express, but that doesn't involve any compilation, and is far from opinionated...


Before Typescript there was also Coffeescript. Sic transit. :)

But also things like Vue do their own compiling step too.


https://github.com/divnix/devos is something close to a framework.

I agree that the language is too simple. Also I think some degree of IDE / language server support would help a lot. Refactoring modules and writing and importing custom functions was a bad experience for me: arcane stacktraces were common, using the repl was too verbose, and there was no clear way to debug the whole configuration.


I just got a spare mini-PC up on DevOS over the weekend! It's currently looking for maintainers but it's cutting edge (think hlissner's NixOS config but more generic) and gives you a huge amount of flexibility out of the box. Highly recommended if you want a Flake-based NixOS system, local or remote, but don't know where to start, or if you'd like an example of best practices to build off of.


This post is dead-on. I've been trying nix in different forms for years, and it's been incredibly painful. The way the folks who use it normally talk about it is so different from real-life use by non-experts that it feels like gaslighting. I only felt vindicated when reading https://ianthehenry.com/posts/how-to-learn-nix/ -- it's not me, it's them.

Maybe one day it'll get good enough to be usable. Right now it's a long, long fight to get productive in any non-trivial (i.e., not explicitly supported and documented) configuration.


I've been messing with GUIX, just because the configuration language is much less confusing to learn and understand.


You might as well try Spack [1]; it's Python plus a DSL to customize builds in a single line. Guix package descriptions look very daunting to me.

[1] https://github.com/spack/spack/


I used to think the ideas of NixOS were great, but then I learned about `ostree`, and in my opinion things like Project Atomic (in Fedora) are probably a more pragmatic and straightforwardly idiomatic implementation of those ideas. In essence the software packages resemble how git works behind the curtain.

Git has a bunch of objects with cryptic names resembling hashsums, and those are the files in your git working tree; they are sorta symbolically linked into the working-tree of the git repo. In the case of ostree, the working tree is a filesystem path such as the root of your filesystem. You can check out a branch of the root filesystem on bootup, etc., and it's great for A/B testing because if there are any problems one simply boots into the previously known-good branch.

The problem there is people sorta hate having to reboot after modifying the HEAD of the ostree repo, which might conceptually resemble a git commit into the main branch. Flatpak to the rescue, which is itself another implementation of `ostree`, not for entire root filesystems but for a package's filesystem layout, what you might think of as the contents of any given nixos package path. The same kinds of things happen: bundled libraries can differ between runtimes, multiple runtimes can co-exist, and their file objects are deduplicated and shared or isolated. The configuration language is JSON, so it's easy to manage as a packager.


> You can check out a branch of the root filesystem on bootup, etc., and it's great for A/B testing because if there are any problems one simply boots into the previously known-good branch.

You can also do this with NixOS, or even ZFSBootMenu.


Not just ostree. Using Butane / Ignition configs to set up CoreOS is very nice.


Sry for the necropost, but... I just started working on coreos at $dayjob. So your comment struck a nerve, but in a totally good way! :-)

I'll check out Butane / Ignition ASAP.

Thanks!


I understand the first complaint but not the second

> The second flaw is that NixOS does not actually provide real isolation.

I thought this was going to be about security, but then it was about having to repackage the world. But Nixpkgs is currently the biggest and most up-to-date package repository in the world (we finally beat AUR). Packaging enough stuff is emphatically not the problem.

What I want to see is development be easier with Nix, because winning over the hearts and minds of upstream developers is the final frontier. To that end I wrote https://github.com/NixOS/rfcs/pull/92 and https://github.com/NixOS/rfcs/pull/109.

As a side benefit, autogenerating more Nix means less handwritten Nix! I do think people give the language itself too much grief when the problem is more the library side, with the copious override shenanigans and other stuff.


I think you missed the key point. Software needs to be altered to run on Nix, and it needs to do so _systematically_.

Grab a script from the internet? Edit the shebang! Download some prebuilt software? Good luck finding /lib/ld-linux.so.2! Need to use some proprietary software you cannot edit? HALP!

Of course there must be some tooling to get around this, since stuff like steam is available (at least I hope), or else anything not packaged by nix would be a nightmare to install and use as a user. Any niche or domain-specific software would become unusable. Take for example Quartus II or Vivado (just to name the two top FPGA EDAs); those two are not packaged by nix. Yes, there are a few recipes floating around, but those honestly scare me enough to not want to try it with more obscure tools, like those for other, smaller FPGA brands.

(disclaimer: I never used nix, but that's my understanding from the article, and my own experience running proprietary stuff)


Hehe, Quartus II is terrifying. It is like 10s of gigs of basically its own distro. I have seen it shoehorned into Nix but at the end of the day it's a garbage heap of a "package".

Nice example :)

Yeah, for me it's just more interesting to attack the problem at the root: developer UX, autogenerating packages from things like Cargo.toml, etc. Prebuilt binaries are not something I really care about that much, and existing tools do handle them quite well, surprisingly automatically.

It's Linux; if you are using some rando prebuilt binaries (not your distro's, or from another trusted build-artifact cache) it's always a tragedy.
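
One of those existing tools is nixpkgs' `autoPatchelfHook`; a rough sketch of wrapping a hypothetical prebuilt tarball with it:

    { stdenv, autoPatchelfHook, zlib }:

    stdenv.mkDerivation {
      pname = "vendor-tool";                    # hypothetical prebuilt binary
      version = "1.0";
      src = ./vendor-tool-1.0.tar.gz;
      nativeBuildInputs = [ autoPatchelfHook ]; # rewrites the ELF interpreter and rpath
      buildInputs = [ zlib ];                   # libraries the binary links against
      dontConfigure = true;
      dontBuild = true;
      installPhase = ''
        mkdir -p $out/bin
        cp bin/vendor-tool $out/bin/
      '';
    }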


I actually added Quartus Prime in nixpkgs a couple years ago! Haven't gotten round to the 2021 update, but that should get rid of the annoying 32-bit dependencies.

[1]: https://github.com/NixOS/nixpkgs/blob/1e643a385290e54947594a...


I suspect that the FPGA EDAs should run fine under steam-run. Packaging up giant balls of binaries is a mess.


Side note: It would be nice to have a way to look up library -> package name.


> I do think people give the language itself to much grief when the problem is more the library side with the copious override shenanagins and other stuff.

I sympathize with this, honestly—the separation between Nix the language and Nixpkgs the library/codebase is just an implementation detail from the point of view of the user. This is especially true for core constructs like overrides—overriding is a pretty fundamental abstraction that would be part of a differently designed language, so the fact that it's more like a Nixpkgs convention is not going to be clear (or particularly useful) to somebody learning Nix.


I think you have the mindset right, so thanks for elucidating that. I would still push back.

Ultimately library + language is serving a single goal here, but "means to an end" != mere implementation detail!

As a user of a language, it's incumbent upon you to learn the difference between library and language in what you write. Full stop. I do wish Nixpkgs would be less ridiculous. I wish there was a GUI too! But people who are going to write some code still need to learn the difference between language and library anyway.

Ultimately I don't think a more mainstream language would even help with the override soup problem, and we will need a logic programming type thing, but that's a separate point.


> I'm going to keep using it, since I can't stand anything else after having a taste of NixOS

This is how I feel about NixOS. I like what it gives me but I don't really like much else about it. Unless something _just works_, you're looking at hours of debugging which will most likely lead to failure. Off the top of my head there are a couple of things I've hit a dead end on recently:

- Packaging a Flutter desktop app because its build process downloads fonts from a URL and so makes the whole thing impure

- Getting Blender's GPU support working, I think the same problem is causing Webcamoid to crash on launch

Previously I used Arch, which I loved, but it was all too easy to completely shaft my setup and not be able to roll back.


> Unless something _just works_, you're looking at hours of debugging which will most likely lead to failure.

Exactly my experience; in this case I simply switch to my OS package system (I use Nix only as a package manager on my Ubuntu). I think I have around 95% of my software coming from Nix; for the other 5% that doesn't work out of the box, I just do 'apt-get install'.

Also, for installing different programming languages I switched from Nix packages to 'asdf', because packages on Nix are often not up to date when I want to try a recent release.


I think they were using NixOS, not just Nix, which means that you can't fall back to the "OS package system" because Nix is the OS package system.

The difference between Nix and NixOS was a branding mistake by the Nix team. It's unnecessarily confusing.


I just switched away from asdf (primarily developing Go on Mac) because of the problems it caused.


What problems, if I may ask?


I was working on multiple projects with varied versions of Go, from 1.16.1 to 1.17.6. asdf would switch versions (mostly correctly) for each project, but then the VSCode tools would not, and VSCode would refuse to run tests because there was a mismatch between tool version and Go version.

Golangci-lint was also a nightmare around versions, and I suspect asdf was involved in this, too. We had 4 separate instances of it (the one VSCode used on save, the globally-installed version we could invoke from the CLI, a dockerised version that supposedly emulated what the CI engine would do, and the CI engine itself). None of these ran the same versions of the linters with the same config and the same version of tools.

I uninstalled asdf and used a more conventional version manager. Although there was some pain around remembering to manually switch versions, it actually didn't matter that much. Go's backwards compatibility guarantee meant that I ran into far fewer problems from working on a 1.16 project using Go 1.17 than I did from asdf trying to match versions. The important bit was that the tools and the Go version used matched.


Did you try teaching VScode about the right version of Go by having it call `asdf exec go` instead of `go`? (Not sure this is possible with VScode but I'd be interested in whether this might be a way out.)

In any case, aren't those issues more of an IDE problem, though? IntelliJ is the same, and the tools you set for one project might even leak into other projects. It's one of the reasons I hate IntelliJ so much.


I'd say that packaging an app for NixOS should include rewriting all such impure steps, and packaging the fonts along with the app is one such step.

If the fonts can't be packaged for licensing reasons, even as a separate package, then, well, you indeed can't guarantee that you'd be able to install the app next time. If that's not something you seriously need, then likely the value proposition of Nix is not for you.


I agree. That doesn't negate the fact that it took me a while to understand what the issue was, see if I could get around it, work out how to disable the font downloading in Flutter, etc, etc. I may have been trying to force a round peg into a square hole but NixOS didn't do much to help me see the shape of the pegs and holes in the first place.


> Previously I used Arch, which I loved, but it was all to easy to completely shaft my setup and not be able to roll back.

Here is my attempt at solving this problem: https://github.com/CyberShadow/aconfmgr


I remember reading about your project a previous time NixOS had been mentioned on HN, and tried to look for it again but couldn't manage to bring up the correct set of keywords to find it.

Thank you for posting it again. I don't think I'll go back to Arch unless I grow tired of NixOS, but if I do I'll be sure to give it a spin!


I'm going to look into this, thank you.


This was a dumb choice by the flutter team. Yes, Nix doesn't provide an easy workaround, but let's not put all the blame on Nix.


Nix 'flakes' aim to address your first issue by requiring all inputs to be explicitly defined (including nixpkgs itself).


I think the author is quite wrong that the nix language is not very good and hard to learn. In fact it's not only the best external DSL I've seen, but also one of the few cases where inventing a new language rather than just repurposing a subset of something already established looks like a good decision to me. In fact, I consider it a marvel of language design. The mix of properties that nix has (laziness, purity, relative simplicity and good syntax for things like path-names, records or interpolation) absolutely makes sense for the problem domain and would be difficult to achieve with something more off-the-shelf.

That is not to say there are no significant problems with it: the lack of static typing coupled with low runtime performance and bad error messages and debugging support means tracking down more complicated errors can be a pain and the laziness can further compound this.

The main big design flaw in nix (as an ecosystem), from my perspective, is that there isn't enough separation between the parts of the build specification that are pure data (git hashes, version numbers, license info, etc.) and the parts that actually need computation (specifying a complex build recipe conditioned on various things). This lack of separation makes it harder than it should be to have tooling to programmatically transform the pure data bit (e.g. to bump the version) and also really raises the barrier of entry. Stuff that should just require a newbie to tweak some toml file often requires messing around with nix instead. There is tooling that has this separation (niv, for example) but there are a bunch of different solutions following different conventions, and nixpkgs itself happily jumbles metadata with actual code. I think this has significantly held back the ecosystem. Specifically, 99% of users ought to be able to just tweak some toml files to satisfy their needs rather than learning nix. This is starting to improve a bit with stuff like toml files for flakes, but it's still not great. I also think the ergonomics of many of the nix command line tools are rather poor.

But none of these issues seems to require significant changes to the core nix language (probably even the lack of static typing). So whilst I think there are significant practical warts, the core design is IMO extremely good.


I wish nix would implement the stdlib next to the language, even if it's written in nix and not C++. I hate that so many of the basic building blocks are hidden behind lib in nixpkgs. I feel like that accounts for a good bit of the barrier to learning the language. It's difficult to know the difference between a core language feature for the language's sake and a core language feature for the sake of nixpkgs.


Yes. I find the core language elegant, but I'm not equally enamored with the "batteries". I agree that this is a big barrier to learning how to do something useful with nix, I'd say much bigger than understanding the actual core language.

There are at least two problems: one is that, as several commenters have already mentioned, there is not enough consistency amongst different idioms for achieving certain results even within nixpkgs itself, and sometimes both a deprecated and a new idiom are in use, which especially matters for stuff that is not trivial to start with (like overrideAttrs or overlays).

The other is that IMO even some of the conceptually simple stuff imposes too much mental overhead.

So instead of writing `if true then ["foo"] else []` (which is trivial to understand without any nix knowledge) the more idiomatic way is to do `lib.lists.optional true "foo"` and instead of `if true then ["foo" "bar"] else []` `lib.lists.optionals true ["foo" "bar"]` (which is completely obscure, unless you look it up).

I'm not sure such minor abstractions pull their weight for a special purpose language that most users should only spend very little time with.

I think what nix would really need to fulfill its potential is a multi-month top down pruning and rationalization effort from a person or team with a good eye for developer UX and not too much expert blindness to trim back many years of organic growth and provide a smoother learning curve. At the moment you need to learn far too many things at once to get anything done, whereas it ought to be much more pay-as-you-go: as long as you don't need to customize or create your own build recipes, you shouldn't have to touch nix code at all (and no, bumping a version doesn't count as long as there's no need to change the actual build steps).

No matter the benefits, I don't think Nix will grow beyond a narrow niche as long as any deployment of it requires anybody who comes in contact with it (i.e. not just the local nix guru) to devote significant mental resources to it. The typical (developer or sysadmin) end user needs something where simple stuff can be solved by editing a straightforward toml file (or similar) and running a cli command that requires as much memorization as (or less than) typical uses of docker.


I find the language/ecosystem-specific idioms really interesting from a historical perspective. I feel like it makes for great reading if one wants to know the common problems involved in building packages for a given framework. I spent some time fixing some packages on Darwin for the most recent release, and I was surprised by how many of those were stdenv-based. I don't really write C/C++, so I don't end up using the stdenv builders myself.

At some point, one of my fixes used an "old" idiom; it was valid, but it had fallen out of use. It was just the first example I found for what I thought would fix the package, and it turned out to be correct, but the new idiom was cleaner and clearer. Nevertheless, they took my fix with the old idiom, so it will continue to propagate for what is likely the same reason. Some of this is just a result of the size and scope of the work going on, especially around release regression time.

In a separate personal example, I went down a very deep rabbit hole of trying to figure out why I couldn't use a shell as a base for a derivation that was essentially just packaged code used as a payload for an external system. It turns out that shell derivations are special things and can't be used directly for standard derivations. My eventual solution, using them indirectly, seemed hinky and didn't incrementally cache well, but it worked.

I agree very much with the problems you've described, but I am a bit more bullish on the potential of the larger ecosystem to outgrow at least some of those problems. I think flakes are the key to enabling better decoupling from nixpkgs. One could rewrite a saner stdlib as a flake, which could then get pulled back into nixpkgs as a flake input. There are promising tools like devshell and flox that are making a serious attempt at unlocking broader adoption through ease of use.


>That is not to say there are no significant problems with it: the lack of static typing coupled with low runtime performance and bad error messages and debugging support means tracking down more complicated errors can be a pain and the laziness can further compound this.

Some of these problems can be alleviated with good IDE support or a language server.


Yes, this is a very good take.


I've been using Nix for about a year. I don't use NixOS but home-manager and nix-darwin. Just about every single time I've tried to do something new, it has ended up being way more complicated than I thought at first.

- Installing emacs? You'll want to use the binary cache. No biggie. Except when it doesn't work, and your system update ends up building it from scratch--a two-hour process on my stalwart old desktop.

- Setting up a local environment for a Go project? No problem. Er, except some tools need to be installed globally, quite against the Nix philosophy.

- For a system or home configuration, you'll encounter packages, modules, and flakes. Packages are explained in the Nix documentation. The functions people actually use to make packages are explained in the Nixpkgs documentation. Modules are explained in the NixOS documentation. And flakes are explained in...uh...well, you'll figure it out. (You don't need flakes at this point, but they're on their way. And they won't replace packages entirely.)

- Debugging? gl,hf

- And this is all before we get into my admittedly complex system configuration using flakes, nix-darwin, and home-manager. Some things need to be set up at the system level, others at the home level. And cross-referencing the two is a pain. I just got an M1 mac. Some packages need to have their x86 versions installed, against Nix's protestations. I appreciate the guy who made a configuration framework that figured out how to do this, because I certainly couldn't.

Anyway, I love it.

I think it's the mathematician in me. The whole idea is too aesthetically pleasing for me to resist. The hope of having everything placed just so by a mere wave of my hand, too tempting. Damn the consequences.


>Installing emacs? You'll want to use the binary cache. No biggie. Except when it doesn't work, and your system update ends up building it from scratch--a two-hour process on my stalwart old desktop.

Just don’t use emacs ;)

In all seriousness, were you pulling from unstable? Disallowing installation because the hydra build is failing has gotten me, albeit for an obscure package I never use.

>And flakes are explained in...uh...well, you'll figure it out.

This is a funny quirk I've seen in the community, where everyone talks about using flakes but has a very difficult time saying how. I finally figured out the weird incantation to get flakes to work. It was not trivial. (And the reason I'm not saying it here is that I simply don't remember.)


> I finally figured out the weird incantation to get flakes to work. It was not trivial.

Care to share with the class? ;)


For the system, I like the devos template:

https://github.com/divnix/devos

The idea of flakes is that you define inputs, and then define the system (and packages, shells, etc.) in the outputs using those inputs. The inputs are git repos which point to other flakes. You can mix and match these as much as you want (see the devos repo for examples), and when you build the derivation, it generates a lockfile recording the exact commits that were used for the given inputs at that point in time.

You commit the lockfile, and on other systems where you pull your config from the repo, it uses exactly those commits and installs the same versions as on your other systems.

This was quite annoying and hard to do before flakes. Now it's easy.

The problem people face when building their system as a flake is combining package sets so you can point to `jq` from the unstable nixos train and firefox from the stable train. I think this aspect needs better documentation so it wouldn't be so damn hard to learn (believe me, I know). Luckily there are projects like devos that give a nice template for people to play with (with documentation!).
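
A minimal sketch of what that can look like (hostname and module wiring are just placeholders):

    {
      inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-21.11";
        unstable.url = "github:NixOS/nixpkgs/nixos-unstable";
      };

      outputs = { self, nixpkgs, unstable }: {
        nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./configuration.nix
            # pass the unstable package set to modules, e.g. to pull jq from it
            { _module.args.unstablePkgs = unstable.legacyPackages.x86_64-linux; }
          ];
        };
      };
    }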

Another use for flakes is to create a development shell for your repo; here's an example of what I did a while ago:

https://github.com/pimeys/nix-prisma-example

Either have `nix-direnv` installed, enter the directory and say `direnv allow`, or just run `nix develop`, and it will gather, compile, and install the correct versions of packages into your shell. Updating the packages? Run `nix flake update` in the directory, commit the lockfile, and everybody else gets the new versions in their shell.
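
For the dev-shell case, the flake output is small too; a minimal sketch with the system hardcoded for brevity:

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-21.11";

      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          devShell.x86_64-linux = pkgs.mkShell {
            buildInputs = [ pkgs.nodejs pkgs.jq ];
          };
        };
    }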


I wrote a blog post on NixOS/home manager configurations with nix flakes. I make sure to link out to other resources for those who don’t have the whole background. See: https://jdisaacs.com/blog/nixos-config/


> Setting up a local environment for a Go project? No problem. Er, except some tools need to be installed globally, quite against the Nix philosophy.

I have yet to encounter an environment I can't fit into a nix shell. What requires globally installed tools? Sounds like a bug.


I always found Guix more appealing, because it uses a programming language that is useful for other things as well (GNU Guile). I really can't stand having to learn a new language for each thing I want to manage.


I completely agree, but the main benefit nix has over guix is some amount of MacOS compatibility. I don’t currently use a Mac, but I know that next time I have to use one for work, my home-manager-mediated config, which contains almost all of my user-specific configuration, will port right over just fine. The guix team doesn’t seem particularly interested in supporting MacOS, and I can’t blame them.

The other benefit is mindshare, so things tend to get updated more quickly in nix than in guix.

I do really wish I could use guile rather than the nix language though. I’ve gotten used to it, but the complicated abstractions in nixpkgs are still often inscrutable.


I have had NixOS as my daily driver for nearly three years now and maintain a small number of packages in Nixpkgs. From my perspective, macOS support is a headache as it may all of a sudden require you to debug on a proprietary OS that you lack access to, and that runs on two different hardware architectures, in order to get a patch accepted. I am probably not going to make any friends for saying so, but I would much prefer if Nixpkgs for macOS was maintained separately.

My personal reasons for not going with Guix is that it is a GNU project and thus you have to buy into the entire FSF philosophy. Sadly I do need to run on “problematic” hardware from time to time and would prefer if doing so did not require me to add unofficial extensions and be wary of uttering such heresy in the company of my fellow users. That being said, I like what Guix is doing and their documentation does frankly look better compared to the grass where I am standing.

Lastly, yes, NixOS is a damn curse. Once you get the taste of having a declarative operating system it is really hard to go back. Trying out tiny tweaks to your audio, kernel, etc. all with the confidence that you know how to get back to what you had before is so very addictive. We need more diversity in this space.


> My personal reasons for not going with Guix is that it is a GNU project

That's a high price to pay.

Look, the GNU project means different things to different people.

One of the goals of the GNU project is to give users the tools to liberate themselves from arbitrary restrictions. The Hurd pretty much does away with the concept of an all-powerful root user as the only privileged account to alter settings such as network, file system virtualization, drivers, etc.

Emacs is designed to be a collection of extensions; the Emacs paper makes it a point to show that Emacs brings programming to people who aren't traditionally seen (nor see themselves) as programmers.

Guile was designed to be the extension language for every part of the GNU system that was still constrained by the dead systems programming language C.

Likewise, Guix aims to give “end users” control over their software environments and systems, privileges that used to be reserved for the sysadmin class. All design decisions in Guix are aimed at extending privileges to users: package transformations, package inheritance, building packages from JSON descriptions for those averse to Scheme, per-user channels, time machine, an extensive API to build and export systems, virtual machines, containers, environments, etc.

That's what I feel the GNU project stands for, and that's why I work on it and claim the name despite the PR problems that some GNU contributors keep producing.

> you have to buy into the entire FSF philosophy

Hell no! I don't donate to the FSF, I'm not affiliated with the FSF. The FSF has no say on what happens with Guix (and when I was co-maintainer and rms tried to tell us to remove clang from the package collection we told him we disagree and that was that). Guix abides by the Free System Distribution Guidelines, which were published by the FSF. This means that Guix does not come with proprietary software by default.

Guix makes it trivial to add the nonguix repo (or any other repo for that matter): just add it to your channels and run `guix pull`. Now you've got the vanilla kernel and firmware packages and whatnot. You can chat about it all you like on #nonguix. We just ask to keep discussions of proprietary software out of the main channels. Doing that anyway is not "heresy" (I'm sick and tired of the religious vocabulary being applied to people who work on replacing proprietary software with free software) but just ... rude, I guess.

So, I welcome you to sample that greener grass up close. It might pale a little when you're debugging, but at least you get to use Scheme.


During the earliest days of Guix certain relatively voluble community members had a tendency to create an impression that you -did- have to buy into the entire philosophy or you were actively unwelcome as a user.

Things seem to be much more sensible these days but it's worth bearing in mind that a bunch of people who tried to be early adopters will have come away with that impression and plan your evangelism for the current state of the project accordingly.


First of all, you and all other of my fellow FLOSS developers have my respect. We may disagree on major or minute aspects of software design and philosophy, but ultimately we are a force for good in this world. I also admit that there are – as you say – nuances between GNU projects.

> I'm sick and tired of the religious vocabulary being applied to people who work on replacing proprietary software with free software

Well, I should have used quotes (“heresy”) to highlight the fact that I do not consider the usage to be entirely appropriate. But maybe instead of turning outwards it may be better to turn inwards and ask oneself why people such as myself, who have written fewer than about 10,000 lines of proprietary code in our entire lives, still draw upon such analogies to describe our GNU brethren?

One may think that following the Free System Distribution Guidelines only leads to the base system not shipping with proprietary nonsense. However, as you point out, there is also the fact that a chunk of the developer base find mentions of proprietary software offensive (?) and this necessitates shielding them with the #guix and #nonguix distinction. Perhaps naively, I find this counterproductive as I would rather have the “sinners” (sorry, I could not resist) on board and empower them to control the proprietary software that plenty of times is forced upon them because they have to make a living in a partially proprietary world. I want to allow them to transition cleanly to what we must ultimately turn into better non-proprietary alternatives. For example, I am happy that I can control and proud to have managed to keep my exceptions down to:

    nixpkgs.config.allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [
      # “So Nvidia, fuck you!”
      "cudatoolkit"
      "nvidia-persistenced"
      "nvidia-settings"
      "nvidia-x11"
    ];

Accommodating a user's ability to do this, in my mind, makes us stronger as we acknowledge a reality where compromises have to be struck. Yes, one could bring out nonguix to accomplish this, but it is very clear from how Guix is structured as a project that these people and the software they rely upon are at the very best a secondary concern to the project's goal and the software they produce. In summary I want to engage with users of proprietary software rather than to shield myself from them, as I think this is more likely to lead to the long-term success of free software, and to better free software.

Note that all of this is regardless of whether you are a card carrying FSF member and my general dislike for the law-based approach that the GPL takes to free us from proprietary software.


Oh yeah no doubt MacOS is a huge pain. We use nix for installing system deps at work, and it seems like once every six months or so something will inexplicably stop working on Mac. Luckily I have an old Mac at home, but the new architecture means that one is becoming pretty much useless now for debugging some of the stuff that comes up.

Still, being able to guarantee everyone working on the project has the same versions of everything is pretty nice. No need to worry about ancient Mac versions of bash or awk or sed or whatever, so we can script confidently.


In the long run macOS support will prove valuable for shaking out a ton of cross-platform build issues. I see the macOS support as a huge, compelling and underfunded reason to try nix. The fact that I can avoid homebrew entirely is incredible.

It helps me and my peers flex our Nix muscle regularly, which means when we have to do real Nix work it's fresh in our minds.

Sure, there are rough edges and baggage and purism pain but I wouldn't trade it for anything else.

Frankly I think nix-darwin should be rolled into nixOS.


Is the author aware of Guix?


Apparently not, or forgot about it when writing the article:

> it's so clearly the only operating system that actually gets how package management should be done


He might be one of the many, many people whose hardware isn't supported by Guix.


Oh right, I forgot it's a purely Free Software distro.


Isn't Guix just nix with guile?


as much respect as I have for Guile, I'd love the nix configuration language to have at least partially enforced types...


It really is a pity that Guix didn't go for Common Lisp instead — it does allow for gradual typing. And it has standardised a ton of stuff that Guile had to implement on an ad hoc basis.


Honestly, I kind of like how Guix and Guile have been growing together. Like Rust and Servo (and the Rust compiler), I think the constraint of having a general-purpose language, but a general purpose language that is also responsive to a bigger project, has been beneficial to both.


time for hix .. and lets revive http://lambda-the-ultimate.org/node/299


Does guix also have any equivalent to patchelf?


Guix uses patchelf.

Patchelf is very useful anytime you want to change the location of dynamically loaded libraries but don't want to set LD_LIBRARY_PATH.

It is used in more places than you might originally think.


We don't use patchelf for any official package definition. We have a package for patchelf, but it's only really useful when you have pre-built binaries that you can't rebuild from source.


The main guix monorepo may set the hard requirement that everything be built from source, but if you want to run closed-source software then your only two options are to make a fake FHS or to use patchelf. Both methods have pros and cons. Only patchelf makes the result work well with other closed-source programs.

While from a philosophical point of view guix is pure, real world usage often needs closed source software.


Yes, for pre-built binaries you can't really get around patchelf. For example, that's also how CUDA is packaged for Guix.


GoboLinux also uses patchelf.


Interesting to see exponential growth in NixOS github stars https://star-history.com/#NixOS/nixpkgs&Date . Perhaps the user base is reaching a tipping point where there will be the resources to address some of these problems.


What fraction of those who starred a project on GitHub become committers to and/or donate more than once to it?

I've always assumed that this is a vanishingly small amount, on the order of one in ten thousand or so, but I could be underestimating.


Currently, there's 8.7k stars, 7.2k forks. And 2000+ maintainers have added themselves: https://repology.org/repository/nix_unstable

So, I would say a pretty large portion.

Also, the contribution model for nixpkgs is just opening a PR, so it's a fairly low (non-technical) barrier-to-entry for most contributors.


> but the connection between those two things is essentially undocumented

I often think documentation would be vastly improved if the author were simply answering the question _what problem did you solve, and how?_, but that seems like it would reveal too much of the dirty details of the design process somehow, so they try to describe all the neatly encapsulated abstractions and metaphors they've come up with to describe the system (and to encode their own eventual understanding of it) instead. Basically, give us the dirty details and we can make the generalisations ourselves, instead of trying to load the generalised structures into our minds in the most efficient way possible.


This implies that the author of the documentation is the person that wrote the thing though, which is surprisingly rare in larger projects.

I wrote one of the more popular Nix language tutorials[0] and my approach there has been to try and explain everything as straightforwardly as possible and make it work both as a reference and a manual (i.e. you can quickly jump to a section that interests you, or read it from top-to-bottom).

One thing that nixpkgs (the Nix package set, which is ~most of the public Nix code) suffers from is reinvention of abstractions (simply due to its age). For example, most programming languages have their own slightly different implementation of a `withPackages` function that gives you a prepared environment with some dependencies. Some of these use open sets, some of them use closed sets, in the latter cases overlaying things into them can get really confusing even for experienced Nix users and so on.
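
For instance, two of those per-ecosystem helpers (both exist in nixpkgs; the chosen packages are arbitrary), assuming `pkgs` is in scope:

    let
      pythonEnv  = pkgs.python3.withPackages (ps: [ ps.requests ]);
      haskellEnv = pkgs.haskellPackages.ghcWithPackages (hs: [ hs.aeson ]);
    in [ pythonEnv haskellEnv ]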

In my opinion, the community should slow down on making lots of new features and spend time consolidating existing logic and agreeing on some basic abstractions. Maintenance work like that is not always fun though, and especially in unpaid open-source projects it's difficult to keep up the motivation for it.

[0]: https://github.com/tazjin/nix-1p


I too have tried nixOS and nix many times in the past few years (and just recently went back to having a computer with it permanently installed, albeit not my main one).

I think another underappreciated reason it's been difficult to use is that the CLIs involved are often really arcane (in ways that often remind me of using GNU find, a program I don't think scores high UX marks).

For example, uninstalling something with `nix-env` is spelled `nix-env -e something`, and installing by attribute path is `nix-env -iA ...` (yes, there's a long form of the options, and most people tell you to never use nix-env, but these were the examples I recalled; I think the UX is similar across the other tools). Trying to decipher what these command lines mean and searching for which ones do what you want is just really challenging; they don't follow the "rules" I'm familiar with for other CLI tools. (Maybe they follow norms from some other part of the OSS community though.)

The new `nix` CLI seems to be trying to "dumb" things down a bit (or optimistically -- make a CLI that is more transparent about what operations it's doing), which is great, but it too exhibits some odd behaviors. E.g. recently after an upgrade I found it wouldn't work without passing `nix --extra-experimental-features 'nix-command flakes'`, even if I seemingly wasn't doing anything related to flakes. It is indeed marked experimental, so I can't fault it too much, but yeah, I've found significant challenge as a new user in just trying to understand the various nix package manager binaries, their options, and how they map to "normal" package manager operations (even ignoring the procedural vs. declarative angle for how you actually manifest changes).

EDIT: Other things I recall getting me every time I reinstall NixOS are re-learning that you get to run `nix-collect-garbage` manually, that `nix-channel --add` doesn't automatically update, and (I see I wrote this down as a reminder) that package names are case sensitive (though I didn't recheck that recently, hopefully that's changed?).


> you get to run `nix-collect-garbage` manually

If you mean running NixOS (the distro), rather than just Nix (the package manager), then there's an option for this: `nix.gc.automatic`. You may also want to look at `nix.optimise`. When using just Nix, however, I don't know if they bundle the timers.
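
Roughly, in configuration.nix (option names as I remember them; double-check against the current NixOS manual):

    nix.gc.automatic = true;
    nix.gc.dates = "weekly";        # systemd timer schedule for nix-collect-garbage
    nix.optimise.automatic = true;  # periodically runs nix-store --optimise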


`nix.gc.automatic` is implemented as a systemd timer to run `nix-collect-garbage` periodically.

`nix.optimise` is just a nix conf option, meaning that anytime nix creates a store path, it will automatically dedup files by hardlinking them to a `/nix/store/.links/` path. There's no timer involved with `nix.optimise`. You can manually force existing paths to be "optimised" by doing `nix-store --optimise`.


That might be true, but the config would suggest otherwise.

`nix.optimise.automatic` is an option which creates a systemd timer. The existence of an option that creates a timer suggests that it is not automatically deduping the files (or the option is outdated).


Nix has two kinds of problems: the language and the interface. They can solve interface problems, but the Nix language is a tumor that won't be easily removed.

The problem is that Nix the language is the worst of all worlds:

- You can't print anything because the language is lazy. Forcing any values to print them can and will result in random operations happening on your store. You can never know which values are safe to inspect. This kills debugging.

- Everything in Nix is recursive: data structures contain copies of themselves as hacks to avoid building proper APIs. So again, you can't print anything, even if you're sure that printing the object won't do something crazy to your store.

- There are almost no APIs and no uniformity in Nix. Packages are given total freedom to do anything. The Python ecosystem works totally differently from the Haskell one, which works totally differently from the C++ one. It's insane. They use different data structures, functions, etc., to do the same thing.

- The Nix language has impossible error messages. You will get stuck. You can't view values and you can't get error messages. There's no moving forward from that unless you ask someone.

- Laziness in Nix is different from laziness in Haskell. In Haskell, it's mostly about performance improvements and some cool tricks here and there. In Nix, laziness fundamentally means something: you build up packages and you force them to install them.

Nix the package manager also has serious issues:

- You cannot install Nix in your home account without root permissions (yes, there are hacks, but they break terribly). So Nix is actually less isolated and less portable than something like Anaconda!

- The commandline experience is terrible. Nothing makes any sense. Not the names of the tools. Not their arguments. Why sometimes something is a binary and other times it's a mode of another tool, etc.

- In exchange for making some hard things easy, Nix makes a lot of easy things very hard. Sure, it will manage an isolated environment. But now you want a package from pip that isn't in Nix? There is literally no way to figure out how to do this on your own; you need to find a tutorial online, hope it's up to date enough to work, and follow it step by step. And there's a good chance you misunderstood what the tutorial was doing and won't have the correct environment at the end.


> The commandline experience is terrible. Nothing makes any sense. Not the names of the tools. Not their arguments. Why sometimes something is a binary and other times it's a mode of another tool, etc.

As a casual user of nix, this is the worst. I can never remember commands and it just increases the barrier to entry. Git has a bad command line UI but nix is far worse, even after their recent command line improvements.

Edit: that said, I do like using home-manager to manage configs across multiple machines even though it can be confusing.


> You can't print anything because the language is lazy. Forcing any values to print them can and will result in random operations happening on your store. You can never know which values are safe to inspect. This kills debugging.

I don’t quite follow the conclusion. Forcing values will not result in random operations on the store. Forcing values can only ever create store derivations, which are an internal representation of all derivations that were encountered while evaluating some expression. It won’t ever mutate existing data or remove anything from the store, nor will it actually build anything or fetch cached packages, so forcing values should be unproblematic. Store derivations can later be garbage collected.


Yes. So, if I force a value, like say, I very naively do what I would do in any other programming language, I try to print something I don't understand (How are all of the python derivations stored, I wonder if I can print them?), the result is that I create endless garbage and .. sit there pointlessly looking at an empty screen. Evaluating derivations shouldn't result in work being performed. It should result in derivations being printed to the screen.

But.. it doesn't really matter that much because half of the datastructures in Nix are so circular that you couldn't print them out anyway.


Not being able to use it in a home directory without root was a major turn off for me. I actually started writing a python module to install packages in a way similar to nix (albeit I never got to reproducibility) but ran into problems building glibc and installing it to the home dir. I’d like to continue it one day.

https://github.com/DylanEHolland/pykgr


Yeah, one of the consequences of purity is that packages can't/shouldn't refer to paths outside of the nix store (/nix/store), so it needs to be hardcoded. Otherwise you end up not being able to match hashes against a binary cache.

The glibc issue you ran into is actually a prime example of this problem, and the version shipped in nixpkgs is patched to allow multiple instances by changing various hardcoded paths to the glibc's path in the nix store.


I've never understood why Nix cannot simply rewrite those paths to "look like" the canonical ones when calculating hashes. Surely a pre-processing rule could transform "/home/me/nix" to "/nix", based on my Nix configuration.


Again, purity. If the paths were normalized like that, changing the inputs (the derivation with all its referenced paths) would still result in the same hash output, and someone substituting your package from a binary cache would receive a package that doesn't work on their system, yet matches the expected hash.


I don't want to publish packages to a binary cache. I just want to install a package somewhere other than /nix.


> - You can't print anything because the language is lazy. Forcing any values to print them can and will result in random operations happening on your store. You can never know which values are safe to inspect. This kills debugging.

`lib.deepSeq` can be used to fully evaluate a thunk. `lib.trace` can be used to emit a log each time a value gets evaluated.
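
A toy sketch of what that looks like in practice (the file and attribute names here are mine): forcing a value deeply with `deepSeq`, then emitting a trace message once it has been forced. Evaluating it with `nix-instantiate --eval --strict debug.nix` only evaluates; it doesn't build anything.

  # debug.nix: force a value deeply, then log that it was evaluated
  let
    cfg = { port = 8080; paths = [ "/tmp" ]; };
  in
    builtins.deepSeq cfg (builtins.trace "cfg fully evaluated" cfg)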

> - Everything in Nix is recursive, datastructures contain copies of themselves as hacks to avoid building proper APIs. So again, you can't print anything, even if you're sure that printing the object won't do something crazy to your store.

derivations are just a dictionary of information, passed to the `derivation` function, which communicates to nix that it should be built/realised. There is a minimal API for a derivation, which is just `name`, `system`, and `builder` in the attr set. In general, the builder will refer to a script which may have additional levers. For example, `stdenv.mkDerivation` also expects a src.
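
To make that concrete, a raw derivation can be as small as something like this (a sketch; the attribute values are my own, and building it just writes "hello" to $out):

  # minimal raw derivation, for illustration only
  with import <nixpkgs> { };
  derivation {
    name = "hello-raw";
    system = builtins.currentSystem;
    builder = "${bash}/bin/bash";
    args = [ "-c" "echo hello > $out" ];
  }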

> - There are almost no APIs and no uniformity in Nix. Packages are given total freedom to do anything. The Python ecosystem works totally differently from the Haskell one which works totally differently from the C++ one. It's insane. They use different datastructures, functions, etc. To do the same thing.

"Just because two things are similar, doesn't mean they're the same". I wouldn't expect building openssl to look similar to building ripgrep, a python package, or a node package. Each domain has it's own oddities. Python for example expects packages to be installed at `${prefix}/lib/python-${python.majorVersion}.${python.minorVersion}/site-package`

> - The Nix language has impossible error messages. You will get stuck. You can't view values and you can't get error messages. There's no moving forward from that unless you ask someone.

This has significantly improved with nix 2.4+. `--show-trace` will now show you an entire stack trace with files and line numbers.

> - Laziness in Nix is different from laziness in Haskell. In Haskell, it's mostly about performance improvements and some cool tricks here and there. In Nix, laziness fundamentally means something: you build up packages and you force them to install them.

For haskell, I think it was originally a side-effect of how they implemented the language. For nixpkgs, this is still important because nixpkgs is just a large dictionary (attr set). However, doing something like `nix-shell -p cargo` will avoid having to evaluate all of nixpkgs, just what I need for cargo.

Also, nix doesn't build a package when it evaluates it. Building is done as part of realization (which many commands do implicitly).

https://book.divnix.com/ch04-02-realise-a-derivation.html

> - You cannot install Nix in your home account without root permissions (yes, there are hacks, but they break terribly). So Nix is actually less isolated and less portable than something like Anaconda!

This is because `/nix` needs to exist, and can't be a symlink. sudo only needs to be used once; user installs can be performed after that.

> - The commandline experience is terrible. Nothing makes any sense. Not the names of the tools. Not their arguments. Why sometimes something is a binary and other times it's a mode of another tool, etc.

The `nix-*` commands are a carry-over from the PhD days of nix, where they reflected the underlying nix machinery closely. The nix 2.0 `nix <cmd>` cli tries to better reflect "user scenarios", but it's still kind of a WIP. So I agree, the cli is probably one of the weakest aspects of nix right now.

> - In exchange for making some hard things easy, Nix makes a lot of easy things very hard. Sure, it will manage an isolated environment. But, now you want a package from pip that isn't in Nix? There is literally no way for you to figure out how to do this, you need to find a tutorial online, hope it's up to date enough to work, and follow it step by step. And.. there's a good chance you misunderstood what the tutorial was doing and won't have the correct environment at the end.

Depends on the toolchain. Rust and Go builds are almost trivial to support now. Python and other ecosystems which rely on a lot of impure behavior will have the most impedance mismatch with nix.

Mixing nix python packages with venv is described here: https://nixos.org/manual/nixpkgs/stable/#how-to-consume-pyth...
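
The gist of that approach, as I understand it, is a shell roughly like this (the package choices are illustrative):

  # shell.nix: a nix-provided python with some packages; pip/venv can then be
  # layered on top inside the shell for anything not in nixpkgs
  with import <nixpkgs> { };
  mkShell {
    buildInputs = [ (python3.withPackages (ps: [ ps.requests ])) ];
  }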


[flagged]


Impressive trolling.

> deepSeq is basically useless because of how almost every datastructure is circular

Circular references will cause "infinite recursion", and these will cause evaluation errors. So valid nix code will not contain circular references. Compositions of derivations create Merkle DAGs. There are also fixed points, but deepSeq can still handle those scenarios.

> trace is almost useless because you can never know if a statement is a value or a computation.

Use of trace creates a thunk, so it will always be a thunk.

> Derivations actually put something in your store when you evaluate them.

No. Instantiation will create store derivations, realizations will perform builds, and successful builds create store paths. `nix-instantiate '<nixpkgs>' --eval -A hello.version` evaluates the expression but doesn't produce a store derivation.
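
To spell out the distinction with the classic CLI (`hello` is just an example attribute):

  # evaluation only: nothing is added to the store
  nix-instantiate '<nixpkgs>' --eval -A hello.version

  # instantiation: writes hello's .drv file to the store, builds nothing
  nix-instantiate '<nixpkgs>' -A hello

  # realisation: actually builds (or substitutes) the output
  nix-store --realise $(nix-instantiate '<nixpkgs>' -A hello)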

> Except that I don't need to know this for apt or any other package manager.

You're also probably not creating your own .deb's. You would seek other options.

> Nix is the only one that forces me to learn endless minutia about every single language.

Eventually you will have to learn your problem domain. Just because you do python dev on ubuntu doesn't mean you won't eventually need to know how python finds modules.

> Haha. HAHA. HAHHAHAHAHHA. Oh sorry. That wasn't laughter. That was me trying to hide my tears.

Hmm, I can't see your face. So don't worry about saving it.

> show-trace is about as useless of an addition as I can imagine.

Pretty useful for me, but I also use it for work and in my free time.

> Nix totally creates a file on your disk in this case. Yes, it doesn't "build" the package as in, it doesn't compile it or fetch it. But it actually does do something on your machine.

Sure?

> The terminology around nix is just disastrously bad.

Because the terms don't align exactly with other existing terms. Haskell has similar issues, where the terminology is foreign for many, but accurate.

> And conda works perfectly without this hack.

Well, conda tries to solve different things. It's definitely not a generic package manager. For supporting python use cases, conda does what it does.

Also, using escalated privileges to install packages is the norm for almost all other package managers. user-level installation is definitely the exception.

> I get it, nix people love nix and love to make excuses for its horrible failures. But seriously, other package managers do this beautifully. Can't we just sometimes accept reality?

Do what? FHS? How many borked distribution upgrades have occurred because of FHS incoherence?

> And that was 20 years ago. The 2.0 CLI is still a mess.

`nix-*` commands make sense in the context of a PhD thesis. The `nix <cmd>` 2.0 cli is definitely more ergonomic. Most package manager cli's until about 5 years ago were also pretty bad. I still remember having to use dpkg on ubuntu to fix some issues.

> Oh boy... how I wish this was true!

It is true, that's why I said it.

> Haskell is such a mess in Nix that there are two entire incompatible toolchains.

nixpkgs' haskellPackages is reflective of all hackage packages, and works well for consumption. If you're alluding to haskell.nix, that's a bit more complex and magical.

> Both of which are incredibly hard to use by the way.

This makes sense that it's hard for you.

> And Python packages which are pure are also a disaster!

The python package ecosystem is a hot mess for all distros [0] [1]. The "I can selectively choose small version ranges of dependencies to satisfy my singular use case" approach doesn't integrate well with distros trying to present a coherent package set.

[0]: https://drewdevault.com/2021/11/16/Python-stop-screwing-dist... [1]: https://blogs.gentoo.org/mgorny/2021/11/07/the-future-of-pyt...


I didn't go all-in with nixos -- and I'm thankful I didn't.

I simply tried using it for local development. I thought that I could replace the various language version managers with nix. I spent weeks working with nix to understand the language, how to use it, and reading as much documentation/blogs as possible.

One really frustrating thing about nix I noticed early on is the lack of support for older language versions. For example, if you want to use something like ruby2.2 you'll get a notice saying that the package is insecure and it won't allow you to install it. They mention an environment variable you can set to get around it but even that didn't work properly. Is my use-case not relevant to nix? Even the simplest requests are met with unavoidable blockers. Frustrating.
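
For reference, the escape hatches I think they're describing look roughly like this (the attribute and version strings are illustrative, and evidently they didn't work in this case):

  # illustrative only; exact attribute/version strings vary by nixpkgs revision
  NIXPKGS_ALLOW_INSECURE=1 nix-shell -p ruby_2_2

  # or, in configuration.nix:
  nixpkgs.config.permittedInsecurePackages = [ "ruby-2.2.10" ];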

Something simple like installing a specific `node` version, corresponding `yarn`, and have it work just as well as `nvm` proved to be a challenge.

I managed to get it working relatively well until I ran into weird issues with native bindings on a project, issues that I have literally never seen before. Problems with `dlopen` and the like just made me frustrated enough to post this:

https://twitter.com/neurosnap/status/1485427740610375680?s=2...

I'm not even talking about the final stage of nix where all my npm packages are shasum'd and checked into nix; I'm simply installing two packages: node and yarn. Something this simple didn't work for every project I had. This is all not to mention that I have to add a `flake.nix` and `flake.lock` to every project, or create some nix repo where I put all these files.

I love the idea of nix and nixos and really wanted to get it to work. Ultimately it's just not there yet and I'm not sure it ever will be.

Right now, the time you save by using nix is completely lost when you traverse down the rabbit hole of outdated documentation and never ending bugs with the actual system.


I am all-in on nixos. I've also experienced those problems, but I for one am glad that those don't work. Let me explain.

My use-case for Nix is to make my build system so reproducible that, if I get back to the project in five years, it should still compile, and if I give it to someone else, they should be able to make it work without any issue. Like the install steps for them should be as simple as "Install nix, then run `nix build .`". If that doesn't work, then I've failed.
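
For concreteness, a sketch of what such an entry point can look like (the nixpkgs pin and `hello` are placeholders for the real project):

  # flake.nix
  {
    inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-21.11";
    outputs = { self, nixpkgs }: {
      defaultPackage.x86_64-linux =
        nixpkgs.legacyPackages.x86_64-linux.hello;
    };
  }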

About your issue: it's unclear what it is given you give so little information, but I often run into a similar problem of native bindings in NodeJS and Python failing at runtime. I've found that very often, those native bindings are distributed as prebuilt blobs. This obviously doesn't work on NixOS, because those prebuilt blobs will depend on paths that aren't present in NixOS. Here's the thing: That's not a NixOS bug. It's a bug with that package, which should check whether the prebuilt blob will work on your distro. Most don't though, and just assume the user runs Ubuntu or something.

Here's the thing: That's not a NixOS exclusive issue - NixOS just makes it much, much more visible. I sometimes (though more rarely) had similar problems on old Debians or very new ArchLinux, where I'd have, say, a different openssl version and everything would break. Anyone that has a distro that deviates slightly from how the blob was generated will run into the same issues.

At least with nix, once I have a recipe that works, I can be more or less guaranteed that the resulting flake will work reliably for everyone, everywhere. The only thing that could break it is if the source of the packages goes down. And that is great. That's my number one use-case. And for this, Nix is absolutely prime-time ready, and in fact it's the only package manager that has such high guarantees around.


> It's a bug with that package, which should check whether the prebuilt blob will work on your distro.

I used nix on arch. I tried using nix via flakes and it didn't work. Then I removed flake from the project, loaded `nvm` and ran it again: it works.

For both `nix` and `nvm` the underlying OS was the same. I buy your argument but at the end of the day, I have to get work done. I also feel like I need to mention that I have already sunk dozens of hours getting nix to work properly, this was really the last straw for me. Nix debugging fatigue got the best of me.

Again I need to mention, my flake file had two packages: node and yarn -- and this didn't work for a project with native bindings.


Yes, for what it's worth I don't mean to diminish your experience. Nix definitely requires pouring a lot of time to get to a working state, is very under-documented, has terrible error messages, will cause errors that don't exist elsewhere, and has no clear "best practice" guide on how to use it. Those are all super annoying pain-points and I do hope they'll get better over time.

At the end of the day, it's a bit of a balancing act. For me, the up-front work of getting a nix build up was definitely worth it, as it allowed my collaborators to get up to speed on complex projects with complex build systems quickly, and gave me a very simple way to reproduce old binaries when I need to bisect bugs. But it definitely took a huge up-front investment.

If all you're after is a way to pin your toolchains, I echo the recommendation of asdf made in a sibling comment. I use it on simpler projects where I just need to pin a language version (and let the language tooling pin the rest of the dependency tree - package.lock-style). It will often be enough, and is much easier to get started with! The only thing that won't pin will be system dependencies (like openssl and such), but that is often an acceptable tradeoff.


I fully agree, but I wish there were an escape hatch when needed. It is especially frustrating for python developers.


This is a maximalist position and an admirable goal. As far as I can tell though, this forces Nix (or the developer) to get into the business of re-packaging entire ecosystems.


I've recently discovered `asdf` (https://github.com/asdf-vm/asdf). It's not as technically clever as nix, but it does allow you to manage versions of most language toolchains with one tool.
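
Typical usage, as far as I've seen (the version number is just an example):

  asdf plugin add nodejs
  asdf install nodejs 16.13.2
  asdf local nodejs 16.13.2   # records the pin in the project's .tool-versions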


This is a great tool, I've used it for years. It's not really meant for production systems, but to manage multiple versions of Python, Node, Elixir, Erlang, Ruby etc. it works great.


I used Nix and NixOS for more than a year as my daily driver. I've contributed to Nixpkgs. I agree with the author on most if not all of their points. Here's some advice: If you're interested in Nix, take steps not leaps. Use the Nix package manager. It's a great addition to distros like Debian. Then maybe use Home Manager (even though I think Home Manager is way over-hyped, see below). Then write a bunch of derivations. Then go learn the ins and outs of systemd. THEN go play with NixOS. If you skip any steps before the NixOS part, you will get lost in the weeds. The happy path of NixOS is very happy. The other paths are very challenging. You should really know a lot about both Nix and more traditional Linux packaging methods before you go all in on NixOS. And honestly, learning more traditional Linux utilities is a far better use of your time than learning the complex, ever-changing Nix landscape. It's a super cool project, I'm so glad it exists. But it should really be viewed as a _research_ project. Many good things will come out of it. Hopefully one of those things (like the author mentions) is a Nix clone written in a common language with a much better UX. Until then, try to avoid the urge to play with shiny things and stick to just using the Nix package manager and more traditional tooling.

And about Home Manager, the reason why I think it's over-hyped is because it provides a declarative approach to something that was... already declarative. Your $XDG_CONFIG directory does not need a leaky Nix abstraction on top of it. It actually just makes everything 10x harder. The Home Manager Nix abstractions are terrible but people seem to love it? Why would I write my i3 config in Nix?? An i3 config literally cannot get any simpler. Why make it more complicated for no benefit? I also don't need a declarative list of all installed packages. I'd rather just use `nix-env` personally.


> And about Home Manager, the reason why I think it's over-hyped is because it provides a declarative approach to something that was... already declarative. Your $XDG_CONFIG directory does not need a leaky Nix abstraction on top of it

I don't really agree. I spent about 30 minutes getting my home-manager config to support an M1 mac [0]. I don't really want to think how long it would take me to look up all of the homebrew package names and learn a new package manager. Instead, I just pushed all of the linux-specific items into their own bin, added a little more logic, and I was able to get back to comfortable terminal + git + vim settings.

Also, nix exposes congruent configuration management[1]. The state of my system is an exact reflection of the configuration. With other tools like ansible, vagrant, etc., I would get reconciliation configuration, which is close on initial install, but configuration drift is an ever-present concern; not to mention that large recipes and playbooks can take a very long time to run. Going the homebrew route would be divergent configuration; it would be very hard for me to get back to a certain configuration. With nix (and by extension home-manager), I can version control the configuration, improve it, roll it back, w/e I want.

> Why would I write my i3 config in Nix??

You do get some type checking, although the iteration time would probably be similar. You could also just do `xsession.windowManager.i3.extraConfig = builtins.readFile ./i3.config;` if you really just wanted to wholesale read in your existing profile.

> I'd rather just use `nix-env` personally.

nix-env is a double-edged sword. You can roll back (somewhat, I believe it's just a stack of all changes), which is an improvement. However, nix-env only retains the "derivation name" to try to manage upgrades. So for packages like python38, if you try to upgrade it, it will determine that `python-3.11-a3` is the most up-to-date package. I try to discourage using nix-env.

[0]: https://github.com/jonringer/nixpkgs-config/commit/37ddfefa1... [1]: https://blog.flyingcircus.io/2016/05/06/thoughts-on-systems-...


WRT Home Manager, that makes sense, and portability is definitely a cool benefit of Home Manager. But keeping all of your config for multiple workstations in one repo has drawbacks too. I found myself frustrated when I wanted to install something but had forgotten to push changes from another computer, so I needed to stash or rebase later on. Just to install a package! I only have one workstation and a small number of servers, and I tried running NixOS on everything. It worked well (I still have multiple servers running NixOS actually) but for me, the benefits just weren't worth the overhead of living full time in Nix land. Now I use Debian stable for new servers and I prefer it. But yeah, lots of personal preference here. And WRT nix-env, it seems troubling to me that a fundamental tool of the Nix CLI is something that you recommend people stay away from. I think you may be right about that recommendation, but I think it shows that Nix as a product is far off from being a "pick up and run" tool for average developers. I really hope that it will somehow get to that point but, like the author says, I don't think it will be Nix itself, because it has burrowed itself into a bespoke, complicated UX that has a huge learning curve. I like simple things with elegant UX so that's where I'm coming from. There's just so many ways to use Nix, like, TOO many ways. But I'm glad that they're pioneering this space.


The major drawback for me with the nix language is that at any location in the code I don't know what "shape" of data it expects. The runtime does tell me, but I wish I could just go and look it up. Having a typed language would fix this.

Other than that, I've been using nixos for ~2 years to configure my dev desktops: using the same config everywhere makes the machines identical and is thus a great relief; I mostly don't carry a laptop anymore.


I generally use `nix repl` for this.

I have an example of myself looking at what my configuration.nix renders to while debugging a module I was adding to nixpkgs/NixOS.

https://youtu.be/bkDYmvKINm8?t=1949
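
For anyone curious, on a NixOS machine that kind of inspection looks roughly like this (the attribute paths are just examples):

  $ nix repl '<nixpkgs/nixos>'
  nix-repl> config.networking.hostName
  nix-repl> config.services.openssh.enable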


I wish someone wrote a GUI for Nix that had:

- System and home manager support

- Multi-machine and multi-user support by following the practice of splitting the config up by roles / intent

- An "about:config"-like experience for well-defined fields

- Integration with git hosting for easy backup/restore

- Built-in search of other Nix config repos so you can see how other people did things

(I was originally hoping to but my attention has been pulled in other directions since I keep giving up on adopting Nix)


There isn't a GUI (although nix-gui [1] is in progress) but there is a very good and decently mature flake template called Devos [2] for accomplishing most of what you ask. It basically lets you write very simple profiles to group configuration settings together and then you can assign those profiles to machines or users.

Things could still be improved but I found Devos to be good enough that NixOS is usable for personal and even some production use.

Once I get around to it, I'll move/mirror my configs to Github which should serve as good examples.

[1] https://github.com/nix-gui/nix-gui [2] https://github.com/divnix/devos


> It also means that it's impossible to statically know what other packages a given package might depend on. Currently, the way this is implemented is essentially grepping a package for /nix/store/ to try to figure out what the dependencies are, which is obviously... not great.

That’s not quite how it works. One of the features of the Nix language is that the interpreter associates an invisible “context” with each string to carry dependency information. When you coerce a derivation to a string (in order to use its path in another derivation), the context remembers the build dependency on that derivation. Any derivation built using that string will inherit build dependencies from its context.

https://nixos.org/manual/nixpkgs/stable/#function-library-li...

https://shealevy.com/blog/2018/08/05/understanding-nixs-stri...
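
A small sketch of the string-context mechanism in action (using `writeShellScript` from nixpkgs; the script itself is arbitrary):

  with import <nixpkgs> { };
  writeShellScript "greet" ''
    ${hello}/bin/hello
  ''
  # interpolating `hello` carries its context along, so the resulting script's
  # closure includes the hello package without any manual dependency declaration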

It is true that runtime dependencies are computed by string-searching the build output, but only for paths that have already been determined to be part of the build dependency closure.

https://nixos.org/guides/nix-pills/automatic-runtime-depende...

Since store paths have a fixed format with a cryptographic hash, this works plenty well enough in practice—as evidenced by the fact that Nixpkgs exists and has more packages than any other Linux distribution.

https://repology.org/repositories/statistics/total


Ranting about Nix being hard is like going to Russia and complaining that the local language doesn't sound like English. Like many other things worth your time, it's something you learn through deliberate practice and focus.


> it's something you learn through deliberate practice and focus.

That sounds like a massive barrier to entry and more effort than solving the problems I currently have.


> That sounds like a massive barrier to entry and more effort than solving the problems I currently have.

It's a lot of effort, and in the end you're left with a different, more interesting set of problems.

I don't regret spending that effort, but it needs to be easier if we want more users. I don't think there's any answer besides lots more effort from the nixos dev side, however.


There are many things that require tremendous practice and study that aren't worth your time as well.

You have to justify it somehow. And I just don't see the value prop in Nix yet. I would compare it as such: Nix is to Docker what Google+ was to Facebook. Maybe it's superior. But the benefits are so marginal that the costs of switching will prohibit most from giving it a shot. Most people want reproducible builds and easy configuration. If they are getting that from Docker, why switch?

At the distro level, average people are going to benefit by having reproducible builds done upstream. Debian does this, IIRC. As does F-Droid and a few others. If you trust Debian, then you implicitly trust the packages Debian installs.

NixOS doesn't even solve the real aspect of Linux that I find terrifying: security. Linux is a blob of overlapping and bewildering security mechanisms and tools. You have groups, permissions, SELinux, capabilities. The whole thing leaks like a sieve.


I am a NixOS user, but am interested in Guix. Based on a cursory look I have some questions:

- How big is Guix on GNU? Does it throw wrenches in your way if you do anything "unfree"? How easy is it to install the nvidia drivers?

- How is the package ecosystem? 20,000 official packages seems a little low? Are there community packages? How easy is it to create your own packages? Why is the Neovim package only at version 0.4.4 [1]? Isn't 0.5 officially released?

- How good is the documentation?

- Is there a project roadmap? Are there any issues preventing adoption?

[1]: https://guix.gnu.org/en/packages/neovim-0.4.4/


As someone who's used Guix:

They only package linux-libre, so if your device needs proprietary wifi, you're out of luck. Also they package icecat instead of firefox which is kind of cool.

I tried to package slash'em a few years ago, and found it very difficult. The documentation is gigantic and it's hard to find what you're looking for. I found folks on IRC very helpful.


>They only package linux-libre, so if your device needs proprietary wifi, you're out of luck.

I wouldn't say you're out of luck in that case. If you need upstream Linux, nonguix has it: https://gitlab.com/nonguix/nonguix


Oh yeah, nonguix. What a mess.

An unsupported pile of packages that break regularly. And if you dare ask a question about it in any official forum you're kicked out.

Nix has its issues, but guix is ideologically radical in an extremist "We don't care if your machine is unusable, enjoy your freedom" kind of way.


Wow. Not true.

You're not kicked out if you "dare" (how brave!) ask a question about it. You're welcome to chat about it on #nonguix. Is it really that hard to accept that #guix is not #nonguix and that we don't discuss proprietary software on the main channels?


> You're not kicked out if you "dare" (how brave!) ask a question about it. You're welcome to chat about it on #nonguix. Is it really that hard to accept that #guix is not #nonguix and that we don't discuss proprietary software on the main channels?

So... if I go to the official channel to discuss guix and say the forbidden word "CUDA", I get kicked off? You wrote a sentence in a tone that implies it disagrees with what I said, but it just restates my point.

Guix puts ideology over the welfare of its users.


"CUDA" is not a forbidden word. Where does that idea come from?

Here: CUDA, CUDA, CUDA!

If you go to the official channels and ask about CUDA people like me will likely tell you that CUDA is proprietary software, so we don't cater to it in Guix. If you then acted all offended and angry that would be the end of my interactions with you; otherwise I'd tell you the right venue to ask for help with CUDA without starting a discussion of proprietary software on our main channels.

If that's not good enough for you then we're working on incompatible assumptions of how communities work. If you think you're entitled to discuss whatever you want on the community's channels then you're going to have a bad time. Life must be hard demanding of other people to humor you when they really don't want to.

> Guix puts ideology over the welfare of its users.

No.

light_hue puts ideology over the welfare of the community.

PS: CUDA!


You acknowledge that I'm not allowed to talk about CUDA in the guix community but then are turning it into some sort of game?

This is the childish and dismissive attitude of the guix community.

I'm not replying anymore.


There are two kinds of projects:

Those who care about ideology

And those that want things to actually work and get stuff done.

The second group is the one that wins all of the users. Of course, there's room for both types of projects in the world. But don't be surprised when people who don't want to waste time on a political discussion that doesn't affect their lives in any way don't want to put up with it.


> But don't be surprised when people who don't want to waste time on a political discussion that doesn't effect their lives in any way don't want to put up with it.

There are no political discussions about the merits of proprietary software on #guix or the Guix mailing lists. It's just not the right venue for that.

So, yeah, I wouldn't want to waste my time on discussions like that, and we effectively don't.

> And those that want things to actually work and get stuff done.

This is a false dichotomy, but if you don't see it this way I'm not going to be able to convince you otherwise.


As with Debian, wifi devices are about the only thing you might have "troubles" with when opting for Linux-libre:

https://guix.gnu.org/manual/en/html_node/Hardware-Considerat...


Yes, the low number of readily available packages is the primary reason I switched from Guix to Nix.


The default channel of Guix has more than 21,000 packages. Is that what counts as a "low number" these days?


People don't care about 21,000 packages; they care about, say, 5. An environment which has those 5 has excellent software availability for that user. One that has only 3 of them has poor availability for that user.

Debian has far more packages, but pure numbers are hard to compare when packages are broken up in different fashions, and missing 1 package that 30% of users desire is more problematic than missing 1000 packages that collectively 3% of users desire. More so if that 1 package is required for the user's computer to work properly, or indeed at all.

For a substantial number of users Guix has insufficient software availability, and one notes, for example, that Fedora, which is very free-software focused, doesn't forbid you from discussing nonfree software in its channel. Elsewhere you tout that RMS wasn't allowed to remove clang from the package collection as if it were proof of value. Other distributions don't have benevolent dictators trying to remove useful free software for ideological reasons.

We are all habituated to what we perceive as normal. Neither situation is actually normal, and it is by itself sufficient reason to ignore an otherwise superior product like Guix; it also does a disservice to the cause of free software by decreasing the number of people willing to put up with it. I'd go so far as to say that without a more liberal attitude from other players, the entire free software movement would already be dead. A free-software-only default is perfectly fine, as is acknowledging practical considerations to drive interest in free software, in order to provide a larger base of people from which to draw potential contributors and continue to strengthen the base.

If Guix is Debian, then someone ought to just make an Ubuntu that largely ignores ideology and focuses on the practical.


> Elsewhere you tout that RMS wasn't allowed to remove clang from the package collection as if it were proof of value. Other distributions don't have benevolent dictators trying to remove useful free software for ideological reasons.

My point was that neither does Guix. But you do you.


Well, I need CUDA so...

I'm aware of nonguix, but it seems to break a lot more than the same packages in nixpkgs.


For me coming from Arch and AUR, yes.


And proprietary drivers.


The "nonguix" repo has got you covered for all your vanilla Linux needs.


No, it is an unofficial mess that breaks very often.


For me, Nix is one of those systems where I just had to spend a long time with it to understand it. It wasn't something I could master within a few weeks. After about a year of continued use and struggle, I've finally gotten to the point that I'm comfortable going off the beaten path and even contributing to nixpkgs.


Yeah, I agree here. I used to find Nix somewhat unintuitive but I have now managed a few non-trivial projects. One was to package Xilinx ISE as a flake [0], and another to build an immutable Linux image using busybox and runit [1]. For the latter I still haven't quite gotten where I want to be, but that's because I realised I needed dbus for avahi, and dbus is just so ridiculously complex.

[0] - https://github.com/benpye/nix-fpga-tools/

[1] - https://github.com/benpye/nix-embedded


As a NixOS user, I like this article a lot. But one thing stood out to me:

> It also means that it's impossible to statically know what other packages a given package might depend on. Currently, the way this is implemented is essentially grepping a package for /nix/store/ to try to figure out what the dependencies are, which is obviously... not great.

I'm not sure what the author is talking about here. He says it's impossible to find static dependencies, but then describes how it's implemented.

I'm also not sure why the author thinks this solution is bad? Maybe because it could accidentally pull in more runtime dependencies than you were expecting?


Just like any other package manager, this information can be queried without grep.

  nix-store --query --references /run/current-system
  # or with new CLI:
  nix path-info --recursive /run/current-system

This information is managed and tracked for you. And while I understand the desire to have "normal" FHS paths, the hashing and the injection of the hashes (think "tags") is exactly the mechanism that makes dependency tracking possible to do automatically. This is similar to the trick by log aggregation tools that assign each source event a UUID to make it possible to search and find it in all sorts of unexpected places. Otherwise how would you know what "/usr/bin/bash" really means if you came across it in an error log? What version was it at THAT EXACT MOMENT? That hash in the path is the tag that enables all of it to work.


Right, after the package is built the dependencies are stored with the package metadata like any other distro.


There is also a nix store query to find this information. I’m on my phone so I can’t get it right now, but I’ll try to remember to grab it later. You can query all the dependencies of any derivation… this is how I verify the full suite of software that nix-direnv installs for local projects, by querying the store for its cached derivation’s deps.


I think “statically” means without executing the nix language stuff


Oh, I see, thank you for the clarification. I figure it’s just as hard to get a list of transitive package dependencies without apt or whatever as it is without nix, though. And there’s an argument to be made that a pure function on a derivation can be considered static. But I don’t disagree with any of the general points: especially that nix the language, or more specifically nix in practice as it’s used in nixpkgs, is very hard to learn.


I've been using NixOS for years and this article made me reflect on the nix language a bit. The nix language is weird looking, particularly if you don't have haskell/ml experience. It also has some unusual semantics that make for punctuation being necessary in places it otherwise might not.

However, I think what trips people up is how foreign the semantics for the nixpkgs standard are. If you are spending all of your time confused and there's this thing that looks weird and confusing, which one are you going to blame?

I will say that a dynamically typed lazy language is its own special hell for debugging though. The only sane way to find an error is to:

1. Start with something that is error-free

2. Make a single small change and see if it is still error free

3. repeat until you get an error; that last change is where the error was. Unless of course you just caused something else to be evaluated that wasn't before. Then that something else is where the error is.


> But at the same time, I can patch the Python interpreter and then have some software running on my system actually use the patched version, since all of this stuff is configured through the same configuration system.

If I want to do that, then I have to rebuild the entire system so as to use my patched version. This is because the NixOS "dependencies" are basically expressed as hashes of binaries.

That is for me the single biggest deal-killer of NixOS, Guix, and while we're at it -- static linking.


This can be avoided by not overriding the package attribute, but instead creating a new one and then only using that as needed.

What you describe as a deal-killer is just purity and safety. Updating a build input gives no guarantee that the build output will be the same or even work. Sure, as smart humans we can say “I know this change does not impact X, so it’s fine”, but NixOS is based partly on the claim that we do not do that reliably or consistently, and tries to save us from ourselves. Yes, it means more rebuilding, but it also means less debugging, confusion, etc., when things do go awry.


> What you describe as a deal-killer is just purity and safety.

It's not really "pure" since you are packaging software with side-effects to begin with. Even if you use the same text bit by bit, there's no guarantee that the output will be the same or even that it will work. You are still relying on some human claiming that it will.

So the advantage is not really that big in practice, and the disadvantage means rebuilding the world on every low-level package change. Even Gentoo fares better in that regard.

And yes, I am aware that the deal-killer is at the core of these approaches, which is why I don't take them seriously.


I think you might be vastly understating the advantage here. I've been the guy who shipped software that "worked on my machine" but didn't work in production, because on my machine there was a system-installed dependency that got used but didn't exist in production, and the only way to discover that was to ship to production. Nix makes that a far less likely scenario. It gets a lot of things wrong in how it achieves that, but it fixes a real and very substantive problem.


Sure, if you have a program that outputs a random number, NixOS doesn’t make it suddenly always output the same number (would one even want that?). Nix aims to remove side-effects from the build system (and some aspects of the execution environment), not from within the program itself. Whether the program itself is pure is up to it, not NixOS. But the point is that NixOS eliminates whole classes of why the output may be different.


NixOS and Guix do _not_ require static linking. They go even one step further and allow many versions of a dynamic object to exist at the same time, and each piece of software on your system chooses which version of the object to link against.


You cannot change the version used by a given binary unless you rebuild that binary, effectively making it similar to static linking (you still get sharing of DSO files & pages, though). i.e. even if I have two versions of some low-level library installed, installed packages that depend on it still hardcode which one of the two low-level libraries to use, and if I want to switch the system from one to the other, I have to rebuild it (or at least all depending packages).

Suppose I make an overlay of libc to add some functionality that I am debugging which changes the libc binary, albeit not the ABI. Can I still reuse the same packages? Can I still reuse someone else's binary cache ? Basically, can I do without having to rebuild the entire system (save for libc) ?


It is correct to say that dynamic linking with rebuild-on-input-dependency-change (like NixOS does) is similar to static linking when it comes to rebuild behaviour.

However, also remember that changing dynamic libraries behind an executable's back is a concept that only makes sense in the presence of ABI compatibility. This is predominantly a C concept (or at least popular in the C world), and much less so for other linked programming languages like C++, Haskell, Go, and so on. Thus Nix, being a general-purpose build system, takes the general route here, and builds can also use their dependencies during the build step (e.g. a packaged program's autoconf suite might check the version or a symbol provided by a library), which requires full rebuilds for reproducibility.

(Nix is working on content-addressed instead of input-hash-addressed builds, which might open the door for avoiding many rebuilds that do not affect build _output_.)

That said, it might still be quite easy to achieve what you want:

* If you want to iterate on a C library that's early in the dependency graph, in an ABI-compatible fashion, you could use LD_LIBRARY_PATH or LD_PRELOAD on it.

* If you want to override libc in an ABI-compatible way, you can use `patchelf` with its `--set-interpreter` and `--set-rpath` flags to replace the libc an executable is linked against after the fact. For example, you can make an `.override` of a nix derivation that just `cp -r`'s the existing files, and then calls `patchelf` on them.

I have used both these methods to work around glibc bugs that I patched out, avoiding even to have to recompile my own software at the top of the build dependency tree.
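
For concreteness, the patchelf route looks roughly like this (the store path is a placeholder and the binary name is hypothetical):

  patchelf \
    --set-interpreter <new-glibc-store-path>/lib/ld-linux-x86-64.so.2 \
    --set-rpath <new-glibc-store-path>/lib \
    ./result/bin/myprogram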

Some more relevant links about replacing glibc specifically:

* https://github.com/NixOS/nixpkgs/issues/50329

* https://github.com/NixOS/nixpkgs/issues/129595

If you want to replace things system-wide instead of for your own software, the `system.replaceRuntimeDependencies` mentioned by a sibling comment might be a good choice.


Yes, there is system.replaceRuntimeDependencies [1] that does what you ask: It replaces dependencies recursively in derivation outputs through binary patching, creating _new_ store paths (so immutability is preserved). It does so by replacing all occurrences of the store paths of the original dependency (e.g., libc) with the new ones [2].

[1] https://search.nixos.org/options?channel=21.11&show=system.r...

[2] https://github.com/NixOS/nixpkgs/blob/fad04722fc3d692e3511e5...
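
A sketch of the option's shape, assuming a locally patched glibc (the patch file name is hypothetical):

  # in configuration.nix (pkgs is in scope via the module arguments)
  system.replaceRuntimeDependencies = [
    {
      original = pkgs.glibc;
      replacement = pkgs.glibc.overrideAttrs (old: {
        patches = (old.patches or [ ]) ++ [ ./my-glibc-fix.patch ];
      });
    }
  ];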


Grafts in Guix support this use case: you know you're providing an ABI-compatible package replacement and don't want to rebuild everything that depends on it:

https://guix.gnu.org/en/blog/2020/grafts-continued/

We use that for security updates, but also in other situations where we know we can take advantage of it such as the new `--tune` package transformation option, which tunes a package for a specific CPU:

https://hpc.guix.info/blog/2022/01/tuning-packages-for-a-cpu...

Similarly, as a user, you can "graft" a replacement straight from the command line using `--with-graft`:

https://guix.gnu.org/manual/devel/en/html_node/Package-Trans...
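
Roughly like this (the package names are illustrative):

  guix build wget --with-graft=openssl=openssl@3.0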


Guix has graft for something similar to this: https://guix.gnu.org/manual/devel/en/html_node/Security-Upda...

It's one of the points where it differs from Nix.


Is this different from Nix's `system.replaceRuntimeDependencies` posted in the sibling comment?


I was hoping to get an answer to your question as well, but I've only used Guix, so I can't really tell.

Guix grafts are used to distribute updates, and they're quite easy to use, that's all I can say.


If you are fine with sidestepping the system as intended then nothing prevents you from forcefully replacing the dynamically-linked, shared libc in the nix store.


I don't understand what's wrong with the Nix language. Sure, the space-delimited lists can get a little annoying, but other than that it's minimalistic enough to use as a configuration language yet complex enough to do more complicated things like overriding derivations. Maybe it's because I learnt Haskell before Nix so the syntax is very familiar to me, whereas I see how it could be a learning curve for other users.

What I find the most annoying is the lack of documentation for some of the nixpkgs functions. Some of the functions provided by `(import <nixpkgs> {}).lib` appear to be the same as the ones built in to the Nix language, and there doesn't seem to be any clear guidance on when to use which version. I've also had to look at the source code to find out the difference between writeTextFile, writeText, writeTextDir, writeScript, and writeScriptBin. [The docs][1] explain writeTextFile, but the only documentation for the rest is ‘Many more commands wrap writeTextFile including writeText, writeTextDir, writeScript, and writeScriptBin. These are convenience functions over writeTextFile.’.
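
For what it's worth, what I eventually pieced together from the source boils down to this (the contents are illustrative):

  with import <nixpkgs> { };
  {
    a = writeText "foo.conf" "key = value";            # the store path *is* the file
    b = writeTextDir "etc/foo.conf" "key = value";     # file nested inside a directory
    c = writeScript "run-me" "#!/bin/sh\necho hi";     # executable file at the store path
    d = writeScriptBin "run-me" "#!/bin/sh\necho hi";  # executable placed under $out/bin
  }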

Additionally, it's a bit frustrating for me how all the documentation for nixpkgs — the lib functions, how to make a derivation, specific details for building packages in certain languages, how to contribute to nixpkgs, overriding packages/overlays etc — are in [one gigantic web page][2] that's quite slow to load and even slower to search for things in.

[1]: https://nixos.org/manual/nixpkgs/stable/#trivial-builder-wri...

[2]: https://nixos.org/manual/nixpkgs/stable/

(copied from my Lobsters comment https://lobste.rs/s/psfsfo/curse_nixos#c_y83hvy)


There is Nickel in development, which might replace Nix (the configuration language) in the future.

https://github.com/tweag/nickel


Let's hope that it gets proper docs by then that actually lay out its syntax before trying to compare it to other languages. From what I gather, it's branching off of Nix and assuming familiarity with it. What does the `in` keyword do, and why is it everywhere?

From everything I can see from the code examples I highly doubt that it will reduce the number of complaints NixOS has about using a weird language.


NixOS and Guix are nice! Guix folks are doing some wonderful work off late.

I have used NixOS for a while. But for casual desktop GNU/Linux users, it seems like it's solving a problem that doesn't exist. I have been using Debian on all my computers since 1998, and unless I am running Debian unstable and updating it daily, I hardly hit any breakage.

Servers and deployments is another story.


The magic sauce for desktop users is home-manager, which allows declarative setup of your user environment. And rollbacks to previous generations with one command, in the OS or the boot screen.

I use it to set up my window manager, fonts, Vim/VS Code/Firefox plugins, configure all my tools, set up user-specific systemd jobs, ...
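
A sketch of what a module in that spirit looks like (the options are the standard home-manager ones to my knowledge; the choices are illustrative):

  { pkgs, ... }: {
    programs.git = {
      enable = true;
      userName = "me";
      userEmail = "me@example.com";
    };
    programs.vim.enable = true;
    home.packages = [ pkgs.ripgrep pkgs.fd ];
  }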

I recently got a new laptop and I had my perfect setup in 20 minutes (install NixOS, git clone config repo, add new machine specific config (disks), apply NixOS config).

I appreciate that it's not for everyone, but it is brilliant.


I also forgot to add one thing that tripped me up recently, though I am sure a solution exists for it that I am not aware of.

In Debian we do: apt-cache search <search string>

It gives quick results.

The equivalent in nix is (correct me if I am wrong, it has been a while since I did this): nix-env -qaP <search string>

This command takes so long to execute. On a tiny VPS, when I did this, it ran out of memory. I wish things like this were improved.
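
What I reach for instead these days, for what it's worth (the first needs nix 2.4+ with flakes enabled):

  nix search nixpkgs firefox
  # or the web index at https://search.nixos.org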


How does Guix compare to Nix? It seems like by using a scheme-based DSL instead of an ad-hoc configuration language, it solves one of the main complaints the author has about Nix.


IMO Guix is better but still needs some work. My major pet peeve with Guix is its stance against proprietary software, which is a necessary compromise. The world is composed of many different people and beliefs; software should be belief-agnostic.


I think it's worth pointing out that guix will not package proprietary or binary software in the main channels, but nonguix exists for those needs if one absolutely needs to have those packages in guix. At the same time, guix packages flatpak, which allows one to install most, if not all, of the proprietary packages they may want to use. I think the compromise from the guix maintainers is to develop and distribute free software while staying silent on how a user goes about adding proprietary packages to their system. Which is fair IMO.


Guix is cool but has two big drawbacks:

The major one: package ecosystem. Nix has the largest and most up-to-date package repo of any distro [1].

Nix is already somewhat niche, but the Guix community is tiny.

They also have a hard stance against proprietary software, so many things that are packaged for Nix aren't for Guix.

Second: it's pretty slow. A lot of the Nix functionality is implemented in C++. Guix is all written in Scheme, and uses a rather slow implementation.

The main advantage is more coherent tooling and documentation. (Guix is much younger and doesn't have decades of legacy cruft)

[1] https://repology.org/repositories/statistics


> it's pretty slow

Would love to see benchmarks of this if you have them.


In the footnote:

> borgcfg ... Truly one of the worst languages I have ever seen.

I'll see your borgcfg and raise you a reverse-Polish notation stack based one, written in a couple of hours and supporting a repo trading system for an investment bank for years.


And this whole time I thought the finance world was held up on an Excel spreadsheet.


Oh, it’s waaaay more fragile than that…


Absolutely agree that "isolated installations" are the future. Plan9 was way ahead of its time when it used its version of chroot and mounting to achieve basically something like that. And now the Fuchsia team is making this a pillar of their architecture, even going so far as to make all installs transparent: almost like running an app directly from a URL. And sandboxing it like a webpage, even when it's a native app. Oops, I digress. But I really see this as a trend, and NixOS is one of the systems being used to collect experience on the concept of isolation and refine it.


From everything I've read it seems like NixOS would really take off if they had great docs like Rust (both api docs and compiler error messages). Well, that and debugging.


I certainly echo that it's a bit of a curse, because as soon as you go off the happy path you start needing to learn the internals & the language, neither of which is particularly easy. That being said, the rollback capacity and the ability to include a deterministic set of software needed for any particular software project (the inclusion of shell.nix or build.nix in a repo) make it one of the few new pieces of technology that I'm unambiguously excited about becoming widespread. It doesn't feel like it carries a dual-use problem like AI or cryptocurrencies.
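
For anyone who hasn't seen one, the shell.nix I mean can be as small as this (the package choices are illustrative):

  { pkgs ? import <nixpkgs> { } }:
  pkgs.mkShell {
    buildInputs = [ pkgs.nodejs pkgs.yarn ];
  }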

Going off the happy path could simply mean the first time you want to use something that isn't inside nixpkgs, which to be fair has now overtaken the AUR to be the biggest single repo of packages: https://repology.org/repositories/statistics/total


> The worst is, of course, GCL/borgcfg, the (turing complete) configuration language for Google's internal job scheduling software.

“Noone has borgmon readability.”

— Broccoli Man (https://www.youtube.com/watch?v=3t6L-FlfeaI)


I feel the conflict between the good and bad here is interesting.

First, it has to be acknowledged that the reason everything can be installed to separate directories is a modern luxury. Hard drive space used to be much more constrained.

Second, everything being installed to a unique location is why everything has to be rebuilt, which gets into the fun situation that you can't update just part of the system.

The problem, then, is we never converged to foundational level software. That is, if we agreed that bash was a foundation of the system, there would just be the one. And you wouldn't get an odd web of versions of it linked everywhere. All code would just use bash.

I'm assuming there has to be some level of this in the package management. That some software is foundational to the system, while some is convenience shared code for applications?


> The problem, then, is we never converged to foundational level software. That is, if we agreed that bash was a foundation of the system, there would just be the one. And you wouldn't get an odd web of versions of it linked everywhere. All code would just use bash.

I can't imagine how this would be possible, in practice. Software has fractal complexity, there are always things that need to be tweaked, unexpected interactions that need to be decided one way or the other (and changed back).

I remember when Android decided to ship sqlite as part of the base OS, and everyone celebrated this robust, stable inclusion. And yet, sqlite continued to evolve, such that many Android apps nowadays just bundle their own version rather than use the OS's outdated version.

Perhaps we're still in the early days of computing, and only just now starting to see a flattening of the S-curve of change (more likely, I've gotten older and "everything old is new again"), and now some kinds of "foundational" software can stabilize.

But I note that there hasn't been a modern replacement of POSIX, and what POSIX defines is considered hopelessly outdated and limited.


This can really only work if we accept that some things will not change. The only software I know of that did this is TeX.

And it was deliberate there. Knuth acknowledged that some things could be improved, or just done differently. But he made the choice that TeX is a foundation for a lot of other code, so bug fixes only.

Edit: I should say explicitly that I agree with you. And I am interested in how this can be solved otherwise.


Author might like to try Guix!

It shares all the advantages listed and fixes the main complaint by using Scheme everywhere. That said, the EDSLs for package and operating-system declarations are pretty darn nice; you don't really need to know Lisp at all to be effective. Plus, the extensive documentation of both Guix and Guile Scheme is available right there at the command line via the info docs.

Even more, Guix already has tooling to spin up containers and VMs with profile content exposed on FHS paths! That addresses the /lib/ld-linux.so.2 and #!/bin/bash issues for using foreign binaries.

About the only advantage of Nix over Guix I am aware of is that nixpkgs has more packages than Guix. Curious if there are others?


Guix configuration is in Lisp, a well-established language (60 years and still strong). Moreover, Guix package code is written in a clear, declarative style. Nix package definitions, on the other hand, typically embed shell scripts in them.


Yes, but systemd is missing, as are most drivers, because they are not GNU.


Guix System uses Linux as its kernel, so no driver "is GNU", whatever that means.

The choice to use the Shepherd rather than systemd is motivated by a vision that we can get better integration using a single language and set of tools all the way down. As an example, this gave us services in containers several years ago:

https://guix.gnu.org/en/blog/2017/running-system-services-in...

Overall Guix System alone gives a coherent and unified view of the system; systemd also does that to some extent, but it's not in as good a position as Guix's declarative OS configuration to do that. For example, this is the kind of system exploration that Guix enables:

https://notabug.org/civodul/guix-explorer


I have been using NixOS for the last 6 months or so and it's been great as a daily driver. I use it across a half dozen machines for various purposes (desktop/laptop, some servers and VMs) and keep a similar setup on all of them, so they share the same packages for my workflows. The author is on point about how rapidly things get out of hand when you have to do any custom stuff (I basically have a bunch of hacks that do things in a very non-NixOS way, or use docker/containers for stand-alone software). I still haven't bothered trying to decipher some of that world yet.


Using NixOS to avoid the problems of dependency hell is like intentionally getting infected with sars-cov-2 to get immunity. You're intentionally exposing yourself to the very thing you're trying to avoid... in order to avoid it in the future?

Instead of only having to, say, statically compile the single program that doesn't quite work with your system's version of glibc, you now have to do it for every single program. For popular software this is okay; someone else has probably already done the work. But for the long tail (which is most software), a pre-existing config won't exist.


I've been running NixOS for 2 months having had my eye on it for a while.

So far I feel like I'm making progress. Having started with the masochism of Gentoo, followed by the relatively lesser masochism of Arch, my feeling is that at least these hours of configuring are not being poured into the void and will pay off over a substantially longer period. Reinstalling Arch or Gentoo inevitably involved remembering long-forgotten config incantations, and despite a certain number of git repos for system and home configuration plus some scripted installation, it never avoided a significant amount of tedious, unfulfilling grunt work at the bootstrapping console.

Mind you, I'm interested in views from those who are further along and feeling the pain. I can well believe that getting to the first '80% how I want things' configuration might be much easier than the next 10% on the same path.

I'm using unstable, flakes, and home manager as a module in the main configuration.nix.

The language doesn't feel particularly arcane or difficult, though, so I don't relate to that complaint yet. As others have mentioned, a few fairly standard functional features (recursion, destructuring, first-class functions) plus JSON sounds about right.


It's a pretty good ride still (depending on what you need). I tend to run a tiling wm with lots of customization and it's great to just have all these tools and behaviors synced everywhere. I build emacs from scratch with lots of packages and special config. I have neovim with all the goodies, and firefox installed with the plugins and config I want.

All automatically synced between my machines. I also really pushed myself to learn to maintain nixpkgs. It's been tough, but definitely worth it.

Config, if you want to take a look: https://github.com/pimeys/nixos
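
(Not taken from the config above, just a hedged sketch of what that kind of cross-machine syncing looks like through home-manager; the plugin choices are made up:)

  # home.nix fragment -- the same file evaluates on every machine
  { pkgs, ... }: {
    programs.neovim = {
      enable = true;
      plugins = with pkgs.vimPlugins; [ nvim-lspconfig telescope-nvim ];
    };
    programs.emacs = {
      enable = true;
      extraPackages = epkgs: [ epkgs.magit ];
    };
  }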


Thanks for the config, it looks interesting and quite mature; the possibility of sharing that kind of setup has me on board.

Incidentally, I was coming from i3 (xmonad before, but I wanted something I could switch over to wayland so I could watch video like it's 1995). I was never in love with configuring my own task bar, power management, disk mounting, screenshot program, rubbish task launcher with no icons, etc., but did it anyway for the sake of beautiful ceramic tiles... I've been playing with the pop shell gnome extension for creature comforts, which, via home-manager's dconf module, actually seems to be text-configurable. Not that I've done it yet.

I see you are using sway; have you ever considered going IKEA with GNOME?


I've used GNOME a lot in my past, and KDE since the early betas. At some point I just found out how nice tiling wms are, and after that there's no going back. It's like learning vim and then trying to go back to an editor with no modal editing.

Like, I need my editor, a browser, a few chat apps, music, and lots of terminals. And I've used this config since i3, so for maybe over a decade now. Every damn keyboard shortcut has been there for a long time. It's kind of hard to unlearn that...


The main issue with NixOS is not NixOS itself but software which sometimes has an insanely complicated build process.

An example that's hitting me at the moment is Anki. The build process was changed by the developers a while ago and became some sort of Bazel mess. There's a GH issue that was opened and closed by the developer because, well, it works for them.

https://github.com/ankitects/anki/issues/1378

Since then Anki has been broken on Nix and the only way to get it working is to either run the binary in a bubblewrap container (which has issues like breaking IME support) or just use a VM.

Could I fix this? Probably, yeah, if I spent enough time, but really the way to fix it would be to do it upstream, and getting a project to change their build process (particularly once they've recently changed it) is basically a full time job.

It's a bit similar to working with reproducible builds. The ecosystem is there, but you very often run into an upstream that just thinks "yo mate, it's not military software, why do I care". And really, they're probably not wrong: this is OSS, and no one is paying them.


There are some fundamental problems that no distribution can solve (not even Nix). Every level of abstraction has its own set of dependency issues, yet there exists no abstraction or concept that handles conflicts at every one of those levels. Nix uses a lot of hacks and kludges to try to get there, but it only gets most of the way. We've hit the limit of what we can kludge.

I think the future of software dependency conflict resolution requires a fundamental shift in software design. We have to develop new paradigms in which interoperability with conflicting dependencies is a first-order feature of every system.

BerkeleyDB is a famous example of conflicts. You can't have one BerkeleyDB database shared by multiple apps designed for conflicting versions of BerkeleyDB. If you have two apps doing two different things with the same database, but they use different versions, you need two separate databases.

This could be solved if both the database library and the applications were written with a version-independent abstraction. Regardless of the API/ABI, you have the app say "what versions do you support?", and then "you want v3? here's your interface", or "you want v4? here's this other interface". Internally, each app/db will use its own interface, but a translation layer will handle the v3 or v4 layer, without the application even being aware of it. As long as one of the apps is newer than the other, that newer app will have backwards compatibility with the old app. We won't need to install multiple stacks of dependencies (ala Nix) because different versions of apps will just be compatible. When in doubt, install a newer version.

At first this seems unworkable. What, require everyone to constantly support every old interface?? But I think we can start over with some new concepts and primitives to handle the majority of use cases so the programmer (mostly) doesn't have to think about it. We already have relocatable versioned ABIs for things like Glibc where it advertises what minimum/maximum version it supports. We just take that a few steps further and make backwards compatibility the default. Maybe we get new build systems and new ways of linking and calling code, and ship old code bundled along with the new? We need new research to develop these new methods, because the current system we're dealing with has no more fixes possible.


You might enjoy Stripe's "APIs as infrastructure: future-proofing Stripe with versioning" -- https://stripe.com/blog/api-versioning Original HN discussion: https://news.ycombinator.com/item?id=15020726


I gave nix (the package manager) a shot about a month back and gave up quickly. The lack of documentation is real for anyone getting started. And it got even more confusing as you’d typically need more tooling and hacks for particular dev environments (Node.js is what I tried).

As much as I dislike docker for dev environments, it is easy and consistent to setup and works for all the languages I’ve had to deal with.


Something that perhaps detracts from the positives of NixOS is that many of the tools with the biggest need for multiple versions and environments came up with their own solutions to the problem long ago.

Python has pyenv, NodeJS has nvm, Terraform has tfenv, and the list goes on.

Then if you're running on a server/VM/cloud, it seems to me that you should be treating your systems as cattle instead of pets. I never update packages on servers, I build a new image/instance, test it, and replace the server.

I wonder if some of the benefits of NixOS were either solutions looking for problems, or cases where better solutions rose to prominence instead.

I haven't even started talking about containerization, either!


> Then if you're running on a server/VM/cloud, it seems to me that you should be treating your systems as cattle instead of pets. I never update packages on servers, I build a new image/instance, test it, and replace the server.

I have six or so servers, all of them configured differently. Not completely differently; they share some configuration. But not very similarly either.

They're pets. They have a lot of state. Backups are running, but I certainly don't want to reinstall them.

Despite being pets, I still want to do as little work to manage them as I possibly can. NixOS is great in this scenario.


> Python has pyenv, NodeJS has nvm, Terraform has tfenv, and the list goes on.

Nix makes these all largely irrelevant, in a consistent way. If anything I would say this supports nix's usage.
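
A hedged sketch of what replaces that pile of per-language version managers (the attribute names, e.g. python39 and nodejs-16_x, depend on the nixpkgs revision you're pinned to):

  # shell.nix: one file instead of pyenv + nvm + tfenv
  { pkgs ? import <nixpkgs> {} }:
  pkgs.mkShell {
    buildInputs = [ pkgs.python39 pkgs.nodejs-16_x pkgs.terraform ];
  }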

> Then if you're running on a server/VM/cloud, it seems to me that you should be treating your systems as cattle instead of pets. I never update packages on servers, I build a new image/instance, test it, and replace the server.

You can do this with nix as well. You can take a configuration.nix and ship it as a version-controlled file, an ISO image, a PXE boot image, an AMI, a VHD, or other formats.

> I wonder if some of the benefits of NixOS were either solutions looking for problems, or cases where better solutions rose to prominence instead.

I don't think so: https://blog.flyingcircus.io/2016/05/06/thoughts-on-systems-...

> I haven't even started talking about containerization, either!

Containers generally take an impure process and create a "snapshot in time". Nix allows you to "hydrate" your configuration in a reproducible and repeatable fashion. OCI images can only really be layered, not composed, and commands like `sudo apt-get install` or `pip install` also make the runtime behavior dependent on when the image was created.
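
As a concrete aside, nixpkgs can also build OCI images itself, so the image contents come from the same declarative closure instead of whatever apt/pip happened to resolve on build day. A hedged sketch using dockerTools (redis is just the example payload):

  # image.nix -- `nix-build image.nix && docker load < result`
  { pkgs ? import <nixpkgs> {} }:
  pkgs.dockerTools.buildImage {
    name = "redis-example";
    contents = [ pkgs.redis ];
    config.Cmd = [ "/bin/redis-server" ];
  }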


I actually think that as a language Nix's runtime semantics are worse than borgcfg's. Both have an obtuse and confusing syntax, it's true. But Nix's extreme laziness means the language essentially has "come from" semantics, which is the source of the spooky action at a distance the author speaks about. Combined with the lack of compile-time type checking, this means debugging your Nix code comes pretty close to feeling impossible. It is deterministic, but tracking down what exactly broke your Nix package can be difficult when the source could literally be almost anywhere.
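
A small illustration of that, in plain Nix: the throw below is only reported when something finally forces cfg.bad, which can be arbitrarily far from where the value was defined.

  let
    cfg = {
      good = "fine";
      bad  = throw "defined deep inside some overlay";
    };
  in cfg.good   # evaluates happily; the error only surfaces wherever cfg.bad is forced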


Would Nix benefit from something like the old CMU depot program?

http://www.cs.cmu.edu/afs/.cs.cmu.edu/help/content/unix_linu...

It consolidates everything you want into /usr/local (excluding packages you don't want). I used to use this at my old job before package managers were ubiquitous.


The main problem with NixOS that the author touches on is the isolation, as well as the global state NixOS maintains for mappings in the Nix store, which makes a NixOS root unsharable over NFS.

There has been talk on #nixos of creating an LKM, etc., to fix the security issues with NixOS, but there is a better way to deal with this: treat the FS as a file system rather than as an object store the way NixOS does; the state could be encoded in the file system itself.


I gave up on NixOS years ago.

It doesn't matter how many nice ideas you have, even revolutionary ones, when the core experience is just such a giant PITA.


The Nix package manager is amazing. I'm currently using it on my work OSX machine (instead of homebrew). It works well (with home-manager).

The author is spot on about the language though. I don't understand why people have to keep reinventing the wheel and designing bad languages when good options exist. At least they didn't use YAML...


> The worst is, of course, GCL/borgcfg

There should be a compendium of non-deliberately-hideous languages. I'll start.

https://bigfix.me/search?page=18&searchString=relevance%20ex...


Reading about NixOS reminded me of the assign command in Amiga OS: https://github.com/TheBeef/AmigaAssign


Does anyone have some tips on how I can partition an EFI laptop for NixOS with full disk encryption? I'm struggling to install NixOS on my Librem 14 and have fallen back to Ubuntu, as it does all that for me.


I'd recommend LUKS. NixOS has a guide in the wiki, but if you're looking for an example, see my nix readme. It might have a typo or two, but it's 99 percent there.

https://github.com/NelsonJeppesen/nix-lifestyle#luks-encrypt...

Unencrypted boot with LUKS root, nothing fancy

https://nixos.wiki/wiki/Full_Disk_Encryption
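
Once the partitions exist, the NixOS side is only a few option settings. A hedged sketch (the device paths are placeholders for your actual partitions):

  # configuration.nix fragment for unencrypted /boot + LUKS root
  boot.loader.systemd-boot.enable = true;
  boot.initrd.luks.devices."cryptroot".device = "/dev/nvme0n1p2";
  fileSystems."/" = {
    device = "/dev/mapper/cryptroot";
    fsType = "ext4";
  };
  fileSystems."/boot" = {
    device = "/dev/nvme0n1p1";
    fsType = "vfat";
  };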


Use LUKS, don't use LVM.

I do that with ZFS for all the non-ancient stuff in github.com/Ericson2314/nixos-configuration


Why avoid LVM?

If you could point me to practical instructions on how to get it running I'd be very thankful! I'm sure I'm missing something basic, as is usual when struggling with UNIXes…


There is some LVM FUD I can't confirm, but really, it's just simpler.

ZFS filesystems within the same "pool" are also just more flexible.

The NixOS config part you can take from my repo. The command-line part is the same as for any other Linux. From memory:

cryptsetup luksFormat physical-part

cryptsetup open physical-part arbitrary-name

some-command-to-make-file-system ... /dev/mapper/arbitrary-name

mount /dev/mapper/arbitrary-name /mnt


The real deal-breaker for me in using NixOS as a daily driver was that I just could not figure out how to get Ruby on Rails to play nice with it. NixOS breaks a lot of assumptions that programs make about $PATH, and even if that's done for good reasons it plays holy hell with the development stack of some languages. I eventually gave up and moved my dev setup back to OSX (and then that PC died anyway). Maybe they've fixed it since then; I haven't tried.

I do continue to use it for my homelab infrastructure, though. It is extremely good at handling new versions of my Prometheus configuration, and I would never consider anything else at this point. Just not what I'd use as a dev environment.


I moved to Docker for ruby/rails dev long ago and never looked back. Trying to do it locally, especially if there are others involved, is a path to pain and frustration as "works on my machine" creeps into a `bundle install` every few months.


After having spent weeks learning nix, I'm convinced that docker is simply a better tool for most use-cases of nix.


Coming from docker before nix, I think nix is a better version of docker for some use-cases. If the ecosystem is supported, I much prefer nix for speed (both runtime and development speed).


I agree that when nix works, it's great.

However, I find debugging docker infinitely easier than debugging nix. I played around with the idea of adding nix for work projects and getting buy-in from other colleagues. But after working with it for weeks and still running into issues, there's no way I could in good conscience recommend the tool.


Yep, definitely have to be careful about exposing nix to others. I did it on a 2 man team to great effect, but would really have to vet teammates before doing it on a broader scale. I do, however, generally write all of my scripts with nix-shell shebangs to give a layer of consistency/guarantee, and that has had a couple people ask me about it.
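
Those shebangs look roughly like this (jq, curl, and the URL are just placeholders; whatever the script actually needs goes after -p):

  #! /usr/bin/env nix-shell
  #! nix-shell -i bash -p jq curl
  # nix-shell runs the script in a shell where jq and curl are guaranteed present
  curl -s https://example.com/data.json | jq '.items | length'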


What problem did you have? I had an issue getting an RoR project running on NixOS, but it was just a lack of understanding. I needed Ruby 2.7.4 for the project, which is not available in current nixpkgs. Instead, I just imported the nixpkgs tarball with 2.7.4 and I was good to go. I found it by searching ruby_2_7_4 on hydra.nixos.org
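
The import trick looks roughly like this; the archive revision is a placeholder for whichever nixpkgs commit still carries the Ruby you found on Hydra:

  # shell.nix pulling Ruby from an older, pinned nixpkgs
  let
    oldPkgs = import (fetchTarball
      "https://github.com/NixOS/nixpkgs/archive/<revision-from-hydra>.tar.gz") {};
  in oldPkgs.mkShell {
    buildInputs = [ oldPkgs.ruby_2_7 ];
  }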


It's worth noting that one can handle really badly behaving software using buildFHSUserEnv.
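
For anyone who hasn't used it: buildFHSUserEnv drops the program into a namespace with a conventional /usr, /lib and friends. A minimal sketch (the name and package list are only illustrative):

  # fhs.nix -- `nix-build fhs.nix`, then run ./result/bin/fhs-shell
  { pkgs ? import <nixpkgs> {} }:
  pkgs.buildFHSUserEnv {
    name = "fhs-shell";
    targetPkgs = pkgs: [ pkgs.zlib pkgs.openssl pkgs.curl ];
    runScript = "bash";
  }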


Is NixOps still a thing? It might be cool to unify nixpkgs with cloud infra, although Terraform and friends already seem to provide 80% of the value of such a solution.


Terraform is more about reconciliation of a configuration, moving the current state closer to a desired end state. NixOS is congruent configuration management: there's a 1-to-1 correspondence between configuration and reality, with no configuration drift.


I meant NixOps, not NixOS :)


> Almost all of the good things about NixOS are natural consequences of this single decision.

This quote stood out to me because you can really use it anywhere, just replace NixOS with any other piece of software, like:

"Almost all of the good things about <rusts_module_system_that_treats_a_file_as_a_module> are natural consequences of this single decision."

Looking at you, c++20[1]

- [1] https://gitlab.kitware.com/cmake/cmake/-/issues/18355#note_1...


> The first is relatively simple: they developed their own programming language to do configuration

Exactly this. I do not understand why they did that.


Because it's a domain-specific language, optimized for packaging software.

The relevant passage:

> Its only purpose is to describe components and compositions. A component is created through the derivation primitive operation, which accepts all information necessary to build the component. This includes its dependencies, which are other derivations or sources.

Full phd thesis: https://edolstra.github.io/pubs/phd-thesis.pdf
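
The primitive itself is tiny; everything in nixpkgs is sugar layered over it. A bare example (not how packages are normally written, but it shows what `derivation` actually takes):

  derivation {
    name = "hello-example";
    system = builtins.currentSystem;
    builder = "/bin/sh";
    args = [ "-c" "echo hello > $out" ];
  }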


I understand that, but I do not think packaging is such a difficult subject that it justifies a brand-new DSL.


I hope the Simula VR guys are reconsidering having NixOS as the default.


Why do we need a programming language to run software installers?

Like, all I really need is the ability to say, "Hey, for this thing, install it in a versioned directory, please, and leave /usr/bin the heck alone." or even "Okay you're stubborn and you won't give up on /usr/bin but at least do the first part for me."

For most things (ls, find, etc.) this is unnecessary, but for the pythons and javas and so forth it's helpful. It seems like a lot of package managers already do the latter, but sysadmins insist on There Shall Be Only One, even when /usr/bin is already full of symbolic links into versioned installs (or links to links to links to versioned installs...).

Maybe I'm missing the point entirely.



