First, you want a "dependency manager". That's not what Nix is, clearly. Nix is a package manager.
Second, to use this package manager, you first need to install direnv. How do you install it? with brew, a different package manager.
Third, you have to learn a new functional programming language. Right. Because normally to put together a toolbox, I often learn to speak a few phrases in Swahili first.
Fourth, finally, we get to install a simple Unix program, that any package manager could have provided.
For the fifth trick, freezing dependencies, you first have to have all the correct versions of all the dependencies to do what you want. How do you establish the right ones? Manually. Just like with any other package manager that builds apps against a specific tree of build deps. "Reproducibility!" Because no other package manager could possibly download the same tarballs and run the same commands. Must be all that functional programming magic.
And sixth, the idea that this will work forever. Right. Because any other distro and package manager could not possibly work again in the future, with a mirror of all the packages in a release. That's only for super cool futuristic package managers. No way to reproduce the same packages in the future with old package managers.
Look, Nixians, this is all old hat. Every decent package manager can do all these things, just in a less complicated way. Modern systems use containers, and they don't need Nix for that. Nix is basically just the new Gentoo: a distro for hobbyists who tell themselves they're advanced while all the professionals use something else.
First, dependency manager vs package manager. Potayto potatoh.
Second. I recommend direnv for ergonomics: you don't have to remember to reload your shell. Most current version managers either do it themselves through shell hooks or ask you to install direnv. Point taken on using brew: I've changed it to use nix-env.
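To make the ergonomics concrete: the direnv side is a one-line `.envrc` in the project root (a sketch, assuming a `shell.nix` already sits next to it):

```shell
# .envrc -- direnv evaluates this whenever you cd into the directory.
# "use nix" builds the environment from the adjacent shell.nix and loads
# it into your current shell; leaving the directory unloads it again.
use nix
```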
Third. Yep. Sort of like having to learn English before you can learn programming.
Fourth. As they say: start with the basics.
Fifth. No other package manager can freeze the complete dependency set you are using AND allow you to revert it exactly when things don't work. That's the power of declarative, reproducible builds: you try to update and it doesn't work? No problem, revert to the last working version and things are all good again.
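The revert story here is just profile generations. Roughly, as a command sketch (assuming packages are managed with `nix-env`):

```shell
nix-env -u                       # try the upgrade; this creates a new generation
nix-env --list-generations       # every change to the profile is recorded
nix-env --rollback               # broken? switch back to the previous generation
nix-env --switch-generation 42   # or jump to any earlier known-good one
```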
Sixth. Ever tried updating a project using a Dockerfile that uses an old version of Ubuntu? Then you can't find the packages anymore, or no one is building them anymore for the new version of Ubuntu, and you have to pull random PPAs from the Internet to get things to work? In Nix, you can keep two pinned nixpkgs versions at the same time. This way you can keep upgrading your project one working step at a time.
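For the curious, mixing two pinned nixpkgs revisions in one environment looks roughly like this (the revision placeholders are hypothetical; substitute real commit hashes):

```nix
let
  # Each fetchTarball pins nixpkgs to one exact commit (hashes are placeholders).
  oldPkgs = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<commit-with-the-old-toolchain>.tar.gz") {};
  newPkgs = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<commit-with-the-new-toolchain>.tar.gz") {};
in oldPkgs.mkShell {
  # Pull some tools from the old snapshot and some from the new one.
  buildInputs = [ oldPkgs.ruby newPkgs.nodejs ];
}
```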
So yeah, it's not just cHeCKmATe FP-atheists sort of stuff.
For the docker issues, that's just one of those quirks you learn and then in the future you make your containers reproducible. Or don't, and just store them after they're built the first time in an artifact repository. Any builds of your app that use a container need to be versioned to prove they've been tested successfully, anyway, and you want to keep them around to do fast rollbacks without having to kick off a build just to roll back. So you don't need reproducibility for reliability. It's mostly used for security: build on two siloed systems, compare the results.
Version-pinning in general is to prevent silent upgrades that break your builds. But you also don't want to version-pin everything, as you want security patches to get automatically applied with regularly-recurring builds. So in practice, you stick with a major branch of something, silently accept upgraded packages or minor version bumps, and have an automated system flag if you aren't running the latest security packages.
All of that works fine with Docker and Apt. More often than breakage due to silent upgrade is the security patch problem, where somebody pinned to a version 4 years ago and has never patched, and it's been so long that all the downstream effects of patching force you to do an entire lift-and-shift to new everything. Gradual continuous updates (stick to a major version, run tests) is the best middle ground. Modern projects with Docker containers have standardized their tagging around this.
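As a sketch, the "stick to a major version, accept patch-level upgrades" pattern in Docker/Apt looks something like this (image tag and package pin are illustrative):

```dockerfile
# Track the release line, not "latest": the 22.04 tag floats within the
# LTS release, so regularly-recurring builds pick up security patches.
FROM ubuntu:22.04

# Pin only what must not move silently; the "*" accepts patch releases.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ruby=1:3.0* \
    && rm -rf /var/lib/apt/lists/*
```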
> No other package manager can freeze the complete dependency set you are using
I don't know if you realise that your post makes a poor case for that, and for imagined reproducibility of nix builds?
1. Direct quote from your post: "if you tried to follow this article step by step, you’ll have noticed that the versions of ruby and node you installed are probably slightly different from the ones above".
The "solution" for that is to dig through some commit hashes, and use those.
Commit hashes are not versions.
2. Your example points to a completely random version of a package
The previous issue already points to a random version, apparently, but this is further compounded by the fact that "ruby_2_6" and "nodejs-10_x" point to a random version that is available at the time.
To truly make the claim that nix allows reproducible builds, it needs to provide:
- a way to specify versions that don't depend on the commit hash of a "nixpck channel" whatever that is
- a way to actually properly pin package versions. Something that all package/dependency managers allow you to. If I want to pin ruby to exactly 2.6.7, for an actual reproducible build, what is nix' solution for that?
> To truly make the claim that nix allows reproducible builds, it needs to provide: - a way to specify versions that don't depend on the commit hash of a "nixpck channel" whatever that is
I don't see why a commit hash is insufficient to declare a package reproducible. For my system, I just use channels, but for anything mission critical, I think most people pin their commit hash of nixpkgs [0]. A pinned commit of nixpkgs will produce the exact same set of packages, forever.
> a way to actually properly pin package versions
The way you do this is to pull in a specific commit of nixpkgs (you can install packages from many versions of it at the same time, if you want). Alternatively, you can manually vendor the package file, which is what I usually do for one or two packages.
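As a sketch, pinning a commit of nixpkgs in a `shell.nix` looks like this (the rev and sha256 are placeholders for whichever commit you choose):

```nix
let
  pkgs = import (fetchTarball {
    # Placeholders: the exact commit of nixpkgs you want to freeze on.
    url = "https://github.com/NixOS/nixpkgs/archive/<commit-hash>.tar.gz";
    sha256 = "<hash-of-the-unpacked-tarball>";
  }) {};
in pkgs.mkShell {
  # This set of packages now evaluates identically on every machine.
  buildInputs = [ pkgs.ruby_2_6 pkgs.nodejs-10_x ];
}
```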
You'd think such a simple thing, especially in the context of "ditch version managers" would be easy and warrant at least an example.
So far I've had:
- ignoring the question or avoiding the answer
- one answer "just use overlays" with no example
- one answer "just use flakes" with no example
2. It's a separate, unstable feature
It amazes me that after 18 years, you still need (probably? no idea) to use some entirely separate feature that is still unstable/experimental and uses an entirely different system of describing what's needed than nix proper.
And still no answer how to do the thing that is literally required for every single reproducible build: pin a tool to a specific version.
Meanwhile tools that we're supposed to ditch are as simple as
It amazes me how many people in that thread keep saying that we shouldn't even want it, that this is not what nix is about, that no one should install historical versions etc.
Given that my question never got an answer on HN (not until I had a discussion with the original article's author on Twitter and he posted this solution), it's clear that:
- this issue still exists
- how to do this is still not properly documented
- no one really knows how to do it in nix
- this extremely common functionality that is literally required for actual reproducible builds is entirely glossed over in all the articles praising nix
1- Finding the commit hash of nixpkgs which contains a specific version of a dependency (e.g. ruby 2.6.3) and using that to bring in the dependency
2- If a specific version does not exist in any of the hashes, we should patch the `.nix` file and build the dependency ourselves. This causes all other transitive dependencies to be rebuilt.
Arguments against 1:
Why should we rely on an external third-party webservice to find the hashes of a specific package? This should be integrated into the `nix` tooling. Hope the community incorporates history search into the tooling :)
Arguments against 2:
@dimitriid says the solution is cumbersome.
Arguments for 2:
- What if rvm or nvm don't have a specific version or deprecate that version from their repos? At least there is a solution via Nix.
- What are you going to do for other languages which don't have a package/VM manager like Java, C/C++, etc? At least the Nix solution is universal and you don't have to learn the tooling of every technology.
> I feel like Nix/Guix vs Docker is like … do you want the right idea with not-enough-polish-applied, or do you want the wrong idea with way-too-much-polish-applied?
> - What if rvm or nvm don't have a specific version
Let's get back to reality for a second. For most development needs this is a hypothetical situation.
Right now, in reality, ruby-lang hosts versions dating to 2002.
Also, right now, in reality nix limits the number of available versions. Quote: "in NixPkgs we choose to restrict the number of maintained versions", https://github.com/NixOS/nixpkgs/issues/93327#issuecomment-6... So it's already worse than the hypothetical "what if rvm don't have a specific version".
Also, right now, in reality, there's no easy way to pin a version of a tool in nix. Except going to a third-party website (lazamar's site), hoping that your specific version is listed there (he only maintains a snapshot of a few years), and digging through nixpkgs commit hashes. And this has been a known issue for as long as nix has been around. 18 years, at least. And we're still hoping that someone somewhere will incorporate this into standard tooling. You yourself linked to an issue on this.
> What are you going to do for other languages which don't have a package/VM manager like Java, C/C++, etc?
That varies between languages, of course. I have several Java versions installed side by side. C/C++ is worse because, tooling-wise, they are mostly stuck in the 1960s (though on Windows you can have multiple msvcrt versions installed side by side).
This still can be (and is) handled. And nix doesn't offer great advantages, except providing a complex, complicated, poorly documented tool.
>Right now, in reality, ruby-lang hosts versions dating to 2002.
OK, so you want to pin ruby 1.3.4 in your project and use a ruby gem that depends on a specific glibc (or maybe a standard ruby library itself depends on a low-level C dependency). With Python this stuff happens all the time. What would you do?
Craft a dockerfile inheriting from an old Debian and hope ruby 1.3.4 installed with rvm would play nicely with this specific version of Debian? How do you find what version of say, Ubuntu, has fixed security updates of the low-level dependencies of your project? And what happens if the apt-managed JRE v.1.6.x in that instance would break other parts of the project that was dependent on JRE v1.6.y?
>That varies between languages, of course.
Well, that's the point. For each technology, you have to handle their specific "VM manager" and put up with their quirks + random state-changes imposed by apt. With Nix, the solution would not vary between languages, setups and teams. I mean, at least theoretically, the Nix/Guix way makes much more sense than shipping the whole state of the computer (with docker).
> OK, so you want to pin ruby 1.3.4 in your project and use a ruby gem that is dependent on a specific glibc (or maybe some standard ruby library itself is dependent on a low-level C dependency.). With python these stuff happens all the time. What would you do?
What amazes me is how you keep asking what would happen in other places instead of asking/answering "how easy/convenient would that be in nix".
Let me give you a different question: I have ruby x.x.x. A serious security issue is discovered, and everyone is encouraged to upgrade to ruby x.x.x+1.
Let's see how easy it is to do in nix which is "a tool that takes a unique approach to package management and system configuration."
Oh. Wait. There's literally no such way. Except manually looking for git commit hashes... somewhere.
Let me quote a reply from the issue you yourself linked:
--- start quote ---
I've read the Nix Package Manager Guide, but everything in there about reproducibility seems to point to installing (and therefore finding) specific versions. I mean the first title in chapter 1 is multiple versions
Any other multiple-simultaneous-version managers: asdf, kiex, gvm, rvm, rbenv, pyenv, jenv, nvm, Renv, luavm, all have a list-available-versions as either their 1st or top-5 most-used commands. But nix has 5.3K stars, so there must be something I'm not understanding if nix is missing something so vital
--- end quote ---
> With Nix, the solution would not vary
What was this solution again? In the entire discussion of this post there was exactly one person (the author of the article) who could show how to do it.
Even nix proponents don't know how to do that, apparently.
Sorry if I came across as a Nix professional/proponent. I'm actually a nix beginner and, as can be seen in this thread, I've asked the author a similar question to yours. Using Ubuntu, I have been bitten by glibc and other dependency mismatches in my projects, and I'm looking for something more elegant than docker/VBox.
I've read posts from both sides of this discussion.
> Oh. Wait. There's literally no such way.
I think there is?
Check whether the nixpkgs history has that version (+). If not, patch the derivation. That could be as simple as bumping the version and recalculating the hash, or it could break one of the transitive dependencies, at which point you have to fix that too (a version increment or decrement for the dependency, or more). Essentially you have to build the specific ruby version down to all its dependencies.
(+) How can we see what versions are available in nixpkgs history?
It seems we have to use third-party webservices! I agree with you here that this should have been implemented in the nix tooling itself.
I know of two such services:
I think this also answers that quote from the github issue.
> What was this solution again?
From what I understood, in the project `.nix` file, we inherit from a forked nixpkgs repo which has our patched `default.nix` in the ruby folder, for building ruby. (Nix pros, correct me if I'm wrong here.) Now we have to build all deps locally, since a matching hash would not be present on the Nix cache servers.
Now, with nvm, how could you guarantee that a particular version would not conflict with one of the host OS's low-level libraries (like glibc)? The difficulty and complexity of patching the Nix derivation has to do with this part. In Ubuntu/Debian, you can't simply mess with system-level libraries.
There is no official way to do that. There's no history. There are no tools in nix to do that. As you noted, you have to rely on the kindness of strangers to provide this service. Here's lazamar's own explanation why he built the tool: https://lazamar.github.io/download-specific-package-version-... Look at the beginning of the "automating" section to see what is required to do all that with nix.
On top of that, in the entire discussion here exactly one person could actually show how to pin something to a version, so it's poorly documented, extremely complex and not really understood even for people who propose nix.
> Now, with nvm, how could you guarantee that a particular version would not conflict with one of the host OS's low-level libraries
You probably don't, or you stick it in a container somewhere. But this is about tradeoffs: does nix provide enough benefit to justify using this complicated, complex, and poorly documented system?
That's the point: without using Nix, you'd have to do LD_LIBRARY_PATH hacking to have two programs use two different versions of glibc. When people say "oh nix looks so complicated I can do this using bash scripts" I reply "well show me these beautifully expressive and simple bash scripts then".
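For context, the non-Nix glibc hack being alluded to looks something like this (paths are hypothetical):

```shell
# Point a program at a privately installed glibc, bypassing the system one.
# Fragile: the binary and the dynamic loader must agree on the glibc version.
LD_LIBRARY_PATH=/opt/glibc-2.28/lib \
  /opt/glibc-2.28/lib/ld-linux-x86-64.so.2 ./my-program
```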
Funny how you're talking about how horrible it is to install glibc, when the entire reason lazamar's tool exists is that he couldn't figure out how to install a specific version of a tool using nix https://lazamar.github.io/download-specific-package-version-...
"Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye?"
Also, not all of us depend on glibc or changes in glibc. Making this solely about glibc while ignoring the larger issue is... can't find a proper word for it.
I updated my reply. Can you confirm that inheriting from a forked nixpkgs with a patched default.nix in ruby directory is the solution for cases that a particular version was not ever present in history?
There is no need to literally fork the nixpkgs repo. Most language derivations were written with the idea of supporting multiple versions at the same time. In this guide (https://www.breakds.org/post/nix-shell-for-nodejs), the key example is this one:
let
  pkgs = import <nixpkgs> {};
  buildNodejs = pkgs.callPackage <nixpkgs/pkgs/development/web/nodejs/nodejs.nix> {};
  nodejs-8 = buildNodejs {
    enableNpm = true;  # We need npm, don't we?
    version = "8.17.0";
    sha256 = "1zzn7s9wpz1cr4vzrr8n6l1mvg6gdvcfm6f24h1ky9rb93drc3av";
  };
in pkgs.mkShell rec {
  name = "webdev";
  buildInputs = with pkgs; [
    nodejs-8
    (yarn.override { nodejs = nodejs-8; })
  ];
}
where we import the normal nixpkgs, then we use `callPackage` to re-use the code that was written by the Nix maintainers, and we specify our own version and SHA.
If the derivation hadn't been written to be reusable, what you can do is copy this whole directory https://github.com/NixOS/nixpkgs/tree/nixos-21.05/pkgs/devel... to your local project and import the nix files locally. In this case, some of the artifacts might be missing from the nix cache and you might need to compile some from source. We do this in some cases and upload the artifacts to our private store on https://www.cachix.org so that no engineer has to recompile them again on their own computer.
> Nix is basically just the new Gentoo: a distro for hobbyists who tell themselves they're advanced while all the professionals use something else.
As a Gentoo user and someone who has tried and grokked Nix/Guix, this is not true at all. Gentoo is essentially a system for creating distributions, a meta-distro if you like. It just so happens to be something that appeals to someone like me to run at home because, well, why not? I know exactly what I need/want on my computer and this lets me build my own custom distro tailored for me.
Nix/Guix is something completely different. It addresses real problems that arise in a variety of places in an elegant way. Containers are another solution to some of these problems, but come with their own tradeoffs.
Your blanket statement about "professionals" makes no sense. I don't use Gentoo at work because it's not my job to maintain custom Linux systems or create distros. But guess what? Not everyone has the same job. You don't speak for all "professionals". I can distinctly remember around 2007 people saying this exact thing about git. I was told geeks were only learning it to make themselves feel clever and that SVN was fine. Well look how that turned out. I'm glad I ignored those people.
It's not about what is possible, it's about ergonomics. We could have used email for conversations online, yet we use Slack. Other tools require lots of thought and careful execution for what Nix gives you for free.
> Other tools require lots of thought and careful execution for what Nix gives you for free.
How do I pin ruby version to 2.6.3 in Nix? This is given to me for free in other tools. As are reproducible builds because all modern package/dependency managers lock versions.
Last week I had a problem with our CI/CD pipeline because an old package that's a sub dependency wouldn't install anymore. [1]
See that `rev` line in the sources.json? The whole build system is pinned to one git commit. If it builds today, it will build tomorrow. No matter what. That alone resolves a huge category of potential problems.
There's much, much more to it. Nix stores each package in its own path. [2] This makes it possible to install multiple versions of one application or library next to each other.
With the power of direnv only the versions one needs are included.
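The `rev` line mentioned above follows the niv convention: a `sources.json` records the exact commit, and the build imports nixpkgs from it. A minimal sketch of the consuming side (file layout and field names assumed to match niv's defaults):

```nix
let
  # nix/sources.json is generated by niv and holds { nixpkgs = { rev, sha256, ... }; }.
  sources = builtins.fromJSON (builtins.readFile ./nix/sources.json);
  pkgs = import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/${sources.nixpkgs.rev}.tar.gz";
    sha256 = sources.nixpkgs.sha256;
  }) {};
in pkgs.mkShell {
  buildInputs = [ pkgs.ruby pkgs.nodejs ];
}
```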
> a problem with our CI/CD pipeline because an old package that's a sub dependency wouldn't install anymore
> The whole build system is pinned to one git commit. If it builds today, it will build tomorrow. No matter what.
If that commit is still available. This still doesn't answer the question of how I easily pin a version. The packages in the post point to what amounts to a random version.
And that's on top of the fact that "if you tried to follow this article step by step, you’ll have noticed that the versions of ruby and node you installed are probably slightly different from the ones above"
Why wouldn't it be available? Nix packages are on github. [1]
> And that's on top the fact that "if you tried to follow this article step by step, you’ll have noticed that the versions of ruby and node you installed are probably slightly different from the ones above"
That's the same as on any other host. If I write about installing ruby on Ubuntu today and you read the blog post a while later, you'll probably get a newer version.
> Why wouldn't it be available? Nix packages are on github. [1]
1. Git history can be re-written.
2. Nix packages still point to specific versions that may or may not be available [1]
So how exactly is nix so much different from "a problem with our CI/CD pipeline because an old package that's a sub dependency wouldn't install anymore"?
> That's the same as on any other host. If I write about installing ruby on Ubuntu today and you read the blog post a while later, you'll probably get a newer version.
That is only true insofar as you use a "get me the latest version of the package, regardless".
In any sane system the solution to that is to do `npm install <specific version>` or `gem install <specific version>` or `pip install <specific version>` or `brew install <specific version>`
In nix, apparently it is:
- find the commit hash of the nix package channel
- pull a package that still points to a rather random version
Which raises another question:
nixpkgs 21.05's ruby_2_6 is actually ruby 2.6.8. It's not available in nixpkgs 20.09. But 21.05 doesn't contain the newer Go versions.
So, if I wanted ruby 2.6.7 and go 1.13 (neither is available in either 21.05 or 20.09), what exactly would I do? Or if I wanted ruby_2_6 (available in 21.05) and go_1_16 (available in 20.09)? No other package/dependency management tool even has such a problem.
So far I've asked this at least three different times, and every time the question has been ignored or avoided. But sure, nix is amazing, and is a reproducible build system unlike any other.
It's been around for at least 18 years. You'd think there would be easy answers to these questions by now.
Maybe nix is not for you. I'm not well versed with nix packages and only use the basics.
Your question about how to pin this package hasn't been answered well enough for you because it takes work to pin packages, and it isn't as trivial as on Ubuntu. To paraphrase: it's a great place to live, but not to visit.
Some of my friends are active in the Nix community. They mentioned that it was around as hard as learning Haskell. One of them only grokked nix after he read Nix Pills [1].
I think nix has good concepts and in theory is the package manager I would like to use. In practice it isn't that.
Personally, I run ubuntu and use nix packages for three things:
- quickly test a program without needing to install it system wide via apt-get
`nix-shell -p appname`
- together with direnv for the dependencies of CPP projects I work on and
- installing commandline tools packages
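For the direnv + C++ use case, the per-project file can be as small as this (package choices are illustrative):

```nix
# shell.nix -- loaded automatically by direnv via "use nix" in .envrc.
let
  pkgs = import <nixpkgs> {};
in pkgs.mkShell {
  buildInputs = with pkgs; [ gcc cmake gdb boost ];
}
```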
--- start quote ---
You probably use a variety of tools to manage your Ruby, Node, Python, Elixir versions, such as rvm, rbenv, nvm, or asdf. They all work reasonably well, right?
Well, not quite.
...
My ideal dependency manager would allow me to specify each and every dependency that is required to work on my projects. It should be easily reproducible, declarative and easy to upgrade.
--- end quote ---
To this end, the author proposes to ditch rvm, rbenv, nvm etc. and go for nix.
Here's the entirety of what I need to do to pin my ruby version:
rvm install 2.1.1
rvm use 2.1.1
That is it.
I've yet to see anything that approaches this in nix. We are talking about reproducible builds, aren't we?
That page says: "Overlays provide a method to extend and change nixpkgs"
I don't want to change something. I want to simply pin a version. This is neither simple, nor scalable, nor maintainable (also here: https://news.ycombinator.com/item?id=28591202):
Overriding a version
self: super:
{
  sl = super.sl.overrideAttrs (old: {
    src = super.fetchFromGitHub {
      owner = "mtoyoda";
      repo = "sl";
      rev = "923e7d7ebc5c1f009755bdeb789ac25658ccce03";
      # If you don't know the hash, the first time, set:
      #   sha256 = "0000000000000000000000000000000000000000000000000000";
      # then nix will fail the build with an error message like:
      #   hash mismatch in fixed-output derivation '/nix/store/m1ga09c0z1a6n7rj8ky3s31dpgalsn0n-source':
      #   wanted: sha256:0000000000000000000000000000000000000000000000000000
      #   got:    sha256:173gxk0ymiw94glyjzjizp8bv8g72gwkjhacigd1an09jshdrjb4
      sha256 = "173gxk0ymiw94glyjzjizp8bv8g72gwkjhacigd1an09jshdrjb4";
    };
  });
}
Reading your linked post, it seems you are wrestling with how Nix works.
Nix is hard to learn. The concepts used are almost the same, but different. That's because it's solving slightly different problems than package managers currently do.
As the overriding example shows, the versions are pinned by hash and are stored in a file.
What about it is not maintainable or scalable?
> First, you want a "dependency manager". That's not what Nix is, clearly. Nix is a package manager.
This is not true. The defining point about Nix is that it allows you to define all dependencies explicitly. Every derivation in it is built inside a sandbox with no network access and a restricted filesystem. This ensures that nothing is accidentally missed. Package management is just one of its features.
> Second, to use this package manager, you first need to install direnv. How do you install it? with brew, a different package manager.
direnv is not needed to use Nix; it's just a nice add-on. It's like how you don't need Ranger to navigate a file system, but it is nice. Besides, you can install direnv through nix; I use it this way. Even Nix itself is installed through Nix.
> Third, you have to learn a new functional programming language. Right. Because normally to put together a toolbox, I often learn to speak a few phrases in Swahili first.
Well, yes, you do, but the language is actually quite simple; it's nixpkgs (the repo containing all packages) that's quite complex. Nix's purely functional, lazily evaluated properties are an exact fit for a language describing dependencies.
> Fourth, finally, we get to install a simple Unix program, that any package manager could have provided.
If you were just trying to install a package, you wouldn't need to learn anything; you'd just install it (for example, to install ranger you just run nix-env -iA nixpkgs.ranger). The Nix language is only needed if you want to create a new package.
> For the fifth trick, freezing dependencies, you first have to have all the correct versions of all the dependencies to do what you want. How do you establish the right ones? Manually. Just like with any other package manager that builds apps against a specific tree of build deps. "Reproducibility!" Because no other package manager could possibly download the same tarballs and run the same commands. Must be all that functional programming magic.
> And sixth, the idea that this will work forever. Right. Because any other distro and package manager could not possibly work again in the future, with a mirror of all the packages in a release. That's only for super cool futuristic package managers. No way to reproduce the same packages in the future with old package managers.
> Look, Nixians, this is all old hat. Every decent package manager can do all these things, just in a less complicated way. Modern systems use containers, and they don't need Nix for that. Nix is basically just the new Gentoo: a distro for hobbyists who tell themselves they're advanced while all the professionals use something else.
Nix delivers what Docker promised and ultimately failed to deliver. Docker promised to reproduce the developer's environment in production. What it did was zip up the developer's computer and deploy it. When Docker got adopted, relying on images was impractical, so Dockerfiles were used for deployment, but a Dockerfile is not much different from a shell script.
Nix instead describes the entire dependency tree down to libc. Because the starting state and all dependencies are known it can always create the same result, that's the biggest selling point of Nix to me.
> Because the starting state and all dependencies are known it can always create the same result, that's the biggest selling point of Nix to me.
And you can do that with the package manager of any Linux distro today. It sounds like Nix's biggest selling point is it does something we can already do.
You can pretty much say the same thing about any new technology. Flushing toilets? But I can already empty my chamber pot far away from my house! The fact is, even if current package managers can do this, people (including professionals) don't do it. The way forward is to develop new tooling that works with human nature, not against it.
> My ideal dependency manager would allow me to specify each and every dependency that is required to work on my projects. It should be easily reproducible, declarative and easy to upgrade.
To me it feels like what's left out here is the fact that you need the amount of dependencies that you have to be reasonable in the first place, otherwise no single piece of software is going to help you all that much.
For example, a new project created with "create-react-app" takes 181 MB of space on disk and has 35,894 files in it. That includes about 1,467 modules, all for a relatively simple web application. It doesn't matter if you're using package.json/package-lock.json or any other tool or solution out there - with that amount of dependencies you're simply not doing dependency "management" of any kind.
I'd argue that if you have >100 dependencies in your project, it's probably too big, unless you have a team that's dedicated to managing and auditing all of them. Of course, no one actually audits their dependencies when faced with such large numbers, and so what's left for most developers is to just trust what's out there.
You're not wrong, but that's a different problem. Nix makes sure that you get a predictable and reproducible tree of dependencies, and allows different applications to depend on different versions of the same dependencies. That is, it's a solution to DLL hell. It's solid engineering based on solid theory, and it really does let you manage configuration with a level of reliability that most other package managers only pretend to have.
Now auditing dependencies, knowing that the packages you depend on aren't malicious and have no known vulnerabilities... well, that's a whole separate problem. And yeah, we don't really have a solution to that right now. The best we can do, as you say, is keep the attack surface small.
But if we did try to solve the auditing problem, the solution would have to sit on top of nix or something like it. If you can't precisely specify a dependency graph and reliably install from that specification, it doesn't matter how good your auditing is or what sort of system of trust you can create. You don't know what you're getting anyway.
+1. A good dependency manager should 1) pull all necessary packages for you and 2) build them if required.
When you download a package or add a dependency, it's your job to make sure you trust the source - that's not a dependency management problem.
A good dependency manager should help you better filter trusted sources, or at least have a build file layout that makes it obvious what the sources are, but that's it.
I don't have an issue with the size of a project. On the other hand, I worry about a large number of dependencies and the need to understand the security and compatibility models of their respective authors.
I'm fine to depend on, say, JVM or Qt - large projects. Not with thousands of small packages developed by thousands of independent developers.
Nix, guix and traditional package managers that support multiple versions (like dnf) are better solutions than just containerizing the whole thing.
But here we are: some projects only give development instructions based on containers, others support only container-based deployment. You have to go out of your way to convert Dockerfiles back into regular install/setup instructions.
Containers are great and all; they solve many problems, but they're not the only solution and they're definitely not the best solution to every problem.
> Containers are great and all; they solve many problems, but they're not the only solution and they're definitely not the best solution to every problem.
There are a large number of developers around now who've never done traditional (RPM, DEB, SysV) packaging and don't understand it. So while the large distros like Red Hat and Debian push on with it, the development community only sees Dockerfiles, Snaps, Flatpaks, etc.
> There are a large number of developers around now who've never done traditional (RPM, DEB, SysV) packaging and don't understand it.
I'm one of these developers (at least as far as my day job is concerned) and the only methods of delivery in a professional context that I've seen have been:
- delivering .jar or .war application files (Java), mostly through FTP or even e-mail
- delivering other files, like documentation etc., mostly through FTP or even e-mail
- using containers and pushing images to registries (mostly due to my initiative)
This is across ~5 years of software development, looking at projects from multiple companies in my country - not once have I seen a DEB or RPM package repository used. To me, it seems like previously people opted for the simplest solution in their eyes, which was just shuffling a bunch of files around through FTP - I guess since that doesn't lock you down to a particular distro or even OS.
However, streamlining that process on our end and eventually migrating over to containers, to where both the CI and CD are automated (or at least based on a manually invoked pipeline, when we're sure about making a release) seemed to be a no-brainer. Suddenly delivering software isn't something that takes an hour - even if needless cruft (making Jira issues manually etc.) is still there, any automation and getting rid of the human factor makes things safer and easier.
That said, it surprises me that previously no one bothered to set up their own DEB or RPM repositories, since then the clients could use the package management solutions that they're already familiar with. Any idea what are some of the reasons for it? Why doesn't everyone just make it so that their software can be installed and updated through apt/yum/...?
> That said, it surprises me that previously no one bothered to set up their own DEB or RPM repositories, since then the clients could use the package management solutions that they're already familiar with. Any idea what are some of the reasons for it? Why doesn't everyone just make it so that their software can be installed and updated through apt/yum/...?
A reflexive, nearly irrational fear of ever, ever installing software outside of the official repositories. Search around for "Frankendebian" and you'll find a lot of people who simply refuse to discuss, let alone support, any individual who has the temerity to actually want to use their computer to run software instead of to simply maintain a pristine operating system.
In other words, the culture treats third party repositories as a pejorative.
Somebody replied to a post of mine in a thread like this years ago stating that he was converting every Ruby gem in his dependencies into a .deb and installing them with apt. That was the right way to do it, and everybody else was doing it wrong. I don't remember if I replied, but the sheer amount of work involved is astounding.
I did this once. It was a terrible idea. Nix solves the same problem of wanting better control of system-level dependencies but without all the manual packaging.
Yes, packaging has always been a distro issue never an upstream issue. At the extreme end of this you can look at Arch's saga with Visual Studio Code, since the Arch packagers want to disentangle the .asar app from the specific Electron version it ships inside of (they view Electron as a system library).
Chromium (and thus Electron, although they deliberately lag behind Chromium) has very frequent security updates. Clearly not all of those affect Electron, and not all of those affect every Electron app, but some of them will affect Electron and some will affect some Electron apps. And while some Electron apps will follow the (deliberately lagging) Electron security update cycle, most developers probably do not update their app for every Electron security update.
So it makes sense for distros like Arch to want to ship the security-supported Electron version and have every Electron app use that - but maybe the changes in Electron break apps that use it, so maybe it makes sense to ship one Electron version per Electron app. But then you get an unknown number of security issues with each app, depending on how out of date the app's Electron version is.
So the choice is between a working app with an unknown number of security issues, or a regularly broken app with fewer security issues. Neither seems like a great option to me. Perhaps Electron needs an LTS version to solve this.
One thing I like with Nix is that it can build containers, so it's totally compatible with deployment systems that require containers. I don't mean using Nix commands in a Dockerfile, I mean having a Nix derivation that produces a container.
I've been writing normal Dockerfiles at work recently and the whole toolchain is annoying on multiple fronts; I'd love to jump to Nix instead, except it's really hard to convince other people to learn non-trivial technology that's off the beaten path :(.
Using nix for building docker images [1] is brilliant, especially when combined with flakes.
You don't need messy shell scripts to set up the environment and can get 100% reproducibility.
You just build a regular Nix derivation (aka package) with the software you need and then just specify that in your image.
It does take some work to get the image size down though.
I'll also mention Nixery [2], which is a Docker registry that automatically builds images based on Nix.
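For the unfamiliar, a minimal sketch of what such a derivation might look like (the image name and contents here are purely illustrative; this assumes the `dockerTools` library from nixpkgs):

```nix
# default.nix -- sketch: build a Docker image as a Nix derivation
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-image";
  tag = "latest";
  # the closure of pkgs.hello (its exact runtime deps) ends up in the image
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Building this produces a tarball that can be loaded with `docker load`, with no Dockerfile involved.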
nix is really nice (I find it much better than Homebrew across macOS and Linux) and it does a good job at a reasonable cost for some of my use cases (managing standalone binaries on target hosts, along the lines of binenv, arkade, etc.). However, nix has its own problems, which I may not be qualified to comment on (not a savvy enough user):
- a bit of a learning curve (NixOS vs the nix package manager)
- memory hogging while running a system-wide update
- disk space consumption is a mystery (GC & optimisation don't do a good job of freeing up space)
The learning curve is the biggest issue. Nix is different, and that makes it hard to learn unless you're already into functional programming. I like it too, but damn, it's so hard to get someone up to speed, even if they're sold on the benefits and want to learn.
"Supporting multiple versions" is a misnomer. You can't use multiple versions of a dependency on a single system without completely separating both the build-time and run-time environments, which is what containers do. If Nix did that implicitly it would mean that the applications would all have to be patched - not only to load specific library versions, but to call exec() on specific versions of binaries of completely unrelated apps. Otherwise two apps that call each other but use incompatible dependencies will never know that they're incompatible, creating an unresolvable conflict.
I doubt anyone behind Nix is writing patches for every single application that exists, and every version of every application, in order to achieve this. The only other way to do it is by re-building all of the deps and installing them in an environment specific to an application (what containers do), and then having an interface to be able to choose what versions of what environments to combine with which applications in what ways. You can't even create a Bash one-liner using two different apps/environments and ensure it'll work, unless you explicitly define what versions of the shell commands you're going to use together. It's impossible to automate. Hence "regular" distros with specific package trees, and containers to provide the interface to mix application/environment pairs.
There is no better solution, because all software development itself is inherently flawed. It's all 1970s programming: a single system with a single set of dependencies per application (both build-time & run-time). There is no solution to that problem other than to reinvent how software works so that versions are explicit in all code execution on a system. An application would have to request code execution with a specific version, and the system would have to figure out how to provide that version of that code at run time.
They don't all have to be patched, they just have to be rebuilt in a carefully-controlled environment.
Nix installs everything that it manages (libraries, applications, even managed conf files) under a path that is derived from its exact build description (it's a hash plus some extra stuff to help humans identify what the component is by inspection). This allows, e.g., two different versions of Firefox or two different versions of zlib to exist on the system at the same time. Because the exact build description for the application refers to the exact build description for its dependences, there's a certain Merkle Tree like sense to it where if you were to e.g. rebuild Firefox against a different version of zlib, your resulting Firefox build would have a new build description hash and thus a new path.
An example of what an executable path might look like is: /nix/store/v5sv61sszx301i0x6xysaqzla09nksnd-hello-2.10/bin/hello
Stuff like autotools can actually generally support all of this without patching. In the case that the user wants to run a binary built for a non-nix system outside of a container on a nix system, binary patching does become necessary, in the form of appending to the RPATH of the executable. Nix does provide tools to eliminate as much pain as possible from this process.
It is worth noting that you can still only have one version of a program on your PATH with the canonical name at once.
As far as programs invoking each other with exec: that establishes no requirement at all that the invoked program be using the same versions of libraries as the invoking program. Programs packaged with nix sometimes "purely" refer to each other by their full paths. Other times, they discover each other "impurely" by just searching the PATH.
> You can't use multiple versions of a dependency on a single system without completely separating both the build-time and run-time environments
Sure you can. It's what version numbers in library file names are for. Works for executable names, too. Until Python 2.7 died, you commonly had 2.7 and 3.x installed at the same time.
First, install Python 2 and Python 3. Next, install a script whose shebang is "#!/usr/bin/python". How do you make the script work, without modifying it, with the specific version of Python it was designed against? You can't. You have to either modify the script to point at the specific executable it was designed for, or modify the environment to point "/usr/bin/python" at either python2 or python3.
The former, "modifying the script", is recompiling all applications to point all of the executables they run at a custom executable, like "/run/app/v2/bin/app".
The latter, "changing the environment", requires that you specify the environment before you run the script (". /etc/python2.env ; /some/script.py"). But then you have to know that, right now, you want to run the script against Python 2. What if you install an upgraded version of the script that's built against Python 3? How do you know which script to run when? You have to have an interface and pick which one to use each time you run it. And every application that runs that script.
You can, of course, install two versions of your script, and rename the scripts, so that you know script2.py runs against python2 and script3.py runs against python3. But what if you have another program that needs to call the script? How does it know which of these 2 custom names to run? Now you'll have to compile a new custom package for this new program, and rename it too, and hard-code that one to call the specific new custom script name that targets the specific Python version.
And on, and on, and on, you will need to have duplicate packages for every possible dependency graph that could possibly exist of different versions of different programs. You would end up with 16,000 packages of this python script, each with a custom name and custom paths. Because some combination of dependent packages could lead to a recursive bug where the wrong app eventually calls the wrong script.
And it also requires patching literally every single application that is ever packaged by Nix to be able to rename applications and libraries. Every C program that ever calls "wc" needs to be patched to run "wc" version 2.0.0, "wc" version 2.0.1, "wc" version 3.0.0, etc, for every version of wc that may exist, and for every version of wc that may call a different program which also has multiple versions.
This is why the problem is unsolvable with any tooling whatsoever. There is no way to automatically determine the correct dependencies without modifying and packaging every application for every dependency tree that could possibly exist. The only solution is what containers give you: an interface that forces you to pick a version of an application with one dependency tree, and lets you figure out how to combine them (how to run containers together).
The only way to solve it without either 16,000 variants of one package, or an interface to pick environments at exec time, is to fundamentally change how software is designed and run today.
The cases that you're raising are all ones that the nix ecosystem has had to find solutions for, but they have generally done so.
The first thing to note is that it's generally discouraged to use full paths like the one you gave for python in shebangs in loose scripts. Instead, it is encouraged to write e.g. /usr/bin/env python. In this case, the python from the current environment will be picked up "impurely" when the script is run.
That said, it's pretty heavily encouraged to package anything that will be used seriously, since this is very easy with nix and gives access to better tools.
For example, there is built in automation for recognizing shell shebangs and rewriting them, based on the dependencies expressed in the build description.
As far as your case of programs shelling out to other programs: that can either happen "impurely" (which would work like that you describe, including the potential drawbacks) or "purely" (where a specific version is captured). In order to attain "purity", nix has automation for generating wrapper scripts which manipulate the PATH before delegating to some underlying executable.
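As a concrete illustration of the shebang rewriting: the store path below is a made-up placeholder, and this `sed` call is only the gist of what nixpkgs' fixup phase does, not the actual implementation:

```shell
# A loose script with an "impure" env shebang:
printf '#!/usr/bin/env python\nprint("hi")\n' > script.py

# During a nix build, the fixup phase rewrites the shebang to point at
# the exact interpreter from the declared build inputs (placeholder path):
sed -i '1s|^#!/usr/bin/env python$|#!/nix/store/abc123-python3-3.9.6/bin/python|' script.py

head -1 script.py
```

After this, the script runs against one pinned interpreter regardless of what is on the user's PATH.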
It's a better solution for dependency management specifically. I need to go to bed so I can't go in too much depth, but here's a high-level summary:
- Nix packages can compose in arbitrary ways, containers can't
- Nix can get much more consistent and granular caching
- once an image is built it's reproducible, but the Dockerfile to run it probably isn't—unless it was written very, very carefully to pin everything—so what happens if you need to update one or two packages in an image?
- not 100% sure here, but it seems that lightweight sandboxing like Nix shells or virtualenvs are a lot more convenient for local development and tooling integration
That said, containers might make more sense for, say, deploying and sandboxing services—that's just a different problem from reproducibly managing dependencies and development environments. Nix can build container images, which lets you get the best of both worlds.
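On the lightweight-sandboxing point, a per-project environment is typically just a small `shell.nix` file (a sketch; the package names are illustrative):

```nix
# shell.nix -- per-project dev environment, entered with `nix-shell`
{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  # the tools available inside the shell, nothing installed system-wide
  buildInputs = [ pkgs.nodejs pkgs.ruby ];
}
```

Running `nix-shell` in the project directory drops you into a shell with exactly these tools, without touching the rest of the system.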
Most package managers running in containers (whether apt, apk, or something else) aren't built for reproducibility, so doing e.g. `apt update && apt install neovim` can change the versions of libraries inside the container. Furthermore, people don't (in general) write Dockerfiles for reproducibility -- it's very rare to see one where the `apt install` command (or equivalent) has exact version numbers specified for all dependencies, the FROM command uses a hash, etc.
$ VER=$(dpkg -s alsa-oss | grep -m1 '^Version: ' | sed -e 's/^Version: //g')
$ cat > Dockerfile <<EOF
FROM ubuntu:21.04
RUN apt-get update && apt-get install -y alsa-oss=$VER
EOF
$ cat Dockerfile
FROM ubuntu:21.04
RUN apt-get update && apt-get install -y alsa-oss=1.1.8-1
$ docker build -t testimg .
$ docker run --rm -it testimg dpkg -s alsa-oss | grep ^Version:
Version: 1.1.8-1
That's a complicated example. All you need to do is get a working system, specify the base image tag, run dpkg -l and write down all the versions, then pass them to apt-get install. It's super easy. It's much more complicated to do something like download a static Go app from a specific URL, import a GPG key, validate the key, verify the checksum, etc.
This would freeze all the packages on the system into a file for use in a Docker container:
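Presumably something along these lines (a sketch; the fallback branch exists only so the snippet runs on non-Debian hosts, and the Dockerfile lines in the comments are illustrative):

```shell
# Freeze every installed package version into packages.txt so a
# Dockerfile can pin them exactly (Debian/Ubuntu hosts).
if command -v dpkg-query >/dev/null 2>&1; then
  dpkg-query -W -f='${Package}=${Version}\n' > packages.txt
else
  # illustrative fallback so the sketch runs anywhere
  printf 'alsa-oss=1.1.8-1\n' > packages.txt
fi

# The Dockerfile would then install the pinned list, e.g.:
#   COPY packages.txt /tmp/packages.txt
#   RUN apt-get update && xargs -a /tmp/packages.txt apt-get install -y
```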
If you just want to find out the dependencies for a package, use something like apt-cache rdepends alsa-oss and then get the versions and pin them. It's pretty trivial to just read the man pages for dpkg or apt and get what you want done.
You commit packages.txt and Dockerfile to a Git repo, and you push your built images to an artifact repository. Test it, validate it, ship it to production. It's all immutable so it doesn't need to be reproduced, you just roll back to the last artifact.
The use of a non-static tag is intentional to pick up security and bug fixes. It's like using a "stable" branch, where you expect to get any emergency fixes to the stable branch. Only in this case it's a release-specific stable branch. If you ran a system with some compliance mandate that not even security fixes could be automatically applied, then you'd pin to the hash.
I may be wrong, but I believe one caveat here is apt may not always have some version of packages available. I believe only the latest one in each `deb-src` is available.
This is not an issue with nix as nix has the ability to build the entire system from source forever in a single command.
Your software is first a separate application running on the OS.
But gradually, as its usage increases and it's extended by more and larger applications, your app will turn into part of the OS. Containerizing your app doesn't help in this case.
Imagine how many services from the OS you are using. Next imagine each one running in a different container and you have to build your app using 100s such containers.
The point behind the XKCD is that competing standards cause interoperability problems. I would argue, as does the original post, that Nix solves interoperability problems rather than creating them—it is a tool that helps you get stuff done, not a standard that purports to tell other people how they need to do things so you can get stuff done. But I’m not sure whom I’d be arguing this with, since a link to an XKCD does not constitute an argument to begin with.
Arrived expecting to vehemently disagree, given how much I rely on version managers for everything (Ruby, Node, etc.) in my work. Came away thinking, yeah, okay, Nix probably is a better solution to all of this and more.
Managing dependencies is a big problem, and I feel like we've given up on solving it directly, and instead built workarounds.
If we question our assumptions, the first question is: Why do we need multiple Ruby versions at all? Why isn't the latest version of Ruby sufficient? Well, obviously, Ruby's behavior has changed over time. But why isn't it backward-compatible? Why can't I just run Ruby 3.0 with a flag that tells it to emulate Ruby 2.6? Or 1.8 for that matter?
Okay another one. "Nokogiri" is the Ruby gem (library) for parsing XML (including HTML). You always need to remember to install libxml when you install it. Why? Why doesn't Nokogiri include an XML parser? Because it would be slow? You can include a native binary in the gem which does the hard computations. Will that work on random new architectures like M1? No? But you can fall back to a Ruby implementation and show a warning message. Also, what if I just need some quick XML parsing and don't care if the parsing is 1000x slower? Can I just get Nokogiri with the Ruby-implemented parser? No? Why not?
Then every single Ruby gem out there starts using newer Ruby features and thus, you must update your Ruby version. Why can't library authors gracefully handle older versions of Ruby?
This just adds burden to programming language and library authors. Sure I could almost certainly support Python 2.7 and Python < 3.6 in all my libraries, but it introduces more code and shims I've got to maintain. Given I'm not being paid for this work and do it for fun, there's no compelling reason to support the small subset of users that cannot or will not update.
I imagine variations of the above apply to nokogiri and some amount of Ruby core developers.
I basically stopped writing Python because of the 2 -> 3 transition: the cost of waiting for dependencies to be ready for Python 3 was high enough that it made sense just to switch languages for everything.
Python was saved, basically, by it becoming _the_ language for data science in the same time frame. But, this could have very well turned into a Perl 6-style fiasco.
> These dependencies are met by default by Nokogiri's packaged versions of the libxml2 and libxslt source code, but a configuration option --use-system-libraries is provided to allow specification of alternative library locations.
Some authors work hard to have their tools do the right thing and consistently.
> Why can't I just run Ruby 3.0 with a flag that tells it to emulate Ruby 2.6? Or 1.8 for that matter?
I suspect it would be a nightmare - it's not just small parts of the code, it would be other parts of the code that depend on it.
Take for instance Python 2 and Python 3's approach to mixed-type comparisons like ("hello" > 1) - in Python 2 that evaluates to True (values of different types were ordered by type name), while in Python 3 it raises TypeError. So if you are running Python 3 with a Python 2.7 flag, how should Python's internal modules that are written in Python 3 handle this? Do all of Python's internal modules need to work on every legacy version of Python and load the correct version, or does the interpreter need to somehow load up instructions for both and parse them together? Then there is the whole problem of libraries - are these allowed to selectively choose which version of Python they run on?
This is of course only one small example of the complexity - If it was easy to make these things backwards compatible it would be great, but I would expect the cost for implementing full backwards compatibility to be high and the language authors are probably motivated to make people move towards the latest versions.
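For reference, the actual behavioral break is in mixed-type ordering comparisons (comparing two strings like "hello" > "goodbye" works the same on both Python 2 and 3, since both operands are strings):

```python
# Python 3: ordering comparisons across unrelated types raise TypeError.
try:
    result = "hello" > 1
except TypeError:
    result = "TypeError"

print(result)  # -> TypeError
# Under Python 2, the same expression quietly evaluated to True,
# because values of different types were ordered by type name.
```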
> Take for instance Python 2 and Python 3's approach to mixed-type comparisons like ("hello" > 1) - in Python 2 that evaluates to True (values of different types were ordered by type name), while in Python 3 it raises TypeError. So if you are running Python 3 with a Python 2.7 flag, how should Python's internal modules that are written in Python 3 handle this?
If the software has been changed so much you cannot be backwards compatible anymore, name it something else and avoid the conflict and confusion.
Even if you can somehow convince the major tooling to take that route, you're never going to convince every piece of software to follow suit. It's a problem that must be tackled; there's no side-stepping it.
> You can include a native binary in the gem which does the hard computations. Will that work on random new architectures like M1? No? But you can fall back to a Ruby implementation and show a warning message.
Promptly ignored by most people, all while doubling maintenance efforts.
The obvious answer is that every customer runs a different Ruby, and not every customer pays to upgrade at the same time. I currently have customers and personal projects on every Ruby from 2.4 to 3.0. I'm not upgrading that Ruby 2.4 project for free, so it's going to stay like that for a long time.
Yeah, a ruby -2.4 switch would be nice, but then every gem would have to pick its dependencies from back in the 2.4 era. Some could be dead by now. Actually, those old projects usually get updated when the OS reaches end of life and they can't install the software on the new OS.
So the reasons are economic and human, not technical.
It is, mostly. 1.8->1.9 was famously a big change, but breaking changes since have been comparatively minor.
IMX the most common cause for gem breakage is simply rot - old gem, maintainer lost interest, a tiny change required (often due to an open-ended dependency on another gem) but there's no one to do it. Unfortunately, adopting an abandoned gem is non-trivial if the original maintainer doesn't respond, and a lot of gems were written in the Rails goldrush but have fallen into abeyance since.
> Why can't I just run Ruby 3.0 with a flag that tells it to emulate Ruby 2.6? Or 1.8 for that matter?
My understanding is that Perl 5 supports this, for any version of Perl released in the 1994-2021 range, but in large part because it hasn't had drastic changes in that time period.
Looks interesting enough... But how does one solve the issue of security updates for some dependency or language? Or when a particular version of some application reaches EOL and is no longer maintained, or there's some functionality in a newer version of Node.js|Ruby|etc that's needed?
From what I understand this would require an update to the Nix version that supports it... but that also potentially means bumping other environmental versions as well, which might not be desired.
But I suppose this would amount to the user arranging the structure of their filesystem correctly so it's one "system" per dir/folder...
Or is there a better way to cater to this? And I suppose this still means that the node modules, gems, etc that are being used then anyway also need to be updated after this accordingly.
From my limited understanding of Nix, it seems interesting, and the article was actually useful to me. But I can't seem to shake the feeling that this is another packaging abstraction like others before it, and while it seems like a better variant, it's not much different from having X, Y and Z listed as requirements and then letting the dev go off and install those dependencies on their system in the way they best know how. Juniors or those new to a specific environment might not know the ecosystem well enough to know to use rbenv or nvm or whatever, but I'm not sure how Nix solves this issue differently than one of the specific tools it's replacing.
There's clearly more to Nix than just setting up language environments, which I'm guessing is where its usefulness really kicks in. But purely for language environment setup, I'm not sure I see a point over other tooling...
> But how does one solve the issue of security updates for some dependency or language? Or when a particular version of some application reaches EOL and is no longer maintained, or there's some functionality in a newer version of Node.js|Ruby|etc that's needed?
Nix supports building packages from multiple package sources. e.g. maybe you want an old version of some package which is only available in an older release of nixpkgs. It would be possible to use the older nixpkgs release to install that old package, and a newer nixpkgs release for the others. -- You even don't have to use the main nixpkgs repository.
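A sketch of what mixing two nixpkgs releases looks like in practice (the URL is a placeholder you'd fill in with a real pinned revision, and the package choices are illustrative):

```nix
# shell.nix -- mix packages from two pinned nixpkgs snapshots
let
  pkgs = import <nixpkgs> { };
  # an older pinned snapshot of nixpkgs (placeholder URL/revision)
  oldPkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
  }) { };
in
pkgs.mkShell {
  # current nodejs alongside a ruby from the older snapshot
  buildInputs = [ pkgs.nodejs oldPkgs.ruby ];
}
```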
> But I suppose this would amount to the user arranging the structure of their filesystem correctly so it's one "system" per dir/folder.
Nix handles its filesystem arrangement for all of the "multiple different versions of some package" already. That's part of Nix's value proposition.
Also, being able to have packages available in your shell without 'polluting' the rest of the system is a neat dynamic, particularly with direnv.
> There's clearly more to Nix than just setting up language environments, which I'm guessing is where its usefulness really kicks in.
Depends how often you (want to) jump into fresh environments, but e.g. Nix makes it quite easy to have a consistent set of programs installed.
There are multiple ways of doing it. The obvious one (updating nixpkgs) you already mentioned.
The second way is to override[1]. The documentation shows how to change compilation parameters, but you can also use this to change the version of dependencies or the source tarball for a package. As you use Nix you will eventually need to do this - sometimes a package hasn't been updated, or you need an older version, or you want to enable a compilation option.
The third way is to use an overlay[2]. With override, an existing package is modified; an overlay allows you to completely replace a package or add a new one.
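An overlay is just a function over the package set; a hedged sketch (the package name, URL, and hash are all placeholders for illustration):

```nix
# overlay.nix -- replace or add packages on top of nixpkgs
self: super: {
  # pin a hypothetical package to a different source tarball
  somePkg = super.somePkg.overrideAttrs (old: {
    version = "1.2.3";
    src = super.fetchurl {
      url = "https://example.org/somePkg-1.2.3.tar.gz";
      sha256 = "<sha256 placeholder>";
    };
  });
}
```

Anything that depends on `somePkg` through the overlaid package set picks up the replacement automatically.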
For example, there is a tool called poetry2nix[3], which on the fly translates a Python poetry lock file to Nix so nix can build the packages. Nixpkgs includes it and it is generally kept up to date, but maybe there was a fix yesterday that hasn't made it there yet and it fixes an important bug. You can fetch that repo independently and attach it to nixpkgs (or use it directly).
Nix also has an upcoming feature, flakes, which to my understanding takes this to a new level, so you can easily compose multiple repos like this in your application.
> There's clearly more to Nix than just setting up language environments, which I'm guessing is where its usefulness really kicks in. But purely for language environment setup, I'm not sure I see a point over other tooling...
I use it this way, and the killer feature for me is that for any project, all I need to have installed is Nix and I get the exact environment the dev used.
It's not mentioned often, but I think a good demo of it is the repo for the Nix program itself[4]. Typically when you want to compile some open source program, after you check out the repo a hunt starts for the build tools and libraries needed. With nix you just issue the build command or enter the build shell[5] and things just work with no errors (or at least I did not get any when trying it a while ago - everything worked on the first try).
4th way is to just package it yourself and use pkgs.callPackage to add it. Overlays are nice for the whole system or if you're still using nix-env, but I find it more clear to use callPackage directly when I'm explicitly building an environment.
> Nix is a tool that takes a unique approach to package management and system configuration.
Nix is the basis of an OS distro: NixOS.
This article is just re-articulating the idea that {the/an} OS distro should be managing this stuff, and not some fledgling language-specific programs (that go behind its back, and do a half-baked job by not controlling the packages external to their language ghetto).
I think NixOS is great, but you don’t need to switch to NixOS to use Nix. You can also install Nix on other Linux distributions or even on macOS, and use Nix to manage individual tools and projects without using it to manage your entire system.
The article is also pointing out that Nix can solve for multiple versions, shrink-wrapped environments, cross-platform use, etc., in a way that most OS distros can't. Nix is more than NixOS, of course.
For me, the version manager that provides the nicest experience cross-platform is Cargo from Rust.
It can specify and pin specific versions. It is super easy to hand to a fellow developer and have them reproduce the build. In addition, because Rust is suitable for low-level, high-performance work, many times all the libraries you actually need are written in Rust and everything works very seamlessly (for example, there is no need for libxml like the Ruby XML library needed, because you can probably write an XML parser in Rust that is on par with or better than libxml performance-wise).
Nix is very unixy so it probably could be made to work on WSL, but not native windows.
Yes, if all of your dependencies are written in a single language, you can use the language-specific version manager. TFA was pointing out a solution that works across different languages.
Conda manages this pretty well, and the community-run "conda-forge" channel works. It's language, platform, and architecture agnostic. It only supports distributing binaries, which can be a benefit or a downside compared to Nix's "build the world with caching" approach, depending on your needs. Personally I find conda's approach more pragmatic for working with unprivileged systems and for how packages expect to be used. Though I do hope the Nix-store-like model continues to grow in popularity, as it's much better than the classic POSIX install layout.
In conda, packages are installed into “environments” which are just folders like Python’s venv except language agnostic so you can install specific OpenSSL/Clang/GCC/libWhatever versions and have one environment per project you work on. It’s also the only package manager I’m aware of that can provide a good experience across Windows/Linux/macOS for x86_64/arm64/ppc64le.
One issue with conda is that the ecosystem of tooling is a little fragmented. The classic “conda” package has performance issues but there is a second implementation “mamba” which fixes this at the cost of minor changes in behaviour and attempts to merge mamba into conda seem to have stalled.
If you want to try it out, I'd recommend using the "Mambaforge" distribution and using "mamba" for everything except "conda activate" (which is actually a shell function).
The more I see posts praising nix, the more I am confused by the decisions made.
In this post:
--- start quote ---
How about a different version of Ruby or Node? Let’s say that our project depends on Ruby 2.6 and Node 10. We can go and search for those specific versions
--- end quote ---
So, to begin with, we still need a "version manager", because we want specific package versions. But look at how this is implemented in nix:
let
  pkgs = import <nixpkgs> { };
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
    pkgs.ruby_2_6
    pkgs.nodejs-10_x
  ];
}
Why? Because unlike every sane package/dependency manager where you specify a package and a version, nix pretends each version is a separate package.
And these packages aren't even correct. If you do go and search for ruby, for example [1], you get the following:
This... This is laughable. How do I install ruby 2.6.8? Oh, there's no ruby_2_6_8, because of course there isn't. And this could be the difference between a secure system and all your base are belong to us.
And they call this reproducible builds?
And that's before getting into the ridiculous
--- start quote ---
All the software that we installed depends on the specific version of the nixpkgs channel that we installed on our system [whose only version is a commit hash in a git repo]
--- end quote ---
So you need an extra tool [2] for, quote, "painless dependencies for Nix projects."
Yes, sure. I'm definitely ditching my version managers in favor of this tool, that hasn't solved these issues in 18 years of its existence.
Nix is not easy nor simple. It's complicated but not complex.
re: ruby version
In Ubuntu I can only install ruby2.7, and I don't get to choose which minor version. [1] I would need to use rvm anyway.
It's the same with nix. What is possible is to pin the version you need by specifying the commit. [2] shows the diff of the commit that moved ruby_2_7 from 2.7.3 to 2.7.4.
Say, for example, ruby 2.7.4 has a regression and the project needs to stay on 2.7.3.
Then the revision hash for 2.7.3 is used.
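Concretely, pinning the whole package set to that revision looks roughly like this. The rev and sha256 are placeholders for the actual nixpkgs commit that still carried 2.7.3:

```nix
# shell.nix -- pin nixpkgs to a specific commit instead of a channel.
let
  pkgs = import (builtins.fetchTarball {
    # <rev>/<sha256>: placeholders for the commit that shipped ruby 2.7.3
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";
  }) { };
in
pkgs.mkShell {
  buildInputs = [ pkgs.ruby_2_7 ];
}
```

Everyone who evaluates this gets bit-for-bit the same ruby_2_7, regardless of what their channel points at.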
[1] > apt search ruby |egrep "^ruby2|^ruby3"
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
ruby2.7/hirsute-updates,hirsute-security 2.7.2-4ubuntu1.2 amd64
ruby2.7-dev/hirsute-updates,hirsute-security 2.7.2-4ubuntu1.2 amd64
ruby2.7-doc/hirsute-updates,hirsute-updates,hirsute-security,hirsute-security 2.7.2-4ubuntu1.2 all
I hoped you would tell me how to pin a specific version in nix, and you didn't. So far, I've asked this question at least three times. This question is either avoided or ignored. Ah yes. There's one "solution" with overlays which is neither maintainable nor scalable.
> ruby version In ubuntu
Is nix ubuntu?
> What is possible is to pin the version you need by specifying the commit.
Commit hashes are not versions
> I would need to use rvm anyway. It's the same with nix
We keep hearing the claim that nix is about builds. Builds often need their dependencies pinned to specific versions. If I still need to use rvm with nix, what's the point?
This is especially funny in the context: we're in the comment section to a post claiming that we should ditch our "version managers" while providing an alternative that, so far, looks to be significantly worse at what it claims to do.
As an alternative, I use asdf[0], which is written in bash. It allows you to manage versions of any package through plugins. Plugins can be written in any language (as long as they can be made executable with chmod +x). Most plugins are written in bash.
Though there is a wealth of plugins, what's nice is that it's trivial to write your plugin. So if something doesn't exist yet, or you have some internal tool you need to version, it takes less than an hour to write a plugin.
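As a rough sketch of how small a plugin can be: the script name below follows asdf's real convention, but the version list is made up:

```shell
#!/usr/bin/env bash
# bin/list-all -- asdf calls this to enumerate installable versions;
# it expects a space-separated list on stdout.
echo "1.0.0 1.1.0 2.0.0"
```

The companion bin/install script receives the chosen version and target directory via environment variables (ASDF_INSTALL_VERSION and ASDF_INSTALL_PATH) and just has to put the files there.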
I had the feeling this would be an ad for Nix after reading the first few paragraphs.
Is there really a difference from the Docker example given (“your Ubuntu version is EOL”) though? Any of your language or tools can be EOL’ed even if installed via nix, and you have the exact same problem in your hands.
When that wasn't doable, I could google the name of the .tgz file, download a couple and find the one with the matching hash, drop the .tgz inside the directory and change the source to a relative path. Actually I do this even in the "release -> archive" case because "Move a URL once, shame on you. Move it twice, shame on me."
[edit]
If you really don't want to change the .nix expression, you can manually add the file to the cache, and things will Just Work.
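For reference, the manual route is a one-liner (the filename here is hypothetical; the hash algorithm must match what the expression declares):

```
# Pre-seed the Nix store with a tarball you fetched by hand;
# Nix then finds it by hash instead of hitting the dead URL.
nix-store --add-fixed sha256 ./project-1.2.3.tgz
```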
Everything to install with Nix must be packaged "the Nix way", I assume? How is the breadth and quality of Nix packages? Just for fun I checked the kiwi image builder, from the openSUSE project, which I recently had some versioning problems with, and it was not there.
Nixpkgs is quite good. It has the package I'm looking for most of the time.
According to Repology[1], it's got more packaged projects than any other package repository, and more packages that are up to date than any other repository. It's also pretty on top of CVEs, with only 0.38% of packages having potential vulnerabilities.
The problem I have with Nix (and Guix) is that you're shit out of luck if you need it to work together with some network based package manager or existing project that someone didn't add to it yet.
It gets really complicated really fast, and things that "just work" with other package managers and projects, because they are opinionated and Linux-y, most of the time don't work well with Nix or have to be shoehorned into it.
It always feels like you're fighting against everything to get things into Nix that just weren't meant to be.
> you need it to work together with some network based package manager
Well, you can disable sandboxing if you want, although it's not recommended. This lets you run a package manager that requires networking in a derivation.
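For the record, that's a one-line setting:

```
# /etc/nix/nix.conf -- disable build sandboxing globally (not recommended)
sandbox = false
```

You can also pass it per build with `--option sandbox false` instead of changing the global config.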
The fact you need a version manager on top of your new, pristine and reproducible version manager is a bit concerning. Can’t whatever shortcomings exist be addressed in nix itself?
It's not on top of Nix itself but (about to be) a part of it. According to the NixOS Wiki[1], flakes is an upcoming feature of the base package manager; it's not part of the stable channel yet, and "proper" documentation currently exists only in the unstable manual[2].
Good introduction to Nix. I wish I would have had this when I started.
But what do they mean with “works forever”? What if the Git repo or Web server of the project is gone in several years? How do you reproduce it? Does niv include a local source code mirror?
This seems like a good thing for system-level packages, but what about those language-level packages? Could Nix replace both dockerfiles (or a list of apt packages) and pip/conda?
For Docker, you just call a Nix function and specify what programs to include, and Nix automatically pulls in the transitive closure of everything they need.
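A minimal sketch with pkgs.dockerTools.buildImage; the image name and the hello package are just illustrative:

```nix
# default.nix -- build a Docker image with Nix; no Dockerfile involved.
let
  pkgs = import <nixpkgs> { };
in
pkgs.dockerTools.buildImage {
  name = "hello-image";
  # Referencing pkgs.hello here pulls its entire transitive closure
  # (glibc, etc.) into the image automatically.
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Running nix-build on this produces a tarball you can feed to `docker load`.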
Looks interesting, but I don't see any benefits over using conda. And given the network effect benefit, conda seems like it will remain the more useful tool.
I've been using it at work for a couple years now, across tons of projects and multiple languages/runtimes/etc.
Very highly recommended. It's fast, safe, and effective. Much, much better than the various nvm/rvm/rbenv/etc that came before it, and it takes no effort to integrate.
In my experience, Nix and Guix are nice toys, but I'm just not ready for the kind of lifestyle change they require to actually use them for anything. For me, "pinning" a package to a specific version means not downloading and building a newer version.