I can vouch for the immense improvement Nix has made to my software development process. I use NixOS on my desktop and laptop. At the OS level, it gets a lot of things right: reproducible, immutable system configs; lightweight containers; devops with declarative configs. At the software-project level, nix-shell is an indispensable tool. Compilers and interpreters aren't even part of my system-wide config; instead, each project has its own shell.nix file that installs all the dependencies I need on the fly, without polluting system state or resorting to virtualization. Nix is a godsend, and the developers who contribute to it are nothing short of awesome!
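For readers unfamiliar with the pattern: a project-local shell.nix is just a small expression listing the tools the project needs. A minimal sketch (the package choices here are illustrative):

```nix
# shell.nix -- run `nix-shell` in the project root to enter this environment
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # These tools exist only inside the shell; the system config stays clean.
  buildInputs = [
    pkgs.python3
    pkgs.gcc
    pkgs.pkg-config
  ];
}
```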
The area that needs improvement is the documentation. Once you learn the Nix language, reading the source code is pretty helpful, but it would be nice to make it more approachable. For example, the nixpkgs repo has a bunch of Nix helper functions that are useful to developers when writing their own packages, but these functions' documentation is either buried in a long manual, or non-existent.
From your comment, and from skimming a few package declarations in nixpkgs, it seems that Nix's package system is very similar to Rez.
Cool! Indeed, Rez does something similar on the package management side.
For Nix, this is almost a side-effect, however. Nix's primary task is to describe how to build packages, and the entire OS is just another package that groups all dependencies. You can make a single change to a file everything depends on and it will result in a new OS.
Nix can evaluate the entire system configuration in seconds, and build or download missing binaries in parallel.
As a result you can e.g. have a complete OS based on a different glibc (maybe with some patch you like) installed and running alongside the normal OS without the glibc patch. Packages that do not use glibc are simply shared.
It takes a list of requested packages, does some dependency resolution to arrive at a matching package list, generates environment variable setting/exporting code (in Python, e.g. setting things like PYTHONPATH and MAYA_PLUGIN_PATH), then launches a new shell with that code. So, yes, you can have multiple shells with different packages in use simultaneously. I don't think it operates below the industry software package level, which makes sense, as most industry software is non-free, closed-source binaries. Each package has a `package.py` with its own list of dependencies, and code for things like manual tweaks to env vars.
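For anyone who hasn't seen Rez, a package.py looks roughly like this. The package and tool names below are invented for illustration; `env`, `{root}`, and `commands` come from Rez's package API, and the file is executed by Rez rather than standalone:

```python
# package.py -- a Rez package definition (executed by Rez, not standalone)
name = "my_maya_plugin"
version = "1.2.0"

# Dependencies Rez will resolve before building the environment
requires = [
    "python-2.7",
    "maya-2018",
]

def commands():
    # Rez injects `env` and expands `{root}`; these lines generate the
    # environment-variable setup code for the launched shell.
    env.PYTHONPATH.append("{root}/python")
    env.MAYA_PLUGIN_PATH.append("{root}/plugins")
```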
>"Using Rez you can create standalone environments configured for a given set of packages. However, unlike many other package managers, packages are not installed into these standalone environments. Instead, all package versions are installed into a central repository, and standalone environments reference these existing packages. This means that configured environments are lightweight, and very fast to create, often taking just a few seconds to configure despite containing hundreds of packages."
At different runtimes (e.g., different package.py configurations), yes.
I can have a software called my_application which has a version that uses A and another version that uses B.
That differentiation can be a flag at runtime, a "variant" (e.g., gcc-4 vs. gcc-3), etc.
How does Nix allow you to do that? For example, say I am using library foo, which has a function bar.
In version 1, bar is defined as:
```
def bar(a, b):
    return a + b
```
In version 2, bar is defined as:
```
def bar(a):
    return a + 10
```
I don't understand how you can use v1 and v2 of foo in the same runtime. Unless Nix does some namespacing based on the library version, invisibly to the end user, but that seems very error-prone...
If you release them as separate libraries it won't complain. For example, Shotgun API imports can be done by version:
```
import shotgun3
import shotgun2
```
so theoretically you can release a rez package whose name is shotgun3 and another one whose name is shotgun2.
That way you would effectively have shotgun2 and shotgun3 in the same runtime environment.
I wasn't really thinking about language-level runtimes; like, I don't want to solve the problem of only having one global scope in Python and running into module name collisions. I'm not sure how concerned Rez is with shell environments beyond which (e.g.) Python packages are available in them, but I was wondering about having a shell where a program foo is available that, say, requires a lib compiled with gcc-4, and another program bar that requires the same lib at the same version but compiled with gcc-3.
Nix expends a bunch of complexity fucking with dynamic linking to make that work transparently most of the time, and deploys wrapper scripts to cover other cases, so I was curious whether Rez has a cleverer solution there.
Essentially, Nix treats every version of a package, or every "instance" of a package compiled with different flags, or different "build inputs", as a completely different dependency. So foo-1.0, foo-2.0, and foo-2.0-debug are all separate things you can depend on, from the POV of the package manager. A huge "symphony" (a friend's term) of symlinks holds it all together.
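As a sketch of what "separate things you can depend on" means in practice: overriding any build detail yields a new store path. A hedged Nix example (foo is a hypothetical package name):

```nix
{ pkgs ? import <nixpkgs> {} }:
rec {
  foo = pkgs.foo;  # hypothetical package

  # Changing any flag or build input changes the output hash, so this
  # debug build lands at a different /nix/store path and can coexist
  # with the plain build; dependents pick one or the other explicitly.
  fooDebug = foo.overrideAttrs (old: {
    configureFlags = (old.configureFlags or []) ++ [ "--enable-debug" ];
  });
}
```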
Sounds like this solves a similar problem to Docker. Can you comment on what the differences are, and the relative strengths and weaknesses of each approach?
So Nix is ultimately a tool for making and sharing reproducible package builds. It has a binary cache, but it's not necessary. Like Ports, packages get built by default.
Docker, on the other hand, is a distribution and execution mechanism. It provides an abstract way to move around a fully assembled, ready-to-go service or application running in isolation.
It's entirely reasonable to use both. You can use Nix to build and manage docker images and make extremely minimalist docker images. You can use Nix knowing that the entire process is perfectly reproducible, and the Docker containerization is only a final integration step.
With this, you sorta get the best of both worlds. You get a reproducible build (and if done right, also a reproducible dev environment via nix-shell) and with Docker you get the ability to build and run a prepped copy with a well-defined interface.
Docker really doesn't provide a way to reproduce a built image from scratch. You more or less have to trust and build on existing images, and most folks producing images in bulk rely on external tooling outside Dockerfiles to do this.
> Sounds like this solves a similar problem to Docker. Can you comment on what the differences are, and the relative strengths and weaknesses of each approach?
NixOS committer here.
Docker attempts to achieve reproducibility by capturing the entire state of a system in an image file. It also attempts to conserve space by taking a layered approach to images, so when you base your Dockerfile on some base image, your resulting image is the union of the base image's layers and your own changes.
Here's where Docker's approach falls down, and how this could be fixed (and indeed is, by Nix).
Flaw #1:
Building an image from a given Dockerfile is not guaranteed to be reproducible. You can, for example, access a resource over the network in one of your build steps; if the contents of that resource changes between two `docker build`s (e.g. a new version of whatever you're downloading is released, or an attacker substitutes the resource), you'll silently get different resulting images, and very likely will run into "well, it works on my machine" issues.
Solution:
Prohibit any step of your build process from accessing the network, unless you've supplied the expected hash (say, sha256) of any resulting artifacts.
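This is exactly how Nix's fixed-output fetchers behave: you must declare the expected hash up front, and the build fails loudly if the downloaded bytes ever change. A sketch (URL and hash are placeholders):

```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.fetchurl {
  url = "https://example.org/releases/foo-1.0.tar.gz";  # placeholder
  # If the server ever serves different bytes, the build aborts
  # instead of silently producing a different result.
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```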
For the projects I work on in my free time, using NixOS on my personal computers, I've never been bitten by nondeterminism.
I wish I could say the same about my work projects that use Docker. My team members and I have run into countless issues where our Dockerfiles stop working and then we have to drop everything and play detective so e.g. new hires can get to work, or put fires out in our C.I. env when the cached layers are flushed, etc. So many wasted hours.
Flaw #2:
What happens if you have two Dockerfiles that don't share the same lineage, but install some of the same packages? You end up with multiple layers on your disk that contain the same contents. That's wasted space.
Solution:
I'll use NixOS as an example, again. In NixOS, you can look at any package and compute the entire dependency graph. It should be noted that this graph includes not only the names of the packages, but also precisely which version of each package was used as a build input. This goes for both build inputs and runtime dependencies.
Note: by "version" I mean not only the version listed in the release notes, but every detail of how the package was built: which version of Python was used? And then, transitively: which version of the C compiler was used to build that Python? Etc., etc.
NixOS exploits this by allowing you to share packages with the host machine, so each NixOS container you spin up has the necessary runtime dependencies bind-mounted into the container's root file system. As a result, NixOS has better deduplication (read: zero duplication). Also, by the same graph-traversal mechanism, it's trivial to take any environment, serialize the runtime dependency graph, and send that graph to another machine, back it up somewhere, create a bootable ISO for CD/USB -- whatever you can dream up.
Thanks to Nix's enforced determinism, you can trivially build container environments (and all of their required packages) in parallel across a fleet of build machines. In fact, Nix's superiority at building packages is so strong that people have gone so far as to build Docker images using Nix instead of `docker` (where they can't avoid Docker entirely for whatever reason): https://github.com/NixOS/nixpkgs/blob/2a036ca1a5eafdaed11be1...
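The linked code is nixpkgs' dockerTools; a minimal usage sketch looks like this (image name and contents are illustrative):

```nix
# image.nix -- build with `nix-build image.nix`, then `docker load < result`
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildImage {
  name = "hello";
  tag  = "latest";
  # Only hello and its runtime closure end up in the image --
  # no base-distro layers at all.
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```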
I'm keeping things simple here and trying to address the most salient "Docker vs Nix" points, though I could continue talking about other strengths of NixOS outside of the scope of Docker/container tech, if desired.
Docker's union filesystem approach is great in a world where you can't use a better package manager. For everyone else, there are package managers that obviate the need for such hacks, don't waste space, provide determinism at both runtime and build time, etc.
The main difference between the two is Nix's ideas come from functional programming, and Docker's are imperative. The practical outcome of this is that it's easier to keep your system clean over time with Nix than Docker because of the way it's been designed.
Nix isn't only a package manager; it is also a functional programming language intended for system administration. This means that, while a Nix file is comparable to a Dockerfile, it has several key differences:
1. All Nix files are just functions that take a number of arguments and return a system config (like a JSON object, but with some nice functionality). A Dockerfile is a set of commands you run to build a system to a "starting" state. This is imperative (you're telling the computer to "do this", then "do that", etc.), so once it's done, you can mutate state to deviate from what you specified in your Dockerfile. With Nix, while you can technically do this on some systems, it does provide you with the command line tools so you don't break things (e.g. nix-shell, nix-env). Note, on NixOS, other measures are taken to encourage safety.
2. If things go wrong in Nix(OS), the idea is that you can do a fresh install on your system, copy your old Nix file to it, and, with one bash command, be back to where you were before things went haywire. In terms of containers, there's nothing new with this, because this is exactly what Docker does. However, Nix also has this concept of generations: every time you use a Nix command to change your system state, either declaratively via a Nix file or imperatively on the command line using nix-env, you can roll back to a previous version of that state. This is especially nice with NixOS, because it creates generations for your entire system too (including hardware config, drivers, kernel, etc.), and makes separate GRUB entries for each generation, so if something breaks after a system upgrade, you just choose an old GRUB entry to go back to where you were. AFAIK, Docker doesn't offer anything like this, and this is a good example of how these tools' designs can impact their feature sets so dramatically.
3. A neat feature of Docker is composability. You can inherit from other, pre-existing Dockerfiles, and you can deploy multi-container apps with various tools. Composability at a single-container level is very straightforward with Nix. Since every config is just a function, you simply call the function exposed by a different Nix file with the correct arguments, and... voila! Once you've made your desired Nix file, you can run it either using nix-shell or nixos-container. While I'm no expert, I believe they perform better than Docker as they don't use virtualization. For multi-container deployment, there is NixOps. You write some Nix files describing the VMs you want to deploy, and run a bash command to deploy them to various back-ends (AWS, Azure, etc.). Again, the big difference here is that you can incrementally modify these VMs in a safe way using Nix. If you change your deployment config file, Nix will figure out something has changed, and modify the corresponding VMs to achieve the desired state.
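Concretely, "calling the function exposed by a different Nix file" is just `import` plus function application. A sketch (file and attribute names here are hypothetical):

```nix
# env.nix -- composes another expression by calling it as a function
{ pkgs ? import <nixpkgs> {} }:
let
  # other.nix is assumed to be a function of `pkgs` returning a package set
  extras = import ./other.nix { inherit pkgs; };
in
pkgs.mkShell {
  buildInputs = [ pkgs.git extras.myTool ];  # myTool is hypothetical
}
```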
Some may believe that Docker and Nix are very similar, and to their credit, they are in certain scenarios. The thing I like about Nix is that it's one language (and architecture) that was designed well. It's minimal, yet makes it possible to do so much in a safe way.
Nix has been around for a while, but I think the community is growing quickly as functional programming continues to take off. I'm excited to see where it goes, and am super grateful I have a tool like this to use while coding.
> these functions' documentation is either buried in a long manual
This is a problem with lots of feature-rich software, even with meticulously-documented APIs. What we need is reverse-indexed documentation. That is, an extensive API reference is only useful for someone who already knows what functions are in the API and just needs to remember how to use them. But even the most thorough API reference does nothing to promote discovering new functionality. This is often left to the authors, who then have to go about writing a User's Guide that gradually explains concepts, idioms, etc. in prose.
Thorough User's Guides are rare because they are tough to write, and even tougher to write well. Users don't often have the time to read through potentially hundreds of pages of prose to find what they're looking for. We need a better way to let users search or browse for concepts, and then be given a list of the functions that implement each concept.
That is, in addition to documentation like:
```
size_t strlen(const char *s);
    RETURN: Length of string s.

size_t strnlen(const char *s, size_t maxlen);
    RETURN: Length of string s, or maxlen (whichever is smaller).
    NOTE: Stops reading after maxlen bytes.

char *stpcpy(char *dst, const char *src);
    Copy src to dst.
    RETURN: Pointer to the trailing '\0' of dst.
    NOTE: Undefined behavior if dst and src overlap.

char *stpncpy(char *dst, const char *src, size_t len);
    Copy up to len bytes from src to dst.
    RETURN: Pointer to the trailing '\0' of dst, or dst + len if there is no trailing NUL.
    NOTE: Undefined behavior if dst and src overlap.

char *strcpy(char *dst, const char *src);
    Copy src to dst.
    RETURN: dst.
    NOTE: Undefined behavior if dst and src overlap.

char *strncpy(char *dst, const char *src, size_t len);
    Copy up to len bytes from src to dst.
    RETURN: dst.
    NOTE: Does not NUL-terminate dst if src is len bytes or longer; undefined behavior if dst and src overlap.
```
We also need to be able to "tag" functions, so we can search for concepts as well as names.
One great approach is the Hoogle search engine for Haskell [1]. The idea is that you search by type instead of by name. So if you were looking for a function that takes an item and returns a list with n copies of that item, you would search for `a -> Int -> [a]`, which would give you back `replicate`.
It looks like the Nix expression language is untyped, so this wouldn't work directly, but maybe adding a rough type signature to the docstring would get some of those benefits (and it should be a bit better for discoverability, since you wouldn't need to guess the same tags/concepts the author chose).
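Even in a dynamic language, you can approximate Hoogle-style search-by-type when authors annotate their functions. A minimal sketch in Python (the function names here are made up for illustration):

```python
import inspect
from typing import get_type_hints

def replicate(item, n: int) -> list:
    """Return a list with n copies of item."""
    return [item] * n

def shout(s: str) -> str:
    return s.upper()

def search_by_return_type(namespace, wanted):
    """Find functions in `namespace` whose annotated return type is `wanted`."""
    hits = []
    for name, obj in namespace.items():
        if inspect.isfunction(obj):
            hints = get_type_hints(obj)
            if hints.get("return") is wanted:
                hits.append(name)
    return sorted(hits)

# Searching "by type" instead of by name:
print(search_by_return_type(globals(), list))   # prints ['replicate']
```

This only matches exact return types, whereas Hoogle unifies polymorphic signatures, but it shows how far annotations alone can get you.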
It's a total hack, of course, but no less effective or useful for that.
It has a list of methods marked "safe to experiment with", and simply tries them out.
It gets a big boost from being able to evaluate the receiver (the first in the input list) to a concrete object, and then only consider methods on that object's class.
I worked with Visual Smalltalk for years and early on I created my own tool which found all methods whose source-code contained a given string or string with wild-cards. It was surprisingly effective because it easily integrated into the Smalltalk browser-windows in general so rather than printing out the set of methods found it opened a list-browser with all the found methods in it, so I could then easily browse for their senders or methods called in them and so on.
So how did you know what to search for? A simple case was to look at the ready-built application's GUI, which almost always contains some strings. So if you wanted to change something in the system that would result in a change to its GUI, you could just search for source code containing some of the words you saw in the GUI.
I've looked into it a bit. Visualworks smalltalk documentation gave me an idea[1].
In Smalltalk everything is an object, including numbers, characters, etc. The standard method lookup searches from the receiver's class up through its superclasses.
The method finder could simply iterate through all applicable methods, invoke them, and match their results against the requested value.
I don't recall reading about it anywhere, but I did once look at the implementation (it's wonderful having a reflective system where a single click gets you source of anything onscreen), and it's very straightforward but also kind of gross.
One neat trick I'd forgotten is that if it doesn't find anything interesting with the inputs in the order written, it permutes them and tries again!
This just looks like you're missing things like classes, namespaces, etc. in C. Classes, namespaces, etc. are a natural way of making an API discoverable (among other things). For example, if you want to know how to search for something in a string, just look at the methods exposed by string.
Then you just pass the buck over to classes and namespaces. You have the same fundamental problem in Ruby and Python, perhaps even worse for lack of typed function signatures.
I think this is a great idea, and something that could hugely improve semi-automated documentation sites, man pages, etc.
I also think that you could get a lot of people to agree that it is a great idea, and STILL have a lot of trouble enforcing it in a project without very rigid code review policies and very good linting rules.
Anybody can run `guix publish`, however, and self-publish their own non-free substitutes if they want; plus there are all the existing non-free substitute servers around, and you can try your luck with `guix import` from nixpkgs. The community won't help anybody who asks for help with non-free software, though.
Surprisingly, I haven't missed any non-free packages since switching to Guix. I always seem to find a free alternative, though my daily requirements are pretty basic.
To me, when I look at a new distro, I look at how well the repo covers binary packages; I have no time/resources to build compilers or big projects. Last time I checked Nix, there weren't enough packages in the repo to be usable on a laptop.
But the package manager itself might be a good idea for dev environments, though all the languages I use provide a similar facility (not counting system-library dependencies).
I'm confused by this statement. `nixpkgs` contains tens of thousands of packages, the majority of which have binary substitutions via the NixOS Hydra build farm. Usually the only time I build things from source on my NixOS laptop are when I override some package to use a patch of my own design. Hell, when I was using Arch Linux I found myself building things from source _way_ more often than after switching to NixOS.
I don’t know when you were last looking but for as long as I’ve been using nix (since early 2015) nixpkgs has had definitions of the vast majority of commonly used software, and pre-built binaries of most of those. Definitely worth another look. Also note that you don’t have to switch distros as nix can be plugged into almost any Linux environment without conflicting with whatever else is there.
I used Arch Linux for a while (~3 years) before making the jump to NixOS.
Many things I got from AUR packages are binary-cached in stable Nixpkgs, not to mention "compilers" (GCC, clang, and various versions of GHC in my case). I don't know when you last tried it out, but binary availability isn't a problem now if you'd like to give NixOS a spin. The only thing I build from source is polybar[0].
That article on "What is Nix and .. " does not contain the words "Nix is a..." A quick read gave a vague sense that it was about installs and file paths so maybe something package-related? Nothing definite.
I get the feel that Nix does something like what virtualenv does for python, but more generalized to make that work for other software. Is that approximately right?
Yes, I was also immediately reminded of Python's conda-based environment management. E.g., 'source activate <env1>' may give you python3, while <env2> uses python2 and different versions of other libraries.
My experience with Nix was a real pleasure, and a complete failure.
I set out with the goal of building a development environment for a software I was working on. I thought - Nix sounds like a better Docker, where it's possible to choose package versions independently of the rest of the system. It's perfect for testing!
I found the package description language refreshing - it was powerful without getting too complicated. Over the course of a couple of days I was able to adapt a few packages and create my own too, with the build environment.
It was almost finished, but the last problem remained: installing CUDA and the userspace portion of the Nvidia driver. It's not rocket science – I thought – all I need is a few .so's, it even works with Docker.
Alas, after battling Nix for another couple of days (trying to use generic GL, installing the Nvidia one, ignoring it altogether, or trying to use the host version), I gave up. I found no way to build a package linked against OpenGL that would actually work.
Despite that, I hope to use Nix with a different project in the future.
It is essentially Docker, but you can boot bare-metal with it. It even uses Docker under the hood.
Every boot is a fresh boot. You can read-write during boot, but all changes are made to ram. You mount with fstab things you want persisted, like home.
It uses very simple primitives (squashfs, grub-mkconfig, overlayfs, etc.), which makes it easy to use. Docker isn't exactly a "primitive", but it is easily reasoned about.
I had a similar experience, although in my case the stumbling block was virtualenv. It needed to zip some .py files installed by Nix, and that totally failed because Nix sets mtime to 0 on the files it manages. I get why they do that, but zip only allows file times after 1980 or so. NixOS is a really, really interesting project that I will be very interested in using once it's ready for prime time.
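The clash is easy to reproduce without Nix at all: the zip format stores DOS timestamps, which begin in 1980, while Nix stamps store files with the Unix epoch (1970). A small sketch:

```python
import os
import tempfile
import zipfile

# Reproduce the failure: a file whose mtime is the Unix epoch (1970)
# cannot be archived, because zip's DOS timestamps start in 1980.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "module.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    os.utime(src, (0, 0))  # mimic a Nix-managed file: mtime = epoch

    try:
        with zipfile.ZipFile(os.path.join(tmp, "out.zip"), "w") as zf:
            zf.write(src, "module.py")
        result = "ok"
    except ValueError:
        # Python 3.8+: "ZIP does not support timestamps before 1980"
        result = "failed"

print(result)  # prints "failed" with the default strict_timestamps=True
```

Passing `strict_timestamps=False` to `ZipFile` (Python 3.8+) clamps such timestamps to 1980, which is one way to work around Nix's epoch mtimes.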
This is very similar to my experience, as well. I tried Nix for over a year. It has excellent ideas and principles. In practice, though, I ran into so many bugs that were near impossible for me to figure out on my own. I personally didn't find the community all that helpful, either. YMMV.
Hm, I'm using CUDA in production via Nix. When did you attempt this? It may be much easier now than it was before. If there's some particular technicality you need solved- like requiring a specific driver version- I can help.
Although other people have different opinions to me, I don't really see Nix as something for the desktop, but for development, deployment and server software it's a killer solution for me.
None of what he mentioned was desktop-specific. GPUs are useful, and if CUDA is involved then it's being used for acceleration of some kind, which can perfectly well be done on servers.
I agree that Nix is hard to deal with when it comes to binary distributions of any kind, which includes CUDA.
It's a problem of altering the embedded paths to point at the nix store. There are some tools for doing that -- patchelf, etc. -- but it's far more difficult to get all that right than to compile from source.
A source package using autoconf will, in general, just work. A binary package always needs extra work.
Extremely happy user of NixOS on my laptop (Dell XPS) for 2.5 years chiming in to say that being able to share parameterized configuration modules between your laptop and your servers is ridiculously nice for a developer and admin!
I recently wrote about my experience using NixOS on my VPS [1]; it's truly a very different way of approaching systems administration and Nix is at the heart of it. While I think the Nix expression language has a negative impact on usability, Nix still ends up being a net gain for not only entire OSs, but also reproducible environments for packages on many other OSs.
Oooh! Thanks for linking this write-up. I've been thinking of playing around with Nix and basically only be able to do it on a VPS.
Before I get started though, what things could I consider implicit prerequisites of running Nix? For example: I've never built a Debian or RPM package. Would I be better served gaining some background in OS package management before diving into this?
No: each package manager has its own strange rules and concepts. I've worked with deb packaging, and I wouldn't say that experience transferred to working with Homebrew or Nix. You might as well start with what you want to use.
On my macOS work machine, Nix has replaced Homebrew. The ability to switch between generations of your system state [1] when a new install goes awry is underrated, IMO.
I really think the biggest roadblock is the command-line interface[1]. To replace Homebrew you only need like 3 or 4 commands. Right now, that means 3 or 4 different nix commands with weird arguments. The way Nix is currently set up is great and makes sense for the OS, but Homebrew has a much simpler use case (single-user, all userland packages, very low barrier to entry). I can't wait until that ticket is addressed.
I've installed both without issue. It might be a headache long-term because of libs and installing things in both systems, etc. Just make sure you know which comes first in PATH.
Simula is a bleeding-edge VR desktop project for Linux; virtually all of its dependencies are highly novel and require building from source on almost any distro. Nix allowed us to reduce the effort of building our project from an hour of sifting through build documentation to a single build command.
Nix's only issue is that it so far can't handle OpenGL dependencies very well. I'm not sure if this is something in principle it cannot handle, or if it just hasn't been done yet. I'm praying it's the latter.
Hardware is the one thing that actually varies between systems no matter how declarative your configuration is - this is normally not a problem, since the kernel abstracts it away.
The problem with OpenGL, then, is that every graphics driver provides its own set of OpenGL libraries that is subtly different from all the others. This means that applications need to be built against a specific set of OpenGL libraries from a specific graphics driver, and the kernel is of no help here.
That's fundamentally where the OpenGL issues come from; Nix packages typically expect the libraries to be semi-statefully provided in a `/run` directory. This gets especially hairy when targeting different distros.
EDIT: And because the package doesn't know what graphics drivers it'll be running against on non-NixOS, it can't automatically build against a copy of the correct drivers in nixpkgs either.
No. I think both languages are equally expressive in the end (though one will be a bit uglier than the other). But the problem is probably less big in Guix because only a few graphics drivers are actually supported due to the fact that all the drivers needs to be Free and free of binary blobs.
> But the problem is probably less big in Guix because only a few graphics drivers are actually supported due to the fact that all the drivers needs to be Free and free of binary blobs.
This is a property of Linux libre, not of Guix per se.
I recently wrote a blog post about how we use Nix at Pinterest (just the package manager, not the OS). It helps us give all developers and CI an identical environment. On iOS, we run most build commands and other tools through the nix-shell. New engineers set up their environment by just running a script that installs nix and enters the shell.
I have recently started looking into Nix to solve the problem of dotfile deployment to new OSX workstations. When you get a new laptop, you clone your dotfiles, maybe symlink them to your home directory -- and then what? Nix solves this problem for me by coupling my dotfile deployment with the installation of the related software.
The learning curve has been absolutely tremendous, however. That said, the support on Freenode has been some of the best I've encountered in more than a decade.
Can you elaborate a bit on how Nix helps you set up your user space?
Do you install software with
"nix-env -i package"
or do you have a declarative way of doing it? If you use nix-env, how is that better than
"brew install package"?
I'm not doubting your setup, I'm just trying to use nix for the same thing and am having a hard time finding the canonical way of doing a declarative user setup.
For me, the fact that I'm saying nix-env -i rather than editing a config file and running nix-whatever --read-the-config-file-again isn't really a big deal. It's just a cute tool to author and maintain user environments. I'm sure I could, say, define a single package that just depends on and exposes all the packages I actually want installed and call that my declarative user environment, but I'm not sure what that would actually gain me.
Declarative or not, I'm still gonna get the advertised benefits of Nix like, say, having some confidence that I got all the dependencies specified properly if my stuff builds, easily customizing packages by overriding specific bits, and coping with arbitrary version requirements really easily.
I'm sure someone who's more used to configuring homebrew has a different perspective there, but it definitely seems preferable to taking over /usr/local and rotating things in and out of there as I switch between projects or w/e.
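The "single package that depends on everything I want" setup described above can be built with nixpkgs' buildEnv. A sketch (file name and package choices are illustrative):

```nix
# my-env.nix -- install with `nix-env -if my-env.nix`; re-running it after
# edits replaces the whole environment in one step, and it remains
# rollback-able like any other nix-env generation.
{ pkgs ? import <nixpkgs> {} }:
pkgs.buildEnv {
  name  = "my-user-env";
  paths = with pkgs; [ git ripgrep jq ];
}
```

One nice side effect: the list of installed packages lives in a file you can version-control, instead of only in `nix-env -q` output.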
Also, the OS X window manager has a memory leak and you have to restart it every so often. Yesterday mine was at 22GB; after a restart, <100MB… Nothing to do with Nix, just venting ;-)
It took a while to get used to, especially considering the way to learn how something works seems to always be to go and read source code on GitHub. There are good things and bad things about this approach but I fear it makes it difficult for your average user to use since it is a bit of a barrier to entry.
I loved how extensible the system was, and how easy it was to make modifications simply by editing configuration.nix. It made my system really feel truly 'mine', in some ways similar to the feeling I get from setting up an Arch machine. Being able to recreate a machine by cloning your configuration onto the new machine and running 'nixos-rebuild switch' is extremely powerful. Being able to roll back a machine is also amazing. Something else that really impressed me was how great the ZFS support was. Setting up a ZFS system on NixOS is one of the best experiences I've had using ZFS on Linux. It really feels as though ZFS is a first-class citizen on NixOS, not something tacked on top. It's possible to set up ZFS without adding any sort of external repository, unlike on Arch.
I ended up leaving NixOS after running into a UEFI-related issue that I didn't have the time to track down at that point, but I'm really excited about where NixOS is going in the future, especially when declarative user environments https://github.com/NixOS/nixpkgs/pull/9250 (user configuration) with NixUP become a thing. I will definitely be coming back to it in the future.
Can I ask what your UEFI problem was? I run NixOS on a couple MacBook Pros using UEFI with systemd-boot (née gummiboot), and I haven't had any problems yet.
I'm a big fan of Nix, but I really wish there were a declarative way to define a user's environment, similar to what NixOS has with /etc/nixos/configuration.nix.
It seems a little strange to me to have a functional package manager, but an imperative package install process. I think there are a few projects like NixUP and home-manager that are looking to address this, but I'm hoping Nix will provide an official way to do it.
That's a bit too strong a statement to make based on a two-year-old still-open PR with an uncertain future. It seems like it might be coming at some point in the future.
Thanks for linking to this! Do you know of a way to get a list of installed packages then? "nix-env -q" lists the meta-package but not the packages installed within. Also, when you update packages, does the update propagate to the sub-packages?
I'd expect you list installed packages by pointing your text editor at the file where you defined which packages you want your meta-package to pull in, no?
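For the curious, such a meta-package can be sketched with nixpkgs' `buildEnv`. The file path and the package list here are illustrative, not a blessed convention:

```nix
# my-env.nix -- one authoritative, version-controllable definition of
# "what's installed" for this user. Package names depend on your
# nixpkgs revision.
with import <nixpkgs> {};

buildEnv {
  name = "my-user-env";
  paths = [ git tmux htop ];
}
```

You would then install or upgrade everything at once with `nix-env -if my-env.nix`, and "listing installed packages" is just reading (or `git log`-ing) that file.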
For those new to NixOS, the default package installation method is through a concept called channels. It can be a bit confusing reconciling the nature of channels and the real strength of NixOS which is functional reproducibility. I wrote a blog post explaining what channels really are and an alternative which is to use Git commit hashes instead https://matrix.ai/2017/03/13/intro-to-nix-channels-and-repro...
I do pretty much that using GNU Stow. Of course I then build packages and libraries on my own, in my $HOME, because I don't have root access on that machine.
I install any PACKAGE of version VERSION in:
~/stow/pkgs/PACKAGE/VERSION/
So taking nim for example, I'd have:
~/stow/pkgs/nim/0.17.1/
Under that, I'd have the whole FHS tree ({bin,lib,share,..}) for just nim 0.17.1.
I set ~/stowed/ as my STOW_TARGET. So on running stow, symlinks to all:
~/stow/pkgs/PACKAGE/VERSION/{bin,lib,share,..}/..
get created in:
~/stowed/{bin,lib,share,..}/..
I then have aliases set up to do something like "unstow PKG/OLD_VER" and "restow PKG/NEW_VER" when I want to change the package version.
I skip the "restow" step if I just want to "soft uninstall" a package i.e. remove it from PATH, PKG_CONFIG_PATH, MAN_PATH, .. by removing it only from the STOW_TARGET ~/stowed.
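The symlink mechanics behind this workflow can be re-created with plain coreutils. This is only a toy illustration in a temp dir instead of $HOME; stow itself handles tree merging and conflict detection far more robustly:

```shell
# Recreate the ~/stow/pkgs/PACKAGE/VERSION -> ~/stowed layout by hand.
root=$(mktemp -d)

# Package contents live under stow/pkgs/PACKAGE/VERSION/{bin,...}
mkdir -p "$root/stow/pkgs/nim/0.17.1/bin" "$root/stowed/bin"
printf '#!/bin/sh\necho "nim 0.17.1"\n' > "$root/stow/pkgs/nim/0.17.1/bin/nim"
chmod +x "$root/stow/pkgs/nim/0.17.1/bin/nim"

# "stow": symlink the package's files into the shared target tree,
# which is the one directory on PATH, PKG_CONFIG_PATH, MANPATH, etc.
ln -s "$root/stow/pkgs/nim/0.17.1/bin/nim" "$root/stowed/bin/nim"
"$root/stowed/bin/nim"   # runs through the symlink

# "unstow" (soft uninstall): drop the links; the package stays on disk,
# so "restow"-ing another version is just re-linking.
rm "$root/stowed/bin/nim"
```

Switching versions is then: unlink 0.17.1, link 0.17.2 — the package trees themselves never change.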
I'm a NixOS contributor/committer, and the principal author (and co-maintainer) of the Bundler-based packaging integration. If you or anyone reading this has any feedback and/or questions about NixOS and Ruby (or NixOS in general), feel free to shoot me a message. I idle on #nixos on Freenode (as cstrahan), and you can email me at charles {{at}} cstrahan.com.
Recently, I got the opportunity to play with Nix. To me, it felt like a very nice way to bundle dependencies along with the application. But a few questions always come to my mind: how do people run it in production? Can they achieve everything (e.g. monitoring, debugging...) as easily as with an app hosted on a plain Linux machine?
Is there a way to install Nix into the home folder (suppose I don't have sudo on a server)? I spent some time recently trying to set up Nix in such an environment and failed. I ended up compiling recent versions of neovim and tmux myself.
Yes and no (mostly yes). Ideally, you have /nix/ where everything gets installed, then folders with symlinks in your homedir (one that builds your workspaces and another that versions them), with an environment variable in your .bashrc to hook you in.
You can put that /nix folder anywhere, but you can no longer use their binary package caches and it will recompile everything. This is because they hard-code paths for dependencies during build time.
what would happen if we want to install, for example, two different versions of ruby at the same time?
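It just works, since each version lives under its own /nix/store path; nothing collides. The idiomatic approach is a per-project shell. A sketch, assuming your nixpkgs revision exposes attribute names like these (they change over time):

```nix
# shell.nix for a project pinned to one Ruby; a sibling project's
# shell.nix can name a different attribute, and both shells can be
# open side by side. ruby_2_3 is an illustrative attribute name.
with import <nixpkgs> {};

mkShell {
  buildInputs = [ ruby_2_3 ];
}
```

Installing both into a single user profile is the only place you'd hit a conflict, since profiles deduplicate by package name.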
This is treating the symptom instead of the root cause, which is the inability to design backward-compatible software. We as an industry should fight such band-aids and take a stand. Yes, designing backward-compatible software is hard, but exactly that is the difference between engineering and amateurism. And if one is getting paid to program, I'll go so far as to argue that it's one's job to have their head hurt solving such difficult problems so that users of computers don't have to.
Sounds like a similar solution to gentoo's slotted ebuilds [0]. Switching the 'default' implementation system wide or per user (eselect) has the same issues discussed. Not sure if there is an existing utility (beyond your standard shell script) that can produce sane per-shell environments.
Mhm, doesn't sound all that similar to me. This looks like something that needs to be maintained on a per-package basis as a fairly ad-hoc solution for different library versions, in Nix the whole thing just falls out of the way it's set up in general.
Funnily enough, NixOS has had its initial release in the same year as Gobolinux, 2003.
I'm not sure how Gobolinux organizes files exactly, but with NixOS, the hash in /nix/store/<hash>-packagename is produced by combining every input that might influence the package. By input I mean for example:
- The source: git revision / source url / etc.
- All dependencies, which means if a dependency of a dependency of this package changes, it gets a new hash
- Build instructions, including build flags
This wouldn't be possible with a more conventional file structure.
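To make that concrete, here is a sketch of a package expression where every field feeds the store hash. Changing any of them (or the hash of any dependency) yields a new /nix/store/<hash>-example-1.0 path; the URL, sha256, and flags are placeholders:

```nix
# Illustrative derivation; all values are placeholders.
{ stdenv, fetchurl, zlib }:

stdenv.mkDerivation {
  name = "example-1.0";
  src = fetchurl {                      # input: the source
    url = "https://example.org/example-1.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  buildInputs = [ zlib ];               # input: dependencies, by store hash,
                                        # so transitive changes propagate
  configureFlags = [ "--enable-foo" ];  # input: build instructions/flags
}
```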
Just a heads-up in case anyone is confused as I was: Nix is not 100% bit-for-bit deterministic/reproducible yet, though they do deliver functional equivalence.
Can you elaborate? I always thought deterministic builds would enable faster builds (because it would basically allow for downloading and verification of binaries).
As an example, gcc includes code generators that try a few random things in order to optimize code. So two builds, on the same system, won't produce byte-for-byte identical output.
Nix already does binary substitution. It just has the assumption baked in that builds don't do anything nondeterministic (and the Nix build sandbox removes a lot, but not all, of those opportunities; the ones remaining are things like threads, hence the slowdown).
Software Collections (https://www.softwarecollections.org/en/) give you the power to build, install, and use multiple versions of software on the same system, without affecting system-wide installed packages.
Is this what NixOS is able to do? Install multiple versions of software and select which version to use before running?
It is essentially Docker, but you can boot bare-metal with it. It even uses Docker under the hood.
Every boot is a fresh boot. You can read-write during boot, but all changes are made to ram. You mount with fstab things you want persisted, like home.
Archiso/etc. are similar, but they lack some things.
---
Layering
---
I can change my inherited build without rebuilding everything.
For example: base > xorg-nvidia > plasma > homepc
I can easily switch desktop environments by making "homepc" inherit "i3", or w/e.
Each layer is essentially a "Dockerfile". If you understand docker, you understand how the images are built and managed.
---
Fast switching.
---
I can create images for different dev environments/clients. I can create an image purely for gaming. I can create an image just for Steam.
Darch integrates with grub, so entries will dynamically show up on boot.
---
Sharing with DockerHub.
---
Sharing with other machines (and other people).
Since they are docker images, I can easily upload them to DockerHub.
I can update my local build by doing "docker pull && darch extract && sudo darch stage".
I will soon setup Travis-CI to auto build and push to DockerHub. Then, I never have to build locally. A cron job will queue a build every few days to make my image "rolling". I can easily revert back to previously working docker images (all tagged by date).
I'm gonna need a better reason to switch distros than that. I do think that is an improvement that needs to be adopted in the OS. But, I mean, it's not that much of an improvement over containers. I don't feel like it gives me, the user, that much more power over the OS, and it would be a feature that I would only use very rarely.
The blogpost just highlights one nice aspect of it, there's more though:
- The really powerful NixOS module system, which lets you specify your system declaratively and rather simply: which programs should be installed, which systemd services should run, use grub for booting, allow this ssh key to log in to this user, enable the IPFS daemon, etc. All options (for the 17.09 release) are available here: https://nixos.org/nixos/options.html
- System generations: upon changing an option in your system config, you run `nixos-rebuild switch`, which will build the new system, activate it, and add it to the system generations. All of these are bootable from your bootloader! So if something breaks miserably, you can just roll back to a previous one that worked. New kernel breaks on your hardware? No problem, just roll back.
- If you have a new nixos machine, you don't need to run a bunch of odd commands to have it set up. Since the config is declarative, everything you set up is right there, and all you need is a `nixos-rebuild switch` to make it work.
- Wanna test some configuration but are too afraid to use your machine? Just build a VM with it by doing `nixos-rebuild build-vm`. Like with everything else with nix, this fetches all dependencies, no need to install them manually.
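For a flavor of what that declarative config looks like, here is a trimmed /etc/nixos/configuration.nix. The option names are the standard ones, but the values are illustrative:

```nix
{ config, pkgs, ... }:

{
  boot.loader.grub.device = "/dev/sda";           # use grub for booting
  services.openssh.enable = true;                 # run the sshd service
  environment.systemPackages = [ pkgs.git pkgs.vim ];
  users.users.alice.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... alice@laptop"            # allow this key to log in
  ];
}
```

Every `nixos-rebuild switch` against this file produces a new bootable generation, which is what makes the rollbacks above possible.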
Well, you don't have to use NixOS, you can just use NixPkgs either as a way to familiarize yourself or to manage a project--personally, I'm hoping after a few things get fixed I can replace Homebrew with it.
Even though containers have had a lot of buzz and tooling built around them in the past few years, it's good to see how other approaches can solve similar problems with different pros and cons.
Kind of. You can version your workspace, too. Updates are basically atomic. You build the new workspace (with updated versions), then under the hood it just moves a symlink to the new version.
It's got some clever fundamentals that allow for really cool (and clean) features because of it.
I think the biggest ones are i) the nix command is a bit complex, but it's getting redesigned [1], and ii) nixpkgs definitions sometimes contain a lot of cruft.
I've found Guix and GuixSD, which are a GNU-blessed Guile Scheme-based reimplementation of Nix and NixOS, more aesthetically pleasing. Quite simple and elegant in fact.
The major differences are that Guix DSLs are implemented on top of Scheme, whereas Nix uses a custom DSL. Furthermore, Guix avoids systemd and uses GNU herd instead.
You could use Nix the same way Guix uses Scheme. The lowest-level operation in Nix is not "run-this-bash-script" but "run-this-executable-with-these-arguments".
But even still you are mixing languages, so the problem remains. The issue is not that Nix uses Bash, but that Nix uses different languages for the host-side and build-side. AFAIK people aren't writing the build-side code in the Nix language, and I'm not sure if it even has the features necessary to do it. Guix unifies the different layers of code execution with a single language, and using a Lisp enabled that design.
He is probably referring to the thing you see when you look into a .drv file, a.k.a. the immediate result of evaluating a derivation in Nix. That language/data structure is what nix-build evaluates to actually build things; otherwise the Nix language itself is purely functional and cannot really do anything on its own.
I guess in guile/scheme the equivalent data structure happens to be an s-expression.
Edit: except the paper actually describes that it's a g-expression.
Nitpick---but while we're here---Hurd technically refers to the set of (microkernel) servers. The microkernel itself is Mach. In spirit, Mach and the Hurd servers together provide what the Linux kernel provides monolithically.
For practical reasons I can't use pure GNU distributions, and I'm not aware of an easy community way to reuse packages from another distro, or some hack like that. Not everyone has the brainpower and time to build custom packages for a rare distro. It's a shame, because I really like Guix.
NixOS has some recipes for using software that isn't 100% libre.
This is also what keeps me away from Guix. I get that there are third-party repos with non-free software, but using a fringe package of a fringe packager is too far for me.
There’s no good choice here: Nix package definitions and tools are inscrutable and user-hostile, and the alternative, Guix, is burdened with GNU/free-software zealotry and the GPL.
A somewhat comparable third option is Habitat, from Chef. It also has a pure build system, a package source, and pure, discrete environments. But it’s also got a bunch of service orchestration parts...
Hopefully the Nix redesign will make the tools more palatable for mortals.
Including non-free software in Guix would defeat the goal of a 100% bit-for-bit reproducible distribution. Non-free software is incompatible with Guix both ethically and technically.
The redesign is merged and available via `nix-shell -p nixUnstable`. It isn't so much an issue of merging, but calling the new version stable. We're working on it. No definite timetable though.
Discoverability is a problem everywhere in Nix, from configuration options to documentation, which makes it necessary to learn its weird, poorly implemented functional programming language and read code. Then there is performance: it's very slow and uses a lot of memory, making it impossible to use on a 512 MB VM, for example.
If you just want a solution for your packaging management needs - Nix is definitely not it. Nix is more like a thing to get inspired by to make a package manager.
> Then there is performance, it's very slow and uses a lot of memory making it impossible to use on a 512 MB vm, for example.
No, this is just a side effect of the CLI user experience being garbage.
`nix-env -i <regex>` is almost never what you want, because its semantics are to evaluate everything in nixpkgs, find the set of packages whose `name` attribute is matched by the given regex, and install all of the packages in that set modulo the equivalence relation generated by package `name` equality ignoring version numbers.
Instead, you should use `nix-env -iA <channel>.<attribute-path>`, where `<channel>` is the name of a Nix channel (viewable via `nix-channel --list`, usually it is `nixos` or `nixpkgs`) and `<attribute-path>` is a valid attribute path in the nixpkgs package set (these can be discovered by running `nix-repl '<nixpkgs>'` and tab-completing).
That's nice to hear you say, because I am making a package manager, and I find nix inspiring :p
I keep trying to convince myself that someone's already done it better, and there's no point, but then every solution I look at doesn't seem to be good enough.
Do you use Windows 10 Creators Update on production? It needs to work on Windows Server 2012 or no dice. Maybe Nix will be an option in 5 years when companies upgrade their servers, but that's not an option now.
You could say the same thing about Windows: Windows just isn’t an option yet.... maybe in 5 years when WSL is performant and MSFT has open-sourced enough to not be a threat to more cautious companies.
No, I don't think you can say that. The point is that I have clients that use linux, clients that use windows, and clients that use both windows and linux which need to talk to each other. I need a solution that can factor out the operating system into a module and build packages that will work for both the quants and the servers. I don't get to tell my clients what operating systems to use. I must provide solutions that work on whatever they're running.
Nix has facilities for creating Docker and Singularity containers from a nix expression, which lets you package up arbitrary environments. Does that satisfy your use-case?
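The Docker side of that is nixpkgs' `dockerTools`. A minimal sketch — attribute details vary across nixpkgs versions, and the image name is illustrative:

```nix
with import <nixpkgs> {};

# Builds a Docker image tarball containing redis and its full
# dependency closure; `docker load < result` then imports it,
# no Dockerfile involved.
dockerTools.buildImage {
  name = "redis-example";
  contents = [ redis ];
  config.Cmd = [ "${redis}/bin/redis-server" ];
}
```

Because the closure is computed by Nix, the image contains exactly the runtime dependencies and nothing else — no base distro layer required.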
Primarily, from my perspective, it's that the entire package repo acts as one enormous program. Which is both good and bad. It's not that easy, IMO, to create ad hoc packages or packaging scripts.
I strongly disagree. Creating a Nix package is significantly easier than creating, e.g.: an Arch package, from my experience. packageOverrides is really nice too.
If you tell me which specific piece of software you tried to package, I can probably explain what is going on. I strongly suspect that it is either not Nix's fault, or an inherent tradeoff in packaging software (i.e.: not a design flaw, but simply a result of the correctness of package expressions being more tightly controlled in Nix than in other package managers). The main exception to that statement that I've seen are cases where Nix works fine, but the Nix expression takes so long to build that iterating on it is very difficult. Luckily, there is a solution for that in the pipeline: "recursive Nix", which allows `nix-build` to be called _inside_ a Nix build sandbox, thus allowing safe memoization of things like `gcc`.
Nix the package manager is great. Poorly documented, but it works as expected. I've also had uneventful experiences using NixOS on a server.
On the other hand, trying to use NixOS on my laptop has been struggle after struggle. Every update breaks something. Many times updates cause the entire system to crash. Different combinations of display managers, window managers, and various system level daemons interact in complex ways.
None of this is the fault of NixOS; really it is the fault of the Unix philosophy scaled to the level of the desktop Linux ecosystem, combined with upstream developers' traditional reliance on packagers in distros like Ubuntu to fix their shit.
What would be really great for NixOS would be a set of various well tested base configurations for the various DE/WM combos, like all the spinoff Ubuntu distros. These would fix the versions of all the fragile graphical components on some kind of release schedule, while probably still using nixos-unstable underneath for all the relatively reliable stuff like the kernel, emacs, vim, coreutils, etc.
Yes, this is something that I think is really important for the future. NixOS really needs something that is to it what Ubuntu is to Debian or Manjaro is to Arch: lots of polishing, so that a standard Linux user can start using it out of the box (no instructions required).
Python solves this problem with "virtualenv", where the user creates a virtual environment and installs all programs and libraries into that environment. The user can have multiple virtual environments, and it is easy to switch between environments.
virtualenv solves virtual environments about as much as pip solves OS package management. Not that it's not useful, but you see this all over: little fiefdoms that work pretty well for their own ecosystem, but become a mess once you integrate anything outside (virtualenv, rbenv or pip, npm, gem, rpm, apt or make, ant, scons, jam).
A lot of these systems are just reimplementations tailored to language X. Nix is pretty unique and clever. For example, you can effectively version control your virtual environment, it's at the OS level, and it accommodates major languages and libraries (Python, Perl, Lua, Java, Go, Qt).
Every language has its equivalent of virtualenv, and that's the problem: it only works for that language's tools and libraries. I don't use Nix, but I do hack on an alternative called GNU Guix sometimes, and I wrote a tool called 'guix environment' which is similar to virtualenv but applicable to any software. Having a generalized solution is much better than one virtualenv tool and package manager per programming language. In Nix the equivalent tool is called nix-shell.
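To illustrate the difference, here is what a nix-shell environment for a Python project might look like. Unlike virtualenv, it can pull in non-Python system dependencies in the same declaration (attribute names are illustrative and vary by nixpkgs revision):

```nix
# shell.nix -- one environment mixing a Python interpreter, a Python
# library, and a C-level system dependency, something a per-language
# tool like virtualenv can't express.
with import <nixpkgs> {};

mkShell {
  buildInputs = [
    python3
    python3Packages.requests
    postgresql
  ];
}
```

Running `nix-shell` in the project directory drops you into a shell where all three are on PATH, without touching the rest of the system.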
It seems that Docker is more concerned with containment and Nix with packages. I would love to be able to use Nix to define my project+dependencies and Docker to run it.
I'd be surprised if someone hasn't already made a baseline NixOS docker image to build off of.