Nix as OS X Package Manager (ofilabs.com)
357 points by ingve on May 25, 2016 | 204 comments



When I first heard about Docker, I thought it did what Nix does.

Then someone handed me a docker image saying, "I got this working on my laptop, deploy this. Isn't this great!". I had no idea how they arrived at that configuration. If something happened to them or that image, I wouldn't have known how to reproduce that "working state".

So I am happy to see Nix / Guix become popular. It is what I imagined package management to be.

Before that we just used rpm and deb packages. For all the hate they get, they are actually well-thought-out, tested, stable formats, with {pre/post}-{install/uninstall} script support, dependency resolution, and so on.

But Nix / Guix is a qualitatively different thing. Hope it becomes popular.


I mean, that's exactly what a Dockerfile is for. I don't know of anyone seriously using Docker that isn't using Dockerfiles exclusively. I only know of a single time when it's valuable to "commit" a new image (recovering logs from a stopped container).

None of which is to detract from Nix. Nix+Docker is a powerful combination. It takes Docker's declarative nature all the way down to the compiler used to build the bits placed in the container.


The big difference is that nothing about Dockerfiles implies that the processes they perform to build an image are deterministic or repeatable. Will a build step that runs `curl https://github.com/something/whatever` do the same thing in six months that it does today? Docker doesn't help with that at all. Nix improves determinism by (almost) guaranteeing that if you build the same nix expression six months from now, you'll get exactly the same dependencies, all the way down to glibc. Either that or the build will just fail, if remote dependencies aren't downloadable anymore, which is always tons of fun.
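
To make that concrete, here is a minimal sketch (not from the article; the revision is a placeholder) of the usual pinning idiom -- import one fixed nixpkgs revision instead of whatever `<nixpkgs>` happens to point at:

    # pin the package set to a single nixpkgs revision; evaluating this
    # later resolves the same dependency graph (or fails loudly)
    let
      pinned = fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/<some-rev>.tar.gz";
      pkgs = import pinned {};
    in pkgs.curl    # stands in for whatever you actually build
Anything built from `pkgs` then pulls its dependencies, down to glibc, from that single revision.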


YES.

Finally someone understands what really annoys me when I say that docker isn't 100% reproducible if you aren't using version pinning or something similar.

If you have two developers, and one of them does a build the next day, they could end up with two different versions of a package if the version was bumped in between.


That isn't even the bad part - the bad part is that given some big complicated docker image (or any build in general!), you really have no practical way of knowing what's different.

That should scare people.


sounds like 'emerge hell' from gentoo 12 years ago.


Ex-Gentoo dev here. It is not the same. Gentoo emerge runs in place, and you can definitely break your system with a broken build.

NixOS on the other hand, builds everything in sandboxes that only expose the requested dependencies. Builds are almost 100% deterministic. The cherry on top is that your system installation is simply a package consisting of all other packages and configuration symlinked together, and you replace your entire system in one go (atomically, by writing a symlink).

If the build fails at any point along the chain, you get an error and your system remains unchanged. You can totally switch from one major release to the next (and back) without hassle.
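
For anyone who hasn't seen it, the whole system is one expression; a bare-bones (made-up) configuration.nix looks like the sketch below, and `nixos-rebuild switch` builds it and repoints the symlink, while `nixos-rebuild switch --rollback` points it back at the previous generation.

    # /etc/nixos/configuration.nix -- illustrative sketch, not a full config
    { config, pkgs, ... }:
    {
      environment.systemPackages = [ pkgs.git pkgs.vim ];
      services.openssh.enable = true;
      # the whole system closure is built from this; switching or rolling
      # back is just repointing a symlink to another generation
    }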


the non-deterministic nature is what I was getting at.

I worked at a place where they'd spin up a new gentoo box for a dev with stock "emerge foo" and let everything run up.

Then 3 weeks later, they'd do it again for the next dev, and they'd have many diff versions.

I know they weren't "doing it right", but they were doing it the default way they learned, and it caused a lot of problems.

Thanks for the info/clarification though.


It's been a LONG time, but when you build gentoo packages you can save the builds, right? Then if you bring up identical servers, you can just install from precompiled packages.


You're not wrong, but Docker comes at it from a different angle; Docker expects you to be handing around tagged images, to solve a similar set (but not the same set!) of problems that reproducible builds solve.

Ideally, I want _both_ of them: tagged Docker images being sent between environments (with `docker run -e SOME_ENV=testing imagename` for changing the internal config), but reproducible builds in my Dockerfile. I wonder if Nix can be used to achieve that last part?


The problem is tagged images are fundamentally broken. If I can't reproduce, bit-for-bit, what is in the tag, how can I vouch for the tag?

Images are nothing but a caching technique for the results of deterministic builds.

If you're using them for any other reason, you have a flawed process that is going to come back to bite you some day.


You solve it by signing the image. Sometimes you just won't be able to get a completely reproducible, fully deterministic build process, so Nix cannot fix this problem entirely. Additionally, sometimes you just don't have the time or experience to take another program/tool you didn't write and package it up such that it never downloads things from the internet.

It's a hard problem to solve, and Nix helps a lot, but it's not going to fix all the problems, which is why we should care more about trusting the packager to package something relatively reproducible and sign it so that it can be vouched for.


Yes, you can do this. There is a utility "nix-docker" that will convert a Nix configuration file into a BusyBox Docker image (see e.g. http://zef.me/blog/6049/nix-docker)


with nix you can now build docker images ... docs are in the manual http://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools

and it is actually rather cool since you don't rely on any docker command :)
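
A minimal sketch using the buildImage function from that manual section (the image name and contents here are made up):

    # nix-build on this yields a tarball you can `docker load`, with no
    # docker daemon involved in the build itself
    with import <nixpkgs> {};
    dockerTools.buildImage {
      name = "redis-example";                      # hypothetical name
      contents = [ redis ];
      config.Cmd = [ "${redis}/bin/redis-server" ];
    }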


Unfortunately the repo says:

> DISCLAIMER: This project is no longer actively maintained and probably broken


Oh brilliant! Thanks for that, definitely going to give that a shot :)


In practice, one can check the Dockerfile lineage of a given Docker image and assess the quality of the commands. Good practices are to use package managers with explicit versions, for example "apt-get install subversion-tools=1.3.2-5~bpo1". Or, if you so desire, just pipe NixOS outputs into Docker, as somebody else in this thread mentioned.


There's a post on building Docker images with Nix: https://news.ycombinator.com/item?id=11502989


As another user points out, there was some relevant discussion earlier on HN about using Nix+Docker: https://news.ycombinator.com/item?id=11502989

Copying a comment I left on that blog post:

---

Q: "What does building a a Docker image using Nix give you over creating a regular Dockerfile to build a Docker image?"

A:

* Better abstraction (e.g. the example of a function that produces docker images)

* The Hydra build/CI server obviates the need to pay for (or administer) a self-hosted docker registry, and avoids the imperative push and pull model. Because a docker image is just another Nix package, you get distributed building, caching and signing for free.

* Because Nix caches intermediate package builds, building a Docker image via Nix will likely be faster than letting Docker do it.

* Determinism. With Docker, you're not guaranteed that you'll build the same image across two machines (imagine the state of package repositories changing -- it's trivial to find different versions of packages across two builds of the same Dockerfile). With Nix, you're guaranteed that you have the same determinism that any other Nix package has (e.g. everything builds in a chroot without network access (unless you provide a hash of the result, for e.g. tarball downloads))

---

IMO, NixOS almost subsumes Docker. Where Docker gives you a way to guarantee that the contents of an image will be the same across two machines, it does little to ensure that building that image from a given Dockerfile is deterministic. One of Docker's neat tricks is using layers to try to save on disk utilization -- but even there, Nix beats Docker: if you start two different containers using Nix packages, the common packages are shared across both containers, whereas two different Docker images with an uncommon base would not share the common files.

Docker mostly looks like someone set out to answer two questions:

1. How can we work around all of the problems inherent in conventional package management so we can have a smidgen of determinism?

2. How can we have a relatively nice/cohesive UI around launching/inspecting containers and managing networking/bind-mounts?

#1 can be side-stepped entirely with a package manager that doesn't cause all of those problems in the first place (Nix/Guix).

#2 is still quite useful, but would have been better served as layering on top of something like Nix or Guix (not that I really approve of the "wannabe systemd" daemon approach that Docker employs (all while being, ironically, rather difficult to use with systemd: https://lwn.net/Articles/676831/ )).

Edit: grammar


There's a little company called 'RedHat'...

https://github.com/openshift/source-to-image


Tangential, but if you want to recover logs from a stopped container, why not use 'docker cp'?


I sort of agree, but I also think there's no reason to view Docker commits any differently to Git commits.

Are you screwed if you lose a working .git project, or its maintainer? Well, yeah. But we don't actively worry about that happening all that much, because we routinely take precautions like distributing it across developers' machines and servers.

Maybe a commit does break, but that's why we didn't just overwrite the previous image.

But also:

    > I only know of a single time when it's valuable to
    > "commit" a new image (recovering logs from a stopped
    > container)
Not everything's open source.


It doesn't have anything to do with a commit breaking anything. It has to do with reproducibility. As others have mentioned, a Dockerfile isn't perfect (I'd argue Dockerfile+Nix is pretty close), but at least you don't have to ask "Oh god, what did OJ Ford run manually before committing the image we use in Prod." That's truly horrific and simply isn't a problem if you use a Dockerfile.

>Not everything's open source.

I don't know what that means. Whether the source is closed or not has zero impact on the use-cases I'm discussing for Docker. Even if the source of the base layer isn't available, it doesn't stop you from using that base layer in a new Dockerfile, at all.


Also, people forget that the Nix philosophy doesn't end with the package manager Nix; there's also NixOS, which takes a functional approach to configuration management of the whole OS. And many other things.


And nixops which allows you to build everything locally (or delegated to a build machine) and push configuration to multiple machines.


But how do you reproduce the dpkg packages? When your friend hands you a deb, instead of a docker image, what have you accomplished as far as knowing how they arrived at their configuration? Debs are also binary blobs, just more of them, and separately.

From what I can see, there are two philosophically pure approaches to software packaging. Either we can build everything from source, so that we have reproducibility (see e.g. emerge for a practical system, although it is still not bit-for-bit reproducible) or else we can ship everything as a blob so we have total consistency (e.g. Docker, Go).

The other approaches, apt and nix, seem to me to be compromise positions. They may even be practical or useful compromises. But they are ultimately about the degree to which we allow binary blobs, not whether we do.

But it seems to me that once you have accepted installing binaries via apt then installing them from Docker is a question of degree, not of kind. It seems very hard to me to argue that distributing binaries via Docker is really worse than via nix.


> Either we can build everything from source, so that we have reproducibility (see e.g. emerge for a practical system, although it is still not bit-for-bit reproducible) or else we can ship everything as a blob so we have total consistency (e.g. Docker, Go).

Nix is really both of these combined, and is not like apt/dpkg in how it separates source and binary packages.

A Nix package is really instructions (including all compiler flags) for how to build something from source [1] and nothing more. Due to being pure functions, given the binary inputs (e.g. a zlib dependency previously built) and the instructions, any two people will always get the same binary output upon executing those instructions [2].

Furthermore, without executing the instructions, I can generate the unique identifier of the resulting binary package by hashing the inputs and the instructions. This unique identifier can be used to determine if the package already exists locally in /nix/store, and optionally be used to check if some trusted binary cache (such as Nix's binary cache) already has the binary package.

There is no binary package in the sense that apt/dpkg have source packages and many derived binary packages which are installed without need of the source package.

For example (contrived and simplified):

  gcc-uniqueGccHash = bootstrappingProblem()
  libc-uniqueLibcHash = buildLibc(gcc-uniqueGccHash)
  zlib-uniqueZlibHash = buildZlib(gcc..., libc...)
  myprog-uniqueMyProgHash = buildMyProg(gcc..., libc..., zlib...)
To download "myprog" from a binary cache, I can't calculate uniqueMyProgHash using any old versions of libc and zlib; I must have exactly the same hashes that were used by the system that built "myprog", all the way up the dependency chain. And I'll only ever have that if the builds are reproducible and/or I downloaded them all from the same binary cache.

[1] In practice, there's nothing stopping the instructions from saying "download this binary blob" instead of "download this source tarball".

[2] The problem of build reproducibility is not solved, but many outside of Nix also see this as important; see https://wiki.debian.org/ReproducibleBuilds


Hm. Can you elaborate on why nix is a compromise? Nix is effectively building from source, but with lots of caching in place to avoid end users constantly having to rebuild every last package from source. Nix and Apt are very different in this regard.

>But they are ultimately about the degree to which we allow binary blobs, not whether we do.

I don't agree with this assessment of Nix and Nix packages at all. My local system evaluates the package definition (a description of how to build it from source) and is merely smart enough to retrieve a prebuilt binary with the exact same source inputs and configuration. Sure, I'm still trusting the remote party to have built it as described, but that's true of any system where pre-built binaries are distributed.

For another explanation, I could disable the NixOS Hydra Cache and could theoretically rebuild my current system state without their binary cache and still wind up with the exact same bits. At least, that's the idea behind Nix.
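
On NixOS that experiment is a one-line option away (a sketch; option name as of the 16.03-era module system):

    # configuration.nix fragment: with no binary caches configured, every
    # package in the system closure is built locally from the expressions
    {
      nix.binaryCaches = [ ];
    }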


It is a major problem - that's why debian has been on a major push for reproducible builds too.


> I wouldn't have known how to reproduce that "working state".

So what?

You don't need to reproduce that working state: The work has already been done!

Think about how the interactive approach Lisp and Smalltalk programmers use is superior to the compile/run/rerun approach used by C++ and Java programmers -- by exploring the problem in a nonlinear way, you can find your way to a solution more quickly.

Docker package management is like that: You make some changes, you commit often, fork your instance and try lots of different things. Eventually you find something that works and you tag it. You share your tags with other people. This is an extremely productive way to get work done.


> You don't need to reproduce that working state: The work has already been done!

My background is physics, and the philosophy there is "If you can't reproduce it, it doesn't exist."

In my experience with software, sharing opaque binary blobs is a Bad Thing. It doesn't matter if the blob is a proprietary firmware image or an "open source" docker image. It's a magic unreproducible image. And (to me, at least) therefore suspect.

> Docker package management is like that: You make some changes, you commit often, fork your instance and try lots of different things. Eventually you find something that works and you tag it.

Again, the physics philosophy is "If you don't know what you did, you didn't do it".

For software, programming by randomly trying things is one of the worst methods available. You should understand what you're doing, and understand the system you're building.

> You share your tags with other people. This is an extremely productive way to get work done.

Sure. It's a way to quickly create, test, roll back, and share opaque binary blobs. It's "productive" in that you get things done. But you have no real idea what you got done. You just have a magic image that "works".

I would not hire a programmer with that kind of attitude. I've been burnt by that method, and those kind of programmers, too many times.


We wrote an article about the same thing https://blog.wearewizards.io/why-docker-is-not-the-answer-to... (Tom has a background in physics as well)


You are assuming "that reproducibility is good".

I believe this assumption needs to be challenged.


Any significant contribution you might have for software engineering in regards to that?

Aside from merely believing that.

I can't even tell what you mean. If I want to rebuild my machine the same as before, or deploy 100s of servers at different points in time with the same setup, reproducibility is necessary.

There's not even one argument against that. If you have any, please oblige us.

Should deployments be a surprise?


> If I want to rebuild my machine the same as before, or deploy 100s of servers at different points in time with the same setup, reproducibility is necessary.

No it isn't.

Reproducible is talking about the process or steps needed to make a machine. You do not need to repeat the steps to produce the first machine to produce the second machine. You can simply clone the first machine, and Docker makes this very easy to do.

Keats begins his article with a statement offered without justification:

"In this post I'm going to assume that reproducibility is good and necessary."

Since I don't believe that, and in fact believe the opposite, I take issue with the entire article. Indeed, he goes on to construct a straw man, comparing the exact opposite of what I propose -- a non-interactive "dockerfile", and not even a particularly good one -- with a very well-thought-out "purely functional" build system.

> Any significant contribution you might have for software engineering in regards to that?

I don't know of any big technology company that doesn't design their live infrastructure around ephemeral machines cloned from a template.

They don't all use Docker, and they don't all use interactive development, however, but these two tools can make it easier for people who (for example) don't have Google's economy of scale.

> Should deployments be a surprise?

No they should not.

Deployments take less time with my interactive method because you do not have to wait to "rebuild everything" which can take hours.

Remember heartbleed? People using nixos had to rebuild every component that used OpenSSL, then test them, then find something else broke and repeat the process. Only once they had finished building their new process could they roll out this process to all machines, doing one more final rebuild. Even if done perfectly this has a minimum of two iterations.

I could simply create a new clone, get it working, then tag it as the next release. New instances can then be cloned from the release, monitored, and the old instances destroyed. The entire process took under an hour for the entire fleet, including reading about heartbleed on HN. This is clearly almost as fast as running `apt-get dist-upgrade` on each of the live systems, but without any of the risk.

The organisational complexity, however is admittedly much higher than "wash, wait, repeat", which is why most system administrators could not even begin to use live clones until virtualization and tooling had become much more commonplace.

> I can't even tell what you mean.

And yet you feel qualified to tell me I'm wrong.


>Reproducible is talking about the process or steps needed to make a machine. You do not need to repeat the steps to produce the first machine to produce the second machine. You can simply clone the first machine, and Docker makes this very easy to do.

Cloning is not the answer since it's all or nothing -- where reproducibility can be achieved piecemeal.

Only offering cloning means that starting with the same initial Dockerfile and making some needed updates to it, one can't be sure that the NON updated parts will be producible. So you don't get flexible reuse.

Only offering cloning also means that when you need to update only a part (e.g. because of security reasons) you can't just use the same dockerfile with the changed component and be sure that you'll get an otherwise identical system. So you don't get flexible updating.

Cloning also requires an already set-in-stone, configured system to be the prototype.

But until one reaches that, they have to experiment with various configurations (to end up with the specific Docker image they want). Without reproducibility, they can't be guaranteed that their various configurations under test are the same and only their latest changes differ. Perhaps another thing they left as-is broke too.

>I don't know of any big technology company that doesn't design their live infrastructure around ephemeral machines cloned from a template.

That's just an argument from popularity. And not necessarily a popularity they opted for, while also being provided with the alternative.

That they use "ephemeral machines cloned from a template" is just what they have to do -- not what would be ideal.

Besides, reproducibility doesn't preclude cloning as part of the process -- it's a superset of it, offering way more flexibility.

>I could simply create a new clone, get it working, then tag it as the next release.

But if you mess with the clone it's not a clone anymore. And if a whole team has to mess with the clone over some period of time to update or fix it, there's hell to keep track of what went on.

Nobody argues that you can't simply install cloned images as they are.


> Only offering cloning means that starting with the same initial Dockerfile and making some needed updates to it, one can't be sure that the NON updated parts will be producible. So you don't get flexible reuse.

What exactly do you mean by flexible reuse, and why is it good? What exactly do you mean by NON updated parts and what does it mean that they are "producible"?

I don't understand your words. Speak plainly!

I don't use a Dockerfile, and the interactive development approach means normally you start up your clone, work on it for a bit, then commit the clone.

Here's an example:

    $ docker run -t -i debian:jessie bash
    root@264b51d62d0e:/# apt-get install nvi
    E: Unable to locate package nvi
    root@264b51d62d0e:/# apt-get update
    root@264b51d62d0e:/# apt-get install nvi
    $ v=`docker commit 264b51d62d0e`
    $ docker tag $v whatever
The mistake with the first line didn't mean I needed to start over. In a realistic example that could save me an hour of rebuilding. I can push my tag to other developers I'm working with, and they can use `docker diff` to find out what the differences are.

> But until one reaches that, they have to experiment with various configurations (to end up with the specific Docker image they want). Without reproducibility, they can't be guaranteed that their various configurations under test are the same and only their latest changes differ. Perhaps another thing they left as-is broke too.

This is a danger that requires discipline. Committing instances is very cheap, so it is good practice to commit often, and keep notes (commit messages). I think version control is common enough that most sysadmins know how to do this, and other packaging systems suffer from the exact same problem.

However because it's interactive, that need to experiment is satisfied fully and the sysadmin/user gets to use all of their tools to develop the working prototypes.

> But if you mess with the clone it's not a clone anymore. And if a whole team has to mess with the clone over some period of time to update or fix it, there's hell to keep track of what went on.

This is not true.

With Docker you are encouraged to make many clones and branches and try different things out. The clones that are useful get tagged and forwarded to others.

A sysadmin who isn't using version control has other problems!

> Only offering cloning also means that when you need to update only a part (e.g. because of security reasons) you can't just use the same dockerfile with the changed component and be sure that you'll get an otherwise identical system. So you don't get flexible updating.

I don't understand this complaint, but it mentions dockerfiles again. Let me be clear: I never use dockerfiles.

If I want to update for security reasons (as I mentioned in my heartbleed example), then I will find it easier and faster to implement the fixes interactively. Having to write my recipe for nix-os and run it and wait while it runs is insecure because you are vulnerable longer.

> Besides, reproducibility doesn't preclude cloning as part of the process -- it's a superset of it, offering way more flexibility.

The non-interactive (and "reproducible") approach is slower, less secure, more prone to errors. I don't believe it can possibly be "more flexible" since the interactive approach can do everything the non-interactive approach can do and faster.

I do admit that Docker's build artefacts use more disk space and more Internet bandwidth than plain text nix configuration artefacts, but this is not fundamental to interactive development, which is specifically what I'm advocating.

> Any significant contribution you might have for software engineering in regards to that?

> That's just an argument from popularity. And not necessarily a popularity they opted for, while also being provided with the alternative.

Arguing that "nobody does it so it must be wrong" isn't any better than "just because everyone does it doesn't make it right".

I really think you need to learn about something and make up your own mind, instead of repeating "Docker is Bad" because you read something like that on a blog.

I also think you should really try to understand what I am suggesting before you go arguing against it.


> Only offering cloning means that starting with the same initial Dockerfile and making some needed updates to it, one can't be sure that the NON updated parts will be producible. So you don't get flexible reuse.

> What exactly do you mean by flexible reuse, and why is it good?

I already explained. What exactly don't you understand?

It's about being able to reuse a recipe (dockerfile, etc.) for creating your final artifact while ALSO being able to change it -- as opposed to the "take it or leave it" case with cloning some binary blob.

> What exactly do you mean by NON updated parts

I wrote "Only offering cloning means that starting with the same initial Dockerfile and making some needed updates to it, one can't be sure that the NON updated parts will be (re)producible".

My typo aside, the meaning is clear: with cloning available but no reproducibility, when you want to only change part of a container (e.g. update a specific piece of software there), you don't know (when you re-create the container) that the other parts you didn't change are as they were before.

Basically that's the very definition of non-reproducibility.

> I don't use a Dockerfile, and the interactive development approach means normally you start up your clone, work on it for a bit, then commit the clone.

That's neither a good, nor a new process. That's like 90's sysadmin work. For one, you're doing all this work manually. Second, if you want to revert/change something you made, you have to mess with your clone, potentially putting it in some weirdo state -- you basically remove the scriptability part and just mess with your clone manually. Or you write some custom scripts or use some provisioning software -- which brings you right back to the reproducibility discussion.

With reproducibility you wouldn't have to commit the clone -- just the "recipe" to make it. And you could still commit the clone if you wanted (as I said, reproducibility is a superset of merely working with clones).

> The mistake with the first line didn't mean I needed to start over. In a realistic example that could save me an hour of rebuilding.

Doesn't save you anything over reproducibility -- since the latter does not prevent cloning. But having only cloning (which is what the parent lamented) does deprive you of the benefits of reproducibility.

"In a realistic example", for example, things could get much more hairy (and many more tests and changes could be needed), beyond merely forgetting to update apt before installing nvi.

And all of these changes and process would be totally opaque to your clonable blob.

> I can push my tag to other developers I'm working with, and they can use `docker diff` to find out what the differences are.

They can use docker diff and try to guess what the differences are -- and what the intention was, etc. Because docker diff is just a record of file changes, not of procedures followed and the intentions behind them.

> This is a danger that requires discipline.

And that's (part of) the whole problem. Anything that requires discipline on the human part and that could be automated is busy-work -- and error prone.

> With Docker you are encouraged to make many clones and branches and try different things out.

Yes, but that's meant for safe-keeping, like vm-snapshots. It doesn't replace actually having a written record of what you did, what's supposed to be installed inside, and why.

> I don't understand this complaint, but it mentions dockerfiles again. Let me be clear: I never use dockerfiles.

So again, a throwback to the manual devops age.

> Arguing that "nobody does it so it must be wrong" isn't any better than "just because everyone does it doesn't make it right".

Perhaps. But I fail to see where I did that.

> repeating "Docker is Bad"

Who said Docker is bad? Docker is great. It's non reproducibility that's the issue.


> It's about being able to reuse a recipe (dockerfile, etc.) for creating your final artifact while ALSO being able to change it -- as opposed to the "take it or leave it" case with cloning some binary blob.

You must be confused. I can create and delete files on my filesystem -- changing it -- even though it is a binary blob.

Perhaps you mean something else?

> with cloning available but no reproducibility, when you want to only change part of a container (e.g. update a specific piece of software there), you don't know (when you re-create the container) that the other parts you didn't change are as they were before

I keep hearing you say when I want to re-create the container, but I don't ever hear you say why I would want to re-create the container.

> For one, you're doing all this work manually

Wrong. It's less manual work.

Writing a nixfile is manual work. It's hard manual work because none of the tools are interactive, and it's very distracting waiting for the computer to replay the nixfile between iterations.

Interactive development is superior.

> if you want to revert/change something you made, you have to mess with your clone

No I don't. I discard it.

Do you use git rebase on your own history? Or do you create a new branch with a cleaned up history?

I do the latter.

> potentially putting it in some weirdo state

I don't need to reinstall my operating system every day because I get confused about what is on my computer.

> They can use docker diff and try to guess what the differences are -- and what the intention was, etc. Because docker diff is just a record of file changes, not of procedures followed and the intentions behind them.

The intentions are recorded in the commit log. That's why docker commit allows commit messages.

Procedures are only recorded if someone records them. Why would I do that if I only have to do it once?

> "In a realistic example", for example, things could get much more hairy (and many more tests and changes could be needed), beyond merely forgetting to update apt before installing nvi.

That's what I said, but you're missing the part where I don't have to wait hours and hours and hours while nix rebuilds my system over and over again.

"Reproducibility" has enormous costs, and the value proffered can be had with better tools.

> Anything that requires discipline on the human part and that could be automated is busy-work -- and error prone.

You're not automating the nixscript-writing, so you haven't saved any work. You've created busy-work by reinstalling your operating system over and over again.

> Yes, but that's meant for safe-keeping, like vm-snapshots. It doesn't replace actually having a written record of what you did, what's supposed to be installed inside, and why.

No, it's exactly a written record because that's what the commit messages are for.

If I want to know what's inside, there are other tools (dpkg) for taking inventory.

> So again, a throwback to the manual devops age.

Nonsense.

For some reason you only want to count the time that your script is running from when it is finished, instead of the time and expertise needed to develop the script in the first place, that you only need to run once.

If you can't understand that a sysadmin's job, whether they are writing a nixfile or they are directly interacting with a machine, is manual work, then I just can't imagine how I can be understood by you.

That's insane.


> You do not need to repeat the steps to produce the first machine to produce the second machine. You can simply clone the first machine, and Docker makes this very easy to do.

That's nice for cloning; it's not reproducibility.

How do you know that the docker image was created with the right set of software? How do you know that there wasn't anything else added to it?

Copying the final image means you're just copying an opaque binary blob. You can't do the work yourself to build an image, and get the same docker image.

Something like "nix" makes that simple. Everyone can agree on the steps required to get from A to Z. Everyone can independently agree that each step looks the same for everyone.

For me, it's not about ease of use. It's about security and trust.


> How do you know that the docker image was created with the right set of software?

Define "the right set". If I ask someone to prepare something I might use

    docker export/diff
in part of the review process.

> How do you know that there wasn't anything else added to it?

    docker diff
> Copying the final image means you're just copying an opaque binary blob. You can't do the work yourself to build an image, and get the same docker image.

So what?

Why exactly do you think "copying an opaque binary blob" is bad?

Don't you ever copy files on your hard disk?

Have you ever used `dd` to copy a filesystem as a binary blob to a disk or a USB key?

Did you know "git" copies "opaque binary blobs" around? Do you really prefer RCS because the version control is stored in plain text files?

What exactly is your complaint here?

My position is simple, but apparently radical: I recommend interactive development of systems because it is better, and I define better as faster and more secure. "Reproducible" simply is not a goal.

> For me, it's not about ease of use. It's about security and trust.

You should rethink your position.

Nix packages from sources you don't trust aren't more secure than docker images (even if you use nix.useChroot=true), which, barring bugs in docker and Linux, at least get virtual memory, disk, and networking.

Nix packages built by yourself are harder to write than docker tags you build yourself.


So let's say you are deploying 10 servers. Are you fine with all of them being different?

Or, to get out of computers: if you have a drug where they say it cures cancer 100% of the time and no one can reproduce the results, is that fine with you?


Straw man.

Reproducible in this case is talking about the process or steps needed to make a machine. You do not need to repeat the steps to produce the first machine to produce the second machine. You can simply clone the first machine, and Docker makes this very easy to do.

That I can clone a machine does not mean that I can clone the interaction between a patient and their drugs, but if I could it would most certainly be preferable to trying to repeat the experiments with larger and larger populations.


> If you can't reproduce it, it doesn't exist.

I can't reproduce me, so do I not exist?


> In my experience with software, sharing opaque binary blobs is a Bad Thing

And yet you don't know why.

This is called dogma, and dogma is bad: It leads people to make stupid decisions over and over, all the while repeating that the alternative is worse.


> This is called dogma,

Then you don't know what the word "dogma" means.

What I said was "In my experience." i.e. practice learned over time.

This cannot be interpreted by any reasonable person as "blobs are bad, just 'cause."

That kind of straw man argument leads me to conclude that similar issues are behind your inability to understand the use of reproducability.


Dogma: a principle or set of principles laid down by an authority as incontrovertibly true.[1]

You aren't offering any evidence or justification besides the statement. I might as well say, "in my experience, ``reproducability (sic) is a waste of time''" -- what's the point of that?

Instead, I'm providing concrete examples of why it's a waste of time, and rather than anyone arguing the contrary, they're doing exactly what you're doing: Repeating that there's some point of "reproducability" (sic). There isn't, you're wrong. End of.

But whatever, my background isn't physics, so what could I possibly know? Running engineering teams for the last two decades that have to develop and deal with high volume, high uptime application servers only means my systems have to work and make money, not that they be "reproducable" (sic).

[1]: http://www.merriam-webster.com/dictionary/dogma


No, this is called learning from experience.

If you put your hand in a fire 5 times and it burns you 5 times, you don't have to understand anything about fire to know that you should probably stop putting your hand in. And if you saw other people about to put their hand in, you would mention to them that everytime you put your hand in, you end up regretting it.


And yet, here I have asbestos gloves and I can put them in the fire just fine. This allows me to work faster and get done without waiting a day for the fire to run out of fuel.

We don't need religion to keep from getting burned when we have tools.


And again, you don't need to have any understanding of asbestos in order for the gloves to work. You can believe that they are magic gloves from God, and they will work just as well.


Working with Lisp images: it's horrible when you lose track of what runtime change allows your code to continue working. You typically keep a working compilation between code and your intended environment, and use the run time to experiment in between changes.


In Smalltalk implementations like Pharo, Squeak etc changes are captured in a changes file. If the image crashes before being saved, it is possible to cherry pick and replay unsaved changes from the changes file.


You can do this with Docker as well: Mount two commits and diff between them, pull bits into a third.

It's also a surprisingly convenient way to build tests.


I think this all the time. I have to refresh my clojure repl because I forget what I've eval'd or not and the cognitive load of mentally tracking that state is not something I want to bother with.


Clojure is unusually leaky compared to Unix/Linux or (for example) Erlang.

If we design systems out of a collection of processes that we can replace at will, then what we actually have is a collection of checkpoints and branches that we can try simultaneously. Replacing individual branches allows us to try out changes there, without having to rebuild (and wait for) the entire system.

Another way to think about this, is that Clojure is more like subversion than like git: It might be better than RCS, but you're still stuck with a linear revision history, and new changes can't completely conflict with old objects and old changes that are created.

And yet another way, is to imagine all of those old class definitions that old objects hold onto are actually global variables, because they have global (time) extent, and yet are unreachable, so it's difficult to find out whether your new code is being affected by those old settings.

Smalltalk, and even other (esp. Commercial) Lisps don't have this problem. It has nothing to do with interactivity, but with hidden state.


This reminds me of one of the good things APL has. You can interactively change anything in your workspace, and all the things (array, function, operators) can be serialized, programmatically within the language.


How do you recommend applying security updates to the image, short of running (for example) `apt-get dist-upgrade`? What if that breaks? Do you just continue with the out-of-date image?


This is the biggie for me. When the next heartbleed comes out, it would REALLY suck to be running a bunch of opaque images that you have no idea how to patch.


That's the thing: I was surprised how many people were "ok" with that binary image that someone built on their laptop. Few saw an issue with it, and instead praised it as a great win.

I mean, it was a win, not having to untangle the hairball of rpm/deb dependencies. I like that part too. But not understanding the downside was what shocked me. It was like everyone drank the magic kool-aid potion.


> I was surprised how many people were "ok" with that binary image that someone built

Really? This surprises you?

The gross majority of the world runs software this way.

The real issues are quality control and release process.


That's an unrelated problem. Developing your systems interactively, instead of reimplementing them over and over again produces the same results, just faster. If your servers aren't ephemeral, or you only have "one machine", or if you don't know about heartbleed patching, then you're going to be fucked anyway.


> How do you recommend applying security updates to the image, short of running (for example) `apt-get dist-upgrade`?

That's exactly how I recommend applying patches.

If it breaks things you can fix it.

You don't do this on a live system: If you're smart and your live systems are ephemeral you can copy your image, get it working, verify it interactively, and once you have a working image, you spin up new instances using the new system, and start shutting down the old systems.


There's tools like zypper-docker that make this possible.


It sure sounds like you are advocating Docker as a way to produce and distribute a http://martinfowler.com/bliki/SnowflakeServer.html

We've been down this road before. Hacking something until it works, then imaging it and passing it on to your coworkers is a maintenance nightmare.


I appreciate how difficult it is to understand someone when you begin the conversation by assuming you understand them.

For example, did you read that article?

You know Docker has none of those problems:

> The first problem with a snowflake server is that it's difficult to reproduce. Should your hardware start having problems, this means that it's difficult to fire up another server to support the same functions.

    docker tag/push/pull
> If you need to run a cluster, you get difficulties keeping all of the instances of the cluster in sync.

    docker start tag
> You can't easily mirror your production environment for testing.

    docker commit/export/import
> When you get production faults, you can't investigate them by reproducing the transaction execution in a development environment.

    docker push/pull
> The true fragility of snowflakes, however, comes when you need to change them. Snowflakes soon become hard to understand and modify. Upgrades of one bit software cause unpredictable knock-on effects.

    docker history/diff
The issue that Martin is describing is one where people work on the server - updating it in place - and do not have any change management process besides "do stuff" or any quality checking process besides "check stuff".

He spent over 500 words to say that's probably not a good idea.

No kidding.

Now.

What I'm actually advocating is interactive development: The exact same way we develop our APL workspaces and our Smalltalk images, can be used to design and build server infrastructure.

Instead of a development cycle that goes:

* Edit nixfile/dockerfile/vagrantfile

* Wait (sometimes hours)

* Test things

* Repeat

You get a very short:

* Do stuff

* Get happy

* Tag the results

Then when you want to run "in production", you publish your tags/image, review the history with your team, not dissimilar to how Smalltalk people will clean up their changes file, and then introduce the new version into the cluster. Once you're satisfied the world hasn't ended, you can decommission the old images.

You can even make scripts that do many of these steps automatically (divert x% of traffic to new instances, collect results, and so on). If you have a well-designed infrastructure, you can have junior developers use those scripts to make changes on their first day.

Now.

I get it, you have realised that if you start over often enough you get good at it.

I get that you've seen people who treat systems development as a completely linear descent into unmaintainable bat barf that can only be recovered by a bad day reinstalling everything and "starting over".

I also get that you think amortising that cost by taking those hours to rebuild your entire system when glibc changes is necessary to avoid that bad day.

I also get you like static external configuration like Dockerfiles or Vagrantfiles, and view them as instrumental in the audit of that process.

However while you understood that I disagree with that, you didn't understand why: If you had a decade or so of Smalltalk or Lisp experience instead of just ruby experience, you might have known exactly what I meant, but instead of asking, you just ignored the part you didn't understand, making a straw man argument that you did understand.

After all, what's more likely, someone knows something important that you don't? Or someone knows something unimportant and just isn't as smart as you?


I appreciate your presentation of an alternate viewpoint.

The "build from source" camp provides solutions to a couple problems that at first glance seem to be difficult in a "principled mutation of binary image" model:

- Moving to a different architecture, e.g. x86 to ARM or ppcle. In build from source, I essentially change some parameters and components in the lowest level and rebuild.

- Other big changes to base components, e.g. compiling for Linux/libc vs. building atop a unikernel (obviously heavily dependent on what the upper layers are doing, but I can imagine wanting to run similar code atop multiple bases).

- Sharing common components. This is not really a "from source" thing, but is an important part of Nix. If two images include components in different orders, in Nix the common components will be shared (at the arguably coarse granularity of a Nix package), while the linear structure of Docker layers means once two images diverge, everything further down the chain will be its own copy, even if the same bits.


"Sharing common components" is useful, and the fact that any combination of Nix packages is a valid combination is not something that is guaranteed by other package management systems[1].

However there's another, perhaps simpler solution: I can have a separate "wu-ftpd" container, and a separate "hylafax" container. This admittedly might not have been as easy when nix started, but it is very very easy now.

[1]: https://cr.yp.to/slashpackage/studies.html


Since I discovered Nix, I've been using it as a package manager for OS X and Linux. Every project my team is working on now has a default.nix file. Getting started on a project is so easy for a new hire now: just one `nix-env` away from a fully working dev environment! :D
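
For anyone curious, a minimal sketch of what such a default.nix can look like (the dependencies here are made up):

    # default.nix -- one file in the repo describes the dev environment;
    # `nix-shell` (or an install via nix-env, as above) makes it available
    with import <nixpkgs> {};
    stdenv.mkDerivation {
      name = "project-dev-env";
      buildInputs = [ python27 libxml2 pkgconfig ];
    }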


By chance, do you have some public/sample projects that use this approach?


Yes! Look here: https://git.hso.rocks/hso/def

It's a Telegram bot that queries the RAE dictionary (Spanish). It doesn't have any documentation yet, though :S

You can look for the project deps code here: https://git.hso.rocks/hso


Looks great, thanks for sharing! I'd definitely try this approach.

Do you write default.nix files by hand, or are there some tools to automate the process?

I suppose for a project with a lot of dependencies (one of my current requirements.txt is over 100 SLOC long - almost all deps are on PyPI; haven't checked nixpkgs as I'm stuck with Debian at the moment) it would be somewhat tiresome to check every package and for every missing one list all the dependencies.


give pypi2nix a try and ping me on irc if it fails (nick: garbas)

https://github.com/garbas/pypi2nix


Could someone post (or link to) the history of Nix, the package manager? Who developed it, key project milestones, etc.? I understand it comes from, or led to, NixOS Linux distribution which uses Nix of course.

The nixos.org website is a bit sparse on the Nix project's history, and the same goes for Wikipedia (though it separately mentions that NixOS was a research project started by Eelco Dolstra in 2003).


I don't know if it's useful to you, but I can link you to Eelco Dolstra’s PhD Thesis (2006):

http://nixos.org/~eelco/pubs/phd-thesis.pdf

He also wrote this InfoQ article on the subject of Nix and NixOS which may have some useful information (2014):

https://www.infoq.com/articles/configuration-management-with...

The Wikipedia pages for Nix and NixOS also have some interesting references.

https://en.wikipedia.org/wiki/NixOS

https://en.wikipedia.org/wiki/Nix_package_manager

Edit: I have been playing with NixOS at home for a few months and I think it's very interesting, but I'm a bit surprised I haven't heard more about it over the past decade or so. I'm thinking that the recent industry focus on deployments and provisioning has caused more people to come across it.


Thanks for posting these links. I didn't know about the PhD thesis and the InfoQ article is the best introduction to Nix that I've read yet!


* 2003 Nix & nixpkgs Language and packages

* 2007 NixOS Linux distribution

* 2009 Hydra Continuous Integration

* 2011 NixOps Cloud deployer

* 2013 First stable NixOS branch

(from http://wmertens.github.io/nixos-cfgmgmtcamp-slides/#/1/1 - non-authoritative source :))


Nix (package manager):

- There is a version history in the release notes appendix of the manual. [0]

- As others have posted here, the project is currently hosted on github [1], though it was originally in a svn repo. [0]

- Eelco Dolstra is the active project maintainer. [2]

- First checkin for Nix was ~March 12, 2003 (in the file nix.c/nix.cc). [1]

- Version 1.0 release on May 11, 2012 (also first release from github). [0]

[0] Release Notes: http://nixos.org/nix/manual/#sec-relnotes

[1] https://github.com/NixOS/nix

[2] https://github.com/edolstra

Edit: Thanks to sibling posts for the other info they were able to dig up for us.


I was forcefully relocated from my comfortable decade-old Debian home into OS X, and the package management here is unsatisfying.

I'm looking forward to future posts that show how Nix is better than Homebrew or MacPorts, because that hasn't been demonstrated so far.

And as a bikeshed nitpick, I was always unhappy with the meaningless rpm options, preferring apt's clearer options (rpm -qa vs aptitude search). I'm a bit disappointed that nix chose to mimic rpm's style. Oh well, tomayto/tomahto.


"I'm looking forward to future posts that show how Nix is better than Homebrew or MacPorts, because that hasn't been demonstrated so far."

Homebrew is downright dangerous (I've written about why: http://inthebox.webmin.com/homebrew-package-installation-for... ).

MacPorts is fine. Not wonderful, but OK. Not reproducible in the way Nix is.

PkgSrc is also pretty good. Also not reproducible the way Nix is. But, the packaging policies are good, the packages tend to be well-thought out, and the technical solution to stuff like dependency resolution is pretty OK. The people building and maintaining PkgSrc know what they're doing.

Nix is clearly superior to all of them, however, in being reproducible and easy to roll back in a predictable way (try going backward with almost any other package manager; even RPM/yum and dpkg/apt can't do that... I love 'em both, but recognize that we now have better solutions to package management). Nix is just a fundamentally better tool.

Flatpak is another really promising packaging technology with some of the same benefits (but it's tackling things in a different way; I think I still have to consider Nix a better overall solution; it is certainly simpler in both implementation and usage, for packagers/developers and end users).

But, don't use Homebrew. For now, PkgSrc is probably the best method of getting a lot of reasonably good packages for Mac OS X. Nix will be the better option eventually, once there's a lot of people packaging for it (assuming people get over their bizarre love affair with Homebrew and start working with Nix, instead).


Homebrew is not dangerous. Please don't spread FUD.

Your link specifically covers the case of Homebrew on the server. But Homebrew isn't meant to be a server package manager. It's an OS X package manager, and OS X is a desktop operating system, not something that most people choose to use as a server (OS X Server exists but is primarily used for intranet stuff rather than as a production server). The complaints that your link has are only problems for an external-facing server. They're not problems for desktop use. And the simple answer to that is: don't use a desktop package manager on your server.


Totally agreed. Homebrew is not an appropriate tool for server package management (It assumes everything should be installed under a single user account)- but 98-99%+ of the people that use OS X and servers are not running OS X on their servers, macmini colo users excepted.

The wording "dangerous" was poorly chosen; "inappropriate for server package management" would have been better.


"inappropriate for server package management" is totally fair. That's why we try to generally default to only listening on localhost.


I had no trouble finding people talking about using it on servers during my research. I even found people wanting it and discussing porting it for use on Linux (which is among the most ridiculous suggestions I can imagine, since Linux has an embarrassment of riches in terms of good and great package management options).

So, yes, to be clear: The dangers of Homebrew are primarily when considering it for use in a server context. I personally wouldn't use it anywhere. But, it has many good qualities that many people value highly.


The people who are trying to use it on servers are misguided. The people who are trying to use it on Linux are just crazy. Homebrew itself has never pretended to be anything other than an OS X package manager aimed at desktop users.

If you want to say "Homebrew is dangerous when used on a public-facing server", that's entirely fair. But that's not what you said. I'm glad that you do recognize this distinction though, and I hope you will think twice before claiming again that Homebrew is dangerous without qualification.


There's a new commandline tool in the works that will improve all those interfaces; I'm not sure how far along it is, though. Here's the discussion: https://github.com/NixOS/nix/issues/779


To continue picking nits, that's apples/apple pie, not tomatoes. rpm's equivalent is dpkg, not aptitude (whose match would be yum/dnf/zypper).


I heard the hype. Figured I would give it a try. Installed emacs with nix. Installed spacemacs on top of the new emacs. It threw odd errors. Uninstalled emacs with nix.

Installed emacs with brew. Installed spacemacs on top of emacs. It works!

To pull users from brew it needs to work at least as well, especially for common tools like emacs. Go check out the number of issues related just to emacs installs, it is kind of amazing.


Spacemacs is pretty complex with its own updating logic too. We really want to package it but I don't think anybody had gotten around to it then.

Just about everything else I'd like to use is packaged, so I'd like to think we're over the adoption/package-count chicken-and-egg problem.


Try pkgsrc. It's much more like apt than nix, macports, or homebrew.


pkgsrc is great and I prefer it to Homebrew on Mac OS X, but it only updates quarterly. :(


Not true any longer, at least for the binary packages I produce[0]. I switched them over to trunk-based builds a while ago, which are updated every few days.

[0] https://pkgsrc.joyent.com/install-on-osx/


How is the situation with python3 on OS X? Last time I used pkgin, it was not compiled as a framework, preventing some libraries from working properly.


They're all available:

    $ pkgin se python3
    python35-3.5.1nb2    Interpreted, interactive, object-oriented programming language
    python34-3.4.4       Interpreted, interactive, object-oriented programming language
    python33-3.3.6nb3    Interpreted, interactive, object-oriented programming language
along with a bunch of pre-packaged modules:

    $ pkgin avail | grep ^py3 | wc -l
        1412
Let me know if anything you need is missing and I'll add it.


Thanks. I had to look back at my notes. I had issues with matplotlib and a few other packages at the time with python 3.4, due to python not being built as a framework.

I have just found a related issue: https://github.com/joyent/pkgsrc/issues/331 Unfortunately, from what I read there, I still have to use homebrew or install from python.org (with those I have no issues). I really enjoyed pkgin otherwise.


    apt-cache search
aptitude isn't necessarily installed.


<even more tangential>

The new apt, if you have it, now combines apt-get with apt-cache plus a bunch of UI improvements.

    apt search ...
    apt install ...
If you've got it, it's well worth a look.


I recently set up a new server for a toy project with nixos and it was a pleasurable experience, even if I did have to pull together several sources of somewhat thin documentation. Recommended. Next time I set up a new dev machine I'm going to give nix a try for dev environment management.

Using nixops to push a closure from your local system up to ec2 is very pleasing. It feels like the future.
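For anyone curious what that looks like, here's a minimal sketch of a nixops network file for EC2 as I understand nixops 1.x (the region, instance type, and key pair name are placeholders, and AWS credential setup is omitted):

    {
      network.description = "toy web server";

      webserver = { config, pkgs, ... }: {
        deployment.targetEnv = "ec2";
        deployment.ec2.region = "us-east-1";       # placeholder
        deployment.ec2.instanceType = "t2.micro";  # placeholder
        deployment.ec2.keyPair = "my-keypair";     # placeholder

        services.nginx.enable = true;
      };
    }
Something like `nixops create ./ec2.nix -d demo` followed by `nixops deploy -d demo` then builds the closure locally and pushes it to the instance.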


I did this too, until there was a glibc vulnerability and it took ages for a 'patch' to come out that had to be installed in an obscure way.

For a toy project this is not a problem of course. But before I'd use nixos in production, it needs a better story for security patches. If you try to look up the word 'security' on their wiki you get viagra spam, and there is no security mailing list whatsoever, IIRC.

I'm sure this will improve over time though. And for development, nix is still a saviour.


In Guix, we address this problem with grafting.

Rather than rebuild everything that depends on the fixed glibc, we build the fixed glibc, then rewrite all references to the old glibc to refer to the new one. That rewriting happens in new copies of the referring packages, of course, since the "store" is immutable.

https://www.gnu.org/software/guix/manual/html_node/Security-... https://savannah.gnu.org/forum/forum.php?forum_id=8470


Nix has a way of doing this as well, just needs a better UX (like most of nix).
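If I'm not mistaken, one way this surfaces on NixOS is the `system.replaceRuntimeDependencies` option in configuration.nix; a rough sketch (the patch file name is a placeholder):

    system.replaceRuntimeDependencies = [
      {
        original = pkgs.glibc;
        # Same glibc derivation with a security patch applied on top.
        replacement = pkgs.lib.overrideDerivation pkgs.glibc (old: {
          patches = (old.patches or []) ++ [ ./glibc-security-fix.patch ];
        });
      }
    ];
This rewrites references to the old glibc in the system closure without rebuilding everything that depends on it, much like Guix's grafts.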


I was quite surprised that `nix install vim` didn't work out of the box. More so after realising that there's not even a `nix` command.

I see that there's a bunch of `nix-whatever` tools around, but even their names aren't particularly intuitive.

I don't doubt that nix is technically superior to homebrew, but UX-wise it has been a disappointing experience for me.


What Nix for Mac needs more than anything is a great, simple overview of how to use it in practice, for casual users. I use it as my main package manager, but I install software with it rarely enough that I need to dig around in the manpages every time.
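For what it's worth, the handful of commands I end up re-looking-up boils down to roughly this (the channel name `nixpkgs` is the default for a standalone install; yours may differ):

    $ nix-env -qaP | grep -i emacs  # search available packages (shows attribute paths)
    $ nix-env -iA nixpkgs.emacs     # install by attribute path
    $ nix-env -e emacs              # uninstall
    $ nix-env --rollback            # undo the last change to the profile
    $ nix-collect-garbage -d        # delete old generations and unreferenced store paths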


> just needs a better UX

Luckily guix is a better UX for nix. =)


Doesn't this affect the promised reproducibility of builds?


You end up with a new, grafted dependency graph, which can be reproduced just like the un-grafted graph. Is there some other reproducibility issue you see?


No, there's no compromise there.


NixOS is pretty awesome :) For anyone trying to start with NixOS, I suggest trying it out on a small DigitalOcean server. Check out this[0] blog post for some hints on how to install NixOS there.

IMO, a big help is to get a solid understanding of how nix itself works[1]. Also check out some existing `configuration.nix` files on GitHub. Just google for "github configuration.nix". This might give you some inspiration for what you can do with your `configuration.nix`.
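To make that concrete, a hedged, minimal sketch of what a `configuration.nix` can contain (the hostname and package list are placeholders; the options themselves are standard NixOS ones):

    { config, pkgs, ... }:
    {
      imports = [ ./hardware-configuration.nix ];

      networking.hostName = "example";   # placeholder
      services.openssh.enable = true;    # handy for a headless server

      environment.systemPackages = with pkgs; [ vim git ];
    }
Running `nixos-rebuild switch` applies it atomically; if the build fails, the running system is left untouched.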

If you have some trouble with customizing vim on NixOS/via nix, checkout this[2] blog post I wrote a month ago. Hope it helps :)

[0] http://blog.tinco.nl/2016/02/05/nixos-on-digital-ocean.html [1] https://nixos.org/nix/about.html [2] https://www.mpscholten.de/nixos/2016/04/11/setting-up-vim-on...


Is there a reason you suggest trying it on DO instead of a local VM?



Why use nix when pkg-src has been available on Darwin for over 15 years, boasts more packages, and has decades of support (plus Jonathan Perkin and the guys over at Joyent)? Why people assume the gross project known as homebrew (and the better macports, where homebrew gets all of their real package patches) are the only two package managers on OS X/Darwin is perplexing to me... Not to say nix is not interesting, but still. The OS X open source/package community would be insanely better if it was solely centralized around pkg-src.

https://pkgsrc.joyent.com/install-on-osx/

http://github.com/cmacrae/saveosx


Hmm.. I've never tried pkg-src before. I'll give it a try.

Step 1: I have to download a gig of stuff before I can install any packages? Even though my drive is big nowadays, that still seems like a bit of a waste...

Step 2: Let's try building a package, the last thing I installed with homebrew.

    > cd math/minisat
    > make install clean
    Makefile:27: *** missing separator.  Stop.
    > make
    Makefile:27: *** missing separator.  Stop.
Hmm.. this isn't the most user-friendly system...

UPDATE: I have to use the bmake I bootstrapped, didn't realise (and the quick-start docs didn't mention it). Now building begins, but the executable doesn't link.


Use the `pkgin` binary to install binaries built by Joyent (https://pkgsrc.joyent.com/packages/Darwin/); no need to clone the package tree. That fixes step 1. `sudo bmake install clean` fixes step 2 (/opt needs superuser permissions). See the Joyent link in particular to see how easy it is to install pre-compiled binary packages with pkgin.
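In other words, with the Joyent binary repository bootstrapped, installing something is roughly (assuming minisat is in the binary set; if not, you'd fall back to building from pkgsrc):

    $ sudo pkgin update             # refresh the package database
    $ sudo pkgin -y install minisat # install the pre-built binary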


Does it allow you to install several versions of a package and use everything down to compiler version and compilation flags as dependencies?


Maybe it's partly about avoiding a monoculture, and the user experience with pkg-src on OS X was very bad for years. Nix introduces modern concepts that pkg-src will never have without a complete rewrite, so we need to look forward.


I'm going to give it a try for a while, thanks.


Homebrew and MacPorts users both patch upstream projects and share patches between projects.


Both of your links are broken.


oops, copied it over from my comment on the article itself. fixed now


Anyone who's used Nix for a while -- how does package availability compare to Homebrew or Macports? Other than the default.nix thing (which Homebrew also has a version of [1]) are there any other clear benefits to switching from Homebrew?

[1]: https://robots.thoughtbot.com/brewfile-a-gemfile-but-for-hom...


The last time I tried (a month ago), I could reproduce most of my current Homebrew packages. There are a few that are broken (e.g. rust, which has since been fixed), or that do not have Darwin support (aria2c, for which I have a PR open[1]). There are also packages that are a little bit hard to discover, e.g. those that are imported from Haskell's Hackage, which are not listed by default[2] when you do `nix-env -i <packageName>` (e.g. shellcheck).

From my few days of trying to replace Homebrew with Nix, being able to roll back the environment to the last known good state (`nix-env --rollback`), keep track of package changes (`nix-env --list-generations`), and try a package without installing it (`nix-shell`; well, technically it's installed on the machine, but nothing is linked into the profile, and you can nuke it with `nix-collect-garbage`) are very nice additions.

Apart from these features, the fact that it doesn't chown `/usr/local/` to my user is a big plus. Nix also won't link anything into your profile that you didn't explicitly install (e.g. if package A has B as a dependency, B won't be linked until you actually choose to install it, and that copy is managed separately from the B that was pulled in as A's dependency).

That said, I've since reverted back to Homebrew because a few packages that are crucial to what I'm working on are broken. Creating and fixing a package is not too hard (I did it with aria2c), although time didn't allow me to do it back then. I would love to try switching to Nix again sometime.

People in ##nix-darwin are super-nice, also! :-)

[1]: https://github.com/NixOS/nixpkgs/pull/15029

[2]: https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell...


Thanks for the review. I think I'll definitely have to give it a try next time I've got a free weekend.


The default.nix thing is nothing like what Homebrew has. Not really kidding here: nix is to Homebrew as git is to svn. Very different in philosophy.

I can set up scripts that install and configure things for emacs to make sure it always runs in an environment that has all it needs, but with the benefit of not having to have everything installed globally.

I can have multiple versions of whatever installed into /nix at once, and still not have to worry about breaking things when different tools need different versions.

As for package availability, I've not hit any issues. It has most of what I personally need.

What I hit more often is random breakage or having to update derivations that only ever got created or maintained from a linux perspective.

I'd say it's in the "developer not afraid to fix broken build problems" category as a package manager. It's not bad, but it is really annoying when you have to revert to a prior generation because some package you use got updated and now won't build because its test suite is failing.

Random stuff like that happens a few times a month, nothing huge though honestly.


For me the biggest benefit is reproducibility. I can get a package to build on any machine exactly the same and with no hidden dependencies.


That does seem pretty cool, and I also see it works out-of-the-box on Linux machines. It doesn't seem to have a homebrew-cask equivalent though, which is a bit of a bummer. (Makes sense though since GUI applications aren't really cross-platform.)

How does it handle OS-specific dependencies/distinctions? Or if e.g. a package requires ruby or python to be installed?


We use it on Python projects instead of virtualenv. A problem that you sometimes have with virtualenv is that you must install libpq-dev and python headers and what not to install from pip if the package has native extensions.

With Nix you just declare postgresql as your package inputs and you are done. All needed parts to build and install psycopg2 are there.

Your package/project is completely isolated from your OS libs.
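A hedged sketch of what such a declaration can look like (attribute names are examples; adjust for your Python version):

    # shell.nix
    with import <nixpkgs> {};

    stdenv.mkDerivation {
      name = "myproject-env";
      buildInputs = [
        python27
        python27Packages.virtualenv
        postgresql   # provides pg_config and the libpq bits psycopg2 needs
      ];
    }
Entering the project with `nix-shell` puts those build inputs on PATH, so `pip install psycopg2` inside the virtualenv just works, without touching system-wide libs.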


I've been able to replace _most_ of what I wanted from homebrew with Nix. I still keep brew around for a few things that haven't been packaged (or packaged properly) for Darwin in nix, like bazel and aptly. Works out pretty well overall.


You can browse available Nix packages here:

https://github.com/NixOS/nixpkgs/tree/master/pkgs

...and search here if you're looking for something in particular:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...


Nix is impressive, NixOS is more impressive still.


It seems like when installing Nix, it needs to create a 'nix' directory in your root directory. While I haven’t tried installing Nix yet myself, I’m guessing this probably won’t work on OS X v10.11 El Capitan due to System Integrity Protection¹. You’ll have to temporarily turn it off² first before attempting to install Nix (and then optionally turn it back on after).

――――――

¹ — https://derflounder.wordpress.com/2015/10/01/system-integrit...

² — http://www.madrau.com/support/support/srx_1011.html


Nope, System Integrity Protection only affects you if you delete/modify system level files. It does not prevent you from creating a `/nix` directory with regular root privileges, nor does it prevent you deleting it with root privileges.

It's only when you try to delete something like /Library that it then complains.

This is one reason why I strongly believe SIP is blown out of proportion. I've used El Capitan since the betas and created root directories, deleted them, added to existing root directories without having to disable SIP.

Again, it's only when you try to modify the built-in directories inside / that it complains, and frankly, I am more than happy with that, even with sudo privileges.


I have SIP enabled and I was able to install nix just by running `sudo mkdir /nix` before installation. No problems.


Ooh, that's an excellent point -- thanks for bringing it up. I've had Nix installed since early 2014, so I haven't run into any trouble with SIP yet. Looks like we might need to update our docs, and maybe have the installer give a helpful tip.


Unless installing to /nix modifies system files/folders, then no, you don't need to modify your install instructions.

SIP only prevents users from modifying system-level files. It does not prevent you from adding to, or deleting, your own files within the `/` directory.


No need, `sudo mkdir /nix` works even with SIP enabled.


Is there any reason to switch if homebrew worked perfectly for me in the last couple of years?


The ability to install several versions of any package, and the guarantee that nothing can break already-installed packages, are two of the main points of Nix.


none, which is why this article is so vague.

like i needed another folder in root, as if `/opt/`, `/usr/local/` or `~/.local/share` weren't good enough and this is somehow a feature. another package manager that has less features than homebrew? oh, like fink and macports - no thanks.


It would appear that your primary takeaway is that we put stuff in a different directory, for an apparently arbitrary reason. I agree, that would be silly -- if it were the case.

With /usr/local/{bin,lib,include}, how would you install multiple versions of openssl? Would you violate convention and do something like /usr/local/openssl-x.x.x/{bin,lib,include}? How do you avoid file path conflicts? When a package is built against a particular version of openssl, how do you ensure that version of openssl sticks around, and that, upon execution of said program, the dynamic linker loads specifically that version of openssl and not one of the others?

While I encourage you to consider those questions, the short answer is "you can't". At least, not with conventional package managers like Homebrew.

Because Nix builds each package in a unique prefix (governed by the hash of all of the build inputs, recursively), the Nix package manager can use chroots (and Sandboxes on OSX) to ensure that packages are deterministic, and that each package refers specifically to the versions of the dependencies they need (dynamic libraries are fixed via rpaths and we apply patches for references to executables -- so "python -m foo" becomes e.g. "/nix/store/<HASH>-python-2.7/bin/python -m foo").
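You can see this pinning for yourself on OS X (illustrative; paths, hashes, and versions will differ on your machine):

    # Inspect the linked libraries of a Nix-installed binary.
    # The non-system dependencies show up as absolute /nix/store/<hash>-... paths,
    # not as bare names resolved against /usr/local/lib at run time.
    $ otool -L ~/.nix-profile/bin/git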

If you don't use a unique prefix per package, the problem of "how do I install stuff without worrying about path collisions amongst the multiple versions of requisites" becomes intractable. Ditto for build determinism, binary caches, lightweight environments a la nix-shell, etc.

I suppose we could shoehorn each package into /usr/local/<HASH>-<NAME>-<VERSION>/{lib,bin,include}, and then we could tick the "use an existing directory under /" box, but is that really better? I don't see what we gain from that, since that's already an abuse of /usr/local.


Thanks for the enlightened response to my rather curt post.

I guess my main issue with the original article was that if you don't already know what nix is, the benefit is not at all obvious, especially for the (above?) average joe for whom homebrew works fine. For a sysadmin who is sick of dependencies breaking, even when using something like aptitude, it sounds perfect though.

Although after skimming the nix docs [1] and reading your reply, I'm still not sure why e.g. `/opt/nix/` wouldn't work. Maybe it would, except for the fact that `/nix` was decided on and if you now change that, the prebuilt packages break (see [1])? Obviously, it isn't a deal-breaker, but e.g. consider an existing backup solution which might have been configured inclusive, not exclusive.

Also, I get that as a package manager, nix is in a different position than most programs, but putting stuff in `/` isn't a great precedent, and other package managers don't do that, AFAIK.

[1] https://nixos.org/nix/manual/#ch-about-nix


Great cliffhanger for the next article at the end.


Having set up my last company to do all development and deployment using nix, moving to a new team that just uses pip/virtualenv and a "bootstrapping script" feels a little unnerving to start with.


Are there any good guides to getting started on nix with python?


The nixpkgs manual has a section on it https://nixos.org/nixpkgs/manual/#sec-python but the key to it is that it's really mostly about managing your PYTHONPATH. You wouldn't typically install a python package and just expect it to be available to a session-launched python invocation (because PYTHONPATH is not exported to the user env of a profile); instead, you would probably use nix-shell or a custom-built python environment with the exact package setup you desire...
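A hedged sketch of the "custom-built python environment" route (attribute names as I recall them from the nixpkgs manual; the package list is just an example):

    # python-env.nix
    with import <nixpkgs> {};

    python27.buildEnv.override {
      extraLibs = with python27Packages; [ numpy requests ];
    }
`nix-build python-env.nix` then gives you `./result/bin/python` with those modules importable, without exporting anything into your profile's PYTHONPATH.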


I demoed NixOS a few years back when I was wrangling with the Haskell ecosystem.

I was pleasantly surprised, as I could roll back the entire state of my system from GRUB itself if anything went screwy.


> curl https://nixos.org/nix/install | sh

Stop doing this.

Look, even if you are rolling your eyes and thinking, "it's https and I'm not Ed Snowden, I think I can afford the risk for the benefit of an easy install process", what happens if curl is interrupted? Are you excited at the prospect of a half-run install script that you didn't even look at?


No [1].

If you are installing software from scratch, you must trust the https server that serves it to you.

They could publish sha256 check sums to https://nixos.org/hashes, but you would have to trust the https server.

They could publish their gpg key to https://nixos.org/gpg, but you would have to trust the https server.

Things would be better if we had a reasonable certificate system for verifying open source software, but we don't. There is no gpg web of trust. There is no hierarchical CA system for code signing that open source authors can easily plug into, and expect their users to verify.

If you care this much, stop astroturfing and go build a system that does better. It will be very hard work, and you will have to distribute the first version of your solution with an https server that your users will have to trust.

[1]: Ok, so maybe the instructions could be "curl <url> -o install.sh && sh install.sh".


> If you are installing software from scratch, you must trust the https server that serves it to you

This is incorrect - you are forced by `curl | sh` to trust all of the 1000+ CAs in your OS' keystore. And if you can't trust all of them (since some have wrongly issued certs for Google domains before, you can't), how can you trust what you're getting over HTTPS?

With hashes, they are not typically served from the same domain. That's the minimum level of protection for the user.

Signing with a key is another level of protection; it lets the user (not the OS) decide the level of trust, and even verify the key with the author of the package via a completely separate channel.

Now the question really becomes: why doesn't the provider use the existing (secure) channels of software distribution? RPMs, DEBs, installer packages for Macs, MSIs for Windows?


> you are forced by `curl | sh` to trust all of the 1000+ CAs in your OS' keystore. And if you can't trust all of them

If you don't trust them, they should not be in your certificate store. That's not a problem with using `curl | sh`, it's a problem with the certificate store and what the user is trusting.


That's not the only thing you have to trust.

http://ariya.ofilabs.com/2016/05/nix-as-os-x-package-manager... is served over HTTP. Saying "Copy and paste this `curl | sh` command" allows this attack: http://thejh.net/misc/website-terminal-copy-paste


What do you find inadequate about PGP key-servers?


So you pull the package down and the signature checks out to some key on a pgp key server. How do you know it's the correct key?


Generally, the key already exists on your system, as it's coming through the official channels. If you decide to add the cert manually, you can inspect the key and see who has signed it, and decide your level of trust.

With HTTPS, you only ever have the choice of trusting the 1000+ CAs in your keyring.


> If you decide to add the cert manually, you can inspect the key and see who has signed it, and decide your level of trust.

No one has signed it, because the mythical web of trust does not exist. What now?


Contact the developer (say, over IRC, Slack, email, GitHub, etc), and ask them to sign a sentence you provide with it. Then you can sign their key.


Where is the web of trust that lets me cryptographically verify that my install.sh file was published by some PhD student in the Netherlands? And unless I trust the https server, how do I know which PhD student in the Netherlands I'm supposed to trust?

If this were software that I had to install multiple times (say there are several updates a year), it might make sense to implement TOFU with a published GPG key -- each successive update would use the same key I imported the first time. However, the nix installer installs a package manager which is self updating, and therefore implements TOFU itself.


The script doesn't run if the download is incomplete. Go read it yourself.

What attack vector would you actually be protecting against if you downloaded a .tar.gz full of binaries and extracted it to /, then started installing things with it?


Good point, I didn't look at the script. My point about half-downloads is bogus.

Kind of a funny situation. If you look at the script first, you'll notice that it would have been okay for the download to be interrupted and so, assuming everything else is kosher with you, the '| sh' would have been "safe".

And if you don't look at the script first, you'd never know that what you just did was "safe."

Schroedinger's-pipe-to-sh.


I think this is just "good design"; you expect tools to be designed to handle failure in a reasonable way.


You might expect it, but if you're blindly running `curl | sh` (as recommended), you can't be certain.


On that note, is there some valid reason Nix absolutely needs access to / to create /nix? Why can't it live somewhere else? I believe some parts can go anywhere, but /nix is hard coded for the rest, which I'm not entirely comfortable with.


The pre-built binary packages available from the Nix project are all compiled for /nix; this choice is baked in to the binaries, which refer to shared libs via absolute paths for example.

I understand this path is configurable, but you'll be compiling everything yourself, unable to use Nix's binary cache. You also risk running into unique problems, since everyone else is using /nix.


Yep, additionally, /nix is really no different than /usr/local here. Just in a different spot.

I'd recommend everyone try nixos out as well to see why everything (mostly) in /nix is a good idea.

nixos is easily the most un-unix unix I've seen, and honestly I can't wait to get rid of apt/rpm/yum/zypper. Not dealing with /some/file/blah being version X or Y in general is so freeing.


> I understand this path is configurable, but you'll be compiling everything yourself, unable to use Nix's binary cache

Not necessarily. Homebrew compiles stuff for a special prefix then substitutes your own prefix when you install the binary. It doesn’t work with everything (that’s why some compiled packages are available for /usr/local only) but it’s a start.


Nix uses rpaths on Linux (install names on OS X) to fix the precise location of dynamically linked dependencies, and it also patches the paths to executables (e.g. in string literals or shebangs) needed at run time. To change the location of a binary, one would have to patch the dependents so that /nix/store is replaced with the new prefix, which is generally not possible to do, unless the new prefix has the same length as the old prefix - in which case you can just sed every file.

Homebrew just assumes that the needed executables will be available on $PATH and that the dynamic linker will find libs in /usr/lib or /usr/local/lib.

So yes, you really would need to recompile everything if you change the prefix from /nix/store to something else.

Edits: fixed spelling and structure from originally typing via phone.


All Nix packages are built with full-path references to other packages in Nix - if you move it elsewhere, the result is that you can't use the binary cache built by Hydra, and you have to build everything from source.


I don't know for sure, but I think it's installed under /nix with multi-user Nix installs in mind. With Nix, any system user can install their own set of packages, and on NixOS there are build users that build packages on behalf of normal users.


There is even a server-side timing attack exploiting "curl | bash" that was recently posted on HN [1].

[1]: https://news.ycombinator.com/item?id=11532599


This is an irrelevant "attack." If you don't trust the site to serve you a safe installer, then you shouldn't trust the site to serve you a safe program either. Do you regularly verify the entire source code of programs you install?


> what happens if curl is interrupted?

Absolutely nothing. The script is written intelligently, so an interrupted download just results in a syntax error. Specifically, the entire file is wrapped in {}, and if the terminating } is missing, sh will throw an "unexpected end of file" error instead of executing anything.
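For reference, the pattern is roughly this (a simplified sketch, not the actual installer):

    #!/bin/sh
    # Everything lives inside one compound command. If the download is cut
    # off before the closing brace, sh reports a syntax error and executes
    # nothing, rather than running a truncated prefix of the installer.
    {
        echo "unpacking tarball..."
        # ... rest of the install steps ...
    }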


The install script (for Nix, at least) is written such that it will not execute if partially downloaded, so that is less of an issue.

Though I do agree wholeheartedly that piping scripts to shell, sight unseen, is a remarkably awful idea.


Yes, I'm uncomfortable doing this as well. Not only with Nix; I really think you shouldn't do it, ever. On Ubuntu/Debian, at least, I'm installing the .deb package.


Take a look at the install script. It's not that complicated; it pretty much only downloads a tar.gz. So why use curl?


Did you know that a malicious server can detect if you are piping to a shell and serve a completely different thing? D:


How so? By looking for a curl UA?


No. The shell is slower processing the download. You can detect this on the server. See https://news.ycombinator.com/item?id=11532599


> It's not so complicated, pretty much only download a tar.gz. So why using curl?

Because it goes further than just downloading a .tar.gz. It checks your OS and its dependencies, downloads the right tarball, checks that it has an install script, and runs the install script.


What happens when packages need to store data?

For example, installing Datomic via Brew will result in databases being stored in:

/usr/local/Cellar/datomic/0.9.5206/libexec/data

How would that work with Nix? Would the data live under /nix?

And most importantly, would that data get squashed because of some kind of idempotency assumptions that Nix might make?


>would that work with Nix?

Yes.

>Would the data live under /nix?

No, Datomic will be built with another default data directory.

>would that data get squashed because of some kind of idempotency assumptions that Nix might make?

No, since data is stored separately.


/nix is mounted readonly, only the special nix user can write to it. Data goes in /var or similar.


This is amazing. As someone who hates clutter and having to use sudo for anything, Nix is just what I needed!


Is there a way to install packages in user space without proot? Why does it need /nix?


You can install packages by installing nix into your homedir. But you will never get binary packages this way.

The reason is that when the software is compiled, it's stashed in the /nix directory, and might refer to other artifacts in /nix. For example, libz.so exists in /nix/store/zlib-.../lib/libz.so, and links against glibc, which exists in /nix/store/glibc-.../lib/libc.so.6. So if you want to install zlib and get binary packages, you need to first install glibc. And you need to put glibc in the place that zlib will expect it.


Wait... binaries inside a ~homedir/.dotfile/? Why on earth?


I believe those are just symlinks to the binaries in /nix/xxxx, so that $HOME/.nix/bin can be added to PATH.


I believe they're just soft links to the latest/appropriate version of the executable under the /nix/... directory.


Just curious, why would a package manager using a folder under the home directory be weird?


Only two results about nix on windows, and not very enthusiastic ones. Too bad.


This sounds exactly like what homebrew does. It also installs under /usr/local and does not touch anything else. What does nix package manager provide that homebrew does not? (from a user point of view)


Nix groups all of your installed packages in one environment. When you install a new package, it creates a new environment with the new package added to it. This alone may sound insignificant, but it enables having multiple nix environments (e.g. one per project) and switching between them very quickly and easily.
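Concretely, that environment history is exposed through generations and profiles; a few illustrative commands (the profile path is just an example of a per-user profile):

    $ nix-env --list-generations      # every state your environment has been in
    $ nix-env --switch-generation 42  # jump straight to an earlier state
    $ nix-env --rollback              # or just step back one
    $ nix-env --switch-profile /nix/var/nix/profiles/per-user/$USER/project-x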


Can you give me an example of projects which require this setup? (Even a personal anecdote will do)


I've never heard of nix, so I can't comment to that specifically.

But being able to have multiple / separated environments on my machine would be hugely beneficial.

Working for a full-time software consulting agency, I'm normally actively working on many projects at the same time, each of which has its own nuances in the packages that are required (e.g. different versions of PHP, different sets of dependencies, etc.) So if nix truly offers seamless switching between environments (and if it can do it quickly and efficiently), it would definitely be worth it for me to look into it further.


It utterly does!


Here is a personal anecdote: Managing multiple versions of openCV on OS X. OpenCV has a long laundry list of dependencies and trying to get two versions to build can be a nightmare. (It's much easier to just update one of your projects to use the same version of OpenCV as the other one.) With Nix each of those OpenCV versions can live completely isolated from each other so their dynamically linked dependencies don't get overridden by each other.


Can't reply to sibling post for whatever reason.

> Working for a full-time software consulting agency, I'm normally actively working on many projects at the same time, each of which have their own nuances of packages that are required (e.g. different versions of PHP, different sets of dependencies, etc.)

I don't think this is what Nix offers. This is more like a homebrew competitor (I assume you don't use homebrew to install php...).


Yes, this is what Nix offers. You can trivially have multiple versions of stuff installed without conflicts. Personally, I have multiple versions of the GHC Haskell compiler, Clang/LLVM, Go, Node, etc. In fact, I keep a shell.nix file in each of my repositories at work, and all I have to do is run $ nix-shell in each project root and -- like magic -- I have all of the compilers/environment-variables/etc set up for work. Meanwhile, outside of that terminal window, my system is untouched.
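For a flavor of it, a hedged sketch of such a shell.nix (the tool list and variables are examples, not my actual file):

    with import <nixpkgs> {};

    stdenv.mkDerivation {
      name = "work-env";
      buildInputs = [ ghc go nodejs ];

      # Attributes here are exported as environment variables inside nix-shell.
      GOPATH = "/tmp/go";   # placeholder
    }
Running `nix-shell` at the repo root drops me into a shell with exactly those tools on PATH; exiting it leaves the rest of the system alone.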

Nix is quite different from just about any other package manager out there. If there's something that seems unfeasible or impossible, please feel free to ask specific questions to challenge my assertions. Nix is so different that it will be tempting to think "surely he means... no, that can't be possible, he must be mistaken... but how would that work?" Rather than silently thinking those things, please feel free to skip right to asking "how?"

(For more details, see also my long post to your original statement/question)


According to zachrose though (GP) it's exactly what it's supposed to do.

See also this comment (great uncle post?) https://news.ycombinator.com/item?id=11773206

It talks about being able to set up an instance of emacs referencing a specific nix environment.

Edit:

This comment also talks about using it to solve virtualenv shortcomings. https://news.ycombinator.com/item?id=11773036


Hi, I'm a NixOS/Nixpkgs comitter and package maintainer for plenty of stuff -- including X11/XQuartz for OSX.

Nope. They both do package management, but that's where the similarities end.

Homebrew packages are not patched to point at precisely the version of the stuff they were built against. For example, if you have a program that dynamically links to openssl, that program is built with the expectation that the dynamic linker will be able to find e.g. libssl.dylib in either /usr/lib or /usr/local/lib (the specifics are hairier, but that's a decent approximation). If you later upgrade openssl and the ABI changes, you've now silently broken everything that uses it -- you'll now get a segfault at runtime (I've had this happen with openssl and many other libs on Homebrew, and it is part of why I was (and still am) very excited about Nix).

When you compile a package with Homebrew, you're not guaranteed to end up with the same result as someone else, even if they have the same checkout of the formulae. Why? Well, each project's Makefile (or what have you) will try to run whatever's on $PATH, and poke around elsewhere on your system to e.g. automatically enable build flags (oh, luajit is installed? I guess I'll just go ahead and enable Lua scripting and link to it).

How does that compare with Nix?

Nix packages are patched so that their runtime dependencies (dynamic libraries, programs to be execed, shebang lines, etc) are locked down to the precise build that was specified as part of the package. The obvious implication here is that Nix packages can be trivially installed with differing versions of dependencies as necessary (back in the day, I wanted to play with both the Elixir language and the Riak DB, but they required two different major versions of Erlang. Unfortunately, I could only install one version, as the packages for both versions wanted to be placed in the exact same prefix and the filenames would overlap. This was on Ubuntu, but it applies equally to Homebrew).

When compiling a Nix package, I can rest assured that the build artifacts will be equivalent between two different machines (not bit-for-bit, but at least functionally equivalent -- excepting compiler bugs). Why? Because the build happens in an isolated environment where, as far as the build process is concerned (we take advantage of a number of kernel features on Linux (chroot) and OSX (Sandboxes)), the system only consists of the packages explicitly listed as build inputs -- nothing else. Excepting esoteric stuff like someone intentionally trying to subvert chroots and such, if you were to put a gun to my head and tell me that you were going to shoot me if you could find a case of non-determinism, I'd shrug. We're serious about that whole determinism thing -- it's a core selling point of Nix, and the reason for Nix's "quirks" (the per-package prefixes, hashing of build inputs, chroots, lack of network access, etc).

If the above doesn't make the differences obvious, I'll challenge you to answer some questions (the socratic method):

1. How does Homebrew support having two versions of openssl installed concurrently? Describe the paths involved, how the dynamic linker would be guaranteed to load the respective version of libssl, etc.

2. How does Homebrew ensure packages are built exactly the same way across two different machines (e.g. using the same version of autotools, clang, make, etc)? Be sure to explain how the build is guaranteed to not auto detect/enable stuff because of things present on my system that may not be present on your system. Note that Nix will deterministically fail if you fail to explicitly list all of its dependencies (rather than working because, say, cmake just so happened to be installed on the formula author's machine) -- so you'll need to describe how Homebrew achieves this when running a formula.


How do you deal with unintended dependencies that are in a path you have to include?

For example, when building postgresql on OS X, it always links against libpq.dylib in /usr/lib, not against the one it just built. In theory, it could be possible to exclude /usr/lib from the -L path, but then I would lose other libraries that ship with the system (i.e. libz, libxml2, libxslt, libpam, libSystem). The other possibility is to build all of these dependencies too - except libSystem...

Right now I'm using install_name_tool to fix the binary, but that is an ugly hack. (The other problem in this case is that the system libpq is x86_64-only, which means that an i386 build would fail.)


Hey, thanks for taking the time for the informative response. I see the difference now.


You're very welcome -- thanks for raising the question :)


See you all in five years when Nix is shit and we're all switching to Flurp as our package manager.


"when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together." -Asimov

http://chem.tufts.edu/answersinscience/relativityofwrong.htm


I ended up searching "Flurp" because with the current state of things, there was a chance that this was real.


Add javascript to the search. Sometimes I do that with random searches just to see the crazy things people are doing with Javascript.



