Will Nix Overtake Docker? (replit.com)
327 points by Codemonkey51 51 days ago | hide | past | favorite | 250 comments



Oh god I hope not. Having worked in > 100kloc Nix environments I am completely turned off the idea. I really, really tried: I installed NixOS as my main OS and used Nix whenever I could to try and pick it up, but it's such a complex beast I felt it slowed everything down. Simple tasks that would take 10 minutes in Docker suddenly became DevOps tickets. I suddenly had to write bindings for tools rather than apt-get them in a Dockerfile. Build times in Haskell were bad enough, but add Nix into the mix and I'd be wasting half a day as Nix built the entire world - solved with internal caching servers, but have fun trying to work out how to push something to those correctly.

There's a blog post that goes around from time to time about how a company has three risk tokens to allocate per project on non-boring technologies. Nix seriously needs all three. It's the only technology I've ever vetoed at a startup, because I've seen the hellhole it can become - perhaps not for the DevOps engineers who know it, but for the team who suddenly have to work within the resulting environment.


> There's a blog post that goes around from time to time about how a company has three risk tokens to allocate per project on non-boring technologies.

https://mcfunley.com/choose-boring-technology


Thanks, this is the one! Highly recommend it, especially when you're in the HN echo chamber of new and interesting technologies


Worth noting in this context of OP, though, is that Nix predates Docker by about a decade.


That is extremely concerning, then, that it is so difficult to use after all that time.

I had hopes in the back of my head that "maybe it'll get better/more ergonomic in the next few years".


Is it? Or is it just a tool that requires a different mindset, and so feels difficult to those coming to it from a regular distro? If the latter, then I'm not sure it's something they will see as a problem, but more its very ethos.


Maybe an apt comparison is git.

It's just as easy to make a mistake with git today as it was however many years ago; git hasn't fundamentally changed in ways that make it easier. Git still more or less requires you to have a good understanding of what's going on in order to be comfortable using it.

But, since use of git is now widespread, it's less of an issue. And the git CLI has seen some UX improvements.

Nix is very weird. I'm sure there are some ways its UX could be nicer; but to some extent, doing things in a strict/pure way is always going to have friction if the tools/environments don't easily support that.


> It's just as easy to make a mistake with git today as it was however many years ago;

Pro tip from an actual pro:

git became significantly easier for me once I decided

1. to just use a good, comfortable GUI (at least one TUI also looks good) for anything beyond commit and push. (Maybe not everything can be done from your GUI of choice, but at least you get a good overview of the situation before you dive in with the CLI.)

2. and to just stick to the basics. (And when you need to help someone else who didn't, take it carefully even if that means you don't look like a wizard.)

In fact, even working on the command line felt easier once I had worked in a good GUI for a while.

Don't believe experts who claim that you need to study vim and stick to the git CLI to "get it".

But of course: if the CLI is your thing, use it - just stop writing blog posts claiming it is the only true way.


Which git GUI do you recommend? The one in VSCode I find even more confusing than the CLI.

I do agree with you that some workflows are just easier with a GUI, since I used to use TortoiseSVN and it was much nicer for diffing two commits than the CLI is. But I haven't really dug into git GUIs.


Magit is the most recommended one. But it comes with an Emacs dependency.


I recommend finding one or more that you like; my preferences don't always match everyone else's ;-)

At times I have used ungit or Git Extensions (which, despite its name, is a full desktop application).

Today I use VS Code with a combination of Git Branch for visualization and Git Lens for the rest. Git Lens might need some configuration (I rarely use all the panes, but I use the interactive rebase tool from it as well as some annotation features. For the visual rebase tool to work I had to configure VS Code as my git editor, but that is how I like it anyway.)

Again: try a few, find one that makes sense for you.

JetBrains tools, for example, are generally good, liked by many and well thought out, as I understand it, but the git visualization (and a couple of other things) still consistently manages to confuse me.


I found this website useful for helping me understand what was going on with git. https://learngitbranching.js.org/

GUI vs CLI shouldn't be about which is a less confusing way of using git.

(Though, yes, magit is excellent, and there are very few tools which come close).


https://www.syntevo.com/smartgit/ It's the most similar I've found to TortoiseHg which is the standard, and excellent, Mercurial GUI. (TortoiseGit does not live up to the family name)


Fork, hands down. https://fork.dev


Here are some recommendations:

SourceTree: https://www.sourcetreeapp.com/

Windows and Mac. Free. Feels sluggish, but is also really dependable, the graph view is lovely and it covers most of the common things that you want to do - also, staging/discarding chunks or even individual lines of code is lovely. Oh, and the Git LFS integration, and creating patches is also really easy. And it gives you the underlying Git commands it uses, in case you care about that.

GitKraken: https://www.gitkraken.com/

Windows, Mac and Linux. May need a commercial license. Feels like a step up from SourceTree, but I find that using this for commercial needs is a no-go. If that's not an issue, however, it has a good UI, is nice to work with and just generally doesn't have anything I'd object to. IIRC it saved my hide years back by letting me do a ctrl+z for a repo after accidentally force-pushing to the wrong remote, so that I could fix what I had done (memory might fail me, it was years ago) - it just generally feels intuitive like that.

Git Cola: https://git-cola.github.io/

Windows, Mac and Linux. Free and open source. Perhaps one of the more basic interfaces, but as far as free software goes, it does what it sets out to do, and does it well. I use this on Linux whenever I want to have that visual feedback about the state of the repo/staging area, or just don't feel like using the CLI.

TortoiseGit: https://tortoisegit.org/

Windows only. Free. Recommending this just because you mentioned TortoiseSVN. If you just want a similar workflow, this is perhaps your best option. Honestly, there is definitely some merit to having nice file system integration; I rather enjoyed that with SVN.

Whatever your IDE has built in: look at your IDE

On any platform that your IDE runs on. Same licensing as your IDE. Some people just shop around for an IDE that they enjoy and then just use whatever VCS workflows that they provide. I'd say that VS Code with some plugins is really nice, though others swear by JetBrains' IDEs, whereas others are fine with even just NetBeans or Eclipse (Java example, you can replace that with Visual Studio or whatever). If you're working within a particular stack/IDE, that's not too bad of an idea.

The CLI: https://git-scm.com/

Windows, Mac and Linux. Free and open source. You'll probably want to know a bit of the CLI anyway, just in case. Personally, I'm still way too used to using a GUI, since dealing with branches and change sets just feels easier when visual, but the CLI has occasionally helped me out nonetheless.

Actually, here's a useful list of some of the Git GUIs: https://git-scm.com/downloads/guis

For example, did you know that there are some simple GUI tools already built in, git-gui and gitk?


I avoid most git usability issues by using it as SVN.

When something goes wrong I just bork the whole repo and clone it again, then manually merge the last set of saved changes.


Since learning about reset --hard, I need about 80% less of that.


"git reflog" has also been a blessing in getting out of scrapes.
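To make that concrete, here's a throwaway-repo sketch of a reflog rescue (file names and commit messages are made up for illustration):

```shell
# set up a disposable repo so nothing real is at risk
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo

echo v1 > notes.txt && git add notes.txt && git commit -qm "first"
echo v2 > notes.txt && git commit -qam "second"

git reset --hard HEAD~1   # whoops: "second" is gone from the branch...

# ...but the reflog still remembers every place HEAD has been
lost=$(git reflog | awk '/commit: second/ {print $1; exit}')
git reset --hard "$lost"  # recovered
cat notes.txt             # v2 again
```

The commit only becomes unreachable from the branch; the reflog keeps a pointer to it until git eventually garbage-collects.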


Thank you! Me too. I can finally come out of the closet.


It has been a long journey since I started with RCS back in 1998, but given that git is now everywhere, the less I deal with its complexity the better.

In some bad days I even miss Clearcase.


> It's just as easy to make a mistake with git today as it was however many years ago; git hasn't fundamentally changed in ways that make it easier.

Easier? Who needs that? Soon then you will have common unlearned folk and peasants trying to use git. </snark>

They had the super obtuse git reset, which was four different things bolted together, so they fixed it by adding git restore, which does slightly different things, but you still need both...
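For what it's worth, the split looks like this in practice (throwaway repo, illustrative file names):

```shell
# disposable repo for demonstration
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo
echo one > f.txt && git add f.txt && git commit -qm "init"

echo two > f.txt && git add f.txt   # a staged edit
git restore --staged f.txt          # unstage it (old way: git reset f.txt)
git restore f.txt                   # drop the working-tree edit (old way: git checkout -- f.txt)
cat f.txt                           # back to "one"
```

Moving a branch pointer is still git reset's job, which is why both commands stick around.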


The latter. As a (bad) analogy: it’s a bit like learning Haskell. If you’ve only ever worked in strictly-evaluated, imperative languages, it’s not gonna be easy to learn. Nevertheless, Haskell predates e.g. Go by two decades.

Nix (and Haskell) has its warts, and undoubtedly a new system would avoid them (compatibility requirements make some changes very challenging), but the fundamental difficulty remains, because it is fundamentally different and is solving a truly difficult problem set.


> That is extremely concerning, then, that it is so difficult to use after all that time.

> I had hopes in the back of my head that "maybe it'll get better/more ergonomic in the next few years".

I feel much the same way about wanting to run something like FreeBSD and having it just work, as opposed to running into weirdness because of the driver situation, which GNU/Linux seems to be getting better at (even though you are sometimes forced to install proprietary drivers for an optimal experience; that should get better in the next decade).

So, I might have to wait a bunch more years, or just pick something else, like OpenBSD, or operate within a set of constraints to have a really predictable and well-supported hardware configuration, which isn't always possible. Alas, my off-brand Polish netbook will just have to wait before I can (hopefully) run BSD on it comfortably. Well, short of making the missing pieces of the puzzle myself, for which I'd probably also need a decade or so of systems-level development experience, as opposed to just web dev and occasionally dabbling in lower-level stuff to mediocre success. Of course, by then the netbook itself might not be relevant, so who knows.

I also feel much the same way about GNU/Hurd - a project that is conceptually interesting, but isn't yet quite stable, even though apparently you can get Debian for it: https://www.gnu.org/software/hurd/

Now, I have almost no experience with it apart from reading about it, but some people tried figuring out when it could be released based on the bug reports and how quickly they were being addressed, and the number they came up with was around 2070 or so.

In summary, there are probably projects out there, for which their age isn't necessarily related to how usable they are, whereas other pieces of software will have never truly achieved that stability in the first place. Not all old software is good (edit: at least not for certain use cases).

Of course, there are exceptions. For example, look at Apache2/httpd: it is regarded as old and dated for the most part, yet just recently an update was released which added mod_md. It now lets you get Let's Encrypt certificates without external tools (like Certbot), in some ways setting it ahead of Nginx: https://httpd.apache.org/docs/2.4/mod/mod_md.html Not all old software is always boring.

Same for Docker and any other tool. If developers use tool A for use case X instead of tool B, then maybe there are some very good reasons for that widespread usage? That's also probably the answer to this debate - regardless of their conceptual/technical benefits, usability will probably decide which tool wins in the long term.


My suspicion is that the HURD folk seized on new shiny that was not finished enough, and it's dogged the project ever since. Periodically the already-inadequate number of people have got diverted by investigating L4, then CoyotOS, then Viengoos.

I also submit that to understand why that happened, one needs to consider the context at the time that the decision was made:

http://www.h-online.com/open/features/GNU-HURD-Altered-visio...

http://www.groklaw.net/article.php?story=20050727225542530

« RMS was a very strong believer -- wrongly, I think -- in a very greedy-algorithm approach to code reuse issues. My first choice was to take the BSD 4.4-Lite release and make a kernel. I knew the code, I knew how to do it. It is now perfectly obvious to me that this would have succeeded splendidly and the world would be a very different place today.

RMS wanted to work together with people from Berkeley on such an effort. Some of them were interested, but some seem to have been deliberately dragging their feet: and the reason now seems to be that they had the goal of spinning off BSDI. A GNU based on 4.4-Lite would undercut BSDI.

So RMS said to himself, "Mach is a working kernel, 4.4-Lite is only partial, we will go with Mach." It was a decision which I strongly opposed. But ultimately it was not my decision to make, and I made the best go I could at working with Mach and doing something new from that standpoint.

This was all way before Linux; we're talking 1991 or so. » -- Friar Thomas Bushnell

They looked at BSD. The BSD people were not sure, so RMS decided on another path.

If GNU had picked the BSD kernel, there could have been a working Free xNix before Linux and things would have been very different.

Secondly, I think it's important when discussing microkernels and microkernel OSes to consider more than the most famous one: Mac OS X.

XNU is based on Mach, but it's not a pure microkernel: it contains a large, in-kernel "Unix server" derived from BSD code. This was done for performance reasons – remember, macOS is Mac OS X is NeXTstep, written for a 25MHz 68030 in around 1987-1988.

There are better examples. QNX is probably the best: a working, true-microkernel, Unix-like OS, dating from 1982. At one point shared-source but not any longer.

There is also Minix 3, which is a different OS from Minix 1 & 2, the OS that inspired Linux and upon which Linux was initially bootstrapped.

Minix 3 is still quite limited: no SMP, some missing APIs etc. But QNX proves that a true microkernel, on generic hardware such as x86 and ARM, can support SMP and so on.


Huh, cool! This is like Python being older than Java :mindblown:. Still, old technologies can be just as risky as new ones.


Nix's popularity is still lower than Docker's, even going back to the dotCloud era (I think that was the original company). Boring means common and unnuanced, rather than older or first.



I wish I had seen that (and fully understood the implications) years ago. I mean, it's been in the back of my head all the time, but never explained this well. I've referenced it a couple of times now, especially the slides showing the graphs of technologies and links, which is a really succinct way of saying "adding one technology multiplies complexity".


I hadn't seen that before, but it perfectly sums up my approach to tech. I'm going to share this the next time someone questions why I take a pragmatic approach rather than jumping on the next new tech stack.

Thanks for sharing


Strangely, once a team adds one risk, they feel more comfortable adding a second risk. Ditto for the third risk, etc. The psychology is weird.


> apt-get them in a Dockerfile

> Nix built the entire world

You're comparing apples to oranges.

Docker just runs commands. If you want to run commands, you can do that in Nix using e.g. nixpkgs.runCommand.

apt-get just fetches prebuilt binaries. If you want to fetch prebuilt binaries, you can do that in Nix using e.g. nixpkgs.fetchurl.

You can even use runCommand to run apt-get (I do this to use Debian's Chromium package on 32-bit NixOS).

In contrast, it sounds like you were using Nix as a build tool, to define your entire OS from scratch (presumably using Nixpkgs/NixOS). In which case apt-get isn't a comparable command; instead, the Debian equivalent would be:

    apt-build world
Guess what that does?
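For the curious, a minimal sketch of both primitives side by side (the URL, hash, and names below are hypothetical placeholders, and the hash would need to be the real one):

```nix
{ pkgs ? import <nixpkgs> {} }:
let
  # "apt-get"-style: fetch a prebuilt artifact by URL + hash
  tarball = pkgs.fetchurl {
    url = "https://example.org/tool-1.0.tar.gz";                     # hypothetical URL
    sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder hash
  };
in
# "Dockerfile RUN"-style: run arbitrary commands in a sandboxed build
pkgs.runCommand "tool-1.0" { } ''
  mkdir -p $out
  tar -xzf ${tarball} -C $out
''
```

The difference from Docker is just that the fetch is pinned by hash and the commands run in an isolated environment, so the result is cacheable and reproducible.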


Funny, because I feel that simple tasks that would take minutes on my machine are now a dev adventure with docker.

And I mean funny. I suspect it is different mindsets. And I personally like that both seem to be thriving.


I feel similarly with Docker. But it's easier to explain to newer folks because it's only a single layer of abstraction above shell commands.


The issue with simple solutions is that the underlying problem still has the same complexity. That complexity rises up sooner or later and, depending on your abstraction, will either be neatly caught within it or make a complete mess.


I really wish something like singularity containers had taken over -- that was literally just shell commands.


Possibly https://buildah.io/ would be of interest.


I do use podman; I'm not really stoked about buildah because it's still a bunch of buildah commands in a shell-driven DSL, which I suppose is better than a Dockerfile. Singularity was literally a build file that was a shell script divided into sections that denoted phases.


Bit late now but you could probably make something like it using buildkit - https://github.com/moby/buildkit/blob/master/examples/buildk...


I can't say Docker is simpler than Nix.

I found Nix way easier, but the documentation is very... concise.


I'd push back on this. It isn't a single abstraction over shell; it is a single closure over shell. And that is surprisingly hard to explain.

Again, though, I am glad both are thriving. Competition of implementation is... not that compelling to me. Competition of approach, is amazing.


It's well documented that Docker, and especially Docker Hub, had a terrible impact on security.

Once you factor in the effort required in the long term to mitigate a decade of bundling gigabytes of applications and libraries, it's a huge "dev adventure".


More like developer Towers of Babel waiting to collapse on every build attempt. Docker is fine if it's used correctly, but it adds a layer of complexity; it doesn't abstract complexity away.

The way I see it rampantly (ab)used is as a shortcut to get some software up and running by leveraging a public image and passing tech debt for some component of one's system onto the maintainer of the Docker image, then cobbling together multistage builds from those and microservice architectures to try and support this tech debt model. Sometimes it works, many times it breaks, often it turns into complete and utter nightmares to deal with.

"Amazing Andy got this fantastically complex system up and running in a week all by themselves, way under time and budget, and it works. Now I just want to add this feature or modify this one thing, and you're telling me that's going to take how long?" Yeah, I've seen this more times than I care to admit.


“You mean Amazing Architect Andy who just left for Google?” :-(


That's at least partially why we have CV driven development be a thing in the industry. Either you jump on the NoSQL, AI, ML, blockchain, serverless, microservice, container, $FOO bandwagons, or you'll miss out on job opportunities and will have your CV stagnate.

That's not to say that those technologies don't have good use cases in particular circumstances, it's just that you as an IC probably don't have strong incentives to care about where the project will be in 5-10 years when you will have left for another company in 2-3. A lot of accidental complexity is an unavoidable reality when you have people who put their own careers ahead of most other concerns.


Just build your own images like you would otherwise install the software on bare metal. Base those images on official images, not community images.


YMMV.

Personally, you would have to pry my Bitnami images out of my cold, dead hands. There is just no way my team of 2 can do anywhere near as good a job.


Fair, there are a few high quality "vendors" that we also trust, and Bitnami is one of them. RandomJoe42/SomeImage is out, though.


> Just build your own images

This is like saying "just wear a mask": it's not enough.

You rely on what other people are doing now and what they've done in the past.

This is an ecosystem problem, not an individual problem.


If you can do it in minutes on your machine, you can spend those minutes updating your Dockerfile to automate the steps instead. It's essentially the same thing.
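A sketch of what capturing those manual steps might look like (the base image, packages, and entrypoint are placeholders, not from the thread):

```dockerfile
FROM debian:bookworm-slim

# the same commands you would have typed on your machine, recorded once
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl jq \
 && rm -rf /var/lib/apt/lists/*

WORKDIR /work
COPY . .
CMD ["./run.sh"]   # hypothetical entrypoint script
```

The one-time cost of writing the RUN lines buys you a repeatable version of the manual setup.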


Unless auth is involved. There are loads of tools I run that I want to just run as me, not as whatever user is configured in the image.

And learning how to manage that mapping was a heck of a time sink.


How are you going to run those tools on CI? Or have your colleagues run them? These are all questions that need answering anyway, regardless of your usage of Docker.


I already authed as me on my machine. Annoying to also have to auth as me in the container.

Or have we reached the stage where folks have secrets management for containers fully orchestrated at a personal user level?
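One common (if clunky) pattern is mapping the host user and their config into the container at run time; a docker-compose sketch, with all names hypothetical:

```yaml
services:
  devtool:
    image: my-devtool:latest            # hypothetical image
    # run as the invoking host user; export UID/GID in your shell or .env,
    # since compose does not pick them up automatically
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      - ~/.gitconfig:/etc/gitconfig:ro  # reuse host identity/config, read-only
      - .:/work
    working_dir: /work
```

It works, but as the parent says, getting the UID/GID and secrets mapping right is exactly the time sink being complained about.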


This mirrors my experience as a developer in an org that used Nix. If you want the dev team to have a strong dependency on the devops team for every little (often unpredictable) aspect of their workflow, Nix is the tool for the job.

Don't get me wrong, I'm completely bought in on the vision of reproducible builds, but there's a long way to go before it's usable in real organizations. I've heard that some orgs manage it, but I really have no idea how. Maybe some critical mass of people who already know and love Nix (and, implicitly, packaging obscure dependencies)?


> If you want the dev team to have a strong dependency on the devops team for every little (often unpredictable) aspect of their workflow,

I have never used nix, but from the article the author only concentrated on the fact that docker and nix create reproducible environments, and completely misses the other benefits of containers.

As a devops guy if someone hands me a nix project, how do I deploy that so it is highly available and scales by itself?

With containers I just put it in kubernetes.


You can use Nix as a better docker build, see https://grahamc.com/blog/nix-and-layered-docker-images or https://nixery.dev/.
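The linked approach boils down to something like this (package and image names are illustrative):

```nix
{ pkgs ? import <nixpkgs> {} }:
# builds an OCI image where store paths get their own layers,
# so unchanged dependencies stay cached across rebuilds
pkgs.dockerTools.buildLayeredImage {
  name = "my-app";   # hypothetical image name
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Then roughly `nix-build image.nix && docker load < result` gets it into your local Docker daemon.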


The dream is using something like Nix to not only reproducibly build a container image but also all of the infrastructure manifests which reference it. I _think_ this is achievable in Nix if you're willing to deal with all of the pain of actually using Nix; however, this would depend on pushing an image to a container registry as part of the Nix build and I'm pretty sure that violates Nix's idioms/conventions? I've certainly never heard of anyone building their entire infrastructure manifests this way.


> The dream is using something like Nix to not only reproducibly build a container image but also all of the infrastructure manifests which reference it. I _think_ this is achievable in Nix if you're willing to deal with all of the pain of actually using Nix;

We did that at an old job. Basically, Nix built images + K8s manifests + a script to push the images. Our CI job boiled down to `nix-build && ./result/push-images && kubectl apply -f ./result/manifests -R`. This is similar to how NixOS' nixos-rebuild boils down to `nix-build && ./result/activate`.


See https://dagger.io, the next gen tool by the creators of docker that does this.

CUE + BuildKit


That's because nix is not a docker alternative.

I'd love a nix-like docker alternative to be fair but that'd be a massive undertaking.


What is a "nix-like docker alternative"? Is that "using Nix to build container images"? Because Nix already does that.


Many orgs get reproducible builds by using different build tools and abstractions. Eg Amazon gets it via their Brazil tool.


I was talking about Nix in particular, but yes, if you’re an enormous and highly lucrative tech company you can pay a team of engineers to build your own reproducible build tool. If the same team is very talented they can probably even get Bazel working. ;) But I don’t think any large orgs at all use Nix, presumably because it doesn’t scale beyond a small number of (enthusiastic) developers.


I've always kept my usage of Nix language to a minimum and found it to be a joy every time. But it's no wonder you'd have trouble on something that humongous. Why did it get so large?


Nix can do everything and that's an issue - suddenly the CI servers, AWS deployments, build servers, testing, linting, dev package management, dev system environments, and more are all written in Nix packages. You need to write nix bindings. You need to add nix caching. And, as is fitting with a functional language, it can be beautifully (read: painfully) abstract. Some of the guys on the team were contributors to Nix so it wasn't like we weren't following best practices either.

I'm sure if everyone was willing to put in weeks/months of effort it'll be as fun to use as Haskell, but Nix isn't the thing I do so I need something simple - docker/docker-compose works perfectly well for that.


Sounds like your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should. Would it be nicer if you just used Nix to build the application and a shell, in the simplest possible manner?


This is totally fine and I like it, my worry is scope creep once it’s in. Nix repl to experiment with new languages is really cool. Reversing updates to my OS is amazing. If I could have a rule that that is the limit - personal dev environments - but with additional support for docker then I’d be very happy.


No, Nix can't do everything; it can do the one thing it does: it maintains immutable paths and lets you define derivations of any number of those paths using namespaced execution of build scripts. All this is controlled using expressions in the Nix language, which is well suited to the intended purpose. It is in an important respect the opposite of Haskell. Nix does what it does so well that it is easy to use it where better solutions exist (looking at you, NixOps).

So I agree with some parts of your criticism; on the other hand, if Nix is used for a sensible subset of "CI servers, AWS deployments, build servers, testing, linting, dev package management, dev system environments, and more", that would be great.

And what that subset is depends heavily on the given circumstances.

For me, being able to derive the "CI servers, AWS deployments, build servers, testing, linting, dev package management, dev system environments" things from a single source of truth, actually does sell Nix pretty nicely.

When approaching a complex, multi-component source repository, with requirements for dev environments, CI builds and generation of production-ready Docker images, a careful and not over-eager Nix build seems a sensible choice, given that the layered nature of Dockerfiles and docker-compose obscures the DAG nature of built artifacts and dependencies. Nix also doesn't build anything by itself; it merely invokes low-level build tools in isolated environments that contain only the specified dependencies.

Sure, when using Nix one is forced to explicitly state the dependencies of each (sub-component) build step, which results in more code than a bunch of RUN ... calls in Dockerfiles. Both approaches have their sweet spots.

An investment into learning Nix is made once. And you are completely right that refactoring to a Nix-based build takes weeks in a serious legacy software project. I wonder myself if this will be worth it. Currently, I think it might be: part of the weeks put into the build are not due to Nix and would have been necessary for Docker-based builds too. Also, one can do a lot of stuff much more easily with Nix, which leads to "feature creep" like "oh, let me add this preconfigured vim/emacs/vs-code into the build env" that one would probably not do if it weren't so easy to customize everything with overlays and overrides.

But that is a good thing, and it is much better to have a capable tool and critical discussions with colleagues to limit the usage of that tool, than the other way around.

Heck, I remember when we switched to a Maven build and had a build with all the bells and whistles. That took weeks too, and all the generated artifacts, like source code documentation sites, were probably a waste of time to implement.

I am not sure if that proves or disproves powerful, declarative and modular build systems.

Dockerfiles and docker-compose.yamls have a learning curve too, but more importantly, if you have to invest more time in build maintenance than you would with a Nix-based build, it is only a matter of time until that costs more than the Nix approach.


I'm not the author and my opinion might not be warranted, but I would guess:

* When you modify your local nix file, there's only a couple things you need to change. It stays small. You can keep track of all these changes because they're in your head.

* The company needs a lot of little changes here and there, so they build up. Foobar needs rooster v1.2 while Barfoom needs rooster v0.5. A lot different than managing a user's config. No longer possible to squeeze all the details into one brain.

This isn't to say Nix would be bad at handling any of these things. But I would agree with the OP. I have trouble teaching new devs how APT and shell work. Couldn't even imagine trying to explain Nix syntax to them.

The more Nix is used for, the more likely a dev will have to touch it. And teaching Nix to someone who doesn't care is like teaching someone who doesn't care about Makefiles or Git. It doesn't work. They just get mad and say it's too complicated.


> I've always kept my usage of Nix language to a minimum and found it to be a joy every time.

The trouble is, it's more significant to consider what the pain is in the worst case.

e.g. Two things which count against Nix: With stuff like Docker, I can get away with not really understanding what's going on, and have a quick + dirty fix. And with Nix, the community is smaller, so it's less likely you'll find an answer to the problem you're having right now.

I worked at a company where the DevOps team used Nix for the production/staging. For local development (and ephemeral development deployments), the dev team came up with a docker-compose setup. -- That said, there was at least one case where the devs contributed to the Nix stuff by way of copy-pasting to setup a new deployment.


>Build times in Haskell were bad enough

This one's very true. Every time I try to build something in Haskell on my laptop it feels like we're moving closer to the heat death of the universe. Is there some good read on how/why Haskell compilation times are so long compared to some other languages?


I can't offer an answer specific to Haskell, but I know that OCaml does some things to make the compilation fast: interface files, no circular dependencies, no forward declarations, not a lot of optimizations. From what I understand, those tradeoffs come from languages designed by Niklaus Wirth, where efficiency of compilation was important.

In general, I feel like every language that wasn't made to compile fast like Go, OCaml, Pascal and derivatives is going to be called "slow to compile". There's Java and C# that are kind of a middle-ground, since they emit bytecode for a JIT compiler. So my answer to "why Haskell compilation times are so long compared to some other languages?" would be "because compilation times don't take priority over some other points for Haskell and its users".


Haskell compiles so slowly because it needs far more optimization than OCaml before it performs well. The primary reason for that is its lazy evaluation semantics. Prior to GHC, many researchers believed it to be impossible to execute lazy languages at speeds comparable to imperative ones. Haskell and GHC were primarily developed as research tools, i.e. they valued the efficient exploration of new ideas in programming languages and implementations higher than the resulting language and implementation, at least in the beginning. And I would say Haskell was extremely successful considering those priorities.

That does not mean that the implementation hasn't been moving towards industry-readiness for a long time.


If you're compiling a project for the first time it's probably pulling in a ton of libraries and compiling them all from scratch. But let's be real here - we're talking maybe 3-5 minutes for the first build of a huge Haskell codebase, one or two orders of magnitude faster on second build.

It seems comparable to what I've noticed with Rust or C, and pretty fast compared to some medium/large C++ codebases I've built.

I'm not really sure what workflows people are using where this is a problem, so this may be less of a pain point for me than it is for you. Personally I build Haskell projects once to pull in the dependencies, then use ghcid to get ~instantaneous typechecking on changes I make to the codebase.


From my experience with Haskell and Scala, derivations and not dependencies is what’s driving compile time. You can probably hide a crypto miner behind all that compiler magic. In Scala I usually move code that invokes a lot of derivations into sub projects, not a clue on how to do that in Haskell


I wish it took 3-5 minutes; I've built projects that took far longer than that. I know this isn't just me, though, because the GHC build documentation tells you to go read stuff for a while as it builds. The longest I can remember is 2 hours.


> moving closer to the heat death of the universe

So… you're saying your laptop is super cool? Because the heat death of the universe is when thermodynamic energy is equally distributed everywhere, which, given the large space of the universe, means really cold.


Heh. Though while we're playing Pedantics, it's worth pointing out that the defining characteristic of Heat Death is maximum entropy, not uniform temperature. That said, it's entirely possible that the whole system of Energy Company + OP's computer actually does get cooler on average when compiling Haskell.

Here's a fun game. Have you ever played Follow the Energy? For example, depressing keys on your keyboard takes energy. Where does that energy come from? Well, the work performed by your fingers of course! But where does that energy come from? Well, the muscles in your fingers, of course! But where does that come from? Well, your food! But what about that? Well, your food's food! That? Eventually plants. That? The sun! That? Nuclear fusion. That? Gravitational potential! That? Etc...

But here's the funny thing. In this image, it kind of seems like energy gets "used up". The sun provides energy to the earth via solar radiation; the plants consume this energy; animals eat the plants, obtaining their energy; etc. However, energy is conserved. What gives?

Better yet, if the earth were not radiating just as much energy as it received, it should be heating up. However, the earth-atmosphere system is mostly constant temperature! This implies that the total energy flux is zero. If X Watts (Energy/Time) is coming in, then the earth actually radiates out X Watts as well.

So... if the total energy flux is zero, then how does your keyboard key actually get pressed? What gives?

The key is entropy. X Watts of solar radiation impinge upon the earth, but these photons are "hotter" (i.e. higher frequency) on average than those that radiate out. The balance is in numbers. You need more cooler photons to balance the energy of the hotter ones. E=hf; energy (E) is proportional (constant h) to frequency (f), after all.

This means that while energy is conserved, the flow of that energy increases the entropy of the system. In a very real sense, typing on your keyboard happens because of the waste heat generated in the process.

All driving us closer to Heat Death...
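To put rough numbers on the "more cooler photons" point, here's a back-of-the-envelope check. The 500 nm and 10 um wavelengths below are representative values I'm assuming for visible sunlight and Earth's outgoing thermal infrared, not precise figures; E = hc/wavelength then gives the photon-count balance directly:

```python
# Back-of-the-envelope check: how many outgoing "cool" photons does it
# take to carry the energy of one incoming "hot" solar photon?
# Assumed representative wavelengths: ~500 nm (visible sunlight in),
# ~10 um (thermal infrared out).
H = 6.626e-34  # Planck constant, J*s
C = 3.0e8      # speed of light, m/s

e_in = H * C / 500e-9   # energy of one visible photon, joules
e_out = H * C / 10e-6   # energy of one thermal IR photon, joules

# Cooler photons needed per hot photon to balance the energy flux:
print(round(e_in / e_out))  # 20
```

So for every visible photon absorbed, on the order of twenty infrared photons go back out, which is exactly where the entropy increase lives.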


Thanks for elaborating on this. I guess the confusion also stems from the word "heat" which we usually associate with something that's warmer than some human day-to-day reference. Heat in the physical sense though just refers to the kinetic energy of a number of particles, which can be harvested (per the post above) provided there are differences in kinetic energy throughout space.

It's also interesting to play the "Follow the Energy" game to it's logical conclusion, namely that nearly all of the energy in the Universe (including that which you expend typing on your keyboard) originates from the gravitational potential created in the Big Bang (whatever that is, by the way). This begs the question; how was the entropy of the early Universe to low, that it could increase by such an enormous amount, to produce intelligent beings such as ourselves, typing things on a keyboard while we should be working?

It's really one of the most fundamental questions in cosmology, and one of the (many) reasons why I love physics.


> The sun! That? Nuclear fusion. That? Gravitational potential! That? Etc...

Minor pedantry on top of pedantry. The gravitational potential energy of a collapsing protostar gets you past the activation barrier for nuclear fusion, but isn't itself the source of energy beyond that.

Think of it like lighting campfire using friction. The friction heats up the kindling, but that investment allows you to access the potential chemical energy of the wood. The gravitational potential energy is converted into heat, and that investment allows you to access the potential nuclear energy of the unfused hydrogen.


Thanks! That's one place in the chain that I paused for quite a while, deliberating what to write down. Overall, the little "causal chain" I wrote down is littered with little (but useful!) fibs, and the one you point out is probably the most egregious.

Nice analogy, btw.


Actually, the heat death is when all potential energy in the universe has been converted to heat. So converting an excess of stored chemical energy in their laptop battery to heat by compiling a load of Haskell would be a fine way of increasing the entropy of the Universe. Thus moving ever so slightly closer to the inevitable heat death.


I'd say don't blame the tool for what sounds like an architectural mess.

TFA points out using nix as a reproducible build environment - which it is excellent at. Create a shell.nix in your repo, and every dev uses the same exact tools within the comfort of their own machine/shell. Docker is much more painful for this kind of local dev workflow.
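For instance, a minimal sketch of such a shell.nix (the packages listed here are placeholders for whatever tools your project actually needs):

```nix
# shell.nix -- hypothetical example; swap in your project's real tools.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = with pkgs; [
    nodejs
    postgresql
    jq
  ];
}
```

Anyone who runs `nix-shell` in the repo drops into the same toolchain, with no image builds or container runtime involved.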


Potentially it was a mess, but the guys building it contribute to Nix, so I don't think that's to blame. Dev environments are okay, but that's not the limit of Nix: it gets used for build servers, CI servers, linting, replacing stack/cabal/yarn, etc., creating Docker images... hell, I'm pretty sure I saw Nix creating Nix.


I truly hope not as well. I spent an extensive period of time trying to learn Nix and integrate it into my homelab. Granted, it's not an enterprise environment, but it is the most painful, unstable, broken, buggy, bloated, hard-to-use, poorly-documented mess I've ever tried. I hope no one touches Nix with a ten-foot pole.

That's not to say it's not a great idea. It's just a huge pain to get to work, and the cost of trying to get a perfectly-working Nix environment is just *not* worth it. In my case, I was working with orchestrating virtual machines, which… well, there's a project called nixops that claims to do this. The thing is, it flat-out didn't work half the time (it also used Python 2.7, and I believe one of its dependencies was marked as insecure for a long period of time). I got so frustrated with this "advertised" nixops tool that I had to write my own replacement for it, and while that was a fun experience, I was so burnt out from dealing with Nix and its breaking randomly every ten seconds that I just gave up on my homelab.

If you want any program that doesn't exist for Nix, you can't just use the program's build script. You have to manually make a Nix derivation, and then pray and hope it will actually compile. Want to deploy your Ruby on Rails application on NixOS? Prepare to spend three days trying to set this up, since there's only one other application and the entire process is poorly documented (even for a "regular" Ruby program that isn't Rails).

Without additional work, Nix will never be worth its cost (did I mention the interpreter took forever and sometimes failed on errors in files that had NOTHING to do with yours, leaving you to manually debug each line?). You could spend the days upon days upon days trying to get the damned thing to work for something far more useful instead. Since once you get Nix to work, it will break.

I'm sorry if I offended any Nix users/developers, but the product is just not ready for anything yet IMHO. I just don't have time to deal with it anymore, and I'd rather get on doing something more fun than dealing with a bloated, undocumented system that I can just replace with Docker and get my work done in 5 minutes instead of 5 days.


> did I mention the interpreter took forever and sometimes failed on errors in files that had NOTHING to do with yours, leaving you to manually debug each line?

This is a common (and horrible) issue with dynamic languages that pass functions or blocks of code around. There have been some major improvements to Nix error messages which were included in the last release, and there's also ongoing work to address this through gradual typing.

There was a talk on the latter recently, with some examples that I think make the overall issue pretty clear: https://www.youtube.com/watch?v=B0J-SclpRYE


> There have been some major improvements to Nix error messages which were included in the last release

That's good to hear (and I think I heard about it before). The problem is, there are still lingering issues with Nix, like how long it takes to figure out how to compile a new program, (exceptionally) poor documentation, packages being unmaintained, the nixpkgs repository being a gigantic blackhole that takes forever to eval, etc.

Don't get me wrong. I really like the idea behind Nix. Even with this fix though, I'm still not sure I would enjoy using Nix (since many, many other problems) or giving it another shot because of my emotional response to it that's been caused by burnout trying to wrangle with it.


> I'm still not sure I would enjoy using Nix (since many, many other problems) or giving it another shot because of my emotional response to it that's been caused by burnout trying to wrangle with it.

I'm not sure you would, either, and I won't ask you to give it another shot right now. I understand the feeling. Maybe it's something to revisit after more time than has already passed.


Was/would it be hard to switch back to traditional containers or was there some kind of lock-in effect? Or was the consensus just pro-Nix?


The consensus among those who mattered for making that change was pro-Nix, but I have no idea how you'd go back to containers, because by that point builds were declarative and the implementation was spread over abstractions and repositories! So if DevOps decided to quit because another company was using SuperNix2 (which I think is a plausible thing if you have a team using a risky technology), then it would have required hiring consultants to do it. I think it would have been a months-long refactor going in blind.


I think people are missing the forest for the trees with this.

In my view, the reason Docker has all the hype is because I can look at a Dockerfile, and know what's up. In seconds. Sometimes in milliseconds.

It's a user experience thing. Yes, Nix is better for 'technical people that spent the time learning the tool', but Dockerfiles rely almost entirely on existing System knowledge.

Yes, Nix is 'better', but the fact is Docker is 'good enough' and also 'stupid simple' to get started.

Also Docker-Compose, I don't know why people hate on YAML. But it takes that same KISS attitude to build complex systems that can also be used as sanity checks for migrating to things like kubernetes.

Being able to spin up a complex full stack app with one command and a compose file that doesn't take any brain cells to read is worth its weight in gold.
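For illustration, a minimal compose file of the kind I mean (the service names, images, and ports are just examples):

```yaml
# Hypothetical minimal docker-compose.yml: an app plus its database,
# brought up together with a single `docker-compose up`.
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```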

This is like the 'general case tool' vs 'DSL' debate. If it's easy to use, people will use it.


Author here. As with most things, it's all about the trade-offs. Docker has certainly proved itself and that approach has worked on a massive scale. However, it's not a silver bullet. For us at Replit, our Docker approach was causing issues: our base image was large and unmaintainable, and we had almost no way of knowing what changed between subsequent builds of the base image.

We've been able to utilize Nix to address both of those issues, and others who may be in a similar scenario might also find Nix to be valuable.

Of course Nix comes with its own set of opinions and complexities but it has been a worthwhile trade-off for us.


Correct, that's one of the cases where docker's layered image system doesn't work well. Nix is almost the perfect tool to perform incremental builds and deployments for the Replit requirements.

I wish that Docker had the ability to merge multiple parent layers like Git; then you could build the gigantic image by just updating a single layer.

The only hack Docker offers is the multi-stage build; however, that won't work reliably in some cases, such as resolving conflicts.


Disclaimer: the following is still experimental, and will probably remain so for a while.

There is actually the --squash flag that you can use during builds to compress all of the layers: https://docs.docker.com/engine/reference/commandline/build/#...

For example:

  $ docker build --squash -t my-image .
In practice it can lead to smaller images, though in my experience, as long as you leverage the existing systems in place efficiently, you end up shuffling around less data.

E.g.:

  - layer N: whatever the base image needs
  - layer N+1: whatever system packages your container needs
  - layer N+2: whatever dependencies your application needs
  - layer N+3: your application, after it has been built
That way, I recently got a 300 MB Java app delivery down to about a few dozen MB actually being transferred: since nothing in the dependencies or the base image had changed, only the latest application version, stored in the last layer, was sent.

Also, the above order helps immensely with Docker build caching. No changes in your pom.xml or whatever file you use for keeping track of dependencies? The cached layers on your CI server can be used; no need to install everything again. No additional packages need to be installed? Cache. That way, you can just rebuild the application and push the new layer to your registry of choice, keeping all of the others present.

Using that sort of instruction ordering makes for faster builds, less network traffic and ergo, faster redeploys.
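As a concrete sketch of that ordering for a Maven project (the image tag and commands are illustrative, not a drop-in file):

```dockerfile
# Hypothetical example: slow-changing layers first, the app itself last.

# layer N: the base image
FROM maven:3-eclipse-temurin-17

# layer N+1: system packages the container needs
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /build

# layer N+2: dependencies -- cached until pom.xml changes
COPY pom.xml .
RUN mvn -q dependency:go-offline

# layer N+3: the application -- usually the only layer that changes
COPY src ./src
RUN mvn -q package
CMD ["java", "-jar", "target/app.jar"]
```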

I even scheduled weekly base image builds and daily builds to have the dependencies ready (though that can largely be done away with by using something like Nexus as a proxy/mirror/cache for the actual dependencies too). It's pretty good.

Edit: actually, I think I'm reading the parent comment wrong; maybe they just want to update a layer in the middle? I'm not sure. That would be nice too, to be honest, though.


Those sound like issues with your Docker usage - there are options to keep base image quite streamlined (e.g. alpine or distroless images).


For context, I'm referencing our (legacy) base image for projects on Replit: Polygott (https://github.com/replit/polygott/).

The image contains dependencies needed for 50+ languages. This means repls by default are packed with lots of commonly used tools. However, the image is massive, takes a long time to build, and is difficult to deploy.

Unfortunately, slimming the image down is not really an option: people rely on all the tools we provide out of the box.


> For context, I'm referencing our (legacy) base image for projects on Replit: Polygott (https://github.com/replit/polygott/).

May I ask why you didn't use something like Ansible to build such a complex image? With appropriate package version pinning (which is the real crux here) it should work well enough to get a reproducible build.

I understand it would already have been something different from a pure Dockerfile so it's not that fair to compare buuut...


> May I ask why you didn't use something like Ansible

They did; it's called Nix, and they wrote a blog post about it ;)


> In my view, the reason Docker has all the hype is because I can look at a Dockerfile, and know what's up. In seconds. Sometimes in milliseconds.

Most of the time this just gives me more questions than answers, like: what does the entrypoint.sh file in this repo do? Only to discover a plethora of shell scripts for setting up the runtime environment based on different environment variables, most of the time not aligned with any common standard or with how you'd generally set up the application itself.


That's because Docker just pushes dependency management down one layer; it doesn't solve it.


Well, I think both Nix and Docker delegate the actual resolution of dependencies, and it's not about implicit vs explicit dependency management alone; it's more that with explicit dependency management you get reproducibility.

And with reproducibility you move the work from fixing broken builds to implementing builds.

Of course, I can tag docker images and upload them to an internal registry, but that seems more complex to me, than doing this at the source level with Nix.


But 99% of the time that's exactly what you need.


I think you are confusing a property of your expertise with a property of the tool. As someone who doesn't use docker all the time, I find it kind of a pain in the ass to read realistic dockerfiles or work with docker-compose. As a juxtaposition I found freebsd jails much more pleasant and sane to work with for security containerization. For deployment management I'm not sure if there are competitors to docker but it's not hard to imagine something vastly more pleasant to use.


Agreed with your opinion about `Dockerfile`. The article had me for a second until I saw the script code. I mean, my time is not infinite and I'd rather spend it on things that are really important to me, not learning to write "yet another build script" for a small system. So unless it's mainstream already, I'm not going to touch it.

`Dockerfile` is light enough for me to not hate it too much.

For the `docker-compose.yaml` story, however, I can offer one reason: when you have so many variants (versions), so many setting options, and so many data types (array, object, string, etc.), it's hard to find references for writing one from scratch (you have to read multiple documents to get it right). Your knowledge of docker command-line parameters does not translate smoothly to `docker-compose.yaml` (some options changed names, some don't work). And sometimes, some functionality works differently under docker-compose.


> I mean, my time is not infinite and I'd rather spend it on things that are really important to me, not learning to write "yet another build script" for a small system.

You don't have to jump into the deep end with Nix. If you're happy to just run shell commands (like Dockerfiles provide), then all you need is this:

    (import <nixpkgs> {}).runCommand "my-package" {} ''
      PUT YOUR BASH CODE HERE
    ''


No, please don't interpret it like this.

No matter what format the Nix script takes, it's still a scripting language designed to address something that has already been addressed (or can be addressed with light extensions). The very idea of "Hey let's build this whole new thing that does this specific old task a little bit better at the cost of learning many new concepts (and making many mistakes)" is not good at the core.

I would rather say, if the folks there really want to create a new language, fine, but at least make it big. By that, I mean don't just try to build a tool that performs the old task a little bit better (at the cost of learning); make a tool that does new things (in other words, "enables new possibilities") far better. Perhaps after that, the toolset could become something worth learning.

(Currently, there are many ways to create reproducible builds. And even if you have reproducible builds, it does not mean the build will reproduce the same runtime result all the time. All factors combined, the benefit you can receive from the toolset is just not great enough at the moment. Hope you understand my point.)


> The very idea of "Hey let's build this whole new thing that does this specific old task a little bit better at the cost of learning many new concepts (and making many mistakes)" is not good at the core.

You don't seem to have a problem with Dockerfiles, yet Nix was around for a decade before Docker existed. If you don't want people to reinvent things that already work, then your complaint should be directed at Dockerfiles. In fact, you should go and complain at the following projects, which were (a) created after Nix, (b) try to solve some subset of things that Nix can already handle and (c) are strictly worse than Nix (e.g. less secure, not reproducible, not cross-platform, tied to one language, etc.):

- Docker

- NPM

- Puppet

- Ansible

- Salt

- Webpack

- Grunt

- Gulp

- Homebrew

- Pip

- Conda

- Poetry

- Gradle

- Vagrant

- etc.


I'm a big fan of Docker-Compose so far because of its powerful simplicity, and it's introducing me to GitOps, Infrastructure-as-Code, and Terraform, all of which I'm really starting to like... and I really hate doing DevOps work.

I think your point is very valid, it has got to be simple and increase productivity instead of impede it. Using something better but getting stuck in the minutia every day is a waste, and not something anybody in senior leadership should ever approve.


Lots of people seem to be building containers with non-Dockerfile based things though, especially in the JVM world.


You mean through maven configuration? At the end of the day it is still a dockerfile but constructed using Maven's xml.

I hate it haha


We use Google Jib with Gradle (https://github.com/GoogleContainerTools/jib) and love it. It does some slight optimisations (just use the classes rather than jars) and removes some decision making about where files are laid out.

It also builds into the Gradle lifecycle neatly. I don't need a separate tool for building images.

I'm sure writing Maven xml wouldn't be fun though!


I thought at least some of those worked without generating intermediate Dockerfiles or invoking "docker build". After all, container images are basically just tar files with some metadata.

Or do you mean it's conceptually the same, just implemented differently? I agree there.


I know the author calls this out - but Docker and Nix _are_ different.

For immutable distribution they solve the same problem. But they solve it in fundamentally different ways.

If you go back and read the last 20 years or so of LISA papers, it can be … humbling/frustrating. You can read two papers back to back, separated by 15 years, and they’ll be describing a new solution to the same exact problem that still hasn’t been properly solved. Dependency management has been horribly broken in our industry and we’ve never really managed to address it - until Nix. The Nix LISA paper [1] was a breath of fresh air, it really cut to the core of the problems with modern dependency management and addressed them head on.

Docker declared bankruptcy on dependency management and just resorted to fs snapshots to guarantee bit perfect deployments.

[1] https://edolstra.github.io/pubs/nspfssd-lisa2004-final.pdf


As TFA itself answers, "no". But I can give a different reason: Nix is too high a barrier to entry relative to Docker.

Docker might not be simple; there's a lot of moving parts to manage, some hidden gotchas, and networks are a mess. But it's comparatively easy, and once you bake an image, it's pretty simple. Dockerfiles are basically just sh. Package managers are the usual suspects. Images are easy to distribute. It's very familiar.

Nix is not easy. It's a new syntax to learn, it's a new way of building, it's an entirely new way of thinking about the problem. But in the end, it does simplify packaging in the Rich Hickey sense of the easy/simple dichotomy. No braiding of packages and shared libs. But using Nix is not so simple. There's all kinds of bindings and bodges and bespoke shims to make it all work.

History tends to show that easy+non-simple things beat out simple+non-easy things (devs are lazy). Easy-kinda-simple vs non-easy-kinda-simple-but-long-term-gains? No contest in my opinion.

I think Nix is a beautiful idea but it's an uphill battle.


I agree, there’s a reason no one uses Nix. It has terrible DX


Well "no one" is a bit absolute. But colloquially, I know what you mean. Relative to Docker, it's way less popular. But it has 5.2k stars on https://github.com/NixOS/nix .

But yeah. The DX "needs work". which is a nice way of saying, I find it downright painful to use.


Does it seem weird to anyone that Nix insists on installing its "/nix" directory on the root directory rather than somewhere sensible like "/usr" or even "/usr/local" ?


Quite a few people use Nix.


No, it definitely (but unfortunately) will not. Nix does everything Docker does better than Docker does, except, most crucially, integrate with non-Nix tooling.

Nix vs Docker is like Rust vs JavaScript: you can point out every reason JS is terrible and Rust is better, but the common developer looking to get things done will often gravitate to the tool that gets them the biggest impact with the least upfront investment, even if that tool ends up causing major predictable headaches in the future.


My favorite feature of NixOS so far though is the ease of creating containers via the configuration.nix file. There's a few services I run that don't have nix packages, but do have containers. It's essentially like writing a docker compose file, but instead of YAML, you use Nix and all of the niceties that come with it. Seems like the best of both worlds (as someone who isn't themselves creating containers).
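A rough sketch of what I mean, using the `virtualisation.oci-containers` NixOS module (the service, image tag, and paths are made-up examples, not a tested config):

```nix
# Hypothetical fragment of configuration.nix: run an upstream container
# declaratively, no compose file needed.
{
  virtualisation.oci-containers = {
    backend = "podman";
    containers.gitea = {
      image = "gitea/gitea:1.17";
      ports = [ "3000:3000" ];
      volumes = [ "/var/lib/gitea:/data" ];
    };
  };
}
```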


I like to compare Nix to ice-nine from Cat's Cradle, in that it tends towards restructuring whatever it comes into contact with.


Funny you mention that. Guix, which is a fork (of sorts) of Nix, is written in Guile Scheme, which uses ice-9 as its namespace in a lot of places. https://lists.gnu.org/archive/html/guile-devel/2010-07/msg00...


This gets repeated a lot, but Guix really is not a fork of Nix. Not by any definition of "fork". Also not "a fork of sorts". The term "fork" only applies to one executable: "guix-daemon", which is a literal fork of "nix-daemon". They have, of course, diverged a lot, but this (and nothing else) is truly a fork.

Aside from this one executable there is no relationship between the two projects.

The daemon takes .drv files that list what preconditions a build has (= other .drv files), what build script to run (= a generated Guile script), and what outputs will exist when the .drv file has been processed. It processes these .drv files in an isolated environment (chroot + namespaces) where only the inputs mentioned in the .drv file are made available.

The drv files are not shared among the projects; they are not generated in even superficially similar ways. Guix is not a fork of Nix. "guix-daemon" is a fork of "nix-daemon".

Both are implementations of the same idea: functional package management.


This fits my definition of a fork (of sorts). Thanks for the nuance though.


I love reading your blogs on Guix, I'll keep that in mind for future reference sorry for the libel ;)


There’s just too many edge cases and system/language oddities that make me continuously reach for Docker as default, even after 4 years of NixOS as daily driver.


Yes, and Nix can still lose on maintainability in the long run, considering it is more difficult to onboard new devs with it. They have to learn the Nix expression language and write custom bindings for much software instead of calling the native package manager inside Docker.


I don't understand how these are comparable. I also don't understand what you mean by "bindings". Do you mean writing nix derivations for new packages? I would much rather do that than fiddle with Debian packaging. Or do you mean writing nix modules to configure a service? There are certainly some (IMO) over engineered nixos modules, but there are also some dead simple ones.


> I don't understand how these are comparable

They fulfill similar business functions - allowing you to run the same code on a bunch of dev machines and on prod (modulo modifications for e.g. database storage in Docker's case). Nix people get hung up on the fact that Docker runs containers, but it doesn't really matter that much. Often Docker is the shortest path to getting software running on multiple machines reproducibly.

> I also don't understand what you mean by "bindings"

I am referring to derivations and modules. Both are glue that you have to write for existing software that is already packaged. With Docker you leverage the existing packaging ecosystem like pip or apt. The packages are already written for you, and you can follow the installation instructions from a project repository and they translate seamlessly into Docker.

For example with ML & Python - If you want PyTorch with CUDA support, you can follow the official documentation [0] and basically copy and paste the installation instructions to a Dockerfile RUN statements. If anything breaks you can file an issue on the PyTorch issue tracker which has a wide audience. With Nix you have to write glue on top of the installation yourself, or a maintainer does it with a much smaller code review and support audience. Sometimes the audience is just the author, given that Nix project people commit directly to master frequently and do self-merges of PRs [1]. And there are other hurdles like compiling Python C extensions, which are pervasive.

Another example is with software systems, I guess this would be a Nix module. Here's GitLab: [2] where it was really difficult to translate the services into Nix. But a lot of company internal services can look like GitLab with a mishmash of odd dependencies and languages. And writing a Dockerfile for this is much easier than Nix, since you can copy from the existing README specifying the Debian or language-specific dependencies. (edit: and if there are conflicts between dependencies of the services they can go into different containers. Getting the benefit of Nix - reproducibility - without the extra effort.)

[0]: https://pytorch.org/get-started/locally/

[1]: https://discourse.nixos.org/t/proposal-require-pr-authors-to...

[2]: https://news.ycombinator.com/item?id=14717852


> and if there are conflicts between dependencies of the services they can go into different containers. Getting the benefit of Nix - reproducibility - without the extra effort

To be clear, are you suggesting that

    RUN sudo apt-get update && sudo apt-get -y install ...
is somehow reproducible? I'm asking because I was surprised to see the above described as "reproducible" of all things. Splitting that into many different containers would likely exacerbate the reproducibility problem instead of improving it.

> With Docker you leverage the existing packaging ecosystem like pip or apt. The packages are already written for you

This is even more true for Nix, which has the largest and most up-to-date package repository out there[1]. Plus, with Nix, you can easily make a new package based on existing packages with a mere few lines of code if the existing packages don't fit your needs. Other package managers besides Guix don't offer you that flexibility, so you'd have to compile from scratch. That's way more tedious, hard to maintain, and definitely not reproducible.

[1]: https://repology.org/repositories/graphs


I am frustrated that you ignored my main argument, that Docker sufficiently serves the same business function as Nix with lower maintenance. Instead you focused on a semantic argument about one word in my post - "reproducible". You have the wrong idea about reproducibility when you say basically everything that is not Nix or Guix is not reproducible. This ignores things like conda, and techniques like package version pinning, that allow researchers and businesspeople to get the same results from the same code. Here's a definition of "reproducible" from Wikipedia:

Any results should be documented by making all data and code available in such a way that the computations can be executed again with identical results.

https://en.wikipedia.org/wiki/Reproducibility

-----

> To be clear, are you suggesting that

>     RUN sudo apt-get update && sudo apt-get -y install ...

No, you need to pin the dependency versions. With Python this practice is already normalized with requirements.txt or conda yml files. So you would take an existing project and do:

    RUN conda env create -f environment.yml
which would likely be copy-pasted from the project README. The yml file specifies version numbers for dependencies, and the SAT solver is deterministic. For other languages like C, maybe the project didn't specify dependency versions; then you figure them out once you first get a successful build and pin them in the apt-get install line.

Yes, this is reproducible. Definitely good enough for most business use cases. When I say reproducible I do not mean ivory-tower math-proof reproducible, just that the code will run on the relevant machines being targeted - as I wrote in my initial comment, and as defined at the top of this one.
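Concretely, the pinned-Dockerfile pattern looks something like this sketch (the digest and version strings are placeholders, not real values):

```dockerfile
# Pin the base image by digest instead of a floating tag.
FROM debian@sha256:<digest-of-a-known-good-image>

# apt-get accepts exact versions with the pkg=version syntax.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      curl=<pinned-version> \
 && rm -rf /var/lib/apt/lists/*
```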

Also Nix provides a worse experience for pinning dependency versions since it does not have a native concept of version numbers [0]. Instead people have to grep through the Nixpkgs repo to find the correct hash of their dependency version.
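To illustrate the pinning dance being described: you end up importing a specific nixpkgs revision, roughly like this sketch (the commit and sha256 are placeholders you have to dig up yourself):

```nix
let
  # Revision found by grepping nixpkgs history for the version you need.
  pinned = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz";
    sha256 = "<hash from nix-prefetch-url --unpack>";
  }) { };
in pinned.python3
```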

> This is even more true for Nix, which has the largest and most up-to-date package repositories out there

No, Docker has the closure (to borrow Nix's terminology) of all of the package managers in that graph. If you add the height of the Debian bar to those of PyPI and Hackage, you already have Nix beat. You can keep adding - Cargo, Ruby gems, etc. - all in their native package managers. If Nix were better off, people would be adapting Nix packages to other ecosystems. But the reality is the other way around.

> Plus, with Nix, you can easily make a new package based on existing packages with a mere few lines of code if the existing packages doesn't fit your needs. Other package managers besides Guix doesn't offer you that flexibility so you'd have to compile from scratch

With Nix, you are forced to make new packages based on existing packages. That is not a benefit. Regarding "if the existing packages doesn't fit your needs", compiling from source is not a big deal since Docker caches output artifacts.

[0]: https://github.com/NixOS/nixpkgs/issues/93327


> I am frustrated that you ignored my main argument

Your main argument was that Docker "sufficiently" serves the same goal of reproducibility. I just pointed out how it doesn't come anywhere close. Addressing the core of an argument is far from a "semantic" argument.

> where you say basically everything that is not Nix or Guix is not reproducible

My definition of reproducibility is that you get identical build results every time, which should meet the definition you quoted from Wikipedia. "docker build," which runs arbitrary shell commands with network access, is the farthest thing possible from any sane definition of "reproducible."

> Also Nix provides a worse experience for pinning dependency versions

The exact opposite is true. No other system-level package manager like apt or yum truly supports pinning packages. With apt or yum, packages in a repository snapshot are tightly coupled together since they're all installed into a single shared location. It's not possible to swap out or pin a subset of packages without the risk of breakage.

Nix provides a truly working way to pin packages. Each package is installed into its own isolated location to avoid collisions, and dependencies are explicitly specified. This makes it possible to mix packages from stable channels, unstable channels, and even specific git commits of those channels. This can't be done with apt or yum.

Language-level package managers are somewhat more flexible regarding pinning, but still have problems. More on that next.

> No, you need to pin the dependency versions. With Python this practice is already normalized with requirements.txt or conda yml files.

Yet CI builds constantly break because Python dependencies are a moving target. And no, the SAT solver doesn't make the builds deterministic. The fact that you even need a SAT solver just makes it clear that dependency management is getting out of hand and we need better tools.

> If you add the height of the Debian packages with Pypi and Hackage you already have Nix beat. ... If Nix were better off then people would be adapting Nix packages to other ecosystems.

I don't know why you believe Nix, and only Nix, has to compete with all other package managers combined. You must really dislike it if you can convince yourself that's a fair comparison.

But Nix can be used along with other package managers, so I don't see the point here. The only anomaly here is the Docker image, a monolithic binary blob that doesn't compose well like packages in other package managers.

And speaking of fairness

> or a maintainer does it with a much smaller code review

Where did you get this idea from? Nix has a growing community, and Nixpkgs is one of the most active repositories on GitHub. Other package repositories, with the possible exception of Homebrew and the AUR, have a much higher barrier to entry, which would most definitely result in "smaller code review."

> Nix project people commit directly to master frequently and do self-merges of PRs

Self-merges are nowhere near unique to Nixpkgs, so it's unfair to single Nixpkgs out for them. And if you count language-specific package repositories like NPM or PyPI, you should assume there is zero code review for most packages.

While regrettably there are self-merges in Nixpkgs, they are definitely in the minority, and a lot of those changes are especially trivial. Since Nixpkgs has a vibrant community, things like this tend to get attention; some community members keep an eye on self-merges and are quick to bring these instances up. It's also worth noting that the Nix community is especially invested in automated testing compared to other package managers, and these tests run on PRs that end up being self-merged.

> With Nix, you are forced to make new packages based on existing packages.

That is 100% FUD.

> "if the existing packages doesn't fit your needs", compiling from source is not a big deal since Docker caches output artifacts.

It is a big deal that you can't reuse code with non-Nix package managers. Docker caching isn't relevant and does nothing to deal with maintainability or reproducibility issues. Our company maintains custom OpenSSL RPMs, and it has been a constant source of pain due to RPM's lack of code reusability. Now we also have to maintain our own version of every single package that relies on our build of OpenSSL, which is a nightmare. This wouldn't have been a problem with Nix.


> You must really dislike it [Nix] if you can convince yourself that is fair comparison.

No, I do not give a single fuck about Nix versus Docker. I have no personal attachment to either. I am just worried that pushing Nix at my company would be some form of professional malpractice, given the downsides. I literally have a meeting tomorrow about incorporating Docker into a different team's product. I've used both Docker and Nix before; if Nix would be better for them, I would tell them as much. I'd be fine continuing this discussion - some parts were interesting - but unfortunately you seem incapable of formulating an argument without resorting to personal attacks and condescension, and I cannot tolerate that.


> With Docker you leverage the existing packaging ecosystem like pip or apt.

    with import <nixpkgs> {};
    runCommand "my-python-package" { buildInputs = [ pythonPackages.pip ]; } ''
      cd ${/my/project/dir}
      pip install . --prefix=$out
    ''


That compared to `RUN pip install <package>` is probably one of the things people are complaining about, no?


I think the complaint is about things like:

    (import <nixpkgs> {}).pythonPackages.callPackage
      ({ buildPythonPackage, dep1, dep2, dep3, pip }: buildPythonPackage {
        pname                 = "my-package";
        version               = "123";
        propagatedBuildInputs = [ dep1 dep2 dep3 ];
        doCheck               = true;
        src                   = /my/package/dir;
      })
      {}
That's how Nixpkgs tends to do things, which has nice features like building each dependency separately, allowing easy overrides, etc. but it requires knowledge of how Nixpkgs orchestrates its Python packages.

In contrast, 'runCommand' lets us just run a shell script like 'pip install', which is easier but doesn't have those niceties. Also, depending on the platform, the Nix sandbox may have to be disabled for 'pip install' to work, since Nix tries to prevent network access (I think the sandbox is enabled by default on Linux, but not on macOS).


I've set up my new M1 MacBook Pro using Nix and it's been going relatively well. Home Manager manages global tooling like Neovim, random CLI tools, and config files, while I've set up `default.nix` files to use with `nix-shell` per project. The setup of each project can be a little tedious, as I still find the language confusing, but once everything is set up the reliable re-creation is excellent. I love the feeling of opening the nix shell and knowing I have each tool I need for that project, without worry of polluting my user space or conflicts between projects.


I recommend also using direnv, with nix-direnv (home-manager has a setting to trivially enable nix-direnv). This lets you integrate your shell.nix environment into your existing shell without having to run `nix-shell` or use bash.
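For reference, a minimal sketch of that setup - option names as in current home-manager, but check against your version:

```nix
# home-manager configuration fragment
{
  programs.direnv.enable = true;
  programs.direnv.nix-direnv.enable = true;
}
```

Then the project's .envrc is just the single line `use nix`, and cd-ing into the directory loads the shell.nix environment.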


Oh wow, this is fantastic! Thanks for the tip and I'd like to thank you for all of the work you do on nix + Darwin. I'm using your `nix-env.fish` package and it's made things much easier than when I tried this setup a few years ago.


I'm glad to hear it! Though these days I don't actually use it at all, I finally switched to using nix-darwin[1] which means I get a NixOS-like fish initialization. Also prior to that I got the fish package updated with a new override option fishEnvPreInit that lets you write an overlay like

  self: super: {
    fish = super.fish.override {
      fishEnvPreInit = sourceBash: sourceBash "${self.nix}/etc/profile.d/nix-daemon.sh";
    };
  }
This would source the given bash script (using fenv IIRC) at the same point that NixOS fish would load its environment, thus producing the same behavior (notably, setting up all of the various directories prior to running any user code, unlike nix-env.fish that has to try and patch things up after the fact). The downside is that it means you have to recompile fish.

[1] https://github.com/LnL7/nix-darwin


My MBP left Shenzhen early this morning and I'm super interested in the details of how you did this. Are there any examples of doing this that you followed or would recommend? Ansible is more my tooling of choice, though I'm fascinated by Nix, but I wasn't sure even how to get started with using Ansible to set up the Mac.


I just watched this Jeff Geerling video about setting up his M1 Macs using Ansible and his provided playbooks. That's probably the direction I'll end up going. https://www.youtube.com/watch?v=1VhPVu5EK5o


I have a Makefile that I use when I need to spin up a new laptop [1]. From your description it sounds like it is functionally similar. What does Nix bring that a Makefile like this one doesn't?

[1] https://github.com/jchilders/dotfiles/blob/main/Makefile


Drift. Nix (effectively) does not mutate in place; it rebuilds and links. Delete a target from that Makefile and it doesn't actually remove that thing. It's the same problem Ansible etc. have. It's not until you actually fully reinstall the OS that all the implicit dependencies reveal themselves. Sure, one can write "clean" targets, but the point of Nix is that such manual management is unneeded.


Not an answer to your question, but do you feel safe doing (https://github.com/jchilders/dotfiles/blob/main/Makefile#L34)

> sudo curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/in... | /bin/bash

piping the output of a curl command to sh without first checking the sha256 of the file you just got?

In a similar situation I would not be comfortable without at least getting a specific version of the tool I'm downloading and then hardcoding a hash of its content, computed after manually downloading and inspecting it.
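Roughly the pattern I mean, as a sketch - the "installer" here is a local stand-in, and in a real script the expected hash would be a hardcoded constant computed after inspecting the download:

```shell
set -eu
cd "$(mktemp -d)"
# Stand-in for the script you would have fetched with curl:
printf 'echo hello from installer\n' > install.sh
# In practice this value is computed once, inspected, and hardcoded:
EXPECTED_SHA256="$(sha256sum install.sh | cut -d' ' -f1)"
ACTUAL_SHA256="$(sha256sum install.sh | cut -d' ' -f1)"
if [ "$ACTUAL_SHA256" = "$EXPECTED_SHA256" ]; then
  echo "checksum OK"
  # only now would you run: /bin/bash install.sh
else
  echo "checksum mismatch, refusing to run" >&2
  exit 1
fi
```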


Your Makefile is not a reproducible build. You would need to pin all the versions of the tools being used. Also as others have mentioned, Nix ensures that you do not rely on unspecified dependencies.


I guess it's reproducible - it creates a system that has exactly the same versions of all programs and all libraries.


Wow, I didn't know Nix ARM support was that far along already. Honestly, I always felt that macOS is kind of second-class in the Nix ecosystem.


If you're interested in digging deeper into building Docker containers with Nix, this is my favorite post on the topic: https://thewagner.net/blog/2021/02/25/building-container-ima.... Essentially you can use any Nix package (including your own) to create the environment.

And if you really want to understand it, Graham Christensen (the contributor of the buildLayeredImage function) wrote a really good blog post on how it works: https://grahamc.com/blog/nix-and-layered-docker-images.


IMO the initial value of Docker for local development is enabling me to run two copies of Postgres without them shitting on each other. I get that Nix is supposed to be hermetic, but does it enable two of something?

Nix being really good at package management is something Docker needs to imitate - out-of-order apt-get without re-downloading all the packages, for example, seems like it would shrink most cloud build costs. I guess this is what the article means by trouble 'sharing layers'.

docker buildkit (or moby buildx, or wherever the branding has settled) is supposed to improve caching, but a simple caching API that plays nicely with system and language package managers would really move the needle here; reorderable package installs would be the v2


Nix supports this quite well.

https://nixos.org/manual/nix/stable/#multiple-versions

The Nix and container mindset are very similar in that they refer to all of their dependencies, including down to glibc.


Not quite. This is true for dependencies such as libraries but for services it's significantly trickier.

Postgres, for example, would require configuring each version to use a distinct space for storage and configuration if you want to run them concurrently. It's still pretty easy with NixOS, but not as simple as you make it seem.


It's one line of override configuration, how is that not trivial?

Edit: Totally granted, figuring out that one line the first time might take 6 hours, but once you know how to do it you're good to go the next time. The documentation could certainly be improved.


Does Nix understand network namespacing? Or would 2 postgres instances clash over tcp listen ports?

I get you could configure different ports, or virtual interfaces, but it sounds like either of those would be outside of the nix tooling.


I don't think it would be possible to do this in an OS-agnostic way. LXC and jails are too different, and I'm not even sure what the option would be in macOS.


I don’t think NixOS (where service configuration would live) is system-agnostic. It relies heavily on systemd already.


You would have to deal with that yourself. But it's typically just a variable in the service definition, so it's very easy to override.


You can, if you take steps to containerize the postgres processes. This can be done with NixOS's nixos-container, or any other container runtimes including Docker. nixos-container is easy to use if you already use NixOS.

This separation of concerns is one of things I like about Nix when compared to something like Docker. For instance, if you use the Docker image format for packaging, then you're also forced to buy into its specific sandboxing model. With Nix, you can choose to run applications the way you see fit.
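For instance, a declarative NixOS container looks roughly like this sketch (option names from the NixOS manual; treat this as illustrative, not tested config):

```nix
{
  containers.pg1 = {
    autoStart = true;
    privateNetwork = true;   # own network namespace, so ports don't clash
    hostAddress = "10.0.0.1";
    localAddress = "10.0.0.2";
    config = { ... }: {
      services.postgresql.enable = true;
    };
  };
}
```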


You can use Nix to create a tarball that can then be launched as a Docker container. However, I haven’t figured out a way to make Nix play nicely with container image layering—you get a small container image for deployment, but you’ll have lots of such largely-duplicative tarballs in the CI pipeline and the latency for generating them is annoying.


Have you tried dockerTools.buildLayeredImage (https://nixos.org/manual/nixpkgs/stable/#ssec-pkgs-dockerToo...)? It applies some smart heuristics (https://grahamc.com/blog/nix-and-layered-docker-images) to create images made of pretty reusable layers. This is subject to some false cache misses due to some laziness in Docker's cache implementation (https://github.com/moby/moby/issues/38446), but that is Docker's fault, not Nix's, and it affects Dockerfile builds too.


No, hadn’t seen that. Thanks!


I do this, but I don't use Docker; I just create a .tar.gz file for each layer, run them through the `sha256sum` command to get a digest, and generate a JSON config file (via builtins.toJSON). The result can be run by anything that supports OCI containers (e.g. AWS ECS).
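A rough sketch of that flow (file names illustrative; note that an image config's diff_ids hash the uncompressed tar, while a manifest would reference the digest of the compressed blob):

```shell
set -eu
cd "$(mktemp -d)"
mkdir -p layer-root/usr/bin
echo 'hello' > layer-root/usr/bin/hello

# One layer: a tarball of the root filesystem delta.
tar -cf layer.tar -C layer-root .
DIFF_ID="sha256:$(sha256sum layer.tar | cut -d' ' -f1)"

# The compressed blob is what actually gets pushed to a registry.
gzip -c layer.tar > layer.tar.gz
BLOB_DIGEST="sha256:$(sha256sum layer.tar.gz | cut -d' ' -f1)"

# Minimal image config stub referencing the layer by diff_id.
printf '{"rootfs":{"type":"layers","diff_ids":["%s"]}}\n' "$DIFF_ID" > config.json
cat config.json
```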


> I get that nix is supposed to be hermetic, but does it enable two of something?

No, it doesn't solve the TCP port isolation problem. (But Docker doesn't really either. Linux network namespaces should, but nobody bothered to develop tools for that yet.)


On Docker for Linux you get different hosts for the containers; you'll still need to BYO way to assign them names - I personally use direnv for this.

I think this doesn't work as well on Docker Desktop for Mac


Doesn't docker-compose set up a private network interface?

