Guix and Nix lack the hype of technologies/buzzwords like Docker and anything container-related, but they look incredibly pragmatic and easy to use.
Whenever I look at containers, I get the impression they started out trying to solve the problem of app packaging, but at some point they tugged too hard on the tablecloth and all the other virtualization problems came tumbling down too. I guess you can avoid a lot of that if you stick to running one container per host.
Anyway, Guix looks super user-friendly and I'm going to give it a try!
I disagree; the advantage of Docker is that you can slap an image together by doing essentially what you were already doing: running a couple of apt install commands and pip install -r requirements.txt, copying a few files, and it's done. You don't need to learn a new configuration language, or package all your dependencies yourself, or try to understand a new filesystem layout when you enter a container.
I find Nix and Guix more interesting than Docker, but there's more to the latter's popularity than lack of marketing by the former.
> I disagree; the advantage of Docker is that you can slap an image together by doing essentially what you were already doing: running a couple of apt install commands and pip install -r requirements.txt, copying a few files, and it's done.
IMO that is the problem with Docker. People think that Docker somehow does something for dependency management. It's an illusion: when you make or move a project into Docker, you are essentially starting a new project, and new projects do not have dependency-management problems. In a couple of years, as people start upgrading components in their Docker-hosted apps, dependency-management and version-conflict problems are going to appear, and they will be harder to resolve because image layering adds many possible failure points (not just at each layer, but also in the interactions between layers).
Docker is a convenient way to deploy Linux containers, but IMO it is a ticking time bomb when it comes to dependency management.
> the advantage of Docker is that you can slap an image together by doing essentially what you were already doing: running a couple of apt install commands and pip install -r requirements.txt, copying a few files, and it's done.
This is exactly what Nix expressions consist of, except actually reproducible.
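For instance, here's a minimal sketch (package names picked arbitrarily from nixpkgs) of the Nix counterpart to "apt install python3 + pip install -r requirements.txt":

with import <nixpkgs> {};

# Declare the runtime inputs once; running nix-build on this file produces
# a Python environment whose packages are pinned by the nixpkgs revision.
python3.withPackages (ps: [ ps.requests ps.flask ])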
> You don't need to learn a new configuration language, or package all your dependencies yourself
Isn't this exactly what Docker and friends require...?
> or try to understand a new filesystem layout when you enter a container.
It wouldn't be new if it were standard, as it should be.
> This is exactly what Nix expressions consist of, except actually reproducible.
Wait, really? You run "apt-get install X Y Z" in a Nix expression? Or are you saying they're equivalent? Because my point is that Docker runs the exact same commands.
> Isn't this exactly what Docker and friends require...?
Sort of, but not really. Dockerfiles are extremely basic, and most of the work is done by shell commands, which developers already know and use.
As for packaging all the dependencies: well, yes, but as a big ball of mud. Nix and Guix encourage you to define separate packages and connect them. Dockerfiles are literally just a dumb container into which you shove everything (using the same commands you already knew).
> It wouldn't be new if it were standard, as it should be.
And if my grandmother... well, you know the rest.
=================
Don't get me wrong, I'm not praising Docker; quite the contrary. I'm just explaining why I think it's more popular than Nix/Guix.
> Wait, really? You run "apt-get install X Y Z" in a Nix expression?
It's probably possible, if you're running Nix on a Debian-based distro, assuming permissions and things are set up right. Of course, that probably counts as Nix abuse, the results wouldn't be any nicer than using apt-get as-is, and it may play badly with Nix's caching.
As a more realistic example, it's perfectly possible to fetch a bunch of repos and configure/build them using the normal processes for that language, e.g. something like this for a Haskell package:
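Something along these lines (the repository URL, revision and hash below are placeholders, and any real dependencies would need to be listed too):

with import <nixpkgs> {};

# A rough sketch: fetch a Haskell project and drive its usual Setup.hs
# build from bash blobs inside a single derivation.
stdenv.mkDerivation {
  name = "example-haskell-app";
  src = fetchgit {
    url = "https://example.org/example-haskell-app.git";  # placeholder
    rev = "...";
    sha256 = "...";
  };
  buildInputs = [ ghc ];
  buildPhase = ''
    runhaskell Setup.hs configure --prefix=$out
    runhaskell Setup.hs build
  '';
  installPhase = ''
    runhaskell Setup.hs install
  '';
}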
If you like opaque, monolithic blobs of bash, then the above seems pretty comparable to Docker. The reason few Nix users would do such a thing isn't that it's not possible; it's that Nix's language makes it easy to decouple those pieces into composable modules; verify, cache and share intermediate results; selectively override parameters; inspect and debug at a REPL; etc.
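"Selectively override parameters", for instance, can be as small as this (package and attribute picked arbitrarily):

with import <nixpkgs> {};

# Take an existing package from nixpkgs and tweak one attribute without
# copying or rewriting its whole build recipe.
hello.overrideAttrs (old: {
  doCheck = false;  # e.g. skip the test suite for this one build
})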
In fact, it's probably quite straightforward to implement a docker-like system as a bunch of Nix functions, e.g. to get something like:
with import <nixpkgs> {};
fauxDocker {
  baseImage = http://some/huge/docker/blob;
  # Will be run in a chroot of the above image
  setup = ''
    apt-get update
    apt-get install foo bar
    ...
  '';
}
> In fact, it's probably quite straightforward to implement a docker-like system as a bunch of Nix functions
Oh, it turns out that already exists. Just came across some tests, which appear to chroot into another OS image and run commands. Complete coincidence :)
Not exclusively. If I want to pay someone for support for running NixOS-based servers, who do I pay? Does NixOS have a support story anywhere near Red Hat's (or even Ubuntu's)? What about the awkward CLI UX? What about a turn-key solution to deploy my app? These questions are reasonably well answered with Docker, less so with Nix/NixOS.
I say this as someone who really likes the ideas behind Nix. In fact, I see the problems Docker solves as being handled reasonably well by Nix:
* Isolation of multiple applications on the same server is handled by NixOS containers, which use the same configuration as a full NixOS system and rely on things like cgroups and namespaces (see the sketch after this list),
* Immutable/declarative deployments (which Docker does through things like docker-compose) are elegantly handled through Nix expressions,
* Multiple versions of a dependency are handled through the Nix store.
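For the first point, a minimal sketch (service name and addresses made up) of a declarative NixOS container as it would appear in configuration.nix:

{ config, pkgs, ... }: {
  # Isolated via the kernel's namespaces/cgroups, but configured with the
  # same module system as the host.
  containers.webapp = {
    autoStart = true;
    privateNetwork = true;
    hostAddress = "192.168.100.10";
    localAddress = "192.168.100.11";
    config = { config, pkgs, ... }: {
      services.nginx.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
    };
  };
}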
I think a good step would be to provide solid tooling for generating Docker images from Nix, which is roughly what `guix pack` can already do on the Guix side today. Once Nix has gotten over these pains, it can become a good developer tool, and from there, ops may choose to adopt it.
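For what it's worth, a sketch of what such tooling can look like with nixpkgs' dockerTools (package choice arbitrary):

with import <nixpkgs> {};

# Build a Docker-loadable image from Nix store paths, without a Docker
# daemon involved in the build; load the result with `docker load < result`.
dockerTools.buildImage {
  name = "hello-image";
  contents = [ hello ];
  config.Cmd = [ "${hello}/bin/hello" ];
}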
This company uses Nix and recently funded certain features. If you really want some help with Nix, they might have the expertise for you: http://www.tweag.io
> If I want to pay someone for support for running NixOS-based servers, who do I pay?
This is exactly my point; the industry is blissfully unaware of the very concepts behind Nix. I'm not saying Nix itself is the ultimate piece of engineering, but it's clearly ahead of industry thinking.
> What about a turn-key solution to deploy my app?
I'd say calling it user-friendly is a touch misleading.
It has a pretty steep learning curve. Expect to spend a while banging your head against it before you get anywhere.
There are definitely times when you have to dig into Nixpkgs source code or be willing to package things yourself.
I do think Nix or Guix or something like them is the future though. Probably the most pain-saving development in computers since distributed version control.
I really love GNU GuixSD (the distro based on this package manager). I just wish the Linux-libre crew would bite the bullet and simply publish an actual HCL (hardware compatibility list). "Some transmitters with this chipset may not work because they require non-free firmware (that we deliberately changed the kernel to refuse to load)" doesn't really work for me.
The distro looks very consistent and nice. I wish there were a version that included firmware, so those of us with hardware that requires non-free firmware could use it. Yeah, FSF idealism, but I'm not in a position to buy a Linux-libre-compatible machine right now.
FWIW: it's not hard to obtain hardware that works with Linux-libre. My main machine is a Thinkpad X200S where the restrictive Lenovo BIOS has been replaced with Libreboot, allowing me to install a WiFi card from Tehnoetic (using the Atheros driver).
I have a couple of other machines and only one of them has reduced functionality when using Linux-libre (e.g. the on-board Radeon graphics chip needs non-free firmware for higher resolutions).
Guix and GuixSD make it easy to use a custom kernel package, so you can choose a blobby kernel if you can accept that.
To share my own story, I flashed a hacked version of the proprietary BIOS on a Thinkpad X220 and was able to replace the Intel wireless chip with an Atheros chip. At the time Libreboot wasn't available, but it could still be a viable method of using a newer Thinkpad with a fully free OS, downside being that the proprietary BIOS is still in use.
The HCL you're looking for pretty much already exists.[0] If you're looking for more specific recommendations, the Respects Your Freedom certification is a more limited list of hardware that is known to work without non-free firmware.[1]
> that we deliberately changed the kernel to refuse to load
I've heard this myth before too. Here's someone else's succinct answer to it:
It's a common misunderstanding to think that the Linux-libre project forbids loading non-free modules/drivers at the user's request after the kernel is installed. In fact, it's a bug that is very hard to fix due to the way the Linux kernel itself is made, as Alexandre Oliva, a member of the Linux-libre project, states at http://www.fsfla.org/ikiwiki/blogs/lxo/2013-11-08-linux-libr... (when answering the question "Is there anything that the project wants to do that it can't do now? In general, what are the future plans for the project?").
Thanks to user namespaces, it's easy to map, say, /home/myself/gnu to /gnu, as shown in the article (with "unshare").
For cases where user namespaces are unavailable, Pjotr Prins wrote a neat hack that allows users to "relocate" binaries to a different file system location by rewriting file names in the binaries: https://github.com/pjotrp/gnu-install-bin . Scary to some, but definitely does the job. :-)
I need to look at how pack is implemented; maybe I'm missing something. I already used this to install Guile 2.2 on a Debian machine, so I'm excited about this development.
But to clarify my point: instead of all the content hashes being based on "/gnu/store", they'd be based on "gnu/store", which would be relative to a "store root" configured in the daemon (defaulting to /).
Then generated packages would also have relative links, so "installing a package from Guix" would just be a matter of downloading the dependency graph and tucking it away wherever you please.