I'm intrigued by Guix but right now I'm invested in NixOS. Finally I've learned most of the Nix language, which is elegant and quite clever, but it was a little bit of a hurdle. Scheme is more familiar and easy to learn, I think, and I can easily see macros as being extremely useful for system configuration. I'm fully convinced that the Nix/Guix style of package management and system configuration is Teh Future. Anyone who isn't drinking koolaid on that bandwagon yet should hop on and check it out.
I also love Nix/Guix, but I have a feeling the future will sadly give us containerized apps, systemd-style. Much worse security- and sysadmin-wise.
Containers are cool for big things. But imagine every single program stuck inside a container without proper management tools to track dependencies. Now imagine a heartbleed-like scenario where you need to patch a big security hole ASAP. With containers you'd have a hard time.
"Container" is a rather vague term and the implementation details vary. For example, Guix containers (though not fully baked yet) do not have the downsides that we've come to expect from Docker and friends like being based on opaque images with no useful provenance, relying upon complicated overlay file systems, duplication of the same software amongst different container images, and not doing anything to solve the very crucial reproducible builds problem. Instead of using disk image layers, Guix can simply bind-mount the required package builds (we call them "store items") from the host into the container. One of pleasant consequences of this design is that software shared amongst N containers is on the file system in exactly one place. No need to rely on complicated file systems that may or may not actually deduplicate things depending on the circumstances.
Let's use heartbleed as an example. It is easily possible with Guix to walk the dependency graph for any package or system configuration (container or otherwise) and inspect for known vulnerable software. Docker simply does not, and cannot, know the level of detail that Guix knows about the composition of software and systems.
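To make that concrete, here's a minimal Guile Scheme sketch of the kind of query this enables (module names as in current Guix; a real audit would recurse over the whole dependency graph, which is what tools like "guix graph" build on):

    (use-modules (guix packages)        ; package accessors
                 (gnu packages tls))    ; provides the `openssl' package

    ;; Print the name and version of each direct input of openssl --
    ;; the first step of a Heartbleed-style audit.
    (for-each (lambda (input)
                (let ((thing (cadr input)))
                  (when (package? thing)
                    (format #t "~a ~a~%"
                            (package-name thing)
                            (package-version thing)))))
              (package-direct-inputs openssl))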
Furthermore, the Guix tools that build containers use a functional, declarative programming interface. Contrast this with Dockerfiles, which are an imperative sequence of steps that mutate a disk image in a specific order. To use Docker's caching abilities, you have to be very careful about the order in which the directives in the Dockerfile are evaluated. In Guix, the order doesn't matter, and the "cache" (it's really memoization) kicks in whenever any component of the system has been built before.
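For a taste of that declarative interface, here's a stripped-down package definition. This is a sketch: the hash is a placeholder (in practice "guix download" prints the real one), and a real definition would carry more detail:

    (use-modules (guix packages)
                 (guix download)
                 (guix build-system gnu)
                 (guix licenses))

    ;; A declarative package description: no steps to run in order,
    ;; just a value describing what the package *is*.
    (define-public hello
      (package
        (name "hello")
        (version "2.10")
        (source (origin
                  (method url-fetch)
                  (uri (string-append "mirror://gnu/hello/hello-"
                                      version ".tar.gz"))
                  (sha256
                   (base32
                    ;; Placeholder -- `guix download' prints the real hash.
                    "0000000000000000000000000000000000000000000000000000"))))
        (build-system gnu-build-system)
        (synopsis "Prints a friendly greeting")
        (description "GNU Hello prints a friendly greeting.")
        (home-page "https://www.gnu.org/software/hello/")
        (license gpl3+)))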
I've picked on Docker a lot here because it's the most popular, but you can substitute any other image-based container system for Docker and the same things will more or less apply. Docker is great for making things easier right now, when the status quo package managers are weak and you need on average 3 to 4 package managers to deploy any given web application. But it's not the future. Docker is based on binaries, whereas functional package/configuration management is based on source code. We can do much better (and we are doing it already) by building on the functional package management paradigm.
Depends how those "containers" interact with the system at large.
In the Windows world, it's common to have the libraries you need stored in C:\Program Files\Program_name\. Each program dumps the libraries into its own program directory, simply because it cannot guarantee they would be on the system.
In Linux, we deal with it by keeping different major versions of a library side by side, while minor versions clobber each other. So when someone wrote their stuff against Python 2.7.2 and 2.7.3 (bugfixes) comes out, there is invariably something that breaks.
A container that encompasses "Python" and holds all the versions would be a big deal. Same with other programs: they could interact, but their install and environment would be encapsulated as one complete "Python", regardless of version or bugfix.
Yeah, I don't want to run every single program in a container... and I don't plan to. That said, containers have their advantages even in that security scenario. For example, once I was running into obscure segfaults that seemed related to some interaction between Sidekiq and a particular Ruby version. Once I figured out the fix, I just changed the top line of a couple of Dockerfiles to change their Ruby versions.
Generally, containers are also built on existing Linux distributions and use their packages. I still haven't figured out exactly what I want to do, but my vague future plan involves containers bootstrapped from Nix expressions. That's just to get a level of indirection to abstract away NixOS, so I can run app services on whatever distribution.
Depends on how stuff is implemented, no? You change a library, you rebuild all packages that use that library, the user downloads package deltas, and the file system deduplicates so the same file is used in all the containers. It would only cause a problem with packages from third-party sources, but that's a possible problem already anyway.
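In Guix terms, that rebuild set is easy to compute. A rough Guile Scheme sketch for, say, openssl (module names assumed; "guix refresh --list-dependent" does this properly):

    (use-modules (guix packages)
                 (gnu packages)        ; `fold-packages'
                 (gnu packages tls)    ; `openssl'
                 (srfi srfi-1))        ; `any'

    ;; Roughly the set of packages to rebuild after patching openssl:
    ;; everything whose direct inputs mention it.
    (define (uses-openssl? pkg)
      (any (lambda (input) (eq? (cadr input) openssl))
           (package-direct-inputs pkg)))

    (fold-packages (lambda (pkg names)
                     (if (uses-openssl? pkg)
                         (cons (package-name pkg) names)
                         names))
                   '())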
I'm intrigued, too, but I'm planning on drinking the Kool-Aid only after someone figures out how to sanely include support for late binding of dependencies. If that happens—allowing security updates without explicit action for every package—I think I'll be in.
I'm just trying to get all of this "package management" sorted out, so could anyone please comment?
Guix is a package manager that is currently limited to 2619 packages [1].
NixOS is a GNU/Linux distro that uses the Nix package manager (which makes package management reliable and reproducible, providing atomic upgrades and rollbacks, side-by-side installation of multiple versions of a package, multi-user package management, and easy setup of build environments) and currently provides 11630 packages [2].
Ansible/Chef/Puppet is a package manager oriented at deployment at multiple nodes, which works with multiple GNU/Linux distros.
Vagrant is a package manager. A wrapper around Ansible/Chef/Puppet.
My understanding is that I can fix all of these problems with NixOS, with the caveat that I'm limited to NixOS, so I can't use Nix on Debian or Ubuntu or Red Hat.
Assuming the adoption of NixOS isn't a constraint, why should I bother to learn Ansible/Chef/Puppet and Vagrant?
And if Ansible/Chef/Puppet are still worth learning (and please comment why) besides NixOS, is there any reason to learn Vagrant as well?
> Ansible/Chef/Puppet is a package manager oriented at deployment at multiple nodes, which works with multiple GNU/Linux distros.
These aren't really package managers, but configuration management, orchestration, and desired state configuration tools. You can use them to execute package manager tasks (yum or apt-get, for example) across one or more servers at the same time.
Learning one of these tools is worthwhile. For example, if I want to deploy a bunch of files to specific locations across a farm of 100 servers, these are the tools for the job. I'm learning Ansible right now. The reason I chose it was because it's fairly lightweight and has a low barrier to entry compared to Chef or Puppet (for me anyway). The documentation is also very good. The sysadmincasts chap recently knocked out four (free) videos introducing Vagrant and Ansible, and they're quite good:
> Vagrant is a package manager. A wrapper around Ansible/Chef/Puppet.
No, Vagrant is a tool for standing up virtual machines with repeatable builds. It's mostly used as a development tool so you can quickly build an environment on your desktop (e.g. VMs running on VirtualBox). You'd then, for example, pass the Vagrant script for your build around your team so that you're all developing against the same environment. It's also handy for rebuilding exactly the same VM where you've maybe trashed it by accident.
The videos linked to above will give you a sense of where and when to use Vagrant and Ansible.
It seems like you need to do some more reading for full comprehension (and I am a novice), but real quick:
Ansible/Chef/Puppet are for configuration management and Vagrant is for managing VMs (normally on a local machine, for development). These tools may provide some overlap from a 'solutions' perspective, but they achieve the result via a different approach.
ETA: Apparently, Guix is for both package management and config, which is really interesting. This might be the excuse I needed to learn Scheme.
Does anyone else feel like they're just duplicating all the work done by Nix? I guess there's nothing wrong with multiple functional package managers. It just seems wrong to reject Nix just because it allows unfree software.
The Guix project reuses the Nix daemon, but it made different decisions at various points, such as using one unified language (Guile Scheme) for every part of the system, including system configuration. Using Guile for everything (turning Guix into a regular Scheme library) has given rise to different tools and projects that build on the library.
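A tiny example of what "regular Scheme library" means in practice (a sketch; module names may shift over time):

    (use-modules (guix packages)
                 (gnu packages base))   ; `coreutils', `grep', ...

    ;; Packages are ordinary Scheme values: any Guile program can
    ;; inspect or rewrite them.
    (format #t "~a ~a -- ~a~%"
            (package-name coreutils)
            (package-version coreutils)
            (package-synopsis coreutils))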
The problem space is the same but there is not just one valid approach, even when we both decide on sticking to functional package management.
I love Scheme, so working on Guix is a no-brainer for me, but also other decisions of the Guix project align more with my priorities, such as the desire to not let blobs (e.g. for bootstrapping Java) sneak in needlessly. The absence of non-free software is a feature, in my opinion.
Guix indeed has a strong commitment to user freedom, which makes it a different project.
Technically, I don't see it as "duplicating" Nix: Guix uses low-level parts of Nix that are awesome, and replaces higher-level layers with a more integrated and unified system. See https://arxiv.org/abs/1305.4584 for the technical rationale or the video on the home page on the concrete impact of those differences. Hacking GuixSD is a very different experience, I think!
(Disclaimer: I maintain Guix and I'm a former NixOS hacker.)
Does the Guix team have any interest in adding documentation to the Nix source code? Presumably you guys know a lot more about it than the average developer. I haven't spent a huge amount of time looking through the Nix source, but the times I have, I've found the documentation to be very anemic.
Random lazy question: do you know about running GuixSD on AWS EC2? We're very happy with NixOS's AMI images, since they let us provision AWS instances extremely easily.
I feel like all the Linux distributions are duplicating work. And all the language-specific package managers. If everyone could just get along and work on the same thing... Well, they're not going to. Guix has a different language, different core values, different allegiances, probably different long-term plans.
If I were to start thinking more seriously about the wasted effort in duplicated definitions of packages, I would probably end up thinking about using RDF linked data to define software dependencies and capabilities. Then you could generate Nix expressions, Guix things, RPMs, Docker containers, or what have you. The declarative model of Nix/Guix should work well with something like that.
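To sketch the idea (everything here is hypothetical; none of these names exist in any project):

    ;; Hypothetical: a package described once as neutral data, from
    ;; which per-system definitions could be generated.
    (define hello-meta
      '((name . "hello")
        (version . "2.10")
        (source . "mirror://gnu/hello/hello-2.10.tar.gz")))

    ;; One possible backend among many: emit a skeletal Nix expression.
    ;; (The output is incomplete -- e.g. fetchurl also needs a hash.)
    (define (meta->nix meta)
      (format #f "stdenv.mkDerivation {\n  name = \"~a-~a\";\n  src = fetchurl { url = \"~a\"; };\n}"
              (assq-ref meta 'name)
              (assq-ref meta 'version)
              (assq-ref meta 'source)))

    (display (meta->nix hello-meta))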
I hope you're not thinking of using Turtle or XML for defining packages in this system, as I would need to fork your project and define my packages in JSON-LD. I would also need to submit a proposal to the W3C to include 'rdf:SchemeLiteral' into the standard for compatibility with existing infrastructure.
Luckily the W3C people have been very careful to make sure that Turtle, XML, and JSON-LD are all interoperable, so I don't mind your fork, if the JSON syntax is more appropriate for your needs.
GNU seems like the China of software. They write their own version of all existing tools, simply so they can have a copy of it that completely adheres to their principles.
I understand and respect their position too, and while I think it's probably a waste of time and effort, I think they're a little less misguided than the GPL group. I use both GPL and BSD tools every day, so I'm mostly indifferent.
From this I conclude that python-sphinx is the python3 version of sphinx, right? Do you think that's sane (from a user experience point of view), assuming that /usr/bin/python should point to python2?
https://www.python.org/dev/peps/pep-0394/
Anyway, as a Python user I would expect to get the python2 version when installing python-sphinx.
In Fedora we have (per the most recent change in the packaging guidelines):
Source rpm: python-foo, which generates
binary rpms: python2-foo and python3-foo, where python2-foo has a virtual provides for python-foo. Once Python 3 becomes THE default Python on upstream's merits, we will change the python-foo provides to the python3-foo package.
It's a distribution of Linux that uses the Scheme language both for defining packages and for configuring the user's system.
In that picture, someone is generating a QEMU image from a certain system configuration. The Guix tools will read the configuration file, and then create a VM that matches its parameters.
You might be reminded of Ansible, Vagrant, and similar tools for "declarative" configuration.
Note how the configuration specifies a list of user accounts. Later, inside the VM, you could modify that list -- say, change Bob's name, add Alice, and also create a common user group for them -- and then tell Guix to update the system. It would then look at its current configuration and figure out what to change.
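Concretely, the user-account part of such a configuration looks roughly like this (a fragment; required fields like the bootloader and file systems are elided, so it won't build as-is):

    (use-modules (gnu))   ; `operating-system', `user-account', ...

    (operating-system
      ;; ... host-name, bootloader, file-systems, etc. elided ...
      (users (cons* (user-account
                      (name "alice")
                      (group "users")
                      (home-directory "/home/alice"))
                    (user-account
                      (name "bob")
                      (group "users")
                      (home-directory "/home/bob"))
                    %base-user-accounts)))

After editing the file, "guix system reconfigure config.scm" makes the running system match it.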
The same applies to things like defining system services. So you will have a Scheme file that defines, say, that nginx should be running on port 80 with these three different vhost configurations. The distribution will then know how to generate an nginx configuration file, how to set up the init system, and how to reload these things on changes.
(I don't know if Guix has a predefined thing for generating Nginx configurations; if not, you can certainly build one easily with some Scheme expressions.)
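For instance, a toy generator is just a few lines of Scheme (entirely hypothetical; not an actual Guix service API):

    ;; Hypothetical sketch: render an nginx `server' block from data.
    (define (vhost->nginx-config server-name port root)
      (format #f "server {\n  listen ~a;\n  server_name ~a;\n  root ~a;\n}\n"
              port server-name root))

    (display (vhost->nginx-config "example.org" 80 "/srv/www/example"))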
I use NixOS, which is an earlier implementation of the same idea. My brother and I work together and share a basic system configuration file. That means I can define a new web service, along with whatever package dependencies it may have, even including creating a new user account -- and push that to our repository, so he will pick it up, and then I can say "hey, check out localhost:8050" and it'll just work.
We also use NixOS on our cloud servers. We have one configuration file that declares all of our services (redis, etc) along with our app vhosts (mostly in the form of systemd units that launch Docker containers, using some custom Nix functions). This means that our services are always running the same way on our laptops as on the server, just on different domains (the server's internet domain vs our localhost domains).
> GNU Guix is a functional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.
> In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.
A different approach to package management, inspired by functional programming. Installing a package creates a new system generation shadowing the previous one. Older generations aren't removed, so you can have multiple versions of a package on disk and jump back and forth between generations.
ps: also stronger transactional semantics. You can only switch to a new generation if everything went right, and the old generation remains intact.
A few times under other distros, an update changed some dynamic library and then caused a crash when some program already in memory decided it wanted to link against it.
Is there a recommended approach for using Guix along with VMs or containers (in the most general sense)?
My day-to-day machine is a mac laptop; but for my software development work, I have been using docker-machine, docker-compose and docker server/client to configure, spin up, wire together, and tear down repeatable Linux environments, which run inside VirtualBox on my mac.
I'm interested in what a comparable (in a broad sense) workflow might look like with Guix, but I'm not quite sure where/how to start.
"guix environment" can spawn a container to run software in isolation. There's also "guix system vm" to build a virtual machine, or "guix system container".
Note, though, that Guix has not been ported to MacOS yet.
I don't want a full port, at least I'm pretty sure I don't. Consider that with my current setup, I'm always running the Docker daemon on a Linux host. In a bash shell running on MacOS, I'm using the client-side command-line Docker tools (the three I mentioned previously), which have been ported; and the Mac edition of VirtualBox is obviously running on MacOS as well.
Is there any possibility of using the guix command-line tools, ported to MacOS, which would then talk to a remote Linux host (could be in the cloud, could be a VirtualBox VM running locally on my mac laptop, etc.)? Does guix envision a client/server "mode" of usage?
You can use the guix command line tool and let it talk to a remote guix-daemon. I would not know how to set this up on a Mac, but Guix is in fact designed such that the command line tool is separate from the daemon.
I'm using this feature to provide users of our scientific compute clusters with a way to manage their software on all cluster nodes. The Guix tool on the nodes connects to one central guix-daemon.
That's all GNU/Linux, though. I don't know if this could work on a Mac. You could ask on guix-devel@gnu.org or on #guix at freenode.
Package signing is relatively useless. All it tells you is that a given person signed a binary, but it does nothing to tell you whether the contents of that binary match the source code (and currently, no such tooling exists to perform this verification.)
Nix and Guix on the other hand, are primarily source code based. Packages are given an identity which is a secure hash of their source code and complete build instructions. This makes it infeasible to make any modifications to the source code, as intentionally malicious modifications would result in a different hash, and source code changes are detectable in commit logs.
While binary versions of these packages are available via Hydra, one can always use the source code versions if they're worried about security. Long-term goals for Nix and Guix are to have completely reproducible binary packages, whereby we can assert that, given a package definition, the resulting binaries will be bit-identical when compiled on different machines. Instead of relying on a hash of the source code, we'll have a hash of binaries matching a package definition.
And when we get to this stage, we can completely stop worrying about who packaged a piece of software, because the result should be the same regardless. Instead, we can start looking at an alternative model of distribution where several independent parties, chosen by the user, reach a consensus on the binary hash which corresponds to a given package definition, and have that package retrievable through a distributed network (a DHT).