I feel that this should be answered in an FAQ, but I can't find the answer anywhere.
In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection.
Maybe that’s enough? (I don’t understand package managers.)
(More info here: https://lists.gnu.org/archive/html/guile-user/2012-07/msg000..., taken from icebraining below)
Once you understand this, a lot of GNU's technical decisions make more sense.
Then they are a small fraction of users but are not irrelevant.
A very common scenario I come across a few times a year: you get to work on an old CentOS/RedHat/Fedora box, but you need to install the latest package X, which is only in the next version of the current distro. That would mean upgrading, or a lot of manual work. With Guix/Nix you just install it, no upgrade needed.
Maybe they'd eventually like to stop having to say Gnu/Linux.
So what would be a compelling technical or business reason to use this, either now or when they get to wherever they're going?
I suppose it would be great for study.
Does anyone still care that much about microkernels, which were the standard design for state-of-the-art OSes back in the 1980s?
Seems like research now is focused on VMs, which do have a current business use, just like they have for decades now.
The Hurd is, at least from the outside, a 1980s design that failed to catch on in the 1980s and is now a solution looking for a problem. At least the userspace intended to go with the Hurd proved to be high-quality and very widely usable.
The main problem with them is simply lack of manpower. Research usually means that older solutions are replaced with new ones, which leads to a lot of wasted effort where those changes aren't backward compatible. There's also the huge effort to keep kernels up to date with hardware, and to port over the thousands of software packages that people typically use in day-to-day activities.
That's perhaps the real advantage of the current popular kernels - they have a strong requirement for stability and introducing breaking changes is out of the question. It's a propagating effect too, due to the many layers of dependencies we have - modifying the lowest layer, the kernel, has the biggest overall effect on the entire operating system.
That's why the current VM (or chroot/jail/namespace) solutions are being pushed and researched - because they bring some of the advantages of the microkernel design to modern computers, but don't completely break everything. A graphical application in user space, for example, shouldn't care whether it's running in a VM or on the metal; it only cares about its dependencies.
>The Hurd is, at least from the outside, a 1980s design that failed to catch on in the 1980s and is now a solution looking for a problem.
The Hurd is still a problem rather than a solution. Its original goal is in part a failure because of design problems in the Mach microkernel. There are attempts to put the Hurd on other microkernels, which have pushed the Hurd more into a research position, but there are certainly things to learn from the project's history about how not to create an OS on a microkernel. It's also not the only project still running with a microkernel design (see HelenOS, Genode, etc).
And it's not like the research is completely wasted even if these projects don't gain popularity, as some of their features make it into mainstream kernels. A good example is FUSE (Filesystem in Userspace) on Linux, which allows people to experiment with filesystems without hacking on the kernel or requiring additional privileges.
But whether a microkernel design will ever become mainstream is a different question. The sheer amount of work to port applications over makes it seem unlikely - although good design of open source software will make the effort significantly easier. It's unfortunate that we seem to be heading in the opposite direction, though, with key players in the open source world pushing for a monoculture around Linux/systemd et al., even excluding working kernels like the BSDs.
I support what they are doing. Wheels sometimes get better when they are reinvented.
"We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris." — Larry Wall
Because the existing fragmentation of GNU package management is too confusing, so we need a new standard for everyone to rally round.
grep -ohE "http://xkcd.com/[0-9]*/" | sort | uniq -c | sort -n
Relevant xkcd: http://xkcd.com/688/
(Sorry, couldn't resist)
Hearing a disturbance, the master programmer went into the novice's cubicle.
"Curse these personal computers!" cried the novice in anger, "To make them do anything I must use three or even four editing programs. Sometimes I get so confused that I erase entire files. This is truly intolerable!"
The master programmer stared at the novice. "And what would you do to remedy this state of affairs?" he asked.
The novice thought for a moment. "I will design a new editing program," he said, "a program that will replace all these others."
Suddenly the master struck the novice on the side of his head. It was not a heavy blow, but the novice was nonetheless surprised. "What did you do that for?" exclaimed the novice.
"I have no wish to learn another editing program," said the master.
And suddenly the novice was enlightened.
Until the next "standard".
* Why utilize Scheme? It's declarative; utilize a markup like YAML instead. Don't go off the deep end just because you can.
* Utilize OS-specific features such as jails, bind mounts, and namespaces; you can have the same Nix-style separation without destroying the FHS.
* Parse the config files for options; it's easy, since CMake, autocrap, etc. all have parsable formatting. That way you don't need to mess with adding build-time options.
* Parse the source for dependencies; there are lots of white papers on people doing this and beating human-made spec files.
* Utilize machine learning for classification of options and for dependency resolution (utilizing previous data).
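As a rough illustration of the "parse the config files for options" point, here is a minimal sketch (Python; the helper name and the sample CMakeLists content are made up for illustration, and only the common `option(NAME "desc" ON|OFF)` form is handled):

```python
import re

# Matches the common CMake form: option(NAME "description" ON|OFF),
# with the default value being optional (CMake defaults to OFF).
OPTION_RE = re.compile(
    r'option\s*\(\s*(\w+)\s+"([^"]*)"\s*(ON|OFF)?\s*\)',
    re.IGNORECASE,
)

def cmake_options(text):
    """Return {name: (description, default)} for option() calls found in text."""
    return {
        name: (desc, (default or "OFF").upper())
        for name, desc, default in OPTION_RE.findall(text)
    }

sample = '''
option(ENABLE_SSL "Build with SSL support" ON)
option(BUILD_TESTS "Build the test suite")
'''
print(cmake_options(sample))
```

Real build files are messier than this regex admits (conditionals, variables, includes), which is presumably where the white papers the parent mentions earn their keep.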
If Emacs has taught us anything, it is that when you give people a fully featured language to fool around with, things you could never imagine start happening.
Why not Scheme? It's liberally licensed and already has mind-share. It's a simple, no-fuss language with little obfuscating magic and great expressive power.
It allows you to build your solutions as bare-bones as you like, or go full architect should you need to. Sounds like just what the doctor ordered.
Maybe a package manager is not the best place to have that flexibility? (I noticed this seems to support "unprivileged package management", but still...)
On the other hand, a LISP does not need a distinction between a declarative format and the computation language itself - it does both in a consistent syntax. (Homoiconicity is the term). It makes it much simpler to mix your declarations and computations, alleviating much of the need for a rigid specification on package definitions.
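For illustration, here's what that mixing looks like in practice: a sketch in the style of a Guix package definition (the package, version, and hash are placeholders, not a real package):

```scheme
;; Sketch in the style of a Guix package definition; name, version
;; and hash are placeholders.
(define-public hello
  (package
    (name "hello")
    (version "2.8")
    ;; Computation inside the declaration: the URI is built from the
    ;; version field with ordinary Scheme string operations.
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-"
                                  version ".tar.gz"))
              (sha256 (base32 "0...placeholder..."))))
    (build-system gnu-build-system)
    (synopsis "GNU Hello")
    (license gpl3+)))
```

The record fields read like markup, but `string-append` is ordinary Scheme running right inside the declaration - exactly the kind of thing a fixed YAML schema can't offer without growing a templating mini-language.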
If you look at places where the former system is used, you'll find they usually age pretty quickly - or have many version increments because the original design omitted features people wanted in their package definitions. In effect, this leads to standards proliferation, because people invent new systems when the existing ones don't do what they need. Rather than trying to second guess the features people might need, it can be easier to just give them the ability to add them when they want.
A recent example of the former system is this project from GNOME: https://live.gnome.org/BuilDj. If you look at what it's trying to do, it really just wants to reinvent makefiles in YAML. A clear example of NIH, and one that will fail: as soon as someone needs to do something their YAML schema doesn't support, it's a useless tool.
For practical considerations would you still choose scheme, or would you try and see if some massaging of python can get you the desired results - purely because of the simplicity and adoption rate of the language?
I see no reason not to look into it. It should take you less than 5 minutes to get started in a repl.
And as far as LISPs go, Scheme is extremely simple and friendly.
JSON is just a file format. A simple one too. There are plenty of parsers available, with various licenses.
"Use is three times as performant as utilize."
For a moment there I thought the GNU Hurd had been finished, I misread it as GNU Unix :(
Now, how does that compare to GNU Hurd?
What I am saying is that both missed their shot and will never see high-volume pickup and large market share, the way Linux, Android, and iOS do.
Guix & Nix
Nix is really two things: a package build tool, implemented by a library
and daemon, and a special-purpose programming language. Guix relies on
the former, but uses Scheme as a replacement for the latter.
Technically, Guix makes remote procedure calls to the ‘nix-worker’
daemon to perform operations on the store. At the lowest level, Nix
“derivations” represent promises of a build, stored in ‘.drv’ files in
the store. Guix produces such derivations, which are then interpreted
by the daemon to perform the build. Thus, Guix derivations can use
derivations produced by Nix (and vice versa); in Guix, the cheat code is
the ‘nixpkgs-derivation’ procedure. :-)
With Nix and the Nixpkgs distribution, package composition happens at
the Nix language level, but builders are usually written in Bash.
Conversely, Guix encourages the use of Scheme for both package
composition and builders.
> unprivileged package management
seems quite, well, useless imho. If on the other hand everything in Guix lives in userland, I can see myself using it on the servers I don't have root access to..
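To make the userland point concrete, here is a toy sketch (Python; the paths, hashes, and helper names are made up, and this is not Guix's actual layout) of the store-plus-profile-symlink idea that lets an unprivileged user install, upgrade, and roll back without root:

```python
import os, tempfile, pathlib

# Everything lives under a user-writable directory: no root needed.
root = pathlib.Path(tempfile.mkdtemp())
store = root / "store"

def install(name, version, fake_hash):
    """'Build' a package into its own immutable, hash-named store directory."""
    d = store / f"{fake_hash}-{name}-{version}"
    (d / "bin").mkdir(parents=True)
    (d / "bin" / name).write_text(f"#!/bin/sh\necho {name} {version}\n")
    return d

def set_profile(profile, pkg_dir):
    """Atomically point the user's profile symlink at a store path."""
    tmp = profile.with_suffix(".tmp")
    tmp.symlink_to(pkg_dir)
    tmp.replace(profile)  # os.replace: the switch is a single atomic rename

old = install("hello", "2.7", "aaaa")
new = install("hello", "2.8", "bbbb")
profile = root / "profile"
set_profile(profile, new)   # "upgrade"
set_profile(profile, old)   # roll back: the old version is still in the store
print(os.readlink(profile))
```

Because old store paths are never modified in place, rollback is just re-pointing one symlink, and "uninstalled" versions stick around until garbage collection.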
AFAIU the Nix package manager fixes that by ensuring that bit-for-bit matching dependent packages are present: every package version is identified by a hash of the installed files, combined with the hashes of all its dependencies (somewhat similar to how filesystem/history data are managed in Git, BTW).
The Nix package builder (named Hydra), on the other hand, makes sure that builds are performed in complete isolation via enhanced chroots, so that every build is 100% reproducible. It locks the build process down so much that it can't even access the network, nor can the process observe varying host names.
This is why they call it a "purely functional" build system. As in purely functional programming: there's no state at all. Inputs completely define the output.
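A toy sketch of that hashing scheme (Python; the function, package names, and file contents are all invented for illustration): each package's identity covers its own files plus the identities of its dependencies, Merkle-tree style, as in Git:

```python
import hashlib

def package_hash(files, dep_hashes):
    """Hash a package's own files together with its dependencies' hashes,
    so any change anywhere in the dependency graph yields a new identity."""
    h = hashlib.sha256()
    for name in sorted(files):          # sort for a deterministic result
        h.update(name.encode())
        h.update(files[name])
    for dep in sorted(dep_hashes):
        h.update(dep.encode())
    return h.hexdigest()

libc = package_hash({"libc.so": b"\x7fELF..."}, [])
curl_v1 = package_hash({"curl": b"curl-bin"}, [libc])

# Rebuilding against a patched libc changes curl's identity too,
# even though curl's own files are byte-for-byte unchanged:
patched_libc = package_hash({"libc.so": b"\x7fELF-patched"}, [])
curl_v2 = package_hash({"curl": b"curl-bin"}, [patched_libc])
print(curl_v1 != curl_v2)
```

That's the "no state" property in miniature: the identity is a pure function of the inputs, so two machines that compute the same hash are guaranteed to be talking about the same build.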
Now Guix sits on top of Nix and makes its features available at a higher level via a Scheme API (Nix uses its own language for package dependency and build descriptions).
I haven't used Nix before, just read about it, but having suffered through writing complex package build recipes for OpenWrt, I very much appreciate the approach.
For me, some of its features, especially non-privileged user package management, are interesting. I'm not so much interested in the distribution at the moment, more in the package manager. I imagine there aren't many supported packages yet, so it's a matter of writing scripts... eventually it can become a distribution. Not for everyone.
But I still see no problem in more package managers being out there - does anyone have a problem with yum/apt being two different package managers? Why is yet another package manager a bad thing?
From a theoretical, conceptual viewpoint, Nix is the holy grail of package management.
Too little, way too late.
There are many approaches that one would mistake as being "reinventing the wheel":
1. The FreeBSD ports system is barebones, but incredibly simple and transparent.
2. Arch Linux's PKGBUILD recipes are also simple and, at times, a much better alternative to binary packages.
3. Gobo Linux (http://www.gobolinux.org) solved binary packaging the right way by having each package in its own tree. Its approach is similar to OS X, which also gets this right.
Almost exactly what happened when I installed nmh. Wasn't at all obvious from the dependency list that I'd get qmail, didn't pay attention to the warning prompt and said Y, ...
Bonus, I have a bunch of new system accounts in my Display Manager's login screen. Bleah. Fortunately my scorched-earth reinstall is coming up, so I just leave them there to make it easier to hate what happened.
They don't cover all edge cases perfectly. But for a user who is new to Linux and wants to install some software, it's better than what we've got on any other OS except ones with dedicated and closed app stores.
Right now, I have to maintain my own list of GnuWin32 installers (wget coreutils cygutils diffutils findutils grep gzip less patch sed tar unzip) retrieved from sf.net, with a 30-line batch script to deploy them, plus a PowerShell command to retrieve wget as the first step.
Look at the documentation at http://nixos.org/nix/ for how to install Nix, and drop by #nixos on irc.freenode.org if you have problems.
Because I'm too lazy :)