Introducing GNU Guix (gnu.org)
222 points by lelf on Nov 23, 2012 | 106 comments

Debian's dpkg and apt are both GPL2+. What's the reason they are reinventing the wheel here? Is there some kind of licence incompatibility that the GNU project cares about? Is there some kind of major architectural difference? Why is it important enough to fragment Free Software developers over?

I feel that this should be answered in an FAQ, but I can't find the answer anywhere.

From the e-mail:

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection.

Maybe that’s enough? (I don’t understand package managers.)

Then, why reinvent the obvious other wheel?


It's not a reinvention, it's a fork: "Guix is based on the Nix package manager." [https://savannah.gnu.org/projects/guix/]

(More info here: https://lists.gnu.org/archive/html/guile-user/2012-07/msg000..., taken from icebraining below)

I'm not sure it's even a fork. It plugs a different frontend into the Nix structure, so it appears to be a "patch", for want of a better word. (At least that's what I gleaned from http://www.fdn.fr/~lcourtes/software/guile/guix-ghm-2012.201...)

According to the site of the project, "Guix is based on the Nix package manager."

The "unprivileged package management" is the important feature here from the GNU point of view. One of the philosophical underpinnings of the GNU project is empowering non-superusers as far as possible - that's why they've stuck with the microkernel design, too. The idea there is that non-superusers would be able to easily run their own filesystem or TCP stack, for example, since it's just a matter of running their own daemon.

Once you understand this, a lot of GNU's technical decisions make more sense.

Those things are never going to be adopted by Fedora, Arch, Gentoo, or any other non-Debian based distro. That is reason enough not to use it.

Why? Because you predict it won't meet your definition of popularity? Once upon a time, Arch and Gentoo were dismissed in exactly this way.

Arch and Gentoo are still tiny fractions of the linux install base (which is itself a tiny fraction of users), and are mostly irrelevant.

Unless you feel that those who write software have more influence on the direction of the community than those who just apt-get/yum install it.

Then they are a small fraction of users but are not irrelevant.

Irrelevant for what, to whom? Everything is a tiny fraction of something -- are you saying that value is purely a function of popularity?

Perhaps it doesn't matter: if this provides a large enough software base, then Fedora, Arch, Gentoo, etc. would perhaps just write a guix->(dpkg|rpm|...) converter to keep their distributions up to date. Not sure if it would be possible, but moving work upstream, where more detailed and in-depth knowledge is available, might be a good thing.

With guix/nix you don't even care whether they get adopted or not. You can use it in coexistence with any package manager.

A very common scenario I come across a few times a year: you get to work on an old CentOS/RedHat/Fedora box, but you need to install the latest package X, which is only in the next version of the current distro. That would mean you need to upgrade or do a lot of manual work. With guix/nix you just install it, without needing to upgrade.
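The per-user coexistence described here boils down to isolated install prefixes plus PATH precedence. Here is a runnable toy sketch of the idea; the directories and the tool name X are made up for illustration, and real Nix/Guix store paths and profiles look quite different:

```shell
# Toy sketch: a user-installed "latest package X" shadowing an older
# system-wide one, with no root access and no distro upgrade.
# Directories and the tool name are illustrative, not real Nix/Guix paths.
mkdir -p /tmp/system-bin /tmp/user-profile/bin
printf '#!/bin/sh\necho "X 1.0 (system)"\n' > /tmp/system-bin/X
printf '#!/bin/sh\necho "X 2.0 (user)"\n' > /tmp/user-profile/bin/X
chmod +x /tmp/system-bin/X /tmp/user-profile/bin/X
# The user profile comes first in PATH, so the newer version wins:
PATH="/tmp/user-profile/bin:/tmp/system-bin:$PATH"
X    # prints: X 2.0 (user)
```

Nix and Guix do essentially this, except each package lands in its own hash-named store directory and a profile is a tree of symlinks into the store, which is what makes per-user installs and cheap rollbacks possible.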

"What's the reason they are reinventing the wheel here?"

Maybe they'd eventually like to stop having to say Gnu/Linux.

As long as they use the Linux kernel they don't have many other naming choices.

Until they have their own gkernel.

GNU Hurd?

Ah, I was unnecessarily flippant; stopped paying much attention to it in the 90s.


So what would be a compelling technical or business reason to use this, either now or when they get to wherever they're going?

I suppose it would be great for study.

> I suppose it would be great for study.

Does anyone still care that much about microkernels, which were the standard design for state-of-the-art OSes back in the 1980s?

Seems like research now is focused on VMs, which do have a current business use, just like they have for decades now.

The Hurd is, at least from the outside, a 1980s design that failed to catch on in the 1980s and is now a solution looking for a problem. At least the userspace intended to go with the Hurd proved to be high-quality and very widely usable.

Yes, people still care about microkernels, particularly in research, security, and embedded systems. Development didn't end in 1980 and is still going on now. What's interesting is that some of those kernels from the 1980s were architecturally superior to the commercial OSes used today. It's really a mistake to discount the technical advantages of these kernels because of their lack of popularity.

The main problem with them is simply lack of manpower. Research usually means that older solutions are replaced with new ones, which leads to a lot of wasted effort where those changes aren't backward compatible. There's also the huge effort to keep kernels up to date with hardware, and to port over the thousands of software packages that people typically use in day-to-day activities.

That's perhaps the real advantage of the current popular kernels - they have a strong requirement for stability and introducing breaking changes is out of the question. It's a propagating effect too, due to the many layers of dependencies we have - modifying the lowest layer, the kernel, has the biggest overall effect on the entire operating system.

That's why the current VM (or chroot/jail/namespace) solutions are being pushed and researched - because they bring some of the advantages of the microkernel design to modern computers, but don't completely break everything. A graphical application in user space, for example, shouldn't care whether it's running in a VM or on the metal; it only cares about its dependencies.

>The Hurd is, at least from the outside, a 1980s design that failed to catch on in the 1980s and is now a solution looking for a problem.

The Hurd is still a problem, rather than a solution. Its original goal is in part a failure because of design problems in the Mach microkernel. There are attempts to put the Hurd on other microkernels, which have pushed the Hurd more into a research position, but there are certainly things to learn from the history of the project about how not to create an OS on a microkernel. It's also not the only project still running with a microkernel design (see HelenOS, Genode, etc.).

And it's not like the research is completely wasted even if these projects don't gain popularity - as some of their features make it into mainstream kernels. A good example is FUSE (Filesystem in userspace) on linux - which allows people to experiment with filesystems without hacking on the kernel or requiring additional privileges.

But whether a microkernel design will ever become mainstream is a different question. The sheer amount of work to port applications over makes it seem unlikely - although good design of open source software will make the effort significantly easier. It's unfortunate that we seem to be heading in the opposite direction, though, with key players in the open source world pushing for a monoculture around linux/systemd et al., even excluding working kernels like the BSDs.

My guess is so they can make it very easy to install only free software.

This system works just as well for free software as for non-free. Downloading a binary blob from somewhere would be easy, just a few lines of code in the package description file.

Usually when there is no clear answer to this in a FOSS project, I think it's safe to assume the answer is ego.

While you're not necessarily wrong, "ego" gives off a pretty negative vibe. I would say it's quite often personal dissatisfaction with the status quo.

I support what they are doing. Wheels sometimes get better when they are reinvented.

I'm not one to declare other people's motivation, but it strikes me as very odd that there isn't a "So, here's what's irreparably wrong with dpkg/apt and rpm/yum".

it's fixing a (real or perceived) problem with nix, not with dpkg or rpm.

Well, there isn't a "So, here's what's wrong with nix" either.

There kind of is, if you read between the lines - their USP is the Scheme interface, which provides a "proper" programming language with which to interact with the packaging system. They don't explicitly call that out as something wrong with nix, but it's clearly what they feel they do better.

What relation would you say ego has to scratching one's own itch?

I think the concepts are intertwined, if you are going to scratch your own itch and then release it for the rest of the world to scratch theirs.

"We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris." — Larry Wall

> A distribution built by GNU hackers is a great opportunity to improve consistency and cohesion in GNU!

Because the existing fragmentation of GNU package management is too confusing, so we need a new standard for everyone to rally round.


That particular xkcd comic has been posted 79 times in the past year or so† and has quickly become a cliché. The authors' stated goals are to make a "purely functional" and "practical 100% free" package manager.†† They don't mention competition, proliferation, or winning market share. A vague hope of consistency and cohesion accompanying the announcement of a program "approaching its first alpha release" isn't a rallying cry. I think they made this announcement simply so that people interested in this sort of thing might take a look.


†† http://git.savannah.gnu.org/cgit/guix.git/tree/ROADMAP

Hah. Your post made me curious which xkcds are the most popular on HN. This is the top 10 (caveat below):

     48 http://xkcd.com/927/
     48 http://xkcd.com/386/
     33 http://xkcd.com/538/
     28 http://xkcd.com/378/
     24 http://xkcd.com/810/
     23 http://xkcd.com/327/
     22 http://xkcd.com/323/
     21 http://xkcd.com/605/
     19 http://xkcd.com/936/
     17 http://xkcd.com/552/
You can easily reproduce the results by using the hnsearch API via

  curl "http://api.thriftdb.com/api.hnsearch.com/items/_search?q=http%3A%2F%2Fxkcd.com&limit=100&start=<iterate>" |
    grep -ohE "http://xkcd.com/[0-9]*/" | sort | uniq -c | sort -n
Unfortunately, I could only process the first 1000 of the entire list of ~1500 results, because the API refused queries with start > 900. I have sort of a bad conscience because I'm polluting future queries by posting the top 10 list above, but I think I can live with it.

> I have sort of a bad conscience because I'm polluting future queries by posting the top 10 list above, but I think I can live with it.

Relevant xkcd: http://xkcd.com/688/

(Sorry, couldn't resist)

It's a fair cop.

Or, as it was told in "The Zen of Programming":


Hearing a disturbance, the master programmer went into the novice's cubicle.

"Curse these personal computers!" cried the novice in anger, "To make them do anything I must use three or even four editing programs. Sometimes I get so confused that I erase entire files. This is truly intolerable!"

The master programmer stared at the novice. "And what would you do to remedy this state of affairs?" he asked.

The novice thought for a moment. "I will design a new editing program," he said, "a program that will replace all these others."

Suddenly the master struck the novice on the side of his head. It was not a heavy blow, but the novice was nonetheless surprised. "What did you do that for?" exclaimed the novice.

"I have no wish to learn another editing program," said the master.

And suddenly the novice was enlightened.

I thought of the exact same xkcd when I read this email. Let's just create more fragmentation in something that's already excessively fragmented.

Until the next "standard".

It seems everyone reinvents package management poorly again and again. Some things that obviously stand out to me:

* Why utilize scheme? It's declarative; utilize a markup like YAML. Don't go off the deep end just because you can.

* Utilize OS-specific features such as jails, bind mounts, and namespaces; you can have the same Nix-style separation without destroying the FHS.

* Parse the config files for options; it's easy, since cmake, autocrap, etc. all have parsable formats. This way you don't need to mess with adding build-time options.

* Parse the source for dependencies; there are lots of white papers where people do this and beat human-made spec files.

* Utilize machine learning for classifying options and resolving dependencies (utilizing previous data).

> Why utilize scheme?

If Emacs has taught us anything, it is that when you give people a fully featured language to fool around with, things you could never imagine start happening.

Why not Scheme? It's liberally licensed and already has mind-share. It's a simple, no-fuss language with little obfuscating magic and great expressive power.

It allows you to build your solutions as bare-bones as you like, or to go full architect should you need to. Sounds like just what the doctor ordered.

> when you give people a fully featured language to fool around with, things you could never imagine can start happening.

Maybe a package manager is not the best place to have that flexibility? (I noticed this seems to support "unprivileged package management", but still...)

You break the idea of a declarative package by doing this: you can have side effects that modify or mess with anything. This means I cannot reliably update a package, because the update may or may not change something else.

Homebrew packages are Ruby files, and in practice this doesn't seem to be a problem.

Almost any programming language can be declarative; the problem is that it allows arbitrary logic, and requires a whole programming language to be installed instead of a simple parser.

Might as well use Python, because of its greater adoption and ease of use?

Packages are mostly just declarations of how you want a package to be built/installed, but sometimes they require additional logic or computation. Using Python would require a separate file format to put definitions into (e.g., YAML), and those definitions would need a rigid schema and versioning system. Some existing package managers take this approach, but what happens when you need a feature that doesn't exist in the declarative schema? You would need to extend the YAML format and bump the version numbers to allow your new features.

On the other hand, a LISP does not need a distinction between a declarative format and the computation language itself - it does both in a consistent syntax. (Homoiconicity is the term). It makes it much simpler to mix your declarations and computations, alleviating much of the need for a rigid specification on package definitions.

If you look at places where the former system is used, you'll find they usually age pretty quickly - or have many version increments because the original design omitted features people wanted in their package definitions. In effect, this leads to standards proliferation, because people invent new systems when the existing ones don't do what they need. Rather than trying to second guess the features people might need, it can be easier to just give them the ability to add them when they want.

A recent example of the former system is this project from GNOME: https://live.gnome.org/BuilDj. If you look at what it's trying to do, it really just wants to reinvent makefiles in YAML. A clear example of NIH, which will fail because as soon as someone needs to do something which their YAML schema doesn't support, it's a useless tool.

That's actually a great point - something I didn't think of when I suggested python since I don't know scheme :D Maybe this is reason enough to learn scheme soon.

For practical considerations would you still choose scheme, or would you try and see if some massaging of python can get you the desired results - purely because of the simplicity and adoption rate of the language?

Like I've repeatedly said, as far as LISPs go, Scheme is very simple and friendly. For things like this - basically a DSL for package definitions (or other declarative data formats) - a simple LISP like Scheme seems pretty optimal.

I see no reason not to look into it. It should take you less than 5 minutes to get started in a REPL.

Not to start a flame-war, but I would argue that Python is not as fully featured as a LISP language-wise.

And as far as LISPs go, Scheme is extremely simple and friendly.

I suspect the fact there's GNU Guile, an entirely in-house Scheme ecosystem, has a lot to do with it. Use Python and be beholden to the Python community (outsiders!) Use JSON (as someone else suggested) and be beholden to the Javascript/JSON community (outsiders! oh and that awful JSON license!) etc.

> Use JSON (as someone else suggested) and be beholden to the Javascript/JSON community (outsiders! oh and that awful JSON license!) etc.

JSON is just a file format. A simple one too. There are plenty of parsers available, with various licenses.

I am really curious what Stallman thinks of the "Don't Be Evil" license.

Three words: Not Invented Here.

Please, utilise the word "use".

You are going about it wrong, you need to talk to the audience. Try this phrasing instead:

"Use is three times as performant as utilize."

But utilize doesn't do joins.

That's because utilize is web scale.

"Use is 3X as cromulent as utilize."

How about "leverage"?

It is probably because Guile, the GNU variety of Scheme, is the "official GNU extension language". It makes sense, really.

> it’s not a bootable distribution yet, but rather one to be installed on top of a running GNU/Linux system.

For a moment there I thought the GNU Hurd had been finished, I misread it as GNU Unix :(

GNU Hurd finished? Good one :)

With e17 in alpha, Hurd has to be around the corner.

It is finished. It just has no drivers and doesn't work :)

It is finished, in the sense like WebOS is finished.

I use WebOS in two devices (a pre2 and a touchpad) every day and am pretty happy about it. I think Enyo rocks and is available everywhere and that WebOS is more user friendly than iOS and Android.

Now, how does that compare to GNU Hurd?

I'm not saying WebOS is not good, or that it's worse than any of the high-volume platforms out there. The same goes for Hurd.

What I am saying is that both missed their shot and will never see high-volume pickup and large market share the way Linux, Android, and iOS do.

It's "based on Nix", but what's the difference, and why fork?

  Guix & Nix

  Nix is really two things: a package build tool, implemented by a library
  and daemon, and a special-purpose programming language.  Guix relies on
  the former, but uses Scheme as a replacement for the latter.
  Technically, Guix makes remote procedure calls to the ‘nix-worker’
  daemon to perform operations on the store.  At the lowest level, Nix
  “derivations” represent promises of a build, stored in ‘.drv’ files in
  the store.  Guix produces such derivations, which are then interpreted
  by the daemon to perform the build.  Thus, Guix derivations can use
  derivations produced by Nix (and vice versa); in Guix, the cheat code is
  the ‘nixpkgs-derivation’ procedure.  :-)
  With Nix and the Nixpkgs distribution, package composition happens at
  the Nix language level, but builders are usually written in Bash.
  Conversely, Guix encourages the use of Scheme for both package
  composition and builders.

Can someone explain what benefits I get from using this rather than the package manager offered by my distribution? And do you need root access to install guix? Because if you do, then

> unprivileged package management

seems quite, well, useless imho. If on the other hand everything in guix lives in userland, I can see myself using it on the servers I don't have root access to.

No, you do not need root access in order to install guix. Put it in your home folder, there, done. For ease of use add it to your $PATH.

Thanks for clearing it up :)

It does feel a bit like it can become a platform independent alternative to something like Cygwin. Along the lines of ActiveState perl, but a whole GNU environment instead.

i wonder if userdir-based language package managers like rvm, cabal and opam can profitably rebase themselves atop guix.

That's one misleading name if you ask me - GUI+X

Both GNU Guix and Nix seem like excellent steps forward in bringing sanity to package management. Does anyone know if puppet, chef, salt and similar configuration management tools were inspired by these projects? At a cursory glance, e.g., there are some striking similarities in the DSLs of NixOS and Puppet.

What problem does it solve?

Ever tried to build distribution packages? If it works for you, that doesn't mean it works everywhere. Slight errors in the dependency specifications of a package may make it fail if you have wrongly versioned or missing libraries, etc. It's often very difficult to verify that package dependencies are 100% correct. Sometimes package builds are not reproducible, as autoconf doesn't care about the build dependencies you defined but just adds any dependency it can auto-detect at compile time. So you have to double-test your package when it's built on a build server, etc.

AFAIU the Nix package manager fixes that by trying to ensure that bit-for-bit matching dependent packages are present: every package version is identified by a hash of the installed files, combined with the hashes of all its dependencies (somewhat similar to how filesystem/history data are managed in Git, BTW).
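A toy sketch of that hashing idea (this is not Nix's actual algorithm or path format, and it assumes GNU coreutils' sha256sum; the point is just that a package's identity incorporates its dependencies' identities):

```shell
# Toy model: a package's store path is derived from a hash over its name
# plus the hashes of its dependencies, so builds against different
# dependency versions get different, coexisting paths.
dep_hash=$(printf 'glibc-2.13' | sha256sum | cut -c1-8)
pkg_hash=$(printf 'hello-2.8 %s' "$dep_hash" | sha256sum | cut -c1-8)
echo "/store/${pkg_hash}-hello-2.8"

# The same package built against a newer glibc hashes differently,
# so both builds can live side by side:
dep_hash2=$(printf 'glibc-2.17' | sha256sum | cut -c1-8)
pkg_hash2=$(printf 'hello-2.8 %s' "$dep_hash2" | sha256sum | cut -c1-8)
echo "/store/${pkg_hash2}-hello-2.8"
```

In real Nix the hash covers the full build inputs (sources, build script, compiler, flags), which is what makes a store path a complete description of how the package was produced.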

The Nix build server (named Hydra), on the other hand, makes sure that builds are performed in complete isolation via enhanced chroots, so that every build is 100% reproducible. It locks the build process down so much that it can't even access the network, nor can the process observe varying host names. [1]

This is why they call it a "purely functional" build system. As in purely functional programming: there's no state at all. Inputs completely define the output.

Now Guix sits on top of Nix and attempts to make its features available at a higher level via a Scheme API (Nix uses its own language for package dependency and build descriptions).

I haven't used Nix before, just read about it, but having suffered through writing complex package build recipes for OpenWrt, I very much appreciate the approach.

[1] http://hydra.nixos.org/build/2860022/download/3/release-note...

Thanks for your information.

Does it need to? It's for hackers, not for every user. If you're bored and feel like hacking on something, this might interest you. If it doesn't, Mr. Ludovic is not trying to force you to use it.

For me, some of its features, especially unprivileged package management, are interesting. I'm not so much interested in the distribution at the moment, more in the package manager. I imagine there aren't many supported packages yet, so it's a matter of writing scripts... eventually it can become a distribution. Not for everyone.

But I still see no problem in more package managers being out there - does anyone have a problem with yum and apt being two different package managers? Why is yet another package manager a bad thing?

In principle you are right. However, in practice there comes a point where there are so many competing solutions that you have to question why people don't spend their time improving existing ones instead of adding to the fragmentation.

I highly recommend reading over some of the Nix papers -- what the author is doing is totally divergent from nearly every other package manager in a way that's either insane or brilliant (I don't personally know enough to judge). I saw his talk at a software upgrades workshop, and it was pretty impressive.

Sometimes you can't just improve an existing solution. It may be fundamentally different from what you intend to do (both technically and philosophically).

Well, I didn't mean to sound like I dislike the idea; hacker value can be as much of a reason to code a solution as an actual problem.

maybe take a look here: http://nixos.org/docs/papers.html

From a theoretical, conceptual viewpoint, nix is the holy grail of package management.

Same problem as every other GNU software.

I hope this causes a surge of interest in gnu guix, nixpkgs, and package management systems with these nice features in general.

Yet another package manager? :) I've yet to see something that does a better job than http://paludis.exherbo.org/ And that is not to say Paludis couldn't be improved. But I don't get why people devote time to the community to work on stuff that has been thoroughly solved.

You have yet to give guix or nix a try. I used paludis in the past, and I can tell you that guix/nix is the obvious next evolutionary step :)

Documentation?

The Linux community needed this around 15 years ago when Windows 98 came out. Now we've got .deb and .rpm which do the job perfectly well. It's even better nowadays, as well, since their existence is effectively hidden from the user thanks to package managers.

Too little, way too late.

I don't think package management is a solved problem. Most distributions fail at the simple task of installing multiple versions of the same package alongside each other. Also, lackluster dependency management (you install library X, it wants to install mail server Y and database Z).

There are many approaches that one might mistake for "reinventing the wheel":

1. FreeBSD ports system is barebones, but incredibly simple and transparent.

2. Arch Linux's PKGBUILD recipes are also simple and, at times, a much better alternative to binary packages.

3. Gobo Linux (http://www.gobolinux.org) solved binary packaging the right way by having each package in its own tree. Its approach is similar to OS X, which also gets this right.

"you install library X, it wants to install mail server Y and database Z"

Almost exactly what happened when I installed nmh. Wasn't at all obvious from the dependency list that I'd get qmail, didn't pay attention to the warning prompt and said Y, ...

Bonus, I have a bunch of new system accounts in my Display Manager's login screen. Bleah. Fortunately my scorched-earth reinstall is coming up, so I just leave them there to make it easier to hate what happened.

Can you roll back on Debian? Can you install packages as a user on RedHat? I moved away from them just because things like .deb and .rpm don't do the job all the way.

"do the job perfectly well" is a bad way of saying what I was thinking: "for the most part, they work pretty well for the average user" which is true.

They don't cover all edge cases perfectly. But for a user who is new to Linux and wants to install some software, it's better than what we've got on any other OS except ones with dedicated and closed app stores.

Ironically, this Guix thing would be perfect... for Windows.

Right now, I have to maintain my own list of GnuWin32 installers (wget coreutils cygutils diffutils findutils grep gzip less patch sed tar unzip) retrieved from sf.net, with a 30-line batch script to deploy them, plus a PowerShell command to retrieve wget as the first step.

Nix (and potentially Guix) works on Windows, at least from what I've heard. There are probably some issues with existing packages, since nobody really pays much attention to Windows, but if somebody cared this could improve a lot.

This could turn out way better than current package managers if GNU doesn't screw it up.

Well there is already nixpkgs which has existed for a while with core features that I saw mentioned. I would be happy if there was increased interest in both and their feature set.

Anything could turn out to be better than anything else if its creators don't screw it up.

How can I give it a try? E.g., if I want to install my own vim in my home directory?

Haven't tried guix yet, but with nix you can simply do "nix-env -i vim" (or vimFull) and it should install it.

Look in the documentation at http://nixos.org/nix/ for how to install nix, and drop into #nixos on irc.freenode.net if you're having problems.

I suppose it's nice that GNU has a "GNU-branded" package manager... I guess...

I love it! Please use it and add a ton of packages.

Because i'm too lazy :)

I'll stick with apt and yum, thank you. They've proved themselves enough to me.

I like that this brings out a bunch of people who don't understand Scheme or why it's important to be purely functional. Oh you, startup kids.
