NixOS - Declarative configuration OS (nixos.org)
244 points by wamatt on May 18, 2013 | 75 comments



This is a move in the right direction:

=== On NixOS, you do not need to be root to install software. In addition to the system-wide ‘profile’ (set of installed packages), all users have their own profile in which they can install packages. Nix allows multiple versions of a package to coexist, so different users can have different versions of the same package installed in their respective profiles. If two users install the same version of a package, only one copy will be built or downloaded, and Nix’s security model ensures that this is secure. Users cannot install setuid binaries. ===

Requiring administrator rights to install software is the root of all evils in OS management.
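
As a rough illustration of what that quote describes (the username and package are just placeholders, and the profile path is what a multi-user Nix install typically uses):

    $ whoami
    alice
    $ nix-env -i firefox        # installs into alice's own profile, no sudo needed
    $ readlink ~/.nix-profile
    /nix/var/nix/profiles/per-user/alice/profile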

As for keeping everything in a special location (/nix or whatever), this reminds me of SCO OpenServer 5: they kept all the files somewhere under /var; /bin, /sbin and everything else were just symlinks. It did not work all that well.


>[...]the root of all evils[...]

Pun intended? ;)


Why did it not work all that well in SCO OpenServer 5?


It was mostly in /opt, although there were also some things in /var.


Some of the ideas seen in NixOS can also be seen in the Image Packaging System (Atomic Upgrades, Reliable Upgrades) project: https://java.net/projects/ips/pages/Home

Disclaimer: I'm also one of the IPS authors, so I agree with some of NixOS' ideas :-)


I read one article and skimmed the other under "Background reading", but I could not spot the core philosophy or mechanism behind how you produce atomic and reliable upgrades.

I try to stay abreast of package systems which have the same or a close feature set to Nix. If you have a moment to explain the core mechanism, or point me to a reference, it would be much appreciated.


The "no scripting zone" and "no more installer magic" background posts have some of the philosophy.

But really, a lot of the atomic and reliable aspects are the result of integration with ZFS. When the user upgrades packages, we create a backup boot environment (a clone of the root file system) and then update the packages. If anything goes wrong, the user can just reboot into the backup BE.

When updating the system (that is, packages that require the kernel to be reloaded/system rebooted), we create a clone of the root file system (a boot environment) and update the clone. When the update finishes, the admin can then reboot the system at their leisure.
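
For the curious, on an IPS-based system (Solaris 11 / illumos) that flow looks roughly like the sketch below; the BE name is arbitrary and this isn't the exact sequence pkg runs internally:

    beadm create backup-be      # clone the current root file system as a backup boot environment
    pkg update                  # update packages; updates touching the kernel go into a new BE
    beadm list                  # show the available boot environments
    beadm activate backup-be    # if something broke: boot into the backup BE on next reboot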

If you'd like to know more, I'll try to answer your questions. You can email me at $myusername at gmail dot com.


If anyone is wondering, I suspect you could take IPS and integrate it with btrfs instead. The abstractions are generic, and btrfs is growing the required functionality as time goes on.


I tried to do just this myself and Btrfs ended up corrupting my install beyond use.

Thankfully this was just a test system, and I do appreciate that other people have had good success with Btrfs. But as a ZFS user of quite a few years now (since before the Linux and FreeBSD ports became stable, so my file servers were originally running Solaris) who has never had an issue with the file system (in fact it's saved me a few times), I can't help thinking that Btrfs is still a long way off being a viable contender.

What's more, I found Btrfs's commands to be convoluted and, at times, counter-intuitive when compared with ZFS's. That isn't a deal breaker on its own, but it is a great shame given the opportunity they had to get it right (i.e. writing the entire software stack from scratch, with no legacy to worry about).

This is all my personal experiences though. Others will have their own preferences and (anecdotal) evidence to support that.


We seem to be heading for a future where every application is run in a virtualized OS, configured and launched on demand.


The future you describe is the distant past for many of us.

We've been using chroot, FreeBSD jails, Solaris Containers, and similar technologies for many years now. Combined with the basic functionality of UNIX and UNIX-like systems, we can quite safely isolate users, apps and data, but without the overhead and inconvenience of heavy-weight virtualization techniques. Even then, these techniques are quite new relative to what's been available to mainframe users for many decades.


This is all too primitive. If we're heading in the direction that the GP describes, it's probably because some people realized a long time ago that Alan Kay was actually making sense when he talked about the object-based computational model as being something like "small computers all the way down". If your application is a set of objects implementing the functionality by passing appropriate messages between those objects, and the OS is another set of objects providing all the necessary interfaces to applications in the form of capability-exposing objects, then everything is virtualized by definition. Or, in other words, in such a system you can virtualize (in the classical sense) as much or as little as you need.

Need two versions of the same library for two different applications? No problem, because each app gets its own interface.

One app needs to access a physical block device and another a virtual one? No problem, because each app gets its own interface.

Is the capability required by the app deployed on another machine, and you want to expose it to the application transparently? No problem, because each app gets its own interface.

And if the current hardware isn't amenable to running such systems with ease, well, that might have something to do with the fact that everything we're running has its roots in the 1980s. The unprecedented success of the PC revolution has essentially frozen all development of substantially new systems.


We seem to be heading for a future where the only application run locally is a web browser.


We seem to be making the same unrealistically broad generalizations over and over, never learning from our past.


There are at least three reasons that won't happen:

1. Horses for courses: Once you get past superficial similarity, the Web isn't a universal application runtime. It is more appropriate for some kinds of apps, such as publication media, form-filling, etc. than others, such as apps with fine-grained interactivity, apps with significant endpoint computation, etc.

2. Nobody has cracked the "write once, test everywhere" curse of multiple implementations of a standard, never mind a standard as complex, and with such a long legacy tail as Web browsers. Web app frameworks still devote a lot of code to compatibility. "Native" also means "a single vendor's runtime."

3. Operating systems are different, and there are varying approaches to portability. Running in a universal runtime is one approach, and it has inherent compromises. For example, Microsoft has applications for editing Office documents for Windows, Mac OS X, and for Web browsers. Which one has limitations, both in terms of integrating OS features and UI conventions, and in terms of functionality?


We seem to be heading for a future where every website says "Don't look at this, download our app!".


Yeah, I very much doubt that. It just looks like more people who barely do anything productive are on the Internet, just using YouTube and Netflix, while the people who are developing, say, something like YouTube (or, god forbid, something more useful and complicated) are left to fend for themselves, because hey, what do they matter.


http://audio-video.gnu.org/video/ghm2009/ghm2009-dolstra-lar...

This talk by Eelco is a good introduction for hacker types, and the motivation is covered at the beginning of the talk.


This "Multi-user package management" is a really neat feature, I wonder if there are other distro's having that.


Multi-user and multi-application (different applications can depend on different versions of a shared library).

Here's another example that's not public, and not really a distro, but... http://stackoverflow.com/questions/3380795/what-does-amazon-...


GoboLinux works in a very similar manner, though I'm not sure the distro has seen any love in a few years.


I've read dpkg has `dpkg -i --force-not-root --root=$HOME/.local package.deb` options, but I haven't ever tried those myself.


After playing around with Arch for the last few weeks, this is incredibly interesting to me.

Does anybody have any experiences/testimonials to share about this OS?


Sure. NixOS is very forgiving to experiment with because you can easily roll back to a previous configuration. If you modify a library and something stops working, fixing it is as simple as `nix-env --rollback` for a user package, or `nixos-rebuild --rollback` if something was changed in the system configuration. A restart may be needed if certain subsystems are touched, but that has not happened often, and when it is needed it is indicated.

I enjoy how easy it is to set up different profiles with different sets of libraries/compilers for development.

It has made it worthwhile to try out old libraries that only work with old compilers. It also made it easier for me to test against the HEAD of the compiler repo, for checking forward compatibility or new compiler features.

Since everything is nicely encapsulated, once I have an environment working on my laptop I can use `nix-copy-closure` to move it to a remote server. `nixops`, a more recent offering just renamed from `charon`, also allows for deterministic building of EC2 instances.
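
To give a flavour of the commands involved (host and package names here are made up):

    nix-env -i somelib                   # change a user profile
    nix-env --rollback                   # undo it if something breaks
    nixos-rebuild switch                 # apply a change to the system configuration
    nixos-rebuild switch --rollback      # revert to the previous system generation
    nix-copy-closure --to alice@server $(readlink -f ~/.nix-profile)   # ship an environment to a remote machine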


Took me a while to figure out:

    nix-env -qa '*' | grep <package you are looking for>
    nix-env -i <package>

You need to add different channels if you are grabbing packages from places other than nixpkgs.

I had a hard time getting some Haskell packages to work: NixOS symlinks all the binaries, and there was an encryption library dependency hard-coded in a cabal file. It was a headache and I gave up.

Eelco also provides a patchelf utility for modifying the dynamic linker in ELF files; I haven't used it.
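
I believe usage looks something like this (the store path below is an illustrative placeholder, not a real hash):

    patchelf --print-interpreter ./some-binary
    patchelf --set-interpreter /nix/store/<hash>-glibc-2.17/lib/ld-linux-x86-64.so.2 ./some-binary
    patchelf --set-rpath /nix/store/<hash>-glibc-2.17/lib ./some-binary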

I'm on Ubuntu again, but I can't remember why. Oh yeah, I couldn't get BlackBerry's SDK working on NixOS. Anything that doesn't go through nixpkgs will take some work. I didn't get to the point where I was writing Nix expressions.


If you run it in a VM (KVM/QEMU) you might need to set some CPU flags for the VM (otherwise GMP will assume your CPU has some instructions that it doesn't actually have in the VM): https://github.com/NixOS/nixos/issues/120#issuecomment-14776...
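
I believe the gist of the workaround is to give the VM a CPU model that matches the host, e.g. with plain QEMU/KVM (the file names are placeholders):

    qemu-system-x86_64 -enable-kvm -cpu host -m 1024 -cdrom nixos.iso -hda nixos-disk.img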

Other than that it works fairly well (in a VM), and you don't need to wait for it to compile all the packages like with Gentoo: if the build server has a binary then it will download that.


For those with a penchant for parentheses, there is also GNU Guix, which just had a second alpha release:

http://lists.gnu.org/archive/html/bug-guix/2013-05/msg00034....

It shares concepts and some bits with NixOS, but replaces the configuration language with Guile, an implementation of Scheme.


The way they install multiple versions of applications and manage the dependencies reminds me of GoboLinux. I like the direction of this. I really like how they named everything in Gobo, though.

http://wiki.gobolinux.org/index.php?title=The_GoboLinux_File...


I remember playing with Gobo several years back. Although renaming system directories makes it a bit more approachable to newcomers, it also makes it less language independent and less recognizable to the users who typically need to use it (sys admins and such).

However, the general idea of installing things in a self-contained directory was fantastic, and I'm happy to see another operating system trying something similar. I used to use PC-BSD regularly, whose package manager also does this:

http://www.pcbsd.org/en/package-management/


Interesting. I never knew PC-BSD did this. Been probably 10 years since I gave a BSD variant a try.


That's exactly what the Linux community needs: abstractions made for people who want to save time.


IOW, not "the super-engineer, an idealised and imaginary extension of the Unix hackers of the era that gave birth to Unix, GNU, and finally Linux."

http://www.listbox.com/member/archive/182179/2013/05/sort/ti...


In my mind, it's a much better engineered solution to have a centralized, repeatable set of configurations than to have them spread out wherever in the filesystem.


Whenever I hear "centralized configurations" I think of the Registry and shudder. But the issue there isn't the centralization, so it's probably unfair.


Interesting critique by a Debian developer:

http://lists.debian.org/debian-devel/2008/12/msg01027.html


Not interesting at all. He doesn't understand what nix is doing or why, and just points out all the ways that it's different from Debian.


For a more application-level (rather than OS-level) approach that shares a lot of good ideas with NixOS, there's also ZeroInstall (http://0install.net). There's a comparison with Nix at http://0install.net/comparison.html#Nix.


I'm not a Linux expert, but it seems some dependency issues won't be resolved by this e.g. Glibc conflicts. Or would it?


Is there something special about glibc conflicts that goes beyond normal versioning conflicts?

Package A could depend on glibc-2.13 and package B could depend on glibc-2.17. No conflict would be caused by both running at the same time.


I had a hell of a time installing something in Ubuntu that required a particular glibc version that was different to the system-wide one. Like I said, I'm no Linux expert and I probably missed a trick. I ended up using a different distro.

Would this new distro have helped?


I think in the Nix world, you would have two or more different glibc versions at different paths. Then the applications are configured to link against the required versions. Nix has extra build metadata that calls GNU configure as far as I can tell, and that is where this knowledge would lie.

So you can run two programs side by side with different glibc versions in Nix. On Ubuntu, I think this is mostly not possible, as you have found.
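
Conceptually it ends up looking something like this (the store hashes below are made up, just for illustration):

    $ ldd /nix/store/aaa111-app-a/bin/app-a | grep libc.so
            libc.so.6 => /nix/store/bbb222-glibc-2.13/lib/libc.so.6
    $ ldd /nix/store/ccc333-app-b/bin/app-b | grep libc.so
            libc.so.6 => /nix/store/ddd444-glibc-2.17/lib/libc.so.6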


As far as I understand Nix, two users could have different glibc versions, but what about two programs of a single user?


I don't see any problems here. libc is nothing special, it's just a library that (almost) every piece of software depends upon.

Two different programs may depend on two different versions of it (/nix/libc-2.10-fdsfgs/lib/libc.so.6 and /nix/libc-2.13-fgsfds/lib/libc.so.6 or even /nix/uclibc-0.9.33.1-blahblah/lib/libc.so) just fine. The only requirement is that both libc versions must support running under the current kernel¹, as you can't have two kernels at the same time.

However, it would be problematic if a single program (foo) depended on a library (libfoo) which depends on a different version of libc than foo does. I.e., the dependency graph would be like foo->libc-x.xx, foo->libfoo->libc-y.yy. This would cause a symbol conflict, and AFAIK it can't be solved without rebuilding either foo or libfoo, or introducing some really nasty hacks into the ELF loader (ld-linux).

___

¹) Google for the "FATAL: kernel too old" error message to see an example of what I mean. There are patches to make newer libcs run with relatively older kernels, though; it just requires building from source.


Two different versions will definitely live at different paths, since the path contains the content hash. The path doesn't have much to do with the user name; that's a separate issue.


As a single user I could run two programs utilizing different glibc versions.


> I had a hell of a time installing something in Ubuntu that required a particular glibc version...

I have been in the same situation. But big changes in glibc that break things are very rare; most glibc version updates are backward compatible.

> Would this new distro have helped?

Yes, but it might have required re-compiling some packages.

Btw., NixOS is not such a new distro; it's been bubbling under for years now.


The Red Hat/Fedora package manager yum also provides a rollback mechanism. No user installation, though.


Does it also provide multiple versions of the same package running side by side? That is one of the more useful features of nix to me.


Installing multiple versions is not the problem. The problem is configuring packages with respect to their dependencies.

Nix enables per-user installations, and different users can use different versions of a package. What Nix cannot do (as far as I know) is use multiple versions as the same user. For example, Flash once broke due to a change in GNU libc. Can Nix use a different libc version for a specific browser plugin?


Nix can do that: you can easily use nix-env to create a new profile instead of using the default one stored in ~/.nix-profile. I am using multiple tool-chains stored in multiple profiles for my development process.
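
Something along these lines (the profile paths and package names are just examples):

    nix-env -p ~/.nix-profiles/ghc7 -i ghc            # install a tool-chain into its own profile
    nix-env -p ~/.nix-profiles/ghc6 -i ghc6           # a second profile with an older tool-chain
    nix-env --switch-profile ~/.nix-profiles/ghc6     # point ~/.nix-profile at the one you want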


nbp is correct; it is not a problem for the same user to install multiple versions of the same package.

> Can Nix use a different libc version for a specific browser plugin?

Are you asking about the case where the browser depends on a newer version of libc than the plugin?


I tried this distro twice after I heard of its rootless package administration. I gave up both times because of the poor documentation. I would love to see this approach in Arch or Gentoo (without the hassle of slots).


That is too bad; it has worked fairly well on VMware for me. I have also enjoyed using the Nix package manager on my MacBook.

The documentation can be light on examples, but it is a very forgiving distribution to experiment with, and I do not seem to bump into corner cases as often as I have on other Linux flavors/package managers.


Having installed NixOS over a year ago, I was really impressed with the package manager. It's wonderful to be able to have multiple versions of GCC, Haskell, etc. side by side, so you can debug some of the reasons why your program works in one user's version/environment but not another's. The actual papers are an interesting read as well.


Interesting, sounds like puppet/chef at the OS level.


I've long suspected that something like this was the logical end point. Ever since I found myself writing a Puppet manifest that created an Upstart entry.

Configuration engines have tended to emphasise bits-at-rest. "Make sure these packages are installed, that these files are present, that this is what's in /etc".

Process management engines emphasise bits-in-flight. "Make sure Wordpress is running. Wordpress relies on PHP, nginx and MySQL".

Generally speaking, config engines assume that the bits-at-rest are correctly arranged to ensure correct runtime performance. And process management assumes that someone else has supplied the bits-at-rest which can be reified into an operational system.

Configuration engines tend to stray a bit into ensuring that software is up and running (eg, cfengine polls services every 5 minutes), but stop well short of the final conclusion of process management: insertion into the init hierarchy.

Why the separation? It's historical. Each local problem was solved in isolation (broken server config / crashing server processes) and they've each grown at the edges towards each other.

Just as ZFS collapsed several historical layers of file system tools into a single layer, it's been long overdue for the concept of defining a model of a system's various configurations with a detect-and-correct mechanism to be a universal framework that applies across an entire system.


Solaris' SMF and fault management framework is a very good step towards what you're after, plus it's mature and suitable for use in production.

http://www.oracle.com/technetwork/articles/servers-storage-a...

Don't let the XML configuration put you off. I suspect they'd have used JSON if they were doing it again, but it's from the era when XML was the default structured text based format.

If you want to play with this (and IPS as mentioned above), try OmniOS: http://omnios.omniti.com/


I've been busy hacking together a SmartOS zone wherein the nix package manager runs. My plan is to use disnix to configure SMF services on it.


Is it Turing complete?


You might like a programming language I'm making, "NCD". See: https://code.google.com/p/badvpn/wiki/NCD

It follows a similar philosophy to Nix, but for runtime management of processes and events in general. It's really functional, though also a bit declarative, with the implicit backtracking that is the unique feature of NCD.


KDE4? Systemd? Experimental? Centralized, versioned configuration? Sounds very, very cool. Worthy of being installed on something around here...


Interesting, how is it different from Gentoo Linux?


check this page out to understand the underlying philosophy: http://nixos.org/nix/


Gentoo needs slot support for similar behavior.


What linux needs is another package manager...


NixOS is an experimental research OS exploring what you can do with a purely functional package manager. This is interesting. You are probably being downvoted because your comment looks like a poor attempt to cash in on the "What Linux needs is another..." meme for some easy karma.

But let's take you at face value. Maybe you were serious in your assertion.

While Ubuntu, Debian, Fedora, Gentoo, and many other distros use very mature, robust and all-around awesome package managers, I still run into issues where a package cannot be installed because some other package that it depends on is pinned, or is the wrong version, or for any number of other reasons.

Nix fixes exactly this problem while still maintaining many of the benefits of the other package managers. So maybe Linux does need another package manager.

Or at least it needs people willing to play around with solving issues in the current package managers in a "research" setting perhaps?


Case in point: I had to upgrade some Debian VMs with Postgres 9.0 to Wheezy with Postgres 9.1 a while ago. They were messy combinations of mostly Etch plus parts of Lenny and Squeeze, a legacy of rushed upgrades. And while upgrading Debian version by version mostly runs smoothly with apt-get, there are some very nasty gotchas:

- You always need to upgrade apt and dpkg step by step. If you're careless, you'll leave your system with an apt and dpkg that can't install most of the following upgrades, including the next available version of itself, due to external dependencies on packages that are only available in a format your current version of dpkg/apt does not support. You then need to downgrade. Problem is, you then run into utter dependency hell. Generally the solution is to --force-all install an older version of dpkg from /var/cache/apt/archives, then update and try again.

- If you're not very careful with the apt sources when doing a Postgres upgrade, you risk having Debian install 9.1 and remove Postgres 9.0. Problem is, if it removes 9.0, you can't run pg_upgradecluster, because that requires the old Postgres to exist and be running. Now reverting is suddenly a big problem, unless you add the Postgres team's own Debian repository.

This isn't particularly a criticism of Debian (though I dislike the fact that dpkg and apt-get have external dependencies - if there's anything that should be built statically, it's a package manager). If you do things carefully, and step by step, things will work fine and you "only" need to learn a couple of rules of thumb (first, always make sure to upgrade version by version: apt-get update, apt-get install dpkg apt). These are hairy edge cases... but they'd be so much less of a problem with easy rollback and/or the ability to pull in multiple versions easily.
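
For reference, the recovery dance I mean looks roughly like this (exact paths and versions will vary):

    # dpkg/apt wedged mid-upgrade: reinstall a cached dpkg, then retry
    dpkg -i --force-all /var/cache/apt/archives/dpkg_*.deb
    apt-get update
    apt-get install dpkg apt      # upgrade the packaging tools first
    apt-get dist-upgrade          # then the rest, one release at a time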


> I still run into issues where a package is uninstallable because of some other package that it depends on is pinned, or the wrong version, or any number of other reasons.

I ran into this problem quite often when trying out scientific or numerical libraries, and it is one of the reasons I run mostly on NixOS now.


[deleted]


I think this comment from jacques_chester [1] does an excellent job at explaining why it is important to break free from this mindset periodically. It is all about busting out of local maxima in the efficiency of our tools.

[1] https://news.ycombinator.com/item?id=5727876


Local maxima, exactly. We often lose sight of how and why we are at a particular maximum and assume that the current layering is the only acceptable layering.

But sometimes it's just accidents, or was just the shortest path to a working system, or lots of little local solutions that agglomerated into larger global solutions.

Every once in a while punching through the old layers is useful. Not always. Sometimes. Having a sense of history makes this easier ... and harder.

Sometimes the outcome is so stunningly obviously better that everyone slaps their foreheads and wonders how it could have been any other way.

Other times ... well, other times we all spend our workdays arguing about it on HN.


Why doesn't it need any more fragmentation? Your argument is begging the question. Personally I think people are better off with 50 kinds of spaghetti sauce.


Improve in what way? This puts any design change off the table forever. I don't see anything wrong with a rethink, as long as it's motivated. Maybe the motivation has not been well-stated - but maybe you haven't made it any clearer that dpkg or rpm are the optimal package managers (barring a few little 'improvement' patches here or there).

Just blasting the project in the most generic terms is avoiding the necessary thinking about what the goals should be and what are good ways of achieving them.


It seems that people have good enough reasons to do away with fragmentation.

And this doesn't sound like just a package manager. It's also taking care of configurations.

That said, I'd love to see this integrated into Arch some day.


What linux needs is a better package manager. I'm not convinced that the entire solution space has been explored yet so I welcome all newcomers.


And fewer asshats.



