
NixOS - Declarative configuration OS - wamatt
http://nixos.org/nixos/
======
foohbarbaz
This is a move in the right direction:

=== On NixOS, you do not need to be root to install software. In addition to
the system-wide ‘profile’ (set of installed packages), all users have their
own profile in which they can install packages. Nix allows multiple versions
of a package to coexist, so different users can have different versions of the
same package installed in their respective profiles. If two users install the
same version of a package, only one copy will be built or downloaded, and
Nix’s security model ensures that this is secure. Users cannot install setuid
binaries. ===

The requirement to be an administrator to install software is the root of all
evils in OS management.
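As an unprivileged user, the per-user workflow looks roughly like this (a
sketch; it assumes a standard nixpkgs channel, and `hello` is just an example
package):

```shell
nix-env -iA nixpkgs.hello   # install into this user's profile; no root needed
nix-env -q                  # list what is installed in the current profile
nix-env --rollback          # undo the last change to the profile
```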

As far as keeping everything in a special location (/nix or whatever) goes,
this reminds me of SCO OpenServer 5: they had all the files somewhere in /var;
/bin, /sbin and everything else were just symlinks. It did not work all that
well.

~~~
dEnigma
>[...]the root of all evils[...]

Pun intended? ;)

------
binarycrusader
Some of the ideas in NixOS can also be seen in the Image Packaging System
(Atomic Upgrades, Reliable Upgrades) project:
<https://java.net/projects/ips/pages/Home>

Disclaimer: I'm also one of the IPS authors, so I agree with some of NixOS'
ideas :-)

~~~
davorak
I read one article and skimmed the other under "Background reading", but I
could not spot the core philosophy or mechanic behind how you are producing
atomic and reliable upgrades.

I try to stay abreast of package systems which have the same or a close
feature set to Nix's. If you have a moment to explain the core mechanic, or
point me to a reference, it would be much appreciated.

~~~
binarycrusader
The "no scripting zone" and "no more installer magic" background posts have
some of the philosophy.

But really a lot of the atomic and reliable aspects are the result of
integration with ZFS. When the user upgrades packages, we create a backup boot
environment (a clone of the root file system) and then update the packages. If
anything goes wrong, the user can just reboot into the backup BE.

When updating the system (that is, packages that require the kernel to be
reloaded/system rebooted), we create a clone of the root file system (a boot
environment) and update the clone. When the update finishes, the admin can
then reboot the system at their leisure.
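In ZFS terms, that workflow can be sketched with illumos/Solaris-style `beadm`
and `pkg` (the BE name here is made up):

```shell
beadm create before-upgrade    # snapshot + clone the root FS as a backup BE
pkg update                     # upgrade packages
# if anything goes wrong:
beadm activate before-upgrade  # boot the backup BE on next restart
reboot
```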

If you'd like to know more, I'll try to answer your questions. You can email
me at $myusername at gmail dot com.

~~~
binarycrusader
If anyone is wondering, I suspect you could take IPS and integrate it with
btrfs instead. The abstractions are generic, and btrfs is growing the required
functionality as time goes on.

~~~
laumars
I tried to do just this myself and Btrfs ended up corrupting my install beyond
use.

Thankfully this was just on a test system, and I do appreciate that other
people have had good success with Btrfs. But I have been a ZFS user for quite
a few years now (since before the Linux and FreeBSD ports became stable, so my
file servers were originally running Solaris) and have never had an issue with
the file system (in fact it's saved me a few times), so I can't help thinking
that Btrfs is still a long way off being a viable contender.

What's more, I found Btrfs's commands to be convoluted and, at times,
counter-intuitive compared with ZFS's. That isn't a deal breaker on its own,
but it is a great shame given the opportunity they had to get it right (i.e.
writing the entire software stack from scratch, with no legacy to worry
about).

This is all my personal experiences though. Others will have their own
preferences and (anecdotal) evidence to support that.

------
noonespecial
We seem to be heading for a future where every application is run in a
virtualized OS, configured and launched on demand.

~~~
skrebbel
We seem to be heading for a future where the only application run locally is a
web browser.

~~~
tensor
We seem to be making the same unrealistically broad generalizations over and
over, never learning from our past.

------
laurentoget
[http://audio-video.gnu.org/video/ghm2009/ghm2009-dolstra-
lar...](http://audio-video.gnu.org/video/ghm2009/ghm2009-dolstra-large.ogg)

This talk by Eelco Dolstra is a good introduction for hacker types, and the
motivation is covered at the beginning of the talk.

------
yowmamasita
This "Multi-user package management" is a really neat feature; I wonder if
there are other distros that have it.

~~~
aphexairlines
Multi-user and multi-application (different applications can depend on
different versions of a shared library).

Here's another example that's not public, and not really a distro, but...
[http://stackoverflow.com/questions/3380795/what-does-
amazon-...](http://stackoverflow.com/questions/3380795/what-does-amazon-use-
for-its-build-and-release-system)

------
GigabyteCoin
After playing around with Arch for the last few weeks, this is incredibly
interesting to me.

Does anybody have any experience/testaments to post about this OS?

~~~
davorak
Sure, NixOS is very forgiving to experiment with because you can easily roll
back to a previous configuration. If you modify a library and something stops
working, fixing it is as simple as `nix-env --rollback` for a user package, or
`nixos-rebuild --rollback` if something was changed in the system
configuration. A restart may be needed if some services are touched, but that
has not happened often, and when it is needed it is specified.
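The rollback machinery is visible as numbered "generations" of a profile; a
quick sketch (the generation number is just an example):

```shell
nix-env --list-generations        # numbered history of this user's profile
nix-env --rollback                # step back one generation
nix-env --switch-generation 42    # or jump to a specific one
nixos-rebuild switch --rollback   # same idea for the system configuration
```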

I enjoy how easy it is to set up different profiles with different sets of
libraries/compilers for development.

It has made it worthwhile to try out old libraries that only work with old
compilers. It also made it easier for me to test against the HEAD of the
compiler repo, for forward compatibility or new compiler features.

Since everything is nicely encapsulated, once I have an environment working on
my laptop I can use `nix-copy-closure` to move it to a remote server.
`Nixops`, a more recent offering just renamed from `charon`, also allows for
deterministic building of EC2 instances.
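Moving a working environment to another machine is roughly (hypothetical host
name):

```shell
# Copy the profile's entire dependency closure into the remote /nix/store
nix-copy-closure --to user@remote-host $(readlink -f ~/.nix-profile)
```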

------
ztzg
For those with a penchant for parentheses, there is also GNU Guix, which just
had a second alpha release:

[http://lists.gnu.org/archive/html/bug-
guix/2013-05/msg00034....](http://lists.gnu.org/archive/html/bug-
guix/2013-05/msg00034.html)

It shares concepts and some bits with NixOS, but replaces the configuration
language with Guile, an implementation of Scheme.

------
snowpalmer
The way they install multiple versions of applications and manage the
dependencies reminds me of GoboLinux. I like the direction of this, though I
really like how they named everything in Gobo.

[http://wiki.gobolinux.org/index.php?title=The_GoboLinux_File...](http://wiki.gobolinux.org/index.php?title=The_GoboLinux_Filesystem_Hierarchy)

~~~
gw
I remember playing with Gobo several years back. Although renaming system
directories makes it a bit more approachable to newcomers, it also makes it
less language independent and less recognizable to the users who typically
need to use it (sys admins and such).

However, the general idea of installing things in a self-contained directory
was fantastic, and I'm happy to see another operating system trying something
similar. I used to use PC-BSD regularly, whose package manager also does this:

<http://www.pcbsd.org/en/package-management/>

~~~
snowpalmer
Interesting. I never knew PC-BSD did this. Been probably 10 years since I gave
a BSD variant a try.

------
oakaz
That's exactly what the Linux community needs: abstractions made for people
who want to save time.

~~~
vdm
IOW, not "the super-engineer, an idealised and imaginary extension of the Unix
hackers of the era that gave birth to Unix, GNU, and finally Linux."

[http://www.listbox.com/member/archive/182179/2013/05/sort/ti...](http://www.listbox.com/member/archive/182179/2013/05/sort/time_rev/page/1/entry/1:75/20130508172342:8CF96552-B825-11E2-98EE-8CBB5940B2DC/)

~~~
drivebyacct2
In my mind, it's a much better engineered solution to have a centralized,
repeatable set of configurations than to have them spread out wherever in the
filesystem.

~~~
qu4z-2
Whenever I hear "centralized configurations" I think of the Registry and
shudder. But the issue there isn't the centralization, so it's probably
unfair.

------
brini
Interesting critique by a Debian developer:

<http://lists.debian.org/debian-devel/2008/12/msg01027.html>

~~~
cwp
Not interesting at all. He doesn't understand what nix is doing or why, and
just points out all the ways that it's different from Debian.

------
gfxmonk
For a more application-level (rather than OS-level) approach that shares a lot
of good ideas with NixOS, there's also ZeroInstall (<http://0install.net>).
There's a comparison with Nix at <http://0install.net/comparison.html#Nix>.

------
topbanana
I'm not a Linux expert, but it seems some dependency issues won't be resolved
by this, e.g. glibc conflicts. Or would they?

~~~
davorak
Is there something special about glibc conflicts that goes beyond normal
versioning conflicts?

Package A could depend on glibc-2.13 and package B could depend on
glibc-2.17. No conflict would be caused by both running at the same time.

~~~
topbanana
I had a hell of a time installing something on Ubuntu that required a
particular glibc version that was different to the system-wide one. Like I
said, I'm no Linux expert and I probably missed a trick. I ended up using a
different distro.

Would this new distro have helped?

~~~
chubot
I think in the Nix world, you would have two or more different glibc versions
at different paths. Then the applications are configured to link against the
required versions. Nix has extra build metadata that calls GNU configure as
far as I can tell, and that is where this knowledge would lie.

So you can run two programs side by side with different glibc versions in Nix.
On Ubuntu, I think this is mostly not possible, as you have found.
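Illustrative of what that looks like in practice; the program names and store
hashes below are invented:

```shell
$ ldd $(which foo) | grep 'libc.so'
  libc.so.6 => /nix/store/abc123...-glibc-2.13/lib/libc.so.6
$ ldd $(which bar) | grep 'libc.so'
  libc.so.6 => /nix/store/def456...-glibc-2.17/lib/libc.so.6
```

Each binary is linked against its own store path, so the two can run side by
side.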

~~~
qznc
As far as I understand Nix, two users could have different glibc versions, but
can two programs of a single user?

~~~
drdaeman
I don't see any problems here. libc is nothing special, it's just a library
that (almost) every piece of software depends upon.

Two different programs may depend on two different versions of it
(/nix/libc-2.10-fdsfgs/lib/libc.so.6 and /nix/libc-2.13-fgsfds/lib/libc.so.6
or even /nix/uclibc-0.9.33.1-blahblah/lib/libc.so) just fine. The only
requirement is that both libc versions must support running under the current
kernel¹, as you can't have two kernels at the same time.

However, it would be problematic if a single program (foo) depended on a
library (libfoo) which depends on a different version of libc than foo does.
I.e., the dependency graph would be foo->libc-x.xx, foo->libfoo->libc-y.yy.
This would cause symbol conflicts, and AFAIK it can't be solved without either
rebuilding foo or libfoo, or introducing some really nasty hacks into the ELF
loader (ld-linux).

___

¹) Google for the "FATAL: kernel too old" error message to see an example of
what I mean. There are patches to make newer libcs run on relatively older
kernels, though; it just requires building from source.
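The per-object pinning that makes the two-programs case work (and the
foo->libfoo diamond break) is visible in the ELF headers; the binary names and
store paths here are invented:

```shell
$ readelf -d ./foo | grep RUNPATH
 (RUNPATH)  Library runpath: [/nix/store/abc123...-glibc-2.13/lib]
$ readelf -d ./libfoo.so | grep RUNPATH
 (RUNPATH)  Library runpath: [/nix/store/def456...-glibc-2.17/lib]
```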

------
qznc
The Red Hat/Fedora package manager yum also provides a rollback mechanism. No
user installation, though.

~~~
davorak
Does it also provide multiple versions of the same package running side by
side? That is one of the more useful features of nix to me.

~~~
qznc
Installing multiple versions is not the problem. The problem is configuring
packages with respect to their dependencies.

Nix enables per-user installations, and different users can use different
versions of a package. What Nix cannot do (as far as I know) is use multiple
versions as the same user. For example, Flash once broke due to a change in
the GNU libc. Can Nix use a different libc version for a specific browser
plugin?

~~~
nbp
Nix can do that: you can easily use nix-env to create a new profile instead of
using the default one stored in ~/.nix-profile. I am using multiple toolchains
stored in multiple profiles for my development process.
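Something like this (the profile paths and attribute names are examples, not
exact):

```shell
nix-env -p ~/.nix-profiles/gcc47 -iA nixpkgs.gcc47
nix-env -p ~/.nix-profiles/gcc48 -iA nixpkgs.gcc48
# select a toolchain by putting that profile's bin/ first on PATH:
export PATH=$HOME/.nix-profiles/gcc47/bin:$PATH
```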

------
durbatuluk
I tried this distro twice after I heard of its rootless package
administration. I gave up both times because of the poor documentation. I
would love to see this approach in Arch or Gentoo (without the butthurt of
slots).

~~~
davorak
That is too bad; it has worked fairly well on VMware for me. I have also
enjoyed using the Nix package manager on my MacBook.

The documentation can be light on examples, but it is a very forgiving
distribution to experiment with, and I do not seem to bump into corner cases
as often as I have on other Linux flavors/package managers.

------
yarou
Having installed NixOS over a year ago, I was really impressed with the
package manager. It's wonderful to be able to have multiple versions of GCC,
Haskell, etc. side by side, so you can debug why your program works in one
user's version/environment but not another's. The actual papers are an
interesting read as well.

------
subprotocol
Interesting, sounds like puppet/chef at the OS level.

~~~
jacques_chester
I've long suspected that something like this was the logical end point. Ever
since I found myself writing a Puppet manifest that created an Upstart entry.

Configuration engines have tended to emphasise bits-at-rest. "Make sure these
packages are installed, that these files are present, that this is what's in
/etc".

Process management engines emphasise bits-in-flight. "Make sure Wordpress is
running. Wordpress relies on PHP, nginx and MySQL".

Generally speaking, config engines assume that the bits-at-rest are correctly
arranged to ensure correct runtime performance. And process management assumes
that someone else has supplied the bits-at-rest which can be reified into an
operational system.

Configuration engines tend to stray a _bit_ into ensuring that software is up
and running (eg, cfengine polls services every 5 minutes), but stop well short
of the final conclusion of process management: insertion into the init
hierarchy.

Why the separation? It's historical. Each local problem was solved in
isolation (broken server config / crashing server processes) and they've each
grown at the edges towards each other.

Just as ZFS collapsed several historical layers of file system tools into a
single layer, it's been long overdue for the concept of defining a model of a
system's various configurations with a detect-and-correct mechanism to be a
universal framework that applies across an entire system.
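NixOS's configuration.nix is one concrete attempt at such a framework:
bits-at-rest and bits-in-flight are declared in the same model. A sketch
(option names simplified and from memory):

```nix
{ config, pkgs, ... }:
{
  environment.systemPackages = [ pkgs.git ];  # bits-at-rest: installed software
  services.openssh.enable = true;             # bits-in-flight: a managed daemon
  users.extraUsers.alice = { };               # other state, same declarative model
}
```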

~~~
bensummers
Solaris' SMF and fault management framework is a very good step towards what
you're after, plus it's mature and suitable for use in production.

[http://www.oracle.com/technetwork/articles/servers-
storage-a...](http://www.oracle.com/technetwork/articles/servers-storage-
admin/intro-smf-basics-s11-1729181.html)

Don't let the XML configuration put you off. I suspect they'd have used JSON
if they were doing it again, but it's from the era when XML was the default
structured text based format.

If you want to play with this (and IPS as mentioned above), try OmniOS:
<http://omnios.omniti.com/>

~~~
BlueZeniX
I've been busy hacking together a SmartOS zone wherein the nix package manager
runs. My plan is to use disnix to configure SMF services on it.

------
drivebyacct2
KDE4? Systemd? Experimental? Centralized, versioned configuration? Sounds
very, very cool. Worthy of being installed on something around here...

------
denysonique
Interesting, how is it different from Gentoo Linux?

~~~
vitno
check this page out to understand the underlying philosophy:
<http://nixos.org/nix/>

------
mbell
What linux needs is another package manager...

~~~
zaphar
NixOS is an experimental research OS exploring what you can do with a purely
functional package manager. This is interesting. You are probably being
downvoted because your comment looks like a poor attempt to cash in on the
"What linux needs is another..." meme for some easy karma.

But let's take you at face value. Maybe you were serious in your assertion.

While Ubuntu, Debian, Fedora, Gentoo, and many other distros use very mature,
robust and all-around awesome package managers, I still run into issues where
a package is uninstallable because some other package that it depends on is
pinned, or the wrong version, or any number of other reasons.

Nix fixes exactly this problem while still maintaining many of the same
benefits as the other package managers. So maybe Linux does need another
package manager.

Or at least it needs people willing to play around with solving issues in the
current package managers in a "research" setting perhaps?

~~~
vidarh
Case in point: I had to upgrade some Debian VMs with Postgres 9.0 to Wheezy
with Postgres 9.1 a while ago. They were messy combinations of mostly Etch
plus parts of Lenny and Squeeze, a legacy of rushed upgrades. And while
upgrading Debian version by version _mostly_ runs smoothly with apt-get, there
are some very nasty gotchas:

\- You always need to upgrade apt and dpkg step by step: if you're careless,
you'll leave your system with an apt and dpkg that can't install most of the
following upgrades, _including_ the next available version of itself, due to
external dependencies on packages that are only available in a format your
current version of dpkg/apt does not support. You then need to downgrade. The
problem is that you then run into utter dependency hell. Generally the
solution is to --force-all install an older version of dpkg from
/var/cache/apt/archives, then update and try again.

\- If you're not _very_ careful with the apt sources when doing a Postgres
upgrade, you risk having Debian install 9.1 and _remove_ Postgres 9.0. The
problem is, if it removes 9.0, you can't run pg_upgradecluster, because that
requires the old Postgres to exist and be running. Now reverting is suddenly a
big problem unless you add the Postgres team's own Debian repository.

This isn't particularly a criticism of Debian (though I dislike the fact that
dpkg and apt-get have external dependencies - if there's anything that should
be built statically, it's a package manager). If you do things carefully, step
by step, things will work fine and you "only" need to learn a couple of rules
of thumb (always upgrade version by version; first apt-get update, then
apt-get install dpkg apt). These are hairy edge cases... but they'd be so much
less of a problem with easy rollback and/or the ability to pull in multiple
versions easily.
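The downgrade escape hatch mentioned above looks roughly like this (relying on
whatever older dpkg .deb is still in the apt cache):

```shell
dpkg --force-all -i /var/cache/apt/archives/dpkg_*.deb
apt-get update
apt-get install dpkg apt   # then continue the release upgrade step by step
```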

