On NixOS, you do not need to be root to install software. In addition to the system-wide ‘profile’ (set of installed packages), all users have their own profile in which they can install packages. Nix allows multiple versions of a package to coexist, so different users can have different versions of the same package installed in their respective profiles. If two users install the same version of a package, only one copy will be built or downloaded, and Nix’s security model ensures that this is secure. Users cannot install setuid binaries.
Requiring administrator rights to install software is the root of all evil in OS management.
As far as keeping everything in a special location (/nix or whatever) goes, this reminds me of SCO OpenServer 5: they had all the files somewhere in /var; /bin, /sbin and everything else were just symlinks. It did not work all that well.
Pun intended? ;)
Disclaimer: I'm also one of the IPS authors, so I agree with some of NixOS' ideas :-)
I try to stay abreast of package systems that have the same or a similar feature set to Nix. If you have a moment to explain the core mechanic, or can point me to a reference, it would be much appreciated.
But really, a lot of the atomic and reliable aspects are the result of integration with ZFS. When the user upgrades packages, we create a backup boot environment (a clone of the root file system) and then update the packages. If anything goes wrong, the user can just reboot into the backup BE.
When updating the system (that is, packages that require the kernel to be reloaded/system rebooted), we create a clone of the root file system (a boot environment) and update the clone. When the update finishes, the admin can then reboot the system at their leisure.
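For the curious, that flow corresponds roughly to what you would do by hand with boot environments on an illumos/ZFS system. Here's a sketch using the illumos beadm and IPS pkg commands (details vary by release, so treat it as illustrative rather than a recipe):

    # Roughly what the package system automates; illustrative only.
    beadm create pre-upgrade      # clone the root file system as a backup BE
    pkg update                    # update packages in the running BE
    beadm list                    # check which BE will be active on next boot
    # If something breaks, fall back to the clone:
    beadm activate pre-upgrade && reboot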
If you'd like to know more, I'll try to answer your questions. You can email me at $myusername at gmail dot com.
Thankfully this was just on a test system, and I do appreciate that other people have had good success with Btrfs; but having been a ZFS user for quite a few years now (since before the Linux and FreeBSD ports became stable, so my file servers originally ran Solaris) without ever having had an issue with the file system (in fact it's saved me a few times), I can't help thinking that Btrfs is still a long way off being a viable contender.
What's more, I found Btrfs's commands to be convoluted and, at times, counter-intuitive compared with ZFS's. That isn't a deal breaker on its own, but it is a great shame given the opportunity they had to get it right (i.e. writing the entire software stack from scratch, with no legacy to worry about).
This is all my personal experiences though. Others will have their own preferences and (anecdotal) evidence to support that.
We've been using chroot, FreeBSD jails, Solaris Containers, and similar technologies for many years now. Combined with the basic functionality of UNIX and UNIX-like systems, we can quite safely isolate users, apps and data, but without the overhead and inconvenience of heavy-weight virtualization techniques. Even then, these techniques are quite new relative to what's been available to mainframe users for many decades.
Need two versions of the same library for two different applications? No problem, because each app gets its own interface.
One app needs to access a physical block device and another a virtual one? No problem, because each app gets its own interface.
Is the capability required by the app deployed on another machine, and you want to expose it to the application transparently? No problem, because each app gets its own interface.
And if the current hardware isn't amenable to running such systems with ease, well, that might have something to do with the fact that everything we're running has its roots in the 1980s. The unprecedented success of the PC revolution has essentially frozen all development of substantially new systems.
1. Horses for courses: Once you get past superficial similarity, the Web isn't a universal application runtime. It is more appropriate for some kinds of apps, such as publication media, form-filling, etc. than others, such as apps with fine-grained interactivity, apps with significant endpoint computation, etc.
2. Nobody has cracked the "write once, test everywhere" curse of multiple implementations of a standard, never mind a standard as complex, and with such a long legacy tail as Web browsers. Web app frameworks still devote a lot of code to compatibility. "Native" also means "a single vendor's runtime."
3. Operating systems are different, and there are varying approaches to portability. Running in a universal runtime is one approach, and it has inherent compromises. For example, Microsoft has applications for editing Office documents for Windows, Mac OS X, and for Web browsers. Which one has limitations both in terms of integrating OS features and UI conventions, and in terms of functionality?
This talk by Eelco is a good introduction for hacker types, and the motivation is covered at the beginning of the talk.
Here's another example that's not public, and not really a distro, but... http://stackoverflow.com/questions/3380795/what-does-amazon-...
Does anybody have any experiences/testimonials to post about this OS?
I enjoy how easy it is to set up different profiles with different sets of libraries/compilers for development.
It has made it worthwhile to try out old libraries that only work on old compilers. It also made it easier for me to test against the HEAD of the compiler repo, for forward compatibility or new compiler features.
Since everything is nicely encapsulated, once I have an environment working on my laptop I can use `nix-copy-closure` to move it to another remote server. `Nixops`, a more recent offering just renamed from `charon`, also allows for deterministic building of EC2 instances.
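As a rough illustration of both points (the profile paths and package attribute names here are just examples and may differ depending on your channel):

    # Keep separate per-purpose profiles; attribute names are illustrative.
    mkdir -p ~/my-profiles
    nix-env -p ~/my-profiles/ghc-old -iA nixpkgs.haskellPackages.ghc
    nix-env -p ~/my-profiles/gcc-new -iA nixpkgs.gcc

    # Copy everything a profile needs to another machine in one go.
    nix-copy-closure --to user@remote-server $(readlink -f ~/my-profiles/gcc-new)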
You need to add different channels if you are grabbing packages from places other than nix-pkgs.
I had a hard time getting some Haskell packages to work, because NixOS symlinks all the binaries and there was an encryption library dependency hard-coded in a cabal file. It was a headache and I gave up.
Eelco also provides a patchelf utility for modifying the dynamic linker in ELF files, though I haven't used it.
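If you do end up needing it, usage looks roughly like this (the store paths below are made-up placeholders, not real hashes):

    # Point a pre-built binary at a Nix-provided loader and libraries.
    patchelf --print-interpreter ./some-prebuilt-binary
    patchelf --set-interpreter /nix/store/...-glibc-2.17/lib/ld-linux-x86-64.so.2 \
             --set-rpath /nix/store/...-glibc-2.17/lib \
             ./some-prebuilt-binary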
I'm on Ubuntu again, but I can't remember why. Oh yeah, I couldn't get BlackBerry's SDK working on NixOS. Anything that doesn't go through nix-pkgs will take some work. I didn't get to the point where I was writing nix-expressions.
Other than that it works fairly well (in a VM), and you don't need to wait for it to compile all the packages like with Gentoo: if the build server has a binary then it will download that.
It shares concepts and some bits with NixOS, but replaces the configuration language with Guile, an implementation of Scheme.
However, the general idea of installing things in a self-contained directory was fantastic, and I'm happy to see another operating system trying something similar. I used to use PC-BSD regularly, whose package manager also does this:
Package A could depend on glibc-2.13 and package B could depend on glibc-2.17. No conflicts would be caused by both running at the same time.
Would this new distro have helped?
So you can run two programs side by side with different glibc versions in Nix. On Ubuntu, I think this is mostly not possible, as you have found.
Two different programs may depend on two different versions of it (/nix/libc-2.10-fdsfgs/lib/libc.so.6 and /nix/libc-2.13-fgsfds/lib/libc.so.6 or even /nix/uclibc-0.9.33.1-blahblah/lib/libc.so) just fine. The only requirement is that both libc versions must support running under the current kernel¹, as you can't have two kernels at the same time.
However, it would be problematic if a single program (foo) depended on a library (libfoo) which depends on a different version of libc than foo does. I.e., the dependency graph would look like foo->libc-x.xx, foo->libfoo->libc-y.yy. This would cause a symbol conflict, and AFAIK it can't be solved without either rebuilding foo or libfoo, or introducing some really nasty hacks into the ELF loader (ld-linux).
¹) Google for the "FATAL: kernel too old" error message to see an example of what I mean. There are patches to make newer libcs run on relatively older kernels, though; it just requires building from source.
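You can see the coexistence directly on a Nix system by looking at what each binary links against. Hypothetical output, with the store hashes shortened for readability:

    $ ldd $(which foo) | grep libc.so
        libc.so.6 => /nix/store/aaaa...-glibc-2.13/lib/libc.so.6
    $ ldd $(which bar) | grep libc.so
        libc.so.6 => /nix/store/bbbb...-glibc-2.17/lib/libc.so.6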
I have been in the same situation. But big changes in glibc that break things are very rare; most glibc version updates are backward compatible.
> Would this new distro have helped?
Yes, but it might have required re-compiling some packages.
Btw. NixOS is not such a new distro, it's been bubbling under for years now.
Nix enables per-user installations, and different users can use different versions of a package. What Nix cannot do (as far as I know) is use multiple versions as the same user. For example, Flash once broke due to a change in GNU libc. Can Nix use a different libc version for a specific browser plugin?
> Can Nix use a different libc version for a specific browser plugin?
Are you asking about the case where the browser depends on a newer version of libc than the plugin?
The documentation can be light on examples, but it is a very forgiving distribution to experiment with, and I do not seem to bump into corner cases as often as I have with other Linux flavors/package managers.
Configuration engines have tended to emphasise bits-at-rest. "Make sure these packages are installed, that these files are present, that this is what's in /etc".
Process management engines emphasise bits-in-flight. "Make sure Wordpress is running. Wordpress relies on PHP, nginx and MySQL".
Generally speaking, config engines assume that the bits-at-rest are correctly arranged to ensure correct runtime performance. And process management assumes that someone else has supplied the bits-at-rest which can be reified into an operational system.
Configuration engines tend to stray a bit into ensuring that software is up and running (eg, cfengine polls services every 5 minutes), but stop well short of the final conclusion of process management: insertion into the init hierarchy.
Why the separation? It's historical. Each local problem was solved in isolation (broken server config / crashing server processes) and they've each grown at the edges towards each other.
Just as ZFS collapsed several historical layers of file system tools into a single layer, it's been long overdue for the concept of defining a model of a system's various configurations with a detect-and-correct mechanism to be a universal framework that applies across an entire system.
Don't let the XML configuration put you off. I suspect they'd have used JSON if they were doing it again, but it's from the era when XML was the default structured text based format.
If you want to play with this (and IPS as mentioned above), try OmniOS: http://omnios.omniti.com/
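If you do try it, the detect-and-correct side shows up directly in SMF's command-line tooling. A quick taste (the service FMRI is only an example):

    svcs -xv                                     # explain any services not running as modelled
    svcadm restart svc:/network/http:apache2     # ask the restarter to bring the service back in line
    svccfg export svc:/network/http > http.xml   # the XML manifest mentioned above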
It follows a similar philosophy to Nix, but for runtime management of processes and events in general. It's really functional, though, and also a bit declarative, with implicit backtracking being the unique feature of NCD.
But let's take you at face value. Maybe you were serious in your assertion.
While Ubuntu, Debian, Fedora, Gentoo, and many other distros use very mature, robust and all-around awesome package managers, I still run into issues where a package is uninstallable because some other package it depends on is pinned, or is the wrong version, or for any number of other reasons.
Nix fixes exactly this problem while still maintaining many of the same benefits as the other package managers. So maybe Linux does need another package manager.
Or at least it needs people willing to play around with solving issues in the current package managers in a "research" setting perhaps?
- You always need to upgrade apt and dpkg step by step: if you're careless, you'll leave your system with an apt and dpkg that can't install most of the following upgrades, including the next available version of itself, due to external dependencies on packages that are only available in a format your current version of dpkg/apt does not support. You then need to downgrade. The problem is that you then run into utter dependency hell. Generally the solution is to --force-all install an older version of dpkg from /var/cache/apt/archives, then update and try again (a rough sketch of that recovery follows below).
- If you're not very careful with the apt sources when doing a Postgres upgrade, you risk having Debian install 9.1 and remove Postgres 9.0. The problem is, if it removes 9.0, you can't run pg_upgradecluster, because that requires the old Postgres to exist and be running. Now reverting is suddenly a big problem, unless you add the Postgres team's own Debian repository.
This isn't particularly a criticism of Debian (though I dislike the fact that dpkg and apt-get have external dependencies - if there's anything that should be built statically, it's a package manager) - if you do things carefully, and step by step, things will work fine and you "only" need to learn a couple of rules of thumb (First, always make sure to upgrade version by version, apt-get update, apt-get install dpkg apt). These are hairy edge cases... But they'd be so much less of a problem with easy rollback and/or ability to pull in multiple versions easily.
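For reference, the recovery dance mentioned above looks roughly like this (the exact .deb filename is a placeholder and depends on what happens to be left in your local cache):

    ls /var/cache/apt/archives/dpkg_*.deb
    dpkg -i --force-all /var/cache/apt/archives/dpkg_<older-version>_amd64.deb
    apt-get update
    apt-get install dpkg apt    # then continue upgrading version by version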
I ran into this problem quite often when trying out scientific or numerical libraries, and it is one of the reasons I run mostly on NixOS now.
But sometimes it's just accidents, or was just the shortest path to a working system, or lots of little local solutions that agglomerated into larger global solutions.
Every once in a while punching through the old layers is useful. Not always. Sometimes. Having a sense of history makes this easier ... and harder.
Sometimes the outcome is so stunningly obviously better that everyone slaps their foreheads and wonders how it could have been any other way.
Other times ... well, other times we all spend our workdays arguing about it on HN.
Just blasting the project in the most generic terms is avoiding the necessary thinking about what the goals should be and what are good ways of achieving them.
And this doesn't sound like just a package manager. It's also taking care of configurations.
That said, I'd love to see this integrated into Arch some day.