
This reminds me a lot of the nix package manager[1]. It offers deterministic package updates that are easy to reason about - they either succeed or have no effect at all. The packages are isolated to the point where you can have multiple versions of the same package at once.

That being said, at a glance Ubuntu Core looks like it might be better for the simplicity it brings to the table. It looks like it's making the images based on regular Ubuntu behind the scenes [2], but isolating everything for you and making things a lot more atomic. In short, it's a lot less foreign of a system than Nix.

Those are my general thoughts. I haven't actually used either of them, but Nix has captured my attention in the past for the problems it claims to solve.

[1] http://nixos.org/nix/ [2] https://news.ycombinator.com/item?id=8724049




Transactional installs are an advantage. I'm trying to understand and brainstorm whether any disadvantages exist, especially with every instance carrying its own copies of every subcomponent.

* Transactional updates across instances: Let's say I have app, web, db, and some other roles of servers. How can I ensure that all coordinating sets of instances get updated together or not at all? For example, I don't want my app servers to end up with a previous version of the Postgres adapter while my database is already updated.

* Memory requirements: does the approach increase the total memory requirements?

* Security: do we need to rely on a 3rd party for updates, or can we still compile our own subcomponents? (We had to during the recent Bash vulnerabilities.)

* Security: If every image ships with its own copies and versions of each subcomponent, do we end up having to prepare a permutation of different images to ensure they're all patched?

* Updates: Does it let integrators get lazy, so that we end up with a lot of obsolete or unimproved versions of many subcomponents?

* Architecture: Do we give up the idea of reusable shared building blocks at this level of abstraction (sub-instance)?


I can address some of those questions based on my reading of the literature and what they will integrate with:

* This seems like it would be coordinated by Fleet, Mesos, or Kubernetes. If I recall correctly, some of these let you direct new connections to new instances. For databases where clustering requires more sophisticated upgrades, it might have to be manually rolled/scheduled, but could probably be scripted with these.

* Memory requirements: Generally yes, but the thinking is that with a read-only filesystem for most data, deduplicating filesystems (Btrfs, ZFS) can reduce your memory and storage requirements.

* Security is the toughest nut to crack. You're right: if a package incorporates Bash as a simple shell to run exec against, then you end up dependent on the app provider to update it. Likewise with OpenSSH, libc, and other libraries; it seems like you could get stuck with whatever the app developer has packaged. Alternatively, it looks like if there is a security fix, it should be easy to hand-roll your own temporary version by unpacking a package, dropping in a new lib, and repackaging. Hopefully they're not pushing for static compilation (which would defeat my argument on memory as well).

* Updates: Yes, but the same problem happens when everyone has long dependency chains. Instead of laziness, it becomes a hurdle to overcome to get people to up their constraints and incorporate fixes. At least this way, every app developer can ship what works for them.

* Architecture: The reusable component aspect would likely shift closer to compiler/build process. e.g.: Look at how Cabal for Haskell and Cargo for Rust work (and occasionally, fail to work.) I think the goal would be to have reliable, repeatable builds using components managed by something else, using repositories of source code/binaries to build against.
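The "unpack, drop in a new lib, repack" workaround mentioned above is easy to sketch. This is a rough illustration, not the actual image format — it assumes the image is just a tarball, and all the file names are made up:

```python
import os
import shutil
import tarfile
import tempfile


def patch_image(image_path, member_path, patched_file, out_path):
    """Unpack a tarball image, replace one file (e.g. a vulnerable
    shared library), and repack it. Paths are illustrative."""
    workdir = tempfile.mkdtemp()
    try:
        with tarfile.open(image_path) as tar:
            tar.extractall(workdir)
        # Drop the patched library over the vulnerable copy.
        shutil.copy2(patched_file, os.path.join(workdir, member_path))
        with tarfile.open(out_path, "w:gz") as tar:
            # Repack with paths relative to the unpack root.
            for name in os.listdir(workdir):
                tar.add(os.path.join(workdir, name), arcname=name)
    finally:
        shutil.rmtree(workdir)
```

Of course, you'd then be carrying a hand-rolled image until the upstream app provider ships a proper fix, which is exactly the maintenance burden being worried about above.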


> deduplicating filesystems (Btrfs, ZFS) can reduce your memory and storage requirements

This is getting into a pretty tangential discussion, but I'd be surprised if there are net memory savings from deduplication. Disk, yes, but the dedup process itself has significant memory overhead (both memory usage and memory accesses), which would need to be offset to have a net win. At least on ZFS, it's usually recommended to turn it on only for large-RAM servers where the savings in disk space (and/or reduction in uncached disk access) are worth allocating the memory to it.
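To put rough numbers on that overhead: ZFS keeps a dedup table (DDT) entry per unique block, commonly estimated at around 320 bytes each (the real figure varies by pool). A back-of-the-envelope calculation, using assumed figures:

```python
def ddt_ram_bytes(pool_bytes, avg_block_bytes, entry_bytes=320):
    """Rough RAM needed to keep the ZFS dedup table in core:
    one entry per unique block. ~320 bytes/entry is a common rule
    of thumb; actual entry sizes vary by pool layout."""
    unique_blocks = pool_bytes // avg_block_bytes
    return unique_blocks * entry_bytes


# Example: a 10 TB pool with 128 KiB average blocks needs roughly
# 24 GB of RAM just to keep the dedup table cached.
TB, KiB = 10**12, 1024
print(ddt_ram_bytes(10 * TB, 128 * KiB) / 10**9)
```

Smaller average block sizes make this dramatically worse, which is part of why the usual advice is "don't enable dedup unless you've done this math."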


For online deduplication you are correct, but there is not much need for online deduplication on a mostly read-only system.

Btrfs currently only supports offline dedup, and I believe the current state of ZFS is online only. A curious situation, but I imagine ZFS will eventually support offline dedup, and with that, the memory requirements will fall in terms of what needs to be cached.

And memory usage would decrease, because offline dedupe on read-only files reduces duplication in cache. Even memory-only deduplication would be sufficient. I'm not sure if zswap/zram/zcache support it, but it seems like a worthwhile feature.


Btrfs has no built-in dedup support as of yet, but any snapshots made initially share all data, which is very fast and lightweight.


ZFS dedup has memory overhead for writes. Technically the read overhead is zero, since the on-disk metadata just points to the correct file chunks.


Same for me. And I thought it was strange they did not list Nix as a source of inspiration.

I really look forward to playing with UbuCore.


i hope it works. i hope it works. i hope it works.

Agreed that this could be very cool.


I like how Nix obsoletes a lot of what Puppet&co are all about, and does so in a very straightforward manner.

It's these simple solutions that make me tick :)


>I like how Nix obsoletes a lot of what Puppet&co are all about, and does so in a very straightforward manner.

I like that a lot, too. It's a great example of taking a step back, examining a problem from a different perspective, and coming up with a better, simpler solution than the current state of the art.


It would be awesome to see a Nix package manager module for puppet, saltstack, ansible, chef & co.

I think that for package installation and rollback, a simple model like that proposed by DJ Bernstein with slashpackage http://cr.yp.to/slashpackage.html seems to accomplish the same goals but with much less complexity.


I just started reading about Nix. Do you have some good resources to suggest on using it for provisioning, in a Puppet/SaltStack/... like scenario? What are the advantages, and how does it do things differently? Thanks

Edit: I ask because I understand it's a package manager, so I can find comparisons to other package managers, but not to provisioning systems.


Did you see NixOps?

Basically Nix obsoletes much of Puppet&co, to the extent that you probably want to go without it.

On top of that Nix also provides great means for deployment.


After using Nix and NixOps, my sense was that a Nix package manager module for puppet&co would be really really useful. With NixOps, my concern was that I need to have a trusted OS, and I didn't feel comfortable having to use a custom build of something Ubuntu-ish.

Maybe there's a way to convert or bootstrap any Linux OS into a NixOps-compatible distro. But I didn't see anything.


Nix packages live in isolated directories on a read-only filesystem and their build process is repeatable (usually; any exceptions are bugs!). This means that installing a Nix package will always give you the same result, and that result cannot be interfered with afterwards. Since everything's isolated and read-only, changes are transactional.
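The "transactional" part largely comes down to activating a new package set by flipping a symlink, and POSIX rename is atomic. A minimal sketch of the idea (the function and path names here are mine, not Nix's):

```python
import os


def switch_profile(profile_link, new_generation):
    """Point `profile_link` (think ~/.nix-profile) at a new store
    path. The symlink is built under a temporary name and renamed
    into place; rename() is atomic on POSIX, so any reader sees
    either the old generation or the new one, never a half-updated
    state."""
    tmp = profile_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_generation, tmp)
    os.replace(tmp, profile_link)  # atomic rename over the old link
```

If the build or download of the new package set fails, the symlink is simply never flipped, so the update "has no effect" rather than leaving a half-installed system.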

For example, let's say package A v1.0 is using some dependency B v3.0:

    B v3.0 --> A v1.0
Let's say we want to update B to v3.1. Since the packages are read-only, we can't touch any of B's files at all. Instead, B v3.1 gets installed alongside the existing v3.0. A v1.0 doesn't notice, since it's still using v3.0:

    B v3.0 --> A v1.0
    B v3.1
We want A v1.0 to use this new version of B, but again we can't touch any of its files. Instead, we install another copy of A v1.0 which uses B v3.1 as a dependency. Again, since they're isolated these two copies of A v1.0 won't interfere with each other:

    B v3.0 --> A v1.0
    B v3.1 --> A v1.0
So, how does the system know which copy of A to use? Firstly, Nix identifies packages by hashing them and their dependencies, which is why we can have two A v1.0 packages (there's no unnecessary duplication; we only get two copies when something's different). Secondly, each user has a "profile" listing which packages they want. So what's a profile? It's just another package. When you "install" a package, you're actually just creating a new version of your profile package, which has different dependencies than the old version. Hence, a more complete version of the above diagram would be:

    B v3.0 --> A v1.0 --> chris-profile-1
    B v3.1 --> A v1.0 --> chris-profile-2
We can easily rollback changes by using a previous version of our profile package (it's not even a "rollback" really, since the new profile is still there if we want it). To reclaim disk space we can do a "garbage collection" to get rid of old profile packages, then clean out anything which isn't in the dependency graph of any remaining profile package.
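The hashing and garbage-collection story above can be modelled in a few lines. This is a toy model, not Nix's real scheme (which hashes the full build inputs): a package's identity is derived from its name, version, and the identities of its dependencies, and collection drops anything unreachable from a root profile:

```python
import hashlib

store = {}  # identity hash -> (name, version, dependency hashes)


def store_hash(name, version, dep_hashes):
    """Identity = hash(name, version, dependency identities), so the
    same package built against different deps gets a different path."""
    h = hashlib.sha256()
    for part in [name, version, *sorted(dep_hashes)]:
        h.update(part.encode())
    return h.hexdigest()[:12]


def install(name, version, deps=()):
    key = store_hash(name, version, deps)
    store[key] = (name, version, tuple(deps))
    return key


def garbage_collect(roots):
    """Keep everything reachable from the root profiles; drop the rest."""
    live, stack = set(), list(roots)
    while stack:
        key = stack.pop()
        if key not in live:
            live.add(key)
            stack.extend(store[key][2])
    for key in list(store):
        if key not in live:
            del store[key]


# Two copies of A v1.0 coexist because their B dependencies differ.
b30 = install("B", "3.0")
b31 = install("B", "3.1")
a_old = install("A", "1.0", [b30])
a_new = install("A", "1.0", [b31])
profile2 = install("chris-profile", "2", [a_new])
garbage_collect([profile2])  # drops B v3.0 and the A built against it
```

Rollback in this model is just keeping the old profile package in the root set; the old dependency chain then survives collection.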

To get two machines into the same state (the use-case of Puppet and friends), we just need to install the same profile package on each. There's a Linux distro called NixOS which uses Nix to manage all of its packages and configuration.

In fact, since we can install multiple profile packages side-by-side, we can use profiles as a poor man's container. They don't offer strong protection, eg. in a shared hosting environment, but from a sysadmin perspective they give us some of the non-interference advantages (eg. no dependency hell, since there are no package conflicts or version-unification difficulties).

If we want a stronger containment system we can use NixOps, which gives Nix packages a Vagrant-like ability to provision and spin-up real or virtual machines, EC2 instances, etc. For example, we might have a "DB server" package.

If we want to orchestrate these services, we can turn them into Nix packages using DisNix; then we can have packages like "Load Balancer" which depends on "Web Server", which in turn depends on "Users DB".

I have to admit I've not used NixOps or DisNix, but I do use NixOS as my main OS and have grown to love it :)


Whelp.. I've been seeing a lot about Nix on and off for the last year or so, and looked at it once or twice, but never quite got why it was compelling. Now I see... time to go investigate.

I recommend copypasta-ing this to an FAQ associated with Nix!


Very interesting, thanks for the writeup



