Nixery: Transparently build and serve containers using Nix (github.com)
192 points by kimburgess 59 days ago | 27 comments



I have to say, this is pretty clever. Dealing with Docker image tags for dependencies is a real pain and doesn't make much sense. You get things like `[package semver]-[edition]-[distro][distro version]`, and that in no way guarantees the tag hasn't been rebuilt since the last time you pulled it - which has horrible implications for security patches to the image itself, and so on.

I don't know where the community will land on a solution, but this is a clever method. I hope it gains traction.


Currently, when running a private Nixery instance, you can actually specify a full git commit hash for the Nix repository that you're using.

If this repository is the upstream nixpkgs[1], or if your private repository follows the same rules about not importing anything without fully-pinned hashes, you can already get those guarantees about the image content with Nixery today!

The one exception is packages that are not in the binary cache and end up being rebuilt in a non-reproducible way (the binaries might differ) - but NixOS is ~98% reproducible already[2]!

[1]: https://github.com/NixOS/nixpkgs/

[2]: https://r13y.com


I want to try building something like this, but using Tahoe-LAFS[1] to store the packages / images, so that you could get de-duplication for free, and capability based access control. I have no idea if it would be feasible in practice yet. How hard is it to build a custom docker registry? From a glance at this code it doesn't look terribly complex to get something simple going.

https://tahoe-lafs.org/trac/tahoe-lafs
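For a sense of scale: the registry side is a fairly small HTTP surface. Per the Docker Registry HTTP API v2 spec, a registry mostly has to answer manifest and blob requests. A rough sketch of the route parsing in Python (the function and names are my own illustration, not Nixery's actual code):

```python
import re

# Docker Registry HTTP API v2: clients mostly issue two kinds of GETs:
#   GET /v2/<name>/manifests/<reference>
#   GET /v2/<name>/blobs/<digest>
ROUTE = re.compile(r"^/v2/(?P<name>.+)/(?P<kind>manifests|blobs)/(?P<ref>[^/]+)$")

def parse_request(path: str):
    """Return (image name, request kind, reference), or None if unmatched."""
    m = ROUTE.match(path)
    if m is None:
        return None
    return m.group("name"), m.group("kind"), m.group("ref")

# Multi-segment names like "shell/git/htop" are exactly what Nixery exploits:
print(parse_request("/v2/shell/git/htop/manifests/latest"))
# -> ('shell/git/htop', 'manifests', 'latest')
```

The rest is serving manifests (JSON) and blobs (tarballs) for those routes, which is why a simple registry really isn't much code.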


Sorry for the possibly lazy comment, but why Tahoe-LAFS instead of IPFS?


IPFS would be an interesting experiment too. Mainly I want to try it with Tahoe because of two features: convergent encryption (basically a way of deduplicating encrypted blobs) and capability based security (a way of doing access control at the file/blob level, rather than role-based).
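The dedup property of convergent encryption is easy to demonstrate: derive the key from the content itself, so equal plaintexts always encrypt to equal ciphertexts. This is a toy stdlib-only sketch of the idea (NOT Tahoe's actual cipher - Tahoe uses real AES; the SHA-256 "keystream" here is purely illustrative):

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy SHA-256 counter-mode keystream - a stand-in for a real cipher,
    # not secure, just enough to show the deduplication property.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The key is derived from the content, so identical plaintexts always
    # yield identical ciphertexts -> encrypted blobs deduplicate for free.
    key = hashlib.sha256(plaintext).digest()
    ct = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, len(plaintext))))
    return key, ct

def decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(c ^ s for c, s in zip(ct, _keystream(key, len(ct))))

k1, c1 = convergent_encrypt(b"identical layer blob")
k2, c2 = convergent_encrypt(b"identical layer blob")
assert c1 == c2                                   # equal blobs -> equal ciphertexts
assert decrypt(k1, c1) == b"identical layer blob"  # round-trips with the key
```

Holding the key is then the read capability for that blob, which is where Tahoe's capability-based access control comes in.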


Quick note on the layers:

The current layering strategy of the public instance is not the one linked to in the README (edit: fixed); it is the one described in this document:

https://storage.googleapis.com/nixdoc/nixery-layers.html

This strategy optimises for reducing the amount of data transferred. I recommend reading both this and the original post on buildLayeredImage!
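The gist of that strategy, as I read the linked document, is to give the most widely shared store paths their own layers (so they cache well across images) while staying under Docker's layer limit, and to merge the long tail into one layer. A toy sketch - the function name, budget default, and structure are my own invention, not Nixery's code:

```python
def group_layers(popularity: dict[str, int], budget: int = 100) -> list[list[str]]:
    """popularity maps a store path to how many images reference it.
    The most-shared paths each get a dedicated layer (maximising cache
    hits across images); everything else is merged into one tail layer."""
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    dedicated = [[p] for p in ranked[: budget - 1]]
    tail = ranked[budget - 1 :]
    return dedicated + ([tail] if tail else [])

layers = group_layers({"glibc": 900, "bash": 500, "htop": 3}, budget=3)
# glibc and bash each get their own layer; htop lands in the merged tail.
print(layers)
```

The payoff is that pulling a second Nixery image mostly hits layers Docker already has.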


When combining multiple packages, does the order matter? Does shell/git/htop give me the same thing as shell/htop/git?


(Author here)

The actual content layers (i.e. those containing Nix store paths) are going to be the same, but metadata layers (those containing the image name and the references to all layers) are currently going to be cache-busted.

I'll look into getting that fixed - as long as Docker doesn't mind seeing a different image name than it requested in the manifest it shouldn't be an issue to return the metadata in a stable order.


Since this is using Nix, I don't believe the order matters. Each package installs its own isolated dependencies; there is no sharing between packages.


Order doesn't matter to Nix, but images have ordered layers.

After digging into the blog post, it sounds like the layers are sorted in a particular order, which suggests that the order in which you specify the packages doesn't matter.

That said, I went ahead and pulled nixery.dev/shell/git/htop, and then pulled nixery.dev/shell/htop/git, and I actually got different results. I ran `docker inspect` on both images and diffed the results, and the layer lists were slightly different. I filed this as https://github.com/google/nixery/issues/38.


This is now fixed - the issue was basically that image manifests and config layers contain the image name, so if the name changes those layers will be replaced.

The solution is to sort the packages in the name and return identical manifests. Docker doesn't seem to care about attaching names from registries that weren't in the config layer.
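The sorting fix can be sketched in a few lines (my own illustration, not Nixery's actual code):

```python
def canonical_image_name(name: str) -> str:
    # Sort the package components so that every ordering of the same
    # package set maps to one canonical name - and therefore to one
    # manifest and one set of cached metadata layers.
    return "/".join(sorted(name.split("/")))

assert canonical_image_name("shell/git/htop") == canonical_image_name("shell/htop/git")
print(canonical_image_name("shell/htop/git"))
# -> git/htop/shell
```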

There's a separate issue where Docker will re-download layers it already has because it doesn't just use the content hash for caching, but those layers will be identical. Working on figuring this one out ...
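The frustrating part is that registry blobs are already content-addressed: a layer's digest is just the SHA-256 of its raw bytes, so identical layers share a digest no matter which image name they arrived under. A minimal sketch of that addressing scheme:

```python
import hashlib

def layer_digest(blob: bytes) -> str:
    # Registry blobs are content-addressed: the digest is simply
    # "sha256:" + hex SHA-256 of the raw bytes. Two identical layers
    # therefore always share a digest, whatever image they belong to.
    return "sha256:" + hashlib.sha256(blob).hexdigest()

assert layer_digest(b"layer bytes") == layer_digest(b"layer bytes")
print(layer_digest(b""))
# -> sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

So in principle the client has all the information it needs to skip the re-download; the bug is in how the caching keys on it.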


Neat. I like both Nix and containers.

Though I'm curious. Can someone explain

> This is not an officially supported Google project.

"20% time" I guess?


Google open source policy. Even if you work on something with your own equipment, on your own time, and it doesn't relate to your work, it probably still relates to something Google does. Therefore Google asserts ownership over all work their employees produce and thus they have these disclaimers.


I don’t work at google. Maybe I should also include that disclaimer. Just in case somebody thinks ol’ smudgy is pumping out official google projects.


It's possible sometimes to get Google to disclaim ownership of stuff that's genuinely unrelated. There is a process. (Or was when I left.)

On the other hand, if you want to work on open source on company time or use the code at work (which may have been part of the point of writing it), being able to do so is nice.


The process still exists, but it's discouraged for open-source projects. On the other hand, most open-source projects don't care who owns the copyright.


Sometimes big companies aren't paying because they want you to do a job; they're paying because they want you.


Neat - focusing on containers as a way to replace systems while still doing most things within Nix is a good takeover strategy.


Neat! Maybe out of place, but is there something similar that spins up a virtual machine rather than a Docker container?


That's actually something I've been discussing with a colleague - Nix is capable of building various other image types (netboot, qemu, etc.) and we could theoretically extend Nixery to provide such images as well.


(I know almost nothing about Nix, so take this question loosely)

Do you know if there's an easy way to generate a Docker container from a Nix script? That could be a very nice alternative to a Dockerfile.


Yes, you can do that using for example `buildLayeredImage`:

  # Default to the system's nixpkgs if no package set is passed in.
  { pkgs ? import <nixpkgs> {} }:

  with pkgs; dockerTools.buildLayeredImage {
    name = "curl-bash";
    # These packages (and their dependencies) become the image contents.
    contents = [ curl bashInteractive coreutils ];
  }
Putting that in a file and calling `nix-build` on it will give you a tarball that you can load with `docker load`.

Doing this has a lot of advantages over Dockerfiles; see this blog post for some of them: https://grahamc.com/blog/nix-and-layered-docker-images


Fantastic, thanks!



That can be made a runtime concern - if you use Kata Containers as a runtime for Docker, it'll use a VM instead of kernel namespaces.


A little OT, but is that broccoli in the Nixery logo? Anyone know why?


Why not? ;-)




