
I was thinking about something like this for build systems. Everything in Docker is regular Linux. I get why Docker is so big for its main use case, cloud deployments, but what I actually want from it is a much smaller piece of it: hermetic, reproducible builds that produce the same binary on the same kernel release, with no dependency hell because the dependencies are all part of the same build. (I know Bazel has a lot of this already.) The Docker solution of pulling in an entire distro is overkill, and it doesn't even solve the problem because dependencies are just downloaded from the package manager.



If that's your concern then Nix might be your thing, because that's exactly what it targets. It approaches the problem slightly differently: instead of generating an image, it describes all of your project's dependencies down to glibc. Of course you're not expected to define every dependency yourself, so you (like most people) will use nixpkgs. As long as nixpkgs is pinned to a specific version (it's literally a GitHub repo) you get an identical result every time.
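For example, pinning can be as simple as building against a fixed nixpkgs snapshot (rough sketch; the commit hash is a placeholder):

    # build the "hello" package against one specific nixpkgs revision,
    # instead of whatever the local channel currently points at
    nix-build '<nixpkgs>' -A hello \
      -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz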

Once you have that, you can deploy it however you like. Nixpkgs has functions to, for example, generate a Docker image that contains only your app plus its essential dependencies. You could deploy it with the Nix package manager the way you'd install any typical app in the OS. You could also describe a NixOS configuration that includes your application and generate a machine image from that.
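Very roughly, with nixpkgs' dockerTools (a sketch only; "hello" stands in for your own package):

    # build an image containing just the app and its runtime closure, then load it
    nix-build -E 'with import <nixpkgs> {};
      dockerTools.buildImage {
        name = "hello-image";
        config.Cmd = [ "${hello}/bin/hello" ];
      }'
    docker load < result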

The community also came up with other generators [1].

[1] https://github.com/nix-community/nixos-generators


> it doesn't even solve the problem because dependencies are just downloaded from the package manager.

The advantage of Docker is that you can verify the container works locally as part of the build process rather than finding out it is broken due to some missing dep after a deployment. If you can verify that the image works then the mechanism for fetching the deps can be as scrappy as you like. Docker moves the dependency challenge from deployment-time to build-time.
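In CI that can be as simple as something like this (the image name, registry, and smoke-test command are made up; substitute whatever proves your app actually starts):

    docker build -t registry.example.com/myapp:candidate .
    # a missing dependency fails the pipeline here, not after deployment
    docker run --rm registry.example.com/myapp:candidate myapp --version
    docker push registry.example.com/myapp:candidate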


Does container mean something different to y’all than it does to me?

I ask because I read your comment as saying “the advantage of Docker is that it uses (explanation of what containers are)” and the parent comment as saying “all I want from Docker is (explanation of what containers are)” and I am confused why (a) y’all are not just saying “containers” but rather “the part of docker that packages up my network of scripts so I can think about it like a statically linked binary” and (b) why you think this is a competitive advantage over other things you might have instead recommended here (Buildah, Makisu, BuildKit, img, Bazel, FTL, Ansible Container, Metaparticle... I am sure there are at least a dozen) to satisfy the parent comment’s needs.

Is there really any container ecosystem which has write-an-image-but-you-can’t-run-it-locally semantics? How do you finally run that image?


Docker is too general, too much of a Swiss army knife for this particular problem. The problem I am talking about is where a C++ program has all of its dependencies vendored into the source tree. When you run Make, everything, including the dependencies, builds at the same time. All you need is a chroot, namespaces, cgroups, btrfs, squashfs--plain old Linux APIs--to make sure the compiler has a consistent view of the system. Assuming the compiler and filesystem are well behaved (e.g., don't insert timestamps), you should be able to take a consistent sha256sum of the build. And maybe even ZIP it up like a JAR and pass around a lightweight, source-only file that can compile and run (without a network connection) on other computers with the same kernel version.
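Something in that spirit, as a very rough sketch (rootfs/ and the paths are placeholders for a directory with the vendored sources and toolchain unpacked into it):

    # build inside fresh namespaces and a chroot so the compiler only sees rootfs/
    unshare --mount --pid --net --ipc --uts --fork --map-root-user \
      chroot rootfs /bin/sh -c 'cd /src && make'
    # if the toolchain is well behaved, this hash is stable across machines
    sha256sum rootfs/src/my_program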

Again, Bazel is basically this already. But it would be nice to have something like OP's tool to integrate into other build systems.

I could just make a Dockerfile and say that's my build system. But then I'm stuck with Docker. The only way to run my program would be through Docker. Docker doesn't have a monopoly on the idea of a fully-realized chroot.


You might be interested in LXC if you haven't seen it already. It is also a big tool, but it is smaller than Docker.


For some scenarios, most (all?) of them have write-an-image-but-you-can’t-run-it-locally semantics.

My build server is x64, but target output is ARM. Can't exactly just run that locally super easily. Perhaps somebody has created a container runtime that will detect this, and automatically spin up a qemu container, running an arm host image, and communicate my container run request (and image) to that emulated system, but I haven't heard of that feature. (Not that I actually looked for it.)
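(If it does exist, I'd guess it looks less like an emulated ARM host and more like user-mode qemu registered through binfmt_misc, roughly along these lines, with the image name as a placeholder:

    # register qemu binfmt handlers on the x86-64 host, then run an ARM image
    docker run --privileged --rm tonistiigi/binfmt --install arm64
    docker run --rm --platform linux/arm64 my-arm-image

but I haven't tried it.)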


At my current company we deploy almost all code as Docker images (with the exception of Lambda functions). Having talked to multiple developers: no one uses Docker for local development, except maybe to spin up another service that the app interacts with, and even that isn't preferred, mainly because unless you're running Linux, Docker is quite expensive on resources since it runs under a VM.


"Everything in Docker is regular Linux" is a bit of a misleading statement IMO. You aren't required to pull an entire big distro like Ubuntu and install deps from a package manager. Are you familiar with scratch builds? You can create an image that is basically just a single executable binary with a few mounts handled for you behind the scenes so that chroot works.

https://www.mgasch.com/post/scratch/

Typically a minimal Alpine distro image will allow you to pull in deps however you want (either via package manager, manual download, or copy), run a build, and then copy only the artifact over to a scratch image for a final image that's only a few MB in size.

https://medium.com/@chemidy/create-the-smallest-and-secured-...
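The pattern from that second link, roughly (a sketch; it assumes a statically linked Go binary as the example app, and the names are placeholders):

    # build with: docker build -t app:scratch .
    FROM golang:1.22-alpine AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /out/app .
    # final stage: nothing but the binary
    FROM scratch
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]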



