I have been using nix for a while to build binary packages for crashcart[1] and I really love the premise of isolated source-based builds.
Unfortunately, over time I've become quite frustrated with the pull-everything-from-the-internet model. If you are building packages from scratch each time instead of pulling down cached versions using the nix command, the build breaks quite often.
Mostly it is silly stuff like a source package disappearing from the net. A particularly egregious offender is the named.root[2] file being updated upstream, which causes everything to fail to build until the sha is updated in the build definition.
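To illustrate the failure mode: a fixed-output fetch pins an expected content hash, so any upstream change to the file makes the check fail. Here's a minimal simulation of that check with sha256sum (this is obviously not Nix's actual code, just the shape of the problem):

```shell
# Simulate a fixed-output fetch check: the build pins the expected sha256
# of named.root, then upstream silently publishes a new version.
expected=$(printf 'old named.root contents\n' | sha256sum | cut -d' ' -f1)
actual=$(printf 'new named.root contents\n' | sha256sum | cut -d' ' -f1)

if [ "$actual" != "$expected" ]; then
  # This is the point where the real build dies until someone
  # updates the pinned sha in the build definition.
  echo "hash mismatch: expected $expected, got $actual"
fi
```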
I don't know that there is a great solution for this problem. Maybe there needs to be a CI system that does a from-scratch build of all of the packages every day and automatically files bugs. Alternatively, a cache of sources, and not just built packages, could ease the pain. This issue probably affects very few nix users, but it has demoted my love of nix down to "still like it, but somewhat annoyed by it".
Regarding disappearing sources: Nix offers a content-addressed mirror for sources downloaded by the Hydra CI system. As a random example, here is the latest Chromium source tarball:
So disappearing sources is not a huge problem in my experience. Obviously if you have package declarations outside of Nixpkgs proper things are different.
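For what it's worth, the mirror's layout, as I understand it, is content-addressed by hash, so a source URL can be reconstructed from nothing but its sha256 (the helper below and the sample hash are just for illustration):

```shell
# Hypothetical helper: build a content-addressed source URL from a sha256,
# following the tarballs.nixos.org layout as I understand it.
mirror_url() {
  printf 'https://tarballs.nixos.org/sha256/%s\n' "$1"
}

# Sample invocation with the sha256 of the empty string:
mirror_url "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
```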
This problem is also something the Software Heritage project[0] aims to solve, but I don't think they have a good API yet.
Yeah I'm aware of the build mirror, but crashcart builds things under a different prefix so it must build things from source each time. I realize this doesn't affect most users (which is why it takes a while for disappearing sources to be found and fixed). A content-addressed mirror for sources as well would solve the problem nicely.
It might be related to using a different Nix prefix for your builds, which is not especially well supported. Just curious: why are you using a different prefix?
The point of crashcart is to be able to side load a filesystem with utilities into a running container. It is very important that the location we pick doesn't conflict with any path already in the container. If we used /nix as the mount path it would conflict with any container that uses nix. In order to prevent this (probably rare) conflict, we build our utilities in /dev/crashcart/ instead.
Hmmmm, interesting... That does seem like a pretty good reason, though maybe you could bind mount over /nix? That should be fine from a Nix perspective, not sure how well it works with container technologies.
We could bind mount over /nix or /nix/store, but then any existing nix packages from the container would not be available. The whole point is to have the whole container file system available along with the utilities. We could alternatively find each package that we need for our debugging utilities and bind mount each directory from the store individually. That would work, since store paths are unique, but it means potentially hundreds of bind mounts and is an orchestration nightmare.
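To make the orchestration problem concrete, here is a dry-run sketch of the per-path approach. The store paths, the target rootfs, and the two-entry closure are all made up; a real run would take the list from something like `nix-store -qR` and would need root to actually mount:

```shell
# Dry-run sketch: one bind mount per store path in the tools' closure.
# In reality the closure easily runs to hundreds of paths.
target=/proc/1234/root                      # hypothetical container rootfs
closure="/nix/store/aaaa-bash-5.1 /nix/store/bbbb-coreutils-9.0"

for p in $closure; do
  echo mount --bind "$p" "$target$p"        # echo instead of mounting
done
```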
We didn't look at using overlay. It might be possible, although it would introduce a dependency on kernel version and/or modules. A custom fuse filesystem might be an option here as well, but fuse in containers is a bit sketchy at the moment.
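If overlay were viable, the mount would look roughly like this (a dry run; the paths are made up, and a real mount needs root plus overlayfs support in the container's kernel):

```shell
# Dry-run sketch: merge the container's /nix/store with a sideloaded store
# as read-only lower layers (no upperdir, so the merged view is read-only).
echo mount -t overlay overlay \
  -o lowerdir=/dev/crashcart/store:/nix/store \
  /merged/store
```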
This looks like a neat feature. Reading up on it briefly, it looks like it changes the mount namespace / chroots when running the binary in order to accomplish this. Unfortunately that defeats the purpose of crashcart, since the whole idea is to have access to the container file system while running your sideloaded binaries.
Unless there is some magic that I'm missing, I don't think this option helps for our use case.
What about running your own caching HTTP proxy for your build's external dependencies or else pulling static copies of these dependencies into your own repo? It seems like the problem isn't building everything from source but rather that the sources of truth for the inputs are unreliable. You'd have the same problem trying to build e.g. Debian from scratch if you couldn't reliably pull down all the sources for things.
A caching http proxy would help me build the same things reliably, but it wouldn't help anyone else who cloned my repository and attempted to do their own build unless I also gave them access to the proxy. And it hides the fact that the standard from-scratch build doesn't actually work. I think the binary cache of built packages is why it takes so long for some of these issues to be discovered.
The difference with Debian (and other linuxes) is that the source code for the build is also provided. The upstream source might be from a random place on the net, but Debian provides a source package that you can use to rebuild the binary package. Maybe the solution is for nix/nixos to provide something similar.
Generally everything you build or download is cached, but only on the machine you built it on.
That's why you usually want a collective cache inside your org additionally, because not all machines will have everything yet.
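As far as I know, such an org-wide cache is just an extra substituter in nix.conf, something like this (the example.org URL and its key are placeholders):

```
# /etc/nix/nix.conf -- sketch; cache.example.org and its key are placeholders
substituters = https://cache.example.org https://cache.nixos.org
trusted-public-keys = cache.example.org-1:REPLACE_WITH_KEY
```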
[1]: https://github.com/oracle/crashcart [2]: https://www.internic.net/domain/named.root