> You can just download a bunch of packages in parallel, extract them in parallel, and finally "merge" the prefixes to get those combined bin/ and lib/ dirs.
docker pull achieves the same result: layers are fetched in parallel and extracted with pigz (parallel gzip). It just applies them in a pre-defined order, which does not hurt performance, but it isn't useful either when nixery is used.
The point is not parallelization; it's that nixery has to optimize for cache reuse, which is an artificial problem created by docker.
If you have two layers, each installing an individual package like /nix/store/x and /nix/store/y, stacking them as [x, y] and [y, x] would result in the same docker image contents, but docker will generate different hashes.
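To make that concrete, here is a minimal sketch (in Python) of how layer ordering leaks into the identifiers docker derives, following the chain-ID rule from the OCI image spec; the diff IDs below are made-up placeholders standing in for two single-package layers:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

def chain_ids(diff_ids):
    """Fold layer diff IDs into chain IDs, per the OCI image spec:
      ChainID(L1)     = DiffID(L1)
      ChainID(L1..Ln) = SHA256(ChainID(L1..Ln-1) + " " + DiffID(Ln))
    Each step folds in everything below it, so order matters.
    """
    chain = []
    for diff_id in diff_ids:
        if not chain:
            chain.append(diff_id)
        else:
            chain.append(sha256_hex(f"{chain[-1]} {diff_id}".encode()))
    return chain

# Made-up diff IDs standing in for two single-package layers.
x = sha256_hex(b"tar of /nix/store/x")
y = sha256_hex(b"tar of /nix/store/y")

print(chain_ids([x, y]))
print(chain_ids([y, x]))  # same files in the final image, different chain IDs
```

The layer blobs themselves are identical either way; it is only the derived identifiers (image ID, chain IDs), and therefore cache reuse, that change with the stacking order.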
> If you have two layers, each installing an individual package like /nix/store/x and /nix/store/y, stacking them as [x, y] and [y, x] would result in the same docker image contents
This is an assumption that holds for nix, but not for most package managers. Where it does hold, Dockerfiles can achieve similar results using multiple stages, but you would probably need a pre-processor to generate a stage for each package (see the sketch below). Something like an `INCLUDE` directive could help too: https://github.com/moby/moby/issues/3378.
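A toy sketch of what such a pre-processor could emit; the base image, the `/pkg` prefix, and the install command are placeholders, not a real tool. It expands a package list into one build stage per package, with the final stage copying each stage's output into its own layer:

```python
#!/usr/bin/env python3
"""Toy pre-processor: expand a package list into a multi-stage Dockerfile.

The base image, the per-package install command, and the /pkg prefix are
placeholders; a real tool would substitute whatever its package manager needs.
"""

PACKAGES = ["curl", "jq", "git"]
BASE = "alpine:3.19"

def render(packages):
    out = []
    for pkg in packages:
        # One stage per package, installing it into an isolated prefix so the
        # final COPY picks up only that package's files.
        out += [
            f"FROM {BASE} AS build-{pkg}",
            f"RUN install-into-prefix {pkg} /pkg  # placeholder command",
            "",
        ]
    out.append(f"FROM {BASE}")
    for pkg in packages:
        # Each COPY --from adds one layer holding just that package's files;
        # this only works cleanly if packages are self-contained, which is
        # the nix assumption discussed above.
        out.append(f"COPY --from=build-{pkg} /pkg /")
    return "\n".join(out)

if __name__ == "__main__":
    print(render(PACKAGES))
```

An `INCLUDE` directive would presumably make the same pattern expressible without a generation step, by pulling in a Dockerfile fragment per package.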