> Accessing InfiniBand and GPUs directly becomes a problem.
I use NVIDIA containers on HPC systems every day, and accessing NICs, doing RDMA to GPUs, etc. "just works" and performs as well as bare metal. Every time we upgrade our container, we verify the new one with a set of benchmarks against both the old one and bare metal.
> You don’t want to give indirect root access via docker group, too.
I don't know of any HPC center using Docker, though. It does not sound like a good idea because the Docker daemon runs as root.
> Why not just containerize your software and run the container on the HPC cluster?
Docker needs root access, which is a big no-no in multi-user environments.
Singularity/Apptainer was developed (with HPC in mind) so that non-admin users could run containerized workloads, and Spack supports creating such workloads:
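For instance, Spack can emit a container recipe directly from an environment with `spack containerize`. A minimal sketch (the package names and image versions here are illustrative):

```yaml
# spack.yaml -- a hypothetical environment with container settings
spack:
  specs:
    - openmpi
    - hdf5+mpi
  container:
    format: singularity      # or "docker"
    images:
      os: ubuntu:22.04
      spack: v0.21.0
```

Running `spack containerize > app.def` in that directory prints a Singularity definition file, which can then be built with Apptainer and run without a root daemon on the cluster.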
I have not personally used Spack, but I have used a similar product in HPC, and I have often wondered why something analogous did not take off for desktop computing. We are finally seeing the rise of Nix, but the experience is currently so bad that I will give it a decade before it starts gaining serious traction. Sooner only if some big platform latches onto it.
A lot of these selling points sound like features that Nix already has. What does Spack offer that Nix does not? Is it documentation? User experience? Wider support for computing libraries?
Co-author/spack guy/occasional frequenter of Nix fora here.
Aside from what p4ul said above, the main architectural differences between Nix and Spack are:
1. Spack has a dependency solver (the "concretizer"); Nix does not.
2. Spack packages are parameterized, and we solve for parameters like build options, compilers, virtual dependencies, etc., so the user can say what tweaks they want on the CLI or in an environment. To do the same with Nix packages you typically need to hack on nixpkgs. The goal of Spack is to be able to compose and swap libraries, compilers, and other options easily.
3. Spack has more support for external packages ("impure" packages, as the Nix people say). You can point it at an existing installation on your system to use as a dependency.
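Concretely, those parameters show up in Spack's spec syntax on the command line (the versions and variants below are illustrative, not prescriptive):

```shell
# pick a compiler with %, a variant with +/~, and a dependency with ^
spack install hdf5@1.14 +mpi %gcc@12 ^openmpi@4.1
# same package, different build: swap the compiler and MPI without editing any recipe
spack install hdf5@1.14 +mpi %clang ^mvapich2
# register an existing system installation as an external dependency
spack external find openssl
```

The concretizer fills in everything the user leaves unspecified, so both installs above can coexist under distinct hashes.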
Hashing installation directories and the store-based installation model described in the paper are things the two have in common.
I'm not very familiar with Nix, but one rather unique feature of Spack is the concept of combinatorial builds.
Suppose we have a big HPC cluster, and suppose also that this is a fairly heterogeneous cluster in terms of CPU architectures. We might have, for example, 3 different generations of Intel chips. We would also have several different compilers, and several versions of MPI (e.g., OpenMPI, MVAPICH, etc.).
Using Spack, it is trivially simple to build a piece of software with all possible combinations of compiler, architecture, and MPI.
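In a Spack environment, that combinatorial space can be written out as a spec matrix; a sketch (compiler and MPI choices are illustrative):

```yaml
# spack.yaml -- build every compiler x MPI combination of one package
spack:
  definitions:
    - compilers: ['%gcc@12', '%intel@2023']
    - mpis: [^openmpi, ^mvapich2]
  specs:
    - matrix:
        - [hdf5]
        - [$compilers]
        - [$mpis]
```

`spack concretize` then expands the matrix into the cross product, and `spack install` builds each resulting spec under its own hash.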
Spack also integrates very nicely with Lmod, so managing Spack installs through the Lmod module system is extremely convenient.
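With that integration enabled, each installed combination gets its own module file; assuming an Lmod setup, usage looks roughly like this (the module name shown is hypothetical and depends on the site's naming scheme):

```shell
# regenerate Lmod module files for everything Spack has installed
spack module lmod refresh --delete-tree -y
# users then pick a particular build the usual way
module avail hdf5
module load hdf5/1.14-gcc-12-openmpi
```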
I would also argue that the user experience with Spack is phenomenal, but that's just a personal preference.
This is a nice paper, and I do appreciate the authors' efforts. I fight with this stuff daily.
But...this caught my eye:
"While it's a pleasant thought experiment to imagine a world where we do not require backwards compatibility with established loaders, the state of the practice is that we must work within the limitations of ELF and the System V ABI model."
In light of the current HPC orthopraxis, I wonder if there might not be real value in seeing how far toward reality one could drag the thought experiment of not depending on that backwards compatibility, e.g., what sort of work abandoning the ELF/SysV ABI model might require.
Dijkstra said something once about the problems of the real world being the ones that you're left with when you refuse to apply their effective solutions.
I tried to remove the expectation that supercomputer applications would load their libraries dynamically, and that became one of the most hated aspects of that product line.
people really hate having their expectations violated - even if it's an expectation of a fork in the eye
Interesting! I'd never touched a Catamount/SUNMOS system; didn't know that static everything was an option there, although some of my friends did work on those. Do you suppose the friction was due to it not being how their local workstation/previous system worked?
I never worked with the XT3, but I’ve worked with others with similar limitations. Many of our projects prefer static linking, but at any given time they tend to have to run on several platforms. As of today, they have to run on at least two where one won’t load gpu components in libraries correctly if they are statically linked, and one where it won’t if they’re dynamically linked. Forcing projects into one model or the other makes the smaller requirements on other platforms much harder to deal with, even if the requirement aligns with the project’s preference.
yeah, mostly just 'what the hell'. I'm pretty sure it really didn't get in anyone's way - all the libraries were installed on the front end. it wasn't an option - we didn't support dynamic linking there
Another author here (Tom Scogland). We seriously talked about it, and are exploring some other loading models as part of subsequent work. The real trick is that this has all been neglected and effectively set in stone for so many decades that if we do manage to find something better, it will take convincing some of the most conservative open source maintainers and code owners in existence to update the loader and C standard library to roll it out in a way most users would be able to leverage. We're trying, and think we've found some really preferable alternatives and options, so here's hoping.
The stand-alone code generator (a statically linked executable written in Go with no dependencies outside the Go standard library) generates stand-alone POSIX C code for the neural net, requiring only gcc to compile.
> Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputing centers, where many users and application teams share common installations of software on clusters with exotic architectures, using libraries that do not have a standard ABI. Spack is non-destructive: installing a new version does not break existing installations, so many configurations can coexist on the same system.
* https://spack.readthedocs.io/en/latest/
> Spack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. Spack isn’t tied to a particular language; you can build a software stack in Python or R, link to libraries written in C, C++, or Fortran, and easily swap compilers or target specific microarchitectures.
* https://spack.io
Intro presentations from some HPC conferences:
* https://www.youtube.com/watch?v=edpgwyOD79E
* https://www.youtube.com/watch?v=DhUVbroMLJY
Very similar to (home)brew, MacPorts, etc., with more of a focus on HPC, so you can have multiple versions of a particular piece of software built with different compilers and linked against different library versions.