But for example, if I install the Python package "shapely", it will need a C package named GEOS as a shared library. How do I ensure that the version of GEOS on my system is the one shapely wants? By trial and error? And how does that work with environments, where I have different versions of packages in different places? It sounds a bit messy to me, compared to a solution where everything is managed by a single package manager.
You are describing two different problems: do you want a shapely package that runs on your system, or do you want to compile shapely against the GEOS on your system? In case 1 it is up to the package maintainer to package and ship a version of GEOS that works with your OS, Python version, and library version. If you look at the shapely page on PyPI you'll see something like 40 packages for each release, covering the most popular permutations of OS, Python version, and architecture. If a pre-built package exists that works on your system, then uv will find it and install it into your virtualenv, and everything should just work. This does mean you get a copy of the compiled libraries in each venv.
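Concretely (the wheel filename below is illustrative; check the shapely page on PyPI for the real list):

    $ uv venv
    $ uv pip install shapely
    # uv picks a prebuilt wheel whose filename tags match your platform, e.g.:
    #   shapely-2.0.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
    # cp312          -> CPython 3.12
    # manylinux_2_17 -> built against glibc 2.17, runs on anything newer
    # x86_64         -> CPU architecture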
If you want to build shapely against your own version of GEOS, then you fall outside of what uv does. In that case it downloads the build tool(s) specified by shapely (setuptools and Cython here) and hands over control to that tool to handle the actual compiling and building of the library. It is then up to the creator of the library to make sure the build is correctly defined, and up to you to make sure all the necessary compilers, headers, etc. are set up correctly.
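The build tools are declared in the library's pyproject.toml; a sketch of what that section looks like (the exact entries are shapely's to define, so treat these as illustrative):

    [build-system]
    # fetched into an isolated environment by uv/pip before building
    requires = ["setuptools", "cython", "numpy"]
    # the tool that gets handed control to compile and build the wheel
    build-backend = "setuptools.build_meta"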
In the first case, how does the package maintainer know which version of libc to use? It should use the one that my system uses (because I might also use other libraries that are provided by my system).
The libc versions to use when creating Python packages are standardised and documented in the manylinux PEPs, including how to name the resulting package to describe the libc version it was built against. Your local Python installation knows which libc version it is using and reports that when trying to install a binary package. If no compatible version is found, it tries to build from source. If you are doing something 'weird' that breaks this, you can always use the --no-binary flag to force a local build from source.
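You can check what your interpreter reports, and force the source-build fallback yourself (the version number shown is whatever your system has):

    $ python -c "import platform; print(platform.libc_ver())"
    ('glibc', '2.35')
    # wheels tagged manylinux_2_35 or lower count as compatible here

    $ pip install --no-binary :all: shapely   # ignore wheels, build from the sdist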
You could use a package manager that packages C, C++, Fortran, and Python packages, such as Spack: here's the py-shapely recipe [1] and here's geos [2]. Nix probably does something similar.
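If you go that route, one tool builds the whole stack; a hypothetical session (the spec syntax is Spack's, the GEOS version is just an example):

    $ spack install py-shapely ^geos@3.12   # build py-shapely against a specific geos
    $ spack load py-shapely                 # make it available in the current shell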
That's what I mean: in this case pip, uv, etc. are the wrong tool to use. You could, for example, use pixi and install all Python and non-Python dependencies through that; the conda-forge package of shapely will pull in geos as a dependency. Pixi also interoperates with uv as a library, so you can combine PyPI and conda-forge packages using one tool.
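A minimal sketch of that workflow (package names as they appear on conda-forge):

    $ pixi init myproject && cd myproject
    $ pixi add shapely          # conda-forge build, pulls in geos as a dependency
    $ pixi add --pypi requests  # PyPI deps can be mixed in, resolved via uv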
But conda-forge packages (just like PyPI packages, or really anything that does install-time dependency resolution) are untestable by design, so if you care about reliably tested packages you can take a look at nix or guix and install everything through that. The tradeoff with those is that they usually have fewer libraries available, and often only in one version (since every version has to be tested with every possible version of its dependencies, including transitive ones and the interpreter).
All of these tools have a concept similar to environments, so you can get the right version of GEOS for each of your projects.
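With nix, for example, a per-project environment can be a single shell.nix in the project directory (a sketch; attribute names follow nixpkgs conventions):

    # shell.nix -- `nix-shell` here drops you into an env with python, shapely,
    # and the exact geos that shapely package was built and tested against
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      packages = [
        (pkgs.python3.withPackages (ps: [ ps.shapely ]))
        pkgs.geos
      ];
    }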
Indeed, I'd want something where I have more control over how the binaries are built. I had some segfaults with conda in the past, and couldn't find where the problem was until I rebuilt everything from scratch manually and the problems went away.
Nix/guix sound interesting. But one of my systems is an NVIDIA Jetson, where I'm tied to the system's libc version (because of the CUDA libraries etc.), so building things is a bit trickier.
With uv (and pip) you can pass the --no-binary flag and it will download the source code and build all your dependencies, rather than downloading prebuilt binaries.
It should also respect any CFLAGS and LDFLAGS you set, but I haven't actually tested that with uv.
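Something like this, with the caveats above (the flag syntax is pip's; uv's pip interface accepts the same form, and whether the env vars are honoured depends on the package's build backend):

    # build everything from source, pointing the compiler at a local GEOS
    $ CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" \
        pip install --no-binary :all: shapely

    # the uv equivalent
    $ uv pip install --no-binary :all: shapely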
This type of situation is why I use Docker for pretty much all of my projects: single package managers are frequently not enough to bootstrap an entire project, and it’s really nice to have a central record of how everything needed was actually installed. It’s so much easier to deal with getting things running on different machines, or with things on a single machine that have conflicting dependencies.
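For the GEOS/shapely case above, that central record can be as small as this (a sketch; the base image and package name are Debian's, pin versions as you see fit):

    FROM python:3.12-slim
    # system-level C dependency via the OS package manager
    RUN apt-get update && apt-get install -y --no-install-recommends libgeos-dev \
        && rm -rf /var/lib/apt/lists/*
    # python-level dependency, built from source against the GEOS installed above
    RUN pip install --no-binary shapely shapely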
Docker is good for deployment, but devcontainer is nice for development. Devcontainer uses Docker under the hood. Both are also critically important for security isolation unless one is explicitly using jails.
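A minimal .devcontainer/devcontainer.json that reuses the project's own Dockerfile might look like this (a sketch; the format allows comments):

    {
      // build the dev environment from the same Dockerfile used for deployment
      "build": { "dockerfile": "../Dockerfile" },
      "customizations": {
        "vscode": { "extensions": ["ms-python.python"] }
      }
    }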