> Why hasn't the industry come up with an alternative?
We used to have that. Some companies still have the capability and know-how to build and run reliable infrastructure distributed across many hosting providers, as was done before "cloud" became the norm, but it's a case of "use it or lose it".
Once upon a time, I did "apt upgrade python-pip3" or something like this (was it just "python-pip"? Or maybe it was "apt upgrade"? It was a couple of years ago). Anyway, what I do remember is that it quite literally killed apt: invoking it with any command would dump a stack trace with an ImportError coming from pip. Apparently apt uses the system-wide pip internally, so if you touch it, everything breaks? Don't know, don't much care: it was just a VM, so I simply rolled back to the previous snapshot and forgot about the details.
Edit: Ah, apparently the steps to reproduce are: do "apt install python-pip3"; do "apt install python3.8"; when pip3 complains that it's outdated, update it with the command it itself suggests.
When the application is deployed to some server, all of its exact dependencies get installed there. The main reason the server and its configuration exist is to run the application, so the server adapts to the application's needs and gets the dependency versions the app prefers, instead of the application adapting to the server and making do with the libraries already there.
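The "server adapts to the app" model usually boils down to exact version pins. As a minimal sketch (the package names and versions here are purely illustrative, not from any real project), a deployment check might parse a requirements-style pin list and compare it against what's actually installed:

```python
# Sketch: verify installed package versions against exact "name==version"
# pins. All names/versions below are illustrative examples.

def parse_pins(lines):
    """Parse "name==version" lines into a {name: version} dict,
    skipping blanks and comments."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        pins[name.strip()] = version.strip()
    return pins

def check_pins(pins, installed):
    """Return (name, wanted, found) tuples for every pin that the
    installed set doesn't satisfy exactly."""
    mismatches = []
    for name, wanted in pins.items():
        found = installed.get(name)  # None if not installed at all
        if found != wanted:
            mismatches.append((name, wanted, found))
    return mismatches
```

For example, `check_pins(parse_pins(["requests==2.31.0"]), {"requests": "2.25.0"})` reports one mismatch, which is exactly the situation that "make do with the libraries already there" runs into.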
I'd say it's a heavy-handed approach to mitigating more fundamental issues with how Python packages are maintained. If everybody wants to pin different versions, then we have to install different versions of everything, which is what npm does, and I consider that heavier.
Again, it's all a question of point of view: what we see as a package-manager problem, and what causes us to keep reinventing package managers, might actually be a problem with how we maintain our packages ― my point of view being the latter. But I'm digressing.
When it comes to installing on "another machine", you don't know what Python they have, you don't know what libc they have, and so on. That is exactly what containers attempt to mitigate, so they seem like exactly the right tool for this problem.
I think it's a fundamental problem with managing dependencies. On one hand, any given application usually knows which versions of its dependencies it actually supports, so it makes sense for the application to simply bundle those in: in the most extreme cases that's static linking/binary embedding, or (more usually) putting the dependencies in subdirectories of the application's directory ― in cases where the application has a directory it lives in, instead of being thinly spread all over the system (e.g. over /bin, /etc, /usr/bin/, /usr/lib/, etc.).
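In Python, the "dependencies in a subdirectory" variant can be sketched as the application prepending its own vendor directory to sys.path, so its bundled copies shadow whatever the system provides. This is only a sketch; "vendor" and "bundled_dep" are illustrative names, and the demo fabricates a tiny vendored module in a temp directory:

```python
import os
import sys
import tempfile

def add_vendor_dir(app_dir, name="vendor"):
    """Prepend the app's vendor/ subdirectory to sys.path so bundled
    dependency copies win over system-wide ones."""
    vendor = os.path.join(app_dir, name)
    if vendor not in sys.path:
        sys.path.insert(0, vendor)  # prepend: bundled copies take priority
    return vendor

# Demo: fake an app directory with one vendored module, then import it.
app_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(app_dir, "vendor"))
with open(os.path.join(app_dir, "vendor", "bundled_dep.py"), "w") as f:
    f.write("VERSION = '1.2.3'\n")

add_vendor_dir(app_dir)
import bundled_dep  # resolves to the vendored copy
print(bundled_dep.VERSION)  # the app gets exactly the version it shipped
```

This is essentially what tools that "vendor" dependencies do, minus the bookkeeping.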
On the other hand, users/sysadmins sometimes want to force the application to use a different version of a dependency, so the application may provide for that by somehow referencing the dependency from the ambient environment. Usually that's done by looking for the dependency in a well-known/hard-coded path, or getting that path from a well-known/hard-coded env var, or from a config file (which you also have to get from somewhere), or from some other injection/locator mechanism ― there are thousands of those.
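The "ambient environment" lookup described above can be sketched as a tiny locator: env var first, then config file, then a well-known hard-coded default. Everything here (LIBFOO_PATH, the paths, the dependency name) is hypothetical, just to make the precedence order concrete:

```python
import os

# Well-known hard-coded fallback paths (hypothetical).
DEFAULT_PATHS = {"libfoo": "/usr/lib/libfoo"}

def locate_dependency(name, env=None, config=None):
    """Resolve a dependency's path from the ambient environment:
    1. an env var like LIBFOO_PATH, 2. a config mapping, 3. a
    hard-coded default. Returns None if nothing matches."""
    env = os.environ if env is None else env
    config = config or {}
    env_var = name.upper() + "_PATH"   # e.g. "libfoo" -> "LIBFOO_PATH"
    if env_var in env:                 # 1. env-var override wins
        return env[env_var]
    if name in config:                 # 2. then config-file override
        return config[name]
    return DEFAULT_PATHS.get(name)     # 3. then the hard-coded default
```

So `locate_dependency("libfoo", env={"LIBFOO_PATH": "/opt/foo"})` honors the sysadmin's override, while `locate_dependency("libfoo", env={})` falls through to the built-in path ― the same precedence game every one of those thousands of mechanisms plays.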
And all this stuff is bloody fractal: we have system-level packaging, then Python's own packaging on top of that, and then some particular Python application may decide to have a plugin-distribution system of sorts (I've seen that), which goes on top of all of that ― not to mention all the stuff happening in parallel (Qt sublibraries, GNOME modules, the npm ecosystem)... well, you get the picture. It reminds me of a "Turing tarpit", and I doubt that collapsing all this into a single layer of system-level packaging, with nothing on top, is really practical or even possible.
If you are using your roads to do GPS pathfinding, then you are indeed doing it wrong.
Do not do in infrastructure what can be done in code. You can't version control your infrastructure, but you can version control your code.
And with cloud instances and VMs providing abstractions that don't map 1:1 to the hardware they're running on, all your infrastructure becomes code to create reproducible deployments, or serverless execution.
You don't build your roads immediately before you start driving, and destroy them after you're done, and build only as many miles of road as you need each time you go for a ride. They're not a great analogy to what we do with computers.
These are some attitude goals for me. It's so easy to take things personally. Being able to take things constructively even when they might be personal is a great skill.
Simply having this surveillance tool come out the door shows that there is little or no consideration for people's privacy during the planning and development process.