


And which of these are "package systems supplied by your operating system"?


While I agree that the OS package should be used first and foremost, it just often doesn't have the required software. (Even after adding extra repos that it might support)

You are picking at irrelevant details. Those are all package managers commonly used in production environments. They often provide software that simply never gets packaged with the OS, and they likely always will, because they have more focused design goals.

A better way to argue this would be to point out specific ways to use these package managers better. For example: Bundler supports saving all required packages offline. This provides the opportunity to do a security review and keep the packages locally/internally, rather than always trusting whatever is on the Internet.
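Concretely, that Bundler workflow looks something like this (a sketch, assuming a Ruby project with a Gemfile; `bundle cache` was called `bundle package` in older Bundler versions):

```shell
# Download every declared gem (including git- and path-sourced ones)
# into vendor/cache for review:
bundle cache --all

# After auditing the cached .gem files, commit vendor/cache and
# install strictly from it, with no network access:
bundle install --local
```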


> While I agree that the OS package should be used first and foremost, it just often doesn't have the required software. (Even after adding extra repos that it might support)

OS-supplied packages don't grow on magical trees. If you don't have the necessary software in the official repositories (or if it's your own software), you can package it yourself. Deployment then becomes a breeze, and you save yourself the otherwise completely useless process of recompiling things over and over again.
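One low-effort way to do that packaging (a sketch, not the only option: fpm is a third-party tool, and the package name, version, dependency, and paths here are all made up):

```shell
# Wrap an already-built artifact into a .deb, declaring its runtime
# dependency so apt pulls the library in automatically on install.
fpm -s dir -t deb \
    -n mydaemon -v 1.0.0 \
    --depends libssl3 \
    ./build/mydaemon=/usr/local/bin/mydaemon
```

The same invocation with `-t rpm` produces an RPM, so one build step covers both families of distributions.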

> You are picking on the useless details.

Quite the contrary. Those details make an important difference.

> Those are all commonly used package managers in production environments. They often provide software that simply never gets packaged with the OS.

Apart from Homebrew, which is for workstations (hardly anybody runs macOS servers), none of these "package managers used in production environments" provide a complete way to rebuild your software. You can be fine for a while if you stay away from modules that are interfaces to C or C++ libraries, and from tools written in other languages (e.g. I have used Python's Sphinx to document Erlang daemons quite successfully), but once you hit that, deployment starts to be a PITA, because you'll need to remember to install all the required libraries, -dev packages, compilers, and what not.
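A familiar illustration of that pain (package names here are Debian's; adjust for your distro): pip can only build a C-backed module such as psycopg2 if the compiler and headers are already on the machine, and pip itself has no way to install them.

```shell
# Without these system packages, `pip install psycopg2` fails mid-build
# with a compiler or missing-header error:
apt-get install -y build-essential python3-dev libpq-dev

# Only now can pip compile the C extension:
pip install psycopg2
```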

On the other hand, a DEB or RPM with your artifacts will automatically pull in the required libraries, and its build dependencies give a dedicated, standard place for the necessary build tools.
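Concretely (Debian shown; RPM's BuildRequires works the same way, and `mypackage` is a placeholder): the build dependencies live in the package metadata, so one command prepares a clean build machine and another handles the target.

```shell
# Build machine: install everything declared in the package's
# Build-Depends field (compilers, -dev headers, doc tools, ...):
apt-get build-dep mypackage

# Target machine: runtime dependencies from Depends come along
# automatically:
apt-get install mypackage
```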

Your comment supports my opinion that today's programmers usually don't want to be bothered with learning things that have been working for sysadmins for twenty years already.


Gentoo's emerge and FreeBSD's ports are package managers that preferentially build from source. Emerge, at least, also accepts git sources, with tag and commit specifiers. I haven't used BSD on a prod system in a while, but I'd not be surprised if ports gained the same functionality; after all, whether you pull a tarball and use a checksum for integrity checking, or hand the task off to git, makes exactly no difference at all.
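For the curious, here is roughly what a git source looks like on the Gentoo side (a fragment of a hypothetical live ebuild; the variable names are real ones from Portage's git-r3 eclass, but the package and URL are made up):

```shell
# foo-9999.ebuild (fragment): fetch sources from git instead of a tarball
inherit git-r3
EGIT_REPO_URI="https://example.com/foo.git"
EGIT_COMMIT="v1.2.3"   # a tag or full commit hash; omit to track HEAD
```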

And I have yet to hear that ports and emerge are incapable of doing dependency resolution. They're battle-tested systems that work well in production environments.


First, portage and ports are OS-supplied mechanisms for installing software. They are nothing like pip, gems, or npm, which can only install things written in their respective languages and fail miserably for modules touching any library external to them (unless you manually ensure the presence of the library and the compiler toolchain, that is).

Second, ports and portage have support for, and networks of, mirror servers that keep copies of the software available through these packaging systems. It's trivial to switch if one of the mirrors goes down. For pip, gems, or npm, you need to plan ahead for such problems and deploy your own package cache, from what I know.
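For pip, that planning ahead usually means mirroring the pinned dependencies into an internal location (or running a caching proxy such as devpi) and then installing with the public index disabled. A sketch, with a made-up mirror path:

```shell
# Mirror the exact pinned dependencies into an internal directory:
pip download -r requirements.txt -d /srv/pip-mirror

# Later, install without ever contacting PyPI:
pip install --no-index --find-links /srv/pip-mirror -r requirements.txt
```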

Third, I was using Gentoo with one of these "battle tested systems that work well in production" for several years. It was doable, but it wasn't pretty: it could break software after an update to some random deep dependency (if that dependency was recompiled with different flags), and it generally required more work and attention than APT would, all for very little gain (if any gain at all). Oh, and I ended up working with binary packages after all; I just needed to compile them myself, instead of taking half an hour of downtime on a production MySQL because it needed to be compiled (which could fail, leaving me with no working database installed).


Gems can be used to install anything. They often build Java, Go, C, or C++, and I think I once heard of one building Rust; generally, anything in faster languages, to provide a faster implementation of a given gem's functionality.

I bring that up to highlight some of the wrong assumptions you make. You make several needless assumptions and use them to draw odd distinctions between things. I am not even sure what the point is anymore.

Likely any of these systems could be used in a variety of environments for a variety of purposes.


You can do anything, including Rust, yeah. You can also distribute precompiled stuff.


The ones that actually matter are more like apt or yum.



