
Everything boils down to dependency management. It doesn't matter whether you're running a distribution, a container system, a VM farm, a herd of machines or anything that loves static linking: a sysadmin needs to be able to ask "what packages are in use right now, what is their provenance, and how can I roll out changes to that with the least amount of tsuris?"
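As a sketch, the sysadmin's "what is installed and where did it come from" question maps to the standard package-manager queries on the common families (flags vary a bit by version; the package name is a placeholder):

```shell
# Debian/Ubuntu: list installed packages, then check a package's originating repo
dpkg-query -W -f='${Package} ${Version}\n'
apt-cache policy somepackage      # shows candidate/installed versions per repo

# RHEL/CentOS/Fedora: the same questions via rpm/yum
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n'
yum list installed

# Arch: explicitly installed vs. pulled-in-as-dependency
pacman -Qe                        # explicitly installed packages
pacman -Qi somepackage            # provenance, dependencies, install reason
```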

(And the security officer needs to be able to ask a sysadmin to compile that data. And the auditor needs to be able to verify the chain that produced that data. And the poor dev trying to fix a bug needs to be able to use that data to build a version that replicates reported issues, so they can show that they can also fix the issue. And so on. And so forth.)

Just as interpreters usually beat compilers for speed of debugging, a system designed to properly manage and modularize dependencies will be faster to debug than an equivalent system that just builds the final target as fast as possible.



I love Arch. It seems to find a sweet spot between the boring business world of yum-based distros and the wild, wild west nature of gentoo.

Just the right thing for the academic pursuits.

Mileage is wildly variable, of course.


I have to say, despite running Gentoo for more years than I can remember, the only time I've ever had anything break on me was when I was first starting out. I had a mask conflict causing issues during an emerge/package update, and it was only because I was doing something exceptionally stupid. Note that this didn't actually cause me any problems; it just made me change how I was doing something to the correct way of doing it.

Outside of that, however, Gentoo has been the smoothest distro I've used in a long time. I started out on Arch, and have also tried RHEL and Debian, but nothing has been as cooperative as Gentoo.

Portage is really quite a slick packaging system for what it is. Every time I've had an update that could be potentially risky, I've just followed the upgrade instructions (eselect news) that Portage gives me and within 15 seconds any potential issue has been resolved.
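For reference, that news workflow is just a couple of commands (a sketch; the item number is illustrative):

```shell
# List the news items Portage flagged during the update
eselect news list

# Read a specific item (or everything unread) for the manual steps
eselect news read 1
eselect news read new

# Once handled, drop the read items
eselect news purge
```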

Mind you, like you said, this is very much a "mileage may vary" scenario; however, after switching to Gentoo it really is hard to want to go back to other distros when the system is so low-maintenance after the initial install.

TL;DR Gentoo isn't really as much of a wild west distro as people make it out to be. It is bespoke to be sure, but there is very good documentation, and during any update Portage provides a very clean and straightforward way to handle the rare manual intervention, as well as integrating new package and system configurations into your own.

Maybe this was out of place but I always see people portraying Gentoo as either a toy for experimentation or too crazy, bleeding edge, and unstable to use for real purposes.


Maybe I was eating too much learning curve or not gentoo-ing responsibly, but the Kernel and X11 were perpetual sources of woe.


The initial setup in Gentoo is definitely the hardest part but once stuff works, it seems to work until the end of time.

With the kernel, manual configuration is more or less the following:

1. emerge sys-kernel/gentoo-sources

2. Configure the kernel with "make menuconfig". Manual configuration is just turning on the stuff you need and turning off the stuff you don't. If you don't know what it is, either look it up until you do or leave it as default.

3. "make && make modules_install && make install".

4. Add any kernel modules to /etc/modules-load.d/

If you don't want to configure the kernel yourself (seriously, I would recommend learning to do this; it is surprisingly easy, and once you've done it once you just tweak your config as necessary), you can use genkernel to automate the process. Running genkernel should work in almost every case unless you have some kind of esoteric hardware you need to support. I think you just need to run "genkernel all" nowadays to do a full genkernel config, build, and install.
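The steps above, as a shell sketch (the module name is a hypothetical placeholder; adjust for your bootloader and /boot setup):

```shell
# 1. Fetch the kernel sources
emerge --ask sys-kernel/gentoo-sources

# Point /usr/src/linux at the new sources
eselect kernel list
eselect kernel set 1

# 2. Configure: turn on what you need, leave the rest default
cd /usr/src/linux
make menuconfig

# 3. Build and install the kernel and modules
make && make modules_install && make install

# 4. Autoload any needed modules at boot (hypothetical module name)
echo 'some_module' > /etc/modules-load.d/local.conf

# Or skip all of the above and let genkernel do it:
genkernel all
```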

As for X11, I definitely remember fighting with it a bit initially but nowadays it is supposed to essentially work out of the box more or less like this:

1. Add your GPU and input devices to /etc/portage/make.conf with "VIDEO_CARDS=" and "INPUT_DEVICES=", and add X to your USE flags.

2. Install xorg-server and xorg-drivers, and test it with startx. Everything should work out of the box on most hardware.

3. Install your desktop/display manager of choice via Portage and follow the wiki steps for that manager. For most of them this essentially just involves enabling the service in OpenRC or systemd.
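Those three steps look roughly like this as a sketch (the driver values and desktop choice are examples, not the only options):

```shell
# 1. In /etc/portage/make.conf (example values for an AMD GPU + libinput):
#   VIDEO_CARDS="amdgpu radeonsi"
#   INPUT_DEVICES="libinput"
#   USE="${USE} X"

# 2. Install the server and drivers, then test
emerge --ask x11-base/xorg-server x11-base/xorg-drivers
startx

# 3. Install a desktop and enable its display manager (OpenRC shown;
#    the service name varies by setup)
emerge --ask kde-plasma/plasma-meta
rc-update add display-manager default
```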

No idea about wayland but that's mostly just because I haven't been able to be bothered with moving away from X11 for any good reason.


But then the temptation to get weird with the keywords sets in. . .


You can't really compare Arch to Gentoo. Arch's goal is to keep it simple, which they do via an extremely robust but easy-to-understand package format. It takes almost no time to create a new package and submit it to the AUR. I can get the latest version of almost any piece of software I want with Arch between their standard package repo and the AUR. I can upload a small PKGBUILD easy-to-understand text file and create a package from it. Yum on the other hand is tedious to maintain. That is why RHEL and CentOS are often years behind stable packages in their repos, and using a two-year old kernel.
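For a sense of scale, a minimal PKGBUILD really is just a short text file. This is a sketch for a hypothetical package; the name, URL, and checksum are placeholders:

```shell
# PKGBUILD — hypothetical example package
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="Hypothetical example package"
arch=('x86_64')
url="https://example.com/hello-example"
license=('MIT')
source=("$url/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')    # a real package pins an actual checksum here

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

`makepkg -si` in the same directory builds and installs it locally; submitting to the AUR is essentially pushing this file to a package repo.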


As someone who uses Arch as a daily driver and has spent plenty of time messing around with rpmbuild, I disagree with your claim that yum (RPMs) is tedious to maintain. RHEL and CentOS are behind on software releases because they freeze functionality when they make a release, and updates within a release will not jump to the latest major version of a package. They tediously backport bug fixes and security patches to keep the old version up to date, and only move to the latest and greatest when a new release comes around years later.

Arch is just a rolling release; it's basically the equivalent of running Rawhide but waiting a couple of weeks to install updates. Red Hat spends more effort supporting the old versions they ship than it would take to just follow upstream. The whole point of an enterprise distro with long-term releases is that they don't do any of those major updates to add features between releases. I can install CentOS and not worry that an update is going to suddenly change how a package works. Just look at https://www.archlinux.org/news/; most of those almost bi-weekly news entries are stuff I don't need to sweat on CentOS, because they won't make any changes like that between releases.


RHEL and CentOS make great sense in the office, where we do NOT want to waste time faffing about with the configuration. All of that is moved to the package manager.

You will NOT use the walrus operator in python until the distro supports it.

When I want to live closer to the edge, and get the latest QGIS going, Arch and AUR are there for me.


> Yum on the other hand is tedious to maintain. That is why RHEL and CentOS are often years behind stable packages in their repos, and using a two-year old kernel.

No it's not. Red Hat is quite capable of packaging any version they want, but the whole point of RHEL is stability (read: no changes for years). Fedora Rawhide is bleeding edge on RPMs, if that's your speed.


Absolutely. RHEL and CentOS are "years behind" because it is their mission to supply a stable distro; stability in this context meaning in the ABI at foremost. You can install a kernel driver via RPM that is years old, and as long as it's targeted to your RHEL version, it's supposed to just work. Same with every other application you might install.

Fedora provides the rapid-release as well as rolling-release equivalent of RHEL; Fedora is the upstream for RHEL after all.

Debian does much the same with the split between stable and unstable.


I use openSUSE Tumbleweed with Krypton and I always get up-to-date packages; my Plasma/KDE application setup is in sync with git master. So I don't think the reason RHEL and CentOS are behind by years is the RPM format.


Have you taken a look at the AUR? I guarantee there are several packages there that you won't find on openSUSE, and building and maintaining those packages yourself is significantly more challenging there, both initially and for ongoing updates, than with Arch.


I think a good step in this direction is linuxkit[0] -- it's one of the most exciting projects IMO with regards to improving machine build processes (especially if you're in the build-a-VM/AMI world still).

If we can scan container filesystems for dependencies, or choose languages that let us build containers minimally enough that it's only a binary + static libs, we can start approaching systems that have dependency chains almost fully cataloged.
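As an illustration of that cataloging step, an SBOM scanner can walk a container filesystem and emit the dependency list. Syft is one such tool (my choice here, not something the parent comment names):

```shell
# Generate a software bill of materials for a container image
syft alpine:3.18 -o json > sbom.json

# Or scan an unpacked rootfs directory directly
syft dir:/path/to/rootfs -o table
```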

[0]: https://github.com/linuxkit/linuxkit



