Hacker News
Arch Linux turns 20: Small, simple, great documentation (theregister.com)
275 points by mxschumacher 8 days ago | 149 comments





> The installation process, and the documentation behind it, lead to the third virtue: a complete installation tends to be very small and simple, because you only install the bits you need. If you don't know what bits you need, the documentation will help you to work it out, and the result is something that is both fairly minimal and that, with luck, you understand. You know what's in there because you installed it.

I feel this gets to the core of why I like Arch so much. I’m a Linux novice, so for a long time I ran Ubuntu VMs when I needed to do stuff on Linux (this being before WSL). It worked well enough, but I never really felt that I properly knew what I was doing.

Then I tried installing Arch in a VM… and it took me several days and several attempts, but when I finally got it working I felt, for the first time ever, like I actually understood the system I was using. Now I have a webserver running Arch, and only a week or two ago installed Arch on an old PC to see if I could get a desktop working.

Of course, Arch is not easy, especially for a non-expert such as myself. Sometimes I have no idea how to solve a problem, or even what kind of software I need in the first place. For this reason, I’m planning to install Debian instead on the new laptop I’ve ordered (to replace my ~10 year old machine running Windows), in the hopes that it might have more stuff working out of the box. Still, I’d say that trying out Arch has immeasurably improved my knowledge, not just of Linux but of the underlying concepts behind modern computing.

(Oh, and the documentation’s amazing too!)


> Still, I’d say that trying out Arch has immeasurably improved my knowledge, not just of Linux but of the underlying concepts behind modern computing.

I love hearing that, because it was a goal of Arch from the very beginning: to stop fearing the commandline.

And I was the first alpha tester, in that I wanted to learn more about how the sausage was actually made, so to speak. I was comfortable using things like Linuxconf at the time, but its beginner-friendly veneer meant that I didn't really know what to do if it _wasn't_ there.

After tinkering with Crux and PLD for a bit, I wanted to go deeper and start from nothing. So I loaded up the LFS[1] docs and just started typing in the shell stanzas to start building my compilation toolchain. In an effort to DRY as much as possible, the work also got placed into shell scripts, which eventually became PKGBUILD modules.

I started having way too much fun with it, so I put up the world's ugliest webpage[2] to share my triumphs, and a couple people found it, somehow. That begat the immediate need for documentation, which eventually brought Arch into the forefront. I can't recall who spearheaded the Arch wiki, but we owe them a great debt, because it has become a valuable resource for Linux users, and not only the Arch users.

Arch is my happiest accident.

ps: btw, I run Arch (is this still a meme?)

[1] https://www.linuxfromscratch.org/

[2] https://web.archive.org/web/20020328043401/http://www.archli...


Thanks for starting it! I dabbled with Ubuntu, Debian and SUSE in high school and would occasionally see things from the Arch Wiki when troubleshooting problems. I learned about its philosophy at a high school computing competition, when my team captain was raving about one of our opponents using Arch.

Started a CS degree the following year, and I decided I wanted to take Linux more seriously, so I wiped Windows off my laptop and threw Arch on it to force myself to learn, and it's been my daily driver now for the last decade!


You can install Debian the same way you installed Arch (manual, CLI) via debootstrap. It's not really advertised; Debian really wants folks to use its installer.

I built a tool that does this, you can look through the code and see how I do it--it's just bash spaghetti. Download a Debian live ISO (or use my tool to create an Arch-like minimal live USB) and you can install it however you want.

https://github.com/candiddev/forge
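For the curious, the core of a debootstrap install is only a few commands. A rough sketch, assuming a Debian live environment with the target partition already formatted and mounted at /mnt (device names and the suite are illustrative):

```shell
# Install a minimal Debian "stable" into /mnt (assumed mount point):
debootstrap stable /mnt http://deb.debian.org/debian

# Bind-mount the virtual filesystems and enter the new system,
# much like arch-chroot does for you on Arch:
for fs in dev proc sys; do mount --rbind "/$fs" "/mnt/$fs"; done
chroot /mnt /bin/bash

# Inside the chroot you still need a kernel, a bootloader, a root
# password, an fstab, networking, and so on:
apt-get install linux-image-amd64 grub-pc
passwd
```

This is the same "assemble it yourself" shape as an Arch install, just with Debian packages.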


But isn’t that the whole point of Arch? It is advertised and promoted, and therefore it’s well documented with a lot of support.

As a novice, if I am stuck somewhere, the odds that I find the answer in the Arch Wiki or can ask an Arch enthusiast and get an answer are orders of magnitude higher than with the equivalent sources for Debian.


There is also preseed[1]; we use it to install our servers/VMs. It is a bit quirky though, especially the partitioning.

* [1] https://wiki.debian.org/DebianInstaller/Preseed
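For reference, a preseed file is just a list of debconf answers fed to the installer; a tiny illustrative fragment (values are examples only, and the partitioning keys are indeed the quirky part):

```
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/get_hostname string example-host
# Partitioning method: "regular", "lvm" or "crypto"
d-i partman-auto/method string regular
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/confirm boolean true
```

The installer is booted with a kernel parameter pointing at this file, after which it runs unattended.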


> Then I tried installing Arch in a VM… and it took me several days and several attempts, but when I finally got it working I felt, for the first time ever, like I actually understood the system I was using.

Honest question: why days?

I have installed Arch multiple times in the past decade and I don’t remember anything exceptionally out of the ordinary. You just follow the step-by-step instructions and you are good to go.

It’s all fairly standard: boot a live CD, get internet, format the disk, mount it, use the script Arch provides to install the base system and another to chroot into it, and it’s vanilla Linux config from there.
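For readers who haven't seen it, those steps boil down to a handful of commands from the installation guide (a sketch; the disk layout and package choices are up to you):

```shell
# From the live ISO, with the target partitions formatted and
# mounted under /mnt:
pacstrap -K /mnt base linux linux-firmware   # install the base system
genfstab -U /mnt >> /mnt/etc/fstab           # record the current mounts
arch-chroot /mnt                             # "change the root"
# ...and from here it's vanilla Linux config: timezone, locale,
# hostname, users, bootloader.
```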

Edit: Hmm, I guess how to configure a vanilla Linux might be quite complex for someone who has no idea of how to do that. I might have answered my own question actually.


> Honest question: why days?

Because I was a total novice. I barely even knew what a partition was, let alone anything else. Now, of course, I find it much simpler.


If you know most Linux file systems well, you would probably choose one in a few seconds. If nobody has ever asked you to pick a file system before, you could probably spend a couple of days researching.

I think there’s a lot of decisions like that to make in Arch


Even if you follow a guide exactly as stated, you will often find when done that you now know more about the steps you took and want to immediately start over so it's closer to what you want.

The same thing happened to me the first few times I installed Gentoo. Could I have migrated the first install to what I wanted and had at the end? Sure, if I had the knowledge I didn't have at the time.


I often have to boot back into the live media because I forgot to install a tool like iwd while chrooted. This information is not included in the single-page installation guide. Maybe it’s those scripts I’m unfamiliar with that would shorten the feedback loop.
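For what it's worth, the round trip back into the chroot is short once you know it; a sketch, with partition names assumed:

```shell
# Boot the live ISO again (it carries network tooling like iwctl), then:
mount /dev/sda2 /mnt        # the installed root partition (assumed name)
mount /dev/sda1 /mnt/boot   # separate /boot or ESP, if you have one
arch-chroot /mnt
pacman -S iwd               # grab the tool you forgot
exit
reboot
```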

Yeah, configuring a kernel is one part, plus the install wasn’t always like it is now; in the old days there weren’t as many helper scripts (chrooting manually comes to mind).

I've installed Arch a number of times, and every time I want full disk encryption I struggle a bit with the dm-crypt config and bootloader. Other than that it's pretty straightforward.
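The fiddly part is that three places have to agree: the LUKS container, the initramfs hooks, and the kernel command line. A rough sketch of the plain dm-crypt (no LVM) setup, with device names and the UUID as placeholders:

```shell
# Create and open the LUKS container, then put a filesystem on it:
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptroot
mkfs.ext4 /dev/mapper/cryptroot

# In /etc/mkinitcpio.conf, the encrypt hook must come before
# filesystems (and keyboard before encrypt, so the passphrase prompt works):
#   HOOKS=(base udev autodetect keyboard modconf block encrypt filesystems fsck)
mkinitcpio -P

# The bootloader must pass the mapping on the kernel command line,
# e.g. in GRUB_CMDLINE_LINUX:
#   cryptdevice=UUID=<uuid-of-sda2>:cryptroot root=/dev/mapper/cryptroot
```

If any one of the three disagrees with the others, the system won't find its root at boot, which is why this step trips people up.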

This is a false sense of understanding that many Linux users develop. You basically built a puzzle by putting together the pieces that fit together. And you have the illusion that you learned something about the picture drawn on the pieces.

You don’t really understand anything more except how to configure a system with a poorly designed configuration system. Installing a difficult-to-use Linux distribution teaches you nothing about operating systems, compilers, linkers & loaders, shared libraries, or anything else about the foundations of modern computing.


IMHO in most cases the "sense of understanding" comes from all the related material you read when installing a "difficult" system: it is difficult not because of inherent complexity but because it lacks abstracting tools (like GUI wizards), which forces the user to learn more in order to understand the "limited" interface provided. I remember my first (Softlanding?) Linux installs (mid-nineties): reading about hard disk geometry, the mandatory kernel recompilation for network card drivers, the soft links when upgrading shared libraries, the monitor frequencies for X11, and a big etcetera which I'd never had to deal with previously (with DOS/Win 3.x).

I think it’s a continuum. Sure, I freely admit that I’m still rather ignorant when it comes to the low-level details of my computer. But compared to the understanding I get from Windows or Ubuntu? From that point of view, I’ve learnt a lot.

Besides, it’s not like this knowledge is useless. I now find myself being able to diagnose and fix problems with my system which previously I was clueless about. And it makes it a lot easier for me to learn the lower-level details if I so choose.


If we could combine Ubuntu's ease of use with Arch's simplicity and performance, that'd be my ideal distro for work.

I love Arch for its simplicity and performance.

But it just wasn't productive for me to get everyday tasks done. I'm not an advanced Linux user; occasionally I'd need hours to get seemingly simple stuff done.

For a hobby desktop, fine. For a work tool as a developer, I moved back to Ubuntu (though I have moments of regret every day).


I'm afraid simple and easy don't go well together.

You either have automation to make things easy to the user, but then it's no longer simple.

Or you require the user to do everything manually, but then it's no longer easy.


True. Maybe there's a viable middle ground between Ubuntu & Arch.

Perhaps with a focus on the audience (e.g. a web dev distro), it becomes viable to make sensible compromises on both (ease of use & simplicity) that result in a good combination for the user.


For general audiences, Manjaro (Arch-based) fits your ask pretty well, I think. I ran into problems installing Arch, switched to Manjaro, and never looked back.

Pop!_OS is coming from the other direction, but I found I prefer using i3/sway instead, as I had trouble configuring certain things about Pop.


Slackware.

The only reason I've seen to use anything else is if doing a full desktop installation doesn't make sense, as the installer is largely geared around installing most everything, or perhaps leaving out a category or two. Beyond that, the documentation is clear, and it's made as easy to use as makes sense without just hiding things behind a black box (the kind where, when things go wrong, they REALLY go wrong).

Nice to see some of these newer distros like Arch and Ubuntu are coming along though; choice is good.


Perhaps Manjaro? It tries to be a more user-friendly Arch-like distro.

Have you looked at EndeavourOS?

kali

That's just Debian

This summarises my feelings so far precisely. Alas, I’m not quite sure it’s possible to have this: the easier and more full-featured something is to use out of the box, the more components it needs — or at least, its components become more complicated in and of themselves.

One alternate approach I find interesting is that of NixOS: everything is declaratively specified in a single place, so it’s easy to configure stuff, but also easy to see how everything works. I actually tried out NixOS before installing Arch on the aforementioned desktop box, and if it wasn’t for the sheer opaqueness of Nixpkgs I’d still be using it. On the other hand, I can hardly call Nix ‘simple’… as with everything, it’s all tradeoffs, I suppose.


I've been installing Ubuntu this week. Checking for updates during install gave me a broken libc dependency graph which blocked installing ssh or gcc. libpci-dev conflicts with GNOME as far as I can tell; installing that took down X and the network. Changing to the server ISO of the same version gave me an OS that can't use the built-in Intel NIC, though the desktop version can. The driver errors out on modprobe with -2.

I'm going to try again with a different Ubuntu release as soon as I can find the patience. Where prior lessons have taught me to vandalise automatic update or it'll brick itself in the future.

Ease of use has not been the defining characteristic of Ubuntu.


Same experience from my days in the Ubuntu world: it sure is relatively easy for a first-time Linux user, but I had so many issues with it that were completely obscure. I had the feeling of fighting against the system itself all the time.

With Arch, sometimes it breaks (not that often), but it is always quite clear what went wrong (pacman errors are limpid, logs are clear). You'll find the fix in the Arch Linux News, the wiki or the forums. Bonus: you probably know your system better, and as a side effect you'll be more efficient at fixing it.


I'm now firmly convinced that it is Ubuntu that fucks something up. Upgrades seem to work much better with plain Debian.

We've seen many failures during upgrades, from both novice and advanced Linux users; meanwhile one of our sysadmins accidentally upgraded Debian by two releases at once (from Debian 9 to 11) and it still "just worked"...


Debian occasionally falls over on me when running unstable but that's kind of the point. Testing hasn't failed on me in at least a decade.

I had pretty constant issues with Ubuntu every time I've tried it as well.

I run pure Debian stable everywhere I can in my personal life (laptops, desktops, servers). It's predictable and there's not a lot of planning at update OR upgrade time. Backup, push the button, reboot, done.

Just kidding, backups happen daily automatically, so it's just update and reboot.


> If we could combine Ubuntu's easiness of use with Arch's simplicity and performance, that'd be my ideal distro for work.

You're describing endeavourOS. It's Arch, but with an installer that gets you a sane default desktop quickly.

It's what I use now that I'm too old to waste all that time it takes to get Arch running properly


As a developer, I solved my issues with Arch by using the latest dev tools in Arch and running the code in a Docker container. That way I also solve all developers' issues with dependencies, as everybody runs the same Docker container with the same libraries.

Check out endeavouros. It is arch with an opinionated installer and WM/DE setup, but then you get a pretty standard arch setup and it’s great.

IMO it combines the best of both worlds.


Manjaro Linux is what you are looking for then.

"For this reason, I’m planning to install Debian instead on the new laptop I’ve ordered"

Maybe have a look at Manjaro; it is based on Arch but comes prepackaged, so you can just boot up the live image and install, if everything works.


I’ve heard of Manjaro, but on the occasions when I’ve brought it up in conversation, people have almost unanimously warned me off from it in fairly strong terms… not entirely sure why, but that alone makes me reluctant to use it.

There was a lot of controversy around their swapping LibreOffice out for some proprietary closed source office suite for a while: https://www.forbes.com/sites/jasonevangelho/2019/08/03/manja...

Another Arch/AUR-compatible distro with a more automated install process is EndeavourOS. Last time I checked, it was closer to vanilla Arch than Manjaro. But I've mainly used it for spinning up VMs to work remotely. For a long-term workstation I still feel better sticking to plain Arch.

They had multiple issues with opsec in the past. Something that, when repeated, makes people skittish. Fool me once, and all that...

I am not aware of critical issues, but there was indeed a time of confusion when the dev leadership changed. So, sources please for current flaws?

This Github repo has the most common issues I've heard of: https://github.com/arindas/manjarno .

Big ones are: shadiness with funding, letting their SSL certs expire 4 times, and the fact that their idea of stable isn't additional testing, but just letting the packages sit for a week.

There was also a recent kerfuffle not covered there, where they shipped a broken kernel to Apple Silicon users without contacting the Asahi devs: https://twitter.com/AsahiLinux/status/1576356115746459648


Well, the funding issue I find to be quite trivial and of no big concern to me, but the repo issues are indeed something that has bothered me at times, where I ended up modifying repos to get the most up-to-date ones.

The recommendation from the repo, EndeavourOS sounds interesting, though.


https://www.reddit.com/r/linux/comments/4inrut/manjaros_ssl_...

The link for the post is dead, but they've let their SSL cert expire multiple times. While it happened a few years ago, I find that a hard thing to come back from.


I think the most recent incident was a few months ago...

Good lord.

It literally took them two months to push the critical expat update on aarch64.

Manjaro is like Arch but not exactly like Arch. If you have a problem with Manjaro and then go to the (stellar) Arch documentation to try to figure out why it doesn't work in a case like this, that documentation won't help you.

I mean, that's not true: I daily drive Ubuntu and the Arch wiki has been very kind to me. But to slightly rework your point: if you're a Manjaro user expecting the Arch wiki to apply 1:1 to your system, you're mistaken and are in for a bad time.

Well, this I cannot confirm. I regularly go to the normal Arch docs for help and usually find them suitable. So it is not fundamentally different, but yes, it uses different repos and some things indeed work differently.

N=1 and all, but I've been using it for several years without complaint.

Not anymore. Archinstall, an easy-to-use installation script, now comes included in the ISO, so anyone can install it and get a full-fledged OOTB distro ready within minutes.

Does archinstall let you LUKS-encrypt without LVM? I've never understood why every distro treats these two as dependent on each other.

I'm not sure since I did not use disk encryption, but it very likely does. You may want to Google it to confirm.

I use LUKS with btrfs, set up using archinstall; it works as expected. I hope this helps.

If one isn't bothered by (or prefers) the lack of systemd, Artix provides ISOs prepackaged with the most common desktop environments.

You should try Manjaro Linux, it's based on Arch but has more stuff out of the box.

Installing Arch used to be a lot easier before they deleted the Beginner's Guide and redistributed the contents to a bunch of different pages in the wiki.

edit: Beginner's guide, not install guide, is what was deleted


I've been using Arch for well over 10 years, and as they say, it's true that you "learn Linux" using Arch. It's an excellent distro, and a few things make it a great learning OS.

First, it's as vanilla as possible, which means that packages are modified as little as possible from upstream. This means you don't learn anything distro-specific by mistake, and you actually learn more about how the package is intended to function.

The second is great documentation and community. The Arch wiki is full of common tweaks that you'll likely have to do; many other distros may have just held your hand and assumed you wanted those boxes checked, but Arch makes you check them.

Being minimal also helps; it really doesn't overwhelm you. Arch doesn't break much, and when it does, usually only one thing at a time is broken, so you're only ever learning one thing at a time.


I'm not so sure about learning. I think that you're mostly just learning how to administer an Arch Linux system.

After using and contributing to Gentoo for a few years, I don't think I could confidently explain how all of the pieces of the desktop graphics and audio stack fit together - I just installed it and it worked.


What I haven't seen mentioned here yet is the AUR, and the fact that almost any software you can think of is packaged there. You read about some cool software (like git-bug, which I learned about today here on HN), you do `yay -S cool-software`, and it's there. On Ubuntu or Debian? Not so much...

I agree. The AUR is one of the biggest reasons I use an Arch-based distro (endeavour). If it wasn’t for the AUR, I, a pretty quick learner but also lazy person, would be using fedora or something.

The AUR is the main thing I miss after moving from Arch to Gentoo, the scope of the applications on the AUR is crazy, and for the most part in my experience it's usually up to date and reliable. In my opinion, the AUR alone is enough reason to install Arch or an Arch-Based Distro.

What I really like about Arch is how minimal & fast it can be without resorting to 'exotic' software or libraries.

I recently installed Archlinux32 on an old Pentium II machine just for the fun of it and was pleasantly surprised that it still feels reasonably responsive (I didn't get X11 to work yet though as the GPU driver for that machine apparently never reached mainline or was removed in the meantime).

Everything is managed by systemd/networkd the way its authors intended. No custom scripts or other cruft or bloat. No 'helpful' background services to update man pages or the package database.

It's also refreshing how fast pacman is compared to apt.


Arch is not minimal. Debian netinstall does the same.

Alpine is. If you think Arch is fast, try Alpine x86.

On X.Org, VESA works everywhere.


I think it's more appropriate to say Arch is "simple" rather than "minimal". It does not sacrifice features; it does things in the simplest way.

Can you elaborate? In what way is Alpine more minimal than Arch? (I really just don't know)

Also are you suggesting that Debian netinstall is not minimal?


Arch is minimal in the sense that the default system consists of very little. If you install nothing but the base metapackage, you won't get a usable system out of it at all. The purpose of Arch is to fill out a complete, working system by making your own choices. If you follow the installation guide, then you start out with base, linux, and linux-firmware. Base is a metapackage that consists of archlinux-keyring, bash, bzip2, coreutils, file, filesystem, findutils, gawk, gcc-libs, gettext, glibc, grep, gzip, iproute2, iputils, licenses, pacman, pciutils, procps-ng, psmisc, sed, shadow, systemd, systemd-sysvcompat, tar, util-linux, xz.

To get a fully working system, you'd also need a bootloader, but Arch doesn't prescribe what that has to be. Alpine is mostly going to give you the same things in terms of available CLI utilities, but rather than being based on GNU libc and GNU coreutils, it's based on musl and busybox. The init system is OpenRC rather than systemd. And it has a default bootloader, which is syslinux.

This makes Alpine more "minimal" in the sense of a minimal installation taking up less disk space, because musl, busybox, and OpenRC are smaller in the literal sense that their binary files consume less disk space than glibc, GNU coreutils, and systemd. Busybox also comes with ash (I think actually dash) as the default shell, which is smaller than bash.

I have no idea if the apk package manager is smaller than pacman. They're both smaller than what you'd get out of a Debian or Redhat descended system.

Personally, I think it's a bit misleading to call either of these more minimal than the other. The functionality, feature set, and list of available utilities are pretty much the same. Alpine is just giving you smaller files, though note that using a musl-based system presents a lot of difficulty, because a fair amount of software Linux users expect and are familiar with isn't really POSIX-compliant and only works with GNU libc.


Is apk a classic package manager in that it installs packages to the file system or does it use overlays?

The line

> Alpine Linux is designed to run from RAM

in the wiki almost makes it sound like OpenWRTs opkg where the root fs is readonly.


Alpine can do both, actually; a "sys" install does a traditional root filesystem, but Alpine also can run... I think a ro root with a single rw overlay with all added packages(?)

Debian is not minimal at all, it emphasizes compatibility over all else.

There are many custom scripts wrapped around systemd so you can still use old-style commands. Apt is pretty slow (and again, packages come with custom scripts), and stuff like apt-xapian-index will just gobble up all your CPU if you are on a slow system.

Debian has many scripts to automate things for a nice experience, but it's certainly not minimal.


I think "vanilla" is the perfect term for it. Most of the software is just a vanilla compile plus whatever is needed to play nicely with the rest of the system.

Most defaults are sensible and close to what the app itself provides, again with exceptions to play nicely with the rest of the system.

Most packages also come with a bunch of recommended ones that extend functionality, which means a bit of extra space used, but just

    APT::Install-Recommends "0";
fixes that (I'd recommend it for servers, but not for desktops unless you're seriously space-constrained).
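To make that permanent, the line goes in a file under /etc/apt/apt.conf.d/ (the file name below is arbitrary, an assumption for illustration); for a one-off install the equivalent flag works too:

```
# /etc/apt/apt.conf.d/99-no-recommends  (name is an example)
APT::Install-Recommends "0";
```

Or per invocation: `apt-get install --no-install-recommends <package>`.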

And the most important thing is that upgrades work. I installed my desktop in ~2008 and just upgraded across the ages; the install is older than every single component in my machine.

> There are many custom scripts to wrap around systemd so you can still use old style commands.

That's just not breaking old stuff, minimal doesn't really need to mean "just breaks your old scripts that worked fine up until now".

And it's kinda required for a transitional period; some packages still use /etc/init.d/* to start, for example, and AFAIK Debian still hasn't said "systemd is the only way forward", which means many packages provide both /etc/init.d/* for SysV boot and /lib/systemd/system/* for systemd boot.

> and stuff like apt-xapian-index will just gobble up all your CPU if you are on a slow system

How is a tool that's not even in the standard install relevant to anything?


Alpine's base rootfs is just a few megabytes, while Arch's is more like 100-200MB. I usually encounter Alpine in Docker containers, when people want to wrap just one specific service and save space on things that are not essential to it (it would be wasteful to ship your 20MB app in a 200MB Arch/Debian container).

I didn't know Alpine was useful as a working machine though.


A Debian Slim Docker image is 20MiB, and in my experience by the time you install all the dependencies of your app in an Alpine image, it’s bigger than the equivalent based on Debian Slim.

> A Debian Slim Docker image is 20MiB, and in my experience by the time you install all the dependencies of your app in an Alpine image, it’s bigger than the equivalent based on Debian Slim.

This is true, also while Alpine is an excellent base image, some people have run into troubles with musl/busybox and prefer to use Debian/Ubuntu or whatever else they're familiar with as their container base.

Then again, I kind of went in the opposite direction and use Ubuntu as the base for all of my container images and install software "the normal way": for example, getting OpenJDK through apt as I would on a server with Ansible, or for my local dev machine, without any of the fanciful optimizations or clever hacks to keep the file sizes down.

The downside of this is that my base images are multiple hundreds of MB in size (even after cleaning apt cache in the same step as doing the install, to avoid adding that to the layers), but on the bright side that hardly matters because I use the same base images for all of my containers so only the changes for that particular image need to be transferred through the network and like 40-80% of the layers remain consistent: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...

It's not "optimal" from a size perspective, but it's delightfully simple and approachable.
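The "clean the apt cache in the same step" trick mentioned above looks like this in a Dockerfile; the base image and package are just examples:

```dockerfile
FROM ubuntu:22.04

# Install and clean up in ONE RUN instruction, so the downloaded
# package lists never get baked into a layer:
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
 && rm -rf /var/lib/apt/lists/*
```

Splitting the install and the cleanup into separate RUN instructions would not reduce the image size, since each instruction produces its own immutable layer.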


Arch latest docker image is 135MB[0], whereas Debian stable-slim is 30MB[1]

[0]https://hub.docker.com/_/archlinux/tags

[1]https://hub.docker.com/layers/library/debian/stable-slim/ima...


It works fine on older machines. Run setup-xorg after the install process, add a light WM such as wmaker or jwm for the parent commenter's Pentium II, and they're done.

Interesting to see the documentation emphasized here. I don't use Arch, but I often see the arch wiki in google results for linux stuff I look up. Makes me wonder why wikis aren't used more for documentation.

Other distros have wikis but tend to focus more on documenting the distro-specific details. Since Arch packages have very minimal modifications from upstream, much of the content from the ArchWiki is applicable across distros. But then once you have one wiki covering the generic advice, there's less motivation for others to duplicate that effort anyway.

I use Gentoo, and still have the Arch wiki in my bookmarks as the go-to place to look for docs.

In fairness, the Gentoo wiki is my second favorite wiki that's largely applicable to all Linux distros:)

Wikis take a lot of maintenance by a determined core of contributors, or else the pages deteriorate into a pile of incoherent edits. Occasionally I see this even on the Arch wiki. Main text says “do this.” That’s followed by some text saying “I tried this and it didn’t work.” That’s followed by a text box saying “that method is deprecated.”

But that's still better than just having the main text saying "do this" where "this" is wrong. And that's often the case for non-wiki documentation.

True, but where docs have a dedicated, core maintenance crew, they can maintain the docs in a good state with less effort. With a wiki, the problem is that drive-by editors can degrade a good product, requiring constant work by the core contributors to revert bad edits.

My main point was as a response to the original question, which was: why aren’t more docs in wiki form? I think one reason for that is that good docs require dedicated contributors, and I think a wiki does little to nothing to reduce their burden.

The myth of the wiki is that by erecting it, drive-by contributors will build a great product. I don’t think any quality wiki was built that way.


Before that it was the Gentoo wiki: no matter what the problem, the Gentoo wiki was likely a good search result.

... till they had a failure and discovered none of their backups was working.

Check your backups kids.


I'm a deb-based user but I constantly refer to the Arch wiki to understand best practices and solve problems. No other distro has this level of self-serve documentation that's actually updated and easy to understand.

The main benefit of Arch Linux to me is that when you do a drive-by patch to some software project, it is not unthinkable that the fix reaches you through upstream and an Arch package update in a matter of hours. This flow is not possible with distributions that run pretty out of date software and have long release cycles: the version that's included with the distribution is usually too old to be able to check out the source, test and directly apply fixes to the upstream repository, and getting those fixes back would take months. This also applies to security updates, as it's easier to rely on instant upstream project updates than on some distribution package owner to remember and backport those fixes.

> This flow is not possible with distributions that run pretty out of date software

I use Debian Stable and Fedora on my systems and I have the latest versions of all of the software of which I want the latest versions because I can install software from source like a big boy. And my installers don't have version numbers hard-coded in them like PKGBUILDs do, so I get the latest versions immediately.
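For readers who haven't seen one, a PKGBUILD is a short shell fragment, and the version pin the parent mentions is the pkgver line. A minimal illustrative sketch, using GNU hello as a stand-in package (real packages pin a checksum rather than SKIP):

```shell
# Minimal illustrative PKGBUILD; updating means bumping pkgver
# (and the checksums) by hand or via a new package release.
pkgname=hello
pkgver=2.12.1
pkgrel=1
pkgdesc="GNU hello, as a packaging example"
arch=('x86_64')
url="https://www.gnu.org/software/hello/"
license=('GPL3')
source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
sha256sums=('SKIP')  # example only; real PKGBUILDs pin a hash

build() {
  cd "hello-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "hello-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running `makepkg -si` in the directory containing this file builds and installs the package through pacman.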


Arch has served me well over the years on almost every piece of hardware I owned and is still my goto for a "traditional" FHS distro. The documentation is top notch, it's fairly unopinionated and I love the simplistic nature of it (everything in the repos is just unpatched upstream software, for the most part at least).

Recently made the jump to NixOS though and been really happy with the additional features it offers.


While some may see it as a learning tool I have used Arch on my workstations for 14 years. The last time I remember having to manually fix things was when it migrated to systemd. I am a linux sysadmin so I might be biased but I think people overestimate the effort required to get exactly what you want and nothing more out of an Arch setup.

I'm in the same boat. I started my linux use with everything-bundled distros like Ubuntu but I really appreciated Arch for giving me a better sense of how things tend to hang together. Made me much more comfortable transitioning into sysadmin/sre stuff.

It depends on how uncommon what you want is, although I agree it is very low effort in general. For example, there was not too long ago an issue with the kernel not booting from syslinux for a few weeks. Also, arch-announce mentions a manual fix needed if you have a particular package installed every few months, so good to subscribe to that and fix as described if needed.

https://lists.archlinux.org/mailman3/lists/arch-announce.lis...

My suggestions:

1) Keep both linux and linux-lts installed, since otherwise you have no backup if the kernel you normally use doesn't work.

2) Always fully update the system, since dependencies aren't always fully specified and a partial update can damage the rest of the system. If you need to hold back a package, add it to the IgnorePkg= line in /etc/pacman.conf until it works again.

3) Avoid the AUR except for rare cases where you review the package manually (and always avoid AUR helpers).

4) Don't get too lazy just because things mostly work: check your boot logs at least every year or so to improve the chance of catching issues before they cause trouble, and look for and deal with pacnew files at least a few times a year.
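On suggestion 2), holding a package back is a single line in /etc/pacman.conf under the [options] section (the package name here is just an example):

```ini
# /etc/pacman.conf
[options]
# Skip this package during 'pacman -Syu' until a fixed build lands;
# pacman will warn on each update that it is ignoring it.
IgnorePkg = gdb
```

Remember to remove the entry again once the breakage is fixed, since a long-lived IgnorePkg is effectively a partial update.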


I use it on all my personal linux boxes at home: laptop, router, nas, a nuc, a couple of rpis and a bunch of VMs. Main thing I love is that I've never found myself in the position of just giving up and starting again on any of them. I've switched a few of those over to arch when I reached that point with what I was running on them before. Most distros effectively force you to do that eventually.

I upgrade them when I get round to touching them for some reason so sometimes months will go by. I've never encountered a problem after upgrade that I couldn't quickly resolve with a brief bit of tinkering, and I'd take that over starting from scratch or leaving things mouldering away on outdated software any day.


A more recent update that required some work was switching to PipeWire for audio. But it was mainly about knowing which packages to remove and replace, and now audio works without a hitch for me, better than PulseAudio ever did.

Arch isn't just simple, it is smart too. With alpm-hooks [0], it is possible to run specific commands pre- or post-install/upgrade of packages, e.g. re-signing my secure UKIs with my keys after the intel-ucode package updates.

Examples like this, along with refined tools such as the AUR, add massive quality-of-life improvements to the overall Linux user experience, and that is what keeps me happy on Arch.

[0] https://archlinux.org/pacman/alpm-hooks.5.html
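For reference, such a hook is just a small INI file dropped into /etc/pacman.d/hooks/. A sketch of the UKI re-signing case described above (the file name and the Exec script path are hypothetical, not the commenter's actual setup):

```ini
# /etc/pacman.d/hooks/90-sign-uki.hook  (name and script are examples)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = intel-ucode

[Action]
Description = Re-signing UKIs after microcode update...
When = PostTransaction
Exec = /usr/local/bin/sign-uki.sh
```

The [Trigger] section matches packages (or paths, with Type = Path), and the [Action] runs before or after the whole transaction depending on When.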


Arch is awesome and was the distro that finally got me to stop sticking to Windows. Every other distro I tried had some flaw that kept on making its experience subpar, and I have tried a lot of distros - Ubuntu, Mint, Debian, Manjaro, Fedora, and a few more. I was staying away from Arch for its reportedly complex installation. But this month I decided to take the bullet and, to my surprise, got a fast and flawless system installed within minutes! Archinstall now makes it just as easy to install as anything else, the AUR is awesome and has everything you'll ever need, and the wiki is just unparalleled.

This distro leaves all others in the dust in terms of speed and software availability; I will highly recommend it to everyone looking for a no-nonsense and up-to-date system.


> the AUR is awesome and has everything you'll ever need

But what if I need packages created and maintained by vetted, qualified devs rather than the unvetted randos that upload PKGBUILDs to the AUR? Many of the AUR contributors I've looked into have no publicly-accessible real names, no personal websites, no LinkedIn accounts, and their GitHub accounts are only a couple years old with Japanese cartoon characters as their account photos.


> But what if I need packages created and maintained by vetted, qualified devs rather than the unvetted randos that upload PKGBUILDs to the AUR?

Pay for them or package them yourself. The nerve of being angry at people giving you their work for free and having the *audacity* of thinking you should have access to their real name, personal websites, LinkedIn and GitHub account.

The level of entitlement dripping from your comment is disgusting.


Wow. You are being very hostile, and make no mistake, that is on you, not on their comment. They made a point on trust which is very valid: just because someone generously does work doesn't mean you should automatically trust them for it if you know nothing about them.

You reading entitlement there and responding so hostilely is on you.


I wouldn't auto-update from the AUR, but you can easily download a snapshot of the PKGBUILD of the particular software you want from the Arch website, verify that it isn't doing anything questionable (and fix it if it is), then build it yourself. It is a simple format and easy to review; it is rare to see patches or non-obvious build steps. Personally, I have a dedicated aur user with some aliases that grab a URL from a fifo, print it, download, and extract.

There are also often packages that pull the latest version from git, and these can be easily updated to use release branches if you want.

Because of the focus on upstreaming fixes rather than keeping patches, you can also just build outside the package system and install to /usr/local, and you will still benefit from the project likely having received patches to build on Arch (if they don't test it themselves). Even if not yet fixed upstream, any random issue you encounter usually already has a bug filed by an Arch user, unless you are extremely quick about updating.
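A minimal sketch of that verification step. Since a PKGBUILD is plain bash, you can source it in a subshell and see exactly what it would download before building anything (the package name "somepkg" and its URL are invented for this self-contained demo; in practice you'd use the snapshot downloaded from the AUR):

```shell
# Stand-in for a downloaded AUR snapshot (contents invented for illustration):
cat > PKGBUILD <<'EOF'
pkgname=somepkg
pkgver=1.2.3
pkgrel=1
arch=('x86_64')
source=("https://github.com/example/somepkg/archive/v${pkgver}.tar.gz")
sha256sums=('SKIP')
EOF

# Source it in a subshell (so nothing leaks into your session) and print
# the URLs the package would fetch, without building or installing anything:
( source ./PKGBUILD && printf '%s\n' "${source[@]}" )
```

After reading the rest of the file (any prepare()/build()/package() functions in particular), building is just makepkg in that directory.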

How exactly does any other distro solve that? Please don't tell me you genuinely find PPAs to be a better alternative...

If the PPA is run by the project then you don't need to trust someone else. Sometimes AUR packages are created by a developer of the project, but since you aren't trusting a particular URL run by the project, it is possible that the maintainer will change later without you noticing if you are auto-updating from the AUR.

Whoever makes the PKGBUILD, it almost always points to and builds from the original project's source repository, on your own machine. You can verify that from the package's AUR web page.

And about project maintainers running PPAs: say you're a dev, and you want to package binaries for your software. What's easier, whipping up a quick PKGBUILD once and putting it in your git server, thus allowing anyone to get the latest updated build at any time, or setting up accounts and painstakingly compiling and uploading each build to a PPA? Are you aware of the hundreds of abandoned PPAs that lie orphaned after maintainers gave up in frustration at how cumbersome they are?


I haven't looked that often since it is easiest to try to avoid software that isn't in the main package system, but I rarely see PKGBUILD checked into the upstream project. It is more common in my experience that an upstream developer manages the AUR package on the Arch website, not as part of the project. You can verify AUR packages before using them (this is what I do) but it is an extra step vs. a maintained PPA. AUR packages can be unmaintained as well. No matter the system the best situation for users is when the software you use is in the main package system.

Well if you're avoiding software that isn't in the main package system then it doesn't make a difference either way - the official Arch repository has almost everything the official Debian/Ubuntu repositories do that isn't distro-specific.

> Liam Proven Tue 15 Mar 2022 // 10:25 UTC

Yep, that's me. Surprised but happy to see this resurface now.

Been using Pop OS for a while for the It Just Works(tm) experience, but I'm missing Arch more and more.

Mainstream Linux distros feel a lot more like Windows these days. Sure, they require less configuration, but they're also much harder to mess around with. Starting up htop reveals a jungle of daemons and weird systemd shit I don't even know the purpose of. Systemd is a terribly documented nightmare to configure, etc.

It's so nice in Arch to pretty much know what everything is for, because I was the one who installed it. And to have documentation that isn't infuriating to navigate.


It's been two weeks since I installed Arch on my daily driver laptop, and I have to say it actually gave me a more flawless OOTB experience than Pop, Ubuntu or Fedora could. The archinstall script automatically selects and installs all the required packages you'll need to get a working system up and running, and it configures xorg and proprietary drivers for you so that you don't have to. I haven't had to mess around with anything, honestly; I'd suggest you give it another look.

I've been using Debian Testing on my personal notebooks for ~10 years: throughout university, and the subsequent work life.

During that time, I often wondered whether I should "play"/experiment more with other distros; after all, I loved tinkering with my vim config and network setups etc.

However, I've been just satisfied with the status quo, and more importantly: I just wanted to get shit done.

Apt, dpkg, systemd. If I want to get bleeding-edge SW I'll build the upstream source manually. No big deal - won't happen too often.

Getting older, I'm beginning to despise fixing the OS more and more ... I just want the machine to work. This perhaps results from my day job, which involves OpenBSD development/tweaking ... and generally a lot of cursing.

Granted: I'm not a gamer or graphics-enthusiast, and use my computer primarily for development, writing, watching movies/pictures ... Your typical senior resident trapped in the body of a 30ish guy.

I'm often wondering whether I'm just lazy and/or whether my attitude is the norm or rather the exception respective to Unix/Linux (power)users.

Edit: forgot to say a big "thank you" to the arch community! Over the years I consulted the archwiki endless times! Almost every time it was really helpful (in contrast to the debian wiki, lol)


I consider myself a Linux power user at this point; been running Linux as my primary os since 2007. I enjoy tinkering, but I like my distro to be rock solid; all of my Linux servers run debian stable. In no way would I say that you're "lazy", you just know what you're looking for in a daily driver. Debian is a fantastic choice and I used it as my primary OS for years.

That being said, I do run Arch on my laptop and desktop these days; I like being a little closer to upstream. I don't run a ton of bleeding-edge software, but using the AUR makes it incredibly easy to stay up to date. I am also extremely appreciative of the Arch wiki; no matter what distro I'm using, it's one of the first places I check if I'm having an issue.


I run arch on my VPS. Before, I ran Debian, but if you need a special package version, or something, it just feels almost as hard as getting a newer IIS version on Windows. I got sick of it, removed Debian, and installed Arch from scratch.

My Linux experience was pretty minimal. Some trying out on desktop in the early 2000s and later again after Ubuntu became a thing, but I always got weird errors. Then some in university, and again a bit to administer my VPS or rPi.

Arch was a breath of fresh air, not only could I get current packages, everything was so well documented! The wiki is, as the article rightfully says, amazing. Now, even when I’m not using arch (I have a small Proxmox server with Debian and Debian containers), I still use the archwiki as I know it will help me for everything but Debian specific things. My first arch install (before that, I never installed an OS without an installer) took maybe 2 hours.


I bought a Librem laptop and was trying to install Debian on it, only to find that there was a firmware bug that prevented me from using the installer. So instead, I read the Arch wiki and installed Debian Arch-style using the live ISO and debootstrap, and I've been installing Debian that way ever since. I've never used Arch because I like everything as boring as possible, but someday I might take the plunge. They're the first place I go for docs, regardless of what distro I'm using, except for maybe Alpine when I have to read docs on something strange like OpenRC, and have to go to Gentoo's wiki for that.

The thing that makes the Arch docs so great is that they cover edge cases and have lots of examples.


I appreciate the straightforward install and wide availability of packages, but in practice, always using the latest packages system-wide can be annoying.

For example, right now the latest gdb is broken on both my machines, and since I'm not as keen to participate in troubleshooting new software, I think I'll be moving to a more stable distro pretty soon.

https://bbs.archlinux.org/viewtopic.php?id=274056


It's possible to downgrade packages though

https://wiki.archlinux.org/title/downgrading_packages


I've been running arch for 7 years. I had to reinstall once to fix something I broke by deleting Python packages, but I never had problems with just updating the system ...

AUR makes me stop on trying other distros.

Oh that distro is great. Is there some community package repo in that, equivalent to Arch's AUR? None? NVM.


Using Arch Linux since 2009 as my main desktop OS, and after 3 complete PC upgrades, I never had to reinstall it.

It has been working pretty well for me, except for a couple of issues that I ran into over these years (e.g. the transition to systemd in 2012/2013, or no pacman -Syu for several weeks).


I just discovered Arch Linux this year, and I’m running it on a few raspberry pis. I love getting the latest packages, and the docs are great. Arch works well for me because I like updating my homebrew Mac user land daily, and updating Arch just feels natural.

pacman corrupted my installation twice in a year. Obviously I did something wrong but who knows.

Do you mean "an update required manual intervention but I did not pay attention and as a result my system was broken later"? If not, do you have links to the bug reports?

Ubuntu’s release upgrade also corrupts my system, so it’s not just pacman

I have been using it for 10+ years and it hasn't corrupted things for me. It can't corrupt.

It could get a bit tricky if you update very rarely, like once a year, but not sure what else there is to do to break things. Please elaborate a bit more.


Pacman will happily break pacman and sudo without warning if the user requests updating openssl without upgrading the entire system. When updating a package it does not pull in all of the other packages which will break if not updated simultaneously.

Indeed, pacman does not know which library versions are necessary for any particular app version. And almost all of the applications are dynamically linked.

The Arch Linux repo holds only a single latest version. There are some community-maintained repos that contain older versions (and those can be easily fetched through the downgrade utility).

When updating Archlinux, upgrade everything and then reboot. This way it should not break.


Probably you installed something manually at some point; otherwise pacman does nothing wrong. Rarely you have to touch something manually because of a major upgrade, but I think it has been like 4-6 times in 20 years for me.

If you want to install something manually on Arch, it is better to create a package first (if it is not already in the AUR). That way pacman can check for conflicts before installing anything.


Pacman works when updating all packages simultaneously. However when updating only a requested package it can break by failing to pull in other packages which need to be updated simultaneously. For instance, pacman can break pacman and sudo by updating OpenSSL to a newer version than the currently installed version of pacman and sudo expect.

Any chance your RAM has errors?

I was a big fan of Arch for several years, but since the introduction of systemd I felt I lost control and didn't really know what was running in my system anymore. Been using Void Linux for the last 2 years and I just can't go back to Arch.

Just discovered pikaur, an AUR helper/package manager for Arch... it's great.

Meanwhile my personal installation is 8 years old. It was initially installed on a desktop on btrfs and has since moved between three laptops and across to ext4 and now xfs. 150K lines in pacman log file. I love it.

I don't think I'm ever going to actually use Arch, but I do have to admit the documentation is amazing and they have basically become the maintainers of Linux documentation in general.

How does a similar NixOS setup compare as far as number of packages? Is NixOS just as minimal as Arch?

In terms of addressing "you want to know what your system has got", OP argues Arch is good because Arch will only have what you install.

With NixOS, the whole system configuration is declared starting from a single configuration file.

So, NixOS is great for addressing "I forgot how I set <whatever> up".

You'll likely end up using more disk space with NixOS if you're changing your system, since NixOS has functionality which makes it easy to rollback the system to earlier configurations.
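For a flavor of what that single file looks like, here is a minimal sketch of a /etc/nixos/configuration.nix (the hostname and package choices are invented for illustration):

```nix
# /etc/nixos/configuration.nix -- minimal example, names are made up
{ config, pkgs, ... }:
{
  networking.hostName = "example";

  # The system-wide package set is declared here, so "what is installed"
  # is answerable by reading this file:
  environment.systemPackages = with pkgs; [ git htop ];

  services.openssh.enable = true;

  system.stateVersion = "23.05";
}
```

Rebuilding the system from this declaration (nixos-rebuild switch) is what makes the rollback functionality mentioned above possible, since each generation is kept until garbage-collected.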


I'm fairly new to NixOS, but suspect that a NixOS install might end up smaller over time. With the ability to try out one-off package installs that don't become part of the more permanent config, and the "garbage collection" mechanism that cleans up, I think I at least would have less long-term cruft.

Small? No. Simple? No. Great documentation? Yes.

Care to elaborate in a sentence or two? I mean, I have a decent idea who you are, so I think you have some authority on things Linux - but I'm curious regarding your reasoning :)

I think especially the metrics for "simple" are very use(r) dependent. Simple for you or me means something entirely different than e.g. my Grandma (who has no computer). Personally I run four Linux machines, so they're customized anyway and "simple" for me means "I understand everything that's installed, because I installed and configured it". Someone running 4000 machines would probably have a slightly different opinion on what's "simple".

Small, well, it's not an embedded Linux with a small libc and -Os for sure. But I never felt Arch bloated.


The simplest system uses the fewest moving parts to accomplish the goal. Some of Arch's ideas, like PKGBUILDs, are pretty simple. But ultimately Arch is just another typical mainstream Linux distribution with glibc, GNU coreutils, systemd, PAM, etc. These tools are all severely bloated and over-complex, and many simpler alternatives exist which solve the same problems with much less.

As for "small", well, it varies, but Arch systems tend to bloat more with time as the system does not provide much for auditing and cleaning up your system, so the older an arch install gets, the more garbage it accumulates. Arch also tends to turn on as many options as possible for each of the things it packages, so many packages have a lot of optional dependencies made mandatory. This is not a unique problem to Arch; only Gentoo (and maybe Nix and friends) solve this one, and they have many other problems to contend with.

All of this is not to say that Arch is necessarily a poor choice. It's just not simple, nor small.


I see. Yes, many of these tools aim to be rather universal, which is detrimental to "leanness". OTOH all these bloaty tools are relatively well understood. E.g. being able to write unit files that heavily up the security of random daemons is really something I learned to value the last years. But I only used init.d as an alternative, so my horizon is a bit limited in that regard. Not using glibc, well, I was under the impression this causes a lot of pain for a general purpose Linux.

Regarding the "growing" OS, hm... I've manually checked what's installed on my Debians using aptitude, and removed old stuff. Similarly, I could let pacman produce a list of installed packages to audit and check which are not required anymore. Either case needs manual action, because the automatic tool will not know if I still need that random python lib I manually installed, or if it can be removed (truth be told: neither do I!). Now for unused dependencies this is different. Pulling in lots of stuff is a problem, yes. I think this could be solved by building those packages with a modified PKGBUILD locally (making it more like Gentoo), but for my "Linux on a big machine" I never saw the need to try that.

Anyway, I didn't read it like you were claiming Arch to be a poor choice :)

Thanks for the reply, I really appreciate the perspective. I don't want to drag you into a discussion over details, especially since I don't feel like "you're wrong". So feel free to just let it stand like this. OTOH, what are your favorite glibc/coreutils/systemd/PAM replacements?


Core of the distribution is the package manager/building tools. Compared to the babylon of eg. Debian packaging tools, where you'll have trouble figuring out basics like automatically installing build dependencies if you don't already have the target package built (lol, wtf? :)), Arch Linux's makepkg/PKGBUILD ecosystem is uniform, fully featured, and fairly simple.

There may be simpler packaging solutions, like Slackware's or whatever postmarketOS uses, but those are limited, and you'll have trouble figuring out how to do things like selecting files you don't want the package manager to ever overwrite, or preventing installation/updates of specific packages, or other things you might need that go slightly beyond basic installation/updates.
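For a flavor of that uniformity: an entire package recipe is one bash-syntax PKGBUILD file. A minimal sketch for an invented "hello-tool" project (name, URL, and Makefile-based build are all assumptions for illustration):

```shell
# PKGBUILD for a hypothetical autotools/Makefile project
pkgname=hello-tool
pkgver=1.0
pkgrel=1
pkgdesc="Example package (invented for illustration)"
arch=('x86_64')
url="https://example.com/hello-tool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Running makepkg -s in the directory resolves missing build dependencies via pacman, builds in a clean srcdir, and produces a package that pacman -U installs and tracks like any repo package.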


Small is relative. Small compared to most Linux distros, macOS and Windows. You also have to remember most installers have a lot of packages that may never be used in the user installs. If you're installing KDE then don't expect it to be tiny but using Sway you'll get a pretty minimal but fully functional WM without the bloat unless you pull in a package that has a large dependency graph.

>Great documentation? Yes.

Compared to the BSDs', mediocre and subject to dramatic changes by nature.


I disagree, actually. (For clarity, I wrote this article, 9 months ago.)

This year I have done short reviews of FreeBSD, OpenBSD and NetBSD. IMHO the docs for Arch are more helpful than any of theirs.


Subjective, but I also find Arch documentation to be "better" because it more frequently answers my question in less time.

You could say *BSD documentation is more "comprehensive" but I usually find myself wading through an ocean to find what I'm looking for. On top of that, every time you try to look elsewhere, you just see RTFM. I can respect the sentiment, but having different variations of the same documentation can help understanding imo


How short?

     help           # in the terminal

     man afterboot

Also, the OpenBSD FAQ.

Well, great documentation with the caveat that good upstream documentation is always preferable to the Arch wiki.

Main virtue of Arch. They don’t patch. Upstream documentation will apply as is.

I think Arch generally got its philosophy right. It’s pretty much the minimal set of tools to get an easy to update binary distribution. They don’t touch what they don’t have to.


BTW I Use Arch

Only 8 months late (this is an old article from 15/March/2022).


