Secondly, make use of the Arch wiki. By knowing at least the basics of how Arch is organized, you can better discern what technical reference material is unique to specific quirks of Arch, and what is general to the same daemons running on Debian or CentOS (Postfix or Dovecot, for instance).
Examples of generically useful references:
I also use sid (with apt-listbugs) on my personal machines and can't remember the last time I had something break. You should definitely know your way around a Debian system in case something does break, but it's not any more prone to breakage than Arch in my opinion.
Arch Linux is like every other binary Linux distribution, only more difficult to admin for no reason. Hell, if you squint hard enough, even Gentoo has more of a reason to exist: a package manager that is source-code-first.
Arch's install process struck the right balance: I learned how various low-level components (bootloader, kernel, X server) work together, without being overwhelmed the way LFS would have.
I did eventually migrate to Ubuntu LTS server + awesomewm for a few reasons:
* at the time I was using dozens of AUR packages (ROS) that depended on Boost and some other libraries, which made each update a several-hour endeavour of recompiling those packages,
* always having the latest and greatest version of every package got painful, as something I was working on relied on older versions of those packages (say, a codebase that needed a particular gcc version or something like that).
Then I encouraged them to uninstall it once they realized they were fighting Gentoo eccentricities rather than learning about Unix build systems. But you learn a lot about Unix build systems in that time, and the journey to that understanding is what's important.
I think Arch has overtaken that role in many respects, except instead of forum posts it's a wiki, and a whole lot more of its quirks are design decisions optimizing for something different, rather than eccentricities preserved because "that's how it was done".
Ubuntu, by comparison, is a slow, intrusive distribution.
I do use debian-ish by way of raspbian.
To install minimal GNOME on Debian, use the netinst ISO, untick everything at the last step where it asks you to choose a DE, log in after installation is complete, run apt install gnome-core, and reboot into a very minimal GNOME. I write this from that setup.
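The same steps as a command sequence (a sketch, assuming you log in as root on the fresh install; adjust if you created a sudo user):

```shell
# After booting the netinst installer and deselecting every
# desktop task at the tasksel step, log in and run:
apt update
apt install gnome-core   # minimal GNOME metapackage
systemctl reboot         # come back up into GDM/GNOME
```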
I have an arch server. I just tested and can do "sudo reboot", it becomes unpingable after 2 seconds. The turnaround from shutdown to sshing back into it is just over 20 seconds. (I'll bet it would be faster if I didn't have the 5-beep no keyboard sound at boot)
All that feels like it has fallen by the wayside nowadays. The information about my site installation is all in a set of ansible playbooks. They could blast their way through an installation of any modern Linux OS and the result would look like a horrendous mess to a sysadmin from the 90s, but who cares? It's still as reproducible as the carefully tended garden that was a Debian installation of the old days, but with the advantage that whenever the install drifts into instability you can just nuke everything and rebuild from scratch.
In fact, it’s all so automated, if you aren’t regularly blasting everything away and reinstalling, you’re doing it wrong.
No longer do I have a gigabyte or more of backups representing my ossified Debian install as a fallback. Instead I have a few kB of YAML and ansible to build everything.
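As a sketch of what those few kB of YAML might look like (host group, package names, and file paths here are all hypothetical, not the actual playbooks):

```yaml
# rebuild.yml: nuke-and-rebuild playbook sketch
- hosts: all
  become: true
  tasks:
    - name: Install the site's package set
      apt:
        name: [nginx, postfix, rsync]
        state: present
        update_cache: true
    - name: Deploy site configuration
      copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
```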
Case in point: Debian 10 ships with LXC 3. If I want LXC 4, I just script the build and installation. The script is what needs backing up; the OS is disposable. In the past I might have taken care to ./configure it to install in /opt, but why bother? It's more work (LD_LIBRARY_PATH needs setting), so just blast it into /usr/bin and don't lose sleep over treading on Debian's carefully pedicured toes. If it doesn't work out, reimage and try something else. It's wonderfully transient.
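A sketch of what such a throwaway build script could look like (the version number and download URL are placeholders, not taken from the comment):

```shell
#!/bin/sh
# Hypothetical script: build LXC from source and blast it over /usr,
# exactly because the OS is treated as disposable.
set -e
VER=4.0.0   # placeholder version
curl -LO "https://linuxcontainers.org/downloads/lxc/lxc-$VER.tar.gz"
tar xf "lxc-$VER.tar.gz"
cd "lxc-$VER"
./configure --prefix=/usr   # overwrite distro paths on purpose
make -j"$(nproc)"
make install
```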
I miss the old days, but I embrace the controlled chaos, volatility, and dynamism of the future. I can do what I want instead of what, in the past, was carefully curated for me by expert Debian maintainers. A bazaar to their cathedral, sort of.
I'm trying to phrase this as nicely as I can: this statement does not reflect the reality of modern day operations.
Edit: for the people downvoting:
1) How exactly do you think operations at companies like Google, Facebook, Amazon, Netflix, Uber, Spotify, etc. work?
2) How often are those companies having monthly outages?
Debian adds a lot of value in dependency management.
Debian adds a lot of value in community.
All of these things can be, and are, done by other organizations, but Debian is definitely doing them.
And you’re right about Debian Security Advisories. They are an important part of a stable base OS for anything internet facing. (The days of local users attacking each other via vulnerabilities seem less relevant in 2020.)
- Set and forget servers (enable automatic security updates and just let it run).
- Painless and guaranteed stable to stable upgrade paths.
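On Debian, the "set and forget" part usually amounts to installing the unattended-upgrades package; its stock debconf setup writes roughly this to /etc/apt/apt.conf.d/20auto-upgrades:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```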
Given that upstream OpenSSL itself would have the Heartbleed bug revealed a decade later, with all the revelations of its source code quality made more notable as a result, I'm willing to place more blame on OpenSSL than Debian here.
IIRC they asked a list that was not the main project list and not intended for security-critical questions. Code that actually goes in to OpenSSL gets a lot more scrutiny from the OpenSSL side, and other big-name distributions either have dedicated security teams that review changes to security-critical packages, or don't modify upstream sources for them. Debian is both unusually aggressive in its patching (not just for OpenSSL; look at e.g. the cdrecord drama) and unusually lax in its security review.
> Given that upstream OpenSSL itself would have the Heartbleed bug revealed a decade later, with all the revelations of its source code quality made more notable as a result, I'm willing to place more blame on OpenSSL than Debian here.
Heartbleed was a much more "normal"/defensible bug IMO. Buffer overflows are a normal/expected part of using memory-unsafe languages; every major C/C++ project has had them. Not using random numbers to generate your cryptographic key is just hilariously awful.
Interesting, I wasn't aware of the history of this vulnerability. For anyone else curious, here's an analysis of what happened:
No corporate installation ever adds popcon. Nobody sensitive to their privacy adds popcon. It's a small, self-selected group, and massively overrepresents desktops and laptops at the cost of servers.
Please provide a source to back that claim.
Also, preseeding can do much more than ansible/salt/puppet. It can produce 100% reproducible systems in under 5 minutes.
Case in point:
I manage a cluster, and our deployments are handled by XCAT. In 5 minutes I just enter the IP addresses, the OS I want (CentOS or Debian), and some small details. With three commands, I power up the machines, and in 15 minutes ~200 servers are installed the way I want, with the OS I want, identically and with no risk of drifting into anything.
The magic? XCAT generates Kickstart/Preseed files, and instead of pushing a minion and blasting through the installation, it configures the machines properly. Some compilation and other stuff is done with off-the-shelf scripts we have written over the years. More stable and practical than trying to stabilize a Salt recipe/pillar set or an Ansible playbook.
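For flavor, a preseed file is just a list of debconf answers fed to the Debian installer; a hypothetical fragment (values invented for illustration, not this cluster's actual config) looks like:

```
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/choose_interface select auto
d-i partman-auto/method string regular
d-i passwd/make-user boolean false
tasksel tasksel/first multiselect standard, ssh-server
d-i pkgsel/include string openssh-server rsync
d-i grub-installer/only_debian boolean true
```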
I only re-install a server if its disks are shot or it somehow corrupts itself (which happens once a year per ~500 servers due to some hw glitch or something).
The new ways are cool, but they don't replace the old methods. You're free to choose the one you like most.
What’s the non-cattle use case you are referring to? Systems that can’t be wiped because of important persistent user processes? That’s the only volatile state I can think of that would be lost by reimaging.
In our system, there are tens of storage servers under heavy load (while they are redundant, they are generally actively load-balancing) and more than 700 nodes which are cattle but under max load 24/7/365. The whole thing basically doesn't have any time or space to breathe.
While losing nodes doesn't do any harm, we lose some processes, hence lose time, and we don't want that.
Even if we can restore a node in under 15 minutes, avoiding that lets us complete a lot more jobs. We don't want to rewind and re-execute a multi-day, multi-node job just because something decided to go south randomly.
Our servers are rarely rebooted and most reboots mean we have a hardware problem.
The road is the goal? Some people actually use installed systems.
If everything improved so much, how come that the Internet is worse than in 2006?
Wiping and redeploying is the same idea with infra: just another part of disaster recovery, or even just regular recovery.
What facts are you basing this opinion on?
Hmm. You can install a basic userland with a custom kernel, with no particular dependency on a distribution (Let's use openSUSE today!) in just a few KB of YAML?
Have to admit--I'm skeptical.
> sudo darch images pull pauldotknopf/darch-ubuntu-$IMAGE
So your "build" is actually just grabbing what the distro already built and then using its infrastructure.
Your description sounded more like you were actually doing things from scratch or at least weren't relying on a distro.
There has been quite a bit of standardization and improved compatibility, so that you can choose relatively freely between the "big" distros. But that does not say they are a commodity but rather that all (five or so) of them are good.
FWIW, most stuff that you download from upstream sources will install under /usr/local/ by default, which is standards-compliant for your use case. (You don't generally need to use the /opt/ hierarchy.) Overriding that default and putting stuff in /usr/ is just breaking the OS for no apparent reason.
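In autotools terms (assuming a typical upstream tarball), that means the default already does the right thing:

```shell
./configure                  # default is --prefix=/usr/local
make && sudo make install    # lands in /usr/local/bin, /usr/local/lib, ...

# /usr is reserved for the package manager; only override deliberately:
# ./configure --prefix=/opt/mypkg   # hypothetical; needs PATH etc. set up
```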
It helps answer questions like partitioning and encryption that are hard to handle on a running OS, but really everything else can be done once the OS is installed and running. The development cycle of creating a preseed config that actually does everything you want is painfully slow compared to writing shell scripts / playbooks / cookbooks for a running system.
I think preseeding is still relevant with the advent of container / immutable operating systems such as CoreOS, and perhaps Nix too. The technology has changed and overlaps with configuration management tooling, but it only handles a small part of a server's lifecycle, leaving room for a proper CM tool.
Also, how often do you rebuild a playbook and a preseed file?
Honestly asking, no traps here.
When it's Wednesday afternoon and you are still riffing on ideas for how to configure a new service, the preseed edit-test-edit cycle is on the order of minutes instead of seconds, compared to a script run via ssh on a stable running system.
That makes a huge difference to productivity, for me.
I generally do service configuration at the post-install stage and if I have a working configuration just get that from our central storage. Or just write a small script and add to the post-boot steps of XCAT to run the commands and configure that stuff.
I configure the service by hand at one server, polish it, get the file (or steps) and roll it out.
So, preseed file stays very stable. We only change a line or two over the years.
Thanks for the answer BTW.
Edit: It's late here. There's packet loss between my brain and fingers.
I think if you are benefiting from a FOSS project day in and day out, donating "one cappuccino" once in a while would not be a major spending decision.
It's just a good start.
Nitpick: the cover still says "Debian Jessie"
I have used Ubuntu at work and at home for a while now and things have been surprisingly free from hassles. I don't mind moving to Debian if things work as smoothly as they do now.
You can use the free version of Debian and install what you need later. But as I was saying, if you're installing on a laptop with no ethernet cable, non-free is the way to go.
The time invested to learn and automate repetitive configuration or admin tasks (upgrades, installing dependencies, deploying a new version of your app, etc) through ansible and ansible-playbook pays for itself pretty fast. For common tasks there are often high level modules that provide a simple declarative interface to non-interactively configure the desired state compared to using the underlying command line tools directly. E.g.. installing/upgrading packages with apt: https://docs.ansible.com/ansible/latest/modules/apt_module.h...
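For instance, a single declarative task with the apt module (the package name here is chosen arbitrarily) replaces an interactive apt-get session and is safe to re-run:

```yaml
- name: Ensure nginx is present and the apt cache is fresh
  apt:
    name: nginx
    state: present
    update_cache: true
    cache_valid_time: 3600   # skip the update if the cache is recent
```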
It looks like the source is all in XML. Is it hand-written in XML, or is there some kind of UI for writing it? Writing everything in XML looks painful.
Too-little-known fact: with a small amount of CSS, mainstream WWW browsers can view most DocBook XML directly. Witness:
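A minimal sketch of such a stylesheet (element names are standard DocBook; the selectors and styles here are illustrative, not the actual stylesheet):

```css
/* Reference from the XML with:
   <?xml-stylesheet type="text/css" href="docbook.css"?> */
book, chapter, section, para, title { display: block; }
title    { font-size: 1.4em; font-weight: bold; margin: 0.8em 0; }
para     { margin: 0.6em 0; }
emphasis { display: inline; font-style: italic; }
```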
Related question for all the Debian people I guess will visit this page: is there a version of this for all the small stuff in Debian?
Background: a few days ago I installed Debian under WSL. Normally I used to install Ubuntu.
I've set it up to be comfortable (bash completion etc.), but I think someone here mentioned some other guide for setting up Debian machines; this handbook seems to be more about large-systems administration and less about configuring it as a developer workstation.
(I also did some cursory searching to see if I could find something about bash completion in this handbook, but couldn't find it.)
Thankfully, most packages are very well-documented, you can usually browse around /usr/share/doc/$PACKAGE to see the upstream documentation and possibly any Debian-specific information. HTML and PDF versions of documentation tend to be in an independent $PACKAGE-doc package, so as to keep the main package rather small.
I'll admit my question wasn't the best, so I'll try again:
I'm talking about a howto that describes how to apply a number of the small conveniences that Ubuntu has by default but that seem to be optional on Debian:
- bash completion
- man pages (at least they are lacking in the Debian WSL edition)
I can figure this stuff out (it only takes 5 minutes every time I notice something is missing), but I'm fairly sure I've seen a Debian enthusiast here saying there's a short howto on how to configure the end-user aspects of the Debian CLI, and that is the thing I'm looking for :-)
first look on an old system:
Most tools' bigger documentation comes as man and info files. If there's even bigger additional documentation, it comes in a -doc package (like apache2-doc).
Otherwise, every package comes with some basic documentation and a Debian-specific changes file, most of the time.
Debian's packaging rules are very strict. You just cannot package something however you like and publish it in the main repo.
I'd rather have individually compressed files instead of running a heavier FS layer with smaller/less powerful systems.
They're just compressed. Have you tried zless?
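The files under /usr/share/doc are ordinary gzip data, so zless/zcat read them in place; a self-contained demo of the same mechanism (using a throwaway /tmp file rather than a real changelog):

```shell
# Real files look like /usr/share/doc/<pkg>/changelog.Debian.gz.
printf 'demo changelog entry\n' > /tmp/changelog.demo
gzip -f /tmp/changelog.demo      # produces /tmp/changelog.demo.gz
zcat /tmp/changelog.demo.gz      # prints: demo changelog entry
```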
I still remember the HOWTO text files
In fact, Debian has a structure, and once you get the hang of it, you can just guess and do the thing you want.
Install Devuan (www.devuan.org) Beowulf instead. It's Debian Buster without systemd. Almost all of the administrator's handbook applies verbatim.
If enough of the Debian user base chooses Devuan, its minor diffs will probably be implemented in Debian mainline and we can get past this sad affair.
Also, I personally won’t be dropping systemd any time soon because the alternatives on Debian are:
- Go back to Sysvinit and have to write sysvinit scripts myself; or
- Use some even less common init system and have no package provided init scripts/units/what have you.
One can also pull in other people's run program collections, of which the world has several.
And there's a handy tool for what's left.
Note, for the sake of completeness, that van Smoorenburg rc scripts changed format on Debian back in 2014. Most of the boilerplate has been eliminated, and writing them is a lot closer to how one would write a Mewburn rc script on FreeBSD/NetBSD or an OpenRC script.
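For completeness, the reduced-boilerplate Debian format looks roughly like this (daemon name and path are made up; see the init-d-script(5) man page for the real interface):

```
#!/bin/sh /lib/init/init-d-script
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon (hypothetical)
### END INIT INFO
DESC="example daemon"
DAEMON=/usr/sbin/mydaemon
```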
Part of the arguments that led to the fork was the "viral" nature of systemd use, i.e. it's not just an opt-out option, but more integral than that.
At the same time - breaking the systemd dependency was not extremely involved technically. Very little code had to be written, and most of the work is tying things together at the distribution level.
Maybe, but Debian policy reserves the right to break your programs in future updates, right?
> Use some even less common init system and have no package provided init scripts/units/what have you.
If you just want to use the most common thing with everything being easily packaged, why would you be using Debian rather than, say, Windows? The things that traditionally set Debian apart from Windows are the same things that set something like Devuan apart from modern Debian, IME.
I'm pretty certain packages wouldn't see a change such as removing sysvinit support files unless you upgrade to a new major version (i.e. upgrade from Buster to Bullseye). In that scenario, sure, all bets are off. But in that scenario you could also just find the package missing too, so no amount of sysvinit scripts will help you if the binary itself is gone.
> If you just want to use the most common thing with everything being easily packaged, why would you be using Debian rather than, say, Windows?
Wat? A pretty basic Debian Buster box I set up recently has close to 200 systemd unit files from just 24 Debian packages. Are you seriously suggesting that I should remove systemd, install... some other declarative service manager, and then write/find appropriate unit files for all of those services?
> The things that traditionally set Debian apart from Windows are the same things that set something like Devuan apart from modern Debian, IME.
I'm glad you made sure to classify that as your opinion.