There's a lot of such misunderstanding here in this discussion, with people asserting, erroneously, that there's a standard, or at least conventional, behaviour for multiple proxy DNS server listings in /etc/resolv.conf that systemd-resolved is deviating from. In fact, there is a very prominent difference between two widely used pieces of software that predates systemd by decades, namely nslookup and the C library's DNS client: there is not one single conventional behaviour. This particular Frequently Given Answer touching upon the differences in the access patterns dates from 2001, and people are still making this conceptual error about how DNS works 16 years later.
There is absolutely no guarantee that DNS resolvers will be queried in order. The implementations of this vary drastically: some implementations send the request to all configured DNS resolvers and simply use whichever response comes back fastest.
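That "race everything" strategy can be sketched roughly like this (a toy illustration with stubbed-out network calls, not any real resolver's code; the addresses and answers are made up):

```python
# Toy sketch: query every configured server concurrently and take the
# first answer, mimicking resolvers that race all servers at once.
import concurrent.futures
import time

def query(server, delay, answer):
    """Stand-in for sending a UDP DNS query to `server`."""
    time.sleep(delay)          # simulated network latency
    return server, answer

servers = [
    ("10.0.0.1", 0.30, "192.0.2.10"),   # slow internal resolver
    ("10.0.0.2", 0.05, "192.0.2.99"),   # fast public resolver
]

with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = [pool.submit(query, *s) for s in servers]
    # Whichever resolver answers first determines the result,
    # regardless of its position in the list.
    winner, answer = next(concurrent.futures.as_completed(futures)).result()

print(winner, answer)   # → 10.0.0.2 192.0.2.99
```

Note that with this strategy, which answer you get depends on which server is fastest at that moment, not on the order of the list.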
They implemented a hack that seemed to "work" but made poor assumptions as to why. This is not the correct way to implement split-horizon DNS.
"The strategy is to cycle around all of the addresses for all of the servers with a timeout between each transmission."
Note that it does not specify an order -- the resolver determines that, and RFC 1034 even recommends keeping a weighted list of nameservers prioritized by response time:
"To complete initialization of SLIST, the resolver attaches whatever history information it has to the each address in SLIST. This will usually consist of some sort of weighted averages for the response time of the address, and the batting average of the address (i.e., how often the address responded at all to the request)."
...which is exactly what systemd is doing. The glibc behavior is a naive simplest-implementation approach, and if your infrastructure depends on an implementation detail of a specific libc resolver, you should install proper configurable resolvers and use those instead.
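The weighted-history idea from RFC 1034 can be sketched like so (a toy illustration, not systemd-resolved's actual code; the addresses, timings, and smoothing factor are all made up):

```python
# Toy sketch of the RFC 1034 SLIST idea: keep a weighted average of each
# server's observed response time and prefer the historically fastest one.
class ServerStats:
    def __init__(self):
        self.avg_rtt = 0.1      # optimistic starting estimate, in seconds

    def record(self, rtt, alpha=0.3):
        # exponential moving average of observed response times
        self.avg_rtt = (1 - alpha) * self.avg_rtt + alpha * rtt

stats = {"10.0.0.1": ServerStats(), "10.0.0.2": ServerStats()}
stats["10.0.0.1"].record(0.50)   # slow answer observed
stats["10.0.0.2"].record(0.02)   # fast answer observed

# The next query goes to whichever server has the best history.
best = min(stats, key=lambda s: stats[s].avg_rtt)
print(best)   # → 10.0.0.2
```

Under this scheme the server that answers fastest floats to the top over time, which is precisely why list order stops being meaningful.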
The fact that all nameservers are expected to return identical results is inherent in the design of DNS. It's one namespace, and if you need to split it, the client resolver is the wrong place to do it.
Here's a list of all RFCs related to DNS: https://www.isc.org/community/rfcs/dns/
In https://tools.ietf.org/html/rfc1034 this is discussed:
The resolver always starts with a list of server names to query (SLIST).
This list will be all NS RRs which correspond to the nearest ancestor
zone that the resolver knows about. To avoid startup problems, the
resolver should have a set of default servers which it will ask should
it have no current NS RRs which are appropriate. The resolver then adds
to SLIST all of the known addresses for the name servers, and may start
parallel requests to acquire the addresses of the servers when the
resolver has the name, but no addresses, for the name servers.
BIND9 supports 'views' (iirc) which allow for splitting queries between internal and external DNS. But that would be the resolver, as opposed to the user's desktop, implementing it.
SystemD is following a standard, while people are relying on an implementation -- one that has been stable for a long time and has become a de facto standard. Think of e.g. memcpy on BSD: even though the standard says memcpy between overlapping regions is undefined, the implementation does memmove in that case, because that's what people expect to happen. If you're replacing a system that has historically provided guarantees above and beyond those given by the written standard, you should emulate them, or you'll cause users a lot of pain.
SystemD devs have chosen to inflict pain on their users, hence the users are not happy.
There was at least a "de facto" standard, in that many Linux distributions' resolv.conf man pages said things like "If there are multiple servers, the resolver library queries them in the order listed." (a verbatim quote from one Linux distribution; I concede, though, that different distributions may not have spelled that out)
Once one of these is received, it's been my expectation that the query should be marked as failed and not reattempted.
systemd appears to be the only project where getting rid of legacy cruft is seen as a bad thing. Having a DNS setup that works in a different way when stuff fails should be seen as a bug; DNS is supposed to be reliable.
Now you have. This is a common configuration in mixed-use situations.
> I'm amazed people are relying on that behaviour described in the ticket
Why? Using each server in order is the expected behavior. From resolv.conf(5)
nameserver Name server IP address
Internet address of a name server that the resolver
should query [...]. If there are multiple servers,
the resolver library queries them in the order listed.
[...] (The algorithm used is to try a name server, and
if the query times out, try the next, until out of
name servers, then repeat trying all the name servers
until a maximum number of retries are made.)
You can call it a bug in your use cases, but having an imperfect fallback is important in some situations.
> DNS is supposed to be reliable.
Software is supposed to be lots of things, but good engineering understands that sometimes the real world deviates from these ideals.
What I've never heard of is people relying on that to get DNS results correct. If your first server doesn't work, you _get the wrong results_. I don't understand on what planet that is acceptable sysadmin setup.
Split DNS zones have been a thing for years. This isn't a good substitute.
One of the reasons the IETF likes multiple running implementations of an RFC is they often have different side effects and so they call out things which might hang people up.
The "correct" way to shadow, and to achieve user required semantics, is to create a DNS proxy that answers queries that it is supposed to and recursively getting results for ones that it doesn't.
Correct here is in scare quotes because it doesn't mean it is right but instead just insures side effects of the libraries will not be a problem.
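For instance, dnsmasq can act as exactly that kind of proxy with a one-line configuration (the zone and address below are invented for illustration):

```conf
# Hypothetical dnsmasq.conf fragment: queries under corp.example.com go
# to the internal server; everything else goes to the normal upstreams.
server=/corp.example.com/10.0.0.53
```

This keeps the split-horizon logic in one place, instead of depending on every client library's server-selection quirks.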
Well, congratulations, there's a first time for hearing everything.
Ordering of DNS servers matters. For instance, you might have a fast one near-by (perhaps on the same machine or LAN segment); and a fallback one that is more costly to use (few hops of latency, more network resources).
In that case, what you should never have heard of is people pointing to more than one DNS server simultaneously. They should have the one and only one which is reliable and that's it. (In fact, that is not an uncommon scenario; that DNS server does the recursion to others if it doesn't have the info.)
Yes, in terms of performance it's pretty common that the order matters, but the bug report is about the order mattering in terms of result. This is less common and seems like a bad idea.
UDP does not strike me as a reliable protocol.
The glibc behavior is the standard on Linux, and FreeBSD (I don't know about the other BSDs) has the same behavior IIRC.
Changing the de facto standard is not something that should be done lightly...
In much the same way clang implements most of gcc's extensions to C and C++, and even defines gcc version macros, so people can simply drop-in replace gcc with clang. It could not implement any of them, but then it would not be very useful to people.
Most of those extensions either exist for a reason, or were an implementation detail that happened to be very useful to people, hence the behaviour is kept in. Now the SystemD developers have removed that behaviour because it's not mandated by a specific piece of paper, even though it's expected by a significant portion of users. Users are not happy.
For me, the second set of users ought to be getting priority, because it's a vastly common setup and users experience time outs as software failing. Ensuring that the system copes normally in the face of DNS resolver failure is a very useful and important feature.
It's a classic case...should the software follow the spec, or the real-world usage? And people complain either way.
A common use for this behavior is VPNs: you sign onto a VPN, and the VPN's DNS server gets prepended to the list in resolv.conf, which allows you to resolve internal addresses (and still fall back to your regular DNS). This is a crazy-common use case, and it's incredibly arrogant of Poettering to just dismiss it out of hand, or even attempt to argue with it.
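The resulting /etc/resolv.conf typically ends up looking something like this (addresses invented for illustration), and it only does what the user wants if the first entry is actually tried first:

```conf
# Sketch of resolv.conf after a VPN client prepends its resolver
nameserver 10.8.0.1     # VPN's internal DNS, expected to be tried first
nameserver 192.168.1.1  # regular LAN resolver, expected to be the fallback
```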
It looks like systemd-resolved implements something similar: https://www.freedesktop.org/software/systemd/man/systemd.net...
The fact that there is an incredibly common use case that consists of an unreliable workaround for people who don't have a local resolver that does this correctly is hardly reason to avoid writing a local resolver that works correctly.
Really? My local DNS server should return the same results as 22.214.171.124? Should one never use unregistered suffixes?
What one should do is only choose DNS servers capable of properly resolving all addresses, as it shouldn’t be the resolver’s responsibility to “shop around” for answers.
Fixing VPN configurations that complicate this should be the responsibility of VPN client software — by providing a forwarding DNS server that falls back to the originally configured servers, say — not the resolver, and similarly for any other cases where standard resolver behavior does not suffice.
Application software, for its part, shouldn’t generally rely on implementation-defined resolver behavior, as doing so is obviously not portable, but that’s less sinful than a VPN relying on client library-level support to work around the fact that it’s exposing a broken network configuration.
Note that I have no axe to grind one way or another in the systemd debate: Linux isn't my primary OS, but I regularly use and abuse both systemd and non-systemd Linux systems, and all the Linux init systems seem to work at least as well as any other system I regularly use (OS X, FreeBSD, Windows) for most practical purposes.
Strictly speaking, what you are talking about here, in RFC 1034 terminology, is a "stub resolver", the Unix model where there's a fairly dumb DNS client library running in the applications, talking to an external program that actually does the grunt work of query resolution by making back-end transactions and building up the front-end response from them.
A "resolver" actually very much does "shop around for answers", amongst content DNS servers. There's even a (slightly faulty) description of its shopping around algorithm, in RFC 1034 section 5.3.3.
"resolver" is such a confusing term, and so often used contrary to the RFCs in the way that you have here, that years ago I took to explaining the DNS to people using terminology borrowed from HTTP: proxy servers, content servers, and client libraries linked into applications.
If you have an internal DNS resolver which handles some private DNS entries, you should not set up any client with that resolver AND a second resolver which does not also have those entries.
So what the OP is saying is that the example you provided is something you should never do.
So to clarify the statement: "all DNS resolvers defined on a given client are expected to return the same results"
I don't have a strong position on that argument. Just trying to state it more clearly.
And no, you shouldn't just blithely assume that you own parts of the DNS namespace that you do not, either. There are plenty of other people's mistakes to learn from here. Don't repeat them.
If they're internal services, the namespace and the trust hierarchy can be private just fine.
Running your own CA works but has two caveats: first, I don't really trust myself to run a CA competently given that almost nobody who runs a CA as their only business function is doing so competently, and second, you have to get your config onto every device on the network. If you let personal devices on the network or even just hard-to-configure things like Chromecasts, you have to get your config onto all of them somehow.
It's quite a bit simpler and more secure to just use .corp.example.com or whatever.
Sure, it's most suitable for a homelab where you can import the certs, with suitable discretion, for yourself only.
>Running your own CA works but has two caveats: first, I don't really trust myself to run a CA competently given that almost nobody who runs a CA as their only business function is doing so competently
Not a massive concern since the context is internal only, but I hear you.
>and second, you have to get your config onto every device on the network.
Yep, this can be tricky in a less controlled environment. I had two scenarios in my mind - Home lab, and enterprise, when I penned that response. There's definitely a third one here, that you have in mind, that I didn't cover.
>It's quite a bit simpler and more secure to just use .corp.example.com or whatever.
Definitely the best option for small to medium companies.
You don't have to deal with any of the issues of public CAs because you control all the machines, running one in a corporate environment is no issue at all and there are plenty of existing products that will do almost all of the work for you.
> you have to get your config onto every device on the network
Which is a single task in an Ansible playbook, Puppet module, or Task in the Windows Task Sequence.
> you have to get your config onto all of them somehow
Usually not, you just need to get your CA on all the devices that are going to connect to your hosted services.
If the order is predictable, I can do some shadowing: if A is searched before B, I can put an entry into A which blocks certain sites or redirects them.
Even if we just have one server A, there is still an order! The recursive DNS search implements that order. Each server in the chain has the option to return its own local result, which doesn't have to coincide with the next server that is used.
So, that is to say, DNS is implicitly ordered from the leaf server you're using up to the root one.
First of all, DNS queries are usually made with UDP. There is no ack or retransmission with UDP, and the network stack is free to drop messages if necessary, such as if a queue fills up or the network is congested. (Hence it has the nickname "Unreliable Datagram Protocol".)
Second, "man resolv.conf" has this to say about what happens when you have multiple DNS servers listed: "The algorithm used is to try a name server, and if the query times out, try the next, until out of name servers, then repeat trying all the name servers until a maximum number of retries are made."
Consider what happens when you combine those two together. You roll the dice and send a UDP request to the first server. Then it rolls the dice by sending a UDP response to you. If that doesn't work, you send to the second server. If that doesn't work, then you try the first server again.
TLDR: In the case of lost packets, which you must assume happen because it's UDP, it alternates between servers. The only thing that's guaranteed is which one comes up in the rotation first.
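A toy simulation of that quoted algorithm (not glibc's actual code) makes the alternation concrete; attempts whose index appears in `dropped_attempts` stand in for lost UDP packets:

```python
# Toy simulation of the resolv.conf(5) algorithm: try each server in
# order, on timeout move to the next, and wrap around until a maximum
# number of retries is reached.
def resolve(servers, dropped_attempts, max_retries=2):
    """Return (answering_server, attempt_log)."""
    log = []
    attempt = 0
    for _ in range(max_retries):
        for server in servers:
            log.append(server)
            if attempt not in dropped_attempts:
                return server, log   # this server's response arrived
            attempt += 1             # timed out; try the next server
    return None, log                 # out of retries: resolution fails

# No loss: the first server answers, as everyone expects.
print(resolve(["A", "B"], dropped_attempts=set()))    # → ('A', ['A'])
# One lost packet: suddenly the *second* server is the one answering.
print(resolve(["A", "B"], dropped_attempts={0}))      # → ('B', ['A', 'B'])
# Two lost packets: back to the first server on the second pass.
print(resolve(["A", "B"], dropped_attempts={0, 1}))   # → ('A', ['A', 'B', 'A'])
```

So even under the "in order" algorithm, a single dropped packet is enough to change which server produces your answer.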
There is not a "chain" of servers, each passing on queries to the next one along. There is a resolving proxy DNS server that takes a front-end query, makes a bunch of back-end query+response transactions with content DNS servers, and stitches their results together to make the final front-end response.
The aforementioned DNS client library can be given the IP addresses of multiple such resolving proxy DNS servers, but there has not been one single conventional order of querying them for far longer than systemd has been around. Different DNS client libraries do it in different ways. nslookup's internal DNS client library does something different to the ISC DNS client library that forms part of many C libraries, as I have already pointed out in this discussion.
The whole "chain of servers" model is one of the commonest misunderstandings of the DNS, in my personal fairly long experience of people with DNS problems. It is quite wrong.
So much for CDNs or internal networks then.
Is that assumption true anymore, given a world where some ISPs issue redirection for unfindable domain name?
EDIT: Also, Comcast shut down their Domain Helper as it is fundamentally incompatible with DNSSEC
It's good to have the knowledge that Google servers are compiled-in to resolved.
(Also, the one below points at more Google infrastructure...)
Already the line numbers are slightly different
It may be worth pointing out that dnsmasq has a similar behavior by default and points out the correct way to override specific hostnames/zones:
Q: My company's nameserver knows about some names which aren't in the
public DNS. Even though I put it first in /etc/resolv.conf, it
doesn't work: dnsmasq seems not to use the nameservers in the order
given. What am I doing wrong?
A: By default, dnsmasq treats all the nameservers it knows about as
equal: it picks the one to use using an algorithm designed to avoid
nameservers which aren't responding. To make dnsmasq use the
servers in order, give it the -o flag. If you want some queries
sent to a special server, think about using the -S flag to give the
IP address of that server, and telling dnsmasq exactly which
domains to use the server for.
Meanwhile, systemd has actual functionality for scoping your VPN domain names to your VPN's DNS server only: https://www.freedesktop.org/software/systemd/man/systemd.net...
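A sketch of that configuration, per systemd.network(5) (the interface name and address are invented; DNS= and Domains= are the documented options, and the ~ prefix marks a routing-only domain):

```ini
# Hypothetical .network fragment for a VPN link: only queries under
# corp.example.com are routed to the VPN's DNS server.
[Match]
Name=tun0

[Network]
DNS=10.8.0.1
Domains=~corp.example.com
```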
Systemd is known to butt into areas that are not, and should not be, its concern. It's also known for doing things half-baked, or for changing things just because it can.
So I'd very much love to see systemd's fingers kept completely out of the DNS resolver; that way we wouldn't be having this conversation at all.
Correct, which is why it's not part of /bin/systemd or any other init/pid 1 process.
Having one monolithic executable or ten executables that increasingly demand each other is the same thing. Oh wait, its name also suggests it is part of the package, being named systemd-resolved.
There are a number of reality distortions like this one that systemd people have tried to perform over the past several years.
Having a "technical conversation" would require you engaged with the person you were replying to, not engaging in what can only be described as "technical trolling".
For the record, the vocal "technical trolls" are why I don't like systemd, despite having little opinion on the technical merits.
I think it is relevant that the current state of affairs (with glibc) isn't actually better at preventing DNS leakage, and that systemd-networkd has relevant functionality specifically to address the underlying use case in a way that reliably blocks DNS leakage.
I also think it is relevant that the functionality is not in pid 1 and (although I didn't say it, and maybe I should have) there is no particular obligation or desire on the part of systemd's pid 1 binary to use systemd-resolved. They're developed in the same project, and there are some nice things you get by using them together, but you don't have to, and you experience no loss of functionality as compared to the status quo ante systemd if you choose to use systemd as init plus the glibc stub resolver. (In particular, I think that's the standard way most distros configure systemd.)
I was probably rude in my reply about how this functionality is not in pid 1, and for that I apologize, but I don't think I was irrelevant. Maybe I don't understand the objection to having systemd-resolved developed alongside an init system? (Developing everything together and using common routines where possible is the traditional UNIX approach, epitomized by the current BSDs.)
> there are some nice things you get by using them together
These two statements contradict each other, and you said them one after the other.
The accusation of trolling stems from this behavior: if you're not even being consistent for the span of two sentences, why should I believe you're advancing a thoughtful, genuine argument? The evidence you present (by so flippantly contradicting yourself) is against that.
> I was probably rude in my reply about how this functionality is not in pid 1, and for that I apologize, but I don't think I was irrelevant.
I believe you misunderstood the objection to the two pieces of software being coupled in your rush to correct a technical point -- it's completely irrelevant if the two processes both run under the same pid as long as it's the case that "there are some nice things you get by using them together".
The concern is that a process which shouldn't be coupled to the process of DNS resolution at all is injecting these "nice things" through coupling, and that systemd is making a whole suite of such software around their init system. Your reply, well, doesn't address that concern, and doesn't seem to even be aware that is the concern.
> Maybe I don't understand the objection to having systemd-resolved developed alongside an init system? (Developing everything together and using common routines where possible is the traditional UNIX approach, epitomized by the current BSDs.)
The concern is inappropriate coupling via these "nice things" that gives systemd inappropriate sway over what non-init portions of the system are, while degrading the overall resilience and versatility of components, and that over time, the process to replace systemd will require replacing substantial portions of the OS, rather than just the init system.
OK, thanks, I understand the technical disagreement now. I don't believe they contradict. (And I'd appreciate your help in understanding this, since I apparently am causing people to think I'm being disingenuous!)
If you use systemd the pid 1 binary by itself, with the usual glibc stub resolver, you get a pretty good init system.
If you use systemd pid 1 along with systemd-networkd and systemd-resolved, you get some particularly fancy features relating to tying service management to network state (for instance, I think this combination is required to let you reliably implement the feature where VPN DNS queries are sent to the VPN only, by correlating the DNS server config, the VPN interface, and the VPN process).
However, if you don't want these features, you can totally use systemd pid 1 by itself. And in fact my understanding is Debian/Ubuntu-based distros configure systemd, maybe halfheartedly configure networkd but use ifupdown and /etc/network/interfaces, and don't configure resolved.
Therefore, I said that there is no need (obligation) for systemd, the init system, to use systemd-resolved, nor is there any desire, in the sense that it works better as an init system. It gives you new features related to DNS resolution, but if you're not interested in tying your init to DNS resolution, it doesn't work any worse. (This is different from how, e.g., systemd works slightly worse without libnss-myhostname if your current hostname doesn't resolve, or systemd works I think a good bit worse without udev, or systemd doesn't work at all without cgroups, which is a design decision that I really dislike.)
> The concern is inappropriate coupling via these "nice things" that gives systemd inappropriate sway over what non-init portions of the system are
I am somewhat sympathetic to this argument, in that it seems to be not super straightforward to write an API-compatible drop-in replacement for systemd components, if you happen to like systemd's design but not its implementation (as I do).
However, the alternative is that these "nice things" simply cannot exist at all. I don't think these are inappropriately coupled; if you want to reliably correlate long-running processes and virtual network devices, your service management tool has to keep track of them together. (Admittedly, this tool doesn't need to be pid 1, which is another point of disagreement I currently have, and one I've changed my mind on a few times. It can easily be pid 2.) And given that the nice thing is, in my understanding, the exact thing that was requested at the top of this thread (reliably preventing VPN DNS leakage), it seems like it would be making an incomplete analysis to completely disregard this tradeoff.
(Also, I think that systemd's APIs are actually documented enough that you can write the drop-in replacement if you really wanted to, and that systemd upstream is open to documenting these APIs where they're undocumented.)
Furthermore, while I might be wrong that this is the only way for the "nice things" to exist, I don't think that I am so wrong that this argument is in bad faith / trolling. (Maybe I am, maybe there's something obvious I'm missing?)
That is: it was already broken; systemd-resolved just made the breakage more visible. (If I understood it correctly, it only switches after the first "glitch", so the only difference is that it doesn't switch back until the second server also "glitches".)
1. Is it actually that bad for a desktop?
2. Is it actually that bad for a server?
3. If yes to either, how suitable are the alternatives (e.g. Gentoo, Slack, or BSD)?
As a desktop user, systemd has been a godsend. Now everything works the same on every distro. No more checking the website of each distribution to know where to put things you want to run on boot.
I also migrated everything in my computers to systemd-networkd and systemd-resolved and systemd-timesyncd and it is just so fast and always work reliably (again, for a desktop usage).
journalctl is fantastic: now every single line of log has the same format as every other, and you can filter exactly which service output which log line at which time. Versus the old way, where you would always have that rebel service that logged stuff in some format that would break the grep command you were trying to write.
But here are my systemd observations:
- log files now are binary and accessed by `journalctl` - it's a bit less convenient than looking for text files in `/var/log`
- unit files are a miracle. For the 7 or so years before systemd, I wrote one unportable init script, and I hated it. Now I churn them out whenever I feel like it - they are standardized and the docs are incredible.
Apart from that, nothing really changed from my POV, and I use systemd on desktop as well as in small servers.
I find it more practical to be able to filter logs in any way I want (get everything that concerns this device or this service or this user) rather than in a single way.
Yes, there are other systems that do all these things, but having this actually be implemented out-of-the-box is one of the amazing things too.
(Note that this is different from saying that systemd is flawless code or that I agree with the systemd maintainers' approach to everything. I do actually have a good number of disagreements. But they've made a working system that solves real problems.)
If it was that bad for the average consumer, the major mainline distros wouldn't have all adopted them.
It was my understanding that the major mainline distros adopted it because Red Hat coerced Gnome into adopting it, and everyone feels the need to support Gnome.
And major distros all had long discussions and debates on public mailing lists where the details of why they adopted it is discussed.
One thing I've seen systemd newbies do that I have to shake my head at is "journalctl | grep sshd", and that's entirely the wrong way to do it; the journal is indexed by unit, so "journalctl -u sshd" does the filtering for you.
$ time journalctl -r | head -n 50
journalctl -r 0,01s user 0,04s system 0% cpu 5,289 total
head -n 50 0,00s user 0,00s system 0% cpu 5,287 total
"journalctl -r -n 50" takes 5.4 seconds as of a test just now from (this time, artificially) cold cache.
2) No, on a server, I prefer systemd. I write just a small handful of units for services, and it's so so so much simpler to do basic stuff with systemd than trying to put together init scripts to do the same.
My use cases don't depend on a long history of legacy stuff to support, so for me, systemd is simply a win, full stop.
I for one appreciated the opinion, because it answered my question.
So this isn't a case of systemd-resolved deciding to arbitrarily do things in a new and broken way; rather it's systemd-resolved still being rather immature (but for some reason already ending up as the default on lots of people's systems).
>If there are multiple servers, the resolver library queries them in the order listed.
> (The algorithm used is to try a name server, and if the query times out, try the next, until out of name servers, then repeat trying all the name servers until a maximum number of retries are made).
So it seems there is a de facto standard for the default libraries in some *NIX OSes?
One can even trace its spread from, say, 386BSD through NetBSD to OpenBSD.
Manual pages are not standards, not specifications, but implementation documentation. As Chuck McManis said in this very discussion, to misuse them as standards or specifications is to mistakenly treat implementation as architecture.
The BIND DNS client library is far from the only DNS client library in existence. Other DNS client libraries read /etc/resolv.conf, as part of a BIND DNS client compatibility shim or otherwise. They do not have the same implementation details as the BIND DNS client; they do not all have the same access patterns as the BIND DNS client. Indeed, as I have mentioned elsewhere in this very discussion, even other tools from the same origin, namely nslookup, read /etc/resolv.conf and have different access patterns.
There is not a single common conventional, let alone standard, behaviour to rely upon.
None of your links contain the text "resolve.conf".
Nslookup is an end user diagnostic tool and would not be expected to conform to any standard, even if there was one.
With dnsmasq: https://news.ycombinator.com/item?id=15231319
With systemd: https://news.ycombinator.com/item?id=15232027
That's tough to swallow.
And in such a breaking way?
For caching and DNSSEC validation at the system level instead of needing to be handled ad hoc by each individual framework or application.
> And in such a breaking way?
Systemd designers optimized their new resolver's behavior for one common use case, but apparently never considered that it breaks another (undocumented) use case. That seems like an example of a lack of real user testing for an important feature before cutting a new release.
Given the hostility and generally crappy attitude of the systemd's maintainers, this reads to me as "Reinventing the wheel and throwing out decades of knowledge because shiny is better".
The people running the Systemd project abuse this confusion. In the minds of most normal people, when you prefix a name with another name, you're implying a deep association: Apple Watch, Apple iPhone. Systemd journal, systemd resolver.
When people criticize Systemd's continued absorption of Linux system services, its defenders point at it being "modular" (despite it being a rapidly moving target with no real alternative replacements - i.e. modular in the purely theoretical sense of the word), and then take criticisms of the individual parts as criticisms of the whole in an attempt to shout the detractors down, strawman-style.
Most of the problems with systemd can be chalked up to its leadership and utterly toxic cultural impact, rather than its technical merits (or lack thereof).
Most people with a problem with systemd don't have a problem with systemd as code - they have a problem with systemd as this creeping monstrosity that keeps eating services that have worked fine for decades in the name of objectively worse replacements. Did name resolution, NTP, and so forth really gain anything tangibly meaningful from their assimilation, other than integration with systemd? Did those services gain more than they lost in annoyances by system admins troubleshooting the changes?
We arguably needed a new init system (and I'll just gloss over things like Upstart and OpenRC). We did not need yet another new NTP, DNS, cron, logger, auditd, and fucking QR code library tied into the system internals!
No, just no. Nobody gets confused by GNU Emacs being prefixed with the same brand as GNU tar or GNU make.
> Did name resolution, NTP, and so forth really gain anything tangibly meaningful from their assimilation, other than integration with systemd? Did those services gain more than they lost in annoyances by system admins troubleshooting the changes?
As a user, I certainly gained much from only having to learn a single, predictable config format that GUI tools can parse easily, versus a different config format for each and every service under the sun. The only thing it misses, in my opinion, is power management (e.g. upsd / nutd).
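For what it's worth, that shared format is plain INI-style text; the same [Section] / Key=Value shape covers services, timers, and more. A sketch with made-up unit and binary names, shown for the format only:

```ini
# /etc/systemd/system/example.service -- hypothetical unit, illustrating the format
[Unit]
Description=Example daemon
After=network-online.target

[Service]
ExecStart=/usr/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```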
Emacs is just Emacs to most, not GNU Emacs.
>I certainly gained much from only having to learn a single, predictable config format that GUI tools can parse easily
Linux's primary use is on servers, where there usually isn't a GUI and which already had conventions for config formats, usually delineated by Debian-like or RHEL-like. For people that run servers (read: most Linux users by far), these changes are obnoxious.
Is it? Nowadays 3% of internet traffic comes from Linux computers. That doesn't sound like much, but it makes for more than 115 million users.
Desktop is 90% Windows, mobile is 70% Linux (by way of Android), and the server space is 60% Linux. Any way you slice it, Linux is responsible for a lot more traffic.
Political capture. Now we're saddled with this dysfunctional product because only a couple of niche distros are taking the effort to still give their users a choice.
>If you disagree, use a Linux distro that doesn't, or make your own.
Rather, I'll do what I continue to do, and point out the ways in which it sucks and is a net negative on Linux.
And in my personal experience, using systemd has been much less of a nightmare than the old ways, especially when it comes to cross-distro compatibilities.
edit: here is some more justification for Debian's choice: https://wiki.debian.org/Debate/initsystem/systemd
I think it's rude to assume that people who run these distros didn't think about the technical aspects.
The confusion is your own. You somehow came to believe that systemd is a project that only builds an init system.
You could have arrived at the same confusion with Apple, to use your example; you could have thought "Apple only makes iPods... wait, they're making phones? Apple is abusing confusion by doing more things".
The systemd project is fairly similar. All those things are closely related; they're made by the same people and work well together. They are not all related to init, just like everything apple makes isn't related to the ipod or the apple II or whatever the first product of theirs you heard of is.
> they point at it being "modular" (despite it being a rapidly moving target with no real alternative replacements - i.e. modular in the purely theoretical sense of the word), and then take criticisms of the individual parts as criticisms of the whole in an attempt to shout the detractors down strawman-style.
That does happen some, but I think that's largely because the detractors of systemd constantly turn criticism of any small component to criticism of the whole thing. It's the same fallacy in reverse.
Also, systemd is not modular in the sense that components are perfectly interchangeable; rather, the majority of systemd components are optional, and you can keep using whatever you were using before (e.g. journald can be configured to forward to rsyslog or whatever your old syslog daemon was and to not save to disk, leaving things as they were; resolved is optional, timesyncd is optional, networkd is optional, etc.).
All of that optionality is not theoretical. There are distros that take advantage of it, up to the extreme of running full Linux systems with no systemd at all.
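Concretely, the journald knob for keeping an existing syslog daemon in charge is a couple of lines in journald.conf (Storage= and ForwardToSyslog= are the real option names; the values below are one way to set it up, sketched from memory):

```ini
# /etc/systemd/journald.conf
[Journal]
Storage=none          # journald keeps nothing on disk itself
ForwardToSyslog=yes   # messages are handed to the traditional syslog socket
```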
> Most of the problems with systemd can be chalked up to its leadership and utterly toxic cultural impact, rather than its technical merits (or lack thereof).
I think the toxic cultural impact is a huge problem with systemd, but that impact came not from systemd, but from trolls around it, from people spreading FUD, etc. I don't actually think the systemd leadership or project has actively worked to create such an environment, but rather all of systemd's detractors have intentionally made it a polarizing and political subject which resulted in a toxic environment.
Proving this in either direction is near impossible unfortunately, so we could each see different reasons for the same environment.
> creeping monstrosity that keeps eating services that have worked fine for decades in the name of objectively worse replacements
If it were objectively worse, distros wouldn't be adopting it. Have you ever read syslog-ng's code? bind's? glibc's? Have you ever configured your network with rc scripts and hacks? Have you then tried to do the same network configuration on a different distro and found it to be different?
It's clearly better in some metrics in that distros have switched to it.
Because of the cultish political toxicity around the project, all of systemd's issues get amplified a hundredfold and make a perception of it being worse, but people only rarely see the convenience and benefits it brings.
> Did name resolution, NTP, and so forth really gain anything tangibly meaningful from their assimilation
Networking gained consistent configuration formats which don't vary by distro. NTP didn't really change meaningfully (though I like that it's an NTP client, not a client and server bundled into one for no discernible reason). It did gain better man pages and a better CLI, but that's only a small detail.
Name resolution gained a number of security features that typically weren't available.
Networking in general gained ordering with services, which was nearly impossible before and was typically powered by luck and spit.
> We did not need yet another new NTP, DNS, cron, logger, auditd, and fucking QR code library tied into the system internals!
Maybe, maybe not. It turns out that once you're starting, restarting, and managing FDs for processes, it's really convenient to also integrate with a logging system, and syslog was utter garbage for that sort of integration.
Once you've got the facilities an init has for starting and stopping processes, it's also pretty natural to do a cron-like thing; that's just starting and starting processes too (and timers are much more featureful than cron, including things like '4 hours after the last time this exited', random skew, and much more readable formats).
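As a sketch of those timer features (unit names are hypothetical; OnUnitInactiveSec=, RandomizedDelaySec=, and OnCalendar= are the actual directives):

```ini
# backup.timer -- hypothetical; activates a matching backup.service
[Unit]
Description=Run backup 4 hours after the previous run finished

[Timer]
OnUnitInactiveSec=4h        # relative to when backup.service last deactivated
RandomizedDelaySec=10min    # random skew so a fleet doesn't fire in lockstep
# OnCalendar=Mon..Fri 02:00 # calendar form, arguably more readable than cron's fields

[Install]
WantedBy=timers.target
```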
Once you're managing processes, it also kinda makes sense to manage containers; after all, containers need something to start/stop/manage them too...
and once you're managing containers, networking tools, logging, and so on make even more sense.
Almost like systemd was pitched to solve the init problem, and then all this other crap winds up coming along for the ride? Naming a thing systemd-something does not remove that perfectly reasonable logical assumption that the thing that handles init is now handling something else.
Then again, I'm not the one that chose to start with an init system and then suffix all the other stuff I'm assimilating with the same name.
>That does happen some, but I think that's largely because the detractors of systemd constantly turn criticism of any small component to criticism of the whole thing.
Is nobody allowed to be concerned about the creeping monoculture?
>..rather the majority of systemd components are optional
"Optional" in the sense of if you want to compile your own system and run your own homegrown flavor of Linux, sure. Meanwhile, most people are going to be stuck with the defaults and the feature creep - and those defaults are what will matter when you go looking for help.
>I think the toxic cultural impact is a huge problem with systemd, but that impact came not from systemd, but from trolls around it, from people spreading FUD, etc.
First, you just did that thing we were talking about. It is possible to dislike systemd and its implications without being "a troll". Second, it's not the detractors that are responsible for Poettering et al.'s general poor attitude towards everyone else's place in the ecosystem. That attitude was exemplified by the kernel message overflowing problem. It's exemplified when systemd maintainers open PRs to something like tmux to put in systemd-specific code. It's exemplified when the maintainers pressure downstream distros to pressure the kernel devs. It's exemplified when they lie about compatibility promises. On and on and on. There's a wiki somewhere that I lost the link to that includes pages of this kind of crap happening again and again.
It's the attitude that turns people off, myself included. Systemd (the project) is not a good citizen, it is the thing that you should make allowances for and get out of the way of. I am not a fan of this style of political wheeling and dealing.
>Have you ever read syslog-ng's code? bind's? glibc's? Have you ever configured your network with rc scripts and hacks?
You mean the exhaustively documented "rc scripts and hacks" that existed on every distro pre-systemd and worked perfectly well on those distros? /etc/network/interfaces for Debian-likes, /etc/sysconfig/network-scripts/ifcfg-(interface-name) for RHEL-likes. That's 90% of server/desktop land covered.
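For illustration, the Debian-style file referenced above was about this simple (interface name and addresses are made up):

```
# /etc/network/interfaces (Debian-style static config)
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```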
The thing that is coming in to change everything is what needs to justify itself, not the other way around. Normally I'd be all for the technical merit by itself, but time spent learning/unlearning/troubleshooting/etc. is a thing too. I work in a mixed Ubuntu environment, since around 8.04ish, and lemme tell ya, unlearning ~10 years of muscle memory every time I want to tweak services on a 16.04 machine is a pain in the ass that regularly draws curses on systemd's name from a not insignificant number of people.
The typical response to this is to insult people who dare bring up this concern as "backwards" or "old fashioned" (read: appeal to novelty). And I could put up with that change no problem if there was something objectively better about the replacement.
Yet, there is not. What I have works fine right now, and I gain no new benefits from all of this wheel-reinventing. Point to a tangible benefit. No, "universal network config" is not a tangible benefit in my book, because I don't see a Linux-wide monoculture as a positive.
>It's clearly better in some metrics in that distros have switched to it.
Argumentum ad populum. Once RHEL (which was a given since they're basically the corporate backer) and Debian bought in, the discussion was effectively over.
As to the rest of your post, yes, once you're in a position of power and can basically dictate standards that everyone else is pressured to follow (going as far as the Linux kernel wrt. kdbus), it makes sense to absorb as much as you can.
I'm not accusing Systemd (the people) of mendacity or evil. I'm accusing them of having a shitty, hostile attitude that I don't have a (meaningful) choice in avoiding unless I want to spend all my days reconfiguring stuff that wasn't broken in the first place. Most of the benefits systemd brings to the table just from an everyday server standpoint really aren't that huge at the end of the day, at least not anywhere near huge enough to justify the headache caused by its adoption.
Ensuring the compatibility of software before replacing it is a distribution's job.
I'm more and more becoming convinced that the systemd ecosystem should be considered a regression.