System Down: A systemd-journald exploit (openwall.com)





Yet another proof for the following:

1. It's reasonable to claim that amd64 (x86_64) is more secure than x86. x86_64 has a larger address space, thus higher ASLR entropy. The exploit needs 10 minutes to crack ASLR on x86, but 70 minutes on amd64. If alert systems have been deployed on the server (the attacker needs to keep crashing systemd-journald in this process), it buys time. In other cases, it makes exploitation infeasible.

2. CFLAGS hardening works. Together with ASLR, it's the last line of defense for all C programs. As long as there are still C programs running, patching all memory corruption bugs is impossible. Using mitigation techniques and sandbox-based isolation are the only two ways to limit the damage. All hardening flags should be turned on by all distributions, unless there is a specific reason not to. Fedora has turned "-fstack-clash-protection" on since Fedora 28 (https://fedoraproject.org/wiki/Changes/HardeningFlags28).

If you are releasing a C program on Linux, please consider the following flags:

    -D_FORTIFY_SOURCE=2         glibc hardening

    -Wp,-D_GLIBCXX_ASSERTIONS   libstdc++ hardening

    -fstack-protector-strong    stack smash protection

    -fstack-clash-protection    stack clash protection

    -fPIE -pie                  better ASLR protection

    -Wl,-z,noexecstack          don't allow code on stack

    -Wl,-z,relro                ELF hardening

    -Wl,-z,now                  ELF hardening
Major Linux distributions, including Fedora, Debian, Arch Linux, and openSUSE, are already doing this. Similarly, Firefox and Chromium use many of these flags too. Unfortunately, Debian did not use `-fstack-clash-protection` and got hit by the exploit, because the flag was only added in GCC 8.
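For reference, a hardened build line combining these might look something like the following (example.c is just a placeholder; _FORTIFY_SOURCE only takes effect with optimization enabled, -fstack-clash-protection needs GCC 8 or newer, and -D_GLIBCXX_ASSERTIONS is left out because it only matters for C++):

    gcc -O2 -D_FORTIFY_SOURCE=2 \
        -fstack-protector-strong -fstack-clash-protection \
        -fPIE -pie \
        -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now \
        -o example example.c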

For a more comprehensive review, check

* Recommended compiler and linker flags for GCC:

https://developers.redhat.com/blog/2018/03/21/compiler-and-l...

* Debian Hardening

https://wiki.debian.org/Hardening


"Proof" suggests a level of absolute confidence that this example certainly does not give.

> The exploit needs 10 minutes to crack ASLR on x86, but 70 minutes on amd64.

Is there any realistic threat model under which the difference between 10 minutes and 70 minutes is the difference between "insecure" and "secure"?

> Using mitigation techniques and sandbox-based isolation are the only two ways to limit the damage.

I'm not at all convinced that mitigation techniques represent a real improvement in security, because by definition a mitigation technique is not backed by a solid model. If you're letting an attacker control the modification of memory that your security model assumes isn't modifiable, how confident can you be that ad-hoc mitigations covering the exploitation paths you could think of also cover all the paths that are possible? E.g. I can remember a time when ASLR was touted as a solution to C's endemic security vulnerabilities; now cracking ASLR as part of vulnerability exploitation is routine, as seen here. Mitigations appear to give a security improvement because an app with mitigations is no longer the low-hanging fruit, but I suspect this is a case of "you don't have to outrun the bear": as long as there are C programs without mitigations, attackers will go after those first. That's different from saying that mitigations provide substantial protection.


The hands-on-keyboard SLA for a lot of on-calls is 30 minutes.

So in an “attack was detected, break all the glass” scenario, the difference between 10 and 70 minutes is sufficient to allow human operators to render the attack moot by offlining its target, while the attackers are still trying to break through API servers.

At both big corps I’ve been at, the incident response plan for an exfiltration attack on customer data was invalidate DB creds and take the system down ourselves.

Better to be out of service than lose custody of customer data.


>Is there any realistic threat model under which the difference between 10 minutes and 70 minutes is the difference between "insecure" and "secure"?

How about an intrusion detection system that flags up a human response? 10 minutes is hardly any time at all to respond; an hour gives you a chance to roll out of bed.


PaX offers an anti-bruteforce protection: if the kernel discovers a crash, the `fork()` syscall of the parent process is blocked for 30 seconds for each failed attempt, so the attacker is going to have a hard time beating 32-bit entropy. Meanwhile, it also writes a critical-level message to the kernel log buffer to notify sysadmins, possibly uncovering the 0day exploit the attacker has used.

> if the kernel discovers a crash, the `fork()` syscall of the parent process is blocked for 30 seconds for each failed attempt

That seems like something good to be able to turn on in a stock kernel, but not with that high a timeout. Imagine your shell failing to start another process for 30 seconds while you're debugging a segfault, or your browser failing to open another tab for 30 seconds after one crash.

500ms would drastically slow brute-force attacks without noticeably inconveniencing the user (and then they can always turn it off manually when doing something like fuzz testing).


I've waited almost 15 minutes for an ssh(1) to a server to complete, with a latency of 1-2 minutes per command.

30 seconds, predictably? I'd take it any day over that.


journald is a systemd service, right? Can't RestartSec simply be increased to 1s?

I guess, as long as the IDS senses the attack in progress quickly -- my gut is this type of attack would be hard to detect until the outcome was achieved. More likely the initial entry would be the detected event(s) -- in which case yeah the extra time gives some safety net.

In either case, it still feels like pulling all things into systemd creates a much harder-to-protect surface area on systems. Why should init care if your logger crashes, let alone take down init with it? I am not an anti-systemd person, but I honestly do see the tradeoffs of the "let me do it all" architecture as a huge penalty.


> Why should init care if your logger crashes

It cares in the same way it cares about all the other processes. There's nothing systemd-specific here. The journald service is configured to restart on crash, same as many other services.

It's not taking down init when journald crashes either.


> There's nothing systemd-specific here.

Well, except journald itself.


> In either case, it still feels like pulling all things into systemd creates a much harder-to-protect surface area on systems. Why should init care if your logger crashes, let alone take down init with it? I am not an anti-systemd person, but I honestly do see the tradeoffs of the "let me do it all" architecture as a huge penalty.

100% this. Also, as I understand it the exploit would not exist if it was literally just outputting log lines to a file in /var/log/systemd/ ?

EDIT: Also as I understand it, appending directly to a file is just as stable as the journald approach, given that many, many disk controllers and kernels are known to lie about whether they have actually flushed their cache to disk (actually more so, because the binary format of journald is arguably more difficult to recover into proper form than a timestamped plaintext -- please correct me if I'm wrong, though!!)


> the binary format of journald is arguably more difficult to recover into proper form than a timestamped plaintext -- please correct me if I'm wrong, though!!

It depends what you mean by recover. To get the basic plaintext, you can pretty much run "strings" on the journal file and grep for "MESSAGE=". It's append-only so the entries are in order. Just because it's a binary file doesn't mean the text itself is mangled. (Unless you enable compression)

The reference may look complicated https://www.freedesktop.org/wiki/Software/systemd/journal-fi... but those are all extra features you may ignore for "recovery in an emergency".
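For example, against an uncompressed journal file (the path below is just an example), a rough-and-ready recovery can be as simple as:

    strings /var/log/journal/<machine-id>/system.journal | grep '^MESSAGE='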


> Why should init care if your logger crashes, let alone take down init with it?

They're separate processes. Logger crashes do not take down init.

> I am not a anti-systemd person

Whether you are or not, you are (inadvertently) repeating misinformation about it.


A server suddenly spiking full load for over an hour should raise alarms. It shows up even in the dumbest of 5 minute averaging charting tools.

A few minutes of high load can easily get overlooked.


Enterprise systems, or any large-scale stack, can have one machine running like this with people dismissing it for an hour.

Some systems run hard like this by default. See Transcoding


Also, weekend and Christmas attacks. In the field we are seeing more attacks with a valid username and password occurring at times when a sysadmin may not be on call.

> Is there any realistic threat model under which the difference between 10 minutes and 70 minutes is the difference between "insecure" and "secure"?

The times are given here just as an example. Cracking systemd takes only 70 minutes, but in general, bruteforcing ASLR on 64-bit systems can take as few as 1.3 hours or as many as 34.1 hours, depending on the nature of the bug. On the other hand, the ~20 bits of entropy on 32-bit systems is trivial to crack in 10 minutes in nearly all cases, and does not provide an adequate security margin.

On a 64-bit system there are ~32-40 bits of ASLR entropy available for a PIE program, which forces an attacker to brute-force it. Unlike other protections, no matter how cleverly the system is analyzed beforehand, it taxes the exploit by forcing it to solve a computational puzzle. This fact alone is enough to stop many "Morris Worm"-type remote exploitations (which have suddenly become a serious consideration, given the future of IoT), since the exploit would take months or years to crack a single machine.

If that's not enough (it is not; I acknowledge ASLR by itself cannot be enough), an intrusion detection system should be used, and many already do. For example, PaX offers an optional, simple yet effective anti-bruteforce protection: if the kernel discovers a crash, the `fork()` attempt of the parent process is blocked for 30 seconds. It takes years before an attacker is able to overcome the randomization (so the attacker is likely to try something else). In addition, it writes a critical-level message to the kernel log buffer, so the sysadmin can be notified and possibly uncover the 0day exploit the attacker has used. I'd call it a realistic threat model.
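Back-of-the-envelope, assuming one guess per crash and the full 30-second delay on every failed attempt:

    ~20 bits of entropy:  2^19 expected guesses x 30 s  ~=  6 months
    ~32 bits of entropy:  2^31 expected guesses x 30 s  ~=  2,000 years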

Finally, information leaks are a great concern here. Kernels and programs leak memory addresses like a sieve, effectively making ASLR useless. The Linux kernel is already actively plugging these holes (with limited effectiveness; HardenedBSD should be the future case study), and so should other programs.

> e.g. I can remember a time when ASLR was touted as a solution to C's endemic security vulnerabilities; now cracking ASLR as part of vulnerability exploitation is routine, as seen here.

You can make the same comment on NX bit, or W^X/PaX, or BSD jail, or SMAP/SMEP (in recent Intel CPUs), or AppArmor, or SELinux, or seccomp(), or OpenBSD's pledge(), or Control Flow Integrity, or process-based sandboxing in web browsers, or virtual machine-based isolation.

Better defense leads to better attacks, and that in turn leads to better defense. By playing the game it may not be possible to win, but by not playing it, losing is guaranteed. In this case, systemd is exploitable despite ASLR, due to a relatively new exploit technique called "Stack Clash", and for this matter GCC had already replaced its -fstack-check with the new -fstack-clash-protection long before the systemd exploit was discovered. Where this mitigation is used (as by Fedora and openSUSE), the bug simply causes a crash and is not exploitable. At least until the attacker finds another way around.

Early kernels and web browsers had no memory or exploit protections whatsoever: a single wrong pointer dereference or buffer overflow was enough to completely take over the system. Nowadays, an attack needs to overcome at least NX, ASLR, sandboxing, and compiler-level mitigation, and we still see exploits. So is the conclusion that all mitigations are completely useless? If that's your opinion, I'm fine agreeing to disagree; many sensitive C programs need to be written in a memory-safe language anyway. But as I see it, as long as there are still C programs running with undiscovered vulnerabilities, and as long as attackers have to add more and more up-to-date workarounds and cracking techniques (ROP, anyone? though the most sophisticated attackers are now moving to data-only attacks) to their exploit checklist, we are not losing the race by increasing the cost of attacks.

On the other hand, if an attacker doesn't have to use up-to-date cracking techniques, then we have serious problems. For example, broken and incomplete mitigations are often seen in the real world, and that's the real trouble. Recently, it was discovered that the ASLR implementation in the MinGW toolchain is broken, allowing attackers to exploit VLC using shellcode tricks from the 2000s (https://insights.sei.cmu.edu/cert/2018/08/when-aslr-is-not-r...). And we still see broken NX bit protection and the total absence of any ASLR or -fstack-protector in ALL home routers (https://cyber-itl.org/2018/12/07/a-look-at-home-routers-and-...).

The principle of Defense-in-Depth is that, if the enemies are powerful enough, it's inevitable that all protections will be overcome. Like the Swiss Cheese Model (https://en.wikipedia.org/wiki/Swiss_cheese_model), a cliche in accident analysis: eventually something will find a hole in every layer of defense and pass through. What we can do is do our best at each layer of defense to prevent the preventable incidents, and add more layers when the technology permits.

My final words are: at least, do something. ASLR was already implemented as a prototype, analyzed, and exploited by clever hackers back in 2002 (http://phrack.org/issues/59/9.html), but only saw major adoption ten years later. It would be a surprise if ASLR-breaking techniques had not improved given the inaction of most vendors.

> "Proof" suggests a level of absolute confidence that this example certainly does not give.

I agree. I should've used "given more empirical evidence" instead of "given a proof".

For real security, I believe memory-safe programming (e.g. Rust), and formal verification (e.g seL4) are the way forward, although they still have a long way to go.


> You can make the same comment on NX bit, or W^X/PaX, or BSD jail, or SMAP/SMEP (in recent Intel CPUs), or AppArmor, or SELinux, or seccomp(), or OpenBSD's pledge(), or Control Flow Integrity, or process-based sandboxing in web browsers

I can, and I would.

> or virtual machine-based isolation

A little different because a VM can be designed to offer a rigid security boundary (with a solid model behind it) rather than as an ad-hoc mitigation technique.

> So is the conclusion that all mitigations are completely useless? If that's your opinion, I'm fine agreeing to disagree; many sensitive C programs need to be written in a memory-safe language anyway. But as I see it, as long as there are still C programs running with undiscovered vulnerabilities, and as long as attackers have to add more and more up-to-date workarounds and cracking techniques (ROP, anyone? though the most sophisticated attackers are now moving to data-only attacks) to their exploit checklist, we are not losing the race by increasing the cost of attacks.

> The principle of Defense-in-Depth is that, if the enemies are powerful enough, it's inevitable that all protections will be overcome. Like the Swiss Cheese Model (https://en.wikipedia.org/wiki/Swiss_cheese_model), a cliche in accident analysis: eventually something will find a hole in every layer of defense and pass through. What we can do is do our best at each layer of defense to prevent the preventable incidents, and add more layers when the technology permits.

> For real security, I believe memory-safe programming (e.g. Rust), and formal verification (e.g seL4) are the way forward, although they still have a long way to go.

I think the defense in depth / swiss cheese approach has shown itself to be a failure, and exploit mitigation techniques have been a distraction from real security. It's worth noting that systemd is both recently developed and aggressively compatibility-breaking; there really is no excuse for it to be written in C, mitigations or no. Even if you don't think Rust was mature enough at that point, there were memory-safe languages that would have made sense (OCaml, Ada, ...). Certainly there's always more to be done, but I really don't think there's anything that would block the adoption of these languages and techniques if the will was there.


Before the critique, I want to thank you for all the detailed information (esp. the compiler tips) you're putting out on the thread for everyone. :)

"You can make the same comment on NX bit, or W^X/PaX, or BSD jail, or SMAP/SMEP (in recent Intel CPUs), or AppArmor, or SELinux, or seccomp(), or OpenBSD's pledge(), or Control Flow Integrity, or process-based sandboxing in web browsers, or virtual machine-based isolation."

You can indeed say that about all those systems since they mix insecure, bug-ridden code with probabilistic and tactical mechanisms that they pray will stop hackers. In high-assurance security, the focus was instead to identify each root cause, prevent/detect/fail-safe on it with some method, and add automation for these where possible. Since a lot of that is isolation, I'd say the isolation-based method would be separation kernels running apps in their own compartments or in deprivileged, user-mode VM's. Genode OS is following that path with stuff like seL4, Muen, and NOVA running underneath. The first two are separation kernels; NOVA is just correctness-focused, with a high-assurance design style.

Prior systems designed like those did excellently in NSA pentesting, whereas the UNIX-based systems with extensions like MAC were shredded. All we're seeing is a failure to apply the lessons of the past in both hardware and software, with predictable results.

"Better defense leads to better attacks, and it in turns leads to better defense. By playing the game, it may not be possible to win, but by not playing it, losing the game is guaranteed. "

Folks using stuff like Ada, SPARK, Frama-C w/ sound analyzers, Rust, Cryptol, and FaCT are skipping playing the game to just knock out all the attack classes. Plus, memory-safety methods for legacy code like SAFEcode in SVA-OS or Softbound+CETS. Throw in Data-Flow Integrity or Information-Flow Control (e.g. JIF/SIF languages). Then, you just have to increase hardware spending a bit to make up for the performance penalty that comes with your desired level of security. That trades a problem that takes geniuses decades to solve for one an average IT person with an ordering guide can handle quickly on eBay. Assuming the performance penalty even matters, given that lots of code isn't CPU-bound.

I'd rather not play the "extend and obfuscate insecure stuff for the win" game if possible, since defenders have been losing it consistently for decades. Obfuscation should just be an extra measure on top of methods that eliminate root causes, to further frustrate attackers. Start with the most cost-effective methods for incremental progress, like memory-safe languages, contracts, test generation, and static/dynamic analysis. Save the heavyweight stuff for ultra-critical components such as compilers, crypto/TLS, microkernels, clustering protocols, and so on. We already have a lot of that, though.

"For real security, I believe memory-safe programming (e.g. Rust), and formal verification (e.g seL4) are the way forward, although they still have a long way to go. "

Well, there you go saying it yourself. :)

"Early kernels and web browsers have no memory and exploit protections whatsoever"

Yeah, we pushed for high-assurance architecture to be applied there. Chrome did a weakened version of OP. Here's another design if you're interested in how to solve... attempt to solve... that problem:

https://www.usenix.org/legacy/events/osdi10/tech/full_papers...


FWIW, the stack vulnerabilities here aren't just a C problem. Most languages, including every language relying on LLVM and GCC until the most recent versions, failed to perform stack probing.

I hesitate to call stack probing "hardening". IMO it's better understood as a failure by compilers to emit proper code in the first place, and it's been a glaringly obvious deficiency for years if not decades.


These are alloca overflows. In the first one, the code tries to put the entire (attacker-controlled) command line on its stack. Compilers can't detect or prevent that, though with ABI support they may be able to detect it before failure.

alloca isn't part of C; it's a compiler intrinsic with analogs in other languages. In LLVM it even has its own instruction. C99 and Fortran also provide dynamically-sized array constructs which historically used alloca internally, and which in any event have similar issues.[1] An alloca overflow is, from a language perspective, little different than a recursive function blowing the stack.

The proper job of the compiler is to make sure that the generated code doesn't blow the stack in a way that overwrites random memory, whether because of alloca, a large stack-allocated object (of static or dynamic size), or recursion. It's true that blowing the stack in C is undefined whereas in a language like Rust it's supposed to terminate the program.[2] But that's beside the point because Rust didn't actually implement stack probing either and therefore was just as susceptible to these vulnerabilities.

The only sane, acceptable behavior for the compiler is to generate stack probes for any stack allocations that may exceed the page size; not only for alloca, but even for regular, non-array objects which happen to be large. Both programmers and compilers for complex languages like C++, Rust, and Swift aggressively attempt to stack allocate as much as possible, which makes the issue even more acute for those languages. As others have hinted, both alloca and dynamic arrays have been frowned upon in C for a long time (C99 added dynamic arrays principally for the Fortran crowd, who typically consume trusted data, anyhow). The fact that most of the stack smash and stack clash exploits you see are for C is a consequence of most popular software libraries being written in C, and researchers being most familiar with developing PoCs for C-based codebases.[3]

[1] Crucially, C99 dynamic arrays have block-scoped lifetimes whereas alloca allocations have function-scope lifetimes. Meaning calling alloca in a loop is doubly crazy. Early C99 implementations that simply reused the pre-existing alloca intrinsic were buggy. C99 compound literals were also buggy for several years for somewhat similar reasons.

[2] In truth it's de facto undefined in most languages, because most language designers and compiler authors have historically been content to ignore the issue.

[3] To be clear, I'm not saying that C only seems error prone because it's popular. I'm only speaking to the particular issue of stack allocation overflow.
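To make the hazard concrete, here is a minimal sketch (hypothetical code, not taken from journald) of the pattern that defeats a plain guard page when the compiler emits no stack probes:

    #include <alloca.h>
    #include <string.h>

    void log_message(const char *msg, size_t len)
    {
        /* If len is attacker-controlled and larger than the guard page,
         * this single alloca can move the stack pointer right past the
         * guard page, so buf may point into an adjacent mapping... */
        char *buf = alloca(len + 1);

        /* ...and the copy then corrupts non-stack memory without ever
         * touching the guard page. Stack probing prevents exactly this. */
        memcpy(buf, msg, len);
        buf[len] = '\0';
    }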


In sane languages, being unable to stack-allocate the required memory triggers an exception; it does not corrupt memory.

You can start with Ada83 as an example of this feature.


Sure. And the function of stack probes is to 1) ensure the stack is properly extended, if possible, or 2) interrupt the program with SIGSEGV (or SIGBUS), if not possible.

Without evidence to the contrary, I would assume that a GCC- or LLVM-based Ada implementation would also be susceptible to stack overflow in the same way--pathological allocation patterns that silently bypass the system's "this could never happen in the real world" assumptions. And just like with C, the fault would lie with the compiler, not the language.

Again, I realize the behavior in C is technically undefined, but it's undefined precisely to permit the implementation to do the most sane thing for the environment, such as sharing mechanism and semantics with sister languages like Ada or Rust.


Undefined behavior is not for permitting an implementation to do the most sane thing for the environment; that is implementation-defined behavior.

I shouldn't have been so quick to agree that stack overflow is undefined behavior. (Not sure who I was agreeing with, anyhow.) Behavior on stack overflow isn't undefined; rather, it's simply not considered by the standard at all.

It's all good. :)

Such an implementation wouldn't be standards compliant, as it wouldn't respect the language's implementation semantics.

Not every language community is as full of UB love as the C and C++ ones, especially those where safety trumps performance in language design.

Naturally if the code segment is writeable and one uses Assembly rewriting as attack vector, then anything goes.


Yes, it would be non-compliant, which leads back to my original point: this particular stack overflow issue isn't about language specification, it's about implementation correctness. Unless GNAT adds stack probes at the front-end or actively subverts even basic optimizations further down the GCC pipeline, AFAIU it's entirely possible for GNAT to compile simple, vanilla Ada code in a way that oversteps the guard page. And this could be so even if GCC didn't aggressively optimize; in fact, it could happen purely as an accident of how objects are ordered on the stack in the absence of optimization, even if all the compiler did was order them as declared.

Indeed, I presume stack probing took so long partly because, short of memset'ing the entire stack frame on entry, ensuring contiguous initialization is non-trivial. But no matter how difficult, I'd bet it's less difficult than proving the generated code is safe without explicit stack probing.[1]

[1] I'm reminded of the infamously brilliant design of Soft Updates for FFS, where the order of operations was meticulously rearranged in the file system implementation and formally proven to result in a stream of atomically consistent disk writes without having to change the on-disk layout. Modifying softdep filesystem code is notoriously tricky. By contrast, a journal is both easier to write and hack on.

EDIT: Perhaps you meant that triggering SIGSEGV would be non-compliant? Stack probes don't necessarily need to touch a guard page. AFAIU on Windows you can just query the TCB for the stack size, but it's substantially faster and in some respects easier to simply trap SIGSEGV (pretty sure Java does this), and the runtime is still free to rethrow SIGSEGV. If you mean stack overflow in Ada is supposed to throw a language-level exception, that's a rather trivial detail that can be accomplished equally well whether probe failures occur inline or asynchronously. In any event, I think my larger point about how to frame the issue and where culpability and responsibility reside still stands.


GNAT is one implementation, among at least 7 existing Ada implementations.

Ada Core, Green Hills, PTC (owns the former IBM and Aonix compiler divisions), DDC-I, RR Software, OC Systems.

If the implementation is not able to validate stack size correctness on function entry and throw a stack allocation exception on failure, then it is a compiler bug.

In Ada this is a required runtime check, unless explicitly disabled.

The only way the stack layout would be corrupted, in a bug free Ada compiler, is to explicitly disable such check and make use of unchecked pointers in unsafe code.


Please point to the line of the spec requiring stack probing.

The program in this blog post https://ldpreload.com/blog/stack-smashes-you corrupts memory despite being well-defined C. If you think that program should output what it actually does, I think you need to point to the line of the spec that allows it to behave that way.

Stack probing isn't the only option. Other options include fixed maximum sized stacks, architectures with separate stack and heap address spaces, etc.


Hmm… Stack overflow is certainly meant to be undefined behavior, because even if modern systems really ought to be able to catch it, there are many existing systems and implementations that do not or cannot – and the C standard is intended to be maximally compatible with old implementations, to the point of allowing compilers to only treat the first 63 characters of an identifier as significant, or set a limit of 4095 characters per source line, to name two of the more egregious entries under “environmental limits”.

However, you seem to be right that the program you linked is technically well-defined C, because the C11 spec doesn’t explicitly address stack usage. Not only does it not set a minimum requirement for the limits of local variable usage or function recursion, as far as I can tell, it doesn’t even acknowledge that such limits could exist! But if the program is well-defined, it ought to be able to execute to completion. Aborting the process, even cleanly, is no more acceptable than corrupting memory. Thus, a compliant implementation would have to have an infinite amount of memory. Since that’s a bit unreasonable to ask… it’s probably better to treat stack overflow as implicitly UB. The allowable level of stack usage could then be treated as implementation-defined.

Which is not to say that compilers shouldn’t try to handle stack overflow sanely. Stack probing was long overdue, and I’d love to see better support in mainstream compilers for static max-stack-usage analysis, among other things. It’s just that the C standard is probably not the right place to mandate such things, considering how conservative and compatibility-oriented it tends to be.


Which spec? Stack probing isn't the law, it's just a good idea.

Hey, you will want to add:

  -fcf-protection=full       ROP protection
to that list :)

(see: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.h...)


What is the performance overhead for using these flags?

It's very hard to find solid numbers, because these flags were introduced and popularized at different times, but I'll try to give you my best summary. Long story short, these flags are considered essential, and most web browsers and distributions have already enabled them without problems.

Let's see...

    -D_FORTIFY_SOURCE=2
    -Wp,-D_GLIBCXX_ASSERTIONS
These two are considered very cheap. They enable some sanity checks for buffer overflows in glibc and libstdc++, respectively. See https://access.redhat.com/blogs/766093/posts/3606481
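As a hypothetical illustration (the buffer and input are made up): built with -O2 -D_FORTIFY_SOURCE=2, glibc replaces the strcpy below with a checked variant, and an overlong argument aborts the program with a "buffer overflow detected" error instead of silently smashing the stack.

    #include <string.h>

    int main(int argc, char **argv)
    {
        char buf[8];
        if (argc > 1)
            strcpy(buf, argv[1]);   /* compiled as the checked __strcpy_chk(buf, argv[1], 8) */
        return 0;
    }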

    -fstack-protector-strong
Also known as the stack canary. The earliest version was simply called "-fstack-protector", but its coverage was bad (it only "protected" <2% of functions). There was also "-fstack-protector-all", which adds checks to ALL functions, but its performance overhead is unacceptable. The "-strong" variant is an upgraded version: it uses a heuristic to determine whether a function needs stack protectors, providing much better code coverage and a better tradeoff between security and performance.

It's relatively expensive, but is considered essential to prevent attackers from smashing the stack, and is enabled in most web browser engines.

See https://lwn.net/Articles/584225/

    -fstack-clash-protection    stack clash protection
Introduced in GCC 8.0 after the Stack Clash attack was discovered, see https://securingsoftware.blogspot.com/2017/12/stack-clash-pr...

    -fPIE -pie                  better ASLR protection
PIE is Position-Independent Executable; Address Space Layout Randomization needs it to achieve its full level of protection. It has been around for a long time, but only saw mass adoption recently, possibly due to its higher performance impact on 32-bit x86; the impact is favorable on 64-bit systems.

Red Hat has a performance analysis.

* https://access.redhat.com/blogs/766093/posts/1975803

Arch Linux has also benchmarked its performance impact before deciding to enable it.

* https://github.com/pid1/test-sec-flags/wiki

    -Wl,-z,noexecstack          don't allow code on stack
It just disables the executable stack; there's no overhead. Normally the stack is NOT executable anyway, but under some special conditions, like linking assembly code, it can end up executable due to bad default settings in GNU toolchains kept for historical reasons. This explicitly tells the linker: don't allow an executable stack, no matter what.

    -Wl,-z,relro                ELF hardening
    -Wl,-z,now                  ELF hardening
It marks the GOT as read-only and forces the system to resolve all function references to shared libraries at startup time instead of at runtime. It has some impact on startup time, but it's mostly negligible.

See https://blog.osiris.cyber.nyu.edu/exploitation%20mitigation%....
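If you want to double-check what a given binary actually ended up with, readelf shows most of it (exact output varies by toolchain; "example" is just a placeholder):

    readelf -lW example | grep -E 'GNU_STACK|GNU_RELRO'   # stack flags should be RW, not RWE; a GNU_RELRO segment should exist
    readelf -dW example | grep -E 'BIND_NOW|FLAGS'        # -z now shows up as BIND_NOW / FLAGS_1 NOW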


Thanks! I like to know the consequences of enhancing security; not every program (or system) needs everything and the kitchen sink.

I would add: alloca() should be considered harmful and warned out of existence, like gets().

If your malloc() implementation is slow, fix that! tcmalloc and jemalloc have been around for a long time.


I'm reminded that the custom memory allocation system in OpenSSL was cited as being potentially implicated in the Heartbleed bug:

https://www.darkreading.com/vulnerabilities---threats/did-a-...

or it at least made it harder for the code to be analysed by automated tools:

https://dwheeler.com/essays/heartbleed.html#basic-fuzzers


I'm not sure if you're trying to compare OpenSSL's FIFO malloc cache to tcmalloc or jemalloc, or not. They have nothing in common. I was not proposing adding a crappy linked list cache in front of malloc(3), but instead linking a better malloc(3) implementation to your program (and gave specific, high-quality examples).

you can patch binaries to fix memory corruption issues :D compiler flags won't help on already-compiled software, and if it's yet to be compiled, the sources could be improved... that being said i'm not against these compiler things, just don't lean on them to fix your broken code... which is what a lot of developers seem to imply when they say this is the way to fix things.

Sure, you need both.

I said,

> CFLAGS hardening works, in addition to ASLR, it's the last line of defense for all C programs. [...] Using mitigation techniques and sandbox-based isolation are the only two ways to limit the damage.

Mitigations are here to limit the damage, not to eliminate vulnerabilities and exploits.

Look for bugs and improve the code's security as hard as you can, and in case you missed one (you certainly will), CFLAGS and ASLR are your last line of defense; you can only hope for the best that they are able to buy you some time to patch it, before a better exploit appears...


> If we send a large "native" message to /run/systemd/journal/socket ... the maximum size of a "native" entry is 768MB

Why does journald allow such large messages over a socket? That alone might be a denial-of-service attack; are real messages delayed/blocked if someone spawns a bunch of processes that send 768MB of junk to that socket?

    commit c4aa09b06f835c91cea9e021df4c3605cff2318d
    Date:   Mon Apr 8 20:32:03 2013 +0200
    ...
    -#define ENTRY_SIZE_MAX (1024*1024*64)
    -#define DATA_SIZE_MAX (1024*1024*64)
    ...
    +#define ENTRY_SIZE_MAX (1024*1024*768)
    +#define DATA_SIZE_MAX (1024*1024*768)
WTF? Why would you need to increase the max size so much? What are you intending to send over that socket?! Oh. From the commit[1], the missing lines at the 2nd "...":

    +/* Make sure not to make this smaller than the maximum coredump
    + * size. See COREDUMP_MAX in coredump.c */
Why would you send coredumps over a socket? Just write them to a file and send the file's path to journald. Increasing the max message size 1200% just to avoid writing a core file is crazy.

[1] https://github.com/systemd/systemd/commit/c4aa09b06f835c91ce...


Another big question is why it uses alloca at all, considering that alloca is well known in the C world to be problematic and discouraged, for this very reason.

- Are you performance-constrained? NO.

- Is calling malloc/free such a big deal, instead of alloca? It is not.

Yes. That neat join function will need to get an extra parameter, and the caller will need to free the returned pointer when it's done with it. This is C. If you want neat convenient string-joining functions, use a different language.
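For what it's worth, the heap-based version is only a few more lines. A minimal sketch (the name is made up, this is not systemd's actual API), where the caller owns the result and must free() it:

    #include <stdarg.h>
    #include <stdlib.h>
    #include <string.h>

    /* Join a NULL-terminated list of strings into a malloc'd buffer.
     * Returns NULL on allocation failure; the caller must free() the result. */
    char *strjoin_alloc(const char *s, ...)
    {
        va_list ap;
        size_t len = 0;
        const char *p;
        char *out, *t;

        va_start(ap, s);
        for (p = s; p; p = va_arg(ap, const char *))
            len += strlen(p);
        va_end(ap);

        out = malloc(len + 1);
        if (!out)
            return NULL;

        t = out;
        va_start(ap, s);
        for (p = s; p; p = va_arg(ap, const char *))
            t = stpcpy(t, p);
        va_end(ap);
        *t = '\0';
        return out;
    }

One allocation per join, no unbounded stack usage, and the only inconvenience is remembering to free() the result.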


> - Are you performance-constrained? NO.

A significant portion of issues filed against journald are performance related.

It's undesirable to have a bunch of mallocs and frees in the hot path of logging. It's not inappropriate to use the stack where possible; obviously in this case user-controlled input was being trusted where it shouldn't have been.

There have been many problems of this sort in journald, where the nature of the user-controlled input has the ability to bring journald to its knees in a global manner.

Prior to the process metadata caching, you could cause journald to go absolutely nuts querying the sending processes metadata from /proc by sending it messages full of newlines. Every newline-separated field is treated as a separate message with its own metadata to be acquired. The routines for fetching all that stuff from /proc granularly allocated and freed the data via libc as well, it was very painful.

Things are better now, but you still can't log particularly high rates through journald compared to a simple O_APPEND style logger like syslogd. Programs synchronously logging debug messages quickly become slowed down by journald pinning a CPU.

You can still cause a world of hurt by spamming journald with tiny emergency messages, each of which forces a sync of the journal. But a user can also run `while true; do sync; done`


> Prior to the process metadata caching, you could cause journald to go absolutely nuts querying the sending processes metadata from /proc by sending it messages full of newlines. Every newline-separated field is treated as a separate message with its own metadata to be acquired. The routines for fetching all that stuff from /proc granularly allocated and freed the data via libc as well, it was very painful.

This seems more like an indictment of journald's architecture than an argument against the cost of mallocs per se.


My point was that malloc vs alloca in a full-blown IO path, where you have sockets and possibly disk fsyncs, is not all that significant, if at all. I understand that journald cares about performance. Micro-optimizations like malloc vs alloca are the least of your problems. Batching/buffering messages, an asynchronous model, plain O_APPEND, different files for logs/metadata, etc., as others and you have said ITT, are what will matter. You can even use a sane dynamic string library with preallocation if you care that much. I haven't looked at the src, to be honest, but am I wrong?

journald does all its journal writing via mmap. When the window to write into is already mapped (it caches the mappings), there's relatively little per-message syscall overhead required.

Prior to process metadata caching, the per-message CPU cost was dominated by acquiring the sender process metadata from /proc, which was dominated by syscalls but the malloc()+free() overhead of all that junk was not insignificant.

I haven't done any journald profiling or hacking since those process metadata caching changes landed, but I imagine it's brought the per-message syscall overhead down to little more than pulling the message off the socket.

When I was experimenting with adding metadata caching, once caching could amortize the /proc-accessing syscall overhead across many messages the major pain was all the allocator activity in moving that metadata around and composing the KEY=VALUE clauses from that metadata. It was a death from a thousand cuts kind of situation, happening for every message.

The stack is the convenient place to efficiently do much of this granular, ephemeral stuff. But it's obviously inappropriate for allocations of user-controlled/unbounded size.


Thanks for the interesting and detailed info.

I mean the cost of hitting the disk/socket. The actual IO. There is also the syscall overhead, but that usually doesn't matter /that much/ with IO, in comparison with the actual IO (unless, again, you are doing something very wrong - very small buffers etc ...). This of course depends on the details. And yeah, if it uses mmap then that syscall problem is no more. The problem I was referring to was the actual IO cost per message (fsync/msync per message?) to maintain consistency, instead of, for example, caching or batching and grouping IO at the expense of consistency guarantees (maybe) or coherent data re. messages in realtime.

I agree that the stack is useful and I see your point. But I would almost never resort to alloca. That is not a solution IMO. If you have that kind of cost with the joining of keys, there are other solutions, I think. What about a dynamic string library that over-allocates (maybe a big chunk the first time in the 'new message' path) and then just uses the extra memory until you run out? Classic (and silly) strategy of allocating something like n << 4 bytes every time you run out of space. This way you really just have a few allocations instead of hundreds, maybe making it a non-issue (at the cost of probably unused extra memory). Not as efficient as alloca, but very close.

If you want to get fancy you can have efficient malloc pools too. But I think a semi-clever dynamic string lib would be able to handle the problem best.
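A minimal sketch of that kind of buffer (the names are made up); growing geometrically keeps it to a handful of allocations per message, and real code would also want an overflow check on the size arithmetic:

    #include <stdlib.h>
    #include <string.h>

    struct strbuf {
        char *data;
        size_t len, cap;
    };

    /* Append n bytes, doubling the capacity whenever we run out. */
    static int strbuf_append(struct strbuf *b, const char *s, size_t n)
    {
        if (b->len + n + 1 > b->cap) {
            size_t cap = b->cap ? b->cap : 256;
            while (cap < b->len + n + 1)
                cap *= 2;
            char *p = realloc(b->data, cap);
            if (!p)
                return -1;              /* allocation failure, buffer unchanged */
            b->data = p;
            b->cap = cap;
        }
        memcpy(b->data + b->len, s, n);
        b->len += n;
        b->data[b->len] = '\0';         /* keep it NUL-terminated */
        return 0;
    }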


> A significant portion of issues filed against journald are performance related.

Right. Because instead of doing the SANE thing, which is appending timestamped text to an append-only file -- which actually gives you better guarantees of being written to the disk than the binary format journald uses -- they have a ridiculously convoluted setup like this.


I do think we'd be better off with the hot path being a simple append to a plain, append-only file.

All this database-like junk should have been done asynchronously and persisted in ancillary read-write files kept adjacent to the logs.

Features like `systemctl status` that want to show the most recent log entries for a given service could have simply indicated 'potentially stale' when the logs were newer than the ancillary files, and support blocking on a refresh if desired.

I'm not a fan of how it is done today, and I've hacked a bit on journald. It doesn't belong in the critical path. There's a lot of that in the systemd project in general; tightly coupling complexity where it really should be loosely coupled at best. But I believe a lot of that is to be expected from a project aspiring to do too much with too little resources, shortcuts are taken regularly. Everyone experienced knows tightly coupling components eases the development of them.


My thoughts exactly. Just because people are filing performance tickets doesn't mean it's a performance-bound application or framework. It could also, as people have illustrated many times before, just be bloated and convoluted code causing slowness.

To take such a position is to not think charitably about the intentions and competency of the developers, which I find personally absurd.

If you're not going to take the time to scrutinize the code and understand the problems, spreading uninformed FUD like a troll isn't helpful.

I have spent significant time with the journald code in the past. I understand its inner-workings and what performance bottlenecks it has suffered from.

Bloated and convoluted code is not how I would describe the causes. The systemd project's code is generally rather spartan, easy to read, simple C code.

Journald attempts to associate myriad process metadata with the logs it receives. This information is useful to the log consumers, and is actually somewhat impossible to reliably collect in lockstep with the messages being received, using today's kernel interfaces.

It will always be a racy endeavor to do this from userspace in the form of sampling /proc after the messages have been written and languished in a socket buffer for some non-deterministic amount of time.

What journald attempts to do is a best-effort approach of acquiring and associating this information with the messages in-line with the logging, to get it as close as possible. I believe at some point, if we continue using this kind of metadata-rich logging in Linux, that the kernel will grow new interfaces to reliably deliver this information in a socket sidechannel, much like how minimal sender credentials may be retrieved on UNIX domain sockets today.

Until then, journald will continue to find itself in the awkward position of being a kinda-sorta logging database which must sample piles of process metadata via /proc to describe the sender in what's appended to the journal.

But this is a necessary phase of progress we must pass through. Upstream kernel developers generally refuse to add new interfaces for this kind of thing until there's an established use case demanding it. So what we're all using in journald today is arguably an MVP, establishing that there's a market need for this level of information in our logs, which can then be used to compel upstream to help us make it both efficient and 100% reliable.

The process metadata caching that was added to improve the performance arguably traded accuracy of the metadata to do so. (and, one could argue, increased bloat and complexity) But in lieu of better kernel interfaces, it's the only choice other than giving up on logging the metadata entirely. The impression I got while working on the journald code pre-metadata-caching, was that the authors had assumed the kernel would evolve to make the information available efficiently. The kdbus debacle speaks to that trajectory, and its rejection from landing upstream definitely threw a wrench in the overall plan, I'm not sure the systemd project has ever fully recovered from that setback.

So please, refrain from making such uninformed speculative comments. I encourage you to instead get involved and help improve things where you can, if you care about running modern Linux systems.


journald seems to be overall a very bad idea.

Playing the devil's advocate here: not every box has a writeable filesystem.

I think this is the right answer - more specifically, a program may not have write access to the filesystem (e.g., with the systemd "ProtectSystem" option). Linux lets you spawn a process to handle a core dump, but I believe the process runs as if it were a child process of the crashing one, i.e., in the same namespace and with the same filesystem permissions.

Or the only writable filesystem could be a tiny tmpfs/ramdisk. In both situations, do you even want core dumps?


You could imagine kernel.core_pattern pointing at an NFS mount, or a pseudo-filesystem that otherwise redirected to the network (it could even be a small FUSE FS, allowing for userspace handling). Of course, neither of these suggestions necessarily gives control back to systemd to then relocate or process the file. But you couldn't do that on a RO system anyway.

Then do not core dump.

As Linus might say, "this is complete idiocy".

Even if you need to write a lot of data, the correct way is to write it in blocks using a fixed-size buffer, not try to stuff the whole thing in at once...

Maybe it's not possible to completely eliminate all dynamic allocation, but striving to make your code O(1) space (and time) should be a relatively important goal.
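A sketch of the fixed-buffer approach (the descriptors are arbitrary and error handling is kept minimal): stream the data through a small buffer instead of materializing all of it at once.

    #include <unistd.h>

    /* Copy everything readable from 'in' to 'out' through a fixed 8 KiB
     * buffer. Returns 0 on success, -1 on a read/write error. */
    static int copy_stream(int in, int out)
    {
        char buf[8192];
        for (;;) {
            ssize_t n = read(in, buf, sizeof(buf));
            if (n == 0)
                return 0;               /* EOF */
            if (n < 0)
                return -1;
            for (ssize_t off = 0; off < n; ) {
                ssize_t w = write(out, buf + off, (size_t)(n - off));
                if (w < 0)
                    return -1;
                off += w;
            }
        }
    }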


Do you have a close-to-perfect implementation, perchance?

Any logging implementation I have ever seen is full of tradeoffs along all sorts of axes. At some point something has to give.

Examples: Your log cannot accept more messages (maybe it's a local disk), what do you do? Do you store it in memory? Do you throw it away and record that you threw messages away? What happens if the machine dies before you can record the failures? Are there some log messages that are important enough to actually halt the application? etc. etc.

EDIT: ... and let's not even get into remote logging.

It's actually a really complicated problem and not enough people appreciate that.


The choice may be incorrect, but the alternatives aren’t all around optimal, either.

A logging system must write each message atomically. That can be done when “writing in blocks using a fixed-size” buffer, but it complicates code, and, I guess, would introduce additional latency (something you really do not want in a logging system) when processes race for using the logging system.


Additionally, the one time I actually needed coredumps, they didn't fit in 768MB and so I had to turn off systemd's coredump interceptor. It's probably useful in certain scenarios, but I really don't see why you would enable coredumpctl on a desktop system. Or why the limit would be hardcoded. Or why you wouldn't intelligently try to write the coredump to disk if it's too big to fit in the log.

if it's more than ~5M I don't want it on my desktop. usually it's not set up anyway to warrant sending it to developers for debugging. (missing symbols, unclean environment, etc) and even if it's less than ~5M, who will send it to where? coredumps are completely useless to do automatically. if the developers expect to get them, use a guardian process or some other error reporting framework.

Systemd's insistence on gobbling up core dumps is one of my favorite misfeatures of that thing. I lost many an interesting (large) dump because it wouldn't be logged in the journal and wouldn't be written to a file as expected. Luckily this can be disabled (until the next system update resets the sysctl setting again). I don't even get what problem this "feature" is supposed to solve.

and at some point it was even comedy gold when the system would compress them with ... xz! lol

I'd imagine that the use-case for this was e.g. aggregating core dumps via some systemd/journald-powered centralized logging from a fleet of machines so they could be collected & analyzed.

If you dump core locally and log the path you'll need other side-channel to do the aggregation.


I would never expect core dumps to belong in a system logger of any description. A stack trace from the dump, maybe, but the actual core itself could be tens of gigabytes. Constraining the maximum message size to a few kilobytes would be no bad thing, perhaps with a continuation mechanism for longer messages.

I believe it's so they can manage disk usage by core dumps (and disable them in low disk conditions).

It's certainly weird, but they're only exploiting this to slow things down just a little bit.

The big problem is the unchecked length of the incoming environment (args) string...


Because systemd tries to hijack as much functionality as it can.

This is demonstrably false.

Sockets are files internally to Linux.

Top comment from reddit thread: FWIW distros that use -fstack-clash-protection to compile systemd, including recent Fedora and OpenSUSE, aren't vulnerable.

https://www.reddit.com/r/linux/comments/aeac8g/systemd_earns...


It says so in the article too.

Also in the article:

> To the best of our knowledge, all systemd-based Linux distributions are vulnerable, but SUSE Linux Enterprise 15, openSUSE Leap 15.0, and Fedora 28 and 29 are not exploitable because their user space is compiled with GCC's -fstack-clash-protection.


Does that include openSuse Tumbleweed?

I'm of two minds when it comes to mitigations like this. On the one hand, it helps decrease the number of exploits. On the other hand, it encourages/tolerates the proliferation of low-quality code.

Stack clash protection is a necessary implementation detail of the C language. It's a shame it's even optional. C17 § 6.5.16.1 Simple assignment, Semantics, (3):

> If the value being stored in an object is read from another object that overlaps in any way the storage of the first object, then the overlap shall be exact and the two objects shall have qualified or unqualified versions of a compatible type; otherwise, the behavior is undefined.

Also listed under J.2 "Undefined Behavior," with reference to the same section.

The same clause is present in C11 and C99 under the same section.


> otherwise, the behavior is undefined

I don't see how this necessitates stack protection at all, at least from a standards perspective.


Ok, here's the logic:

0. Your premise is that standard C does not require a runtime to provide any protection from stack memory growing into the heap.

1. Standard C defines automatic object storage and heap object storage.

2. It is implementation defined where heap objects are allocated in memory. As a result, the type and compatibility of arbitrary heap memory locations are unpredictable to developers.

3. It is implementation defined where stack objects are allocated in memory. As a result, the type and compatibility of arbitrary stack memory locations are unpredictable to developers.

4. Standard C says that mutated objects can't overlap [more or less].

5. Optimizing compilers don't allocate memory for stack objects that aren't at least partially mutated.

6. Standard C defines no way for developers to determine how deep their stack is, or where the top of the heap is.

7. From (2), (3), (5), (6), it is impossible for developers to determine whether or not they would violate (4).

8. As a result of (7) and (0), any use of both heap and stack objects is UB. A C runtime compliant to your definition in (0) could start the heap from 0x100, growing up, and the stack at 0x104, growing down.

8.a. Similarly, any use of static or other lifetime, and stack objects, is UB.

9. It logically follows from (8) that, under your premise (0), it is always UB to use the stack together with other kinds of data memory.

I guess you could make the case that the C committee hate everyone and all stack objects are UB.

But my read of the C standard is that the intent is that any use of stack objects is not in fact required to be UB. If stack objects were always UB, I think it would be called out explicitly or not specified in the standard.

So yes, I think any standard-conforming C implementation has to prevent stack clash one way or another. That could take the form of an alternate stack memory that is entirely separate from heap memory; it doesn't have to be probing and guard pages.


> I guess you could make the case that the C committee hate everyone and all stack objects are UB.

Honestly, I think that this is the logical conclusion, or at least that every compiler/machine implementation is subtly wrong; the standard glosses over what should happen when the stack is too large, for example, so technically any use of the stack is not guaranteed to be valid because you might just be out of space. The C standard is for a "head-in-the-clouds" abstract machine, so it doesn't concern itself with pedestrian issues like lack of memory.


Heh, I guess that's possible. I hold out hope that it isn't the intent of the C committee, and maybe they'll consider firming up the stack memory model for C2x.

These are three long standing exploitable bugs in systemd's journald which can be used to gain local root privileges. Most systemd based distributions are vulnerable and, from the looks of it, may have been for years.

link?

I believe he's giving a synopsis of TFA

https://wiki.sei.cmu.edu/confluence/display/c/MEM05-C.+Avoid...

> This compliant solution replaces the VLA with a call to malloc(). If malloc() fails, the return value can be checked to prevent the program from terminating abnormally.

Wishful thinking. On overcommit-enabled VM systems (pretty much every GNU/Linux desktop and server by default), malloc will cheerfully return a valid-looking pointer to virtual memory even if the system is out of memory. Your process will get a fatal signal when later it tries to use the memory. It will not be prevented from terminating abnormally.

Any page of the memory could blow up. Maybe the first three pages get a frame just fine, but when the process touches the fourth, it faults.

The reason to use malloc instead of the stack is that it goes awry only when the system is out of memory, not when the stack is out of room.


But the only failure case there is a DoS. You cannot get memory corruption from overcommit the way you can from a stack clash, and therefore you cannot get arbitrary code execution. That seems like a huge reason to prefer overcommit.

Stack clashes are protected with guard pages. Guard pages can be skipped, but not by code like strjoina which fills in all of the bytes.

https://www.qualys.com/2017/06/19/stack-clash/stack-clash.tx...

The reporters do claim to have a root exploit that they will publish later, which will be interesting (if it still applies on systems with robust guard pages).

The strings are copied into the alloca area from the lower address to the higher, so initially, the stpcpy calls might be corrupting memory belonging to an adjacent mapping and not to the stack. Eventually the loop has to cross back into the stack, and it cannot do so without writing into a guard page.

Situations where that is still exploitable can be contrived. For instance, the program sets up a SIGSEGV handler to execute on an alternate stack using sigaltstack. When the stack guard page is hit, that handler executes successfully, the program continues somehow (e.g. by a recovering longjmp that rewinds the stack) and then falls victim to the corruption.

One thing we can do is after calling alloca, walk over the memory in reverse order (higher address to lower) in strides of 4096 bytes, and touch a byte at every increment. This way we hit the guard page (on any system that has a page size of at least 4096).
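
A minimal sketch of that manual probing (the helper name is made up; it assumes pages of at least 4 KiB, and it is essentially what GCC's -fstack-clash-protection automates):

    #include <stddef.h>

    /* Touch one byte per 4 KiB stride, highest address first, so any guard
     * page between the old stack top and the new allocation faults before
     * the buffer is filled with attacker-controlled data. */
    static void probe_stack_region(char *p, size_t len)
    {
        volatile char *c = p;
        if (len == 0)
            return;
        c[len - 1] = 0;                            /* highest page first */
        for (size_t off = len - 1; off >= 4096; off -= 4096)
            c[off - 4096] = 0;                     /* step down one stride */
        c[0] = 0;                                  /* lowest page last */
    }

    /* usage: char *buf = alloca(n); probe_stack_region(buf, n); */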


There are a bunch of other cases where guard pages don't help you (as long as you're filling in the bytes from the far end of the stack first, as you probably are on a stacks-grow-down architecture). If you start overwriting an area of memory that's being used by another thread, even if you eventually segfault, you've still corrupted memory. That can be a problem if the process has multiple threads and you can corrupt another thread's memory, or if you overwrite mmaped data and so the writes get committed to disk even if the program proceeds to crash and restart.

In any case, those + your SEGV handler case are ones where stack clash attacks are exploitable to corrupt memory or hijack control flow. The OOM killer never corrupts memory or permits hijacking control flow; it only kills the process if it's in an irrecoverable situation. So overcommit is much safer.

The technique you describe is sometimes called "stack probing" (or confusingly "stack checking") and is what GCC's -fstack-clash-protection implements, which is why this isn't exploitable on OSes which compile systemd with that option.


This is true; if it happens under threads and we hop over a guard page, we are screwed, due to the race condition: by the time we hit that guard page, the other thread has already used the corrupted stack.

I have seen this sort of thing happen! A third-party routing stack used in a system with thread stacks reduced to 64K. The third-party stack turned out to have debugging macros producing "{ char msg[8192]; ... }" declarations for sprintf. Twice the size of a page; it hopped right into another thread's stack.

(Don't tell me systemd has threads in it, ouch!)


> Stack clashes are protected with guard pages.

Guard pages are necessary but not sufficient, as I think you would agree with based on the rest of your comment. I just object to leading the otherwise thoughtful comment with the absolutism about guard pages (trivial to understand and implement) that ignores the necessity of getting the fiddly bit (stack probing) correct.


Systems that care about reliability and avoiding random overcommit crashes disable overcommit. It's as simple as that.

I should add, the thorough guidelines quoted are for writing secure and robust programs, and imply throughout that overcommit is disabled (which it must be, to implement programs robustly).

If you're curious to read further criticism of systemd, then without-systemd[1] has got you covered.

[1] : http://without-systemd.org/wiki/index.php/Main_Page


Yeah, I know that one. It puzzles me how systemd has such broad support among distributions and end users. It redoes decades of system management in new ways that don't exactly fit the existing picture.

We all know that some parts of Linux suck, but this can't seriously be the solution. Solving things the Windows way. ;)


SystemD doesn't have broad support among end users. It has weary resignation and a few people who don't know how grep works.

Honestly, if systemd is really that bad, why did everyone adopt it?

We can blame Red Hat, as Lennart's employer, for forcing systemd into its family of distributions (RHEL, CentOS, Fedora, CoreOS). But why Debian/Ubuntu? Why Arch Linux and Gentoo, which were never targeted at "people who don't know how grep works"? Why SUSE? Either systemd brings more good than harm, or we have to blame the Illuminati and the Freemasons.


You are using logical fallacies in your argument.

First, not _everyone_ has adopted it (loaded language). Google, which controls the vast majority of Linux systems on the planet, has not. GNU has not. Others [1] have not.

Second, the critique against systemd is substantial and solid enough to stand on its own regardless of popularity. Popularity does not imply quality, you should read "Worse is better" by Richard Gabriel. Politics, network effects and an octopus-like architecture that imposes itself via ever-increasing interdependencies are reasonable explanations to systemd adoption. For a distribution provider or package maintainer, it has gotten to the point where it's easier to go along with systemd than try and fight it, since the latter option means extra work. This is really a sad state of affairs.

[1] http://without-systemd.org/wiki/index.php/Main_Page


OK, my bad, I admit. Not everyone. Most.

The most popular distributions, because, let's make it clear: Red Hat, Fedora, CoreOS, CentOS, Debian, Ubuntu, and Arch, including Kubuntu, Xubuntu, Fedora KDE, and all other members of these respective families, account for at least 75% of all installations (figures from different sources vary).

Octopus-like architecture cannot be the reason it was adopted in the first place. People don't like dropping familiar/stable tools. Also, it was not so octopus-like in its first versions.

Politics and network effects sound to me like a conspiracy theory. Sorry, but I really do not believe that there is someone powerful enough to make Red Hat, Debian, SUSE and Canonical, to name a few, harm themselves in one and the same, very specific way.

The problems solved by systemd exist. Systemd was not the only project trying to solve them; it was just the most successful/adopted. There was Upstart. Remember Upstart? So, honestly, just reverting to SysV init is not an option; it's burying your head in the sand. Systemd is not perfect. It never was. SysV is just worse.

I look at http://without-systemd.org/wiki/index.php/Arguments_against_... and see that most arguments against systemd either a) ignore the obvious fact that systemd is not a single program but a suite of programs which play nicely together, are optimized to exchange data in effective ways, keep configuration in a similar manner, etc. (you cannot compare systemd to init, just as you cannot compare Atom to nano); b) are simply nostalgic; or c) are somewhat valid, but again, systemd is not perfect, it's just much better than SysV init. That's why it was adopted, not because of politics.


> I really do not believe that there is someone powerful enough to make Red Hat, Debian, SUSE and Canonical, to name a few, harm themselves in one and the same, very specific way.

Let's go through these one by one then.

* RedHat could certainly have adopted it because they saw it as a way to take control of the development of a central piece of GNU/Linux software architecture. An init system is the one piece of software (other than a kernel) that you can't run two of at the same time on the same bare metal, so this is obviously a tempting piece of real estate to capture to provide a competitive advantage in a commoditised landscape.

* SUSE didn't want to be seen to be left behind with "old fashioned" sysvinit, and didn't have the resources to invest in their own competing init system, especially after Canonical had already thrown their own resources at Upstart. Siding with the RPM distro over the DEB based one was also an obvious choice.

* Debian had a contentious debate about which init system should be the default (and, in practice, after choosing systemd, the only) fully supported init system. The decision was placed in the hands of the Technical Committee, who were split down the middle between choosing systemd or Upstart. The tie was resolved by a single vote, that of the committee's chairman, Bdale Garbee:

https://lwn.net/Articles/585363/

He is, no doubt, an honourable man, but he is also a cheerleader for HPE:

https://www.linux.com/NEWS/LINUX-LEADER-BDALE-GARBEE-TOUTS-P...

despite SUSE being HPE's preferred Linux distro:

https://www.zdnet.com/article/sweet-suse-hpe-snags-itself-a-...

* Canonical (that is, Ubuntu) went with systemd shortly after the Debian vote, once it became clear that single-handedly supporting Upstart was an unsustainable option for the company, especially as packages were starting to add dependencies on systemd:

https://www.zdnet.com/article/after-linux-civil-war-ubuntu-t...

* With all these top tier distros succumbing to systemd, more and more packages started to depend on it as the init system, to the point that it became all but impossible for another distro to ship packages that didn't depend on systemd in its base system.

This is exactly the sort of slow creeping spread that systemd is notorious for, using the momentum gained from each small victory to help crush bigger and bigger targets, until it is unavoidable.

The worst part, though, is the historical revisionism, and the suggestion that everyone just accepted systemd and abandoned all the software it replaces, based purely on the merits of systemd. Most people had to accept systemd whether they liked it or not. systemd is not a "suite of programs which play nice together", it is a suite of programs which only play nicely together, and which bully all the other programs into submission, despite systemd's technical flaws.


> GNU has not

GNU? GNU ... GNU what? I never heard of any ( https://www.gnu.org/distros/free-distros.html )

> Second, the critique against systemd is substantial and solid enough to stand on its own regardless of popularity.

Sure, just as the responses to those. And thus the trade off was/is acceptable to the maintainers of distros that eventually adopted systemd.

There are too many random shell scripts everywhere in the world (not just the Linux world), systemd's unit files are a big step in the right direction, even if their code and architecture is a dumpster fire (low level linux plumbing written in C is usually that).
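
For what it's worth, the pitch is easy to show: the typical multi-page start-stop-daemon script collapses into a handful of declarative lines. A sketch, with the service name and path made up:

    [Unit]
    Description=Example worker daemon
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/exampled --foreground
    User=exampled
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target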


Just a nit, but systemd isn't the default init daemon in Gentoo. You can use systemd if you like, as the distro is about choice.

It is, however, selected by default if you use the gnome 3 profile because gnome 3 is a pain in the butt to get working without systemd. It's possible, just painful.


Gentoo uses OpenRC by default; they provide systemd as an option if you want that kind of stuff on your system(s).

Because in the tech/software industry, the second best thing always wins.

Windows vs OS/2[1], x86 vs. Motorola, Atari vs. Intellivision, Betamax vs VHS ...

See also: https://en.wikipedia.org/wiki/Worse_is_better

[1] Windows vs. almost any other operating system of the time really...


All of those are examples of products that had better technical specs on paper but didn't fulfill the needs of the market. They were beaten out by solutions which did a better job of that and therefore truly were the better solutions.

Some got to market earlier and then used network effects to retain the market.

Getting to market earlier is just one way to better fulfill the needs of the market.

Systemd is not widely adopted at all; only a handful of distros adopted it. That's a "few friends" level of adoption. Adoption of something by a small group of people doesn't imply that it's a good or a bad thing, but it does suggest that this group shares something that influences its decision making.

Users of those distros did not adopt systemd; they were forced to deal with distros that adopted systemd after upgrades, just like they were forced to deal with the new GNOME and Wayland. All of this is not without consequences, of course. There are forks, brokenness, fragmentation, fatigue and the general unattractiveness of the Linux desktop. The ecosystem is in a very sad state. I hope it's not on purpose, but it's definitely possible there is some "embrace, extend, extinguish" strategy at play too.


To me this is a bizarre line of reasoning.

I was "forced" to adopt systemd, but so far it's been leagues better than previous attempts (OpenRC, sysvinit) for integrated management of daemons, user-session daemons, a predictable way to access logs of a daemon ("systemctl -u postgresl", for example) without having to guess what it's log file name (if it even has one or logs into the syslog, etc.), etc.

Painting this as some sort of nefarious "extend extinguish" strategy is just tin foil hattery.

(Yes, there are still bugs, etc., but they will get fixed and the whole ecosystem will benefit. The security problems discussed ITT seem to stem more from the use of an unsafe language rather than design issues, per se.)


Honestly, if heroin is so bad, why so many addicts?

Because it acts directly on your brain to produce intense pleasure and make you crave it, at the expense of making rational decisions? As opposed to systemd, which… doesn’t.

Who said there are many addicts?

The US media has been shouting about an opioid crisis for years now. The stories imply or outright claim that there are "many" opioid addicts for some definition of "many" that exceeds the number there "should" be.

I think that shouting is out of proportion to the number of real addicts, and I'm not claiming to speak for the person you are replying to. But it's not outlandish for someone to mistakenly believe there are lots of heroin addicts.


I am just pointing out the fallacy in this argument.

It's not an argument, it's a question. I'll rephrase to make it clear, sorry if it was not that clear.

"If systemd is worse than other options, why is it so widely adopted?"

And by widely I mean that most popular distributions adopted it, so more than half of installations use it -- more like 3/4 or so. IDK how heroin addicts answer this question. You're kind of asking me "If heroin is worse than other options, why is it so widely adopted?" My answer is: heroin is not widely adopted in the first place.


This does not seem to be my experience as a professional systems engineer. Nobody says it's good code, but everyone is more excited about it than sysvinit, including people who do actually know how to use grep.

(Except for a small number of people who let their biases override serious technical considerations, but those people tend to be poor employees for plenty of other reasons.)


Since when is the argument about systemd vs sysvinit??

How about systemd vs daemontools?

How about systemd vs s6?

How about systemd vs OpenRC?

How about systemd vs SMF?

How about systemd vs upstart?

It seems that the distro maintainers (and not the users of those distros) picked the new and shiny instead of investigating what would be best for the future of these projects, and picked a crappy solution. Soon mayhem followed, and now we are dealing with DNS resolution issues because of it, which were not in scope at the time they decided to cut over. Kind of how drug dealers get new customers.


The expansive scope of systemd is what people like about it. daemontools, s6, OpenRC, and SMF are much more limited in scope; upstart is also somewhat limited though it tries to do some of the same things. (SMF also doesn't exist on Linux, and I've spent a lot of time working with both upstart and systemd professionally and my conclusion is that systemd genuinely has a better design and is genuinely more robust, although there are plenty of smart people who like upstart, I'll admit.)

Again, I'm just speaking to my professional experience here and saying that the argument "Nobody wants systemd" is incorrect. People want systemd. People want what systemd does. Maybe they're wrong, but that's a separate argument.


This matches my experience as well: in our devops group it's basically a weekly occurrence to have a statement like “this is much {easier,safer,more robust} when we can use systemd”.

Upstart was better than SysV, but it took many years to add basic functionality (e.g. setuid/setgid, console logging, etc.), and there are still problems like not being able to reliably keep a process running unless you remember to add things like post-stop sleep blocks.


First CVE I've seen to reference a System of a Down song.

(Every subsection starts with an SOAD song quote)


Off topic, but SOAD were really one of the most enduring metal acts of the early 2000s. If you remember them, Needle Drop recently did a classic review of Toxicity - enjoy the nostalgia! https://www.youtube.com/watch?v=-jI1ofec02A

I'm not a SOAD fan, but good to see someone at my former employer still has a sense of humor :)

systemd is interesting as an innovative init/daemon-manager(+), but I remain convinced it's not ideal that it is the de facto default Linux init. Surely the default init should be something more manageable (i.e. with a smaller attack surface) like runit, OpenRC (in combination with something, perhaps runit), or s6.

I think it would be great to run systemd as pid 2 with a bare-bones pid 1 that only took on the responsibility of (1) spawning a service manager as pid 2, and (2) reaping zombies (a rough sketch of such a pid 1 follows after the footnote). This is pretty much what BSD's init(8) does today[0], except with /etc/rc as pid 2 instead of systemd.

(I'm not advocating for /etc/rc — it sucks. I much prefer systemd-style dependencies, service management, etc. But bloating the code size and responsibility of pid 1 is a common systemd complaint that would be trivial to fix.)

[0]: Well, it does some other crap too that could / should probably be removed, like having a notion of run level (multiuser, single user) and spawning getty(8)s. But mostly it's a tiny barebones program that spawns pid 2 and reaps zombies.
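
To make the "tiny pid 1" idea above concrete, here is a minimal sketch (not BSD's actual init, and "/sbin/service-manager" is a made-up path): a process that forks one child to be the service manager and then does nothing but reap whatever gets reparented to it.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t mgr = fork();
        if (mgr < 0)
            return 1;
        if (mgr == 0) {
            /* pid 2: exec the real service manager (systemd, /etc/rc, ...) */
            execl("/sbin/service-manager", "service-manager", (char *)NULL);
            _exit(127);                 /* exec failed */
        }
        for (;;) {
            int status;
            pid_t pid = wait(&status);  /* reap anything reparented to pid 1 */
            if (pid == mgr)
                break;                  /* a real init would restart it here */
        }
        return 0;
    }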


I think it would be great if such things were options, yes. Despite the common argument that systemd is actually modular, I haven't seen a lot of modular use of its pieces. In practice, use of systemd (and, again, I'm complaining more about distro devs than systemd devs) has seem to make things more inflexible (and brittle). E.g. Gnome Shell ended up with a practical dependency on systemd, though this has been successfully worked around. Snaps ("Snaps are universal Linux packages") are really 'universal systemd packages' (and in fact only 'universal Ubuntu packages' if you want sandboxing). But hopefully at some point it will be possible to more easily use the good bits of systemd without having to go 'whole hog' (so to speak).

The claim that systemd is modular is a brilliant marketing gag that rests entirely on how you define modular: software parts that are separated by clearly delineated interfaces can be called modules without being exchangeable. So in the eyes of the systemd guys the thing is perfectly modular, but as the users noticed, no part is exchangeable, which is what the other camp wants. And both camps get hung up over the contradictions without realizing that they need to redefine their nomenclature first.

You can go start your own systemd free distribution.

For everyone else, there is a reason they centralized on it, and it's not because Lennart was stalking their mailing lists threatening to send them expired cheese. Systemd is not perfect, but it is infinitely better than a sludge of shell scripts.


There already are plenty of systemd-free distributions. Not because the developers hated systemd, but because they decided using a baroque init+daemon-manager+logger+.... wasn't the best idea.

> For everyone else, there is a reason they centralized on it, and it's not because Lennart was stalking their mailing lists threatening to send them expired cheese. Systemd is not perfect, but it is infinitely better than a sludge of shell scripts.

This is such a disingenuous argument. Don't like Ford? What, you want the horse-and-carriage back?

I'm not arguing for sysv init. Nothing in what I've said references sysv init, and - if my choices were (a) sysv init, and, (b) systemd, I would indeed choose (b). But that isn't the full range of choices.

There are various non-sludge-of-shell-script choices which aren't systemd. Including OpenRC, s6, runit, Shepherd. I haven't used the first two extensively, but I have the latter two, and both were better experiences than systemd: they are faster, they are more comprehensible, and they are less error-prone.

On the other hand, my systemd-running home desktop - every time I want to reboot, I end up REISUB'ing it because I don't feel like waiting 2-5 minutes for systemd to release it. I don't have that issue on any of my other, non-systemd-powered machines. They all shut down/reboot cleanly and quickly.


This attack doesn't touch the init system's attack surface though. It breaks journald, which is a separate program that you don't even have to run (syslog daemons still work)

Calling this a bug in the init system is like assigning fault for bugs in eg. rsyslog to sysvinit


This is disingenuous, systemd-journald is the default in every systemd-using distribution I am aware of. The philosophy of systemd is all about tight coupling and forcing its singular vision on end users. When that vision falls apart you can not claim it is not really a systemd problem because in theory you could have gone out of your way and done something that is not encouraged.

No, you cannot run systemd without journald. Journald can forward messages to another syslog implementation. Its launch is even hardcoded into systemd very early in the boot process IIRC. The intent is to capture early log messages in memory inside journald before a writable /var is available.

But you know that my post is a criticism of the systemd ecosystem, or, more specifically, its status as the de facto default. All my systems that run systemd run journald, and all of my systems that run journald run systemd, because those are the choices the distro devs made. My systems which don't run systemd are immune to these vulnerabilities.

I might be reading that article wrong, but it reads like they're critical of the use of alloca? I've never really written C or the like, what is dangerous about alloca?

It allocates on the stack which is inherently of limited size and allocations grow in the direction of the “juicy” stuff.

As so often with the C library, there is little bounds checking going on either (in fact, the behavior once you overflow the stack is explicitly undefined, so who knows what the hell is going to happen)

If you somehow let the user specify the amount of bytes you’re going to allocate on the stack, then that’s an exploitable issue.
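
A toy sketch of the dangerous pattern (made-up function, not journald's actual code, but it is the shape of what the strjoina-style helpers do):

    #include <alloca.h>
    #include <string.h>

    /* If `len` is attacker-controlled, a large value moves the stack pointer
     * far past the guard page, and the copy then scribbles over whatever
     * mapping happens to live there. */
    void format_entry(const char *msg, size_t len)
    {
        char *buf = alloca(len + 1);   /* no upper bound, no failure report */
        memcpy(buf, msg, len);
        buf[len] = '\0';
        /* ... use buf ... */
    }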


Ah, OK. So it is truly horrifying that it's in heavy use in journald, where users inevitably have that sort of power.

Variable size allocation from the stack... That's one dangerous trick.

[1] https://stackoverflow.com/questions/1018853/why-is-the-use-o...


Was systemd ever properly audited? That's exactly why I use Devuan on my home server.

Despite being a security critical component, systemd doesn't even seem to have a proper security announcement release process. For example, there was a remote code execution vuln discovered in their DHCP client in October (CVE-2018-15688).[1] The issue was fixed with a GitHub PR that failed to mention that the changes fix a remote exec vulnerability: https://github.com/systemd/systemd/pull/10518

They also have a rather informal release process. Their "release notes" are just a very long NEWS file[2] in the git repo, with notes of trivial and critical changes mishmashed together. And for some reason, to this date there is not a mention of the DHCP remote exec vuln fixed in the latest release.

I must say I don't feel too good about this project's attitude towards security. Compare this to e.g. Apache.

[1] https://www.theregister.co.uk/2018/10/26/systemd_dhcpv6_rce/

[2] https://github.com/systemd/systemd/blob/master/NEWS


That was not the first time either:

https://github.com/systemd/systemd/issues/5144


Or relying on other people to report the issues so that people patch.

https://www.openwall.com/lists/oss-security/2017/01/24/4


The tone of discussion there indeed suggests a very lax / "meh" attitude towards vulnerabilities.

While I agree that it isn't ideal -- it should be pointed out that the kernel itself doesn't have one either.

systemd is a low-level building block for distributions - the distros are the ones doing end-user changelogs and security notices. The target audience for the NEWS file is maintainers, not users.

As if Devuan has had more security reviews than systemd?

Devuan is a Linux distribution that does not ship systemd by default.

You are comparing apples and oranges.


Not really.

systemd replaces several components you would typically find as core components (multiple packages) which everything in a distro is built on and depends upon, init scripts being no exception.

Or at least it does if you present the criticism that systemd is too big.

Make up your mind: either systemd is not such a big monolith or it does in fact make sense to compare it to a minimal Linux distro.

You can’t have it both ways.


Now you're comparing fruit salad with apples. :)

1) Devuan isn't a "minimal Linux distro". It's a complete fork of Debian, co-founded by Ian Murdock, Debian's original creator, after the systemd coup.

2) Every "core component" in any system, "minimal" or otherwise, will indeed have had more security review than the systemd code base.

3) Pid 1 - Sponsored by NSA and DoD vis a vis Redhat defense contracts.


Was init?

No, but it's far less complex than systemd, to the point where you can audit the code yourself, and it's been battle-tested over the years. Also, because it's so simple, it was shown to us at university, so we got an essential understanding of how it works, and I think they still teach it instead of systemd.

But systemd is more than just init. It replaces for instance daemontools, restartd or monit.

Are you saying you can trivially audit those?


Does it fully replace monit? Could you link me to more info? From what I could find, I didn't see anything like free disk space warnings, a web interface, etc.

> Does it fully replace monit?

Not fully, no but as far as I can tell, it has overlapping responsibilities.

My main thought when mentioning it was automatic restart of services in case of failure.


That's a fair point. However, the parent was responding to whether init - in particular - had been audited. So, bringing up other, larger projects that systemd subsumes is a bit disingenuous, imo.

The complexity of Sysvinit is orders of magnitude less than that of systemd.

The "sysvinit" binary package has had a total of 2 bug reports tagged "security" filed in Debian's bug tracker. That's over the entire history of the package that goes back to at least 2004.

If I extend the search to all packages built from the "sysvinit" source (which includes packages like initscripts and sysvinit-utils), the count increases to 8. (source: https://www.debian.org/Bugs. I'm not linking to exact queries since they take quite some time and HN has a tendency of taking down Debian's bug tracker)


Also crond, atd, and various log manipulation tools that systemd replaces.

It doesn't replace them, it just doesn't use them.

It supplants them with the same functionality.

That's literally the definition of replacement.


Indeed. Not complaining - I don't care for HN karma - but it is very odd that an easily searchable fact is being downvoted on a technical forum.

It provides inferior code and interfaces that are used instead of them.

So you're correct, in a way.


Not if you count the god-awful shell scripts full of race conditions which it replaced.

Then it should be easy to audit!

Did you pay for an audit of sysvinit and every bash rc script it runs?

I can't think of a single init system which has been audited.


No, I didn't, but I can read bash, so I can audit that myself, and in fact over the years I did read most of those scripts. Also, it's just a few lines of code compared to systemd, so even one person can do it.

> I can read bash

Lol. I'm sure you're the only person on earth who can think of all the corner conditions when reading a bash script.

Auditing a shell script for security is near impossible to do.


I never said that. At the end of the day, it's the same thing with audits: you can have 50 people auditing systemd and they won't find every single mistake either. What I'm saying is that simplicity limits the room for mistakes and gives more people a chance to look at it and play with it. The concept is fairly straightforward.

> I can read bash

And I can read systemd unit files. That doesn't tell me anything about the security of the system in charge of running them.

https://www.cvedetails.com/product/21050/GNU-Bash.html?vendo...


> but I can read bash

And all the source code for bash and all the other tools the init scripts invoke?


Is Coverity being run against systemd? Would it not have found these issues?

The usefulness of static code analysis beyond linting is greatly overstated (by people selling static code analysis tools).

Coverity isn't useless. It finds these kinds of buffer overflow errors easily. It's shameful that a prominent keystone FOSS project is using such outdated coding practices in the first place. Not using the free tooling available for such projects is doubly so.

Systemd uses Semmle LGTM and QL for static code analysis, which is a good thing since Coverity Scan is currently down without an ETA for restoration.

The flaws have existed for years.

As proven by a large majority of C developers ignoring lint since 1979.

I don't see any systems with security releases ready. Did they not notify the affected systems before making it public? I thought that was standard procedure these days.

It's in the linked page:

2018-11-26: Advisory sent to Red Hat Product Security (as recommended by https://github.com/systemd/systemd/blob/master/docs/CONTRIBU...).

2018-12-26: Advisory and patches sent to linux-distros@...nwall.

2019-01-09: Coordinated Release Date (6:00 PM UTC).


There is a pattern of designing for the most niche use cases and imposing complexity on everyone by default. Shouldn't it be the other way round? Letting those with advanced needs accept the complexity? Presumably they have the know-how.

There are multiple examples of this, including the binary journal for those with extreme auditing needs, optimizing for laptop Wi-Fi networks when Linux is predominantly used on servers, and the ironically named 'predictable network names' that are anything but predictable. And these issues get hand-waved away because of some fringe use case, with the additional overhead nonchalantly imposed on all users.


It's been two weeks since the GNU/Linux distributors were notified of this, and still no fixes are available in Redhat/Centos/Debian.

Any ideas about how to mitigate this until official fixes are available? Would it help to block auto-restart of journald? Any idea how to do that?
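
One stopgap I can think of (untested, and with real downsides: journald is also socket-activated, so it may still come back when something logs, and while it is down you lose logging) is a drop-in that turns off the automatic restart until patched packages land:

    # systemctl edit systemd-journald
    # then put this in the override file it opens:
    [Service]
    Restart=no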


I still use the ISC DHCP client because our enterprise DHCP uses some esoteric DHCP options that systemd's client cannot handle to this day.

Are there static analysis tools that catch such problems? User controlled buffer length for alloca.

>On a Debian stable (9.5), our proof of concept wins this race and gains eip control after a dozen tries (systemd automatically restarts journald after each crash):

Seems like a good example of why automatically restarting things that crash is a bad idea.


As with anything, there are tradeoffs.

A dozen is too short to really make a difference, but one thing that can be done at the service management layer without abandoning restart entirely is a delayed restart, or exponential backoff.

You could also imagine a per-service option for auto restart. Paranoid organizations with plenty of on-call engineers could disable auto-restart, if they were convinced the engineers wouldn't just rig up their own auto-restart to avoid the call.
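
systemd already exposes those knobs per service; something along these lines (a sketch; the exact section for the rate-limit settings has moved between systemd versions) delays each restart and gives up entirely after a burst of crashes:

    [Unit]
    StartLimitIntervalSec=10min
    StartLimitBurst=3

    [Service]
    Restart=on-failure
    RestartSec=30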


I suppose prlimit'ing stack size for systemd-journald could work against this exploit too, since clash protection works, but without recompiling anything.

I think that putting a hard limit on alloca allocation size, somewhere around 1-10 KB, could help prevent this type of exploit. It is also possible to write a wrapper that chooses between alloca() and malloc()/free() depending on the size.
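
A rough sketch of that idea (names made up; note that because alloca() memory disappears when the calling function returns, the size check has to live in a macro at the call site rather than inside a wrapper function):

    #include <alloca.h>
    #include <stdlib.h>

    #define STACK_ALLOC_MAX 4096   /* hard cap for stack allocations */

    #define SMART_ALLOC(n)   ((n) <= STACK_ALLOC_MAX ? alloca(n) : malloc(n))
    #define SMART_FREE(p, n) do { if ((n) > STACK_ALLOC_MAX) free(p); } while (0)

The heap branch still needs its return value checked, and `n` must not have side effects, since the macros evaluate it more than once.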

How can you check if your systemd is patched? This sounds very serious.

I refuse to use systemd to this day. It's unbelievably complex and became established through political power play rather than any sort of merit. Which is without a doubt not what I expected to see in the Linux ecosystem.

Yeah, it's not so terrible technically, but politically I despise it. Especially when the people involved take a stance of "we're not actually going to discuss this or debate why this might be the best way to do it, we're just going to slowly try to make your life harder and harder until you give in" ( https://lists.freedesktop.org/archives/systemd-devel/2010-Se... )

Systemd is a double-edged sword. I started using FreeBSD in 1999 and Mandrake Linux around 2000. I switched to Gentoo when it came out, because I like to tinker and see how things work (like or hate the distro, it does make you learn quite a bit).

Fast forward to 2018 and I manage around 500 Red Hat servers here. About 150 are at RHEL 7 (the others need to be upgraded), so learning systemd is a requirement; it is not going away. I like some parts of it. I do like the journalctl command and the ease of looking at logs; however, being the hypocrite I am, I despise the binary logs and the incompatibility with most syslogging devices (I have had to explain lost logs during an audit with systemd, and it was not pleasant).

From an end-user standpoint systemd is great for my desktop. For the servers it is functional; server boot time does not matter to most administrators. On VMs both boot fast enough; on bare metal, some of our large servers take 5 to 7 minutes to post (thanks HP), so the OS taking 20 to 30 seconds longer to boot is not a big deal.

I follow the systemd mailing list to see what is going on. I think if the guys over there were not as abrasive in some of their answers, things would be smoother and they would be more receptive to the needs of some of the users. I see a lot of features that are desktop-specific, and that seems to be where their development lies for the most part; desktops and servers are two different beasts.

I guess the infighting just sucks. I am not a fan of systemd, simply because I like a simple OS with minimal features to accomplish my objectives; however, it is here to stay, and I am a Linux admin, so it is part of my job and I learned to use it and keep up with its development.

I still use FreeBSD for all my home servers (except my Docker swarm); old habits die hard.


Speaking of BSD and Docker... I'm learning FreeBSD for a little side project, but it seems it's harder to get a good deterministic deployment pipeline with jails than with Docker. What's the best practice there?

I don't develop on FreeBSD either; it's just for prod.


Unfortunately there is not really one. I always treated jails as VMs. You could create the jails on ZFS and do a send and receive, or use something like Jenkins to build and deploy. However, jails are like LXC on Linux, not like Docker.

You could use NanoBSD in the source tree to create a minimal, mostly immutable image with your application. It will still be like a VM though, and need to be deployed somewhere (this is how I run my DNS servers at home; they are about 120 MB each and can run on pretty much any VM host or bare metal). But Docker does serve that market better (hence why I have a small swarm at home to play on).

There is also Docker for FreeBSD; it's still in testing but works pretty well if you build your own native images. I built a memcached image and was running that for a while.


> large servers take 5 to 7 minutes to post (thanks HP)

IBM/Lenovo xSeries servers are the same. 5+ minutes to get through the boot process, due to all the hardware init and checking. That's with UEFI too. ;)


> I have had to explain lost logs during an audit

How did they get lost?


Somehow during a system crash (hardware failure) the journal logs were corrupt and could not be read, and application logging was not being forwarded to the syslog facility. Our logs were not sent to a logging server at the time; they were backed up daily and stored on tape.

If it's not technically terrible, that's only a recent development.

All the initial "ooh and aah" factor of SystemD came from using the kernel's cgroups mechanism, which was a dumpster fire by itself--something dropped by google in the kernel for easier maintenance with the understanding that no one should be stupid enough to use it for something important...enter SystemD.

Apparently cgroups is in better nick these days (as is SystemD), but until the likes of Al Viro come around and give it a clean bill of health, I think I'll pass.


So no docker for you either?

cgroups are essential to properly limit daemons on bare-metal servers without the overhead of containers (so more bang for your buck).

For example, you can make sure some poorly written Python daemon will never use more than 50% of the CPU or over 1 GB of RAM - and systemd will nicely restart the daemon if/when that happens (see the unit sketch below).

By not using systemd you are depriving yourself of essential tools in modern sysadmin work (timers alone are worth the switch to systemd, compared to cron's anachronistic hacks).
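
Concretely, that is just a few lines in the unit; a sketch with a made-up daemon path (MemoryMax= is the cgroup-v2 name, older setups spell it MemoryLimit=):

    [Service]
    ExecStart=/usr/local/bin/flaky-daemon
    CPUQuota=50%
    MemoryMax=1G
    Restart=on-failure

CPUQuota= throttles the process, while hitting MemoryMax= gets it OOM-killed inside its own cgroup, after which Restart= brings it back.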


You can use cgroups without systemd.

Them being released at similar times does not mean systemd implemented them; they exist completely separately from systemd.

Literally:

    mkdir /sys/fs/cgroup/memory/<groupname>
    echo 100000000 > /sys/fs/cgroup/memory/<groupname>/memory.limit_in_bytes
    echo $$ > /sys/fs/cgroup/memory/<groupname>/tasks   # put the current shell in the group
is all you need for cgroups (v1 memory controller shown).

The grandparent's comment was targeted at cgroups in general. You can't use cgroups without cgroups. To add to that: IMHO, both systemd and Docker made cgroups much more accessible to Joe Sysadmin, and while they're not a bulletproof security feature, having them (and capabilities, another kernel feature exposed by systemd) in wider use is generally a good thing.

> cgroups without cgroups.

I've been racking my brain to understand what you mean by this.

You don't need to have cgroups for all applications in order to use them.

And they don't need to be implemented by systemd specifically for them to be in use by other init's or tools.

SystemD existing (and being pushed so heavily) stymies the rise of such init systems.


The post I was referring to criticizes cgroups in general, regardless of how they are managed or used.

Sure, then you need to write scripts to monitor your daemons and their dependencies, and you may end up writing something likely worse than systemd that just suffers from the 'NIH' syndrome.

Adjusting to systemd took me a while, but I believe it was worth the effort, and I also believe most people who dislike systemd are just set in their old ways


The other way around: there were daemons that previously managed cgroups, and systemd NIHed a new implementation.

It is terrible technically. It does too many things (for no apparent reason) and it's every other month that we see some enormous exploit for it.

Hyperbole is not how you make technical arguments. Try some data:

https://www.cvedetails.com/vulnerability-list/vendor_id-7971...

There are 11 total from 2015: one arbitrary access, one which allows administrators to run a process as root rather than the intended user (CVE-2017-1000082), and some DoS opportunities. Not great, and several good examples of why C should not be used, but hardly “every other month”.

Now, it's hard to compare that against other systems, because systemd combines functionality which people used to have to use third-party tools for or roll themselves. So you have to count against that every buggy init script or supervisor feature, every app whose developers never got around to adding the hardening features which systemd does use, etc.


The systemd developers "forget" to file a lot of the CVE bugs that should have been filed, so the CVE is incomplete in regards to the serious security issues that have affected systemd.

I still remember the time when systemd developers argued that a privilege-escalation bug systemd enabled was not really a bug. The criteria those developers* use to determine what merits a CVE filing don't quite match up with what you'd expect, so CVE count isn't a good metric for systemd security.

* Unfortunately, they aren't alone in their sloppy handling of security issues. Rust is also another project that is known to not file CVEs for serious bugs if they occur in previous releases.


So does the Linux kernel, whether for current, previous, or even release-candidate versions.

You can document this, right? The researcher who finds a problem should be able to request a CVE even if the upstream doesn't acknowledge it.

The theory is good. :)

Weirdly though, getting a CVE assigned can be a real PITA (and unsuccessful). The people who do the CVE assignment are generally overloaded, so lower impact/priority stuff often seems to get missed or not bothered with.


This hasn't been the case for a number of years; MITRE has seriously picked up their game, and you can talk to Linux vendors such as Red Hat, Ubuntu and SUSE, who also have blocks of CVEs assigned for things they ship.

The alternative is DWF, which would be more popular if this was a more common problem.


> This hasn't been the case for a number of years ...

Thanks, that's really good news.


Sounds like a petty dictator pushing everyone to think and act the same; manipulating people until they give in because it takes too much effort to resist. He should consider a career in politics... I always get uncomfortable when I know people who pull stunts like that in RL and it is irritating when it dramatically affects an otherwise very good operating system. I use Void Linux just to avoid systemd, considering its long track record of horrific bugs and security issues.

As a counter point, I personally enjoy using systemd to manage my home servers and PC. And I have written unit files for quite a lot too, and enjoy it.

Surprisingly, it seems as if different people have different tastes/opinions. :)


And I wish we could just use it for services! But alas, it comes with a free mandatory proctology exam, by a guy with really big hands, called "pid 1".

What's also kind of funny to me is, it wasn't designed for cloud-based services. We now have to invent whole new systems that look a lot like systemd just to manage distributed decentralized services that run on Linux systems.

Systemd would have been great for everybody if it had been developed by people with different priorities.


I also enjoy systemd much more than sysvinit, having a uniform service structure vs messy shell scripts really does make a difference.

The fact that people keep repeating the "messy shell scripts" fallacy is proof of how effective systemd propaganda has been. The vast majority of Linux systems on the planet, which ship software made by Google, do not run systemd.

> The vast majority of Linux systems on the planet, which ship software made by Google, do not run systemd.

WTF? Are you talking about Android? The user does not even interact with the init system on Android at all, what are you even talking about?

It's not "systemd propaganda" that has been effective, it's the anti-systemd propaganda that seems to get dumber every time I see it.


The user almost never interacts with any init system, unless they are starting up, shutting down, or doing some weird maintenance. It's an "initialization" system.

Honestly, none of this crap matters at all. You can get by just fine with init scripts, and you can get by just fine with systemd services. Except for a few people in the world who have applications that are really sensitive to either fast boot times or really complex system dependencies, any init system will work.


> The user almost never interacts with any init system

I write service files pretty regularly. If you're a sysadmin, developer, or enthusiast on Linux, you definitely interact with the init system.

If you're a regular Android app developer however, you normally don't do that.


I guess I forgot about all those users that constantly modify their operating system's software design.

This is not true. systemd did standardize multiple interfaces in the Linux ecosystem. It is much easier to get things working nowadays than it used to be. systemd became the standard thanks to meritocracy (important people in most distros decided to adopt systemd; it was not a "political power play").

It is complex, and there are bugs like any other kind of software[1], however like any other software it is improving.

[1]: For example, I used to hit multiple concurrency bugs in sysVinit used by Arch Linux. The only way to fix them was to find an ordering that happened to work. No more problems of this kind, thanks to systemd's dependency resolution.


> For example, I used to hit multiple concurrency bugs in sysVinit used by Arch Linux

Arch didn't really use SysVinit before systemd though, but its own BSD-like rc system, which I rather enjoyed, actually. And I got hit with concurrency issues after the switch to systemd, because the dependencies for disks and file systems seemingly aren't set up correctly by the Arch scripts, requiring me to add a few fake units to fix up the dependencies (otherwise the system would just fail to boot at all most of the time).


I need to run /usr/bin/expect from one of the init.d scripts. That breaks with systemd. Moving that box to Devuan (no systemd) fixed the issue. The alternative was to rebuild that particular application without systemd. So far systemd hasn't brought me anything particularly useful. I've never had any issues with getting things working (pre-systemd), and one of my pre-systemd laptops boots from power off to fully graphical in five seconds, so I don't buy the argument I hear from some that systemd is necessary for reducing boot-up time.

I made a typo in lighttpd's config once. The sysvinit init script, which used start-stop-daemon, would report success but would not start the server at all. There were no error messages, nothing in the system log.

Systemd logs all process output by default, so things like this cannot happen at all. Just for that, I'd choose it over sysvinit any time.


I had a similar issue with systemd in Debian. Restarting the service (Apache) via systemd actually had no effect, and calling init.d script worked.

By the way, I am not a fan of sysvinit: I read all those shell scripts.


I am going to guess you did not start the apache via systemd? This is actually a feature: systemd only manages processes it has started. So if you started apache via /etc/init.d, you should restart it via /etc/init.d as well.

(note that the opposite is not true: many /etc/init.d scripts stop a process by killing everything with that process name -- no matter whether it is the right process, a user-specific one, or one managed by some other supervisor)


I don't remember exactly, but I think Apache was started via systemd, because there was no error like "Apache is not started". Systemd was reporting that it was stopping and starting Apache, but in fact it did nothing.

You know you can run scripts to start something up with systemd as well?

How does systemd break Expect?

I am going to guess that process required a tty for some reason, and the systemd unit file did not specify

    StandardOutput=tty
nor did he wrap the invocation in the `unbuffer` program

> I need to run /usr/bin/expect from one of the init.d scripts

That's interesting. Can you provide some more details?

To clarify: I don't care about the systemd argument; I'm interested in why an init script would need expect. I can think of a few pretty edge-casey good reasons and some scary ones. Care to share yours?


Important people choosing to use it instead of letting the community decide is the definition of politics, not merit.

At least for Debian, the technical committee (elected by the community) could not choose between systemd and Upstart [0], so systemd was the final winner. There was an overwhelming desire that sysvinit should go.

And when this question was brought to general election, community decided it did not care: [1]

So I am not sure who those "important people" were, why you feel they did not represent you, and what the community would have chosen, in your opinion.

I, having worked with upstart, can say it is worse than systemd.

[0] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=727708#672... [1] https://www.debian.org/vote/2014/vote_003#outcome


> technical committee (elected by the community)

Technical committee members are not elected by community, they're appointed by the DPL.

https://www.debian.org/devel/constitution#item-6


Interesting, I didn't know that Debian has a constitution.

The community did decide; each distribution chooses for itself what software to use. Debian's open debate process lasted for months.

And resulted in Devuan fork.

I have used systemd and am currently forced to use it in production. It works great until it doesn't.

There have been multiple times when I had no idea what was wrong, and it was faster to simply recreate the system from scratch and hope the same bug didn't happen again.


Who else has more merit to make this decision than the Debian developers (= the important people) themselves? Random neckbeards whining on online forums?

At least as far as Debian is concerned, you will find that the people with skin-in-the-game (administrators) were massively against systemd. The Devuan fork/split happened exactly because of that.

> you will find that the people with skin-in-the-game (administrators) were massively against systemd

Let me offer you an anecdote from my company. Now, we're a RHEL shop and not a Debian shop, but we've been around since long before systemd.

Most of our hosts now run RHEL 7, but we still have a handful of old boxen running RHEL 6, and I've been personally involved with quite a few migrations from RHEL 6 to 7 during my time here. Whenever we retire a RHEL 6 host and replace it with a RHEL 7 host, the general consensus from the people who administrate it is "thank god we can now use systemd instead of having to deal with sysvinit". Every time I have to do anything on one of our remaining RHEL 6 hosts, I brace myself for pain, because our RHEL 7 hosts are just that much easier to manage.


RHEL 6 was using upstart, not sysvinit, so the general consensus of the people you mentioned is irrelevant.

That's because they only understand bash and not C.

Because configuring and managing systems in C seems suboptimal?

Developers make terrible system managers usually.

Letting the wider community decide is also the opposite of a meritocratic decision.

You must be joking. What meritocracy? The guy who wrote systemd already had a major failed project under his belt. PulseAudio, ring a bell? Every reasonable person understands that separation of concerns is a good thing. Systemd does the exact opposite.

Few articles to be familiar with regarding systemd:

https://www.dedoimedo.com/computers/software-development-can...

"The same can be said of the init systems in Linux, like systemd. In a nutshell, the system should start quickly and get into a working session. We had this in 2010 or so, with boot times down to mere 10 seconds using init. No flaws, no bugs. Even in the commercial sphere, working with init, I do not recall any major problems.

Then, suddenly, we have this new binary diarrhea with a hundred million modules, and for the past five years, this unstable, half-baked, undebuggable nonsense is the backbone of most Linux distros. The invasive and pervasive nature of the systemd framework has also affected the stability of the user space, the very thing it should never have touched, and pretty much all problems with the quality of the Linux desktop nicely coincide with the introduction of systemd. The development continues, of course, and for no good reason than trying to reach the level of stability, maturity and functionality that we had half a decade ago. Someone landed themselves a lot of monthly pay checks by writing complex code to solve a problem that did not exist."

https://ewontfix.com/14/

"None of the things systemd "does right" are at all revolutionary. They've been done many times before. DJB's daemontools, runit, and Supervisor, among others, have solved the "legacy init is broken" problem over and over again (though each with some of their own flaws). Their failure to displace legacy sysvinit in major distributions had nothing to do with whether they solved the problem, and everything to do with marketing. Said differently, there's nothing great and revolutionary about systemd. Its popularity is purely the result of an aggressive, dictatorial marketing strategy including elements such as:

Engulfing other "essential" system components like udev and making them difficult or impossible to use without systemd (but see eudev).

Setting up for API lock-in (having the DBus interfaces provided by systemd become a necessary API that user-level programs depend on).

Dictating policy rather than being scoped such that the user, administrator, or systems integrator (distribution) has to provide glue. This eliminates bikesheds and thereby fast-tracks adoption at the expense of flexibility and diversity."

Solaris already had a better replacement for SysV init, and so did some Linux distros.

I think we need a better alternative that does not behave like a black hole (NTP, DNS resolution, logging, etc. do not get sucked into it), and finally we need a far better implementation that does not have a remote or local privilege escalation every other week [1].

https://github.com/systemd/systemd/issues


Are you saying that the entire TC of Debian was intimidated by Poettering? And then later, in a general resolution, he forced a majority of voters to say "we are fine with systemd"? [0]

I find this highly unlikely. Here is my take: people were tired of sysvinit. Of all the alternatives, systemd sucked less, so they went with it.

[0] https://news.ycombinator.com/item?id=18874100


There was heavy influence from the GNOME camp, both upstream and within Debian, by mandating the use of systemd with GNOME. At the time, I wasn't particularly happy about the disproportionate influence of a specific desktop use case at the expense of other use cases, e.g. embedded, server, and others.

Surely you enjoy the irony of the move to microservices and isolation while merging so many functions into init?

I am not sure what you are getting at. The "systemd" deb package does contain lots of binaries, 73 in my case:

    dpkg -L systemd | xargs stat -c '%F %A %n' | grep 'file.*rwx' | wc -l
    73

Each binary does one thing. So for example /lib/systemd/systemd, the PID 1 process, only starts processes, nothing else.

Now, some people complain that the "systemd" package contains other binaries, like systemd-networkd or systemd-journal-gatewayd (the HTTP server). I don't understand their problem. Sure, it is annoying that the "systemd" package contains a network manager -- but you don't have to use it; in fact, I am not using it at all.

So I do not see the irony. I think 73 binaries, each optional, easily replaceable, and doing only one thing, is very much in line with the microservices philosophy.


The number of binaries isn't in and of itself a good measure. It says nothing about the tight coupling and poorly documented interfaces which tie them all together.

There are two major things to consider here:

- how easy is it to replace each of these with an independent third-party implementation

- if it is possible, why should the design of that component be dictated by the systemd upstream

A huge amount of the value of Linux was the fact that the system was a loosely-coupled collection of parts. Much of that flexibility has been lost, and with it the utility which drove a lot of the adoption of Linux in the first place.


Re "how easy is it to replace each of these with an independent third-party implementation"

What did you have in mind when you were talking about hard-to-replace implementations?

I find replacing all those components to be pretty easy. They are all loaded via regular unit files, and each needs just one symlink to be completely disabled.

In particular, systemd-networkd is entirely optional, and ifupdown is still supported in ubuntu (even if not installed by default).

systemd-timesyncd is a regular service, and can be disabled directly, or via timedatectl. ntpdate still works when it is disabled.

Many other services, like systemd-hostnamed, do not even run by default. You could disable them by default and things would still work.
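As a rough sketch of what that looks like on a typical Debian/Ubuntu install (unit names as shipped there):

    # Stop and disable the built-in NTP client; ntpd or chrony can take over.
    systemctl disable --now systemd-timesyncd.service

    # Mask systemd-networkd entirely (the unit gets symlinked to /dev/null),
    # e.g. when ifupdown or NetworkManager manages the interfaces.
    systemctl mask systemd-networkd.service systemd-networkd.socket

    # See which systemd-* units are still enabled afterwards.
    systemctl list-unit-files 'systemd-*' --state=enabled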

The worst offender is probably systemd-logind, but luckily I have not had to deal with it. It looks like it is only needed for GUI logins, not ssh ones -- so most sysadmins would not have to care. That said, I did try to use its predecessor, ConsoleKit, and it was pretty horrible as well.


> 73 binaries, each optional, easily replaceable.

Except for the journal. It is effectively required by systemd (the pid 1 binary). In the vast majority of cases, that means running systemd-journald.

Every attempt I have seen at operating without it either misses early logs or is extremely fragile/unsupported.

That's a point of fact, not a position in the larger argument here. There are good arguments as to why journald should be effectively a required component.

> /lib/systemd/systemd, the PID 1 process, only starts processes, nothing else.

That is incorrect. It also interacts with the journal, the component implicated in the exploit in the article.


> Except for the journal. It is effectively required by systemd (the pid 1 binary). In the vast majority of cases, that means running systemd-journald.

    man systemd
    ...
    --log-target=
           Set log target. Argument must be one of console, journal, kmsg, journal-or-kmsg, null.
you can log to kmsg -- the messages won't be lost.
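As a minimal sketch (option names per systemd(1) and systemd-system.conf(5)):

    # Earliest-boot setting, on the kernel command line:
    #   systemd.log_target=kmsg
    #
    # Or persistently in /etc/systemd/system.conf:
    #   [Manager]
    #   LogTarget=kmsg

    # The manager's messages then land in the kernel ring buffer:
    dmesg | grep systemd | head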

> Every attempt I have seen at operating without it either misses early logs or is extremely fragile/unsupported.

I am not surprised -- the missing early userspace logs were one of the many annoying parts of sysvinit. Gentoo used to have a hacky way that scraped the screen buffer (!) but I think it broke at some point[1].

> That is incorrect. It also interacts with the journal, the component implicated in the exploit in the article.

Fair enough, I oversimplified. It also interacts with udev, systemctl, and does cronjob handling and network interface renaming.

[1] I just checked Gentoo's website and it looks like they have a better way now, thanks to OpenRC.


The decoupling of services is the trend of microservices. Systemd is merging many previously decoupled systems to manage them and provide a platform for decoupled services.

Seems ironic to me is all


For me, "decoupling" means running separate processes with well-defined interfaces. Systemd does exactly that -- it has many separate processes, each with well-defined interfaces. For example, systemd-timesyncd can be run on any machine with dbus, and does not require systemd init at all.

The only "merging" systemd does is that it puts multiple binaries into one package, other than that they are completely separate. While it is annoying waste of space, there is no reason to suspect "coupling".


Your idea of well defined and mine are somewhat different :)

Can you be more specific, please? Like which interface is not well defined?

I find the fact that systemd internals are connected via the same mechanisms as non-systemd apps (units, sockets and so on) very nice, especially compared to sysvinit or upstart. Have you tried to patch network detection in upstart? Not an easy job. And upstart's manpages are less than useful, compared to systemd's.

Above in the other threads someone said "journal". This is a good point, except there is --log-target=kmsg option which disables that interface and switches it to good old kmsg which anyone can parse and read.

Any others?


>systemd became standard thanks to meritocracy (important people in most distros decided to adopt systemd...)

But that's literally the exact opposite of a meritocratic selection.


They are important because of what they did, on their merit. They also selected systemd because it solved problems they had, unlike other candidates.

Do not mistake that for any random Joe getting his opinion voiced.


We are smarter than you. Trust us :)

It's not about being smarter and trusting, it's about doing stuff and having results vs taking a ride for doing nothing.

When I'm getting a free ride, I'm not going to lecture the provider about how he should be doing the job differently (not necessarily better; that might be my opinion, but it's not objective truth). I'm not in his shoes, solving the problems he is solving, so my view could be diametrically different from his.


Important people are important people because of their merit. The fact that their opinion counts and not random users' opinion is what makes it a meritocratic instead of a democratic selection.

Whether those people selected systemd based on its merits or based on other considerations is an entirely different question, and an unsubstantiated accusation.


I'm sorry but no, there's no such thing as an objective measure of merit.

In 1991, most of the "important people" in technology would have laughed at Linux compared to OSes like Unix System V and DEC/HP Unix. Even as the 90s progressed, most serious shops wouldn't adopt Linux and went for the BSDs (FreeBSD, NetBSD, etc.) or Solaris, because Linux was considered hobbyist software. I know because I worked in tech at this time and the same attitude even continued into the early 2000s.

The point I'm trying to make is that random users' opinions are exactly how certain software projects, open source or proprietary, become popular or fall out of public use. It starts with Systems Administrators or developers getting fed up with a certain solution and adopting alternatives until it reaches critical mass and the old is rendered irrelevant.


I do too, for personal use (OpenRC on Gentoo user through and through), but due to the adoption of systemd by popular distributions I don’t see an alternative for work. Unfortunately.

I moved all my personal things to OpenBSD due to the Linux community's move away from Unix standards.

That's what I did as well. But I also continue to use Slackware which doesn't have any current plans to move to SystemD.

It was the first Linux distro I ever used and I still love it to this day. I've used many different systems for work over the years but I've kept at least one Slackware laptop or desktop going since early 1995 when I first found a Slackware CDROM in the back of a book about learning Linux.

Although it gets new software, bug fixes and minor updates here and there, it's remarkably similar today to what it was then. And I find it Just Works; I've had very little trouble with it over the years.

I think I like it too because it has always felt like the most traditional Unix-like distro I've ever used, and also more BSD than System V, which I generally prefer.


I do have Slackware as well, so far it hasn't left its roots.

I like OpenBSD a lot, but I have to nitpick your description: OpenBSD diverges from "standards" when they decide the standard doesn't make sense. They are not afraid to tear down working things when they decide it's necessary, and I've faced a lot of release-to-release breakage and forced reconfigurations, and seen features I've relied on go completely missing without much warning.

Still a great project that I use every day, I'm not knocking it. It's just not a great example of a change-averse type of stability.


If you've got the career capital, move somewhere that doesn't use it? Look for places that run servers on FreeBSD, perhaps.

I worked with both SysV init and upstart. Upstart was not too bad. SysV init was giant piles of bash script, always slightly different from service to service and from distro to distro. Bash is a language which, after two decades of working full time on Linux, I still don't completely grasp. Just a few months back we got a client still stuck on SysV init; we ran this algorithm with him:

https://xkcd.com/2083/


Yes, systemd makes being an application developer on Linux so much easier. You can declaratively control state of your daemons, recover from crashes, add process limits, control startup order correctly and on and on.. Much more readable and usable than bash scripts pulling in random libraries and using endless greps to find out system state.
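As a rough sketch of that (the service name, binary path and limits are made up for illustration):

    # Hypothetical unit file -- /etc/systemd/system/mydaemon.service
    cat > /etc/systemd/system/mydaemon.service <<'EOF'
    [Unit]
    Description=Example daemon (illustrative only)
    After=network-online.target

    [Service]
    # Restart on crash after a short delay, and cap open file descriptors.
    ExecStart=/usr/local/bin/mydaemon --foreground
    Restart=on-failure
    RestartSec=2
    LimitNOFILE=4096

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now mydaemon.service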

> declaratively control state of your daemons

Hum... Symlink existence is declarative, configuration files are declarative. Running a specific piece of software to change the state of your service isn't declarative at all.


> Bash is a language which, after two decades of working full time on Linux, I still don't completely grasp.

Have you taken the time to actually read bash(1)? I've noticed that a lot of people (myself included) first pick up sh/bash (and shell scripting in general) out of necessity, not with the intent to actually learn the language. If someone wanted to learn a "normal" programming language like C++, Java, or Rust, they would probably start by reading tutorials or guides that cover the major features and teach the core concepts[1].

With bash, most people appear to start with trying to fix something like a build/install script, often resulting in cargo culting a few lines from stackoverflow, forums, "cool one line tricks"-style blog posts, or similar community resources. These can be very helpful[2], but they are not really a way to actually learn the fundamentals of the language.

For me, bash programming initially was very confusing. After I finally forced myself to take the time to actually read the docs/etc, I realized I was making a few incorrect assumptions[3] about the language. After fixing that and learning a few important tricks[4], bash became a lot easier to understand. Now it's one of my favorite languages (for simple tasks).

[1] e.g. (C) pointers, (Rust) variable ownership/borrowing

[2] http://mywiki.wooledge.org/BashFAQ

[3] I had a lot of trouble initially trying to figure out how to return values from a function. This was hard because bash "functions" are not really functions like you would find in most languages. I had assumed that you would return values from functions, which breaks when you try to return anything besides an integer. (Bash "functions" like foo() { echo "bar" ; } define small programs that return data by writing to stdout -- see the sketch after these notes.)

[4] Always quote variables. Always use curly brackets when using variables ("Hello, ${name}", not "Hello, $name"). Using $* is almost always wrong; use "$@" (with the double quotes) instead, which correctly handles spaces/bad chars in filenames.
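A small sketch of both points (function and variable names are made up):

    # "Returning" a value: write it to stdout and capture it with $(...).
    now_stamp() { date +%Y-%m-%d ; }
    backup_name="archive-$(now_stamp).tar.gz"
    echo "Backup file: ${backup_name}"

    # Quoting: "$@" keeps arguments with spaces intact, $* does not.
    show_args() { printf '<%s>\n' "$@" ; }
    show_args "two words" one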


If you don't grasp bash after two decades of Linux experience, you should maybe question yourself as to why that is rather than blame bash. Maybe you never properly spent a few days fulltime trying to learn it?

I'd take bash scripts without hesitation over systemd any day of the week. I can debug bash with my eyes closed. Every single time I had to debug systemd, I wanted to kill myself.


The truth here is very simple honestly.

SystemD turns your bash scripts into C, and then gives you a config file input.

Before this, on RHEL systems there were similar initiatives: you /rarely/ touched the init scripts; you used the /etc/sysconfig/ directory of configuration files[0].

The major difference being that system administrators could actually tell you line-by-line what was happening. In C you would have to find the version of the code in some version control somewhere and hope that there are no patches applied.

The absolute truth is that you cannot know what systemd is doing; you cannot even strace it. You /rely/ on it telling you the truth and doing the right thing.

And honestly I don't trust it to do the right thing when it can't even get encoding right[1].

[0]: https://access.redhat.com/documentation/en-us/red_hat_enterp...

[1]: https://imgur.com/a/6aiXrEg


Personally, I totally agree with you. Aside from the political power play that made systemd "popular" (/adopted), it also gained traction because of a lack of skill among the ever growing user base of Linux. Quite a few of the "problems" that systemd claimed to solve were solvable in SysV init, but it required skill/experience.

A bit like claiming to be a car mechanic and bashing an old car because you can't tune the ignition timing, because it doesn't have a board computer doing it for you and you don't have the skill/understanding to do it yourself.

In the past, you often were required to have at least some skill/experience to get Linux to work for you. Today, it's more about choosing the right tools to use Linux with a just a bare minimum of skill/experience.

An understandable development for sure, but it also opened the door for software of questionable quality, which owes at least part of its popularity to the ignorance of a significant part of the user base.


Fully agreed. There has been a race to the bottom regarding technical competence by various entities for various reasons. To me it seemed at the time that Redhat used dark-pattern-like behavior in order to push systemd, exploiting people's hopes regarding a more unified Linux ecosystem.

There is nothing wrong with making things easier to use. The problems begin when the people you put in charge of that effort (Poettering and co) are extremely short-sighted, do not properly understand what came before and thus make mistakes that could have easily been avoided, are average-at-best engineers with a personal history of bad projects (pulseaudio, avahi), are bad communicators and take valid critique poorly.

When I was still at Google, colleagues at Android and ChromeOS teams laughed at the mere mention of systemd. The general consensus in the teams I mostly interacted with was that systemd resembled a hidden iceberg with enormous problems, the magnitude of which would slowly become apparent. Which is one of the reasons Google has mostly kept away from it.


How did 'skilled' programmers recover when a daemon crashed in Sys5 init-based systems? The init script ends after startup so you are stuck with a third party monitoring service that will often clash with startup functions making controlling system state really hard. systemd was worth it for that feature alone.

Linux userspace was a bunch of crufty, unmaintained tools, from xinetd to logind (which literally had no maintainer until systemd came along) to update-rc.d, that additionally were not taking advantage of a lot of new kernel features like cgroups. systemd has done a great job moving the base layer of Linux userspace forward.

The old world of cobbling together bash, sed and shell metacharacters was always hacky, insecure and broken as shit and we are way better off now with systemd.


One would ask himself how the BSDs manage to do it (no systemd), Android (no systemd), ChromeOS (no systemd), Solaris/Illumos (no systemd) ...

Your arguments hold no merit whatsoever. The fact is that all the problems you describe have been solved, properly, multiple times _before_ systemd entered the picture. The reasons behind systemd mass adoption were political and Redhat exerted a lot of pressure at the time and in many ways, still do.


You could use the same argument to disparage any technological innovation. Computers? Our ancestors did just fine without them and so do many people across the world to this day.

But more ridiculous than that is your listing of Solaris and Android as systems without systemd. Solaris uses SMF, which was one of the inspirations for systemd. SMF is much closer in spirit to systemd than it is to sysvinit or rc. The particular merits of systemd or bash scripts aside, using SMF as an argument for the latter is just ignorant.

Furthermore, even though Android doesn't use systemd, software written for Android is written in such a way that it doesn't interact with the init system at all. The typical user/admin has no access to the init system. I have no specific knowledge of ChromeOS but I suspect it is much like Android in this regard. The only example that really works for you are the BSDs.

"Closer in spirit" does not mean anything tangible in the real world, you are merely playing with words.

Solaris SMF is vastly simpler than systemd and has a much narrower scope and focus. I don't recall mentioning bash scripts in the post you replied to, so you are simply being disingenuous by presenting a false dichotomy.


I find it really hard to believe that you were unable to understand what I meant.

The complexity of Sysvinit is orders of magnitude less than that of systemd.

This whole thread is a discussion of the relative complexity between sysvinit and systemd. Sysvinit uses shell scripts to implement much of the logic around starting and stopping services. Any discussion of sysvinit and systemd is going to involve bash. It's disingenuous to pretend otherwise.


I will copy my post again, here, to expose your strawman or unwillingness to stop deflecting:

"One would ask himself how the BSDs manage to do it (no systemd), Android (no systemd), ChromeOS (no systemd), Solaris/Illumos (no systemd) ... Your arguments hold no merit whatsoever. The fact is that all the problems you describe have been solved, properly, multiple times _before_ systemd entered the picture. The reasons behind systemd mass adoption were political and Redhat exerted a lot of pressure at the time and in many ways, still do."

This is the post you replied to, with arguments that (still) hold no merit. Now you are trying to shift this into something else rather than stick to the points I made _in this thread_. You pick and choose a reply of mine _from a different thread_. Moreover, you write: "This whole thread is a discussion of the relative complexity between sysvinit and systemd". No it is not. As I wrote: "The fact is that all the problems you describe have been solved, properly, multiple times _before_ systemd entered the picture."

I would take you more seriously if you stopped presenting one logical fallacy after another.


It is a logical fallacy to ignore context when making an argument.

daemontools, or respawn from inittab. There are a number of perfectly reasonable possibilities, of which these are just two.
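A rough sketch of both, with a made-up daemon name:

    # daemontools: an executable "run" script under the scan directory
    # (classically /service); svscan supervises it and restarts it on exit.
    mkdir -p /service/mydaemon
    cat > /service/mydaemon/run <<'EOF'
    #!/bin/sh
    exec /usr/sbin/mydaemon --foreground 2>&1
    EOF
    chmod +x /service/mydaemon/run

    # sysvinit: a respawn entry in /etc/inittab, e.g.
    #   myd:2345:respawn:/usr/sbin/mydaemon --foreground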

But more generally, you didn't restart daemons if they crashed. Firstly, because they used to be written sufficiently well that they never crashed, because competent people wrote them. Secondly, because continually respawning and crashing does no one any good.


Firstly, because they used to be written sufficiently well that they never crashed, because competent people wrote them.

If your deployment strategy depends on never having to run substandard software then you've already lost. Also, it just isn't true that older software was necessarily more reliable. It's just that when you found yourself maintaining poorly written software you just dealt with it through whatever means you had available. I remember having to use an IBM HSM implementation on AIX, something you would expect to just work because it was IBM software written for their own system on their own hardware, but in practice the filesystem kept invisibly crashing resulting in apparently corrupt files and I'd have to restart it every few minutes.


The thing is, most Linux distributions ran services this way for well over two decades. The world did not end, and the number of problems we experienced in reality was negligible. If we wanted automatic service restart on failure, there were facilities for doing so, had we chosen to avail ourselves of them.

If something is crashing and requires a restart, then doing it automatically is at best papering over a problem the admin should be investigating. It's a mitigation, rather than a solution, and might well cause more problems than it solves.

systemd isn't bringing anything particularly new or noteworthy to the table here.


The thing is, most Linux distributions ran services this way for well over two decades.

So? That doesn't mean it was the best way to do things. Humanity existed for thousands of years without vaccines and still managed but that doesn't mean that disease wasn't a thing. Misbehaving services have always been a thing and they've always been a problem. I know because I've dealt with them. If you feel that other methods for dealing with misbehaving services are better, fair enough, but don't appeal to a mythical golden age where everybody wrote software without bugs.


I'm not in any way appealing to a "mythical golden age". I'm simply pointing out that the whole Linux world ran this way for over two decades without this being a critical problem, or even a problem of particular note.

Not only that, but automated service restarting was configurable if desired, albeit not the default. If you wanted it, it was doable with ease. This is not a new feature which systemd brought to the table, it was already available.


The sentence "Firstly, because they used to be written sufficiently well that they never crashed, because competent people wrote them" IS an appeal to a golden age. Because it's not true. People did write services that crashed, and sysadmins had to work around them. I know this to be the case because I had to work with a particularly egregious example. Whether you think the older methods for managing them were better is one thing, but those problems existed and they caused lots of damage.

You're assuming bash is a requirement, which is short-sighted; #!/bin/foo can be anything and isn't limited to bash.

> How did 'skilled' programmers recover when a daemon crashed in Sys5 init-based systems?

Maybe the daemons didn't crash so often as now?

I never understood this new approach, which consists of writing a server process in a half-assed way so that it crashes frequently, then solving that by putting a restarter service in front of it. Done!


This sentiment is something I relate to strongly

systemd is very easy and nice to use. Why waste your life writing init scripts?

Woah there, careful with that opinion.

Most of HN loves to flame systemd, yet still uses it. There is no suitable alternative on Linux.

[flagged]


Can we just ban/flag these comments already? They're so obviously false and their motivations aren't sincere. Political colorization of other people's choices because you don't like it, with a thin veil of political grandstanding masquerading as insight, on top of needless hyperbole is just unhelpful. Honestly, I get that you like systemd. I understand wanting to not talk about it anymore. But not wanting other people to be able to talk just because they disagree with you is ... not a compelling perspective.

>Honestly, I get that you like systemd.

Nowhere did I state that.

>I understand wanting to not talk about it anymore.

I didn't say that.

>But not wanting other people to be able to talk

That's certainly not my motivation. How dare you even claim that is something I want.

It's an unsubstantiated comment that doesn't offer anything to the conversation. It provides no context, nothing to argue against, no examples, no solutions, but it's allowed because it fits within the HN userbase's opinion. An opinion often grounded in narrative and not facts.

It uses conspiracy theory accessories and grandstanding. That's it.

Your comment isn't clever; it just reveals you couldn't be bothered to read what was being stated, and then you even had the audacity to ascribe positions to me that I never stated. And certainly have never stated on this board.


Me too. Ironically, systemd is why I bought a Mac.

It's so hard to find a linux distro that actually works on a laptop but doesn't use systemd, so I eventually just gave up on Linux entirely.

That left me with BSD, Windows, and macOS.

I tried FreeBSD and OpenBSD but performance was atrocious and noticeably worse in almost every way than Debian and win7 on the same hardware. I'm sure BSD makes a good server OS but it's just not mature enough for serious work unless I was willing to keep my laptop plugged in 99% of the time. But what's the point of a laptop if the OS performs so poorly it won't even last a 4-hour flight?

I similarly seem to be the last guy on earth who still remembers the bad old days of Microsoft's dominance so I won't pay for windows.

The winner by default is therefore apple.


Interesting choice you have there. Systemd is heavily influenced by launchd, the service manager that makes MacOS tick. They're very similar in design. IPC based, socket-activated, property-based, dependencies, parallel startup, unifying init and service management...

https://en.m.wikipedia.org/wiki/Launchd


Launchd is not a syslog daemon, an audio daemon, or a DHCP client. It just launches things.

Neither is systemd.

Were you thinking of journald,... etc. instead?

http://0pointer.de/blog/projects/the-biggest-myths.html


> ...you can turn off and replace pretty much any part of systemd, with very few exceptions. And those exceptions (such as journald)...

I’m not sure what your link is supposed to prove.


systemd is not a syslog daemon either. systemd-journald is.

It's funny that two entirely unrelated pieces of software written by separate projects have such similar names.

One of them should really change to avoid this unfortunate confusion.


Windows GUI is heavily influenced by Mac, so it also must be as good as the Mac version, right?

As in: launchd is directly mentioned in the design documents of systemd as the main source of inspiration

http://0pointer.de/blog/projects/systemd.html


Yes, that doesn't contradict my point.

Systemd is a much worse implementation of the launchd concept, just as Windows is a worse implementation of a Mac-like GUI concept...


[flagged]


My complaint isn't with the design. My complaint is with the piss-poor implementation.

Sorry you had such a bad time with OpenBSD. It isn't top of the heap when it comes to performance but there are benefits that balance it out.

On the Linux front, I've been using Void and it has been pretty fantastic. Runit is the default init. It is a more hands-on distro (think Arch 10 years ago), but it's fast, slim, and doesn't have systemd.
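For anyone curious, enabling a service under runit on Void is itself just a symlink (sshd used as an example; paths as documented for Void):

    # Link the service directory into /var/service to enable and start it.
    ln -s /etc/sv/sshd /var/service/

    # Check on it, or remove the link to stop and disable it again.
    sv status sshd
    rm /var/service/sshd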


> It's so hard to find a linux distro that actually works on a laptop but doesn't use systemd

Don't you think that there's a reason for that? That systemd solves the problems that make it possible to react to events and makes laptop-usable distros possible?

Meanwhile, enjoy launchd. Maybe you will find out, that they are conceptually similar.


launchd has a clearly defined scope, and it restricts itself to doing only those tasks which fall under its remit.

If systemd had followed a similar philosophy, it would have been accepted without anything like the same amount of criticism. As it is, its scope creep is frankly absurd and dangerous.


Most of the additional functionality is optional, and in separate binaries. Apple also has similar functionality (seat management, resolver management, etc), and nobody is bothered by that.

One gets the impression that optionality is only theoretical. Same for separate binaries. The degree of coupling in the systemd architecture is enormous. So then, what purpose do separate binaries serve other than being able to (conveniently) be used for deflection or provide a certain facade?

The philosophy of systemd exemplified through PRAXIS is one of subsumption and uniformity (== taking away choice) under a singular vision defined by the systemd implementors. In practice, this means that you are penalized in various ways if you don't "buy in all the way". You can take a look at all the distributions that ship systemd by default and see how many have bought in all the way vs not using the "additional functionality".


> That systemd solves the problems that make it possible to react to events and makes laptop-usable distros possible?

It is one such solution. But there are others, which from an end-user perspective are much easier to deal with (e.g. runit, shepherd etc.).

So, for instance, my Void-powered laptop shuts down quickly and cleanly when I ask it to, and I don't have to wait 1:30 minutes (or more) on shutdown.


Launchd appears to actually work. And it isn't Poetteringware.

You bought a Mac because of systemd? Isn't that some sort of overreaction?

I moved away from Linux as soon as systemd was everywhere. I first migrated to Slackware, and then to FreeBSD. I wouldn't call this kind of move an overreaction.

Some people just want to know exactly what is running on their system, and how it works. That's one of the reasons I don't use Windows.

(I do confess that I never took the time to learn systemd properly, but I highly appreciate it for introducing me to the BSD world).


I don't particularly think so. The amount of my time that systemd has wasted is insane. Buying a mac would have saved me all that time... if only I liked their hardware more.

systemd has wasted a ton of time for me as well.

Then again, I just bought a new MacBook Air and the keyboard gave out after two weeks and it gave me a rash.

Can't win at all.


I don't think so. If the distribution you're using insists on integrating software that you don't want, it's reasonable to leave.

He tried other open source operating systems and didn't find them acceptable. Sure, there are Linux distributions that have formed to avoid systemd, but it's not clear how long any of them will last.

Mac os isn't exactly unicorns and rainbows either, but there are some benefits of being part of a larger user base, too.


Don't tell him about launchd

Did Apple reinvent every Unix daemon under the sun as part of launchd? And did they also reinvent all the security problems while they were doing it?

The problem with systemd isn't necessarily how it handles system startup and daemon supervision (although I don't care for that), it's that it subsumes so many other things, and with no technical excellence.


I prefer to use my computer to do productive things, instead of fighting against the OS. MacOS is a certified Unix underneath. Works great.

Devuan works fine on my laptop and doesn't use systemd. It's nearly exactly the same as Debian otherwise.

When you tested OpenBSD, did you enable apmd for power management? This significantly improves my battery life (by about 3 hours).
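For reference, a rough sketch of enabling it with OpenBSD's rcctl (-A turns on automatic performance adjustment):

    # Enable apmd with automatic CPU performance scaling and start it.
    rcctl enable apmd
    rcctl set apmd flags -A
    rcctl start apmd

    # Check battery status and estimated time remaining.
    apm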

Of course. But it resulted in multi-second pauses while doing such demanding tasks as closing browser tabs.

So when did you leave Mac due to it using launchd?

Launchd is nowhere near as bad as systemd and doesn't suffer from the latter's feature creep.

It's only a matter of degree, not philosophy, though. Launchd still accumulated the jobs of multiple services (initd, inetd, cron, incron, etc).

The aggregation of these services is understandable, because every one of them is about running a service. They might be triggered by different events (timer, network connection, at boot, manually), but other than that they are all doing pretty much the same thing: setting up and running a service.

It isn't building in an NTP client, or a hostname resolver, or configuring your network, or taking over the system logging, or generating QR codes. It has a singular focus and design which makes conceptual sense. It's not a dumping ground for a disparate collection of poor quality alternatives to existing tools.


Slackware64 14.2 runs fine on my 10-year-old, mediocre (for its time) laptop. Don't know about the battery though; mine is too old to be useful.

There are a number of desktop/laptop distros which don't use systemd. Void Linux is one such (it uses runit), and it's very easy to use.
