This is the really concerning part. "Silently fixed in the upstream git" is not at all an acceptable way to deal with serious security flaws in your product.
Just to be clear, systemd is not part of the Linux kernel.
Also, if you are going to make a broad claim like that, I would appreciate some citations/examples. I have no idea whether you are right or wrong on the whole, but without examples I can't learn anything from it myself.
If you have a bug in some GitHub project you cannot request a CVE for it. If a CVE is reported you'd usually include that in the commit. But that's not the same as saying every security bug should have a CVE. It's often way easier to just fix bugs instead of figuring out whether each one is a security bug (the method Linus uses).
(and it was not even my project... I just reported the bug)
Now the workflow has changed a bit; the link that you shared in fact says "For open source software products not listed below, request a CVE ID through the Distributed Weakness Filing Project CNA.", which is just an easy-to-fill Google form. Not such a closed system as you seem to imply.
(OTOH, obviously CVE cannot guarantee or pretend to have universal coverage of every security issue that has ever existed)
I generally like systemd, but it's irresponsible to not publicly communicate about such an issue if you're aware that it's actually a security issue
Fixing bugs before investigating is OK... but not communicating them means that you leave downstream users exposed, since nothing prompts maintainers to ship the patch/upgrade.
If you go to https://cve.mitre.org/ it has a link "Request a CVE ID" which IMO explains that it is only for some products, not all. Alternatively there's also a web link below it which wants a GPG key, etc. Alternatively you can email some mailing list, but I don't see where this is documented.
The complaint was that a CVE should've 1) been requested and 2) been included in the commit. IMO the entire thing is confusing.
I'd also like to repeat: it's super nice when things are reported and get a CVE. But that doesn't mean every security-relevant commit will be recognized as such.
There seems to be a pretty major backlog for getting CVE numbers, such that for not-hugely-impacting ones it seems like the CVE request people won't take any time to discuss things.
I'm saying that after trying to get a CVE for a low-risk problem with CMake on Windows. I applied for a CVE (months ago), and the only response I received was:
Please resend your CVE request properly (the description was not filled out properly) and resubmit. The correct format is:
[Vendor name] [product name] version [version info] is vulnerable to a [single flaw type] in the [component] resulting [some impact].
Which is strange. I looked over the original submission, and there's nothing that I'd change in it. Emailed the person back asking for clarification and received zero reply.
If it was a high risk bug, I'd probably take the time to follow up more. Since it's not though... ;D
That is not true anymore. You can get a CVE from MITRE for anything (they are the ultimate root authority), and for the Open Source world you can get a CVE from the DWF (https://distributedweaknessfiling.org ), something that is currently slow because we're working on automating a lot of it and streamlining the process (I'll be giving a talk on this at RSA: Saving CVE with OpenSource: https://www.rsaconference.com/events/us17/agenda/sessions/56... ).
My goal long term is to have CVE requests take <5 minutes for the requestor and <1 minute for the assigner to process. We need to scale this out and simplify it vastly. People need to be aware of security flaws so they can be dealt with, and CVE is the best option for this we have currently.
You need to be an invited project to be able to allocate CVEs out of a block, but you can absolutely request CVEs from one of the participating projects (including MITRE themselves) as a random person. See https://cve.mitre.org/cve/request_id.html . Contacting one of the large OSS product security teams like Red Hat's is also a fine option.
I'm not the one who downvoted you, but there are two reasons why this is not a sufficient explanation:
1. That's why you should assign a CVE even for a "lower" severity exploit. That way, people who work in that field look at it and can figure out when it's worse than it first appears.
2. It is still a terrible mark on systemd's procedures that such a thing is not reviewed by someone who will consider an exploit through all lenses, and combined with the no-CVE issue from above it looks even worse.
When systemd is taking over more and more critical parts of the system, and getting deployed to most linux distros, it's only fair that we expect more of them and put them under more scrutiny. That they trip on such a "trivial" case is kind of scary.
Now I'm not sure if this was linked to a pull request or some other place where discussion took place, but it looks like it was a simple fix, by one person, over a year ago.
At a minimum I think this suggests that more scrutiny is required, especially for bugs that suggest security issues.
> 1. That's why you should assign a CVE even for a "lower" severity exploit. That way, people who work in that field look at it and can figure out when it's worse than it first appears.
Within a few hours of review of the bug-fix patches affecting Linux kernel version 2.6.24, we identified a commit from February 2008 with serious security consequences (Git ID 7e3c396, commit subject "sys_remap_file_pages: fix ->vm_file accounting"). At the time that we conducted this review, this bug and its corresponding patch had been disclosed for more than 10 months, yet it had no associated CVE number or record of any security consequences.
We developed a privilege escalation exploit for this bug in a few hours; doing so did not require any innovative techniques or extensive expertise. The exploit allows any user on a vulnerable system to gain full administrator privileges on the system.
If you care about security, you should run the latest version of all the security-critical software you use. Healthy projects clean up bad / smelly code all the time, and don't investigate the security weaknesses of old versions of their code.
> they also seem to think that a local DoS is not enough for a CVE
Some vendors do not consider a local DoS a security issue. I tried to discuss these kinds of issues on oss-security but even MITRE refused to assign a CVE.
If the system is not restricted by quotas on every computer resource, it's trivial for any local user to DoS it. For the issue to be exploitable, you need to have restrictions in place, and to my knowledge the only way to do so in the past was with SELinux. Today, of course, there are cgroups.
Which one was this specifically? Not all local DoS's are security vulnerabilities, in general there needs to be a trust boundary that is violated, e.g. the ping of death, clearly a single remote ICMP packet shouldn't cause the system to reboot. But what about DoS's that can only be triggered by root? And the whole grey area in between these two extremes?
Sudo and polkit are so complicated because they solve the wrong problem. The common problem is how do I execute code under a different effective user id. Instead they try to solve a much harder superset: how to securely implement a policy defining who is allowed to execute what under which effective user id in a setuid executable.
Polkit gets remarkably close. You can use polkit to define rules so an unprivileged user can call "systemctl restart ...", which sends an unprivileged message over D-Bus to pid 1, which checks authorization and then does the task if the requestor is authorized.
Polkit just also ships with pkexec and similar things in the sudo mindset.
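For illustration, a rule of the kind described above (letting an unprivileged user restart one unit) might look something like this; the group name "deployers", the unit name, and the file name are made up, while the "org.freedesktop.systemd1.manage-units" action id and the "unit" detail are, as far as I know, what systemd uses for this:

    // /etc/polkit-1/rules.d/10-restart-myapp.rules (hypothetical file name)
    polkit.addRule(function(action, subject) {
        // Let members of the made-up "deployers" group manage this one unit
        // (start/stop/restart it) without sudo.
        if (action.id == "org.freedesktop.systemd1.manage-units" &&
            action.lookup("unit") == "myapp.service" &&
            subject.isInGroup("deployers")) {
            return polkit.Result.YES;
        }
    });

With that in place, `systemctl restart myapp.service` run by a member of that group succeeds; pid 1 does the privileged work once polkit says yes.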
The problem is that you have many things to configure instead of just one. That is a problem because we humans have a limited capacity for remembering things. On top of that, polkit is basically JavaScript and XML (I think), and one cannot expect everybody to understand JavaScript or have the patience to read/write XML. Now, in addition to groups and users (and advanced fs permissions and app/se/linux/armor), you have polkit and D-Bus permissions. This makes it really hard to look at something and know what it can do. For example Steam, when you start it up, asks NetworkManager to bring up an internet connection. To disable that one has to write JavaScript.
In short, it's a clusterfuck. Even if everything worked properly (which is hard with Turing-complete configuration files) it is still too vague, too big a load to administer. I expect the fd.o/Fedora people's answer to that (when people start complaining, that is) to be a GUI, not a rewrite.
I'm not going to defend polkit's choice of JS, but I have to say it's funny to see a complaint about Turing-complete configuration files when the usual anti-systemd sentiment prefers sysvinit Turing-complete shell scripts to systemd declarative configuration files :)
Do you have any real objections to what I said? Did I even mention systemd or sysvinit?
Polkit is on systems both with and without systemd. And if you just have to pull sysvinit into it: shell scripts are simple (note that permissions to do something are not even remotely related to the shell scripting language, since permissions for shell scripts live in the filesystem).
Less social, more technical. Please. Otherwise there is no hope of making things better.
I don't object to criticizing JS as an unusual choice. I agree with that.
I do object to the claim that declarative rules are better: the goal here is to reliably be able to express a security policy that means the same thing to the computer as to you. It would be difficult to find a sysadmin with no experience with either sudo or JS who would find sudoers rules more readable than JS. I found the experience of writing a polkit JS config pretty easy, especially since there are bunch of examples in the manpage: https://www.freedesktop.org/software/polkit/docs/latest/polk...
It's very common for someone to, say, not realize that a sudoers rule that lets you run /usr/bin/something also lets you run /usr/bin/something --with-arbitrary-args, yet as soon as the rule specifies a single argument that behavior goes away.
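Roughly, from memory of sudoers syntax ("deploy" and /usr/bin/something are placeholders):

    # Naming only the command allows it with ANY arguments:
    deploy ALL = (root) /usr/bin/something

    # Listing arguments allows ONLY that exact command line:
    deploy ALL = (root) /usr/bin/something --safe-flag

    # An empty string allows the command with no arguments at all:
    deploy ALL = (root) /usr/bin/something ""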
I definitely think you do want a simple language that lets you supply a few conditionals, maybe run some commands, and make decisions on that, and the best option is a real, existing language. Serious mistakes would include using M4, like sendmail, or writing your own scripting language, like Plymouth (https://www.freedesktop.org/wiki/Software/Plymouth/Scripts/).
I do think it'd also be good to have a system for extremely simple rules that doesn't involve writing JS, but given the examples in the manpage I don't think the need for that is very pressing.
I'd actually be inclined to suggest shell, given that it's familiar to sysadmins, but shell is notoriously bad at foolproof string handling, and that's exactly what you want to be able to enforce here. Failing that, perhaps Python (or Lua, but I'd bet more people are familiar with JS than Lua).
I'm reminded of proxy PAC files, which are also JS (because that was convenient for browsers; presumably JS was convenient here because GNOME has a JS library easily available). They're a weird system with some security concerns, but I don't think that the fact that they're in JS has ever been the problem with them.
Uh, so, systemd doesn't recommend using `sudo systemctl start blah`. The idea is to use `systemctl start blah` and have polkit handle the authentication. Not sure what's unclear here.
The difference is that with `sudo systemctl ...` systemctl itself is run as root, whereas with PolicyKit only the privileged parts are. And if I understand it correctly, PolicyKit allows more fine-grained control over permissions than sudo.
Well, since this was the recommendation from one of the polkit contributors... I'm not exactly sure what you mean. Perhaps you can enlighten me with your wisdom.
I think your being so quick to judge anything using JavaScript as inherently bad is silly, but this link just made me realize that using sudo with systemd as I had been seems to be incorrect, so thanks for the link.
Let's say you write some authorisation code using JavaScript. If it contains a syntax error or a logic error, your authentication is broken for your entire system. Checking the correctness of a program is usually non-trivial, though I accept some things CAN be checked (e.g. syntax). However, JavaScript is naturally a procedural language, and hence the bulk of your problems would be in your logic.
In contrast, /etc/sudoers{.d} config is syntactically validated using a strict grammar so that it can be validated for correctness before being loaded and used (hence visudo). It's primarily a declarative language too which means that logic bugs aren't really possible. This means that there is a robust mechanism to detect syntax issues (and some semantic issues) before breaking your system.
As a Qubes user, no, not at all. It's fairly easy to starve other VMs of resources. Qubes is more vulnerable here than other systems, due to its dynamic allocation of memory. A VM that has a no-matter-what fixed limit of, say, 2 GB RAM would have a harder time causing trouble there, but the dynamic management done by Qubes is a major selling point, otherwise it wouldn't really be practical to use on mobile hardware (which is all the rage). I believe one can disable it on a per-VM basis though, so there's that.
Given how furiously the thought police react to an OT comment, or indeed any comment mentioning karma, I should have created a throwaway for that question... -4 jeez
Local security on Linux is completely forfeit. It's a single user OS. Anyone with access has root. There's just too much surface area between all the different subsystems and nobody's been paying much attention to local security for a very long time.
I've thought for a long time that containers and even virtualization are kind of a parody of this. They shouldn't be necessary. If the OS had good multi-tenancy, resource control, and local security you could have multiple tenants (even untrusted ones!) on the same "box" without requiring any of those layers of complexity.
Why does systemd implement touch(1) as a library function? Isn't the whole point of coreutils to keep stuff like that centrally maintained so we don't have a million different (and possibly broken) implementations of it?
Sure, security has performance costs, but I don't think an init system (whose job, ultimately, is to fork and exec a lot of things) is going to be harmed by it.
And, anyways, if you had a libcoreutils, suddenly you're stuck worrying about symbol versions, LD_PRELOAD, etc., etc., whereas simply executing the binary is pretty simple.
> I don't think an init system (whose job, ultimately, is to fork and exec a lot of things) is going to be harmed by it.
The performance overhead of the script-heavy init system that preceded it is in fact one of the core design points of systemd. Boot time still matters in some environments, and the old init scripts were completely out of hand.
Poettering works for Red Hat and it was written on Fedora. Yes, Ubuntu had something else. I don't see how that's relevant to a discussion of systemd's design goals.
Fedora was one of those two systems, using upstart since Fedora 9. If you are going to participate in a systemd discussion, you should know what you are talking about, lest you once again (as indeed you are) promote the There Is Only System 5 init And systemd fallacy. Read the Uselessd Guy's articles already pointed to. Read the Debian Hoo-Hah where there were four choices. Read Lennart Poettering's own explanation, widely published a couple of years ago now, of how upstart was the motivator.
Fun fact: the person who fixed this is an Arch Linux developer. Based on the commit, the whole issue seems like an oversight (thinking mode_t is signed).
This doesn't appear to be malicious in any way. Note that many apps have signedness issues like this; the difference is that those aren't enough to give you root.
On a systemd operating system there may or may not be a program called "init" and it may or may not be an alias for a systemd program. (One's operating system could be one of the ones that just invokes "/lib/systemd/systemd" directly without an "init" at all; or "init" could be Upstart or van Smoorenburg init or the nosh system manager; and of course just running unadorned "init" assumes that "/sbin" is on an unprivileged user's PATH.)
The best command to invoke here is either "systemctl" or "systemd".
2) maybe. mode_t is unsigned and MODE_INVALID was defined as:
#define MODE_INVALID ((mode_t) -1)
and the problem was in a check:
fd = open(path, O_WRONLY|O_CREAT|O_CLOEXEC|O_NOCTTY, mode > 0 ? mode : 0644);
so maybe the author thought MODE_INVALID < 0. Though safe languages might let you do this explicit cast as well, so maybe they won't save you.
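To make the signedness trap concrete, here's a tiny standalone C illustration (the MODE_INVALID define is from the code above; the printout is mine) of why a "> 0" test can't catch the sentinel:

    #include <stdio.h>
    #include <sys/types.h>

    #define MODE_INVALID ((mode_t) -1)  /* sentinel meaning "no mode given" */

    int main(void) {
        mode_t mode = MODE_INVALID;

        /* mode_t is unsigned, so (mode_t)-1 is the largest possible value
         * and "mode > 0" is true for the sentinel too. */
        printf("mode > 0:             %d\n", mode > 0);                  /* 1 */

        /* The permission bits that end up being requested from open():
         * 07777, i.e. world-writable/executable plus setuid/setgid/sticky. */
        printf("mode & 07777:         %o\n", (unsigned) (mode & 07777)); /* 7777 */

        /* The test that was actually intended: */
        printf("mode != MODE_INVALID: %d\n", mode != MODE_INVALID);      /* 0 */
        return 0;
    }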
The other thing is that in a safe language you would probably use an Option/Maybe type here instead of a plain mode_t.
It wasn't really anything to do with the language, and far more to do with the operating system kernel API.
The fchown() system call supports passing -1, cast to the appropriate type, as a no-op value. The systemd people were attempting to wrap similar semantics around fchmod(). Originally in 2014 M. Sievers specified (mode_t)0 as the no-op value, which wasn't a good choice, with M. Poettering changing it to (mode_t)-1 in 2015 but overlooking one place where the value remained tested against 0.
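A sketch of the kind of wrapper being described, with the test written against the sentinel itself rather than against zero (the function name is mine, not systemd's):

    #include <sys/stat.h>

    #define MODE_INVALID ((mode_t) -1)

    /* Mirror fchown()'s "-1 means leave it alone" convention for fchmod():
     * a MODE_INVALID mode is a no-op, anything else is applied. Comparing
     * against the sentinel avoids the signed/unsigned trap entirely. */
    int fchmod_opt(int fd, mode_t mode) {
        if (mode == MODE_INVALID)
            return 0;
        return fchmod(fd, mode);
    }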
The same error could technically occur in Rust if the API was designed that way but I think the "Rust way" would mandate using an `Option` instead of a special MODE_INVALID value.
So it would become something like `mode.unwrap_or(0o644)` which doesn't leave a lot of room for error.
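Something like this, as a sketch only (the function name and the use of the standard library's OpenOptions are mine, not anything from systemd):

    use std::fs::OpenOptions;
    use std::os::unix::fs::OpenOptionsExt;

    // Create (or open) a file, with the mode expressed as Option<u32>:
    // there is no sentinel value left around to mis-test.
    fn touch_with_mode(path: &str, mode: Option<u32>) -> std::io::Result<()> {
        OpenOptions::new()
            .write(true)
            .create(true)
            .mode(mode.unwrap_or(0o644)) // fall back to 0644 only when no mode was given
            .open(path)?;
        Ok(())
    }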
So in other words it wouldn't have made a difference.
A better type system gives you the option to enforce stricter checks to help you catch mistakes, but the same people with the same procedures would have written this bug in any language.
Not necessarily. If any unsafe constructs are locally visible during code review, and the language is such that unsafe constructs are rarely required, then it's much easier to give unsafe constructs a higher level of scrutiny that you can't afford to do in a language like C where unsafe things are pervasive and the same line can easily be safe in one context and unsafe in another.
I don't know about SystemD's code policies. But certainly serious vulnerabilities have been found even in C code where changes went through code review (the famous Chrome sandbox escape due to an undefined bitshift was noted to have been reviewed and explicitly "LGTMed" by two people).
And the decision about whether to code review is not necessarily static. A language that reduces the cost and/or increases the benefits of code reviews changes the decision space. And a more expressive language can free up developer time to spend on things like code review.
Eh, I feel like I've seen even pickier static analysis warnings. If the next -Wall told me I should rewrite this as "mode != 0" to confirm the intent, that wouldn't seem crazy to me.
(Future readers can safely ignore the rest of the comment, I was struck by the "did not read the article" disease)
Surprised that software has security flaws? Especially software written in "let-me-use-that-chainsaw-to-trim-the-bushes" C? :)
I know that there will be security flaws in any language, but if there ever was software today that deserved a safer programming language, the init system was it. Or the web browser.
Although the specific cause -- (mode_t)-1 -- is something that you can't really do in many languages. So you'd likely have to write 0777 or equivalent explicitly down, making it so much more obvious what a bad idea that is.
It's one part of a cascade of errors. The author(s) defined an opaque MODE_INVALID but wrote code depending on their 'knowledge' of its underlying value. Signed/unsigned confusion is typical of C, though.
The 'fixed' code¹ has the property that calling it to create a file with mode==0 (i.e. no permissions) actually creates one with mode==0644 (i.e. some permissions), which is a wtf r u doin that can't be blamed on C.
The "sysvinit" binary package has had a total of 2 bug reports tagged "security" filed in Debian's bug tracker. That's over the entire history of the package that goes back to at least 2004.
If I extend the search to all packages built from the "sysvinit" source (which includes packages like initscripts and sysvinit-utils), the count increases to 8.
(source: https://www.debian.org/Bugs. I'm not linking to exact queries since they take quite some time and HN has a tendency of taking down Debian's bug tracker)
van Smoorenburg init dates from 1992. Considering its bug history from age 12 to age 25 and comparing that to systemd's bug history from age 5 to age 6 is at the very best misleading.
At age 4, van Smoorenburg init was panicking when /etc/inittab had blank lines. At age 6, van Smoorenburg init was having a buffer overflow in init.c fixed.
Counting the CVE bugs is a silly approach. It is far better to look at the coding practices that are followed in a project. Are APIs designed and implemented consistently? When a functional change is made, is the doco always changed at the same time to match? Are the reasons for seemingly odd things properly recorded for maintenance programmers in the future to read? There are many, far better, questions to ask in place of how many CVE listings something has had in some arbitrary interval in its lifetime.
Why is it misleading? If we had something safe and stable because of its age and refinement, why is that an excuse to stuff something less mature in its place?
Personally I like unit files rather than bespoke Bash scripts, I like dependency-driven parallel startup, and I like getting a daemon watchdog for free. That's about where I'd have ended the feature requests, though. I'd also like it to be roughly as secure as what it's replacing.
For the reasons given. Read again, properly. Pay particular attention to the statement that counting CVE listings is a silly approach. Then think, and consider that if that alone is silly, what must counting two sets of CVE listings from entirely different parts of two projects' lifetimes be.
Then read on and find a whole lot of better questions to ask. For bonus points, try asking yourself them about various systems. (-:
OpenRC (and, to some extent, the BSDs) already have dep-driven startup, and OpenRC supports parallel startup. Also, with rc.subr and whatever OpenRC calls its equivalent, the init files can be pretty damn descriptive, and very concise.
So if Vsinit is stable and bug-free right now, why switch to something that isn't?
Also, systemd has had two bad exploits over the course of about a year. The flaws in Vsinit you point out were two years apart, and the former of the two could only be triggered by someone who was already an admin, and gave no opportunity for privesc.
Granted, init panicking is never acceptable, but it's a heck of a lot better than what systemd did.
Anyone who erroneously thinks that van Smoorenburg rc is bug free hasn't spent time fixing the whole host of faulty, incomplete, and rickety van Smoorenburg rc scripts that exist.
It is also erroneous to propound the old There Is Only System 5 init And systemd fallacy, which is just bunkum.
I'm well aware there are other inits: OpenRC, BSDinit, Runit, s6/s6-rc, and many others are all better alternatives to both sysvinit and systemd. I was making the sysvinit comparison here for simplicity's sake.
>Anyone who erroneously thinks that van Smoorenburg rc is bug free hasn't spent time fixing the whole host of faulty, incomplete, and rickety van Smoorenburg rc scripts that exist.
I never said it was bug-free, I said that the problems with the rc scripts aren't inherent to the sysvinit model.
At the same time, the list of bug reports that the init scripts used with sysvinit had (because they all had to replicate the same code, in different ways, patched together from some stackoverflow post) is uncountable.
That's not an inherent problem: BSD's rc.subr and similar systems exist, which fix this issue in a sysv/bsd-style init system. They don't even technically require patching init itself.
Systemd's problems are deeper in. One major one is most of its design, and its entire conception.
IMO having a security bug in systemd is not acceptable. Commits should be carefully reviewed. That other projects (even if these include very important ones) have way more is "meh".
From what I understood of the systemd code, it's written pretty defensively. I quite like systemd because it's useful, thought out, etc. But then I want all that without any drawbacks (because why not!). Probably unrealistic, but nice to strive for a perfect project.
This exploit isn't in PID 1, and neither are all of the other things people claim are in PID 1. PID 1 in systemd doesn't contain much more than it needs to. It handles parsing unit files, which could arguably be split out, but that would still leave something highly privileged to parse them. Just about everything else would be a major pain to separate from PID 1, as most of it is about walking the dependency graph from where the system currently is to the target it's trying to reach. It also needs to handle supervision and other core parts of init, but by and large most of the claims out there that everything and the kitchen sink is being put into PID 1 are completely baseless.
Even so, systemd init tends to (by my understanding) trust other parts of systemd.
In addition, I don't see all that many other daemons running that can be attributed to systemd: where did crond go? What daemon replaced it? Because from out here, it seems that it was sucked into the core, and don't tell me that's an essential.
Finally, even if the core is rock solid, there's no excuse for these kinds of exploits in core software, which systemd seems to want to be, especially not this frequently.
You're right. Ever since distros made systemd default, computers all over the world have been catching fire, exploding, shooting jets of lava from their headphone jacks. It's the end times
You don't really need jets of lava to have a catastrophe. Also, it doesn't have to happen right away - OpenSSL was neglected for years before Heartbleed happened. Also, keep in mind that it only takes one vuln to compromise the system, and we're definitely hearing of too many of them over time.
It is NOT a secure project. It's not even remotely so. The development process is too fast and erratic.
Yes, tell us all you know about the systemd development process. And while you're at it, enlighten us all about how it is "too fast"; average citizens like myself see but a pace far slower than those other "NOT secure projects" Chromium, Linux and Postgres.
Compare the sizes of the communities behind Chromium/Linux/PostgreSQL. See how much more emphasis they put on security. Have you seen any fuzzing attempts started around SystemD, for example?
You are making a comment about "The development process [being] too fast and erratic" (it is not). I'm not defending the systemd process, I'm making a point about that.
Then again, if you think "obfuscating the init process" is a valid description of what systemd does per your other comment, I'm having a hard time seeing this as anything other than armchair security expertise...
You just resorted to ad personam for the second time. This is my last post in this discussion because of that.
Here's why I believe it obfuscates the init process - it clearly is less transparent now. Consider logging, consider the new DSL (in terms of keywords, not necessarily syntax) needed to describe services, consider the lack of determinism it brings. Yes, it's faster, but definitely more opaque.
As for development process - they clearly have no time to refactor, which makes the code have way too little modularity. It's a huge blob linked against libsystemd and fuzzing the components would really be a mess. The project has much too much responsibility, handling parts of system that could easily be delegated to separate projects. It interferes with desktop environments, GNOME being an example.
> You just resorted to ad personam for the second time.
...?
I realized my first post was unnecessarily sarcastic and edited that part out. It's hard to comment about your opinions without talking about your opinions.
But okay, you've offered a list, let's go through it:
> Consider logging
Logging is in a binary format, but is more consistently accessible.
> consider a new DSL
In terms of keywords? Do you mean like, which value does what in .service files? That's all extremely well documented. Consider the corresponding init shell scripts, how are they "more transparent" if you have to know shell pretty damn well in order to understand them?
> consider lack of determinism it brings
Now I completely lost you. sysvinit brings far less determinism to the table.
If I found that paragraph in the wild without context I would assume it's about sysvinit.
Hm. I was with you until that. The reason I don't have systemd in production is that it's like pulling teeth to get a deterministic service initiation order. For all their flaws, the good thing about init.d scripts was that you knew exactly what order things came up and down in, because it was spelled out in a shell script you could view and edit.
This was great for administrators, but absolute hell for distribution maintainers; I totally get why the distros went to systemd. But since it solves their problems, rather than mine, it's not particularly useful to me (I use GNU PIES in production, which is kind of a middle ground: it can run inittab and rc.d scripts, but it also has a declarative native format)
That's because systemd by default is asynchronous, whereas init by default is linear. That's the main feature of systemd: execution order is usually not important, and enforcing it only leads to slower boot.
For when it is important, systemd can guarantee execution order by using its dependency system.
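For example, a unit that must come up after its database (names here are placeholders):

    # /etc/systemd/system/myapp.service (hypothetical)
    [Unit]
    Description=My app
    # Ordering: only start once postgresql.service has been started
    After=postgresql.service
    # Dependency: pull postgresql.service in when this unit is started
    Requires=postgresql.service

    [Service]
    ExecStart=/usr/local/bin/myapp

Requires= alone doesn't impose ordering, which is why After= is given as well.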
Are you a distro maintainer? I've never heard that it was "hell for distribution maintainers", quite the opposite. What's the issue with this?
I "maintain" a toy "distribution", that isn't a "distribution" in the sense that I think nobody has ever downloaded it.
What I meant was that the systemd opaque declarative model makes the distribution maintainer's life a lot easier; if systemd manages state based on declarative statements, you just have to make the declarative statements. Under init, you had to make imperative commands that met broad enough conditions to work for everyone.
I think you're conflating "transparent" and "deterministic".
Init is more transparent in that there are less layers to it. But systemd is more deterministic because it is able to make more guarantees about the state of the system.
Similarly, docker is also more opaque but far more deterministic than a provisioning script.
No, we're just thinking about determinism differently.
Systemd makes (or at least tries to make) guarantees about outcomes, which means it doesn't make guarantees about process. SysV (at least the traditional BSD-style a la Slackware) makes guarantees about process, which means it doesn't make guarantees about outcomes. I prefer determinism of process to determinism of outcome.
I still don't see how this is even a question. Of course it's more opaque; declarative systems are by nature (that's generally a selling point).
The fact that most distros seem to package a brittle spaghetti mess of unit files symlinked into about 7 different places isn't particularly systemd's fault, of course, but even when they're done more sanely, declarative is by nature more opaque than imperative.
Most init systems do have a legitimate use case for touching a file.
Can you name such a legitimate use case then? All I can think of is the need to write a pid file, and pid files are a kludge that real service supervisors gladly do away with.
Well, I want PID 1 to make exactly one userland binary call and then start reaping orphans. I want PID 2, the RC system, to read some configuration file(s) and, yeah, fork and exec a bunch of userland binaries. It's kind of the whole point, right?
The advantage is that the kernel panics if PID 1 ever crashes, so I want PID 1 never to crash or even be able to crash. It also means I want the binary to have as little of an attack surface as possible, and particularly I don't want it listening to dbus or having links to a QR generation library.
This is a solved problem with multiple good solutions [1] [2] [3], so I can easily avoid those issues by not using systemd.
So add a signal handler that just enters an endless loop. That's what systemd does, so as not to panic the kernel on a crash.
By the way, systemd doesn't listen to dbus (it uses the dbus protocol for IPC) and does not link to a QR generation library (journalctl does, which is your usual unprivileged program).
systemd does not have such functionality. You have not read the headlined message correctly.
Rather, it has functionality to "touch" files in sensitive places, and a bug that meant that they were made world-writable, world-executable, and set-UID. The headlined message alludes to the various uses of this touch function that expose such files to the world to be exploited in certain circumstances, which (amongst others) are:
* timestamp files for timer units
* device tags files in systemd-udev
* /run/udev/queue
* timestamp files used by timesyncd
* private devices, bind mounts, and mirrored /etc/resolv.conf created by systemd-nspawn
* "linger" flags used by systemd-logind
* temporary files used by "systemctl edit"
* All sorts of flag files: /run/systemd/journal/flushed, /run/systemd/quotacheck, /run/systemd/show-status, /run/systemd/first-boot
So not only did they not notice this was exploitable, they also seem to think that a local DoS is not enough for a CVE or a public report. Excellent.