The biggest problem I see with systemd is that the developers don't play well with others. Instead of working with the various parties involved, like the glibc maintainers, to fix deficiencies elsewhere, they expect developers everywhere to drop what they're doing and redesign how their projects work, when those projects work just fine on the many, many other Unix architectures out there. Too much of systemd is based on magical pixie dust, compatibility be damned, and not enough on actually making things better.
And anyway, correctly daemonizing is only part of what systemd got right. I think the fact that you don't need to track services with random programs anymore (since systemd knows each and every process's parent thanks to cgroups) is something much more interesting than just getting daemon() right.
That's actually not something you want. It turns out that makes process management unnecessarily hard. That said, the glibc implementation isn't terribly good either. The CW is not to use either.
> I think part of systemd's problem, as much as Poettering et al will try to deny it, is that it is full of NIH.
The most exasperating criticism I see about systemd is of its use of dbus as a communications infrastructure, because dbus is both a system bus and a desktop session bus, and everyone associates it with the latter. If they'd just done the NIH thing and rolled their own communications protocol (like various other parties), they'd be deflecting a lot of that criticism.
Honestly, NIH syndrome seems at least as prevalent amongst systemd's critics.
Have you actually read the mail you're giving as "evidence"?
The last paragraph begins with: "The current idea is that systemd will provide a bridge service, that offers the current D-Bus socket, and an unmodified libdbus (or an alternative implementation) can talk to that socket like it talks today to the dbus-daemon."
kdbus is also not a NIH-implementation, in any meaning of the word. It was "invented there", and it also fixes things: it will have much lower latency and overhead than the current userspace D-Bus.
Also, technically developers don't have to use systemd's dbus library; apparently Gnome's doing direct calls to the kdbus kernel API instead. That makes it even harder to support systems that don't have kdbus+systemd, of course, but this is Gnome we're talking about.
Umm... the mere existence of the compatibility daemon makes it pretty clear that one could easily build a system which interfaced with kdbus without changing much of your existing code at all. Your code wouldn't have to be that modular to pull it off.
Honestly, the Linux kernel has, for the longest time, kind of been filled with these ad-hoc, efficient IPC mechanisms like netlink. It has severely needed SOMETHING like kdbus, and you can see that the key pain points in Linux have already been addressed by other systems using their own proprietary or semi-proprietary mechanisms (which invariably happens if you are late to address a need people have immediately).
So personally, as a sysadmin that occasionally runs daemons from the command line, I much prefer a double fork(); I don't want the daemon to exit when I log out.
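For anyone following along, the classic double-fork dance looks roughly like this - a minimal C sketch, not glibc's actual daemon() implementation, with most error handling and logging setup omitted:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Classic SysV-style daemonization. The first fork lets the shell get
 * its prompt back; setsid() detaches from the controlling tty; the
 * second fork ensures the survivor is not a session leader and so can
 * never reacquire a controlling tty. */
int daemonize(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid > 0)
        _exit(0);                 /* original process exits */

    if (setsid() < 0)
        return -1;                /* become session leader, drop the tty */

    pid = fork();
    if (pid < 0)
        return -1;
    if (pid > 0)
        _exit(0);                 /* session leader exits too */

    umask(0);
    if (chdir("/") < 0)
        return -1;                /* don't pin whatever directory we started in */

    int fd = open("/dev/null", O_RDWR);
    if (fd < 0)
        return -1;
    dup2(fd, STDIN_FILENO);       /* detach stdio from the terminal */
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);
    return 0;                     /* only the final daemon reaches here */
}
```

Note that it's exactly those two _exit(0) calls that make supervision awkward: the process the supervisor started is gone, and the pid of the real daemon has to be communicated some other way (pidfile, etc.).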
Requiring init to spawn the final daemon process (and hence get the pid from fork()) is a red herring: any process can get a list of its children and query their proc structure (modulo permissions, but most sane people run init as root).
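A sketch of what that looks like on Linux, assuming the usual /proc layout (the ppid is field 4 of /proc/&lt;pid&gt;/stat, read after the last ')' because the comm field may itself contain spaces and parentheses):

```c
#include <assert.h>
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Return the parent pid of `pid` from /proc/<pid>/stat, or -1 on error.
 * The ppid is the 4th field, right after the parenthesized comm. */
pid_t proc_ppid(pid_t pid)
{
    char path[64], buf[512];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';
    char *p = strrchr(buf, ')');    /* comm may contain spaces/parens */
    if (!p)
        return -1;
    int ppid;
    if (sscanf(p + 1, " %*c %d", &ppid) != 1)
        return -1;
    return (pid_t)ppid;
}

/* Count the direct children of `parent` by scanning /proc. */
int count_children(pid_t parent)
{
    DIR *d = opendir("/proc");
    if (!d)
        return -1;
    int count = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0]))
            continue;
        if (proc_ppid((pid_t)atoi(e->d_name)) == parent)
            count++;
    }
    closedir(d);
    return count;
}
```

The catch - and part of systemd's argument for cgroup tracking - is that this only sees direct children: once a daemon double-forks, it is reparented to init and drops out of your subtree.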
init should be as small as possible, doing only what it is required to do; this is simple anti-fragility at work. The less code and state, the better. Feel free to stick your process manager, dependency calculators, and helper daemons in other processes. init should be able to handle something as catastrophic as a SEGFAULT by just execve()ing /sbin/init and carrying on (while logging locally and remotely, of course).
I think he was presuming you already have all the processes you need, and this is just about getting them architected right.
I hope systemd will live or die on its merits; I fear that it will take over via politicking.
Mostly I feel that it got an undue jumpstart thanks to Red Hat trying too hard to be bleeding edge, and Upstart/OpenRC not having as high-profile a backer.
Canonical's insistence on control and ownership ended up torpedoing its project. Which is really quite sad.
I wised up and moved over to the Debian distribution in this case. Fewer moving parts trying to make things work the way they thought they should.
Also, it's being rapidly iterated and there isn't a PPA for Ubuntu, which sucks.
Fewer moving parts means easier debugging.
So if I follow you, it would be easier to grok the whole system if all the code was in different places?
This is more difficult if you have a lot of abstractions and binaries lying around. You need to start reading (often nonexistent) documentation and abstract C-code...
It likely does not matter for 95% of users and I've rarely had to do something like this but you are losing some control as developer/sysadmin. For some it's important as their productivity and job depends on solving such issues fast, others will never have this problem...
I've never had a problem with systemd, though. But if you do, it's difficult to fix on your own.
What's opaque about Free and Open Source C code?
Some might argue (this guy included) that statically typed, statically analyzed C code will result in fewer people having to debug their system code than the equivalent code written in a particular variant of shell code.
Everything, if you are a sysadmin/developer trying to fix an issue. First you need debugging symbols to pinpoint the problem, then you need to read the source... it all takes time. E.g. you need to learn about dbus-monitor and dbus calls, and you need to grasp some internal concepts of systemd if something goes wrong. It takes time and patience you usually don't have or don't want to spend on such details. For comparison, FreeBSD's rc is only shell scripts: http://www.freebsd.org/cgi/man.cgi?rc(8)
I don't want to say that one is better than the other, but the latter is, for most folks, far easier to debug and modify than the former. However, as I've said, it only really matters for a few people. But I can understand that they are not particularly happy about this new complexity. And bugs happen.
Generally bad form to make claims on behalf of "most folks" since you are in fact a single person. It's totally a valid argument if you say this on your own behalf.
And yes, bugs happen. But fixing the C code is, in my experience, much easier than tracking bugs down in `bash -x`. Especially when dealing with race conditions between services/triggers/device initialization.
You are being stubborn just for the sake of winning an argument. Interpreted languages are easier to debug. They have many flaws; debuggability isn't one of them. Heck, my servers don't even have gcc/gdb. Good luck trying to debug systemd in my production environments...
Maybe it's just a different perspective: as a developer, systemd likely eases a lot of pains and makes otherwise problematic and error-prone tasks easy, but as a sysadmin that mostly deals with servers, it sometimes feels like forced, unnecessary complexity that can introduce difficult-to-debug issues.
I don't say that's the norm and I don't say this happens often, but if you build custom stuff and do "strange" things, it's easier to know what's going on if you grasp the complete system. This is more difficult with systemd.
I believe it's a valid criticism and I realize 95% of users never need to care about this. However it's still a valid point if you build complex systems that are not "off the shelf".
Did you account for the many very subtle ways you can run into what the C language defines as "undefined behavior"? I have only met a few programmers who truly understand that can of worms. Way too many don't even know that compilers exploit these parts of the spec, despite having programmed in C for many years.
Those are just the cases where it is totally legal for the compiler to output random noise - or output nothing at all - instead of what the C code says locally. These are some of the nastiest "gotchas" I've seen in any language. Even the best C programmers are occasionally bitten by this class of bug.
I still like C (a lot), but it is not easy. It's just so very annoying and time-consuming to track down a bug manifesting in "foo.c" that is actually caused by a variable in "bar.c" not getting updated waaaaaay earlier, because some bit of code in "quux.c" was skipped over due to undefined behavior. Especially when it becomes a heisenbug because that particular optimization is turned off in debug builds.
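A tiny self-contained example of the flavor of bug being described (signed overflow here, but the same logic applies to the other UB categories): the compiler is allowed to assume UB never happens, so at -O2 it can legally delete the "check" in the first function:

```c
#include <assert.h>
#include <limits.h>

/* Looks like an overflow check, but `x + 1` overflows signed int when
 * x == INT_MAX, which is undefined behavior. The compiler may assume
 * that never happens and fold this whole function to `return 0;`. */
int about_to_overflow_broken(int x)
{
    return x + 1 < x;
}

/* Well-defined version: compare against the limit before doing any
 * arithmetic, so no overflow ever occurs. */
int about_to_overflow(int x)
{
    return x == INT_MAX;
}
```

Whether the broken version "works" depends on the compiler and optimization level, which is exactly the heisenbug behavior described above.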
Bourne [Again] shell has its own share of quirks and "gotchas", but they are usually easy to investigate, and they are generally easy to avoid once you've written a couple scripts.
 There are other important classes of bug; I'm just using undefined behavior as an example because of how amazingly subtle it can be and how many serious security bugs it has caused.
 Before anybody complains that behavior involving 3 files like that is bad design, consider that A) this happens all the time in real world C, and B) I agree. Which is why many of us are against systemd, which adds complicated interactions like this on purpose as a way to force vertical integration.
These days there is really no reason you can't have the debug symbols already around. But you've got a fine point there, the Bourne shell debugger is much more convenient and easy to come by, and it makes postmortem analysis with core files trivial... ;-)
> then you need to read the source..
You need to do that with any system; systemd is no different from the rest in that respect.
> E.g. you need to learn about dbus-monitor and dbus calls
Yes... and if not you have to learn about whatever other mechanism is being used to provide encapsulation and separation of concerns between the components of the system...
> It takes time and patience you usally don't have or don't want to spend on such details.
This really boils down to, "I'm already really familiar with this other system...". It's a legit argument for why you might not use systemd. It's not a terribly legit argument for why systemd is bad.
The rc system doesn't address a fraction of the problem and actually makes a number of things worse. Heck, the rc man page you linked to links to four or more other components of the system, including the voluminous "simple because it is shell" rc.conf.
How do you debug a complete black box, where turning on debugging partly fixes the problem? You really don't. This is literally the worst debugging experience I've had in 20 years of using and maintaining Linux systems -- and that includes trying to do these things with much less knowledge and only limited internet access back in the early days.
I love the ideas behind systemd. It's too bad, even if not surprising, that the implementation is a flaming pile of garbage.
Grub always seemed like the improved features made up for the added complexity; I'm not convinced about systemd.
So I'm no longer using Grub or Debian. And my bootloader is simple. I've installed it once, and never touched it afterwards. It's possible to configure it a little, but there's no need for it. So what if it has fewer features. It only needs to load the damn kernel... and it works. I'm happy.
But it's been a while.
Opaque code is opaque. What does it have to do with Free or Open Source?
If you find C code "opaque", you're already kind of screwed in the Unix world...
Yes. Separated into documented modules that are self-contained, with well-defined behavior. When your logger breaks, you fix your logger, not your init process. When your devices are not discovered, you debug udev, not muddle through code riddled with NTP syncing and so on.
It's very "UNIX" to implement things as communicating processes rather than RPC or procedure-call-within-monolith.
However, in my mind it has made several awesome things possible. My boot time got dramatically shorter when I adopted it, thanks to parallelization. Besides, daemons now have simple and robust service definitions. SysV had become a mess!
Lastly, lightweight containers are the real-deal for small development tasks (not for production!). Just one command: systemd-nspawn, and you're ready to go. Docker is currently a bit more complicated to set up.
Arguably, many features, including containers, should be moved out of systemd. Right now, more than a monolithic architecture, I think systemd is rather shipping too many things under the same project umbrella.
Writing daemon startup files was something I always dreaded, and never really did well.
Before systemd, if I needed to run services I'd try to use daemontools (for auto-restart, and logging), but then I had two service-starting services running my system. Upstart had some of the features, but was still finicky (and the versions I had available didn't consistently have good service supervision support).
systemd just fixes that.
Also, with systemd, for the first time I feel like I'm really using Linux, not just a random *Nix that has adequate drivers.
I'm saying that systemd makes the Linux kernel's feature set and capabilities visibly usable from user-space. For (nearly) the first time, it feels like it matters that I'm using Linux.
Linux is still Linux without systemd, it just doesn't provide as much benefit (aside from device support and compatibility) over, say, FreeBSD without software that takes advantage of its feature set.
I was using some of them, such as kvm for my virtualization and lvm for disk management. But systemd still had a substantial 'oh, wow, Linux lets process management be this easy and powerful?' factor, showing me something new that I hadn't seen in my use of any other system (FreeBSD, OpenBSD, Windows, a touch of Mac).
I suspect it's not well known because there was no commercial marketing drive behind them. Nowadays at least one of these seems to be needed to even gain visibility.
People don't go searching for what's cool/good where it is. They wait for HN or some other news website to tell them.
Just like the regular news really. Turns out it doesn't work all that great.
Perhaps I'm ignorant and this was always possible.
In a server environment, the time it takes for the various DRAC/BIOSes to initialize and reach the bootloader is far longer (sometimes several minutes) than sysvinit's boot time. So optimizing boot time in the Linux part on a server is probably moot for me.
On the laptop you can suspend/hibernate, as others have said, if you care about startup time. I have full-disk encryption and need to type in a password to boot, so a few seconds more or less doesn't matter anyway.
So that leaves the desktop, where I might care about boot times (the UEFI/BIOS is actually saner than in servers and reaches the bootloader very fast). It turns out that my desktop boots faster than my router with sysvinit already, so having faster boot times on my desktop would get me nowhere, I still wouldn't be able to use the internet until the router has booted.
So, faster boot times... I didn't need all the pain systemd is causing just for that. Debian has had makefile/startpar based concurrent boot for a while already; I don't think systemd would improve on that much...
Meanwhile not using systemd breaks things that used to work on a KDE desktop (USB mounting, VPN config, etc.), so having an app support systemd is a net-negative for me.
The fact that BIOS/DRAC/RAID initialisation is slow on some servers is irrelevant. Linux's init and the firmware initialisation don't run concurrently, so if init takes longer, the whole boot takes longer. Additionally, many server manufacturers have improved boot times in the last few years (down from 10+ minutes, to 5+ minutes, to less).
Most routers don't take as long to boot as you claim. The entire OS is about 8 MB (uncompressed), RAM is only 32 MB, and the medium the OS is stored on is faster than a computer's hard drive. So just looking at IO should tell you your supposition is flawed. In my experience, most Linux-based routers bring up the RJ-45 interface (LAN side) in under 20 seconds unless it is allocating slowly on the WAN interface (e.g. unable to get an IP, etc.). If you set a static WAN IP/gateway/etc., boot time comes down substantially.
Additionally the whole concept that every time your PC turns off you'll also turn off your router at the mains is, uhh, strange. Sure there are power cuts but that isn't the only time you shutdown your PC throughout the year.
The concept that your PC needs to wait for the server is equally flawed. Again, yes, power cuts. But PCs get shut down significantly more often than servers and if we're playing that game then wouldn't a "server" have a UPS anyway?
So overall your argument for why boot times don't matter lacks any kind of substance. It is also purely based on a PC->Server->Router infrastructure where nothing is on a UPS and everything suffers from a power cut (then "races" to all come back up).
In the real world my phone has Linux, our "Tivo" has Linux, our printer has Linux, our car's entertainment system has Linux, etc. So bad Linux boot times will be noticed day to day. It matters to a lot of people and while I don't know if systemd is the solution, I do know that progress is needed relative to the classic UNIX init system (per the article).
Boot time on desktop with sysvinit: 8-9s, boot time on desktop with systemd: ~6s.
Boot time of router (until network is up and usable, maybe made longer by having to setup WiFi): 1m+.
PC waiting for router is my usual use-case when I power off everything and then power them back on at another time. I haven't said anything about PC waiting for server, and I agree it wouldn't make sense.
I don't have a server with systemd to check, but assuming similar improvement, 4s out of 5m+ you mention is barely 1%.
You might be interested in firejail. It makes finer-grained use of Linux namespaces, and doesn't depend on systemd (or much of anything, for that matter).
I've come across this sentiment a few times in the last month, but I haven't yet heard an explanation other than "VMs are battle-tested and containers might leak data to each other". Is there something more that I'm missing? Why aren't containers a good idea to use in production?
Secondly, assuming a bug in the kernel, one might assume root in a container can lead to root on the host. BSD jails have been pretty solid for the last few years AFAIK - but hardware support for virtualization might still offer more of both separation/safety and speed. There have been some bad bugs in (as I recall) the I/O system in Xen, leading to similar issues... but again, the last time I saw anything on that was years ago.
YMMV - generally, Docker doesn't have "run untrusted code, safely, as root" as a design goal (yet, AFAIK) (not entirely sure about LXC, née VServer -- the underlying technology) -- so don't expect it to do that. Isolation and security (esp. without sacrificing performance) are very hard to get right. Or so a long series of privilege escalation exploits across many different OSes seems to indicate.
Just to be a little pedantic, LXC was definitely inspired by the pre-existing VServer and OpenVZ. But it's a different implementation.
A lot of things that are viewed as innovations from Docker really already did exist in 2006~2007. Maybe a bit cruder but not that much. OpenVZ was very close to that. AUFS is the only real innovation as far as I know.
Anyway, the Docker guys were smart enough to ride the cloud wave and hype the thing. I'm pretty sure Parallels missed the boat because they went the open-core way (OpenVZ/Virtuozzo).
See Dan Walsh's articles here: http://www.projectatomic.io/blog/2014/09/yet-another-reason-...
systemd = init system + a whole lot of other things.
When people complain about systemd, they usually do not complain about what it does or how it does it in the init system part. That part is pretty solid as far as functionality is concerned.
When people complain about systemd, they usually complain about the "whole lot of other things" part. Lots of people have different complaints, and my biggest one is about udev.
udev is a core component in any modern Linux system. I see systemd absorbing it as nothing but a political move and a power grab. They could have left udev as an independent project and just created a dependency on it.
The "whole lot of other things" part will, by definition, make any other project that is just an init system seem very much deficient in functionality when compared to systemd.
It's not as if they were stupid and made a bad decision because they didn't know any better. They made these bad decisions consciously, for the sake of those power and political grabs.
That's not how people want Linux distros to be. They want technological innovation to be the driver.
And being in the same camp: fuck you, systemd, for successfully adding to that model where political profit is more important than technological innovation.
Let's keep some perspective here. We are literally just talking about an init system. There are many others you can use. systemd is not taking away your freedom in any meaningful sense of the word "freedom". Debates should be about the technical merits of systemd, not baseless accusations that its developers are making a power grab.
This is absolutely incorrect. We're to the point where certain software packages (GNOME comes to mind) are requiring hard dependencies on it. I was just today reading about some incompatibility that arises if your kernel is set up with no IPv6 support which is explicitly caused by systemd. (To which the response from the systemd folks was something along the lines of "You shouldn't be turning it off anyways")
(Great, now our software takes philosophical positions...)
Sure, you're "free" to use something else, in the same way that you're "free" to patch and recompile every program that touches it to stop touching it. So "free" in the FOSS sense that nobody but developers care about.
Meanwhile, in the real world, populated by end users and sysadmins, the most important people when it comes to a computing environment, the ones that all of this crap is being done for at the end of the day... not so much.
I'm annoyed that systemd is taking over for political reasons and not purely on its technical merits, and that there is no way this is not going to lead to a monoculture. There will be others, but they will be relegated to the position of marginalized, niche players that nobody outside of /g/ troll threads cares about.
I'm annoyed that the rest of the world is going to have to adapt to this software, rather than the other way around.
I'm annoyed that this software is doing 5000 things where one would do.
Wow... we've come a long way baby! Now freely available software that you can modify as needed without interference is taking away your freedom! ;-)
> I was just today reading about some incompatibility that arises if your kernel is set up with no IPv6 support which is explicitly caused by systemd.
Actually, the problem is that if you load IPv6 support after a socket was created, there's no efficient way to make that existing socket compatible with IPv6, which of course creates a nasty little integration problem. That wasn't a choice of the systemd folks, that was a choice of how the kernel folks organized their network subsystem & modules.
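To illustrate the constraint: a socket's address family is fixed at socket() time, so the only robust pattern is to pick the family up front and fall back, roughly like this (a sketch with error handling trimmed; the function name is mine):

```c
#include <assert.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try a dual-stack IPv6 socket first; if the kernel has no IPv6
 * support at all, socket(AF_INET6, ...) fails and we fall back to
 * plain IPv4. There is no way to "upgrade" the v4 socket later. */
int open_listen_socket(void)
{
    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd >= 0) {
        int off = 0;
        /* accept IPv4-mapped connections on the same socket */
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);
        return fd;
    }
    return socket(AF_INET, SOCK_STREAM, 0);
}
```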
Systemd runs fine on my system with no IPv6 support.
> Sure, you're "free" to use something else, in the same way that you're "free" to patch and recompile every program that touches it to stop touching it. So "free" in the FOSS sense that nobody but developers care about.
For all the complaining about NIH syndrome and absorbing other projects for political purposes, systemd actually builds on top of a lot of very well established components (dbus, udev, etc.). To the extent that software gets tightly coupled with it, it wouldn't be hard to change it so that it used those standard components without systemd... unless systemd is actually providing some unique advantages for that software that you can't live without. In which case... go out and do a better job of it!
> I'm annoyed that this software is doing 5000 things where one would do.
I know, Unix is annoying that way. ;-)
I think you'll find that it isn't nearly that difficult a problem to address. For all kinds of reasons, just like with every other system design that has come before it, there will need to be accommodations made to work with other stuff. It's not like the old code just disappears overnight. You can't succeed as a new platform without a way of working with the old (and again, you've mostly been sold a bill of goods... most of systemd's architecture is the old system).
> That flexibility is a large part of why a lot of us came to Linux, and regardless of whether it's technically "freedom", losing it feels like losing freedom.
If it feels like losing freedom to you, you don't know what that is about. You're losing someone writing code the way you wanted them to. That's not losing freedom. That's getting it.
Wait, what's that? Editing compiler-generated assembly code is too hard for you? Well, that's clearly your problem, since you can't expect Microsoft to bend over backwards and write code for you the way you wanted it!
> Wait, what's that? Editing compiler-generated assembly code is too hard for you? Well, that's clearly your problem, since you can't expect Microsoft to bend over backwards and write code for you the way you wanted it!
Umm, no. That's Microsoft bending over backwards NOT to give you the source code, as defined very clearly by the FSF and the OSD.
More importantly, in the case of Microsoft's proprietary software (they do actually have some open source stuff which isn't encumbered like this), it's literally a violation of their license agreement for you to edit that code yourself.
Am I through the looking glass or am I being trolled?
Free software means you can't restrict anyone from going ahead and making whatever changes they might want to to software, distributing it to the world, and potentially garnering mindshare in the process.
If you think freedom is the ability to run only the code you want, the only way you're ever going to get freedom is by writing everything from scratch. Software will never be exactly the way you want it to be unless you write it yourself.
Most software depends on other software. That's just how things work. GNOME is a particularly bad offender there - it installed Apache for some reason the last time I used it. But GNOME also does a lot of things I don't really need it to do. Maybe someone else uses those features, and that's OK. You can switch DEs if you really don't like systemd that much. But if you don't, that's not denying you your freedom - you made a decision about the benefits and drawbacks of a product, and decided to use that product.
Nobody really explains how systemd took over purely for political reasons; there are just random posts on forums that make claims and link to some dude's podcast. There's really no reason to believe that the systemd people are malicious.
I think systemd is the wrong choice for debian/ubuntu. But there's no reason to badmouth people about it and say hurtful things. Just use something else.
You're kidding, right?
The systemd people and Lennart in particular are very open about their contempt for anything that isn't Linux + systemd and their intent to shove whatever they want down everybody's throats regardless of bugs or breakage, and blame everyone but themselves for what their shit breaks.
This can't be dressed up as anything but malicious.
Systemd does more than just init, it has the features to replace everything from network manager to fstab. It is not just an init system it is an invasion of Linux userspace.
Don't take my word for it, just read the developers' blog:
Systemd has made it impossible for me to run an up-to-date Gnome on FreeBSD. That feels a lot like taking away my freedom.
The systemd project has no control over what Gnome relies on. They independently rely on systemd because it provides functionality that makes their lives easier, and it's their right to do that. If you want "up-to-date Gnome" to work on FreeBSD, go and write some code to help make it happen.
I could write patches that reimplement the functionality that Gnome gets from systemd using lower-level functionality, in a cross-platform way. But those patches would be rejected; the Gnome project has decided to use systemd and would not want to duplicate its code. If I were to maintain my own gnome fork I would have to convince distributions to adopt it.
I could write patches that add FreeBSD support to systemd. But those patches would be rejected - again, as a policy decision, systemd doesn't want to support FreeBSD. Thankfully in this case there is a fork, uselessd, but again, we need to convince distributions to adopt it or it's meaningless.
The claim that Gnome "independently" relies on systemd is specious; there was a lot of lobbying and politics from the systemd side. My only practical option is to counter at the same level, and lobby Gnome (and linux distributions) to make the political decision to move away from systemd.
So, Red Hat project GNOME relies on Red Hat project systemd, and Red Hat employees won't allow systemd to be portable to competing platforms. Convenient.
You can of course make the case that the interfaces provided by systemd are substandard, but so long as you don't have an alternative to offer, it's just talk.
As far as I know, Gnome does currently have fallbacks (with reduced functionality) on non-systemd systems, so it's not the case that they just ignore things. However, I perfectly understand why they would not bother with duplicating code just to support systems that aren't good enough.
I'm apparently not able to respond to the reply below me for whatever reason, but... It's been said many times that making systemd portable makes no sense, which is why it provides interfaces. And regarding the interfaces, which part of them, exactly, are not stable? There's a very reasonable interface stability promise, which to my knowledge has held, so far.
> You can of course make the case that the interfaces provided by systemd are substandard, but so long as you don't have an alternative to offer, it's just talk.
The problem isn't that the interfaces are particularly bad, it's that they're not standardized. If systemd would offer standardized interfaces that let me offer a compatible alternative that would be fine. But they don't. Trying to remain compatible with software that will make no effort to provide compatibility from its side is a mug's game.
"That is simply not true. Porting systemd to other kernel is not feasible. We just use too many Linux-specific interfaces. For a few one might find replacements on other kernels, some features one might want to turn off, but for most this is nor really possible. Here's a small, very incomprehensive list: cgroups, fanotify, umount2(), /proc/self/mountinfo (including notification), /dev/swaps (same), udev, netlink, the structure of /sys, /proc/$PID/comm, /proc/$PID/cmdline, /proc/$PID/loginuid, /proc/$PID/stat, /proc/$PID/session, /proc/$PID/exe, /proc/$PID/fd, tmpfs, devtmpfs, capabilities, namespaces of all kinds, various prctl()s, numerous ioctls, the mount() system call and its semantics, selinux, audit, inotify, statfs, O_DIRECTORY, O_NOATIME, /proc/$PID/root, waitid(), SCM_CREDENTIALS, SCM_RIGHTS, mkostemp(), /dev/input, ..."
It's not just cgroups.
"uselessd is planned to work as a primitive stage 2 init (process manager) on FreeBSD. Stage 1 is inherently unportable and requires a total overhaul in regards to low-level system logic (with systemd assuming lots of mount points and virtual file systems that aren’t present, being designed with an initramfs in mind, and many other things). Stage 3 can always be achieved by having a sloppy shim around the standard tools like shutdown, halt, poweroff, etc.
So far, uselessd compiles on BSD libc with a kiloton of warnings, with lots of gaps and comments in the code, and macros/substitutions in other places. All in all, it is an eldritch abomination. A slightly patched version of Canonical’s systemd-shim is provided and works well enough to emulate the org.freedesktop.systemd1 D-Bus interface. Some of the binaries provide diagnostic information, but at present we are trying to find ways to bring up the Manager interface in the whole buggy affair, in order for systemctl to send method calls. Nonetheless, you are absolutely welcome and encouraged to play around with it in its present state, and we hope to get somewhere eventually."
Considering GNOME is part of the new school design philosophy in general and largely developed by Red Hat employees, it's inevitable that it would have happened anyway, but the systemd developers were directly complicit in speeding it up.
> systemd is Linux-only. That means if we still care for those non-Linux
> platforms replacements have to be written. In case of the timezone/time/locale/hostname mechanisms this should be relatively easy
> as we kept the D-Bus interface very much independent from systemd, and
> they are easy to reimplement. Also, just leaving out support for this on those archs should be workable too. The hostname interface is documented
> in a lot of detail here: http://www.freedesktop.org/wiki/Software/systemd/hostnamed -- we plan to
> offer similar documentation for the other mechanisms.
I'm really not seeing any foul play anywhere; RH tried Upstart (they even used it in RHEL6), found it lacking, and out of that came systemd.
The thing is, the systemd project is far more ambitious (which is good) and not content with just providing an init system. I personally don't see anything wrong with that (a well-integrated core userland for all Linux distros? Yes please), but you obviously do.
I think your project is ultimately not going to gain much traction, because it's simply ignoring most of the goals of the systemd project. It might have side-effects on how systemd develops, though, but I can't really say.
It seems to me that Lennart's personal goal is to make the perfect OS as he visualizes it. He's doing work to make it happen, and he's gaining support because the code is useful to other people. If people outside of Linux circles want to get involved in standardizing core DBUS interfaces (which they should, because pretty much everyone seems to use DBUS) and things like daemon startup notification, they should get involved with the systemd project and discuss the interfaces, not just tell people not to use them... That ship has already sailed. Systemd is rapidly becoming the de facto standard, and that progress is not suddenly going to stop because minorities complain too loudly. :)
It sounds to me like Gnome is the reason why you can't run Gnome without systemd, and that you should probably direct your complaints against them.
You can be stuck in a maze, in a pit or under a tree trunk. You're legally free, since no law or copyright license says you may not get out. Yet you're stuck, and if you can't get out, you're not free at all.
When it comes to software, it is the size, complexity and complicated interdependencies that make the maze. As the system grows, an individual's practical freedom erodes. For instance, I complain about the web a lot. Even with a browser's source code and a permissive free license, there's close to nothing I can change about it in practice. It would be far too much work for me to maintain millions of lines of code and remain compatible and interoperable with a huge, fast-changing stack of technology... and the more you diverge, the harder it becomes. It's an uphill battle, and at some point you have to give up if you're not the giant. So the four software freedoms are reduced to two (or less). In practice I don't have the freedom to do what I want.
I think the FSF's stance of reducing freedom to a merely ethical issue is alarming. How would Dr. Stallman have felt if he had gotten his printer driver with a free license but so much code and complexity and dependency that it would've been impossible for him and a small team of hackers to actually port it and make it run on his system?
Of course, systemd alone is not approaching that level of complexity. Except that it's not only systemd.. the trend seems to be that all aspects of a modern OS are getting larger and more complicated. A little here, a little there, it all adds up and becomes a lot, everywhere. It's a sad trend.
There was a time when you could've picked up a book that pretty much describes all of your system's hardware at a low enough level so that you could start writing your own bootloader and OS from scratch, with the knowledge that you can interface with all the logical hardware devices. And it didn't take hundreds of thousands of lines of code. Now the amount of accumulated cruft we depend on is so large that the idea of writing your own not-a-toy OS is laughable...
Standing on the shoulders of a giant is necessary and helpful, but when you have to do too much of it, it stifles innovation and encourages monocultures.
I don't have anything against systemd per se; it doesn't represent my ideals, doesn't bring me features I want, and so I don't want to use it. If Ubuntu and Debian want to use it, they're of course free to do so. However, I am concerned that with the notion that "systemd has won", its proponents are going to assume it to be everywhere and build future software with the attitude that it is okay to depend on it -- who cares about the people who would prefer not to use it, let them suffer for disagreeing with the king!
Before the systemd rage, sysvinit might have been "the winner" on Linux in the sense that it was most widely used and supported. But we didn't have this sort of polarizing "sysvinit has won, fuck everyone and everything else" notion. Other distros with other init systems have happily coexisted all along, and these other distros haven't had to constantly fight a growing dependency on one specific init system.
While I am not an expert on legal language and how it can be used by lawyers, I believe RMS and the FSF at least attempt to address this in the GPL Version 2 (there is probably a similar requirement in GPL Version 3, but it is more complicated so I'm not sure which requirement corresponds to this quote from v2):
"The source code for a work means the preferred form of the work for making modifications to it."
Code that is so large and complex that it is not practical to understand or port wasn't written by a human. Such code is probably a template or macro expansion of the real source, and the "preferred form" would be the pre-expansion source.
This may not cover all ways of obfuscating the code, but "preferred form" is trying to be as inclusive as possible.
Or it was written by thousands of people over twenty years.
GPL doesn't protect you from accumulated cruft, complexity and snarly design that makes it hard to understand let alone modify a system.
Some hypervisor-based systems are moving in the opposite direction, with unikernels that reduce or eliminate the OS to run directly on virtual hardware: Cloudius OSv, HalVM, OpenMirage, Erlang on Xen.
I understand your good motivations, but just saying "it will be ok" doesn't make the problem go away. Yes, I can write my own init system; that's good. But can I uninstall systemd from most systems that install and depend on it by default, short of writing my own distro? You can't just easily plug and play it.
It is a bit like the kernel. Just swapping out the Linux kernel for a FreeBSD one doesn't quite work.
> system not baseless accusations that they are making a power grab.
With that I agree. Maybe the way to get the conversation back on track is to present a few valid technical points in response. Or just ignore it. Saying "hold up people, no fighting please" doesn't work as well in such forums.
Well, actually that's the heart of the problem right there. Systemd mangles and complects things. You can replace the kernel (see Debian/kfreebsd and to a lesser extent Illumos). Or you can make a "distro" like cygwin/mingw et al for windows and homebrew for os x. Because there are some more or less well defined interfaces between userland and kernel space. Not just "shit that systemd does that makes sense on recent linux kernels" (afaik there's no plans for supporting something like linux 2.4 on small embedded systems for example).
I think this is a bit of a stretch. It's not like they did a hostile take over of udev. The maintainers also thought systemd was the right place for that code to live.
As for bloat, there certainly have been some new features, but so much of the systemd code (from what I can tell) was existing code that now lives in one place. That means it's free to consolidate the utility code it uses (every project has helpers for what (g)libc does not provide). In the grand scheme of things, less duplicated code is a good thing.
http://lists.freedesktop.org/archives/systemd-devel/2014-May... (via http://redd.it/2a2tz5):
> Also note that at that point we intend to move udev onto kdbus as
transport, and get rid of the userspace-to-userspace netlink-based
transport udev used so far. Unless the systemd-haters prepare another
kdbus userspace until then this will effectively also mean that we will
not support non-systemd systems with udev anymore starting at that
point. Gentoo folks, this is your wakeup call.
What we ultimately lose with systemd is modularity. If we cannot upgrade systemd without also upgrading the kernel, then systemd might as well be considered part of the kernel.
I think there was already at least one visible problem with systemd stepping on kernel developers' toes (so to speak) by re-using one of the debug flags.
Heck, the kernel is monolithic too. But thinking about it, I trust the kernel developers a bit more than the systemd guys. Maybe it is just a new project and it will stabilize at some point in the future. Right now they are kind of shooting from the hip (adding NTP, udev, network socket pools, logging, ...). That tells me "hello lockups and freezes" and being back in the mid-90s on Windows, restarting every day.
Gentoo's eudev is more relevant now than it ever has been before.
So you're complaining about politics, while your only substantive criticism is a purely political one?
I prefer the BSD-style philosophy: a nice, simple rc.conf. I used to run Arch till it got infected with this garbage too; it slowly progressed away from its BSD-style roots. So recently I just gave up and moved to FreeBSD. Not a single regret so far.
My only issue with journald's binary log files is that they're in a home-brewed custom format that's not accessible by any standard means. Plain text files aren't directly readable by humans either, but we have cat, less and similar tools to pass such data to a terminal (sometimes an iconv is required, say, if log entries contain filenames that have characters outside the ASCII range), and those tools are available on every modern OS out there.
Personally, I think, a compromise that'd satisfy me (YMMV) would be either an industry-standard log format (like, maybe, sqlite - it's fairly simple, universal and omnipresent nowadays) or, even better, storing data in text files, but having accompanying binary index and metadata ones that store non-human-readable stuff (like hash chain - bet, no sane human would ever check cryptographic log integrity by hand) and provide additional information for faster machine access.
But why the heck journald is a tightly-coupled part of systemd, instead of being a separate project, is beyond me. I can't deny that systemd has some good things about it too, but it's far too monolithic and unhackable compared to mostly-scripted init systems. And such negative points easily outweigh the positive ones.
The point is, journald also introduces a new protocol oriented at logging structured data. This way it not only provides a feature, but forces developers to think about structuring their log output in a machine-readable manner. I think that's journald's raison d'être, and it's an excuse I can personally accept.
Just my opinion, though.
However, my philosophical problem is that there's no escape from it. I'd be perfectly happy with it existing if there was a way to turn it off, and let me use my own syslogd program in peace. Instead, I have yet another binary on my system that's running, with all of the problems that can bring, wasting cycles while I hand off log data for actual processing.
I've seen more than my fair share of journald corrupting its own log due to unclean shutdowns. If I'm going to be grepping the journald log file anyway to reconstruct it (possible, but not easy, since journalctl is useless here), then why bother using it at all? It fails at the very task it was built for.
Exactly! What the binary log advocates seem to be missing is that those unclean shutdowns (often called "crashes") are probably the very thing you are going to want to search for in the log. In general, few people care how (or if) the log stores the fact that the cron daemon yet again ran the hourly maintenance without any errors. What everybody who has had to search a system log cares about is "what happened right before the crash".
The very data that needs to be committed to the log successfully and immediately, almost by definition, arrives at a time when you do not have time for the complexity of an atomic addition to a database. Often there is barely time for any disk write at all.
The only way to make such a log useful would be to make adding events synchronous. As nobody wants to deal with a syslog that is 10,000x slower (or worse), the only sane option is what we always did: make the writes simple and immediate, and defer any fancier features.
Have they never heard of "log parsers" before? If you want it in a searchable DB (which can be very useful), you do that from the original log, either as an async daemon or deferred with cron or similar.
Classic plain text log files are structured too - they're files of '\n'-separated records, without much else to it. It doesn't really matter (integrity-wise) whenever one's writing, say, JSON or mostly-unstructured plain English records.
I'm unaware of the particular quirks of journald's internal implementation. Maybe it's badly coded and has lots of bugs that corrupt data. That would be an implementation issue. But the overall idea of using "binary" logs isn't that bad to me.
I'm just glad there are sane options available still.
I agree. There is a strong need for common interface(s), and that's a strong part of the motivation behind Fluentd/Kafka/etc. www.fluentd.org/blog/unified-logging-layer
And a journal that has another journal inside it would be somewhat silly. A simple write log can be done better.
I don't think fsync on every log commit is a good idea. This would be more a DOS attack.
That said I'm somewhat troubled by the cavalier attitude to log data safety in journald too.
I'm not against having a wrapper that magically slurps stderr/stdout to timestamped logs -- but if that can't be written cleanly with the apis we have, then surely what we need is to make the minimal improvement in our (probably kernel) api to make writing such a program a trivial exercise? Nothing I've seen of systemd has me convinced the project cares one whit about finding the simplest, least coupled solution to any problem.
This seems to come up frequently. I'm curious: how alternatively would a local process communicate with the Init daemon?
Why is the init daemon what a local process should communicate with?
A daemon that manages other daemons doesn't need to be PID 1, even to reap zombies.
Secondly, daemons shouldn't care what process is managing them, a principled approach to communication between the daemon manager and a daemon would probably include handing off sockets/ports/fds, but probably not much else.
And get this, prctl(PR_SET_CHILD_SUBREAPER) has existed since May 2012, the original patch was created and submitted by Poettering, and yet we're still told that service management needs to run as pid1 in order to see all double-forked detached daemonized processes.
Because there could, and I know this is a crazy thought, be some benefit in having more meaningful information flow between the master of processes and the processes it manages?
The answer to that is easy: It's still signals, like it has been for decades, and systemd doesn't change that any more than anything else does. systemd defines some more signals, but that's about it.
Notice that you said init daemon. The errors here are in thinking (a) that dbus is the communications system, even in systemd, for the part of the system that does overall system state management and the stuff that the kernel requires of process #1; and (b) that the part of the system that supervises daemons is, in all packages, run in process #1, the "init daemon".
systemd, the package, uses dbus in a number of places between a number of components, most notably logind, hostnamed, timedated, machined, and localed. But don't get that confused with systemd the program that runs as process #1, which is as constrained by the kernel (and others) into using the same signals as always, just as any other system manager is. There's an AF_LOCAL socket (/run/systemd/private) for systemctl to talk to PID #1 using a private undocumented protocol, and a public documented D-Bus API for units, but those are for the service management part of systemd the program.
The system manager and service manager in the nosh package are in the same boat. The system manager's API is that same set of signals again (augmented with some of the extra systemd-defined signals that fit the model). There's no other API because the system manager is not what the world talks to about service management. The control/status API for individual services is the filesystem, the service manager (that is not in PID 1) presenting the same suite of pipes and files as daemontools-encore. And there's an AF_LOCAL socket for system-control (and indeed anything else, such as service-dt-scanner a.k.a. svscan) to talk to the service manager for loading and unloading service bundles in the first place, and plumbing them together.
$ wc -l * | sort -n | tail
Less verbose does _not_ mean simpler! In many cases it means quite the opposite.
UNIX to me is about simplicity. We don't need crap like binary logs and heavy RPC mechanisms to be polluting beautifully simple and minimal systems.
As several others have noted, the code duplication issue is solved in FreeBSD's init(8) with rc.subr.
You might well complain that many of these init scripts are substantially longer than the equivalent systemd unit files. Your complaint would be valid. Thing is, many of these init scripts do so much more than the equivalent systemd unit files. For instance, the postgres and mysql unit files that I've seen permit no user configuration (such as altering the daemon listen port, config file location, and the like). They also don't do any sort of housekeeping, such as verifying the validity of the service's configuration file, checking and repairing mode and ownership of the same, or verifying the existence of the service's data directory.
I understand that OpenRC wasn't being considered for Debian Jessie, but it does a lot of things right, and is (IMO) head-and-shoulders above SysV init. (But then, isn't even bringing up SysV init kind of beating on a dead horse? We all agree that it really needs improvement.)
If you're interested, Gentoo's apache2 init script is here: http://pastebin.ca/2845519 . For reference, the apache2 systemd service file is here: http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/www-... .
For a look at a simpler init script, check out the script for dnsmasq: http://pastebin.ca/2845520 . Looks pretty simple, no? At least as simple as the systemd service files we've been seeing, yes? The config file for dnsmasq is a single line: DNSMASQ_OPTS="--user=dnsmasq --group=dnsmasq" .
OpenRC provides complexity when you need it, and gets out of your way when you don't.
 I don't have systemd installed, so I might be missing the user configuration facilities. If they exist, and the configuration files have any appreciable complexity, then their line count must be counted against the unit file's line length.
~ # for file in `equery files openrc`; do [ -f $file ] && echo $file; done | xargs wc | tail -1
23729 196573 7432232 total
Oh, and that's with lzma compressed man pages.
I also suspect you're running RH. Under Debian script lengths are typically quite short:
That outlier, by the way, is xprint, part of CUPS. Never had to touch it myself.
Quite a few of those lines are comments, and the basic structure is a set of start / stop / restart blocks.
Simplicity really does buy me something.
$ wc -l /usr/src/sbin/init/init.c
$ wc -l /etc/rc
$ wc -l /etc/rc.d/*
Otherwise should we also count the C compiler, libc, all the CLI utilities and the kernel?
But here you go. Still smaller than systemd. Sysvinit needs a shell too...
$ wc -l /usr/src/bin/ksh/*.[ch]
I could go on.
> The don't solve race conditions in peers trying to locate each other (surprisingly difficult).
Not even sure what you mean here. Are you talking about peer discovery? Because DBus won't help you there either--peers have to be aware of each other's DBus object paths before they can rendezvous. Similarly, two peers need to know where the common pipe is to rendezvous.
> They don't solve a standardized marshaling format.
Nor should they. There are a ton of ways to skin this cat in userspace, depending on what your application needs. Protobufs come to mind, for example, but there are others.
Why do you want the pipe to enforce a particular marshaling format? Does the pipe know what's best for every single application that will ever use it?
> They don't come with an implementation to integrate with main loops for event polling.
It's not the kernel's responsibility to implement the application's main loop. That's what libevent and friends are for today, if you need them.
> They have an inherent vulnerability in FD passing where you can cause the peer to lock up.
Last I checked, you pass file descriptors via UNIX sockets, not pipes.
> They don't handle authentication (well, sort of).
Depends on your application's threat model. The kernel provides some basic primitives that can be used to address common security-related problems (capabilities, permission bits, users, groups, and ACLs). If they're not enough, you're free to perform whatever authentication you need in userspace to secure your application against your threat model's adversaries.
It is unreasonable to expect the pipe to be aware of every single threat model an applications expects, especially since they change over time.
> You can get into deadlock situations in your messaging code if you aren't really careful about message sizes and when you order poll in/poll out.
It's not the pipe's fault if you don't use it correctly.
> They aren't introspect-able to see what the peer supports.
Peer A could use the pipe to ask peer B what it can do for peer A. Why do you want the pipe to do peer B's job?
> They make it super easy to not maintain ABI.
Nor does DBus. Nothing stops an application from willy-nilly changing the data it serves back.
I suspect he was referring to socket activation and how that simplifies these kinds of messes.
> Nor should they. There are a ton of ways to skin this cat in userspace, depending on what your application needs. Protobufs come to mind, for example, but there are others.
Right... so that's exactly what systemd did. It used Dbus, which provides that standard serialization format. Not my favourite format, but very well established and tested and focused on systemd's problem domain.
The point is, in order to have loose coupling between components, something like unix pipes is just a starting point.
> Does the pipe know what's best for every single application that will ever use it?
Ah, now I understand the problem with systemd. I never realized it was trying to take over every application's communications protocol! ;-)
Seriously, I think it is perfectly reasonable (and necessary) to define a standard protocol for system event notifications... I say this because it has already been done... by the standard components like udev & dbus that systemd is building on top of...
> Last I checked, you pass file descriptors via UNIX sockets, not pipes.
Correct. People have a tendency to mess up their semantics though. If the original poster wasn't referring to unix domain sockets, then it is an even sillier question.
> It is unreasonable to expect the pipe to be aware of every single threat model an applications expects, especially since they change over time.
Yes, but you do need something more sophisticated than a pipe to manage secure communications between your systems components.
> Nor does DBus. Nothing stops an application from willy-nilly changing the data it serves back.
? D-Bus will drop you like a hot potato the moment you fire off invalid messages. You could send valid messages with fraudulent/misleading data payloads I guess, but at least a whole host of problems are addressed by tightening that up.
It's also an ending point. If each application gets to define its own IPC primitives, then there are as many app-to-app communication protocols as there are pairs of apps. This does not make for a loosely-coupled ecosystem.
> Seriously, I think it is perfectly reasonable (and necessary) to define a standard protocol for system even notifications... I say this because it has already been done... by the standard compoents like udev & dbus that systemd is building on top of...
This is circular reasoning. You're saying "we should use systemd's notification protocol, because systemd uses it." This says nothing about the technical merits of its protocol.
> Yes, but you do need something more sophisticated than a pipe to manage secure communications between your systems components.
Um, data within a pipe is visible to only the endpoint processes, the root user (via procfs and /dev/kmem), and the kernel. If you don't trust an endpoint, you should stop communicating with it. If you don't trust the root user or the kernel, you can't really do anything securely at all in the first place. My point is, data within a pipe is about as secure as it's going to get.
> You could send valid messages with fraudulent/misleading data payloads I guess, but at least a whole host of problems are addressed by tightening that up.
I think you will find that the bulk of IPC problems will come from dealing with data you didn't expect--that is, processes sending "fraudulent/misleading" data and other processes acting on it. DBus won't help you there--you always always ALWAYS have to validate data you receive from untrusted parties, no matter what transport or wire format you're using.
That's exactly why you need a more systemic approach to the IPC mechanism...
You can pretend that "oh this is just a stream so there isn't tight coupling", but the information that is communicated is the same. If you haven't imposed some structure and consistency to it, that's exactly how you end up with a ball of mud.
> This is circular reasoning. You're saying "we should use systemd's notification protocol, because systemd uses it." This says nothing about the technical merits of its protocol.
You misunderstood my point. I'm not justifying it on the basis that systemd is using it. I'm saying the fact that all the other systems have arrived at a similar, and in many cases the exact same mechanism, is pretty strong evidence that it is a reasonable design choice.
Basically all the Linux systems out there are already using udev & dbus. Most of the non-Linux systems do as well. Everyone's done it and made it work. That systemd is adopting arguably the most entrenched one in the Linux sphere is hardly as controversial as people seem to think it is.
> My point is, data within a pipe is about as secure as it's going to get.
I wasn't trying to suggest it wasn't a secure point-to-point communication mechanism (it has issues, but it's fine enough). The issue is that you need more integration with the security model to avoid having a rat's nest of security logic on top of it.
> I think you will find that the bulk of IPC problems will come from dealing with data you didn't expect--that is, processes sending "fraudlent/misleading" data and other processes acting on it.
There's a very long and glorious history of malformed and misleading IPC causing problems. Not that it is the only thing, but life becomes a lot easier when that problem is off the table.
> DBus won't help you there--you always always ALWAYS have to validate data you receive from untrusted parties, no matter what transport or wire format you're using.
Yes you will. However, DBus ensures that you don't have to write a ton of redundant code just verifying you are getting validly structured data and dealing with the nasty ways someone might try to exploit that.
Imagine having to write a secure REST service where the only thing you had to worry about in the entire network protocol stack was the validity of the data expressed in the payload.
So, you basically want to turn IPC into CORBA. It's a bad idea to have the OS impose too much structure on your IPC, in the same way that it's a bad idea to have the base class in an object hierarchy try to take on too many subclass-specific responsibilities. This is because over-specialization of a component needlessly constrains the designs of systems that use it.
That said, you are correct in that byte streams alone do not make for loosely coupled systems. Programs must additionally emit data such that other unrelated programs can operate on it without modification. But we already have this universally-parsable data format: it's called human-readable text. It's why you can "grep" and "awk" and "sed" the outputs of "ls" and "find" and "cat", for example.
Take a second and imagine what the world would be like if you had to write "grep" such that it had to be specifically designed to interact with "find," instead of simply expecting a stream of human-readable text. Imagine if "awk" had to be specifically designed to interact with "ls." This is the world that CORBA-like IPC creates, where programs not only need to be intrinsically aware of the higher-level RPC methods each other program exposes, but also intrinsically aware of the access and consistency semantics that go along with it. No thank you; I'll stick with pipes and human-readable text, where the data format, data access, and consistency semantics are universally applicable.
> Basically all the Linux systems out there are already using udev & dbus. Most of the non-Linux systems do as well. Everyone's done it and made it work. That systemd is adopting arguably the most entrenched one in the Linux sphere is hardly as controversial as people seem to think it is.
First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events. Second, not everyone uses udev and dbus. mdev, smdev, eudev, and static /dev are also widely used and have well-defined use-cases that udev does not serve, and plenty of servers (and even my laptop) get along just fine without dbus.
Trying to justify udev and dbus because "everyone uses them so you should too!" is not only an example of the bandwagon logical fallacy, but also reveals your ignorance of and insensitivity to other users' requirements.
> The issue is that you need more integration with the security model to avoid having a rat's nest of security logic on top of it.
I said it above and I'll say it again here. The IPC layer does not know and cannot anticipate the security needs of every single application. If you try to design your IPC system to do this, you will fail to encompass every possible case. This is because threat models are not only specific to the application, but also specific to the context in which the application runs.
For example, you do not send your bank account password over an out-bound socket unless it has first been encrypted using a secret key known only to you and your bank. Your reasoning implies that the IPC system should be tasked with automatically enforcing this constraint, among others. Nevermind the fact that the IPC system will see only the ciphertext and thus will not know that the data it's about to send contains your password.
> However, DBus ensures that you don't have to write a ton of redundant code just verifying you are getting validly structured data and dealing with the nasty ways someone might try to exploit that.
So do plenty of stub RPC compilers and serialization libraries that have been around longer, are more widely used, and are better tested than DBus. However, neither DBus nor any of these solutions will help you with well-formatted but invalid data. Your application has to deal with that, since the validity of data is both application-specific and context-specific (so, not something the IPC system can anticipate).
Again, what's so special about DBus, besides the fact that it's the New Shiny?
> Imagine having to write a secure REST service where the only thing you had to worry about in the entire network protocol stack was the validity of the data expressed in the payload.
If I'm writing a secure service of any kind, you can be damn certain I'm thinking about a LOT more than input validation! Security encompasses waaaaaaaaaay more than that.
Even if security wasn't an issue, there is still a LOT more to worry about than input validation. Things like scalability, fault tolerance, concurrency, and consistency come to mind. There is no silver bullet for any of these, let alone an IPC system that solves them all at once!
I was thinking about this comment, and I realized this is probably the source of most of your angst, which leaves some great solutions on the table.
systemd isn't really creating a much more significant break with the systems you like, because it's building on top of Linux, which for the most part has already made the break.
The problem is, projects like GNOME, which have software you want to use, are integrating more tightly with Linux and specifically bits of systemd.
I think the obvious solution is a bridge/better interface. The contracts that GNOME is going to rely upon are at least going to be pretty well defined, and if you've got another system that works better, it shouldn't be hard for it to provide an equivalent, even compatible, interface.
If it really is demonstrably better, GNOME and other projects will likely adopt your interface/abstraction, and systemd will end up having to communicate through your interface. Even if they don't, it is a comparatively small effort for a software community to support a relatively small set of touch points that they want GNOME to be aware of, and maintaining a fork or compatibility layer is a perfectly reasonable solution (indeed, BSD already does this for Linux runtimes).
I can understand why it'd not be a perfect solution from your perspective, but if a bunch of developers contributing to a work you care about are going a direction you don't like, it's about as good an outcome as one could hope for.
No, what gives me the most angst is the arrogance of a certain segment of Linux+systemd users who think that being able to apt-get install systemd and write some minimal unit files for some trivial services somehow makes them domain experts on OS design. And these people seem to think that other users' requirements don't matter, since if they're not using systemd too, they're clearly doing it wrong.
No, very much not, because we don't really need an RPC mechanism here. We want something in the event/messaging space.
But it isn't just a want. If you don't have it, each of your components ends up very tightly coupled to every other component it talks to, and you've got a truly monolithic mess on your hands.
> It's a bad idea to have the OS impose too much structure on your IPC, in the same way that it's a bad idea to have the base class in an object hierarchy try to take on too many subclass-specific responsibilities. This is because over-specialization of a component needlessly constrains the designs of systems that use it.
This isn't exactly a new concept or a new problem. There are plenty of existing cases where this is happening (basically every platform I can think of right this moment, though I'm sure there are plenty of exceptions), including in the current Linux udev mechanism.
> But we already have this universally-parsable data format: it's called human-readable text. It's why you can "grep" and "awk" and "sed" the outputs of "ls" and "find" and "cat", for example.
/me falls out of chair.
Yeah, that's worked out great. Never had a problem with init scripts not extracting the right column or handling a new variant in how the output comes out (or even better still, the dreaded "value with embedded whitespace").
But you know what? DBus is basically human readable with a bit more imposed structure than generic streams. So, I think you are arguing in support of the systemd approach without realizing it! ;-)
> First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events. Second, not everyone uses udev and dbus. mdev, smdev, eudev, and static /dev are also widely used and have well-defined use-cases that udev does not serve, and plenty of servers (and even my laptop) get along just fine without dbus.
Very true. I was speaking in generalities. Point being, udev is out there and very thoroughly established as something that people seem to generally want.
> The IPC layer does not know and cannot anticipate the security needs of every single application. If you try to design your IPC system to do this, you will fail to encompass every possible case.
You're ascribing ambitions to the systemd project that go well beyond even the goals its critics call overly broad in scope. This is for addressing a relatively narrow set of problems that wouldn't even come close to defining 1% of IPC on a Linux system. I'm not suggesting we replace the entire Unix toolset with a complete new set of interfaces and programs (nor are the systemd guys). This is specifically for managing the interactions between devices & daemons... It's a well established problem domain with some well established roles & responsibilities and some pretty well understood data message/event structures.
While it might use systemd/dbus/whatever to get notifications about various services and system events, YOUR BANKING SOFTWARE IS NOT SUPPOSED TO USE SYSTEMD TO MOVE MONEY BETWEEN YOUR ACCOUNTS!
> Again, what's so special about DBus, besides the fact that it's the New Shiny?
DBus isn't the new shiny. It's the old shiny. The new shiny would probably be 0mq or some of the new datagram protocols that people are experimenting with, along with various extensible binary protocols like MessagePack and Cap'n Proto.
What's special about DBus is that it is already being used very broadly on Unix platforms for this kind of function and is well integrated into the system security model. The one bit of additional coolness it brings to the table is the support for socket activation, which simplifies the complexity of start ordering and discovery tremendously, which is indeed a VERY nice benefit, but could no doubt have been NIH'd independently.
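For readers who haven't seen it, the activation contract itself is tiny. A hedged sketch of a daemon honoring it, based on systemd's documented LISTEN_FDS/LISTEN_PID convention (inherited descriptors start at fd 3); the function name and fallback behaviour here are my own:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited fd under the protocol

def get_activated_socket(fallback_port):
    """Return a listening socket: the inherited one if we were
    socket-activated, otherwise one we bind ourselves."""
    nfds = int(os.environ.get("LISTEN_FDS", "0"))
    if nfds >= 1 and os.environ.get("LISTEN_PID") == str(os.getpid()):
        # Adopt the already-listening descriptor passed down by the manager.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # Not activated: plain old bind/listen, no manager required.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", fallback_port))
    s.listen(8)
    return s
```

A daemon written this way runs identically under an activating init and standalone, which is most of the appeal.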
> If I'm writing a secure service of any kind, you can be damn certain I'm thinking about a LOT more than input validation! Security encompasses waaaaaaaaaay more than that.
Yes, but the point is one derives substantial benefit from trusted components in the system that take care of their part of the problem. You don't benefit from having to reimplement an entire security apparatus with each component. This is basic security compartmentalization 101.
> Things like scalability, fault tolerance, concurrency, and consistency come to mind. There is no silver bullet for any of these, let alone an IPC system that solves them all at once!
You appear to be simultaneously claiming there is no silver bullet and being terribly upset that systemd isn't one.
Yes, it is no silver bullet. It's not even the huge sea change that some people seem to think it is. Rather, it is an incremental improvement over existing practices that gets rid of a bit more of the cruft and stupidity in the existing infrastructure. Doing that kind of thing right can really make a big difference for the system as a whole, but it isn't the apocalypse.
My argument is that the OS's IPC should not enforce an IPC record structure, but should enforce a consistent set of IPC access methods (i.e. pipes, sockets, shared memory, message queues, etc.) defined independently of applications. I think we're in agreement about the latter--if the OS were to let each application have its own IPC access methods, then there would be as many access methods as there are applications (leading to a tightly-coupled "truly monolithic mess").
I don't think we've reached agreement on the former. I claimed that there is no "best record structure" for all applications, so the OS shouldn't try to enforce one. I also mentioned that human-readable text is the universal data format, which is both a manifestation of this principle (i.e. the OS imposes no constraints on the structure of bytes passed between programs) and a desirable outcome since parsing text is super-simple to implement (by contrast, take a look at the examples in dbus-send(1) to see how painful the alternative can be). You disagree--you think the IPC system should also handle things like serialization and validation.
The problem is that serialization and validation are both application-specific (and even context-specific) concerns, and for the IPC system to address them, it has to gain knowledge from the application. But this lets the application set IPC access methods, which we've already agreed is a bad idea! My (extreme) example to prove this point was that pushing validation responsibility from the application into the IPC system would require it to handle ridiculous application-specific corner cases, like defining a socket class that makes sure that your bank account password won't be sent to the wrong host (still not sure how you concluded that that remark was about systemd). The point is, if you want your IPC system to handle validation for you, you're just asking for trouble.
The same type of problem occurs when you put serialization into the IPC system. The serializer has to know whether or not a string of bytes represents a valid application-defined record. If you make serialization the IPC system's responsibility, it needs application-level knowledge on whether or not an inbound message represents a valid message (which also leads to ridiculous corner cases).
DBus not only enforces structured records (bad), but also lets applications define their own IPC access methods (worse). The RPC-like nature of DBus means that both peers must not only agree on the interpretation of bytes in advance, but also agree on the semantics of accessing them. Unlike reading from a pipe, accessing the value of a DBus object by name can have arbitrary side-effects which the requester must be aware of. In the limit, this puts us into the undesirable situation of having each application-to-application pair agree on an IPC access method, leading to the tight coupling nightmare.
Don't get me wrong--DBus has its use-cases. OS-level IPC isn't one of them. I wish the systemd folks took some time to think about this, but they're too busy trying to make DBus into OS-level IPC with no regard for the consequences. See kdbus and the SOCK_BUS socket class it exports.
> But you know what? DBus is basically human readable with a bit more imposed structure than generic streams
/me falls out of chair too.
Now you're just being daft :) The more structure you impose on bytes, the less human-readable it gets. For example, I don't think I have to explain to you why this comment is more legible as rendered in your browser (unstructured text) than as raw HTML (structured records).
> DBus isn't the new shiny. It's the old shiny
CORBA is the old shiny ;) See also: https://en.wikipedia.org/wiki/Remote_procedure_call#Other_RP...
> The one bit of additional coolness it brings to the table is the support for socket activation
Not the IPC system's responsibility. See also: https://en.wikipedia.org/wiki/Xinetd
> You don't benefit from having to reimplement an entire security apparatus with each component
Of course--you use a library and an RPC stub generator for this. Not really part of the "design principles of IPC" discussion we've got going, though.
That may be what you read, but the context of drdaemon's statement was specifically in response to a question about communications with the init daemon, and of course everything I said after was as well... Glad we got that settled.
> Not the IPC system's responsibility.
Hmm... IPC systems need to have ways of matching up the parties in a conversation, and having one where you don't have to enforce who calls whom first and parties don't have to mutually agree upon the specific endpoints in advance sure seems like something an IPC system might want to have... particularly one employed in an init system...
> See also: https://en.wikipedia.org/wiki/Xinetd
As discussed here: http://0pointer.de/blog/projects/systemd.html
There absolutely is a ton of overlap between what systemd does with socket activation and what Xinetd has evolved to... but as with everyone else doing OS design, there comes a point where you leave Xinetd behind and let the full potential of that trick work in your favour.
> Now you're just being daft :)
Me and the folks at Wikipedia: http://en.wikipedia.org/wiki/Comparison_of_data_serializatio...
Don't get me wrong, I think a lot of the Wikipedians are pretty daft, but they are as reasonable a judge of human readability as I can imagine, given what they do.
> CORBA is the old shiny ;)
CORBA is the old shiny-my-god-we-dont-need-nearly-all-of-that-and-it-really-benefits-a-bootstrapped-system-so-there-is-a-chicken-and-egg-problem-here. But yeah, close. I don't think anyone has seriously considered that since the OS/2 & Workplace OS days... and even then.
That said, I would say that THESE DAYS (unlike in its heyday), CORBA is a pretty awesome, robust, feature-rich _general purpose_ distributed IPC system.
> Of course--you use a library and an RPC stub generator for this.
Ah, so it is much more modularized if it runs as an executable piece of code in process than a piece of executable code out of process. Got it. ;-)
> Not really part of the "design principles of IPC" discussion we've got going, though.
Well, that's the discussion you're having. I'm trying to talk about the design constraints and appropriate solutions for the problem domain...
Lennart Poettering claims that you should use his software instead of someone else's software! I'm SHOCKED! Full story at 11.
Seriously now, did you honestly think that he would say to use xinetd over systemd? Do you honestly believe a developer will advocate the use of a competing piece of software over something (s)he produced?
> There absolutely is a ton of overlap between what systemd does with socket activation and what Xinetd has evolved to... but as with everyone else doing OS design, there comes a point where you leave Xinetd behind and let the full potential of that trick work in your favour.
Unless you don't feel like replacing small, simple, easy-to-use, well-tested xinetd with the 200K-line pile of C code that is systemd.
Besides, I've got your socket activation right here: Start the daemon, have the daemon open a port, and let the kernel swap it to disk. The kernel will swap it back in when it receives a connection for it.
* the daemon preserves state between "activations" for free
* the kernel gives you this feature for free
* the daemon doesn't have to trust another userspace program with anything
* the daemon can use mlock() to prevent sensitive pages from getting swapped
* if this isn't enough, you can encrypt the swap partition to resist offline attacks
* If disk is too expensive, disk is read-only, you have no swap, you have no CAP_IPC_LOCK, the daemon would need to mlock() too much RAM, and you can't encrypt your swap, there's xinetd.
* Need to apply filters or QoS controls on connections before waking up the daemon? That's what the firewall is for.
* You can have xinetd trigger whatever event you want, since all it does is fire up a program and run it. This includes alerting other programs, like a service manager, that it got a connection, and maybe even sending along the message (or the file descriptor) if you want. There is no need for systemd to subsume this responsibility.
As you can see, "socket activation" is by and large a marketing gimmick.
> Me and the folks at Wikipedia:...
You think an article that compares data serialization protocols somehow proves your ludicrous claim that human readable text is less readable than marked-up text? Maybe daft was too nice a word...
> Ah, so it is much more modularized if it runs as an executable piece of code in process than a piece of executable code out of process. Got it. ;-)
Sir/madam, have you ever written an Internet-facing daemon? Obviously the bulk of the RPC logic lives in a shared library. You know, a logically distinct module that can be independently installed, loaded once, and independently maintained.
Besides, procedurally-generated RPC-handling code adds no technical debt to your project, any more than the compiler's generated assembler output does.
You seem to want to replace the RPC shared library with a separate process. Not only will this create a performance bottleneck, but it also makes it a single point of failure. If it crashes, all your daemons lose their connections. This is obviously highly undesirable, especially on servers.
> Well, that's the discussion you're having. I'm trying to talk about the design constraints and appropriate solutions for the problem domain...
I think I'm done with you. You deserve everything systemd will ever do for you.
No... but I thought he might be able to pretty adequately explain how systemd exploits socket activation and contrast it with xinetd...
> Do you honestly believe a developer will advocate the use of a competing piece of software over something (s)he produced?
Well, I've certainly done it, so it is possible, but I wasn't referencing him as a persuasive voice... Even if I was, that'd be such a flawed and pathetic argument...
> Unless you don't feel like replacing small, simple, easy-to-use, well-tested xinetd with the 200K-line pile of C code that is systemd.
You might want to look at the code. The socket activation logic is a pretty clean & tight ~90K chunk of code in a handful of files... and for the record, xinetd isn't that slim, with nearly 25K lines of code spread across well over a hundred files, and that's if you only count the C source files.
> As you can see, "socket activation" is by and large a marketing gimmick.
Sigh... I can see you didn't read the article. The implementations aren't terribly different, and Lennart already made your points for you... systemd does have some little tweaks that open up a whole world of advantages.
> Sir/madam, have you ever written an Internet-facing daemon?
Yes, but of course, in this context we're primarily focused on AF_UNIX sockets...
> Obviously the bulk of the RPC logic lives in a shared library. You know, a logically distinct module that can be independently installed, loaded once, and independently maintained.
It's very common, for example, for web apps to have a separate process that parses and validates inbound RESTful HTTP requests before passing them on to the main application process. You can and do run web apps that are directly exposed to the Internet, but nobody suggests this is to make the request processing logic more modular...
> You seem to want to replace the RPC shared library with a separate process. Not only will this create a performance bottleneck, but it also makes it a single point of failure. If it crashes, all your daemons lose their connections. This is obviously highly undesirable, especially on servers.
I see you are familiar with Erlang. ;-)
You raise a good point. Often, to reduce failure rates, people employ load balancers that work with various HA protocols to avoid losing connections. What do load balancers do again? Oh yeah, they are separate processes that receive inbound RPC requests, parse and validate them, and attempt to mitigate any inbound attacks before routing and forwarding them to the application itself...
And of course, a lot of web applications are largely front ends to a database, which means they themselves are processing RPC requests, formatting, validating and transforming them before forwarding them to a database for execution...
..and let's not get started about middleware... ;-)
> You seem to want to replace the RPC shared library with a separate process.
No. I really don't. I'm just pointing out that if you are looking for small, modular and loosely coupled components that are fairly resilient, it's not like moving a component from a shared library to a separate process is going to get critiqued on the basis that it intrinsically makes for more tightly coupled code.
Or wait, are you suggesting that systems where all these libraries are rolled up into one process would be more modular? [looks at critique of how systemd puts too much stuff into one process...]
(It's kind of unfair but I could not resist):
Just like dbus today? http://www.ubuntu.com/usn/usn-2352-1/
There's only one PID 1 - it would not be hard to locate its UNIX domain socket.
They're file descriptors so they work with select, poll, etc.
Please elaborate. There's nothing inherently vulnerable with FD passing. In fact, dbus relies on it so if you can't make FD passing secure then you can't make dbus secure either.
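For the curious, descriptor passing is only a few lines these days. A sketch using Python 3.9+'s wrappers around SCM_RIGHTS ancillary data (the helper names are mine):

```python
import socket

def send_one_fd(sock, fd):
    """Hand an open file descriptor to the peer over an AF_UNIX socket.
    send_fds wraps sendmsg() with SCM_RIGHTS ancillary data."""
    socket.send_fds(sock, [b"x"], [fd])

def recv_one_fd(sock):
    """Receive one descriptor; the kernel installs it as a new local fd."""
    _msg, fds, _flags, _addr = socket.recv_fds(sock, 16, 1)
    return fds[0]
```

The received descriptor is a first-class fd in the receiving process: it works with read/write, select, poll, and friends, exactly as claimed above.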
I think you're overstating the difficulty of doing this correctly.
I for one am OK with PID 1 not being introspectable.
It's quite possible to maintain ABI compatibility, but still, how much of a moving target should PID 1's ABI be anyways?
dbus is a great solution for normal applications, but PID 1 is special. The generality provided by dbus is unnecessary. PID 1 should not be servicing requests from ordinary users, so any security concerns with using UNIX domain sockets directly are moot. If there are certain actions, such as shutdown, that need to be triggered from non-root users, then there should be a separate, unprivileged, process that listens on dbus, implements authorization logic, and then relays the command to PID 1 over a UNIX domain socket using a very simple and easily-audited interface. That's good security and reliability engineering.
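To make that concrete, here is a rough sketch of such a relay, assuming a hypothetical PID 1 listening on a private UNIX domain socket at an invented path; the allowed-command set and wire format are illustrative only, and a real broker would consult polkit or similar before forwarding anything:

```python
import socket

# Deliberately tiny, auditable command set (illustrative, not systemd's).
ALLOWED = {"poweroff", "reboot"}

def relay(command, pid1_socket_path="/run/initctl.sock"):
    """Run unprivileged: validate the request, then forward it to PID 1
    over its private UNIX domain socket and return the short reply."""
    if command not in ALLOWED:
        raise PermissionError("command %r not permitted" % command)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(pid1_socket_path)
        s.sendall(command.encode("ascii") + b"\n")
        return s.recv(64)  # fixed-size, trivially parseable acknowledgement
```

Everything PID 1 itself has to parse fits in one newline-terminated word, which is the auditability argument in a nutshell.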
It all depends on the use cases.
For me, FreeBSD is a no go given my desktop usage requirements.
This currently makes me a Windows/Mac OS X guy in what concerns desktop usage.
I'm just glad for Slackware at this point.
The author of this piece makes the classic mistake of equating the init system with the process manager and process supervisor. These are, in fact, all separate stages. The init system runs as PID 1 and, strictly speaking, its sole responsibility is to daemonize, reap its children, set the session and process group IDs, and optionally exec the process manager. The process manager then defines a basic framework for stopping, starting, restarting and checking status for services, at a minimum. The process supervisor then applies resource limits (or even has those as separate tools, like perp does with its runtools), process monitoring (whether through ptrace(2), cgroups, PID files, jails or whatnot), autorestart, inotify(7)/kqueue handlers, system load diagnostics and so forth. The shutdown stage is another separate part, often handled either in the initd or the process manager. Often, it just hooks to the argv of standard tools like halt, reboot, poweroff, shutdown to execute killall routines, detach mount points, etc.
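The reaping duty described above really is tiny. A sketch of just the zombie-collection loop, in Python for brevity (a real PID 1 is C, and also has to install a SIGCHLD handler, forward signals, and so on):

```python
import os

def reap_children():
    """The zombie-collection loop PID 1 must run (typically on SIGCHLD):
    collect every child that has already exited, without blocking."""
    reaped = []
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break            # no children at all
        if pid == 0:
            break            # children exist, but none have exited yet
        reaped.append((pid, os.waitstatus_to_exitcode(status)))
    return reaped
```

Everything beyond this loop (status frameworks, resource limits, monitoring) belongs to the later stages the comment enumerates.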
To stuff everything in the init system, I'd argue, is bad design. One must delegate, whether to auxiliary daemons, shell scripts, configuration syntax (in turn read and processed by daemons) or what have you.
sysvinit is certainly inadequate. The inittab is cryptic and clunky, and runlevels are a needlessly restrictive concept to express what is essentially a named service group that can be isolated/overlayed.
Of course, to start services on socket connections, you either use (x)inetd, or you reimplement a subset or (partial or otherwise) superset of it. There's no way around this, it's choosing to handle more on your own rather than delegate. In systemd's case, they do this to support socket families like AF_NETLINK.
As for systemd being documented, I'd say it's quite mediocre. The manpages proved to be inconsistent and incomplete, and for anyone but an end user or a minimally invested sysadmin, of little use whatsoever. Quantity is nice, but the quality department is lacking.
sysvinit's baroque and arduous shell scripts are not the fault of using shell scripts as a service medium, but have to do with sysvinit's aforementioned cruft (inittab and runlevels) and the historical lack of any standard modules. BSD init has the latter in the form of /etc/rc.subr, which implements essential functions like rc_cmd and wait_for_pids. Exact functions vary from BSD to BSD, but more often than not, BSD init services are even shorter than systemd services: averaging 3-4 lines of code.
A unified logging sink is nothing novel, it's just that systemd is the first of its kind that gained momentum, but with its own unique set of issues. syslogd and kmsg were still passable, and the former also seamlessly integrated itself with databases.
Once again, changing the execution environment is a separate stage and has multiple ways of being done. Init-agnostic tools that wrap around syscalls are probably my favorite, but YMMV.
As for containers, it's about time Linux caught up to Solaris and FreeBSD.
> The author of this piece makes the classic mistake
> of equating the init system with the process manager
> and process supervisor.
In the interest of full disclosure I must admit I was on duty when AT&T and Sun were creating the unholy love child of System V and BSD, I'm sorry.
The architecture, as bespoke by AT&T system engineers, was that process 1 was a pseudo process which configured the necessary services and devices which were appropriate for an administrator defined level of operation. Aka a 'run level.' I think they would have liked the systemd proposal, but they would no doubt take it completely out of the process space. I am sure they would have wanted it to be some sort of named stream into the inner consciousness of the kernel which could configure the events system so that the desired running configuration was made manifest. They always hated the BSD notion that init was just the first 'shell process' which happened to kick off various processes that made for a multi-user experience.
Originally users were just like init, in that you logged in and everything you did was a subprocess of your original login shell. It was a very elegant system, root's primal shell spawned getty, and getty would spawn a shell for a user when they logged in, everything from that point on would be owned by the user just like everything that happened before was owned by root. The user's login shell logged out and everything they had done got taken down and resources reclaimed. When the root shell (init) logged out all resources got reclaimed and the system halted.
But Linux, like SunOS before it, serves two masters. The server which has some pretty well defined semantics and the "desktop user" which has been influenced a whole bunch by microcomputer operating systems like Windows.
I wasn't the owner of the init requirements document, I think Livsey was, the important thing was that it was written in the context of a bigger systems picture, and frankly systemd doesn't have that same context. I think that is what comes across as confusion.
Not to mention, most of the systemd hate seems to be spread by only two main sources now, and both cite each other as sources (which is a little ironic).
systemd was really designed with servers in mind, and really does bring a lot to the table for server admins.
Jupiter Broadcasting are an unreliable source, to say the least. I did watch that episode. When you use such pristine arguments as "Someone reimplemented systemd's D-Bus APIs, therefore systemd is portable!" (much like the Windows API is portable, because Wine exists) and claim that systemd is a "manufactured controversy" while responding to easy straw man arguments, there is a term for that kind of person: a shill.
I was also very amused by the Linux Action Show's coverage of uselessd. They spent the entire time whining about the name, thinking it makes fun of the systemd developers, when in fact it's making fun of ourselves. They also got mad over the use of the word "cruft" and later called us "butthurt BSD users".
Good to see that you bring some new insights, however. Very mature and enlightening.
If a truly better init system already exists, then people who care strongly and/or have very specific use-cases where that init system excels will use it. Nobody is married to systemd.
One must also look at how many industry heavyweights are behind systemd now (even Canonical). I'm certain they have considered the pros and cons to systemd much more extensively than all of the armchair quarterbacks appearing in this thread. Perhaps you personally dislike systemd for what you think are good reasons, but know you are in the minority now (you weren't always).
Bottom line -- systemd is targeting servers, everything else is tertiary. Don't like it, then don't use it. But quit using every possible chance to spread needless hate. systemd is not an assault on you personally. No matter how loud you scream -- systemd is not going anywhere for the time being.
"Hey, everybody, look at all the people using systemd! They must know better than you, so shut the fuck up and use whatever you want - no one is stopping you! By the way, systemd is meant for servers, even though the developers have never said anything like that and have made it clear that it's meant for all use cases."
In this regard, you are little more than a troll. Or a person who thinks popularity means quality. Both, even.
Which is totally ironic too in that the server-admins hate it. (speaking just for myself here=) )
I am a sysadmin of a medium sized data-center. I am in charge of 100-150 servers at any given point. None of the changes that systemd 'fixes' benefit me or my systems.
Boot times? What's the point when it takes 10-minutes for the drive-arrays to spin-up?
Logging? I pray a system never dies and I have to access those rotten binary log-files from a live-cd.
Network changes/configuration? Nope, every server is configured with static network configs.
Power Management? Ha! That's funny. Downtime in minutes costs more than electricity does in a month.
I could go on. But there is one major caveat: As a laptop user, systemd is fantastic.
As my Debian servers need to and/or get updated and start requiring systemd then I will just migrate them to OpenBSD. This process has already begun.
Systemd is changing things for the wrong group of people. Mobile/Desktop users have a lot of wiggle room and areas that need improvement. Server admins need stability; in software, hardware, (script) syntaxes, and interfaces. Desktop users need everything that systemd offers.
I will concede that systemd might be a good fit with Docker, and I am looking into that too; but I guarantee you it will be on its own box and not homogeneous with the rest of my network.
I ran into a recent interview where he kept referring back to the OSX sound system when talking about PulseAudio, and Avahi is his zeroconf/Bonjour. And with systemd he constantly makes references to launchd, the OSX "init".
BTW, Red Hat just now announced that the future of the company would be Openstack and the cloud. Fits perfectly with the push for containerization in Systemd.
More and more I get the impression that the "developers" mentioned as benefiting from systemd are the likes of the Reddit crew. Reddit pretty much could not exist without Amazon's cloud services.
Meaning that for Poettering the future is two things: cloud computing and cloning OSX. And given the number of web monkeys that seem to sport a Mac, I am not surprised at all.
I just wish that they could avoid infecting the rest of the Linux environment...
This statement, "systemd isn't final - it's software, and will come and go.", is the one that most captures my angst. And you can replace 'systemd' with 'linux' or 'gstreamer' or 'webkit' or 'gcc' or 'fsck' for that matter. Not only are they not 'final' but what they would be able to do if they were 'final' is left unspecified. That puts the system on the DAG equivalent of a drunken walk. And users don't seem to like it when their systems are evolving randomly.
I really enjoyed the early RFC process of the IETF because we could argue over what was and was not the responsibility for a protocol, what it had to do and what was optional, and what it would achieve when it was 'done.' Then people compared what they had coded up. When the architecture is the code and the code is the spec, my experience is that sometimes we lose track of where it was we were going in the first place.
I think systemd has a lot going for it, and it's been pretty stable on my Arch notebook, but I'm not too thrilled with the way it takes over so many tasks at once and eschews text log files. What's frustrating is that I didn't have much choice in the matter. Yeah, I could switch to another distro, but since Red Hat, Suse, and now Debian and Ubuntu are switching to systemd, that leaves Gentoo or BSD or something. Which are perfectly fine in their own right, but that's pretty drastic if I just want to avoid systemd.
With so many heavyweight linux enterprise companies jumping on systemd, one must wonder what consideration they have given the issue? I'd wager, a lot. Also, note that systemd is really designed with servers in mind, so it's not surprising for a desktop/laptop distro user to find it bothersome (it wasn't designed with your use-case in mind). With that said, the beauty of Arch is you can yank systemd out and go with whatever init system you desire.
As for the systemd design: I think it started with Poettering drooling over OSX launchd (his other projects also seem to be straight OSX feature clones), and has since been hitched to the cloud computing push within RH.
In essence, the kind of server that Systemd seems to favor are cloud computing instances where storage and networking can come and go as the back end gets configured for new needs.
Traditional static big iron and clusters don't really benefit much from the "adaptive" nature of systemd. If those break, they usually have a hot reserve taking over while the admins get to work figuring out what broke.
The process manager gets killed. How do you recover?
If you have respawn logic for it in PID 1, how do you log information about a failure to respawn the process manager?
Perhaps you build in some basic logic for logging. Where do you store the data? What if the user level syslog the user wants you to feed data to can't be brought up yet, because it depends on a file system that is not yet mounted?
There may very well be alternatives to the systemd design, but I've yet to see any that are remotely convincing, in that most of them fail to recognise substantial aspects of why systemd was designed the way it is, and just tear out stuff without proper consideration of the implications.
Most proposed alternative stacks to systemd fall down on the very first question above.
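For concreteness, the respawn question above can be sketched as a toy supervision loop. This is purely illustrative (not systemd's actual code, and the `supervise` name and the respawn cap are invented for the sketch); a real PID 1 would loop forever, and the part the sketch deliberately dodges - where each death gets logged - is exactly the hard question.

```python
import subprocess

def supervise(cmd, max_respawns=3):
    """Respawn cmd each time it exits, up to max_respawns deaths.

    A real PID 1 would loop forever -- and would also have to decide
    where to record each death, which is the open question this toy
    version ignores entirely.
    """
    deaths = 0
    while deaths < max_respawns:
        proc = subprocess.Popen(cmd)   # spawn the "process manager"
        proc.wait()                    # block until it dies
        deaths += 1                    # death noticed; go respawn it
    return deaths
```

Every failure path in even this tiny loop (fork failure, exec failure, the child dying in a tight loop) raises the same "who logs this, and to where?" problem the questions above are getting at.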
I agree with you that it doesn't seem like a great idea to stuff everything in the init system, but I don't agree that "one must delegate" unless the delegation reduces complexity, and I've not seen any convincing demonstrations that it does.
I'd love it if someone came up with something that provided the capabilities and guarantees that systemd does with independent, less coupled components, though.
But there's no way I'm giving up the capabilities systemd is providing again.
What happens if the process manager crashes if you're running systemd: the process manager is in PID1 (or, equivalently, in a tightly coupled process that PID1 depends on - because the whole point of your post was that you can never get to a state where PID1 is working but logging isn't working), so your system crashes, every time. How is that better? And if that's really what you want, it's easy to configure a decoupled init system to do that.
Hey, some people like their logs to be sent as email. Maybe we should move sendmail into PID1 as well.
If someone could come up with a systemd replacement which manages to keep the systemd features while using a design philosophy more in line with that of Daemontools, that would be fantastic, but it'd end up looking very different to s6. Some stuff could certainly be cleanly layered on top (such as using a wrapper to avoid the start/stop problem using the same method of cgroup containment as systemd). Other things, such as explicit or implicit (via socket activation etc.) dependency management, I'm not so sure how you'd fit into that model easily.
I'd love it if someone tried, though. It would certainly make it easier to experiment with replacing specific subsets of functionality.
Logs are essentially write-once, append-often, read-rarely data. As such, the optimal format is always going to be a flat, append-only file.
... in the indicative rather than in the subjunctive, and in fact already mentioned here once. http://homepage.ntlworld.com./jonathan.deboynepollard/Softwa...
> The process manager gets killed. How do you recover?
In nosh terminology, this is the service manager. If it gets killed, the thing that spawned it starts another copy. This could be systemd, if one were running the service manager under systemd. It could be the nosh system manager. Of course, recovery is imperfect. If one designs a system like the nosh package, one makes an engineering tradeoff in the design; the same as one does when one designs a package like systemd. The system manager and the service manager are separate, but the underlying operating system kernel will re-parent orphaned service daemon processes if the service manager dies. One trades the risk of that for the greater separation of the twain, and greater simplicity of the twain. The program that one runs as process #1 is a lot simpler, being concerned only with system state, but there's no recovery in a very rare failure mode. Indeed, the simplicity makes that rarity even greater, if anything. systemd makes the tradeoff differently: there's recovery in a very rare failure mode (which I've yet to see occur in either system outwith me, with superuser privileges, sending signals by hand) at the expense of all of the logic for tracking service states, and for trying to recover them (in circumstances where one knows that the process has failed somehow and might possess corrupted service tracking data), all in that one program that runs as process #1.
> If you have respawn logic for it in PID 1, how do you log information about a failure to respawn the process manager?
In the log that is there for the system manager. See the manual page for system-manager, which explains the details of the (comparatively) small log directory and the (one) logging daemon that is directly controlled by the system-manager, both intended to be dedicated to logging only the stuff that is directly from the system manager and service manager.
> Perhaps you build in some basic logic for logging. Where do you store the data?
In a tmpfs, just like systemd-journald does in the same situation. /run/system-manager/log/ in this particular case. Strictly speaking, this "basic logging" isn't built-in. In theory, it is replaceable with whatever logging program one likes, as the system-manager just spawns a child process running cyclog and that name could be fairly simply made configurable. In practice, difficulties with the C++ runtime library on BSDs being placed on the /usr volume rather than the / volume, and indeed the cyclog program itself living on the /usr volume when it has to be under /usr/local/, have made it necessary to couple more tightly than wanted here, so far. But those problems could go away in the future; if the BSD people were persuaded to put the C++ runtime library in the same place as the C runtime library, for example.
> Most proposed alternative stacks to systemd falls down on the very first question above.
In many ways, that's because it's a poor question that focusses on a very rare circumstance. As I said, I've yet to see either system exhibit this failure mode in real-world use absent my deliberately triggering it. (Nor indeed have I ever seen it occur with upstart or launchd.) Much better questions are ones like "Where are inter-service dependencies and start/stop orderings recorded?", "Is there an XML parser in the program for process #1?", "What makes up a service bundle?", "How do system startup and shutdown operate?", "How does the system cope with service bundles that are on the /var volume when /var hasn't been mounted yet?", "How does the system handle service bundles in /etc when the / volume is read-only?", and "What does the system manager do?". Those are all answered in the package's manual pages and Guide, of course.
The bigger point is that there are lots of these "narrow corner cases" all over a typical SysV-init setup, not least due to tons of badly written init scripts. The number of times services have failed to start because of some unhandled corner case in a script is beyond counting.
To produce a systemd alternative, creating something that competes favorably with SysV-init is insufficient. Today you also need to demonstrate how you deal with those corner cases, or why they don't matter - many of us have no intention of going back to the bad old days.
I'm not saying Xorg has never crashed; it did, rarely, when running RC code or proprietary drivers.
In fact I probably had as many Xorg crashes as kernel panics, which says something about how stable Xorg is.
Still I wouldn't want to run it as PID1, where a crash would really bring down everything.
For starters, as an example, I have 100 times as many servers as I have desktops to deal with - for a lot of us Xorg is not an important factor. But the process manager is vital to all of them - server and desktop alike - if you want to keep them running. If the process manager fails, it doesn't matter that it wasn't Xorg that took things down.
Secondly, that X clients fail if the server fails is not a good argument for moving Xorg into pid 1 too, because it would not solve anything. If pid 1 crashes, you're out of luck - the best case fallback is to try to trigger a reboot.
Having (at least minimal) process management in pid 1 on the other hand serves the specific purpose of always retaining the ability to respawn required services - including X if needed. (Note that it is certainly not necessary to have as complicated respawn capabilities in pid 1 as Systemd does).
Having Xorg in pid 1 would not serve a comparable purpose at all: if it crashes, the process manager can respawn Xorg. If you then need to respawn X clients, and be able to recover from an Xorg crash, there are a number of ways to achieve that which can work fine as long as your process manager survives, including running the clients under a process manager, and have them interface with X via a solution like Xpra, or write an Xlib replacement to do state tracking in the client and allow for reconnects to the X server.
Desktop recoverability is also a lot less important for most people: every one of our desktops has a human in front of it when it needs to be usable. Most of them are also rebooted regularly in "controlled" ways. Most applications running on them get restarted regularly. People see my usage as a bit weird when I keep my terminals and browsers open for a month or two at a time.
On the other hand, our servers are in separate data centres and need to be available 24x7; many have not been rebooted for years, and outside of Android and various embedded systems, this is where you find most Linux installs.
While we can remote-reboot or power-cycle most of them, with enough machines there is a substantial likelihood of complications if you reboot or (shudder) power-cycle (last time we lost power to a rack, we lost 8 drives when it was restarted). Even with "just" reboots there is a substantial chance of problems that require manual intervention to get the server functional again (disk checks running into problems, human error the last time something was updated, etc.)
That makes it a big deal to increase the odds of the machines being resilient against becoming totally non-responsive.
It is all-or-nothing, whereas if you could gradually replace the old sysvinit/policykit/consolekit/etc. stuff with systemd/logind then problems during that transition could be debugged more easily. You could also choose to not replace some components where the systemd/non-systemd replacement is broken.
The author is not making any mistake at all, or no more so than you are.
I'm sure you both value engineering principles like separation of concerns and a single source of truth.
The author believes that by removing the redundancy between initd / xinetd / supervisord / syslog the system is improved.
You disagree, and believe that these are separate concerns.
That's fine, you have different values / judgements in this matter. But saying he's `mistaken` for not agreeing with you is childish.
The dead simple rc.conf file seems so much nicer than the stuff I was dealing with across the entire world of Linux-based systems; it's like going back to the way Arch used to be, when I really liked it.
With FreeBSD, my impression is that manual shell scripting is still the norm. Integrating RCTL (FreeBSD's resource-limiting facility) with service management basically consists of manually writing in a bunch of imperative calls to RCTL into scripts. There's no way to configure services with limits declaratively, ensure the right thing happens when services are started/stopped, etc., precisely because there's no integration between the RCTL facility and the process-management or init facilities. Or at least I haven't found a way. The closest is that if you need such integration only for jails, you do have the option of third-party "monolithic" management systems, such as CBSD.
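To illustrate what that manual integration looks like, here is a hypothetical rc.d fragment; the daemon name, pid-file path, and limit value are all made up for illustration, and `start_postcmd` is the stock rc.subr hook it would plug into:

```sh
# /usr/local/etc/rc.d/mydaemon (fragment, hypothetical)
start_postcmd="mydaemon_limits"

mydaemon_limits()
{
	# rctl rules are keyed to a pid, so the limit has to be
	# re-applied imperatively after every start -- there is no
	# declarative tie-in between the service and its limits
	rctl -a process:$(cat /var/run/mydaemon.pid):memoryuse:deny=512m
}
```

This is exactly the kind of glue that declarative, integrated resource limits (as in systemd unit files) make unnecessary.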
I will also add that, in my experience, managing disparate platforms is never the reality. There are perhaps two core platforms at a company, and they are migrated together in blocks, all at once. For us, we have a couple of legacy Ubuntu machines that are being canned this month. Everything else is Windows 2012 R2 and FreeBSD 10.
The "systemd way" is to provide a monolithic abstraction over many things with a DBus API. It's the equivalent of adding WMI and a registry to a Unix platform i.e. it's against the fundamental tenets of the operating system. Having managed windows systems for years, this is really not something I want to see. Time will tell, but if I'm not right about that then I'll eat all three of my hats.
And yes I have experience with systemd as well through evaluation of RHEL7. Within two hours, I'd hit a wall with timedatectl enabling NTP on the machine. The steps to debug the mess were horrible and the issue eventually just spontaneously disappeared.
That's reminiscent of the stateful nature of windows which brings back many years of pain in the 1990's and 00's for me.
Isn't there a bunch of fine print that goes along with that?
How? The installer doesn't even work.
Also, I care about being able to use my computer, and for the first time in 15 years a systemd update caused my computer to needlessly drop into systemd emergency mode at boot. Since this emergency mode was broken, I was effectively locked out of my computer: an optional external USB drive that had been defined in fstab with no issues for a couple of years now required a nofail option. Now consider that this computer is located in a remote location, 1000 km from where I live.
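For reference, the mount option in question is a one-word change in fstab; a sketch (the UUID and mount point are placeholders):

```
# /etc/fstab -- "nofail" marks the mount as optional, so a missing
# disk no longer aborts the boot into emergency mode; the timeout
# option shortens how long systemd waits for the device to appear
UUID=0000-0000  /mnt/external  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```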
To me, systemd has already caused far more very real problems that I care a great deal about than it has solved; reducing boot time by a few seconds is not something I care that much about.
I just want to get shit done and solve problems and anything that risks that gets outed now.
FreeBSD hits the sweet spot, probably followed by NetBSD.
It's pretty clear how that's going to shake out, isn't it? Google is pretty much a non-issue here; yes, Android and ChromeOS use a Linux kernel base, but they have no impact on any mainline distros, and there's no indication Google wants them to. So it reduces down to two parties fighting for control: Canonical and Red Hat. And Red Hat is going to win. Canonical doesn't have the resources to go its own way on more than a handful of fronts (this is why when Debian switched to systemd Upstart was killed off; Canonical is far too reliant on Debian as an upstream to fight every issue), and their requirement for a CLA to accept anyone else's code means they are entirely reliant on their own coders, as nobody wants to sign Canonical's CLAs. We'll see how long they can stick it out on Mir, but they don't have the resources to fight a war with Red Hat on two fronts, so that's the only issue I expect to see them fighting over.
Creeping up on their arses is Microsoft (again) with Azure and incredibly cheap commercial offerings.
For my part, I agree that binary logs were not necessary, though I've yet to encounter any issues with them, and journald certainly does provide a lot of functionality that makes it more pleasant to deal with logs than before. All of that could have been achieved while retaining text logs, though. But at the same time, it is still trivial to get text log files by telling journald to forward to syslog, if that matters to you.
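That forwarding is a one-line setting (assuming a classic syslog daemon is installed and running to receive the messages):

```ini
# /etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
```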
Other things I do care about include getting rid of init scripts - that is a persistent source of problems. I'm inclined to believe not a single one of them is bug-free, though that's probably a bit uncharitable. Unit files help. So does cgroup containment, to rid us of the abomination that is the need to rely on pid-files and hope that works reliably (it doesn't, since pretty much nobody is thorough enough when writing init scripts). Other things include better recoverability in cases where critical processes get killed, and well-thought-out handling of early-stage logging. And things like systemd-cgtop and systemd-cgls are nice.
I'm sure we'll eventually get solutions that split more of this functionality out into more cleanly separate components, and that'll be great, but until then I'm happy to stay with systemd.
As for the problems you ran into, that sucks, but any large change like this will have painful teething problems and they're not a good basis for judging whether it's a good long term solution - I've had plenty of boot failures caused by problems with init scripts as well.
Boot time is a long way down the list of benefits for me too - most of our servers have uptimes measured in years, and even my home laptop usually goes a month or two between reboots.
It would be a step backwards: it is simpler, and does less stuff, so booting would be slower and some features are missing.
My BSD systems (not front-facing and therefore on a lesser patch cycle) rarely get rebooted and neither do the processes so this is indeed moot for me.
Yes that's a memcached uptime on a host that has had 10,185,367,932 cache hits...
OpenBSD however works wonderfully.
Also, there are some peculiarities in the way LSB init script compatibility is implemented in systemd: it tries to be 'smart' and remembers their state.
So you start an init script, and it fails for some reason, perhaps even exits with an error code; perhaps you are still developing that init script.
Now, after fixing the problem, running the init script via systemctl start doesn't even try to run the script, because systemd thinks it is already running. You first have to tell it to stop (which fails), and only then can you run it again.
My BSD systems boot quickly enough for me.
It's pretty dumb, and not enough of a problem for me that I'd figure out how to work around it, but it's a pretty good example.
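If anyone else hits this, the workaround is to clear the remembered state by hand; a sketch, where the unit name is a placeholder (`systemctl reset-failed` also exists for units systemd has marked as failed):

```sh
systemctl stop myscript         # fails, but clears the "running" state
systemctl start myscript        # now actually runs the script again
# for units in the "failed" state, this resets them directly:
systemctl reset-failed myscript
```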
My experience is that a substantial amount of time is wasted weeding out undesired timeouts in startup scripts, because they lead to increasing downtime.
We always need strong alternatives, even if they face the risk of being taken as simply a political statement, the effects of the statement will be seen in the decision-making down the road.
There are some real issues being pointed out (particularly regarding monolithic design) but no-one has attempted to actually fix that in any way (in code, that is).
While it is unlikely that I will end up using uselessd (unless it "wins" in some way, e.g. in embedded space with uclibc and musl), I very much welcome the effort to bring out alternatives that address the same problems as systemd, yet trying to fix some of the issues there are.
A much better solution for the problem of user-facing applications (e.g. "desktop environment" software) depending on systemd's public dbus interfaces is to provide a fake service that gives them fake data - the same way you would sandbox Android apps for privacy by giving them a fake Contacts list, etc.
As for the other main "public interface" of systemd that things are starting to depend on, the systemd service file format, it would be easy to add support for this file format to any other process supervision system.
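To illustrate why supporting the unit file format elsewhere is plausible: the files are INI-like, so even the standard library gets you most of the way. A minimal sketch (the `parse_unit` helper and the demo unit are invented here; real unit files also allow repeated keys, line continuations, and specifiers, which this does not handle):

```python
import configparser

def parse_unit(text):
    """Pull out the fields a simple supervisor would need from a
    systemd-style unit file.  Sketch only -- see caveats above."""
    cp = configparser.ConfigParser(strict=False, interpolation=None)
    cp.optionxform = str            # unit file keys are case-sensitive
    cp.read_string(text)
    svc = cp["Service"]
    return {
        "exec_start": svc.get("ExecStart"),
        "restart": svc.get("Restart", "no"),
        "after": cp["Unit"].get("After", "").split(),
    }

unit = """\
[Unit]
Description=Demo daemon
After=network.target

[Service]
ExecStart=/usr/sbin/demod --foreground
Restart=on-failure
"""
```

A supervisor that read only ExecStart, Restart, and After from existing unit files would already cover a large fraction of real-world units.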
At the moment, yes. We do keep much of the internal systemd architecture intact, but we do eventually aim to partially decouple it, or at the very least expand the breadth of configure prefixes for tuning its behavior. We are a pretty early-stage project, after all.
Indeed, the systembsd and systemd-shim projects are working on the D-Bus interface reimplementation part.
Our goal right now is to be a minimal systemd base that can be plugged in interchangeably and have the vast majority of unit options be respected.
There already are systems that offer primitives to reuse systemd units. nosh is one of them, and there also exist scripts that can convert systemd services to SysV initscripts, and even the opposite (dshimv).
You've slain the straw man... ;-)
systemd doesn't put everything in pid 1. It defines some mechanisms to orchestrate the whole thing that include pid 1.
All of the existing mechanisms are also a "system" that comprises a ton of processes... If systemd is monolithic on these grounds, then so are they.
> What matters is that it has a monolithic architecture, whereby breakage in any one part or their communication channels can bring down the whole system.
Uh-huh... I think you are speaking to branding more than technology. Keep in mind that systemd is using existing components in much the same fashion they were already being used (hence the accusations about them "absorbing" udev).
If you look at the architecture, it has got very clear points of encapsulation that are much more structured than the loosey-goosey stuff that came before it.
> This is not just a theoretical concern; it has REPEATEDLY happened.
Yeah... with existing systems. There's any number of points of failure that are the stuff of legends in Unix system administration. Obviously, it will take time to get systemd thoroughly cleaned up, but it's not hard to look at the design and see how it provides plumbing to simplify and avoid a whole host of these scenarios.
With systemd on the other hand, all of the components under the systemd banner are tightly interconnected and communicating. In particular pid 1 has ongoing communication with multiple other components, and misbehavior from them can, both in theory and in practice, deadlock the whole system. In case you missed it, this is roughly what "monolithic architecture" means: even though the components are modular, they're designed for use in a tightly interwoven manner that's fragile. It's completely the opposite type of "monolithic" from the kernel, which has everything running in one address space, but with architectural modularity, where interdependency between components is kept fairly low.
You mean like how, if even one of my SysV init startup scripts hung indefinitely, all subsequent components would never get started? Or are you referring to how the whole system would hang when the root filesystem device was temporarily unmounted (really fun with network filesystems, although to be fair, NFS implementations eventually became robust enough that this wouldn't be a complete disaster)? Or are you referring to fork bombs, or those race conditions you mentioned, that would bring my system to a complete standstill? Or are you referring to how a race condition with date formatting in syslog actually hung my entire system time and again? Or perhaps you mean how a lot of init scripts had little (if any) retry logic, such that you'd often end up with a critical component of your system not running - often in ways where you'd not find out about it, or worse still, not be able to do anything about it without some really intrusive intervention? Or maybe you are referring to how, if you got your init startup order wrong for one of many critical components, you'd have a deadlock before you ever got a chance to actually fail? Or maybe you're referring to how the right kind of getty failure, triggered by a weird byte in a config file, could turn your system into a paperweight?
It's so hard to tell which scenario you are referring to. ;-)
Then why can't it offer a stable interface that lets me swap out e.g. udev with eudev, like I could before?
That's what makes it monolithic - not the implementation details but the absence of well-defined interfaces between the pieces.
I'm not sure it can't.... To the extent it _doesn't_, I imagine it is not much of a priority, since eudev is a fork from udev, and is lacking the enhancements to udev the systemd project has been working on.
The scripts were buggy in such a way that starting the database would bring it up okay, but prevent the rest of the instances from starting. Also, using the "stop" directive would successfully stop the database... and all the others, as well.
The bug probably occurred because the init scripts were horrible to begin with and had been copied (ugh) to accommodate more instances, without the necessary modifications to not screw things up.
One of my "favourite" problems with init scripts for service stop/start is that way too many of them basically throws their hands up if the contents of the pid-file doesn't match what it expects. Never mind that 90% of the time when I want to actually run stop/start/restart, it is because something has crashed or is misbehaving, and there's a high likelihood the pid file does not reflect reality.
So a far too common scenario is: the process dies. You try to run "start". Nothing happens, because the pid-file exists and the script doesn't verify that the pid actually matches a running process (or it checks that it matches a running process, but not that the process with that pid is actually the one we want).
Ok, so we try "restart" or "stop". We get an error, because the pid-file content does not match a running process, and rather than then cleaning out the pid-file and starting the process, the script just bails.
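For what it's worth, the minimum a script should be checking is not hard to write. A sketch (the `pidfile_alive` helper is invented here; the name check via /proc is Linux-specific, and the whole thing is still racy against pid reuse, which is precisely why cgroup tracking is the better answer):

```python
import os

def pidfile_alive(path, expected_comm):
    """Return True only if the pid-file names a live process whose
    command name matches what we expect (guards against stale files
    and recycled pids).  Sketch only; still inherently racy."""
    try:
        pid = int(open(path).read().strip())
    except (OSError, ValueError):
        return False                      # missing or garbage pid-file
    try:
        os.kill(pid, 0)                   # signal 0: pure liveness probe
    except ProcessLookupError:
        return False                      # stale pid-file
    except PermissionError:
        pass                              # alive, just not ours to signal
    try:
        with open("/proc/%d/comm" % pid) as f:   # Linux-specific
            return f.read().strip() == expected_comm
    except OSError:
        return False
```

Most init scripts do the first check at best, and many don't even do that, which is how you end up in the stop/start dead-end described above.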
Basically I don't trust init scripts from anyone but distro maintainers themselves, and even then there are often plenty of edge cases that cause problems.
Regardless of the rest of systemd, I really like its solution to this: using cgroups to ensure it can keep proper track of exactly which processes belong to a service, without resorting to brittle pid-files, which seem to rarely be properly implemented. Of course that cgroups approach could be implemented as a separate tool, but pid-files badly need to die.
My take: containers are not well managed by general, daemon-oriented process supervisors with a localhost-oriented purview. However, those supervisors would do well to use container-related features to better secure and manage daemons as appropriate. In the future, processes will more likely be managed across clusters by parallel-capable supervisory systems with high-availability goals and knowledge of network infrastructure configuration, load, and topology. Fewer and fewer people will even see the init system, except perhaps behind a logo or as it flashes past while booting their device in debug mode.
(Edit: stumbled on http://www.gossamer-threads.com/lists/gentoo/user/284741 which explains the scenario .. would hate to be on BSD)
The main thing that scares me is the binary logging format. I can think of some benefits, but mostly it just seems scary. I guess I will go see later whether the benefits outweigh the rest.
In principle, everybody agrees on the need for a new and modern init system. But I'm not even sold on this issue yet. sysvinit is still holding its ground with extra tools and doing its job cleanly. By introducing a fully reimplemented and still controversial system with many dependencies, and with the need for many reimplementations in our existing software, we are not helping the issue but muddying the waters.
And what's the fascination with boot times?
Nowadays on desktops nobody boots; you boot once and hibernate/suspend forever. And for servers, if you are rebooting, you are doing something wrong. So redirecting effort from building controversial init systems to optimizing hibernate/suspend in the kernel would be better spent in this field.