The political significance is quite high, and I have to say I feel that this move, though perhaps a bit childish, is a valid signal to express grievances with the absolutist attitudes of the systemd developers. Plus let's be honest here, they aren't the pinnacles of mature behavior either. I think this is the first time a major project has done a statement like this against systemd. I would expect more to follow their example.
I of course summarized my issues in "Structural and semantic deficiencies in the systemd architecture for real-world service management" and hope to see systemd go the way of devfsd and HAL.
I disagree. My initial reaction was to lament the fact that BusyBox syslogd has lost the ability to be passed its socket as an open file descriptor. One may debate whether systemd's mechanism or UCSPI-UNIX (extended to datagram sockets) is the better way to pass such descriptors along. But it now has no mechanism. BusyBox syslogd is now less than it was.
Also, for busybox syslogd, it does not matter at all whether you can hold its socket. syslog isn't a reliable mechanism anyway, and busybox is light enough that the non-ready period is really short.
So, on the technical side, the impact is quite minimal.
On the political side, however, it looks like Denys made a splash, and I'm not going to complain about it. :)
The significant technical aspect is, as I mentioned, that now there's no mechanism to pass an already-open socket to the daemon. nosh has several service bundles supplied out of the box for providing syslog service. They provide the various combinations of two different syslog tools over /run/log, UDP port 514, and /run/systemd/journal/syslog. Each operates in the UCSPI(-like) way, opening the datagram socket specific to the service being run, dropping privileges (in the syslog-read case), and then invoking the daemon program. All sorts of fairly obvious (to those familiar with the daemontools way of doing things) consequences ensue, like separated streams for local clients and remote clients, and control of whether and whose remote client service is provided that is as straightforward as taking the individual services up and down.
Rainer Gerhards' rsyslogd and the nosh toolset's own syslog-read both support this. The BusyBox syslogd used to be usable in this way as well. udp-socket-listen and local-datagram-socket-listen have a --systemd-compatibility option that would have interoperated quite happily with the BusyBox code as it was.
But thanks to BusyBox syslogd now being less than it was before, the systemd compatibility won't work and there's no mechanism for this in BusyBox syslogd. In taking a sideswipe at systemd people, Denys Vlasenko has made BusyBox less operable with other systems that are not systemd. That's a shame, in my view.
I don't develop embedded systems, but systemd is actually popular in that space because of its watchdog capabilities and its inclusion in projects like GenIVI and Tizen.
More detail is in this post from an embedded systems developer:
GEOM, cvsup, the proliferation of port managers, the extraction of perl and gcc from the base system and the clang migration, and the big ones, NetBSD and OpenBSD--all represented wrenching progress. *BSD has its own set of people throwing temper tantrums at various points, as well.
A software project this large, complex, controversial and coupled that wants to be PID 1? Absolutely no way.
People making this complaint don't seem to have any idea what is in init normally or why you might want to add more stuff there (for example, where are you going to manage cgroup trees for system processes from?)
So the same can be done on Linux. At least one system, OpenRC, supports this explicitly.
(I take it you haven't read the architectural critique? Most pro-systemd arguments fall flat on their face in any event.)
Legacy cgroup API is going away.
It is the same with the /usr merge. systemd is not forcing the change, but it is complying with the changes required and is getting blamed for it.
> systemd is NOT making the change to manage cgroups from PID1. Kernel is...
I've heard systemd proponents assert that both udev and the cgroup manager must live in either kernel space or in PID 1, because to do otherwise would expose systemd to races or something while PID 1 started udev and/or the cgroup manager.
These are also erroneous statements. It's rather important to correct such statements, as we're dealing with a (sadly) highly politicized technical topic.
Given that vezzy-fnord told you: 
> The kernel mandates a single [cgroup] writer.
you must have come to the understanding that the way cgmanager gets its work done on a systemd system is to pass control group management requests through to systemd's control group manager. Because the ultimate plan is to have a single CG manager, either cgmanager or systemd's control group manager handles the CG management requests; there's no other way it can work. Because the systemd project is heavily vertically integrated, the odds that cgmanager gets to use its own CG manager are near-zero.
> you are correct - I had conflated the PID1 argument ... because as far as I stand.. it doesnt make a difference to me.
Yep. Your arguments and assertions have been almost exclusively soft and non-technical. Here's some advice: When people are making comments about technical topics, don't join in the conversation unless your level of understanding on the topic is just about as deep as that of those who are speaking. 
> Probably another reason not be scared of systemd ;)
Given the timestamp on this comment, it seems unlikely that you've not had the opportunity to read my reply to one of your much earlier comments. Given that I lay out five solid non-fear-based arguments for why one might be worried about the systemd project, your assertion that I shouldn't be "scared of systemd" is extremely dismissive.
 Unless -of course- you're joining the conversation to learn more about the topic. In that case, refrain from making uninformed assertions, ask clarifying questions about things you are unsure about, and make the limits of your knowledge clear up front.
If your service manager process were to crash, what are you going to do about it?
If you restart the service manager, it won't know what state the system is in, which services are running and which are not, which services were running at the time when it crashed but then stopped just before it restarted, etc.
How are you going to do that in a race-free and reliable way that is actually better in practice than the alternative (reboot)?
And if your service manager is a single point of failure, it doesn't matter much which PID it's running as; it has to be perfectly reliable anyway (just like the kernel).
The idea that the service manager would not be able to know the system and service states is completely false. Solaris SMF is one design that does, via its configuration repository. Simpler designs can then deduce enough metadata from the persistent configuration in the supervisor tree. There are many possible approaches.
The idea that such fault recovery is implausible is a naive one that only someone unfamiliar with the research literature could espouse.
If we take your logic to its conclusion, we should just run everything in ring 0 with a single unisolated address space, because hey, anything can fail. Component modularization and communication boundary enforcement is the first step to fault isolation, which is the first step to fault tolerance.
Let's see... init(1) is apparently restarted automatically by the Solaris kernel, which is different from Linux: no automatic kernel panic.
* State File and Restartability
* Premature exit by init(1M) is handled as a special case by the kernel:
* init(1M) will be immediately re-executed, retaining its original PID. (PID
* 1 in the global zone.) To track the processes it has previously spawned,
* as well as other mutable state, init(1M) regularly updates a state file
* such that its subsequent invocations have knowledge of its various
* dependent processes and duties.
* Process Contracts
* We start svc.startd(1M) in a contract and transfer inherited contracts when
* restarting it. Everything else is started using the legacy contract
* template, and the created contracts are abandoned when they become empty.
If svc.startd(1) crashes, init(1) will restart it inside the existing contract of the crashed instance, so it can find its spawned services (in nested contracts), as well as its companion svc.configd(1).
Now during startup, svc.startd(1) calls ct_event_reset(3), and this is really the interesting bit here:
The ct_event_reset() function resets the location of the
listener to the beginning of the queue. This function can be
used to re-read events, or read events that were sent before
the event endpoint was opened. Informative and acknowledged
critical events, however, might have been removed from the
With any luck it will also handle the situation where a supervised process exits after the service manager crashes and before it is restarted, as the contract should buffer the event in the kernel until it is read.
Notably this is a Solaris specific kernel feature of the contract(4) filesystem; does Linux have anything equivalent in cgroups or somewhere?
The other SMF process, svc.configd, uses an SQLite database (actually 2, a persistent one and a tmpfs one for runtime state), so it's plausible that it's properly transactional.
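The repository idea itself is simple enough to sketch. This is not SMF's schema or API, just an illustration of why a transactional store lets a restarted manager recover a consistent view of service state:

```python
import sqlite3

def open_state(path=":memory:"):
    """Sketch of the repository idea: keep per-service state in a
    transactional store so a restarted manager can read back a
    consistent view. Schema and names are illustrative, not SMF's."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS services"
               " (name TEXT PRIMARY KEY, state TEXT, pid INTEGER)")
    return db

def record(db, name, state, pid=None):
    with db:  # one transaction per update; partial writes never land
        db.execute("INSERT OR REPLACE INTO services VALUES (?, ?, ?)",
                   (name, state, pid))

def recover(db):
    """What a freshly restarted manager would read on startup."""
    return {n: (s, p) for n, s, p
            in db.execute("SELECT name, state, pid FROM services")}
```

Pair that with a kernel facility like contracts (or process supervision per-service) to reconcile the recorded state against reality, and "the manager crashed, so all state is lost" stops being true.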
> If we take your logic to its conclusion, we should just run everything in ring 0 with a single unisolated address space, because hey, anything can fail.
That is an entirely erroneous extrapolation, as I never claimed any other single point of failure [in user-space] than the service manager.
If all of one's system and service management relies upon a system-wide software "bus", then another similar problem is what to do when one has restarted the "bus" broker service and it has lost track of all active clients and servers.
Related problems are what to do when one cannot shut down one's log daemon because the only way to reach its control interface is via a "bus" broker service, and the broker in turn relies upon logging being available until it is shut down. Again, this is an example of engineering tradeoffs. Choose one big centralized logger daemon for logging everything, and this complexity and interdependence is a consequence. A different design is to have multiple log daemons, independent of one another. With the cyclog@dbus service logging to /var/log and that log daemon's own and the service manager's log output being logged by a different daemon to /run/system-manager/log/, one can shut down the separate logging services at separate points in the shutdown procedure.
With the assistance of the SRC_kex.ext kernel extension, one re-establishes knowledge of all running services in a new service manager.
Or you make the other engineering tradeoffs.
That could very well be a solution to the problem, but perhaps not one that vezzy-fnord was hoping for.
I actually wanted to link the second of your linked comments but couldn't find it unfortunately.
the unix way is simplicity and transparency. systemd is complex and opaque.
it's ok to have systemd's goals, but an additional goal should be "not a huge monolith"
Systemd has issues, I'm sure, and I don't trust Poettering's software further than I can throw him, but not being 'unix'-y isn't a strike against it.
We all agree with you on the 'simple as possible' but you need to spend some effort on the 'no simpler' part.
God help the software industry if that is indeed the case (of course, it is not).
I know that PostgreSQL's and Erlang's documentation is really rather good. So go use PostgreSQL or Erlang for a slightly non-trivial project, then -now that you know about the topic that the docs cover- compare the quality of the systemd documentation to that of either project.
Pay special attention to the documentation provided to folks who want to understand the internals of systemd, Postgres, or Erlang. AIUI, systemd's internals documentation is woefully lacking, as has been repeated by everyone I've ever seen try to use said documentation.
> The documentation for systemd and its utilities is second to none.
What little I've seen of systemd's user/sysadmin documentation leads me to believe that it is okay. I also understand that documentation is often the least interesting part of any project, and often sorely neglected.
However. Everyone I've heard of that tests out the Systemd Cabal's claim that
"Systemd is not monolithic! Systemd is fully documented and modular, so any sufficiently skilled programmer can replace any and all parts of it with their own implementation."
by attempting to make a compatible reimplementation has failed at their task and reported that the internals documentation is woefully insufficient.
When you're writing software for general consumption, good user documentation is a requirement. After all, if no one can figure out how to use your system, "no one" will use it.
When you also claim that you go out of your way to provide enough documentation to allow others to understand the relevant parts of your internals, and be able to write compatible, independent implementations of your software, the quality of the documentation about your internals is now in scope for evaluation and criticism.
 I am very aware that this task is made harder by the fact that it is large and thankless. :)
Their response was that it was the only way for them to avoid tying their database to the systemd signaling lib.
This was countered by one of the systemd devs claiming they could just use a socket that systemd provides.
But when I poked at the documentation, the only place such an "option" was mentioned was at the bottom of the man page for said lib. And it was presented as a note on the internal workings of systemd.
And you will find warning after warning about not using systemd internals, as the devs reserve the right to change the behavior of those internals at any time.
With respect to socket activation, a pretty useful tutorial, published by the systemd author, can be found here: http://0pointer.de/blog/projects/socket-activated-containers...
The DBus API can be found here: http://www.freedesktop.org/wiki/Software/systemd/dbus/
You missed the point. I'll isolate each component for you:
"[A] database was being publicly shamed for producing a ... unit file ... [that used] a shell script [to start] the database[.]"
"[The database devs mentioned] that it was the only way for them to avoid tying their database to the systemd signaling lib."
"[O]ne of the systemd devs [mentioned] they could just use a socket that systemd provides."
"[But this] ... 'option' ... was presented [at the bottom of the man page for the systemd signalling lib that the database authors were trying to not use] as a note on the internal workings of systemd."
"[You] will find warning after warning about not using systemd internals, as the devs reserve the right to change the behavior of those internals at any time."
So, this "option" -as documented- is something that you cannot rely on, as it is subject to change at any time, without warning.
> With respect to socket activation, a pretty useful tutorial...
Tutorials are no substitute for documentation. Documentation describes the contracts that the software commits to. Tutorials can exploit edge cases and undocumented behaviors without warning. Moreover, if the docs say that the tutorial is demonstrating a feature that's subject to change at any time, you'd have to be a madman to rely on it.
> The DBus API can be found here...
If the database devs don't want to depend on the systemd signalling lib, I bet they really don't want to depend on DBus. This might come as a surprise to some, but many servers don't run a DBus daemon.
Socket activation doesn't have any systemd-based interface. You just get a file descriptor passed in the normal Unix way. The systemd library functions related to socket activation are utility functions for examining the inherited socket, but they are just wrappers for any other way you might do so.
You can configure daemons like nginx or PHP-FPM to use sockets inherited from systemd instead of their own, and it works fine. They don't have any specific support for systemd socket activation, nor do they need to. They can't even tell the difference between the systemd sockets and ones they'd get on a configuration reload.
Then -according to digi_owl's report- it sounds like the documentation for the signalling lib should be fixed.
> Internally, these functions send a single datagram with the state string as payload to the AF_UNIX socket referenced in the $NOTIFY_SOCKET environment variable. If the first character of $NOTIFY_SOCKET is "@", the string is understood as Linux abstract namespace socket. The datagram is accompanied by the process credentials of the sending service, using SCM_CREDENTIALS.
I can see how someone would be reluctant to rely on that, even given the interface promise and the nudging of the systemd developers. To be more consistent with what's a stable, public interface and the admonition to avoid internals, I would probably drop the word "internally."
Indeed, I've created a pull request:
However, even with your change, I still read that section as describing implementation detail that's not guaranteed to be stable. If that note describes a stable, documented protocol, a link back to the documentation of that protocol would be helpful and reassuring.
$ man none
No manual entry for none
 But that's okay. I strongly suspect that when most folks say "sysvinit", they really mean "sysvrc". Hell, I used to be one of those folks until a while back.
But then I am sitting here using a distro that boots by way of a couple of flat files. Frankly I kinda like it, but then I grew up fiddling with autoexec.bat...
I don't believe that's the case. The "service" command (which is part of the sysvinit-utils package, not Upstart) invokes either Upstart or SysV init as necessary, but Upstart itself has no awareness of the SysV init world. You couldn't have an Upstart service depend on a SysV init one, list SysV service status with Upstart, or enable a SysV service through Upstart.
In case you're curious, the wrapping on the systemd side is more comprehensive. SysV scripts appear as units, and systemd parses the semi-standard metadata at the top of most SysV init files to determine when the service should start if enabled (translating from the traditional run levels). As units, the SysV init scripts are possible to enable/disable, start/stop, and list using the standard systemd commands. They can also participate in dependency graphs alongside systemd units.
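That "semi-standard metadata" is the LSB comment block at the top of a SysV init script. A sketch of what extracting it amounts to (the `mydaemon` script and field values here are hypothetical, purely to show the format):

```python
import re

SAMPLE = """\
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO
"""

def parse_lsb_header(text):
    """Pull the key/value pairs out of the semi-standard LSB comment
    block at the top of a SysV init script -- the metadata that gets
    translated into unit dependencies and (via the Default-Start run
    levels) start-up ordering."""
    m = re.search(r"### BEGIN INIT INFO\n(.*?)### END INIT INFO",
                  text, re.S)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        kv = re.match(r"#\s*([\w-]+):\s*(.*)", line)
        if kv:
            fields[kv.group(1)] = kv.group(2).strip()
    return fields
```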
The discrediting debates, the labeling, the near-abuse and mockery of opponents, the appeals to authority and exaggerated consensus do not look accidental. This has all the markings of a sophisticated campaign.
For many this can be off-putting, but it's also a wake-up call about some naive ideas of how the world works. Money drives decisions. The cathedral and the bazaar as a metaphor has no meaning when the bazaar has spawned a billion-dollar cathedral in its midst, and, like the failure of communism in practice, this too could be anticipated. Money has its own agenda and always corrupts everything. And we all know this because it's part of our historical record and common sense. And there is nothing we can do about that, because in the real world words are meaningless. Individuals have zero power, groups have some power, but it's groups with money that hold the cards.
The Debian management tells users, not out of exasperation or frustration but out of arrogance, that they essentially don't matter, and that the only way they can have a voice is with code. This disrespect for users stands in contrast to a project that has none and thus the self-awareness to realise it is meaningless without them. This new-found arrogance sits uneasily with the ideals of the open source movement, but anyone who connects the dots will realise it's not an open source movement but a full-fledged corporate movement with the thinnest veneer of ideology that is not up for 'management'.
For the next generation, you cannot protect ideals if you enable and allow cathedrals to grow in your midst. For the current one, things do not fix themselves automatically: awareness of the power of money to influence outcomes, how to firewall it, how to ensure co-existence, sustenance and growth, how to reward projects and developers, and how to ensure you are not getting hijacked and sidelined by interested parties should be the 'community's' concern, and there is no organization in the current ecosystem that's not tainted by corporatism and opportunism.
> The discrediting debates, labeling near abuse and mockery of opponents, appeals to authority and exaggerated consensus do not look accidental. This has all the markings of a sophisticated campaign.
Sophisticated? That's pretty laughable. To most outside observers this looks like monkeys flinging poo.
In addition, you make a serious error in your assumptions:
Popular != technically correct
Unpopular != technically incorrect
RedHat did not just magically have an epiphany and decide to shove systemd down on everybody. They started by using "upstart". Remember how everybody went apeshit over that? Sounds a whole lot like the current debate doesn't it? After using upstart for a while, folks at RedHat decided that wasn't sufficient and they needed a replacement for that.
People seem to think that just magically RedHat decided to rip things up for no reason at all. That's patently false--RedHat has some issues that need to be solved and systemd solves those. If you want to beat systemd, you're going to have to give RedHat something which works better.
After their experience with upstart, it's no surprise that RedHat just decided to ignore the community. It was clear that no solution was ever going to be accepted.
As I'm sure you know, systemd grew out of Red Hat, and Debian, like most other Linux distributions, has adopted it for lack of a better alternative. There is no grand scheme to it other than getting rid of SysV.
In a few years' time, uselessd, nosh, dmd and the many other systemd-lookalikes in development will have matured, and there will once again be diversity in init land.
And replace "systemd-lookalikes" with "service and system management systems" while you are at it. None of those projects is aimed at looking like systemd, and it would do them a disservice to miscategorize them as such.
nosh 1.22 is about to be announced.
Can we get a bit of context on that?
"I'm not willing to merge something where the maintainer is known
to not care about bugs and regressions and then forces people in other
projects to fix their project."
1. A system hang due to an assertion in journald spamming kmsg.
2. The question of whether systemd should be logging to kmsg or doing anything based on the "debug" kernel command line argument.
Linus got pissed because, based on Steven Rostedt's post to LKML and the referenced bug report, it looked like Kay was refusing to fix issue #1, but that bug had already been addressed and a fix committed to systemd. In the aftermath there was also a flamewar over issue #2, which is what Lennart is responding to in that G+ post.
I mean, rate limiting stuff that comes from anywhere is needed, not just because it's systemd doing it.
I mean, if I could write a userspace tool that crashes the kernel... that's somewhat awful.
RedHat had enough, and finally jammed systemd down on the Linux ecosystem.
This had two effects:
1) Technical: it exposed a LOT of shortcomings in the architecture of Linux for operating on modern systems. The whole "This belongs in PID 1! No it doesn't!" stems from there not being good, correct, and obvious ways of accomplishing the tasks that need to be done.
2) Political: it pissed off everybody from Linus down who actually thought they had power and control by demonstrating forcibly that their opinions don't really matter.
As for the technical issues, I'm not terribly sympathetic. Somebody needed to fix this, and it was painfully clear that nobody was ever going to get consensus on this. In addition, Linus blocks a LOT of stuff attempting to evolve the kernel in directions that are improvements, but that he doesn't like or doesn't understand. For example, Linus didn't handle the ARM board issues very proactively. He basically ignored things until they got untenable and then finally blew up at the ARM guys and threatened to rip them out of the kernel. People had tried several times to get a modular configuration system put in place, but Linus never wanted to commit it as there "wasn't consensus". True, but once his pants caught on fire, he didn't give a shit about consensus anymore.
As for political issues, I have even less sympathy, as RedHat simply did what Linus has been doing for years and imposed their will on the ecosystem by force. Enlightened dictatorship is a wonderful governmental mode, until you're no longer the dictator. Perhaps if Linus had figured out how to actually build consensus, I would find his words less hypocritical and more deserving of consideration.
The BSD's are no strangers to controversy. The whole existence of OpenBSD and NetBSD is a tribute to that. Even with FreeBSD, the GEOM subsystem changeover caused great friction. The difference is that Poul-Henning Kamp did the work and got a significant level of consensus from the BSD leadership--but it was by no means unanimous and sometimes acrimonious.
the well-publicized incident to which you're referring had systemd introduce something that broke userland, then instead of fixing it in systemd code, proposed a kernel change for it. this is, obviously, bad practice.
it would be one thing if the systemd code exposed a vulnerability or inefficiency in the linux kernel. but fixing your bad code by modifying the kernel is like trying to modify a popular compiler because some bad code you wrote isn't doing what you want it to.
I examined that thread, and the upshot seems to be that the kernel wasn't rate limiting logging so userland could break the kernel. And Linus threw a hissy fit instead of actually examining the situation. The kernel not rate limiting is a fault in the kernel. The fact that a user program exposes it does not mean that the user program is the one in the wrong. And Sievers basically said: "Not rate limiting logging is a kernel problem. We're going to make you fix it." So, basically he had the temerity to both A) pull a political power play and B) be technically correct--and it pissed Linus off ferociously.
Linus, however, is fighting with the 800lb gorilla. That's not a good position to be in. Linus needs the RedHat programmers more than the RedHat programmers need Linus. They'll keep Linus around to avoid the bad PR that would come with a fork, but, if he gets in the way, they'll just cut him out of the loop.
You really, really, really need to take ten minutes or so and read the whole thread. If you've not the time for that, then the single message linked here provides a decent amount of context: https://news.ycombinator.com/item?id=10484317
Linus' first reaction to systemd accidentally flooding the kmsg queue was to make an ad hominem flame directly at another developer when the kernel was responsible for at least half the problem (lack of rate limiting).
Most people would qualify that as "throwing a hissy fit".
Are you talking about this?
"Key, I'm fcking tired of the fact that you don't fix problems in the code you* write, so that the kernel then has to work around the problems you cause."
If you are, then you have to know that he said that because Sievers has a history of aggressively refusing to own up to (and fix(!)) the bugs he introduces into his code.  Contrary to popular understanding, Torvalds doesn't dress down people unless they've demonstrated that they should know better and continue to fail to meet their demonstrated potential.
Most people call that sort of management style "stern" and "meritocratic".
Indeed, it is this behavior that led to Sievers's Linux kernel commit bit being unset. Because of this, Greg KH had to become the front-man and shepherd for the kdbus project.
Wikipedia has more info:
Another part of the reason is that Red Hat forcibly landed systemd in Fedora and then RHEL7, and RH is an elephant on the scale of Linux development.
Just seems like if it's as bad as so many say, it just doesn't make sense for all these distros to blindly go along. Even the GNOME lockin doesn't seem like it'd explain it.
Most RPM-based distros (openSUSE included) tend to follow RH's direction, so that's not surprising. Besides, SUSE has always been enthusiastic about most desktop efforts.
Arch Linux has at least two systemd developers on its team (Tom Gundersen and Dave Reisner). In fact, it was tomegun who wrote the Arch migration rationale: https://bbs.archlinux.org/viewtopic.php?pid=1149530#p1149530, based on the usual fallacious arguments (http://judecnelson.blogspot.com/2014/09/systemd-biggest-fall...).
Also, distros have blindly gone along with other bad ideas before. The most prominent examples were HALd and LSB-style init scripts.
In fact, I'm not sure why you're at all surprised. Large groups of people in real life have collectively made far, far more catastrophic decisions. Why would a bunch of distribution maintainers adopting a piece of software be so shocking?
When there were problems with a desktop manager revamp or some crashy audio daemon, you could dismiss certain choices as irrelevant or lazy. Now the entire ecosystem's guts are being rewritten by RedHat for RedHat, and the developer community is simply going along because, in practice, they have no other choice. Nobody else has the appetite, the manpower or the political weight to put together a competing project with any chance of success.
When the chips are down, Red Hat owns Linux more than we like to admit.
Thing is, though, that while freedesktop.org is presented as being about cooperation and compatibility between desktop environments, a very large portion of what happens there is dictated by Gnome.
And Gnome is yet another project that is independent on paper, but with big RH contributions in terms of programming manpower (it's the primary DE of both Fedora and RHEL).
Probably the worst part is that none of this is planned; there is likely no grand conspiracy. It's just that so many of the people involved walk the same halls, share the same cafeteria tables, and sit in on the same meetings that an internal consensus ends up forming about what is the "right" way to do things.
This. The whole Linux community seems to be paranoid. It has to be a grand scheme, a plot to take over Linux, a conspiracy to kill "the UNIX way". Maybe it's just a group of guys solving their problems.
I don't see how any of this is really Red Hat's "fault". Seriously. They develop a piece of software which solves their problems. They also incorporate projects they already maintain to improve functionality and make developing things easier. They're their projects; they're allowed to do that. The code is still open. It's not like everybody (Debian, Ubuntu, SUSE, etc) didn't have the ability to fork udev.
Yet all these projects chose not to fork the projects incorporated by systemd. Reason (1) might be that it solves problems for them as well. There's no need to fork something that works perfectly for you. Reason (2) might be that they don't have the resources. But that's a declaration of bankruptcy for the whole "Linux is built by community volunteers" thing. The people advising against "corporate influence" are not able to do anything of value in modern computing anymore (I'm well aware this is an over-simplification, please bear with me). Requirements for software have changed. It's not 1970 anymore. And while the "old-school" users and devs are complaining about systemd & co, the "youngsters" are busy changing the ecosystem.
May well be the case.
Somewhere GregKH mentioned how the pace of kernel development had changed, with git being a major part of it.
It may well be that the pace has now gotten to the point where hobbyists have a hard time keeping up, as they have things they need to do besides stare at code all day.
This is nobody's fault, but it's not healthy in the long run.
Totally agree. The needs of modern computing (server-, desktop- and security-wise) have drastically increased IMHO. You need to read so much documentation and existing code before you can even start coding. That's just not feasible in your free time after work if you have an active social life or another computer-unrelated hobby.
Although I have to admit I might be biased about this topic. I'm contributing to a FOSS project which got a lot of flak for trying to raise money for employees in a ... well, more aggressive way. While not necessarily agreeing with everything that happened, I do believe you need to work full-time on modern FOSS projects to push ahead.
This is so not true that it is bordering on FUD. Systemd's ascendancy depended on Debian holding a fairly democratic, open, and fiercely fought battle between Upstart and systemd. Everyone knew that Ubuntu (which is pretty much installed de facto on every laptop sold in Asia) would adopt systemd based on Debian's decision.
It was an argument that went on for a year. Read it for yourself if you want - https://bugs.debian.org/727708
Not only did Debian make the decision to support systemd, it voted to NOT support other init systems. Mark Shuttleworth made his announcement the day after (http://www.markshuttleworth.com/archives/1316 ).
It is an interesting position to take by painting systemd as the lackey of a capitalist monopoly trying to take over the world. I can see how that can get a lot of mindshare. The truth is far simpler - systemd is far superior.
> Not only did Debian make the decision to support systemd, it voted to NOT support other init systems.
This is incorrect: systemd was merely voted in as the default init system; other init systems are explicitly supported. Here's the TC decision:
actually, I was referring to https://www.debian.org/vote/2014/vote_003 . But I see how my language could be interpreted that way.
Canonical's revenue is 67 million according to Wikipedia; by contrast, Red Hat's is 1.5 billion. That is more than twenty times as much (and even then it is a small fraction of Microsoft's revenue).
But that is a recent move, while RH has been in the Linux development effort for some time.
The fact of the matter is that "distros" like Debian etc. are dependent on upstream developers to provide new versions of their system. If a large portion of important subsystems is developed by Red Hat developers, then they will adopt that code and be driven in the direction upstream wants to go.
"Distros" do not develop; they package upstream into compiled binaries and add their configs and perhaps package management. In short, most distros will bend whichever way the upstream wind blows.
That's why I use a "distro" like Slackware or Crux that is built from scratch and not based on another system. What limited autonomy there is in Linux land is with the independent distros.
For instance, their main book uses eudev rather than udev, because they found the effort of extracting udev from the larger systemd project a right pain.
They do however maintain a parallel systemd book for anyone interested.
By comparison Debian just has enough resources for keeping Debian running. SUSE seems to be an afterthought at this point. Ubuntu seems to mostly be concerned with itself.
I think that's a touch unfair. The fact that Debian runs on the BeagleBone Black and Red Hat doesn't puts a bit of a lie to that statement.
In particular as Fedora already has ARM versions across the board.
And this is not about the init daemon; it is about systemd as a whole integrated suite of daemons that connect easily and add fancy new features. On the other side, you might have to modify two completely independent software projects to add a new feature that uses both, and even then there most likely are alternatives to each one, and your feature will only work with that specific combination. So a more monolithic system, which systemd is despite Lennart denying it, makes development faster, but will bite you in the long run, especially when the code and documentation quality is as poor as systemd's.
For example, isolating the /tmp of an application, monitoring the process and restarting if necessary, more logging options (and imho cleaner/more efficient).
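For what it's worth, those particular features each map to a one-line unit directive. A sketch of a hypothetical unit file (the service name and binary path are invented for illustration):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical example)
[Service]
ExecStart=/usr/sbin/mydaemon
PrivateTmp=yes        # give this service an isolated /tmp
Restart=on-failure    # supervise the process and restart it on crash
```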
Then again, all this has been debated over and over again. In practice, people are adopting it. Haters are noisy. It's like reading about debates on IPv6 migration strategies.
(Actually I think supervision goes back to IBM's SRC in at least 1992, but I'm not exactly sure if the earliest versions had anything beyond process management.)
At present multiple processes can set up and manage cgroups, but the maintainer wants to change to there being a single user space manager.
Thus systemd was pushed as "the" cgroups user space process.
You can already see the systemd devs behaving as if this were a done deal.
Ran into an email a while back about systemd clobbering a libvirt-managed cgroup. In it, Poettering "suggested" that libvirt should hand control over to systemd, as it would be THE cgroups manager going forward.
You can probably find similar encounters between, say, Docker and systemd.
It also looks a lot more like systemd in systems with good process supervision like Solaris.
Not at all. SMF supports delegated restarters, its milestones can contain far more state information (and explicitly) than targets can, it uses a configuration repository which enables runtime dynamic service state modification at a far higher rate than the relatively static systemd, its dependency system is simpler, it's integrated with the hardware fault manager (also to provide service resource identifiers beyond the COMM name), its profiles are more granular than presets, it has explicit service instance support and logging is flat file-based in /var/svc/log.
SMF has plenty of its own issues, but it's quite different from systemd. I think the delegation aspect is one of its most valuable lessons, but ultimately I also believe it's too overarching for a modern init.
Then again, systemd gobbled up other critical components such as udev, and there's also GNOME, which made it a strict requirement. Like a cancer it grows and takes over other components.
They began work on a logind compatibility layer over ConsoleKit2, they're writing a NetworkManager alternative, they directly influenced and are supporting a udev alternative called vdev (which also has libudev compatibility), and a host of other things.
For a rage-fork, it's pretty impressive. They're changing a lot. It's easier to just astroturf in the corner, though, I suppose.
Have you a notion as to why they're using vdev rather than eudev? What appears to be the vdev introductory blog post makes no mention of eudev.
And that stripping will be more complicated moving forward: as I recall, a recent systemd release moved various bits from udev to a new systemd lib, leaving the udev interfaces as stubs to be removed at some undetermined future date.
The rest seems to have been executive decisions (with or without a "deal with it" meme accompanying), often by people already involved with systemd development.
I definitely tend toward the curmudgeonly, but to this grumpy old man, it seems like we're replacing things simply for the sake of change.
"It's often said that a half-truth is worse than a lie. In the case of systemd, a half-improvement is worse than a complete flop. People are focusing on the features they want to use and ignoring the ones that will come back to bite them later.
No, my biggest problem with systemd is that it is repeating the mistakes of System V all over again. It's just given them a change of clothes to appeal to the 21st century."
read the rest at: http://www.steven-mcdonald.id.au/articles/systemd.shtml
for a debunking of the necessity of having journald, see http://lpar.ath0.com/2014/05/18/why-i-dont-like-systemd/
I think just as much of a stink comes with how you need to have systemd-the-init to use systemd-logind (replacing consolekit for session tracking) or any number of other possibly interesting, but tied to the hip of systemd, sub-projects.
Fortunately for me, I'm not one of them.
Then there are Slackware, Gentoo, and PCLinuxOS, which either did not bite the systemd bullet or offer an alternative option. Not sure about Ubuntu.
...well, kinda: Manjaro supports OpenRC, but the official, main installs are systemd. You can start with an official install and convert it, or you can use an OpenRC ISO, but those are not officially supported.
Also, most systems using BusyBox shouldn't be running systemd (or any other 'new' init system, to be honest).
I will use the Linux kernel (although it is not my favorite kernel).
But I am interested very little in GNU userlands and all the idiosyncrasies, complexity, and politics that come with them.
Whether systemd is a good thing or a bad thing depends ultimately on what you're trying to do and what sort of operating system you're using. For some people systemd is really helpful, for others it gets in the way and creates unnecessary complexity where it otherwise wouldn't exist.
I've been using systemd along with fleet on CoreOS, and it's fantastic for me, but I personally wouldn't want systemd on my desktop. The problem is that it has been forced onto people left, right, and centre when it doesn't suit their needs. That essential, reasonable debate never occurred, and those who are forced to either swap distro or 'convert' to systemd get upset. If there had been a reasonable debate, people would still hate on systemd, but at least the reasons for using it would be better known and the criticisms at least acknowledged.
I can't shake the suspicion that ever since gamergate, or perhaps even "for the lulz" anonymous, there have been people going around the net, latching onto anything vaguely controversial, and trying their best to stir up trouble by making threats and statements that are barely relevant to the topic.
Not all haters are trolls, of course, but it's really hard (impossible?) to tell over the internet (Poe's law), especially when you have haters, fanatics, and trolls all playing each other, and the only person who wins in that scenario is the troll.
Right from network interfaces that don't change names when you swap hardware (powered by systemd), to making it damn easy to file crash bugs (using coredumpctl), to checking what services have failed ("systemctl --failed"), to a more secure graphical desktop (rootless GNOME with systemd) - it makes for a better Linux.
Deploying web-services on systemd is so much better - think supervisord, but much more stable and robust. Even Docker machines using systemd are great (in fact it is a great way to explore systemd).
The biggest problem with systemd is its mediocrity. The journal is an interesting idea, but it's inflexible. Its insistence on JSON and internal formats means that it doesn't work with the rest of the extant logging infrastructure tools, so if you want to have a dedicated log server, you have to run essentially two syslogds. Its core-dump misfeatures make it harder to use extant access-control features to manage files. Its unit files don't allow for any sort of extensibility, so adding things like a configtest command for a unit is impossible. In short, it has oversimplified itself to the point where it adds a huge management load just to keep servers where they were.
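For reference, the "two syslogds" arrangement usually gets wired up through journald's forwarding option; a minimal config sketch:

```ini
# /etc/systemd/journald.conf (sketch): hand a copy of each message to
# a traditional syslog socket, so an existing syslogd can keep
# shipping logs to the dedicated log server.
[Journal]
ForwardToSyslog=yes
```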
I think it depends somewhat on how one defines server.
The traditional hardware on a shelf doing its designated task (I have seen these referred to as "pet" servers), not so much, no.
But they do seem to heavily support container and virtual machine based servers ("cattle", to go with the pet reference earlier).
I assume you're talking about udev's "Predictable Network Interface Names". 
That's a feature of udev, and -IIRC- has been around since shortly after udev's repo got merged into systemd's repo, but quite a bit before it became obvious that the Systemd Cabal wasn't going to put much -if any- particular effort into making it easy for folks to use udev without systemd.
While -due to inertia- I do use it on my systems, it doesn't provide value on systems that don't have more than one network interface of a given type. 
> ...to making it damn easy to file crash bugs...
I've been doing exactly this in KDE since... KDE 4.1? 4.2? So, this would be back in the ~2008->2009 time frame. I have fuzzy memories of a crash reporter in KDE 3, but -back then- I didn't build my systems with debugging info or frame pointers, so the backtraces were always useless and -thus- went unsubmitted.
> ...to checking what services have failed...
rc-status --all --crashed
> ...to a more secure graphical desktop...
I'm very glad that the Wayland folks are making good progress with their work.
> Deploying web-services on systemd is so much better...
Can you be specific here? There are a whole host of service supervision systems out there of varying quality and feature sets.
 After all, you can name your single wired ethernet NIC eth0, your single WiFi NIC wlan0, and your single USB-connected cellular radio usb0. That's far easier and more predictable than reading the output of lspci to figure out the name of your NIC. (If you get the urge to point out the cases in which the NIC-physical-path-dependent naming scheme does help, please carefully re-read the 'graph to which this footnote was attached.)
>Can you be specific here? There are a whole host of service supervision systems out there of varying quality and feature sets.
True - but that is a good question by itself, right? Why, when you already had sysv and Upstart and everything? Supervisord, which seems to be very popular, is very, very similar to systemd in its workings (same concept of unit files, declarative language, etc.). If you like supervisord, then systemd is just a short hop away.
OpenRC would do the same - but are you claiming that OpenRC is BETTER than systemd? Because systemd is making that claim.
What I'm failing to understand is how systemd is bad. It is making my linux machine extremely stable, and working with it (creating new unit files) is extremely intuitive. Where is it making everyone's life hard? The old way of "service nginx restart" still works. "dmesg" still works...
If I were making that claim, I would have made it.
OpenRC's sysvrc replacement is at least as good as systemd's sysvrc replacement. Moreover, I have much more faith in abilities and reasonableness of the OpenRC Cabal than I do those of the Systemd Cabal.
> True - but that is a good question by itself, right? Why...
I'll ask again. Can you get into specifics about why "[d]eploying web-services on systemd is so much better..."? All I'm getting from you is soundbites and equivocation.
> ...[systemd] is making my linux machine extremely stable...
In the ~20 years that I've been using Linux, I've never had instability introduced by an init or RC system. What init or RC-induced instability have you observed in your Linux systems?
> Supervisord ... [has the] same concept of unit files, declarative language, etc. ...
Does this mean that the meaning of the keywords and parameters in supervisord's config files is very close or identical to those in systemd? Or does it just mean that supervisord's configuration files are in .INI format, just like systemd's?
If the latter, then who cares?
1) Superficial syntax similarities do not necessarily enhance understanding. In cases where similar keywords mean differing things in two different systems, they can (and often do) cause confusion and misunderstanding.
2) Startup files for services whose startup sequence is the most complex you can handle in a systemd unit file are equally terse and readable in both systemd and OpenRC.  If the startup requirements are more complex than this, systemd has to call out to a shell script(!) or other external program. OpenRC (and other sysvrc replacements) that use interpreted startup scripts can bake such functionality right in to the startup script. This means that -in these systems- you only need to ship and maintain one file, rather than two. ;)
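To illustrate point (2), here is a sketch of an OpenRC service script for a hypothetical daemon (the name "mydaemon" and all its paths are invented): the pre-start shell logic lives in the same shipped file, where a systemd unit would need an ExecStartPre= pointing at a separate script.

```sh
#!/sbin/openrc-run
# Hypothetical OpenRC script for "mydaemon"; every name here is
# illustrative, not a real service.
command=/usr/sbin/mydaemon
command_args="--config /etc/mydaemon.conf"
pidfile=/run/mydaemon.pid

depend() {
    need net
}

start_pre() {
    # Arbitrary shell logic, baked right into the one shipped file.
    checkpath --directory --owner mydaemon:mydaemon /run/mydaemon
}
```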
> Where is [systemd] making everyone's life hard...
Systemd is worrisome for several reasons:
1) systemd's scope continues to creep.
2) The Systemd Cabal continues to assert that pretty much every part of systemd is optional. An honest look at the state of systemd and projects like Gnome puts the lie to that statement.
3) Systemd continues to assert that systemd is faster than anything out there. Real-world observation indicates that this means that they've never heard of -say- OpenRC.
4) The Systemd Cabal continues to assert that systemd is modular. They assert that anyone can read their documentation and reimplement any part of systemd. Many people have attempted to do this and found the documentation sorely lacking.
5) The Systemd Cabal aggressively refuses patches that fix breakage that they introduced by changing decades-old behavior for no better reason than "The behavior was legacy and thus broken.". 
In short, the attitudes of the people in charge of the project are dreadfully worrisome. Having udrepper in charge of glibc was bad enough. Systemd's devs are substantially more bullheaded, and the project itself is angling to swallow almost all of Linux userspace.
 See Nailer's representative systemd unit file at  and my conversion of it to an OpenRC startup script.  Also, notice the confusion that one has when one is not already familiar with the keywords contained in a systemd unit file. 
 Their attitude on such things has been summarized as "Fuck your usecase.".
I mean network interfaces that don't change names? I think in over 15 years of using Linux I have never had a problem with this, and if I did I'm guessing it would be a) obvious b) trivial to fix. Same with the other stuff.
To get these niche features we need to install a very complex, opaque, fragile and verbose set of tools that throw away most of what I've learned in my 15 years. Blah!
I have run into this. Two wired Ethernet NICs in a system. After a kernel upgrade, the module load order of each NIC got swapped, and the name of each NIC changed. Took me a while to track that one down. :P
For 99.9% of desktop users (and -I suspect- many servers), this doesn't ever matter, and the "predictable" names are substantially less predictable and discoverable than 'eth0' or 'wlan0' or whatever.
Additionally, if you move a NIC in your system to another expansion slot in the system, its name will change. So, there's that to remember about this particular scheme.
NOTE: I'm not trying to claim that it's not helpful! The "predictable network interface" naming scheme solves a real problem. It's just that it -like most things- creates a few unique problems of its own. ;)
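One workaround for both failure modes mentioned above (module load order and slot moves) is a udev rule that pins the name to the MAC address; a sketch, with a placeholder address, and a name chosen to stay out of the kernel's own ethX namespace:

```
# /etc/udev/rules.d/70-net-names.rules (hypothetical file name)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan0"
```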
Because while I'm sure in a competition of esoteric kernel knowledge I would lose to a great many people, I also simply don't care. systemd makes establishing my service start up dependencies amazingly simple. It makes daemon deployment simple. It simplifies a whole host of problems which are not cleanly solvable by other means. It handles process restarts, limits and a whole host of other things for me.
No one is bringing a superior solution to the table. Everyone is telling me daemon-tools and init scripts are "fine" (they're not).
What I don't get is why so many people are in this debate. It hardly matters to anyone what init system (or 'central management and configuration system') is used. It doesn't matter to users, they just want their linux systems to boot, and their confs to be easy to write. It doesn't matter to user space developers, it's just another system to implement support for, conveniently one that's used by most distros and as far as I can tell not terribly hard to grok.
FTR, I understand "linux" better than most and I'm very happy with systemd. So there. It's by no means perfect and could probably stand to be revisited architecture-/design-wise in a few years when there's even wider community experience with it -- but that can come as incremental improvements.
 Whether you meant the kernel or user-space. Both as a user, administrator and developer.
There are the really deeply embedded systems running on tiny processors, which probably only run a single binary and are never updated.
And then there are big systems (think cellphones, automotive infotainment, etc.) running on quad-core processors where you have dozens of processes, which might even be independently installed or updated. For those kinds of systems you really want a sophisticated init system, and the cost of systemd is probably minimal compared to the instances of chromium/webkit/blink that you might already have on your system.
One might think that the technical changes here are "completely insignificant", but in fact there has been quite a lot of interaction between systemd and the BusyBox world over the years. With those as context, the headlined patch appears in rather a different perspective. Some examples:
* Davide Cavalca's patches from 2011 adding socket inheritance to BusyBox's syslogd, some options to hwclock, and some units: http://lists.busybox.net/pipermail/busybox/2011-January/0743...
* Davide Cavalca's patches from 2012 disabling some log services in favour of the systemd journal: http://openbricks-commits.narkive.com/jCnYGx8H/r13814-busybo...
* Peter Korsgaard's patch to avahi for BusyBox-rootfs disabling things that uClibc did not have, from 2010: https://github.com/enclustra-bsp/busybox-rootfs/blob/4e302ac...
* Peter Seiderer's patch adding a PostgreSQL systemd unit to uClibc buildroot in 2014: http://lists.busybox.net/pipermail/buildroot/2014-May/097163... ( approved in 2015: http://git.buildroot.net/buildroot/commit/?id=828d7b2f0d2288... )
* Albert Antony's BusyBox patch from 2015 to add crond.service: http://permalink.gmane.org/gmane.comp.embedded.ptxdist.devel...
* Denys Vlasenko is the person removing the ability for syslogd to inherit its socket (via a systemd mechanism) in the headlined patch. Here is the very same person adding that same code in the first place, back in 2011: http://git.busybox.net/busybox/commit/?id=9b3b9790b32d440eb8...
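The code in that 2011 commit implemented systemd's fd-passing convention (the sd_listen_fds(3) protocol): the service manager opens the socket, passes it as file descriptor 3, and sets two environment variables that the daemon checks. The check amounts to the following sketch (shell used here purely for illustration; the BusyBox implementation was in C):

```shell
# Sketch of the LISTEN_FDS/LISTEN_PID check. The service manager is
# expected to pass the already-open socket as fd 3.
if [ "${LISTEN_PID:-}" = "$$" ] && [ "${LISTEN_FDS:-0}" -ge 1 ]; then
    echo "socket inherited on fd 3"
else
    echo "no inherited socket; daemon opens /dev/log itself"
fi
```

With the patch applied, the `else` branch is all that remains: the daemon always opens its socket itself.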
For the curious, here is the original commit comment:
> remove systemd support
> systemd people are not willing to play nice with the rest of the world.
> Therefore there is no reason for the rest of the world to cooperate with them.
> Signed-off-by: Denys Vlasenko <email@example.com>
* That's not what the removed code was doing at all.
* That's a 1980s idea of "normal". Forking is something that has been gradually disappearing as standard practice for daemons for the past 16 years, as can be seen from the large number of daemons that now have "don't fork" modes, as compared to the number in the middle 1990s. The idea that daemons fork as some sort of standard practice was the mainstream thinking then, but it is not now.
* Most programs in the wild do not correctly speak the forking readiness protocol, in part because those programs are not forking as a readiness protocol in the first place. Many people, in particular those involved in the Debian Technical Committee hoo-hah a while back, considered the opposite of what you claim to be actually true. There is no need for the flawed, bodged, and in practice broken forking readiness protocol when one has a proper readiness notification protocol. http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/un...
All right, what was the removed code doing?
> * That's a 1980s idea of "normal".
That’s why I called it “normal and traditional”.
> * Most programs in the wild do not correctly speak the forking readiness protocol
True enough. But for practical purposes, it’s mostly good enough. And I’d think that it doesn’t matter that much for people running BusyBox with sysvinit.
"But politics shouldn't be a factor in software design. We should cooperate to work with the technologically most useful solution and not let personal or ideological differences get in the way."
Bless your heart.
GNU/Linux projects have been designed for fighting, on socio-ethico-political grounds, against proprietary software - the first one being GNU (GNU's Not Unix).
On the other hand, BSD prefers to see itself as a community of pragmatism, creating value in business by sharing externalities (plus a bunch of fanatics who love nice code the way others love Ferraris).
The constant BSD criticism of Linux is that building tools for ideological reasons rarely favours the best solutions, because you give yourself an obligation to beat the time-to-market of "proprietary companies" and to add support for stuff that is not worth being shared. (Word/WYSIWYG editors are a terrible idea in the first place; why spend resources to give them more traction by helping to broaden the user base?)
It favours kludges and hacks instead of a consistent, simple design. It burns benevolent time and attention.
And BSD has been denouncing GNU/Linux projects (GCC, GNOME, systemd, binary blobs in Linux kernels) as long-term disasters, locking people into the technical debt of poor designs.
Actually, I am a Linux guy with BSD boxes, and had I not had problems with hardware support I would be fully on BSD.
I would say they have a point. And just for the record, the GNU foundation is not Linus Torvalds's best friend.
The first time I heard Moglen's talk, he was literally saying that Linux was a bad example of a free software project.
Yes, freedom of choice is political. But it is not a question of organisation; it is one of individuality.
Still, some communities aim to gather zealots more than masters of their domains. That is the distinction between Free Software and Open Source.
The reason the Linux community is so dysfunctional is because, for most people born during a certain time period, it's the first ever OS they use that isn't Windows, and the first ever Unix. Naturally this creates a lot of sudden revelations, and a lot of blowhards who think they're hot because they can rice their Arch Linux box. In the process a lot of false sense of technical prowess is generated.
Moreover, the network effects become so strong that at some point (which has already been crossed) Linux becomes the alternative OS, and from then on people feel like they can just ignore everyone else with impunity. They start to perceive themselves as the leaders, and everyone else must be biting their dust. Notice how Linux users often tend to be ignorant (and not only that, but resentful) of what BSD, Solaris, MINIX, Hurd and other folks are doing. Not the case with users of those other OSes, who as underdogs have more of a reason to cooperate and usually also have to study what the other is doing, especially so that Linux the big dog doesn't poorly reinvent some interface that ends up mutating across FOSS and leaving their access to portable software in the dust.
If, through some historical accident, 386BSD had made it out unfettered from the trademark-lawsuit fallout, it likely would have followed the same course. So would the Hurd.
It was fixed fast, but the PR damage lingered. (Calomniez, calomniez, il en restera toujours quelque chose - slander away, slander away, some of it will always stick.)
The BSD community, having been beaten early by the IP problems, has been more cautious since then. The Linux community, by contrast (becoming an official UNIX(c)(tm) in 1997 as it became POSIX compliant, and being artificially protected from IP problems), has been careless in disentangling itself from all the proprietary shit that IBM and the other big companies that wanted to kill the cost of maintaining their own OS have been putting into it. (The legal construct for protecting Linux from patent/IP problems involves a lot of big companies and complex clauses.)
POSIX may have followed IP law in the direction of bloated specifications.
Linux, without this compliance and the support of the big companies that saw it as a way to reduce their costs (RH/IBM/maya/Oracle), would not have been able to substitute itself for the other proprietary UNIXes in the realm of "professional IT", especially because the big vendors made a pax romana around Linux concerning patent claims when contributing to the OS.
But by mimicking, and being driven by, standards bodies and foundations whose main stakeholders are proprietary vendors (HW/SW...), Linux has become something of a proprietary software itself.
(Just look at who the main ISO/IETF/IEEE/POSIX contributors are nowadays, and at the members of the OSI/Linux Foundation.)
Those who control the API control the OS.
Because at its heart is a collection of tools that can be combined in whatever permutation that solves the task that the system user/admin has before him.