The technical aspect here is completely insignificant. All they did was remove a basic listener function that was already optional, used to communicate with systemd's socket activator without linking to libsystemd itself. And it seems only one Busybox daemon ever made use of it.
The political significance is quite high, and I have to say I feel that this move, though perhaps a bit childish, is a valid signal to express grievances with the absolutist attitudes of the systemd developers. Plus, let's be honest here, they aren't pinnacles of mature behavior either. I think this is the first time a major project has made a statement like this against systemd. I would expect more to follow their example.
I of course summarized my issues in "Structural and semantic deficiencies in the systemd architecture for real-world service management" [1] and hope to see systemd go the way of devfsd and HAL.
This is a lengthy write-up that does an excellent job of pointing out technical details regarding my top issue with systemd. That is, it just doesn't make any damn sense. Everything is confusing and buried underneath layer after layer of indirection and dependencies. You can't do anything without touching several files and multiple directories. Try as I might, I can't get on board with systemd. It doesn't in any way feel like it belongs in a Linux OS.
> The technical aspect here is completely insignificant.
I disagree. My initial reaction was to lament the fact that BusyBox syslogd has lost the ability to be passed its socket as an open file descriptor. One may debate whether systemd's mechanism or UCSPI-UNIX (extended to datagram sockets) is the better way to pass such descriptors along. But it now has no mechanism. BusyBox syslogd is now less than it was.
Honestly, the people who use Busybox are probably aware of the problems, political as well as technical, with systemd, and are most likely not using systemd in the first place, because it's not exactly a good choice for embedded systems.
Also, for busybox syslogd, it does not matter at all whether you can hold its socket open. syslog isn't a reliable mechanism anyway, and busybox is light enough that the non-ready period is really short.
So, on the technical side, the impact is quite minimal.
On the political side, however, it looks like Denys made a splash, and I'm not going to complain about it. :)
The significant technical aspect is, as I mentioned, that now there's no mechanism to pass an already-open socket to the daemon. nosh has several service bundles supplied out of the box for providing syslog service. They provide the various combinations of two different syslog tools over /run/log, UDP port 514, and /run/systemd/journal/syslog. Each operates in the UCSPI(-like) way, opening the datagram socket specific to the service being run, dropping privileges (in the syslog-read case), and then invoking the daemon program. All sorts of fairly obvious (to those familiar with the daemontools way of doing things) consequences ensue, like separated streams for local clients and remote clients, and control of whether and whose remote client service is provided that is as straightforward as taking the individual services up and down.
Rainer Gerhards' rsyslogd and the nosh toolset's own syslog-read both support this. The BusyBox syslogd used to be usable in this way, as well. udp-socket-listen and local-datagram-socket-listen have a --systemd-compatibility option that would have interoperated quite happily with the BusyBox code as it was.
But thanks to BusyBox syslogd now being less than it was before, the systemd compatibility won't work, and there's no mechanism for this in BusyBox syslogd at all. In taking a sideswipe at systemd people, Denys Vlasenko has made BusyBox less interoperable with other systems that are not systemd. That's a shame, in my view.
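To make the chain-loading pattern concrete: the listener program opens the socket, then execs the daemon with the descriptor already in place. Below is a minimal, illustrative sketch of the systemd-compatibility flavor of it (socket on fd 3, announced via LISTEN_FDS/LISTEN_PID, the same convention BusyBox syslogd used to honor). It is not the code of any of the tools named above:

```python
import os
import socket

def listen_then_exec(path: str, argv: list) -> None:
    """Bind an AF_UNIX datagram socket at `path`, then exec `argv`
    with the socket inherited on fd 3, announced systemd-style.
    A sketch of the chain-loading idea, not any tool's exact code."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    if os.path.exists(path):
        os.unlink(path)  # remove a stale socket from a previous run
    s.bind(path)
    # A privilege drop (setgid/setuid) would go here, after binding
    # but before handing control to the daemon.
    os.dup2(s.fileno(), 3)  # dup2 leaves the new fd inheritable
    os.environ["LISTEN_FDS"] = "1"
    os.environ["LISTEN_PID"] = str(os.getpid())  # exec keeps the PID
    os.execvp(argv[0], argv)  # never returns on success
```

Separated local and remote streams then fall out naturally: run one such listener per socket, each as its own supervised service.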
> it's not exactly a good choice for embedded systems.
I don't develop embedded systems, but systemd is actually popular in that space because of its watchdog capabilities and its inclusion in projects like GenIVI and Tizen.
I see nothing there that can't be achieved by a simple supervision suite. s6, nosh, even runit provide those features; the extra complexity of systemd isn't needed at all.
If even embedded developers are getting pulled in by the sirens of systemd because OH MY GOSH IT HAS WATCHDOG CAPABILITIES, then we definitely need to work more on raising awareness about supervision.
I really like systemd, because all the controversy made me look towards FreeBSD, and that has really been a great experience. No offense, but I can't stop thinking of the Linux community as a screaming child with a short attention span. The BSD community seems more like an old grey beard, sitting calmly in the corner solving problems in the best way possible. This may take some time, but the result is often superior.
As a FreeBSD guy, I'd like to claim smug, grey beard superiority, but there are more than enough counterexamples to poke holes in that attitude.
GEOM, cvsup, how many versions of port managers?, the perl and gcc extraction and clang migration in the base system, and the big ones, NetBSD and OpenBSD: all represented wrenching progress. *BSD has its own set of people throwing temper tantrums at various points, as well.
People making this complaint don't seem to have any idea what is in init normally or why you might want to add more stuff there (for example, where are you going to manage cgroup trees for system processes from?)
Yes, not charting out your module boundaries, and bundling system and service state together with parsing and cgroup management in the same process, is relatively unwise. Even Solaris SMF got it right by keeping init(8) small amidst the otherwise highly impressive feature set of the main service management, using contracts (the equivalent of cgroups, which they actually predate) outside PID 1.
So the same can be done on Linux. At least one system, OpenRC, has explicit support in such a manner.
(I take it you haven't read the architectural critique? Most pro-systemd arguments fall flat on their face in any event.)
systemd is NOT making the change to manage cgroups from PID1. Kernel is - systemd is just the first (and currently the only) one to comply with this change.
Legacy cgroup API is going away.
It is the same case for the /usr merge [1]. systemd is not forcing the change, but it is complying with the changes required and is getting blamed for it.
You are right, but the cgroup manager should still be something that starts very very early, so you can start components that make use of it. If literally everything your init system starts should automatically use cgroups, you have to start it with the init system or as the first thing the init system will start.
Remember that vezzy-fnord was responding to a comment that made the following erroneous statement:
> systemd is NOT making the change to manage cgroups from PID1. Kernel is...
I've heard systemd proponents assert that both udev and the cgroup manager must live in either kernel space or in PID 1, because to do otherwise would expose systemd to races or something while PID 1 started udev and/or the cgroup manager.
These are also erroneous statements. It's rather important to correct such statements, as we're dealing with a (sadly) highly politicized technical topic.
You are correct - I had conflated the PID1 argument with the cgroup daemon, because as far as I'm concerned... it doesn't make a difference to me.
There have been alternative managers like cgmanager [1] that lxc is bringing, which (surprisingly) work well with systemd as well [2]. Probably another reason not to be scared of systemd ;)
> There have been alternative managers like cgmanager [1] that lxc is bringing - which (surprisingly) work well with systemd as well...
Given that vezzy-fnord told you: [0]
> The kernel mandates a single [cgroup] writer.
you must have come to the understanding that the way cgmanager gets its work done on a systemd system is to pass control group management requests through to systemd's control group manager. Because the ultimate plan is to have a single CG manager, either cgmanager or systemd's control group manager must handle all of the CG management requests; there's no other way it can work. And because the systemd project is heavily vertically integrated, the odds that cgmanager gets to use its own CG manager there are near-zero.
> You are correct - I had conflated the PID1 argument ... because as far as I'm concerned... it doesn't make a difference to me.
Yep. Your arguments and assertions have been almost exclusively soft and non-technical. Here's some advice: when people are making comments about technical topics, don't join the conversation unless your level of understanding of the topic is just about as deep as that of those who are speaking. [1]
> Probably another reason not to be scared of systemd ;)
Given the timestamp on this comment, it seems unlikely that you've not had the opportunity to read my reply [2] to one of your much earlier comments. Given that I lay out five solid non-fear-based arguments for why one might be worried about the systemd project, your assertion that I shouldn't be "scared of systemd" is extremely dismissive.
[1] Unless -of course- you're joining the conversation to learn more about the topic. In that case, refrain from making uninformed assertions, ask clarifying questions about things you are unsure about, and make the limits of your knowledge clear up front.
This question has always appeared to me as academic, with little or no real-world relevance.
If your service manager process were to crash, what are you going to do about it?
If you restart the service manager, it won't know what state the system is in, which services are running and which are not, which services were running at the time when it crashed but then stopped just before it restarted, etc.
How are you going to do that in a race-free and reliable way that is actually better in practice than the alternative (reboot)?
And if your service manager is a single point of failure, it doesn't matter much which PID it's running as; it has to be perfectly reliable anyway (just like the kernel).
There's been a lot of research in fault recovery through message logging and checkpoint-based methods that could be applied here, e.g. [1]. Of course, you use "academic" as a snarl word, so I don't think anything will convince you.
The idea that the service manager would not be able to know the system and service states is completely false. Solaris SMF is a design that does know, via its use of the configuration repository. Simpler designs can then deduce enough metadata from the persistent configuration in the supervisor tree. There are many possible approaches.
The idea that such fault recovery is implausible is a naive one that only someone unfamiliar with the research literature could espouse.
If we take your logic to its conclusion, we should just run everything in ring 0 with a single unisolated address space, because hey, anything can fail. Component modularization and communication boundary enforcement is the first step to fault isolation, which is the first step to fault tolerance.
* State File and Restartability
* Premature exit by init(1M) is handled as a special case by the kernel:
* init(1M) will be immediately re-executed, retaining its original PID. (PID
* 1 in the global zone.) To track the processes it has previously spawned,
* as well as other mutable state, init(1M) regularly updates a state file
* such that its subsequent invocations have knowledge of its various
* dependent processes and duties.
Then init(1) and SMF's svc.startd(1) seem to have a bit of a relationship:
* Process Contracts
* We start svc.startd(1M) in a contract and transfer inherited contracts when
* restarting it. Everything else is started using the legacy contract
* template, and the created contracts are abandoned when they become empty.
So init(1) creates the initial contract for svc.startd(1), then the latter creates nested contracts below that. (Aside: doing the equivalent cgroup manipulation on Linux would run afoul of the notorious one-writer rule.)
If svc.startd(1) crashes, init(1) will restart it inside the existing contract of the crashed instance, so it can find its spawned services (in nested contracts), as well as its companion svc.configd(1).
Now during startup, svc.startd(1) calls ct_event_reset(3), and this is really the interesting bit here:
The ct_event_reset() function resets the location of the
listener to the beginning of the queue. This function can be
used to re-read events, or read events that were sent before
the event endpoint was opened. Informative and acknowledged
critical events, however, might have been removed from the
queue.
I'm willing to entertain the idea that with this feature, SMF can properly track the state of the services that its previous incarnation launched, even if it crashed in the middle of handling an event.
With any luck it will also handle the situation where a supervised process exits after the service manager crashes and before it is restarted, as the contract should buffer the event in the kernel until it is read.
Notably this is a Solaris specific kernel feature of the contract(4) filesystem; does Linux have anything equivalent in cgroups or somewhere?
The other SMF process, svc.configd, uses an SQLite database (actually two: a persistent one and a tmpfs one for runtime state), so it's plausible that it's properly transactional.
> If we take your logic to its conclusion, we should just run everything in ring 0 with a single unisolated address space, because hey, anything can fail.
That is an entirely erroneous extrapolation, as I never claimed any other single point of failure [in user-space] than the service manager.
> I never claimed any other single point of failure [in user-space] than the service manager.
If all of one's system and service management relies upon a system-wide software "bus", then another similar problem is what to do when one has restarted the "bus" broker service and it has lost track of all active clients and servers.
Related problems are what to do when one cannot shut down one's log daemon because the only way to reach its control interface is via a "bus" broker service, and the broker in turn relies upon logging being available until it is shut down. Again, this is an example of engineering tradeoffs. Choose one big centralized logger daemon for logging everything, and this complexity and interdependence is a consequence. A different design is to have multiple log daemons, independent of one another. With the cyclog@dbus service logging to /var/log and that log daemon's own and the service manager's log output being logged by a different daemon to /run/system-manager/log/, one can shut down the separate logging services at separate points in the shutdown procedure.
It's literally named SRC_kex.ext? So... would it be fair to say that part of SRC is implemented in kernel-space? The manual page gives me this impression.
That could very well be a solution to the problem, but perhaps not one that vezzy-fnord was hoping for.
I actually wanted to link the second of your linked comments but couldn't find it unfortunately.
The Unix way is also a different, incompatible implementation of regex in every utility, and a thousand interesting and dangerous modes of failure in the event of whitespace.
Systemd has issues, I'm sure, and I don't trust Poettering's software further than I can throw him, but not being 'Unix'-y isn't a strike against it.
> The Unix way is also a different, incompatible implementation of regex in every utility, and a thousand interesting and dangerous modes of failure in the event of whitespace
In order to see the quality difference, you have to compare the docs to those of another project that you make extensive use of.
I know that PostgreSQL's and Erlang's documentation is really rather good. So go use PostgreSQL or Erlang for a slightly non-trivial project; then, now that you know about the topics the docs cover, compare the quality of the systemd documentation to the documentation of either other project.
Pay special attention to the documentation provided to folks who want to understand the internals of systemd, Postgres, or Erlang. AIUI, [0] systemd's internals documentation is woefully lacking.
[0] As has been repeated by everyone I've ever seen try to use said documentation.
Remember that this thread was sparked by otterly's comment: [0]
> The documentation for systemd and its utilities is second to none.
What little I've seen of systemd's user/sysadmin documentation leads me to believe that it is okay. I also understand that documentation is often the least interesting part of any project, and often sorely neglected.
However. Everyone I've heard of that tests out the Systemd Cabal's claim that
"Systemd is not monolithic! Systemd is fully documented and modular, so any sufficiently skilled programmer can replace any and all parts of it with their own implementation."
by attempting to make a compatible reimplementation has failed at their task [1] and reported that the internals documentation is woefully insufficient.
When you're writing software for general consumption, good user documentation is a requirement. After all, if no one can figure out how to use your system, "no one" will use it.
When you also claim that you go out of your way to provide enough documentation to allow others to understand the relevant parts of your internals, and be able to write compatible, independent implementations of your software, the quality of the documentation about your internals is now in scope for evaluation and criticism.
Perhaps. I observed a lovely exchange a while back where a database was being publicly shamed for producing a poor unit file (I think they actually had the unit file launch a shell script that fired up the database).
Their response given was that it was the only way for them to avoid tying their database to the systemd signaling lib.
This was countered by one of the systemd devs, claiming they could just use a socket that systemd provides.
But when I poked at the documentation, the only place such an "option" was mentioned was at the bottom of the man page for said lib. And it was presented as a note on the internal workings of systemd.
And you will find warning after warning about not using systemd internals, as the devs reserve the right to change the behavior of those internals at any time.
Right, you're supposed to use the published interfaces. There's nothing particularly novel about that -- neither Microsoft nor Apple will support you if you don't use their public APIs, and in fact Apple will refuse to publish your software in their app store if you don't.
> Right, you're supposed to use the published interfaces.
You missed the point. I'll isolate each component for you:
"[A] database was being publicly shamed for producing a ... unit file ... [that used] a shell script [to start] the database[.]"
"[The database devs mentioned] that it was the only way for them to avoid tying their database to the systemd signaling lib."
"[O]ne of the systemd devs [mentioned] they could just use a socket that systemd provides."
"[But this] ... 'option' ... was presented [at the bottom of the man page for the systemd signalling lib that the database authors were trying to not use] as a note on the internal workings of systemd."
"[You] will find warning after warning about not using systemd internals, as the devs reserve the right to change the behavior of those internals at any time."
So, this "option" -as documented- is something that you cannot rely on, as it is subject to change at any time, without warning.
> With respect to socket activation, a pretty useful tutorial...
Tutorials are no substitute for documentation. Documentation describes the contracts that the software commits to. Tutorials can exploit edge cases and undocumented behaviors without warning. Moreover, if the docs say that the tutorial is demonstrating a feature that's subject to change at any time, you'd have to be a madman to rely on it.
> The DBus API can be found here...
If the database devs don't want to depend on the systemd signalling lib, I bet they really don't want to depend on DBus. This might come as a surprise to some, but many servers don't run a DBus daemon.
Socket activation doesn't have any systemd-based interface. You just get a file descriptor passed in the normal Unix way. The systemd library functions related to socket activation are utility functions for examining the inherited socket, but they are just wrappers for any other way you might do so.
You can configure daemons like nginx or PHP-FPM to use sockets inherited from systemd instead of their own, and it works fine. They don't have any specific support for systemd socket activation, nor do they need to. They can't even tell the difference between the systemd sockets and ones they'd get on a configuration reload.
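To underline that point: the receiving side of the convention is just two environment variables and an fd-numbering rule, so a daemon can support it without touching any systemd code. Here is a sketch of roughly what sd_listen_fds(3) boils down to (illustrative only; the real function also sets the close-on-exec flag on the fds and can unset the variables):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited descriptor, by convention

def inherited_sockets() -> list:
    """Return socket objects for descriptors passed in by a
    systemd-style activator, or [] when none were passed.
    An illustrative reduction of sd_listen_fds(3), not its code."""
    # LISTEN_PID guards against inherited fds being misread by an
    # unrelated child the daemon might fork and exec.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=fd)
            for fd in range(SD_LISTEN_FDS_START,
                            SD_LISTEN_FDS_START + count)]
```

A daemon can call something like this once at startup and fall back to binding its own sockets when the list comes back empty, which is why the daemons above cannot tell the difference between an activator's socket and their own.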
The closest I could find in the docs to what digi_owl said is the following:
> Internally, these functions send a single datagram with the state string as payload to the AF_UNIX socket referenced in the $NOTIFY_SOCKET environment variable. If the first character of $NOTIFY_SOCKET is "@", the string is understood as Linux abstract namespace socket. The datagram is accompanied by the process credentials of the sending service, using SCM_CREDENTIALS.
I can see how someone would be reluctant to rely on that, even given the interface promise and the nudging of the systemd developers. To be more consistent with what's a stable, public interface and the admonition to avoid internals, I would probably drop the word "internally."
However, even with your change, I still read that section as describing implementation detail that's not guaranteed to be stable. If that note describes a stable, documented protocol, a link back to the documentation of that protocol would be helpful and reassuring.
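For what it's worth, the wire format that the quoted paragraph describes is small enough that a client needs no library at all. A hedged sketch built purely from that quoted description (not libsystemd's sd_notify implementation):

```python
import os
import socket

def notify(state: str = "READY=1") -> bool:
    """Send an sd_notify(3)-style datagram without linking against
    libsystemd. Returns False when no notification socket is set.
    The credentials mentioned in the quote are attached by the kernel
    when the receiver enables SO_PASSCRED; the sender does nothing
    special for them."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    dest = addr.encode()
    if dest.startswith(b"@"):
        # "@" marks a Linux abstract-namespace socket; at the API
        # level the name begins with a NUL byte instead.
        dest = b"\0" + dest[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(state.encode(), dest)
    return True
```

Whether one should rely on this is exactly the question being debated above; the sketch only shows that the mechanism itself is trivial.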
For those who want context and specifics: The whole argument from some of the people who didn't want to rely upon something that is explicitly described as "internal" is set out at length in places like https://news.ycombinator.com/item?id=7809174 .
It only replaces sysvrc, [0] but I find the OpenRC-powered systems I admin to be quite sane and easy to manage.
[0] But that's okay. I strongly suspect that when most folks say "sysvinit", they really mean "sysvrc". Hell, I used to be one of those folks until a while back.
I am worried even more that people may well have used upstart for a number of years and still think they are using sysv, because upstart could grok the scripts without change.
But then I am sitting here using a distro that boots by way of a couple of flat files. Frankly I kinda like it, but then I grew up fiddling with autoexec.bat...
> because upstart could grok the scripts without change
I don't believe that's the case. The "service" command (which is part of the sysvinit-utils package, not Upstart) invokes either Upstart or SysV init as necessary, but Upstart itself has no awareness of the SysV init world. You couldn't have an Upstart service depend on a SysV init one, list SysV service status with Upstart, or enable a SysV service through Upstart.
In case you're curious, the wrapping on the systemd side is more comprehensive. SysV scripts appear as units, and systemd parses the semi-standard metadata at the top of most SysV init files to determine when the service should start if enabled (translating from the traditional run levels). As units, the SysV init scripts are possible to enable/disable, start/stop, and list using the standard systemd commands. They can also participate in dependency graphs alongside systemd units.
I've come around to being a systemd skeptic too, after initially supporting it. It's just so over-engineered, confusing, and hard to use. Today, in the age of user experience, anything that wants to replace the (also ugly) old init should be a huge step forward and a breath of fresh air. Instead, we're replacing bash nastiness with over-engineered "enterprisey" nastiness.
I have to make this comment with a throwaway as it's related to my previous job. Leaving aside the technical issues, the launch of systemd 'looks' very much like a playbook PR campaign designed to push through something unpopular.
The discrediting debates, labeling near abuse and mockery of opponents, appeals to authority and exaggerated consensus do not look accidental. This has all the markings of a sophisticated campaign.
For many this can be off-putting, but it's also a wake-up call about some naive ideas of how the world works. Money drives decisions; the cathedral and the bazaar as a metaphor has no meaning when the bazaar has spawned a billion-dollar cathedral in its midst, and like the failure of communism in practice, this too could be anticipated. Money has its own agenda and always corrupts everything. And we all know this, because it's part of our historical record and common sense. And there is nothing we can do about it, because in the real world, words are meaningless. Individuals have zero power; groups have some power, but it's groups with money that hold the cards.
The Debian management tells users, not out of exasperation or frustration but out of arrogance, that they essentially don't matter, and that the only way they can have a voice is with code. This disrespect for users stands in contrast to a project that has none, and thus has the self-awareness to realize it is meaningless without them. This new-found arrogance sits uneasily with the ideals of the open source movement, but anyone who connects the dots will realize it's not an open source movement but a full-fledged corporate movement with the thinnest veneer of ideology, and that veneer is not up for 'management'.
For the next generation: you cannot protect ideals if you enable and allow cathedrals to grow in your midst. For the current one: things do not automatically fix themselves. Awareness of the power of money to influence outcomes, and of how to firewall against it, how to ensure co-existence, sustenance and growth, how to reward projects and developers, and how to ensure you are not getting hijacked and sidelined by interested parties, should be the concern of the 'community'; and there is currently no such organization in the ecosystem that's not tainted by corporatism and opportunism.
> Leaving aside the technical issues the launch of systemd 'looks' very much like a playbook PR campaign designed to push through something unpopular.
> The discrediting debates, labeling near abuse and mockery of opponents, appeals to authority and exaggerated consensus do not look accidental. This has all the markings of a sophisticated campaign.
Sophisticated? That's pretty laughable. To most outside observers this looks like monkeys flinging poo.
In addition, you make a serious error in your assumptions:
Popular != technically correct
Unpopular != technically incorrect
RedHat did not just magically have an epiphany and decide to shove systemd down on everybody. They started by using "upstart". Remember how everybody went apeshit over that? Sounds a whole lot like the current debate doesn't it? After using upstart for a while, folks at RedHat decided that wasn't sufficient and they needed a replacement for that.
People seem to think that just magically RedHat decided to rip things up for no reason at all. That's patently false--RedHat has some issues that need to be solved and systemd solves those. If you want to beat systemd, you're going to have to give RedHat something which works better.
After their experience with upstart, it's no surprise that RedHat just decided to ignore the community. It was clear that a solution was never going to be accepted.
How is this related to anything, let alone your job?
As I'm sure you know, systemd grew out of Red Hat, and Debian, like most other Linux distributions, has adopted it for lack of a better alternative. There is no grand scheme to it other than getting rid of SysV.
In a few years' time uselessd, nosh, dmd and the many other systemd-lookalikes in development will have matured, and there will once again be diversity in init land.
The self-styled "uselessd guy" is present on this page, and xe has gone on record some months ago that uselessd is a "dead project". Replace uselessd in your list there with s6-rc and System XVI, therefore.
And replace "systemd-lookalikes" with service and system management systems, while you are at it. Neither of those is aimed at looking like systemd, and it would do them a disservice to miscategorize them as such.
After they repurposed the "debug" kernel flag and made it literally unusable with systemd, Linus banned one of their core devs (Kay Sievers) from contributing to the Linux kernel, and I quote:
"I'm not willing to merge something where the maintainer is known
to not care about bugs and regressions and then forces people in other
projects to fix their project."
It's worth noting that this is a case of Linus flying off the handle based on incomplete information. It was later revealed that the bug which caused this whole spat on LKML had already been fixed in systemd; Kay Sievers just failed to communicate that in the bug opened by Borislav Petkov.
There were actually two separate issues in play:
1. A system hang due to an assertion in journald spamming kmsg.
2. The question of whether systemd should be logging to kmsg or doing anything based on the "debug" kernel command line argument.
Linus got pissed because, based on Steven Rostedt's post to LKML and the referenced bug report, it looked like Kay was refusing to fix issue #1, but that bug had already been addressed and a fix committed to systemd. In the aftermath there was also a flamewar over issue #2, which is what Lennart is responding to in that G+ post.
I mean, rate limiting stuff that comes from anywhere is needed, not just because systemd does it.
I mean, if I could write a userspace tool that crashes the kernel... that's somewhat awful.
The place has become something of a second home for a lot of Linux-related devs. One potential problem with that is that the poster is free to delete any and all comments made on his posts.
People have been trying to fix the boot process in linux for years. Like most things, everybody screams when something changes, but nobody is willing to put in the work to make something better.
RedHat had enough, and finally jammed systemd down on the Linux ecosystem.
This had two effects:
1) Technical: it exposed a LOT of shortcomings in the architecture of Linux for operating on modern systems. The whole "This belongs in PID 1! No it doesn't!" stems from there not being good, correct, and obvious ways of accomplishing the tasks that need to be done.
2) Political: it pissed off everybody from Linus down who actually thought they had power and control by demonstrating forcibly that their opinions don't really matter.
As for the technical issues, I'm not terribly sympathetic. Somebody needed to fix this, and it was painfully clear that nobody was ever going to get consensus on this. In addition, Linus blocks a LOT of stuff attempting to evolve the kernel in directions that are improvements, but that he doesn't like or doesn't understand. For example, Linus didn't handle the ARM board issues very proactively. He basically ignored things until they got untenable and then finally blew up at the ARM guys and threatened to rip them out of the kernel. People had tried several times to get a modular configuration system put in place, but Linus never wanted to commit it as there "wasn't consensus". True, but once his pants caught on fire, he didn't give a shit about consensus anymore.
As for political issues, I have even less sympathy, as RedHat simply did what Linus has been doing for years and imposed their will on the ecosystem by force. Enlightened dictatorship is a wonderful governmental mode, until you're no longer the dictator. Perhaps if Linus had figured out how to actually build consensus, I would find his words less hypocritical and more deserving of consideration.
The BSDs are no strangers to controversy. The whole existence of OpenBSD and NetBSD is a tribute to that. Even with FreeBSD, the GEOM subsystem changeover caused great friction. The difference is that Poul-Henning Kamp did the work and got a significant level of consensus from the BSD leadership, though by no means unanimous, and it was sometimes acrimonious.
I don't want to argue about systemd vs this or that, but re: (2) you are totally misrepresenting Linus's objection to systemd (developers).
The well-publicized incident to which you're referring had systemd introduce something that broke userland; then, instead of fixing it in systemd code, its developers proposed a kernel change for it. This is, obviously, bad practice.
It would be one thing if the systemd code exposed a vulnerability or inefficiency in the Linux kernel. But fixing your bad code by modifying the kernel is like trying to modify a popular compiler because some bad code you wrote isn't doing what you want it to.
I wasn't referring to any particular systemd thread (my experience has been watching Linux/Linus in the context of ARM), but I think you are referring to the post by semi-extrinsic and this thread: https://lkml.org/lkml/2014/4/2/420
I examined that thread, and the upshot seems to be that the kernel wasn't rate limiting logging so userland could break the kernel. And Linus threw a hissy fit instead of actually examining the situation. The kernel not rate limiting is a fault in the kernel. The fact that a user program exposes it does not mean that the user program is the one in the wrong. And Sievers basically said: "Not rate limiting logging is a kernel problem. We're going to make you fix it." So, basically he had the temerity to both A) pull a political power play and B) be technically correct--and it pissed Linus off ferociously.
Linus, however, is fighting with the 800lb gorilla. That's not a good position to be in. Linus needs the RedHat programmers more than the RedHat programmers need Linus. They'll keep Linus around to avoid the bad PR that would come with a fork, but, if he gets in the way, they'll just cut him out of the loop.
> And Linus threw a hissy fit instead of actually examining the situation.
You really, really, really need to take ten minutes or so and read the whole thread. If you've not the time for that, then the single message linked here provides a decent amount of context: https://news.ycombinator.com/item?id=10484317
I did read the thread before I even posted, thanks.
Linus' first reaction to systemd accidentally flooding the kmsg queue was to make an ad hominem flame directly at another developer when the kernel was responsible for at least half the problem (lack of rate limiting).
Most people would qualify that as "throwing a hissy fit".
> Linus' first reaction to systemd accidentally flooding the kmsg queue was to make an ad hominem flame...
Are you talking about this?
"Key, I'm f*cking tired of the fact that you don't fix problems in the code *you* write, so that the kernel then has to work around the problems you cause."
If you are, then you have to know that he said that because Sievers has a history of aggressively refusing to own up to (and fix(!)) the bugs he introduces into his code. [0] Contrary to popular understanding, Torvalds doesn't dress down people unless they've demonstrated that they should know better and continue to fail to meet their demonstrated potential.
Most people call that sort of management style "stern" and "meritocratic".
[0] Indeed, it is this behavior that led to Sievers's Linux kernel commit bit being unset. Because of this, Greg-KH had to become the front-man and shepherd for the kdbus project.
I think the same way you do. Many things in systemd aren't as good as they should be. However, it's working and does its job.
The issue raised here is definitely a problem in the kernel. I mean, it's not good practice that other programs can flood yours. However, the reason they argued about it so hard is that both parties are trying to fight a political war over the Linux ecosystem, which is just bad; Kay Sievers said as much himself.
They are making a project that can only be good if all parties (kernel, init, GUI) play together instead of fighting each other like childish politicians.
What I don't get is, if systemd is so troublesome, why are so many distros picking it up? I know popularity isn't a perfect signal, but in this case of highly technical users that are distributing OSes it seems valid.
Part of the reason is that systemd has absorbed functionality of a number of additional pieces of software, to the point that they are no longer maintained discretely - udev being the best example.
Another part of the reason is that Red Hat forcibly landed systemd in Fedora and then RHEL7, and RH is an elephant on the scale of Linux development.
So RedHat's choices impact Debian/Ubuntu, Arch, and SUSE so much? Honest question; I don't know the details. Or are you saying that distros rely on other components that are simply not feasible (maintained) anymore, so they have no choice?
Just seems like if it's as bad as so many say, it just doesn't make sense for all these distros to blindly go along. Even the GNOME lockin doesn't seem like it'd explain it.
Debian isn't really a leader. They're more of a passive target platform, and their committee has people from various strands of the Linux community. As such, RH decisions with significant influence definitely would impact them. Ubuntu, in turn, is symbiotic with Debian, though still quite forked from it in most aspects beyond the packaging infrastructure (now with Snappy diverging even further). Nonetheless, Unity needs GNOME, and Canonical is still a small player, though one perfectly capable of foreseeing future trends. Adopting systemd is the path of least resistance and will help them track Debian's packages better.
Most RPM-based distros (openSUSE included) tend to follow RH's direction, so that's not surprising. Besides, SUSE has always been enthusiastic about most desktop efforts.
Also, distros have blindly gone along with other bad ideas before. The most prominent examples were HALd and LSB-style initscripts.
In fact, I'm not sure why you're at all surprised. Large groups of people in real life have collectively made far, far more catastrophic decisions. Why would a bunch of distribution maintainers adopting a piece of software be so shocking?
Systemd is pointing out the truth about lots of "emperor's clothes" in the current Linux ecosystem.
When there were problems with a desktop manager revamp or some crashy audio daemon, you could dismiss certain choices as irrelevant or lazy. Now the entire ecosystem's guts are being rewritten by RedHat for RedHat, and the developer community is simply going along because, in practice, they have no other choice. Nobody else has the appetite, the manpower or the political weight to put together a competing project with any chance of success.
When the chips are down, Red Hat owns Linux more than we like to admit.
I suspect many didn't see it coming because systemd was under the freedesktop.org umbrella rather than a Red Hat fronted project.
Thing is though that while freedesktop.org is presented as being about cooperation and compatibility between desktop environments, a very large portion of what happens there is dictated by Gnome.
And Gnome is yet another project that on paper is independent, but with big RH contributions in terms of programming manpower (it's the primary DE of both Fedora and RHEL).
Probably the worst part is that none of this is planned; there is likely no grand conspiracy. It's just that so many of the people involved walk in the same halls, share the same cafeteria tables, and sit in on the same meetings that an internal consensus ends up formed about what is the "right" way to do things.
> Probably the worst part is that none of this is planned; there is likely no grand conspiracy. It's just that so many of the people involved walk in the same halls, share the same cafeteria tables, and sit in on the same meetings that an internal consensus ends up formed about what is the "right" way to do things.
This. The whole Linux community seems to be paranoid. It has to be a grand scheme, a plot to take over Linux, a conspiracy to kill "the UNIX way". Maybe it's just a group of guys solving their problems.
I don't see how any of this is really Red Hat's "fault". Seriously. They develop a piece of software which solves their problems. They also incorporate projects they already maintain to improve functionality and make developing things easier. They're their projects; they're allowed to do that. The code is still open. It's not like everybody (Debian, Ubuntu, SUSE, etc) didn't have the ability to fork udev.
Yet all these projects chose not to fork the projects incorporated by systemd. Reason (1) might be that it solves problems for them as well. There's no need to fork something that works perfectly for you. Reason (2) might be they don't have the resources. But that's a declaration of bankruptcy for the whole "Linux is built by community volunteers" thing. The people advising against "corporate influence" are not able to do anything of value in modern computing anymore (I'm well aware this is an over-simplification, please bear with me). Requirements for software have changed. It's not 1970 anymore. And while the "old-school" users and devs are complaining about systemd & co, the "youngsters" are busy changing the ecosystem.
> (2) might be they don't have the resources. But that's a declaration of bankruptcy for the whole "Linux is built by community volunteers" thing.
May well be the case.
Somewhere GregKH mentioned how the pace of kernel development had changed, with git being a major part of it.
It may well be that the pace has now gotten to the point where hobbyists have a hard time keeping up, as they have things they need to do besides stare at code all day.
This has been the case for some time now, but the ecosystem of large-ish companies was diverse enough as to avoid appearing dominated by this or that player. The fall of Novell and Nokia, coupled with Ubuntu's various pivots and IBM's troubles, left Red Hat as the lone real force with the capability to steer the most significant projects.
This is nobody's fault, but it's not healthy in the long run.
> It may well be that the pace has now gotten to the point where hobbyists have a hard time keeping up, as they have things they need to do besides stare at code all day.
Totally agree. The needs of modern computing (server-, desktop- and security-wise) have drastically increased IMHO. You need to read so much documentation and existing code before you can even start coding. That's just not feasible in your free time after work if you're having an active social life or another computer-unrelated hobby.
Although I have to admit I might be biased about this topic. I'm contributing to a FOSS project which got a lot of flak for trying to raise money for employees in a ... well, more aggressive way. While not necessarily agreeing with everything that happened, I do believe you need to work full-time on modern FOSS projects to push ahead.
This is Too Big To Fork https://news.ycombinator.com/item?id=6810259 at work. Any legally free-slash-open-source software project which is so complex that only a few big actors have the will and ability to make and maintain a fork is de facto under the shared proprietary control of those big actors. (The same goes for software where control of the installed base means de facto control over any changes to widely-used interfaces.) "Freedom of the press is guaranteed only to those who own one." The Linux 'ecosystem' is only unusual in that (as it seems) there's exactly one big actor left standing by now.
> Debian isn't really a leader. They're more of a passive target platform, and their committee has people from various strands of the Linux community.
This is so untrue that it borders on FUD. Systemd's dominance depended on Debian holding a fairly democratic, open and fiercely fought battle between Upstart and systemd. Everyone knew that Ubuntu (which is pretty much de facto installed on every laptop sold in Asia) would adopt systemd based on Debian's decision.
Not only did Debian make the decision to support systemd, it voted to NOT support other init systems. Mark Shuttleworth made his announcement the day after (http://www.markshuttleworth.com/archives/1316 ).
It is an interesting position to take by painting systemd as the lackey of a capitalist monopoly trying to take over the world. I can see how that can get a lot of mindshare. The truth is far simpler - systemd is far superior.
RH is the 800-pound gorilla of the Linux world. They employ a sizeable portion of the developers working on various projects within the Linux ecosystem.
Canonical's revenue is 67 million according to Wikipedia; by contrast, Red Hat's is 1.5 billion. That's more than twenty times as much (and even then it's still a small fraction of Microsoft's revenue).
Or at least they don't clash. Intel is mostly involved on the hardware end. The biggest clash is perhaps Oracle, as they forked RHEL some years back. But I think that is more about shoring up a silo around their database business than about getting involved in the general workstation and server business.
And that is a recent move, while RH has been in the Linux development effort for some time.
At this point RedHat has assumed de facto governance of some core projects that make up the Linux desktop system sitting atop the kernel. They pay developers to work on these projects, and their employees have decision-making authority. Systemd is almost entirely driven by current/former RedHat employees.
The fact of the matter is that "distros" like Debian etc. are dependent on upstream developers to provide new versions of their system. If a large portion of important subsystems is developed by RedHat developers, then they will adopt that code and be driven in the direction upstream wants to go.
"Distros" do not develop; they package upstream into compiled binaries and add their configs and perhaps package management. In short, most of the distros will bend whichever way the upstream wind blows.
That's why I use a "distro" like Slackware or Crux that is built from scratch and not based on another system. What limited autonomy there is in Linux land is with the independent distros.
Ultimately you can go for something like Linux From Scratch. And even they have gotten somewhat fed up with the systemd antics.
For instance, their main book uses eudev rather than udev, because they found the effort of extracting udev from the larger systemd project a right pain.
They do however maintain a parallel systemd book for anyone interested.
Linux was/is an extremely flexible system. Vendors like RedHat should be able to refashion Linux in whatever way they see fit. The big problem with systemd is that it stepped over the red line: from an in-house RedHat component, like SELinux and their various "enhancements", to a rigid default that attempts to lock down a large array of system components under a project with commercial motives and drivers, driven by a single vendor. And it was done with a premeditated social-engineering push which was quite nasty. That's a big no-no for Linux and the open-source ecosystem in general. Projects like Linux From Scratch, GNU, Busybox etc. will not roll over for the endless attempts at vendor lock-in. It's all been tried before, and while the players may be new, the game is very old; ultimately we will defeat these new attempts just like we did with SCO and the proprietary Unix/Microsoft corporations that tried to kill Linux in the 90s.
By comparison Debian just has enough resources for keeping Debian running. SUSE seems to be an afterthought at this point. Ubuntu seems to mostly be concerned with itself.
If ARM became a big platform for servers or workstations, RH would probably roll out an ARM variant within the week (it could be they even have test versions floating around their internal network).
In particular, as Fedora already has ARM versions across the board.
My comment was more the fact that Debian has enough excess resources to support the Beaglebone Black. So, categorizing Debian as having barely enough developers to support itself is disingenuous.
Nonetheless, Red Hat really is doing a lot of upstream development, certainly more than Debian does as far as I know. Indeed, the Debian community has managed to keep their release schedule and meet their goals lately, so they're probably in a better shape than "barely enough to support itself", but there hardly seems to be room for comparison between the two.
In this case the decision does influence other people, because as pointed out, systemd has taken over many other components' roles, not just init, and if those components are no longer maintained because RedHat pulls its support, then other distributions (with far far fewer resources) are forced to use unmaintained code or switch to systemd.
Red Hat is definitely under no obligation to maintain things it doesn't see as valuable, and if no one else wants to pick up the slack either, then action and code speak much more loudly than words about the actual value people assign those things.
I'm just guessing here, but systemd makes some things more convenient by being overall more monolithic, like Windows, and especially distros targeting a less technical crowd want to be able to compete with Windows in terms of features and usability. Non-technical people most likely don't see any of the beauty of a strictly modular system or proper clean code.
And this is not about the init daemon; it is about systemd as a whole integrated suite of daemons that connect easily and add fancy new features. On the other side, you might have to modify two completely independent software projects to add a new feature that uses both, and even then there are most likely alternatives to each one, so your feature will only work with that specific combination. A more monolithic system, which systemd is despite Lennart denying it, makes development faster, but will bite you in the long run, especially when the code and documentation quality is as poor as systemd's.
I'm more tempted to say that systemd does well what previously you needed a dozen half-broken tools to implement.
For example, isolating the /tmp of an application, monitoring the process and restarting if necessary, more logging options (and imho cleaner/more efficient).
Then again, all this has been debated over and over again. In practice, people are adopting it. Haters are noisy. It's like reading about debates on IPv6 migration strategies.
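For concreteness, the features mentioned above map onto ordinary unit-file directives. A minimal sketch; the service name and binary path are made up:

```ini
# /etc/systemd/system/exampled.service -- hypothetical unit
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/local/bin/exampled
# Private, isolated /tmp visible only to this service
PrivateTmp=yes
# Supervise the process and restart it if it exits abnormally
Restart=on-failure
```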
Process supervisors with reliable logging are nearly two decades old at this point. Isolating /tmp is then a namespacing feature, which in a system where execution state is composed as an explicit external manifest via chain loading (as opposed to serializing from a unit file into a private ExecContext structure, as with systemd) should be completely orthogonal to any one service manager. Else it is inflexible if it needs complex internal scaffolding.
(Actually I think supervision goes back to IBM's SRC in at least 1992, but I'm not exactly sure if the earliest versions had anything beyond process management.)
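For readers unfamiliar with the term: chain loading means composing execution state from small external tools, each of which adjusts one thing and then exec()s the next, rather than serializing directives into a private structure. A minimal illustration with standard utilities (no real service is started; echo stands in for one):

```shell
# Chain loading sketched with plain coreutils: env sets the
# environment, nice adjusts the scheduling priority, and then the
# final program runs.  daemontools-family tools (setuidgid, envdir,
# softlimit) compose in exactly this way.
env GREETING=hello nice -n 5 sh -c 'echo "$GREETING, reniced"'
```

On this model, /tmp isolation would just be one more link in the chain (a tool that unshares a mount namespace before exec'ing), orthogonal to whichever supervisor sits at the front.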
I think one thing that gave systemd adoption a big boost was a declaration from the maintainer of the cgroups sub-system of Linux.
At present multiple processes can set up and manage cgroups, but the maintainer wants to change to there being a single user space manager.
Thus systemd was pushed as "the" cgroups user space process.
You can already see the systemd devs behaving as if this were a done deal.
I ran into an email a while back about systemd clobbering a libvirt-managed cgroup. In it, Poettering "suggested" that libvirt should hand control over to systemd, as it would be THE cgroups manager going forward.
You can probably find similar encounters between, say, Docker and systemd.
Which is a failure of people making server distros, not those writing process supervisors.
Not at all. SMF supports delegated restarters, its milestones can contain far more state information (and explicitly) than targets can, it uses a configuration repository which enables runtime dynamic service state modification at a far higher rate than the relatively static systemd, its dependency system is simpler, it's integrated with the hardware fault manager (also to provide service resource identifiers beyond the COMM name), its profiles are more granular than presets, it has explicit service instance support and logging is flat file-based in /var/svc/log.
SMF has plenty of its own issues, but it's quite different from systemd. I think the delegation aspect is one of its most valuable lessons, but ultimately I also believe it's too overarching for a modern init.
Well that's a good question, I'd like to have an answer to that.
Debian's adoption of systemd was a bumpy ride, to say the least; it caused a few long-time contributors to resign and others to fork Debian into a new distro, called Devuan, that removes systemd.
Then again, systemd gobbled up other critical components such as udev, and there's also GNOME, which made it a strict requirement; like a cancer it grows and takes over other components.
Devuan is someone's extended tantrum and little more. Like every other rage-fork it'll die a slow death because who wants to develop on a platform founded on the premise of "why do we need to change anything? It's all working fine!"
They began work on a logind compatibility layer over ConsoleKit2, they're writing a NetworkManager alternative, they directly influenced and are supporting a udev alternative called vdev (which also has libudev compatibility), and a host of other things.
For a rage-fork, it's pretty impressive. They're changing a lot. It's easier to just astroturf in the corner, though, I suppose.
Possibly because vdev is a clean break, while eudev is still mostly about udev stripped from systemd.
And that stripping will get more complicated going forward: as I recall, a recent systemd release moved various bits from udev into a new systemd library, leaving the udev interfaces as stubs to be removed at some undetermined future date.
Yeah the most documented such evaluation, the Debian process, seemed to only evaluate sysv, upstart and systemd (openrc was briefly mentioned but quickly dismissed).
The rest seems to have been executive decisions (with or without a "deal with it" meme accompanying), often by people already involved with systemd development.
I haven't evaluated all of the previous attempts since that would consume more time than I have available. Can you provide a comparison or some insights into why they're worse?
If you get time that would be neat, I'm sure others would appreciate it too. I've heard good things about djb's daemontools from a few friends and colleagues but never had the chance to try it and if you have any insight on that one I'd really appreciate that.
I'd also like an answer to this question, as well as an answer to the question of what, in exacting detail please, was so wrong with the previous system it needed to be torn out and replaced?
I definitely tend toward the curmudgeonly, but to this grumpy old man, it seems like we're replacing things simply for the sake of change.
It did, but systemd is not the silver bullet that will fix it.
"It's often said that a half-truth is worse than a lie. In the case of systemd, a half-improvement is worse than a complete flop. People are focusing on the features they want to use and ignoring the ones that will come back to bite them later.
No, my biggest problem with systemd is that it is repeating the mistakes of System V all over again. It's just given them a change of clothes to appeal to the 21st century."
At this point in time, though, I wonder if focusing on the init part is missing the forest for the trees.
I think just as much of a stink comes with how you need to have systemd-the-init to use systemd-logind (replacing consolekit for session tracking) or any number of other possibly interesting, but tied to the hip of systemd, sub-projects.
Arch Linux spawned a distro called Manjaro, which leads the way on not joining the systemd bandwagon. Debian got forked into Devuan, which is Debian minus systemd. As systemd originated from RedHat, it is expected that the other faces of RedHat, namely Fedora and CentOS, would also feature systemd.
Then there are Slackware, Gentoo and PCLinuxOS, which either did not bite the systemd bullet or offer an alternative option. Not sure about Ubuntu.
"Arch Linux spawned a distro called Manjaro, which leads the way on not joining the systemd bandwagon."
...well, kinda: Manjaro supports OpenRC, but the official, main installs are systemd. You can start with an official install and convert it, or you can use an OpenRC ISO, but those are not officially supported.
Just to add to the others: CentOS was essentially bought by RedHat, and their core developer is now a RedHat employee. Magically there was a new major 7.0 version bump adopting systemd as the one and only init system, a new flashy website, and a refusal to allow any alternative init system into the official packages. Needless to say, CentOS is no longer an independent alternative to RedHat Linux.
RedHat has to release their sources for others to use; most of the code is under some form of GPL. CentOS was an independent, volunteer-driven effort that compiled and packaged that code into a new distro not affiliated with RedHat. That is why they had to remove RedHat branding and could not market CentOS as RHEL etc.
What about initrd? Most of those use busybox and it's now "recommended" to start systemd there. (Granted, I don't see why anyone would run busybox's syslog - the only component affected by this change - in an initrd.)
If you want an extremely small system, you can put everything you need into the initrd. It's not strictly an initrd then, since it's not really used for bootstrapping a larger system image.
There's a lot of hate for systemd and there's also a lot of people ignoring valid criticisms and going so far as labeling those making the criticisms as haters. This does not bode well for reasonable discussion. It has become very political.
Whether systemd is a good thing or a bad thing depends ultimately on what you're trying to do and what sort of operating system you're using. For some people systemd is really helpful, for others it gets in the way and creates unnecessary complexity where it otherwise wouldn't exist.
I've been using systemd along with fleet on CoreOS, and it's fantastic for me, but I personally wouldn't want systemd on my desktop. The problem is that it has been forced onto people left, right and centre when it doesn't suit their needs. That essential, reasonable debate never occurred, and those who are forced to either swap distro or 'convert' to systemd get upset. If there had been a reasonable debate, people would still hate on systemd, but at least the reasons for using it would be better known and the criticisms at least acknowledged.
> There's a lot of hate for systemd and there's also a lot of people ignoring valid criticisms and going so far as labeling those making the criticisms as haters. This does not bode well for reasonable discussion. It has become very political.
I can't shake the suspicion that ever since gamergate, or perhaps even "for the lulz" anonymous, there have been people going around the net, latching onto anything vaguely controversial, and trying their best to stir up trouble by making threats and statements that are barely relevant to the topic.
I think there have pretty much always been trolls on the internet. Back in the 90s it felt like no one took the happenings on the internet all that seriously; it was a bit of a lark. Now that it's much more mainstream, it's embedded in our everyday lives and companies make a lot of money from it. So it would make sense that everyone takes it a lot more seriously, and it's easier to troll people if they take you seriously.
Although not all haters are trolls of course, but it's really hard (impossible?) to tell over the internet (Poe's law), especially when you have haters, fanatics and trolls all playing each other, and the only person who wins is the troll in that scenario.
Has anyone used Fedora 23 with GNOME, Wayland and systemd? I see a lot of religious handwaving around systemd, but F23 shows you the future of the Linux desktop, and it is brilliant.
Right from network interfaces that don't change names when you swap hardware (powered by systemd), to making it damn easy to file crash bugs (using coredumpctl), to checking what services have failed ("systemctl --failed"), to a more secure graphical desktop (rootless GNOME with systemd): it makes for a better Linux.
Deploying web-services on systemd is so much better - think supervisord, but much more stable and robust. Even docker machines using systemd is great (in fact it is a great way to explore systemd).
I think the key word there is desktop. The systemd folks are desktop folks first and foremost, server operations be damned. coredumpctl requires root access, which means one has to be much more diligent in scripting automatic actions to handle core dumps. Failed services again are something that any supervisor service can handle and report -- launchd can get all that information from launchctl. Rootless X is something OpenBSD has had for years, through better access control than linux devs seem to be able to be bothered with.
The biggest problem with systemd is its mediocrity. The journal is an interesting idea, but it's inflexible. Its insistence on JSON and internal formats means that it doesn't work with the rest of the extant logging infrastructure tools, so if you want to have a dedicated log server, you have to run essentially two syslogds. Its core-dump misfeatures make it harder to use extant access control features to manage files. Its unit files don't allow for any sort of extensibility, so adding things like a configtest command for a unit is impossible. In short, it's oversimplified itself to the point where it adds a huge management load just to keep servers where they were.
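(For reference, the "two syslogds" setup is usually wired up by telling journald to forward into a traditional daemon such as rsyslog or syslog-ng, which then ships logs to the central server; journald itself keeps running alongside it:)

```ini
# /etc/systemd/journald.conf -- forward to a classic syslog daemon,
# which then handles the dedicated log server.  The journal still
# runs, hence effectively two log daemons.
[Journal]
ForwardToSyslog=yes
```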
> ...network interfaces that don't change names when you swap hardware...
I assume you're talking about udev's "Predictable Network Interface Names". [0]
That's a feature of udev, and -IIRC- has been around since shortly after udev's repo got merged into systemd's repo, but quite a bit before it became obvious that the Systemd Cabal wasn't going to put much -if any- particular effort into making it easy for folks to use udev without systemd.
While -due to inertia- I do use it on my systems, it doesn't provide value on systems that don't have more than one network interface of a given type. [1]
> ...to making it damn easy to file crash bugs...
I've been doing exactly this in KDE since... KDE 4.1? 4.2? So, this would be back in the ~2008->2009 time frame. I have fuzzy memories of a crash reporter in KDE 3, but -back then- I didn't build my systems with debugging info or frame pointers, so the backtraces were always useless and -thus- went unsubmitted.
> ...to checking what services have failed...
rc-status --crashed
or
rc-status --all --crashed
does the same for me on my OpenRC systems. Every sysvrc replacement that's not a toy provides this functionality.
> ...to a more secure graphical desktop...
I'm very glad that the Wayland folks are making good progress with their work.
> Deploying web-services on systemd is so much better...
Can you be specific here? There are a whole host of service supervision systems out there of varying quality and feature sets.
[1] After all, you can name your single wired ethernet NIC eth0, your single WiFi NIC wlan0, and your single USB-connected cellular radio usb0. That's far easier and more predictable than reading the output of lspci to figure out the name of your NIC. (If you get the urge to point out the cases in which the NIC-physical-path-dependent naming scheme does help, please carefully re-read the 'graph to which this footnote was attached.)
>> Deploying web-services on systemd is so much better...
>Can you be specific here? There are a whole host of service supervision systems out there of varying quality and feature sets.
True - but that is a good question by itself, right? Why, when you already had sysv and upstart and everything? Supervisord, which seems to be very popular, is very, very similar to systemd in how it works (the same concept of unit files, a declarative language, etc.). If you like supervisord, then systemd is just a short hop away.
OpenRC would do the same - but are you claiming that OpenRC is BETTER than systemd? Because systemd is making that claim.
What I'm failing to understand is: how is systemd bad? It is making my Linux machine extremely stable, and working with it (creating new unit files) is extremely intuitive. Where is it making everyone's life hard? The old way of "service nginx restart" still works. "dmesg" still works...
> ...are you claiming that openrc is BETTER than systemd...
If I were making that claim, I would have made it.
OpenRC's sysvrc replacement is at least as good as systemd's sysvrc replacement. Moreover, I have much more faith in the abilities and reasonableness of the OpenRC Cabal than I do in those of the Systemd Cabal.
> True - but that is a good question by itself right ? Why...
I'll ask again. Can you get into specifics about why "[d]eploying web-services on systemd is so much better..."? All I'm getting from you is soundbites and equivocation.
> ...[systemd] is making my linux machine extremely stable...
In the ~20 years that I've been using Linux, I've never had instability introduced by an init or RC system. What init or RC-induced instability have you observed in your Linux systems?
> Supervisord ... [has the] same concept of unit files, declarative language,etc ...
Does this mean that the meaning of the keywords and parameters in supervisord's config files is very close or identical to those in systemd? Or does it just mean that supervisord's configuration files are in .INI format, just like systemd's?
If the latter, then who cares?
1) Superficial syntax similarities do not necessarily enhance understanding. In cases where similar keywords mean differing things in two different systems, they can (and often do) cause confusion and misunderstanding.
2) Startup files for services whose startup sequence is the most complex you can handle in a systemd unit file are equally terse and readable in both systemd and OpenRC. [0] If the startup requirements are more complex than this, systemd has to call out to a shell script(!) or other external program. OpenRC (and other sysvrc replacements) that use interpreted startup scripts can bake such functionality right into the startup script. This means that -in these systems- you only need to ship and maintain one file, rather than two. ;)
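To make the one-file-versus-two point concrete, here is a minimal hypothetical daemon in both formats (all names and paths invented):

```ini
# /etc/systemd/system/mydaemon.service (hypothetical)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
# Setup logic beyond single command lines must live in a second file:
ExecStartPre=/usr/local/sbin/mydaemon-setup.sh

[Install]
WantedBy=multi-user.target
```

```sh
#!/sbin/openrc-run
# /etc/init.d/mydaemon (hypothetical) - the setup logic is plain shell,
# baked into the same file.
command=/usr/sbin/mydaemon
command_args="--foreground"
command_background=yes
pidfile=/run/mydaemon.pid

depend() {
	need net
}

start_pre() {
	# arbitrary setup; checkpath is OpenRC's built-in helper
	checkpath --directory --mode 0750 /run/mydaemon
}
```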
> Where is [systemd] making everyone's life hard...
Systemd is worrisome for several reasons:
1) systemd's scope continues to creep.
2) The Systemd Cabal continues to assert that pretty much every part of systemd is optional. An honest look at the state of systemd and projects like Gnome puts the lie to that statement.
3) Systemd continues to assert that systemd is faster than anything out there. Real-world observation indicates that this means that they've never heard of -say- OpenRC.
4) The Systemd Cabal continues to assert that systemd is modular. They assert that anyone can read their documentation and reimplement any part of systemd. Many people have attempted to do this and found the documentation sorely lacking.
5) The Systemd Cabal aggressively refuses patches that fix breakage that they introduced by changing decades-old behavior for no better reason than "The behavior was legacy and thus broken.". [4]
In short, the attitudes of the people in charge of the project are dreadfully worrisome. Having udrepper in charge of glibc was bad enough. Systemd's devs are substantially more bullheaded, and the project itself is angling to swallow almost all of Linux userspace.
[0] See Nailer's representative systemd unit file at [1] and my conversion of it to an OpenRC startup script. [2] Also, notice the confusion that one has when one is not already familiar with the keywords contained in a systemd unit file. [3]
Yeah, I had a similar experience while deploying production stuff to systemd. It does not hurt anymore; it has a lot of reliable tooling that just works. I understand that it's not the "Unix way", as people like to say. For me it also means "being reliable". Systemd gives me as an application developer everything I need and takes a lot of pain away. I'm happy.
I think this is what annoys me most about systemd. All of the advantages you list seem extremely niche to me.
I mean network interfaces that don't change names? I think in over 15 years of using Linux I have never had a problem with this, and if I did I'm guessing it would be a) obvious b) trivial to fix. Same with the other stuff.
To get these niche features we need to install a very complex, opaque, fragile and verbose set of tools that throw away most of what I've learned in my 15 years. Blah!
> I think in over 15 years of using Linux I have never had a problem with this, and if I did I'm guessing it would be a) obvious b) trivial to fix.
I have run into this. Two wired Ethernet NICs in a system. After a kernel upgrade, the module load order of each NIC got swapped, and the name of each NIC changed. Took me a while to track that one down. :P
For 99.9% of desktop users (and -I suspect- many servers), this doesn't ever matter, and the "predictable" names are substantially less predictable and discoverable than 'eth0' or 'wlan0' or whatever.
Additionally, if you move a NIC in your system to another expansion slot in the system, its name will change. So, there's that to remember about this particular scheme.
NOTE: I'm not trying to claim that it's not helpful! The "predictable network interface" naming scheme solves a real problem. It's just that it -like most things- creates a few unique problems of its own. ;)
I'm going to step out on a ledge here, but those that are happy with it really don't understand Linux or much of how it works beyond editing a few confs - none of which ever touch systemd.
What understanding do you think they would need in order to share your unhappiness?
Because while I'm sure in a competition of esoteric kernel knowledge I would lose to a great many people, I also simply don't care. systemd makes establishing my service start up dependencies amazingly simple. It makes daemon deployment simple. It simplifies a whole host of problems which are not cleanly solvable by other means. It handles process restarts, limits and a whole host of other things for me.
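For what it's worth, here is the kind of declaration I mean - ordering, dependencies, restart policy, and limits in a few lines (service names are hypothetical; directive names per systemd.service(5) and systemd.exec(5)):

```ini
[Unit]
Requires=postgresql.service
After=postgresql.service network-online.target

[Service]
ExecStart=/usr/bin/myapp
Restart=on-failure
RestartSec=2
LimitNOFILE=65536
MemoryLimit=512M
```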
No one is bringing a superior solution to the table. Everyone is telling me daemontools and init scripts are "fine" (they're not).
I'm pretty happy with systemd on my macbook (running Arch). Not sure what you're doing on that ledge, but I've been running and managing linux systems for over 10 years (I only use Windows for games, and OSX on the road). systemd's user interface (i.e. writing the confs, controlling the processes) is fine, and that's about all that's important for users.
What I don't get is why so many people are in this debate. It hardly matters to anyone what init system (or 'central management and configuration system') is used. It doesn't matter to users, they just want their linux systems to boot, and their confs to be easy to write. It doesn't matter to user space developers, it's just another system to implement support for, conveniently one that's used by most distros and as far as I can tell not terribly hard to grok.
Considering how many distros have adopted systemd, of whom I am willing to bet most of their devs "understand" Linux and aren't completely against systemd, I would not say that they are happy with it because they don't understand it.
Well, I was using systemd-analyze and systemctl {cat,disable,restart} and it seemed similar to Upstart in Ubuntu, if not better. (And I was able to use it intuitively without needing to refer to manpages.)
Classic ad hominem. Do you have anything substantive?
FTR, I understand "linux"[1] better than most and I'm very happy with systemd. So there. It's by no means perfect and could probably stand to be revisited architecture-/design-wise in a few years when there's even wider community experience with it -- but that can come as incremental improvements.
[1] Whether you meant the kernel or user-space. Both as a user, administrator and developer.
Wait, people are using Systemd on embedded Linux?
You go out of your way to replace the Linux userland with a single small binary, and possibly go through the trouble of using a small libc implementation, and then you install this mammoth called systemd.
There's the really deeply embedded class of systems running on tiny processors, which probably only run a single binary and are never updated.
And then there are big systems (think cellphones, automotive infotainment, etc.) running on quad-core processors where you have dozens of processes, which might even be independently installed or updated. For those kinds of systems you really want a sophisticated init system, and the cost of systemd is probably minimal compared to the instances of chromium/webkit/blink that you might already have on the system.
One might think that the technical changes here are "completely insignificant", but in fact there has been quite a lot of interaction between systemd and the BusyBox world over the years. With those as context, the headlined patch appears in rather a different perspective. Some examples:
* Denys Vlasenko is the person removing the ability for syslogd to inherit its socket (via a systemd mechanism) in the headlined patch. Here is the very same person adding that same code in the first place, back in 2011: http://git.busybox.net/busybox/commit/?id=9b3b9790b32d440eb8...
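For reference, the protocol in question is tiny - it needs no libsystemd at all, just two environment variables and an fd-numbering convention. A minimal sketch in Python (the function name and structure are mine, not BusyBox's):

```python
import os

# First fd passed by the activator, by convention (fds 0-2 are stdio).
SD_LISTEN_FDS_START = 3

def listen_fds():
    """Return the fds handed to us by a socket activator, or [] if none.

    A sketch of the check the removed BusyBox code performed: the
    activator sets LISTEN_PID to the daemon's pid and LISTEN_FDS to the
    number of descriptors it passed, starting at fd 3.
    """
    try:
        pid = int(os.environ.get("LISTEN_PID", ""))
        nfds = int(os.environ.get("LISTEN_FDS", ""))
    except ValueError:
        return []
    if pid != os.getpid() or nfds < 1:
        # Env was inherited from a parent, or nothing was passed.
        return []
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + nfds))
```

A daemon that finds a non-empty result uses those descriptors instead of opening its own sockets; otherwise it falls back to its normal startup path.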
Ah, very good. The page the submission links to did have some kind of error at the time I posted my comment - instead of the commit message and diffs it had a vaguely-404ish error message in red. But it sounds like it was probably just a temporary error - perhaps the HN effect!
This comment thread seems to be the Two Minutes Hate for systemd, with mostly predictable results. But the actual news seems to be that BusyBox removed the use of the systemd notify system, which lets systemd know that the service has indeed started and that it's OK to start other processes which depend on it. This is no great loss - the normal and traditional Unix way of daemons is for a program to fork, where the fork continues as the actual daemon and the original process exits. Systemd can detect this exiting of the started process and will take that as the signal that the daemon is ready, so there is no need for the notifying function in this case.
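In unit-file terms, the readiness conventions under discussion look like this (a sketch; see systemd.service(5) for the full semantics):

```ini
[Service]
# Type=forking: the unit is considered ready when the initially started
# process exits - the traditional daemonize-by-forking convention.
Type=forking
# Alternatives: Type=notify (the daemon sends READY=1 over the socket
# named in $NOTIFY_SOCKET) or Type=simple (assumed ready at exec time).
```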
* That's not what the removed code was doing at all.
* That's a 1980s idea of "normal". Forking is something that has been gradually disappearing as standard practice for daemons for the past 16 years, as can be seen from the large number of daemons that now have "don't fork" modes, as compared to the number in the middle 1990s. The idea that daemons fork as some sort of standard practice was the mainstream thinking then, but it is not now.
* Most programs in the wild do not correctly speak the forking readiness protocol, in part because those programs are not forking as a readiness protocol in the first place. Many people, in particular those involved in the Debian Technical Committee hoo-hah a while back, considered the opposite of what you claim to be actually true. There is no need for the flawed, bodged, and in practice broken forking readiness protocol when one has a proper readiness notification protocol. http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/un...
> * That's not what the removed code was doing at all.
All right, what was the removed code doing?
> * That's a 1980s idea of "normal".
That’s why I called it “normal and traditional”.
> * Most programs in the wild do not correctly speak the forking readiness protocol
True enough. But for practical purposes, it’s mostly good enough. And I’d think that it doesn’t matter that much for people running BusyBox with sysvinit.
"But politics shouldn't be a factor in software design. We should cooperate to work with the technologically most useful solution and not let personal or ideological differences get in the way."
That's one of the criticisms in the open source (BSD) world versus the free software world.
GNU/Linux projects were designed to fight proprietary software on socio-ethico-political grounds, the first target being Unix (GNU's Not Unix).
On the other hand, BSD prefers to see itself as a community of pragmatism, creating business value by sharing externalities (plus a bunch of fanatics who love nice code the way others love Ferraris).
The constant BSD criticism of Linux is that building tools for ideological reasons rarely favours the best solutions, because you give yourself an obligation to beat the time-to-market of "proprietary companies" and to add support for stuff that is not worth sharing. (Word/WYSIWYG editors are a terrible idea in the first place; why spend resources giving them more traction by helping to broaden their user base?)
It favours kludges and hacks instead of a consistent, simple design. It burns volunteer time and attention.
And BSD people have been denouncing GNU/Linux projects (GCC, GNOME, systemd, binary blobs in the Linux kernel) as long-term disasters, locking people into the technical debt of poor designs.
Actually, I am a Linux guy with BSD boxes, and if I didn't have problems with hardware support I would be fully on BSD.
I would say they have a point. And just for the record, the GNU foundation is not Linus Torvalds's best friend.
The first time I heard Moglen's talk, he was literally saying that Linux was a bad example of a free software project.
Yes, freedom of choice is political. But it is a question not of organisation, but of individuality.
Still, some communities aim to gather more zealots than masters of their domains. That is the distinction between Free Software and Open Source.
The reason the Linux community is so dysfunctional is because, for most people born during a certain time period, it's the first ever OS they use that isn't Windows, and the first ever Unix. Naturally this creates a lot of sudden revelations, and a lot of blowhards who think they're hot because they can rice their Arch Linux box. In the process a lot of false sense of technical prowess is generated.
Moreover, the network effects become so strong that at some point (which has already been crossed) Linux becomes the alternative OS, and from then on people feel like they can just ignore everyone else with impunity. They start to perceive themselves as the leaders, and everyone else must be biting their dust. Notice how Linux users often tend to be ignorant (and not only that, but resentful) of what BSD, Solaris, MINIX, Hurd and other folks are doing. Not the case with users of those other OSes, who as underdogs have more of a reason to cooperate and usually also have to study what the other is doing, especially so that Linux the big dog doesn't poorly reinvent some interface that ends up mutating across FOSS and leaving their access to portable software in the dust.
If through some historical accident 386BSD ended up making it unfettered from the trademark lawsuit fallout, it likely would have followed the same course. So would have the Hurd.
The AT&T lawsuits over Unix (c) infringement were largely used as FUD by both MS and GNU against open source.
It was resolved fast, but the PR damage lasted long. (Slander, slander; something will always stick.)
The BSD community, having been burned early by IP problems, has been more cautious since then, whereas the Linux (as an OS) community - becoming an official UNIX (c)(tm) in 1997 as it became POSIX compliant, and having been artificially protected from IP problems - has been careless about disentangling itself from all the proprietary shit that IBM and the other big companies that wanted to kill the cost of maintaining their own OSes have been putting into it. (The legal construct protecting Linux from patent/IP problems involves a lot of big companies and complex clauses.)
POSIX may have followed IP down the road of bloatware specifications.
Without this compliance, and without the support of the big companies seeing it as a way to reduce their costs (RH/IBM/Maya/Oracle), Linux would not have been able to substitute itself for the other proprietary UNIXes in the realm of "professional IT" - especially because big vendors made a pax romana around Linux concerning patent claims when contributing to the OS.
But by mimicking, and being driven by, standards bodies and foundations whose main stakeholders are proprietary vendors (HW/SW...), Linux has become something of a proprietary software itself.
(Just look at who the main ISO/IETF/IEEE/POSIX contributors are nowadays, and at the membership of the OSI/Linux Foundation.)
That also makes the assumption that there is only one "technologically most useful solution", when in practice every application has its own unique requirements, and trying to satisfy all of them just makes a system which is not particularly good at any one of them (systemd seems to be moving in that direction, not unlike other "enterprise" software.)
And that is why Unix as a concept has endured even in the face of opposition like Windows.
Because at its heart is a collection of tools that can be combined in whatever permutation that solves the task that the system user/admin has before him.
Possibly some backwards-incompatible change was introduced and nobody feels like having to maintain version checking down to minor releases. Systemd should learn how to run as a non-init service daemon.
[1] http://blog.darknedgy.net/technology/2015/10/11/0/