That is, every time I've reported something is broken, wonky, doesn't work reliably, et cetera, I've been told, "Submit a patch.", "Write some code.", or worse, "Implement it yourself."
Someone finally got fed up with the haphazard state of affairs in Linux-land. Fed up with the fragmented and scattered places you have to look for error logs. Fed up with the many files you have to edit to configure the network correctly (different on every major distribution). Fed up with the half dozen ways to configure X, where X is a common function of every modern operating system.
It seems Lennart has taken the advice and followed through, and distribution maintainers liked it. They liked the idea that someone was taking all this complicated work - this dirty code that is boring to write and maintain - and making their lives easier. Why else would nearly every distribution be on board?
Systemd is offering a more compelling solution than anyone else, and if you don't like it, well, you should submit a patch, write some code, or implement it yourself.
I installed CentOS 7 last night on a machine we're replacing CentOS 6 on, and was poked in the face with timedatectl and dbus problems for an entire hour, some of which were intermittent. Debugging these issues is a horrific pain. I lost 4 hours on it; I've never lost that much time on a system function before. This is not what I expected, and there is no way I could possibly introduce that to our production environment.
I think that might be why people are slightly sensitive to it.
Yes, you're exactly right, but replacing something with something less stable, more complicated and more difficult to debug isn't rational or good engineering. I'm sure many people will be fed up with systemd far quicker than they were with what was already there.
Not impressed with a community which pushes this as stable, quality software. Voting with my feet: FreeBSD is being trialled instead. WhatsApp throwing a million dollars at it draws a lot of valuable attention and puts it in the business's mindset.
Choice is as much of a valuable aspect of open source too...
Pulseaudio was Lennart's previous project. It broke everything in linux sound for a while, everybody moaned and hated it and said it was the worst thing since the crucifixion of Christ.
Yet, name one problem you've had with sound on Linux in the past year. There are very few. Pulseaudio now just works(tm) and is an unseen, unheard-of part of the plumbing.
If you remember what it was like messing with ALSA and (shudders) OSS before pulseaudio came along, you will agree that the current state of affairs is a million miles better. It used to be really difficult to get more than one application to play sound at a time. I remember compiling sound drivers from source just to get them working. Configuring ALSA config files to get surround sound working was practically a black art. Creating manual scripts that unmute the sound card on every boot because the driver didn't initialize it properly.
With pulseaudio, I never have to worry about any of that and configuring surround sound takes me two clicks of the mouse.
Lennart did a fantastic job with pulseaudio, he took on a dirty problem that nobody else dared to touch and went through years of criticism to produce a really high quality solution that solved the linux audio problem so well that you don't hear complaints about it anymore.
In light of that, I trust him to do a good job with systemd. It'll be a couple of years of everyone moaning and bitching and whining about it, then one day it will have become a seamless part of the plumbing, everyone will take it for granted and wonder how they ever managed fighting with shell scripts and fragmented init systems before systemd came along.
It's ironic that Lennart Poettering is probably the most abused developer in the entire OSS ecosystem, yet he is one of the people contributing most to it. For our sake, I'm glad he has such a thick skin. If I was him I'd have quit this game long ago.
That's just it. Linux sound worked fine for me before Pulseaudio, and FreeBSD sound has always worked perfectly fine for me. In fact, FreeBSD solved sound mixing sooner via /dev/pcm virtualization (while Linux chose to create the Linux-only ALSA instead), and has always had lower observed latency.
Pulseaudio screwed up my audio so badly that for a year I was running the closed source OSSv4 binaries and manually recompiling all the audio libraries to use OSS instead of ALSA/Pulse.
It is not fantastic to push horribly broken code onto the entire Linux userbase while others frantically jump in to help patch and fix the trainwreck.
And we're doing the same thing again with systemd. Instead of having a few years where users can choose between systemd, sysvinit, openrc or upstart, while all of the major bugs are worked out, we're being forced immediately from sysvinit (Wheezy) to systemd (Jessie). I was on Lennart's treadmill with Pulse, I'm not getting on it again with systemd.
Now PulseAudio was released into the wild too soon by too many distros BUT it has fundamentally fixed what was HORRIBLE in Linux. (Previously a Sound Engineer and Record Studio owner)
BUT I would say that Systemd is extremely stable and not broken. What people are complaining about is the philosophy aspect.
To be fair, I didn't say I never had Linux audio issues prior to Pulseaudio (whereas I did say that about FreeBSD.)
Back in '98, my SB16 ISA card would only output sound at 8-bit monaural under mikmod, and I could only play CD-audio with that passthrough cable between the CD-ROM drive and the sound card. Once I was able to get sound working well enough, the only way I was able to play MIDIs was through Timidity and SoundFont emulation. And until ALSA, there was obviously pain whenever two things wanted to play sound at the same time. This of course was due to the OSSv3 author changing the license before introducing his own audio mixing, and all of those awful sound server daemons (esd et al) never really worked, since there were multiple daemons and each application wanted a different daemon or just wanted to stab right at the OSSv3 ioctls.
But once ALSA was established and working, yes. Audio under Linux at that point worked just fine for me. Pulseaudio was a solution looking for a problem.
> (Previously a Sound Engineer and Record Studio owner)
I won't claim to be either of these. I like to listen to music while I write code, I'll occasionally watch some movies or play some games, and I want Pidgin to make a chime when someone sends me a message.
In particular, I'm very sensitive to latency in gaming (emulation), but that's about the extent of what I need speaker sound output for.
> What people are complaining about is the philosophy aspect.
To me, the worst part is the backroom politics, the complete disregard for portability, the lock-in effects of consuming other daemons and services, and the way software is being made dependent upon it.
However, I do also object to the design itself, as well as to the developers responsible for working on the project, and the attitude of disdain they present to the community at large.
The issue with ALSA was HUGE latency; using it for anything in recording was just not doable! I had to buy a closed-source solution under Windows. Today I could easily do it in Linux.
/* A */ sample = (sample_a >> 1) + (sample_b >> 1); // halves the volume of both A and B
/* B */ sample = max(-32768, min(+32767, sample_a + sample_b)); // keeps full volume but clips (saturates) on overflow
Playing this up as a bogeyman for not being in user-space is FUD, especially when video card drivers also run in kernel space, and are literally thousands upon thousands of times more complex and error-prone. And now the big push is to have kernel mode setting for video cards (even FreeBSD is doing this), which I believe to be a terrible direction to go in.
I have never in my entire life seen a system crash due to audio mixing, but I've personally experienced plenty of video card drivers causing kernel page faults.
If people were even remotely serious about the protection of kernel space (and I certainly wish they were), Minix would be more than a footnote in history. Neither Linux nor the BSDs make serious efforts at microkernel designs. Not even passive attempts to run non-critical device drivers under ring 1. Personally, I'm really rooting for Minix 3 and hope that it takes off more now that it's gained binary compatibility with NetBSD.
I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.
About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler... a quick duckduckgo search finds a short note about that on:
Also, X servers have traditionally needed direct PCI bus access to get the hardware initialized, which means a buggy X server can hang your PCI bus; so a driver running in user space likely doesn't increase reliability in practice.
It's an interesting question to what extent the limited success of microkernel-based UNIX implementations is due to historical accidents and network effects, and to what extent it's due to actual technical limitations and the additional complexity of a microkernel architecture.
Okay, my apologies as well then. It was hard to get a read from just that one sentence with the word kernel emphasized.
> (Do audio devices support floating point formats nowadays?)
Natively, no. You can be lazy and do it anyway in software mixing though.
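The "lazy" software path would look something like this sketch: normalize the integer samples to [-1, 1], mix and clamp in float, then convert back to the 16-bit format the hardware expects (the scale factors here are one common convention, not the only one):

```c
#include <stdint.h>

/* Mix two 16-bit samples via floating point: normalize, add,
   clamp to [-1, 1], and convert back to the device's native
   signed 16-bit range. */
static int16_t mix_float(int16_t a, int16_t b) {
    float sum = (float)a / 32768.0f + (float)b / 32768.0f;
    if (sum > 1.0f)  sum = 1.0f;
    if (sum < -1.0f) sum = -1.0f;
    return (int16_t)(sum * 32767.0f);
}
```

The device still receives plain 16-bit integers; the float format only ever exists inside the mixer.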
> I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.
Certainly, it would be ideal if everything non-critical were in user space. But audio in the kernel is probably at the very bottom of the list. Audio mixing is maybe 0.0001% of the kernel code, and is some of the safest, simplest arithmetic code imaginable. It's worrying about the one ant you saw on the counter when your entire house is infested with termites.
> About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler
I don't know if that's true or not, but I am running a DisplayPort monitor (ZR30w) now without KMS, and it works fine. Obviously the video driver is still running in kernel mode, but at least it's a module outside of the kernel itself that runs after my system is booted.
What I'd really like to see is distros and vendors instead relying on UEFI GOP for boot-time mode setting.
> Also X servers have traditionally needed direct PCI bus access to get the hardware initialized
Well, compare it to audio. Eventually even a userland mixer will have to send the samples through some sort of hardware interface. But if your goal is stability, then it would be ideal to get as much code out of the kernel as possible.
> and to what extent due to actual technical limitations and additional complexity of a microkernel architecture.
Certainly nothing is ever perfect. There are so many potential problems with computers. Cosmic rays can flip bits in your RAM if you don't shell out an extra $500 for the premium CPU, mainboard and ECC RAM. Strong enough power surges (lightning) can burn through and destroy absolutely any running computing equipment. Hardware can literally fail and take down your system. Things can overheat, there can be design flaws in the silicon itself, etc.
So I look at it like OpenBSD looks at security. You want to stack all the protections you can. Mirror your drives, use ECC RAM, don't run anything in kernel space you don't have to, try and build as much redundancy and safety as you can into the system. It won't be perfect, but every bit will help increase uptime.
So again, sure, audio should preferably be in user space. Just, it's many thousands of times worse that video isn't even trying to do this, and is in fact going in the opposite direction to become more tightly coupled with the kernel.
However, with one small caveat: servers don't generally have sound cards so the impact of this was relatively low. There aren't that many desktop Linux users out there. I mean I'm a Unix guy at heart and I'm typing this on a Windows laptop. I've never used Linux on the desktop and probably never will.
Now servers do have init processes and we don't really want to spend the next 3-4 years being guinea pigs. I'm quite happy for the vendors to do this behind the scenes or offer it as an alternative but we've got an RHEL+CentOS release with systemd in it already and a Debian with systemd in it just around the corner. A pulseaudio situation, even for 6 months, will result in no small amount of chaos.
I do indeed remember times before even ALSA when you had to pay OSS for drivers for your turtle beach card etc. But that's in the distant past, not right now and of little relevance. Windows was fine on the desktop then as well and the sound worked fine out of the box.
Then you're not really a Unix guy at heart.
At home, all I run is Linux, including the laptop my non-geeky wife uses.
For me it was a hard choice. I knew she would object because it would be "different" and she's not really interested in learning a gazillion different computing-systems, but on the flip side it meant it was simpler, quicker and less work for me to maintain the computers at home.
Once setup things just work, and ensuring everything (including flash and other vulnerability vectors) is up to date is one apt-get upgrade away.
I'm not sure that's a great vote of confidence for the road ahead of systemd (given that systemd presumably has a bit more to it than PulseAudio). To quote the article, "I do honestly believe this will end up being the start of a rocky period for Linux".
Ubuntu 14.04 will keep me happy for ~5 years, then I can take another look at what the current state of things are.
It is only since 14.04 that you have a small chance that opting to use pulseaudio is the better choice.
Just trying to get Skype working (which uses pulseaudio) cost me 2 hours last week, which is not at all nice when you have a call starting in 5 minutes.
Pulseaudio still won't detect the headphone jack on my old Intel board, and Skype on my newer machines on Linux will routinely fuck up playback.
One also wonders how much of the PA cleanup was handled by people that weren't Lennart.
In particular for problems of getting Youtube (or any browser audio) to work while other apps use JACK directly.
Although on a recent new install it seemed to work without them as well.
One problem is I need to start/stop the qJackCtl thing every time my laptop comes back from sleep, to get sound working again. There must be a way to automate (or, preferably, fix) this, right? Anyone know?
But, also to be fair - like you, I maintain my own systems and do not overly depend on the teeming-mass-reality as a derivation of stability. My personal Linux DAW systems, running now for decades, have attained a level of productivity that I would at least hope is represented in the current niveau, vis a vis Popular Linux Distro designed for audio (e.g. pure:dyne, Arch Pro Audio, 64 Studio, UbuntuStudio, et al.) .. for the newcomer, it should of course 'all just work' from boot-up, which I hope is the case. It is for me, anyway: I've expunged pulseaudio from all of my machines, and make do with Jack. My studio uses 48-channels of digital audio, everything-is-a-file .. a working and functional DAW, thousands of plugins, about 12 MIDI devices (synthesizers/effects rack) and so on, and the best thing of all: all source code included. So, yeah .. ;)
EDIT: apropos qjackctl, yeah, apmd:
.. or some such similar thing.
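For what it's worth, a sketch of that kind of hook as it might look with pm-utils (the path, the hook name, and the `jack_control` command from jack2 are all assumptions; adapt it to whatever actually starts JACK on your machine):

```shell
#!/bin/sh
# Hypothetical /etc/pm/sleep.d/99jack: restart JACK when the
# machine resumes, instead of toggling qjackctl by hand.

restart_jack() {
    # jack2's D-Bus controller; errors ignored if JACK wasn't up
    jack_control stop >/dev/null 2>&1
    jack_control start
}

case "$1" in
    resume|thaw) restart_jack ;;
esac
```

pm-utils invokes every script in that directory with `suspend`/`resume` arguments, so the case statement is all the dispatch you need.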
A month ago, on Ubuntu 14.04, I paired a Bluetooth speaker, but it would not send any audio to it (A2DP), with no indication as to how to diagnose or correct the issue.
It may be standard on some systems, but not, apparently on debian (it's an "optional" package).
And that's part of the point: stuff that needn't be present shouldn't be. systemd's a whole 'nother ball of wax in that regard.
And yes, I'll even allow that Linux audio has been frustrating over the years. But in my case, problems going away had nothing to do with Lennart's work.
I've been with Ubuntu since 2007 and went through the PA transition. I agree it is so much better now. Changing audio sources is easy and faster than on Windows 7. By Ubuntu 12.04 this was stable for me. Like changing from speakers to headsets for a meeting, smooth with PA at least on Ubuntu. Until PA I never thought I'd see audio united on Linux.
On my Debian system youtube videos stop playing sound once in a while (the video continues), though I suppose it's not pulseaudio's fault (so just a general sound problem).
You asked ;-) (I still agree with you that it got better than it once was)
In some sense systemd is more stable in that it's fixing some longstanding bugs with sysvinit, but of course it will have some bugs of it own. If you don't want to deal with that, you could skip a release.
There is a distinct lack of engineering prowess and quality control. It originates at the core GNU + kernel + freedesktop teams and waterfalls down through the distribution houses.
That's the problem and it's endemic within Linux.
Imagine you were an architect of buildings. Your day to day job is to design mundane strip malls and gas stations. You have building code on your side for much of the process. As long as you don't violate the regulation, you at worst can only make an inconvenient building, but not a dangerous one.
But imagine instead that you're building a large office building every six months, your clients demand you don't reuse any design principles for your future clients, and you not only lacked the building code, but also a quarter of the heavy machinery, half of the tools, and three quarters of the raw materials. I don't just mean you don't have them in stock, I mean nobody has invented them yet. And of the ones that have been invented, we don't even know all of their material properties, to say nothing of what material properties we should be looking for. Will this particular bolt we are using with these particular cross beams hold up to the stresses placed on them? The answer is unknown, and in some cases unknowable.
It's really easy as a user to say, "they should have tested this more". While strictly true that more testing may have found your issues ahead of time (presuming the right tests were done), it is inefficient engineering to exhaustively test things. Even mechanical engineering bases a lot on statistical modeling, which will always, always have corner cases that don't match reality.
In the real world, people had to learn the hard way about things like lightning rods and sacrificial electrodes. They didn't come about from "testing during development". They came about from testing live, and seeing which buildings and boats did or did not burn down or sink. That's not bad engineering. That's just the nature of unknown problems.
What the general state of affairs shows is the following traits:
1) There is no thought and research going into the design of a piece of software. Ergo, we do not learn from past mistakes.
2) There are isolated individuals writing vast swathes of software which are trusted unconditionally. Ergo, we do not learn the benefit of multiple eyes on a problem, review and discussion.
3) We assume that software is correct from one person's viewpoint and opinion. Ergo, we do not test software properly nor cover those tests with objectives.
4) We work to deadlines, not quality objectives. Ergo, we trade quality for tolerance from others.
In this case someone came along and didn't think about the problem, didn't work with others, assumed they were unconditionally correct and chose tolerance over quality.
To use your analogy, they're now selling stainless steel lightning rods (poor conductivity), are the only vendor of them, are a vocal marketing front and houses are catching fire everywhere.
Or, more specifically, in one example: in the entire process from the author to the distributor, no one even noticed that loginctl doesn't work properly.
On your points:
#1 is patently false, to the point of being extremely insulting. You've lost all sympathy from me at this point. Go peddle your baseless opinions somewhere the audience doesn't know better.
#2 is also laughably false, as that is the entire freaking point of open source, and it is often considered the greatest strength of Linux. You think that because your highly qualified opinion wasn't consulted before you had to spend a whole four hours, OMG, on a problem, that means no review or discussion was done?
#3 is false again, because software is tested. You use the word "properly", so I will sit here and wait for you to bestow upon us your great wisdom on what we could be doing better.
#4 is false on both presumptions that software is not built to quality standards instead of deadlines, or that other fields are not dictated by deadlines.
When I say tested properly, I mean tested completely. If you miss an entire functional unit of the software and a client reports it as broken, it's pretty obvious what the problem is.
Our senior software guys sat down for the other four hours and presented all our findings together and cumulatively said "we're not supporting that shit; we can't trust it".
Regarding #4, it's plain to see that RHEL was released with a broken systemd implementation due to a deadline...
I am still in the learning phase, but even I know that complete testing of any complex software is practically impossible.
So, how do you guarantee completeness in "proper" testing? I know you can't without redefining the word "completely". What's your definition?
Also see "Impossibility of Complete Testing" by Cem Kaner, co-founder of http://www.associationforsoftwaretesting.org/
When your system consists of functions "A, B, C, D", I'd expect to see test suites for "A, B, C, D". In this case there were test suites for "A, B, C". The client found D therefore the test suite was incomplete.
Now if a bit of A, B, C or D suites were missing that would be different and entirely expected.
As for testing--notice that when a lot of people here are reporting issues with systemd/pulseaudio, their reports are pretty much dismissed out of hand, or they're told "no, you've done something wrong".
For #2, a lot of times somebody with the right political position (say, Lennart at Redhat) or just the ability to shout louder and longer than anyone else will get something put in, regardless of technical advantage. Don't even try to claim otherwise.
1) Government contractors are so fun when their software is required to deal with certain parts of government.
Honest question: I've been using sysvinit for a very long time and I have no concept of what those bugs might be.
Assume a server with lots of processes.
Service A starts and writes its PID, say 123, to disk.
Lots of processes start and stop as the system goes along and does its work.
Service A crashes/stops working.
PID 123 gets reused by a new process.
The sysadmin comes along and hits /etc/init.d/ServiceA restart.
The shell script calls kill on the recorded PID, 123, which now belongs to a totally different process not at all related to ServiceA.
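A sketch of the guard that classic init scripts usually lack: check that the recorded PID still names your process before signalling it (Linux /proc layout assumed; the service name is a placeholder, and even this check is only a mitigation, since the PID could still be reused between the check and the kill):

```shell
#!/bin/sh
# Refuse to kill a reused PID: compare the recorded PID's process
# name against the expected service name before sending a signal.
PIDFILE=/tmp/servicea.pid
NAME=servicea

safe_stop() {
    [ -f "$PIDFILE" ] || { echo "not running"; return 1; }
    pid=$(cat "$PIDFILE")
    if [ "$(cat "/proc/$pid/comm" 2>/dev/null)" = "$NAME" ]; then
        kill "$pid"
    else
        echo "stale pidfile: PID $pid is not $NAME"
    fi
    rm -f "$PIDFILE"
}

# Demonstration: PID 1 certainly exists, but it is init, not
# "servicea", so safe_stop refuses to signal it.
echo 1 > "$PIDFILE"
safe_stop
```

systemd sidesteps the problem entirely by supervising the process itself (via cgroups) instead of trusting a number written to disk.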
Clean unmounting that doesn't depend on timeouts being set high enough.
Not starting a database before the filesystem with the database files is mounted.
I created a specialized FUSE filesystem to deal with this. Processes create PID files in it, but when they die, the filesystem automatically removes them.
The Readme is rather sparse; could you add an example of how to use it from an init shell script?
Although the Capsicum model (in FreeBSD, slowly getting into Linux) where you can have file descriptors for processes is another different model.
This was solved decades ago with numbered init symlinks:
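Presumably something like this is meant; a runnable sketch of the mechanism (directory and service names are made up): rc runs the `S`-prefixed links in lexical order, so `S10*` starts before `S20*`.

```shell
#!/bin/sh
# Simulate SysV rc ordering: links named SNNservice are started in
# lexical (and therefore numeric) order, which is how "mount the
# filesystem before starting the database" was traditionally expressed.
RCDIR=/tmp/demo-rc2.d
mkdir -p "$RCDIR"
touch "$RCDIR/S20mysql" "$RCDIR/S10mountfs"

for link in "$RCDIR"/S*; do
    echo "would start: ${link##*/}"
done
```

Note this only guarantees that the mount *script* ran first, not that it succeeded, which is part of the reply below.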
So I would say it has been hacked around for decades, not cleanly solved. But I am not the best informed here, so please add more details about how numbered init symlinks guarantee the filesystem is there before a service is started.
It took a long time and a giant ecosystem to get where Linux is today at big enterprises. OSes are commodities in that space. They are not commodities in many other spaces though (e.g. startups, HPC, science, etc).
Whoever is pushing for this is an idiot then. Verisign, for example, has had 100% DNS uptime for the .net, .org, and .com TLD servers for ~15 years because of their mixed environments. In every one of their POPs they tend to have at least two racks of equipment with:
* 2 different brands of load balancers
* 2 different brands of firewalls
* 2 different brands of switches
* 2 different brands of servers
** servers are from different hardware generations
* 2 different OSes (Linux and FreeBSD)
* a choice from 3 different DNS server software
This is how you run a reliable global-scale service. Anyone who plays the "it's just easier if we all use ____" is in for a big surprise when their entire infrastructure is at risk due to one bug.
As long as a sufficient fraction of servers at a sufficient fraction of Verisign's clusters has an uncorrupted set of data and is able to serve responses, Verisign's TLD zones remain up.
Pretty much the only thing that can go wrong in that case, assuming you have safeguarded the integrity of the zone, is bugs in components outside their direct control.
It makes 100% sense for them to focus their efforts on ensuring diversity, because the class of problems diversity can solve for them makes up an unusually high percentage of the possible failure classes. The nature of the service also means that most of the potential problems it can cause would only take out some proportion of their capacity, still leaving them with a functional system. So the potential benefit of a heterogeneous setup is higher for them than for most, and the potential risks are lower for them than for most.
For Google and Facebook, the systems are so much more complex that the tradeoffs are vastly different.
Which excludes the vast majority of functionality of Google/Facebook, and most other major web properties.
There are very few heterogeneous systems in the enterprise. That is an objective, but the main thing is that we deliver what we're paid to deliver by choosing an appropriate platform. We have Solaris, zSeries, Linux and Windows. We just got rid of AIX.
As for minor differences, FreeBSD has a lot of much bigger wins than people realise at first glance. The differences are far from minor. For example:
ZFS, dtrace, rctl, a scary good IP stack, virtio support, documentation that doesn't suck, a POSIX base, LLVM/clang, a MAC framework that doesn't suck, OpenBSM, CARP and a pile more. Oh plus an automated deployment story that is pretty tidy.
Sure we can replicate some of these on CentOS 7 for example with similar tech but the above are a million times more cohesive.
Unfortunately when the first and second time differ even though identical (recorded) steps were performed, one has to ask the question: why and can I trust it?
My rule of thumb is "search for the problem on Google. If nothing comes up, maybe something is wrong on my end".
Did you find any results or reported bugs similar to what you experienced?
Yes there were other mentions of it with notes to it being fixed in a later systemd drop, which we can't deploy because RH/CentOS don't ship it. I think one of our guys raised a case with RH but I was dragged off onto something else then.
I have my share of reservations about systemd (and PA), but thought that it might be worth pointing out that "known good" hardware A with software X, doesn't have to mean hardware A is all good, just that A has no bugs/errors not exposed when running X. So Y comes along (new kernel, drivers?) with entirely new code - and suddenly things behave erratically.
I don't have the error on my phone which I'm on at the moment but it threw a dbus error with no debug info.
I imagine 10 years ago you would be the person complaining that GCC segfaults randomly while compiling the Linux kernel, complaining that it's not "tested completely" - while the segfault was actually caused by the CPU overheating (not cooled properly) and flaky memory (causing bits to flip).
Just because a problem is unusual, intermittent, or only affects one person doesn't mean it's not a regular old software bug. And in my experience, it almost always is. And once you do debug it, you often (but not always) understand why it was intermittent, under what conditions it happened, and why you were the only person that saw it.
Nope. Not that.
We don't buy crappy hardware or not test it.
Where I think the systemd-naysayers have a valid point is around the tight coupling that has been introduced, and is still being introduced, between systemd and various other components of a fully functional Linux system.
To take your "just submit a patch" example - say N years from now I'm unhappy with some aspect of how systemd works. I can submit a patch, or I can rewrite that whole component from scratch. However, it's entirely possible that the piece I'm unhappy with is so tightly coupled to the rest of systemd that I can't rewrite one component of it without rewriting the rest of systemd, or convincing the systemd maintainers to accept my rewrite and bake it in as the new "official" version of that component.
Where I think the criticism of systemd is valid is that the idea of modularity has taken a backseat, and the APIs between the different components of systemd haven't been very well-thought-out. The informal spec is "whatever systemd does today is correct", which of course destroys any sort of interoperability.
And by way of full disclosure, I'm an Arch user, and run systemd on 4 systems I use everyday - home desktop, home server, work desktop, work laptop. Whatever else I have to say about its design, I use it every day, and actually like the parts of it that I use. eg, the boot time for my desktop is stupidly fast, and if I want to know about some log message, I just run journalctl. I no longer care whether the foo daemon uses syslog, or writes to its own /var/log/foo.log that I should set up rotation for, or handles its own rotation as /var/log/foo/2014-11-20.log, and so on.
And just to play devil's advocate with my own position - there's a certain point where tight coupling makes sense. Linux kernel modules, for example, are tightly coupled to the Linux kernel, and don't work unmodified when compiled against a *BSD or Solaris kernel.
Plus: This tight coupling did not exactly replace existing communication features. It created new ones. These are made use of.
Yes, systemd is bringing lots of new functionality. Under the hood - that is why sysadmins love or loathe it and users mostly don't care. That "tight integration" argument is mostly one that comes from people (please do not take offence, you're weighting it carefully indeed!) who bemoan that other userspace system infrastructure is left behind feature-wise. And those who love to argue about and against design decisions.
Sounds eerily like "Embrace Extend Extinguish" redux.
Don't get me wrong, I am aware systemd is a technically superior solution. But politically, it is a trainwreck.
Sure, but the coupling was contained. You could still run Gnome on any distro (or on non-linux), whichever way around your init scripts were.
It's in RedHat's interest for software that's currently portable to FreeBSD or especially to Solaris to become tied to Linux. This wouldn't be the first time RedHat has adopted anti-opensource methods out of fear of Oracle - compare their policy of deliberately obfuscating the history of their kernel source.
Submitting a patch, implies you agree with the general direction but need a bug fixed or a feature added.
Humm, I know this will disappoint you, but we are not particularly
interested in merging patches supporting other libcs, if those are not
compatible with glibc. We don't want the compatibility kludges in
systemd, and if a libc which claims to be compatible with glibc actually
is not, then this should really be fixed in the libc, not worked around
As for forking the whole thing, remember that logind was briefly liberated so it could be built as a standalone package; Lennart then did a big rewrite, so the next version was much more integrated with systemd. When he controls the internal APIs and can change them whenever he wants, a clone will have to be a total replacement right from the start, or it ends up perpetually having to catch up to the changes that will be introduced just to cause breakage.
Why should the Systemd team pay the overhead - in terms of complicating their code - to work around incompatibilities in another libc that will also affect portability of a lot of other Linux software?
We're not talking about asking for some new work to be done. We're not talking about any kind of change to how the project works.
This is about trivial changes like #defining a function name that aren't even included in the build unless you were using that libc. It is actually rather surprising behavior to see in a publicly-developed project. This kind of fix is so common that we've created tools such as "cmake" and "autoconf" to handle the common cases and make #ifdef-ing easier.
I wish more projects would take this line.
Autoconf is the devil. It's a symptom of how broken Unix-y environments have been, and of how willing people were to impose a massive maintenance cost on countless application code bases instead of either pushing their vendors to get things right, or agreeing on common compatibility layers.
In this particular case, mkstemp() is not a viable replacement for mkostemp(). A proper fix is to provide mkostemp() in uClibc, or to compile with a shim that provides it.
Arguing over whether including the shim in Systemd would be acceptable would be a different matter, but parts of the patches as presented were flat out broken.
And the Linux kernel is not starting to depend on systemd or the others. The Linux kernel is moving towards demanding a single cgroups writer, and at the moment Systemd is the main contender in that space.
That Systemd is depending on Linux is unsurprising, given that they stated from the outset exactly that they were unwilling to pay the price of trying to implement generic abstractions rather than taking full advantage of the capabilities Linux offers. You may of course disagree with that decision, but frankly, for a lot of us, getting a better init system for the OS we use is more important than getting some idealised variation that the BSDs could use too.
> an architecture that promotes this very lock-in to begin with.
The "architecture that promotes this very lock-in" in this case is "provide functionality that people want so badly they're prepared to introduce dependencies on systemd".
At some point enough is enough, and sub-optimal advances still end up getting adopted because the alternatives are worse. Systemd falls squarely in that category: I agree it'd be nicer if it were presented and introduced in nice small digestible separate chunks with well-defined, standardised APIs, so that people could be confident in their ability to replace the various pieces. But if the alternative is sticking with what we already had? I'll pick Systemd, warts and all.
Looking at posts from the Gnome people, the original intent appears to have been to provide a narrow logind shim exactly to make it easier to replace logind/systemd with something else. If someone feels strongly enough to come up with a viable shim or an alternative API that can talk to both systemd and other systems reliably, then I'd expect Gnome to be all over that exactly because they will otherwise have the headache of how to continue to support other platforms.
The problem is that Gnome has for a long time depended on expectations of user session management that ConsoleKit on top of other init systems has been unable to properly meet, so Gnome has in many scenarios been subtly broken for a long time.
As to logind, it may have been a better choice for the long term to do a separate implementation of the public and stable logind DBus API instead of trying to run the systemd-logind implementation without systemd as PID1, but supposedly whoever did the latter thought it was the best short-term choice.
Most being: this is not the issue the loudest voices say it is.
Second - while it will provide an alternative that helps frame the debate, this is not a minor undertaking. With every other distribution caving, maintaining a distribution that does not use Systemd will require a lot of work to keep all of the software out there working properly with whatever alternative init system it chooses to use.
This alternative distro is also going to have to deal with how to solve the init problem. We had some good options in play but I don't believe we'd found the best answer to the problem yet when Lennart came bowling through like a bull in a china shop. So any distribution effort is going to have to take on the role of choosing the best of breed alternative and make the effort to ensure it continues to develop and improve.
This isn't something you take on lightly.
Or go back to using Windows for general use and software development, which is in fact what I've done. It's amazingly sad after many years of being a strong Linux supporter, but this has killed Linux for me. I see no point in continuing to use it.
Tailored as in platform-specific. For cross-platform stuff, it's not so good.
Color me surprised.
But that's not at all what Torvalds's quote means. He meant that, if FreeBSD had been available, he would have contributed to it and improved it for daily use, rather than building Linux. (The state of FreeBSD in this hypothetical world has no bearing on the state of FreeBSD today.)
In terms of FreeBSD today, while it's possible to use it daily, it suffers from even worse driver issues than Linux does, and from all the problems of simply having a much smaller market share (both in contributing developers and users) than Linux does.
This may or may not be 'good enough' for OP's purposes, but it's disingenuous to suggest that Torvalds's hypothetical from the early 1990s implies that FreeBSD is a clean substitute for end-user Linux today.
No it does not. It has fewer driver issues by far. Because it does not have broken half-assed binary only drivers by obscure vendor X that don't actually work. Unsupported hardware is simply unsupported, rather than broken.
I've always found it interesting that Nvidia offers a more complete and stable BSD driver than its GNU/Linux counterpart. That said, AMD/ATI support is abysmal, and even Intel video is lacking compared to GNU/Linux.
> Unsupported hardware is simply unsupported, rather than broken.
That's a matter of interpretation. If FreeBSD doesn't support my hardware, it's the equivalent of being broken for me, given that I can't use it anyway. That said, I try to build or buy the most OS-agnostic workstations possible so my options are always open.
Not so sure about that, or maybe it depends on the situation. For instance, I've been running FBSD and Linux in VMs (specifically, Hyper-V/Win 8.1 on a Surface Pro 2).
After updating to FBSD 10.1, I decided to try the Lumina DE (from PC-BSD). I've been surprised at the performance of the GUI under the constrained memory and CPU availability. It's about as good as the host (Windows), albeit running minimally demanding applications.
OTOH Linux versions (SUSE, CentOS) have been much more sluggish and GUI usability much lower. I realize this is impressionistic and hardly a deep analysis. Nonetheless, I think it points out that it's risky to make assumptions when circumstances and system requirements are so tremendously variable.
Then you're abstracting away from the video hardware, and not getting the same results as you would on bare metal. The only thing really lacking in Intel video versus Nvidia is proper KMS support; Intel video on FreeBSD works generally well otherwise. The FreeBSD Nvidia-provided driver, while closed source and binary only, is more or less feature complete.
That's the point. People like to pretend Linux has more hardware support, but mostly what it has is broken drivers for obscure buggy hardware that you can't actually use. The "well supported stable hardware that actually works" list is practically identical between them.
This is why choice is good, and lock-in is so bad.
As an aside, I've heard it theorized that part of the reason Microsoft tends to do massive GUI facelifts every few releases, is to keep the Windows/Office training industry going strong.
But wouldn't the path of least resistance be to switch to a project that does not have this "haphazard state of affairs"?
When I originally tried Linux I got fed up within _days_. It is the _relative_ lack of default "configuration" (that is decided by someone else) that makes me stay with FreeBSD and NetBSD. Of course, lack of default configuration is the antithesis of popular Linux distributions. Whenever I have to use one, I spend more time learning how to turn things off than I ever did learning how to turn things on.
The answer to the original question is, I think, "no", switching is probably not the path of least resistance for many Linux users. Because when the Linux user makes that switch, they immediately find that someone has not done everything for them.
And from what I have seen, observing the questions of Linux users who first try FreeBSD or NetBSD, they generally do not like that. It means they have to do some configuration of their own. And even if they are comfortable doing configuration, it means they have to learn things that are different from the "Linux way"; and they inevitably encounter shortcomings that are due to lack of developer resources (read: time).
In doing things for yourself you learn about how things work. The rc.d system that all BSD projects use is coherent and relatively easy to understand. For whatever that is worth.
This debate over systemd seems to cut to the core of the value of learning about how things work. The reader can draw their own conclusions.
Linux is only a kernel, and it should still be possible, and thus optional, to run that kernel with a basic init (or init alternative, e.g., one based on daemontools) and with userland utilities that do not need systemd.
The question I have is how difficult the popular Linux distribution folks are going to make that for their users to do.
And if they do make it difficult, it raises the question, "Why?"
This informal fallacy is based on the idea that everything in the world can be distilled to a single answer. The real answer is more complicated. For example:
Red Hat is on board because they pay its creator's salary. So they rely on the bias of an individual.
Debian is on board because they rely upon 'collective wisdom' and committees to make decisions. So they rely on the bias of group thinking.
Ubuntu is on board because Debian is on board. So they rely on the bias of the other.
Other distributions are using it because 'every other distribution is using it', or they're small enough that it doesn't cause conflicts for its use base, or because it's a GNOME dependency, or because it's just new technology.
To make someone think something is a good idea, show them someone else thinks it's a good idea. This is a fact of all human beings' thought processes. Decisions are not based on merit, or logic, or even a quorum; they are based on fallacies created by heuristics. There is a heuristic by which the more widely an idea is adopted, the more other people think it is a good idea.
We imagine our thoughts are logical, and that other people also think logically, and that their decisions must be made for a good reason. But in fact, the great majority of all decisions we make are based on guesses; this is how our brains are able to carry out complex calculations and come to decisions in split-seconds.
For example, you might look at systemd and say, "it fixes so many problems! it provides so many features! it standardizes Linux! CLEARLY this is superior. we must adopt it."
For people who care about the purity of the highest technical ideals, this makes sense. For people who care about being able to use their computer, these things don't matter, and the systemd implementation actually makes things worse for them. The changes systemd purports to make are not bad things; it's really the way in which they were made that is bad.
It's like wanting to upgrade your bicycle to four wheels, but requiring the rider to operate it lying flat and using mirrors to navigate. The four wheels were a great idea. Using mirrors to navigate? Maybe not so great.
Of course, its creators will turn this inconvenience into a feature, saying "you get to lie down! it's therefore more efficient and easier to use!", completely ignoring how other people want to ride a 4-wheeler.
Have you ever run a linux box? How about dozens or hundreds of them in a production environment? I'm guessing no on both counts based on the nonsense you're spewing forth in your post.
If you don't know what you're talking about, it's best to just keep quiet.
If you absolutely MUST run Linux, my recommendation is to minimize interaction with the base distro as much as possible. CoreOS (when it's finally baked and production-ready) can bring you an LXC-based ecosystem.
Systemd is absolutely key to how CoreOS works. It's the basis for the distributed init system it provides — a major selling point.
Taking any of this blog's advice would be harmful. I'd suggest a better approach would be to accept that the majority of distributions have settled on systemd and that generally this decision has not been made by idiots. So it would be worth either understanding what their pain points are and how they can be solved with an alternative to Systemd, or to help solve the issues that are apparently in Systemd yourself.
But not because I have anything against Systemd. I love Systemd so far.
It's hilarious that he's proposing CoreOS as an alternative, given that it's one of the most radical rethinks of a Linux distro out there.
The problem here isn't change, or re-thinking linux. The problem is re-inventing the wheel, and doing it poorly.
CoreOS uses systemd, but it's not a distribution in the classic sense -- rather it's a platform for containers. The narrow use-case for systemd here removes some (most? all?) of the concerns.
Whilst I agree that the blog is probably hokum, there's nothing wrong with critical thinking.
The answer "let's throw everything out" isn't that useful; likewise, dismissing the considered opinion of lots of people who have been doing this sort of thing for a while needs to be done with some rationality. An empty, bandwagon-jumping appeal like this adds little value and just helps spread more misinformation.
Moving all the chess pieces at once, which is what is happening, is not productive, professional, or a sign of experience.
A lot of the bug descriptions are quite scarily bad when you consider them in context such as "various loginctl commands not working" etc.
That's the point – if systemd has important bugs, they should be fixed. Clearly, the groups responsible for the decision have concluded that the tradeoff is worth it, and have accepted that a large, fundamental change will have issues. That's fine – there are a bunch of other distros that have not adopted systemd, which you can use in the meantime if you disagree.
People are shipping production operating systems with systemd that is chock full of bugs.
An all consuming tentacle monster like systemd is fine if you want to dogfood it but to throw at paying customers and/or supporters of your distribution is a little off key.
A person is smart. People are dumb, panicky dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.
I mean all these movie quotes are all cool sounding but are quite shallow.
Just because some thought has to go into the interpretation doesn't make it shallow.
IMHO I kind of shrug at this, since Unix was never really all that great to begin with. Unix won because the only commercially viable and well supported alternative was Windows, an OS that was (and in many ways still is) significantly worse especially for server and embedded applications. Everyone rallied around Unix and especially free/open Unix as an alternative, and so here we are.
It's also tough to compete with free, and Unix OSes got a huge boost from both Linux and the various free flavors of BSD. Yet that boost came at the expense of things like BeOS, Plan9, original NeXT, and the OS I still feel is hiding behind the JVM ... which for their day represented fresh ideas that might have gone somewhere.
Ultimately I think the existing Unix paradigm is going to be killed by Docker and mobile OSes that containerize in similar ways, and I'm not sure this is a step forward. It escapes much of the ugliness and the poor permission model of Unix, but it does so by handing virtually everything to the app. Docker containers (and mobile apps) can be thought of as something almost akin to giant statically linked binaries. We're getting more monolithic and coarse-grained.
That's because there were other components handling those tasks, like inetd and /etc/inittab. I do like having Upstart handle respawning for me, though.
The service name entry is the name of a valid service in the file
/etc/services. … For UNIX domain sockets this field specifies the
path name of the socket.
The protocol must be a valid protocol as given in /etc/protocols.
Examples might be “unix”, “tcp” or “udp”. … A protocol of “unix”
is used to specify a socket in the UNIX domain.
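The inetd mechanism quoted above maps fairly directly onto systemd's socket units. A hedged sketch of the correspondence, with made-up unit and binary names (this is not from the thread, just an illustration of how one inetd.conf line splits into two unit files):

```ini
# Hypothetical analogue of an inetd.conf entry such as:
#   echo  stream  tcp  nowait  root  /usr/bin/my-echo  my-echo

# --- echo.socket ---
[Socket]
# TCP port to listen on, much like the /etc/services lookup
ListenStream=7
# spawn one service instance per connection, like inetd's "nowait"
Accept=yes

[Install]
WantedBy=sockets.target

# --- echo@.service (template, instantiated per connection) ---
[Service]
ExecStart=/usr/bin/my-echo
# the accepted connection is handed over on stdin/stdout, inetd-style
StandardInput=socket
```

With `Accept=yes`, systemd behaves like classic inetd: the listening socket lives in PID 1's domain and the service binary never needs to know about sockets at all.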
The benefits far outweigh the risks (imo obviously)
What generally annoys me are things like supervisor and the other tools people use to "auto restart" services; these don't exactly integrate nicely and put stuff all over the filesystem, etc. I like that systemd includes that and does it mostly properly.
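The built-in respawning mentioned here is one of the simpler wins. A minimal sketch (the service name and binary path are invented for illustration):

```ini
# Hypothetical /etc/systemd/system/myapp.service
[Unit]
Description=Example daemon with built-in respawning

[Service]
ExecStart=/usr/local/bin/myapp
# this is what supervisor/daemontools are usually bolted on for:
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target
```

Two declarative lines replace an entire external process supervisor, and the restart policy lives in the same file as the rest of the service definition.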
Hi, I'm a sysadmin who's fed up with neckbeards (most of whom apparently don't know much and refuse to learn) claiming to speak for all sysadmins on this topic.
> large risk and little reward.
It's four years old, and claiming "large risk and little reward" is like listening to someone claim that moving from sendmail to postfix would be a disaster.
Perhaps if you're tired of neckbeards speaking for all sysadmins, you should return the favour and not declare what all neckbeards are saying. A lot of old, experienced admins are for systemd. It's not the young go-getters who are at the top level of distros making the foundational architectural decisions, after all.
There are some things I've wanted reliable and consistent mechanisms for so long: starting/restarting/inspecting services, isolation/resource limiting, socket activation, log collection.
One of the huge benefits of the Unix/Linux, CLI, and Free Software traditions is that they tend to be very strongly preserving of established knowledge. Changes are incremental, usually additive, a reliance on scripting means that interfaces are unlikely to change, and new tools are very frequently drop-in replacements for old.
As specific examples:
I first learned editing under BSD vi in the mid 1980s. In the time since I've learned and used on various PCs (and a few other systems): WordPerfect, WordStar, MacWrite, AmiPro, several iterations of MS Word, the EDT and EVE editors under VAX, the TSO-ISPF editor, and a few others under Unix: emacs, ae, nano, nedit, Abiword, Lyx, and various iterations of what's now LibreOffice. Most of that skill-acquisition is now dead to me -- the tools simply aren't available or aren't useful.
I'm no longer using vi, but vim (adopted in the mid 1990s as I switched to Linux), but the basic muscle-memory is the same. And it's an editor I can use across a huge number of systems (though I do admit to finding traditional vi / nvi painful).
Similarly, the bash shell is an iteration on the basic Bourne and Korn shells.
ssh is a drop-in replacement for rsh, to the extent that /usr/bin/rsh is typically a symlink to ssh. While the dynamic is slightly different from telnet, it's still pretty similar with a few exceptions.
On the rare occasions when a utility changes its command-line options, you'll virtually always hear about it. The fact that it's so painful (and tends to break decades-old scripts) means it's generally avoided. Authors who make a point of doing this tend to find that people avoid their tools.
A bigger point is that forgetting stuff is often much harder (and more important) than learning stuff. And when you're invalidating long-established patterns, that's really painful.
There's also the fact that we manage technology by managing complexity, and most of us in the field work at the limits of our ability to manage the complexity we're faced with: the basic OS, shells and interpreters, hardware, vendors, hosting providers, management tools, employers, clients, customers, co-workers, engineering and development teams, services, abuse and security concerns. It's a really complex and dynamic field.
Linux has done quite well (with a few notable exceptions) at maintaining a balance between capabilities provided and complexity imposed. One problem is that as systems become more complex, the additional benefits of yet more complexity are lower, and the costs are higher (this is a very general rule, not specific to Linux, operating systems, or computers).
The question of how to introduce radical change is a key one. I've seen a number of failed attempts to drastically revise existing systems in place -- this almost always fails. Linux itself wasn't introduced in this way -- it emerged as an alternative to both "traditional" proprietary Unices, to Big Iron (mainframes, VAX), and Microsoft's then-new WinNT. Linux ended up dominating virtually all of these categories, but it did so by incrementally beating out the competitors through replacement.
An interesting space where a lot of this comes to a head specifically is in the graphical user interface field. I've noted several times that Apple, notable for a great deal of success in this area, has been exceptionally conservative in its GUI development. It's effectively had two GUIs, the initial Mac System interface, and Aqua. Each has had a roughly 15 year lifespan, and yes, there was incremental improvement over the span of both, but the essential base remained the same.
Since the early 1990s, I've watched Unix/Linux go from twm to fvwm, Motif/mwm, VUE/CDE (a "corporate" standard based on Motif plus a desktop), Enlightenment, GNOME, and KDE, and now alternatives such as xfce4 and ... oh, that funky graphics thing Suse's got, as the "primary" desktops. GNOME and KDE themselves have gone through about three major revisions. And there are a number of other "lesser" more minimal desktops as well -- I use one of these, WindowMaker, which is actually based on a late-1980s ancestor of the Aqua interface now used by Apple.
Microsoft's experienced some similar recent tribulations. As has pretty much every online site ever that's done a site redesign.
As jwz has observed: changes to GUIs just don't offer that much win. They're highly disruptive, they're possible because the interfaces generally aren't scripted (other than via automated QA testing systems, but that's another story), but more importantly: the productivity benefits granted users really aren't that significant, especially regards the cost.
Worse: changing an existing interface leaves users in a no-recourse situation, especially in the case of SAAS. For Linux and systemd, the options are slightly more open in that (for now) it's possible to disable or block systemd from installing in at least some cases. But over the long run, it may be that the only options are voice and exit, as opposed to loyalty (a reference to Albert Hirschman's book and concept Exit, Voice, and Loyalty, which I recommend looking up).
So yes: those of us with numerous decades of experience in the field often do have an extremely jaundiced view toward radical change. And with very good reason.
But your comment is really unwarranted.
I really have the feeling that people are using double standards here, especially when suggesting Solaris or Solaris-derived systems. Since systemd is implementing pretty much what has been in Solaris (SMF) and OS X (launchd) for a while now:
Also, it is of somewhat questionable ethics that members of the Solaris community submit such troll posts (as others have pointed out, there is not much substance there). It reeks of wanting to destroy Linux' image for your own (Illumos, SmartOS) gain.
It assumes that this is a troll post - which I don't think is fair. The author has concerns that are legitimate to them, and outright dismissal as a troll, whether or not you agree with them, is petty and judgmental.
Second, you are somehow conflating dislike of systemd with love of sysv init. The cognitive dissonance here only makes sense to me if you believe that systemd is perfectly fine, and think that the only reason people dislike it is because it's different.
However, if someone is recommending a solution that utilizes SMF, is it such a stretch to think that it might not be because they are in love with sysv init, and instead might think that the implementation of systemd is lacking?
I personally like the underlying idea of SystemD - because I like SMF. I do not like the implementation of systemd, and also have reservations about the people helming the project.
SMF does not seem to want to own every bit of my Linux machine, however.
It's not that I don't like systemd, it's that [insert affiliated party] is way too cocky
It blows my mind to see people regress so far back into arguments that this because an issue of emotions in a technical debate.
It's a matter of having observed similar behavior in other projects which went similarly off the rails.
Poettering's own track record with Pulseaudio comes to mind. There's also the GNOME project, which I identified as actively intelligent-user-hostile around 2004. It's been somewhat gratifying to see that particular perception bear out with time.
There are other projects which have shown similar levels of arrogance, though mostly with more limited and self-contained damage.
And being prickly or hard to deal with has shades. Neither Linus nor Theo de Raadt are pussycats, but both focus very much on technical issues and are generally highly responsive to specific technical complaints. Sure, they make mistakes and bad calls occasionally, but on balance they've tended to get things right.
The attitudes expressed by Poettering and Sievers in particular aren't simply cocky, but contemptuous. And they're getting called on it. Including by Linus.
I could give a shit about personalities themselves, I really could. For the most part I really don't care how socially awkward someone is if they're good at their job. And if they don't start going out of their way to do harm to me or others. Personality disputes in discussions bore the piss out of me.
But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.
Then stop poisoning the well.
But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.
The problem I see is that most arguments against systemd are first and foremost about Lennart Poettering. And when technical reasons are brought forward, they can all be summarized as: does not conform to the UNIX philosophy (monolithic, replaces existing tools with tightly-coupled equivalents, binary logs).
I think that a reasonable argument can be made that, with the exception of binary logs, these things are true for many UNIXen. You will find only a few people who would say that BSD does not conform to the UNIX philosophy. However, the BSDs have the aforementioned traits as well: developed by one project and tightly coupled (e.g., you cannot just take most BSD utilities and libraries and compile them on Linux or Solaris; it requires serious effort).
People have always argued that this is a good trait of the BSDs (and I agree to some extent), because it allows better integration and use of BSD-specific features.
However, when systemd does it, it's suddenly violating the UNIX philosophy.
I've dithered on whether or not to respond, but this bugs me.
Your response, again, typically of many systemd supporters, looks at the option of responding to the relevant points of my argument (personalities can have relevant technical consequences), and dives to the personality dispute "stop poisoning the well".
I'm not poisoning the well. I'm pointing out that the well has been poisoned.
The elements of the Unix philosophy which you allude to exist for good reasons, and violating them imposes very high costs. This is a lesson that those of us who've been around for a while, and have multi-platform experience (check on both counts for myself) are well aware of.
Monolithic systems resist piecemeal replacement. Generally you've got to toss the whole mess out. Pluggable systems avoid that. There are instances in which monolithic design does seem to be at the very least hard to avoid, but you'd best be very aware of this and defend your position well. Systemd violates this principle by adopting a gratuitously monolithic design and explicitly refusing compatibility and modular alternatives.
Tightly-coupled systems are similarly brittle. The classic case of this is probably the Windows platform as a whole. Among the best arguments for loose coupling comes from Steve McConnell's 1990s classic Code Complete (ironically, McConnell was a Microsoft developer). I strongly recommend you read the relevant sections on tight vs. loose coupling.
Binary logs (and binary file formats in general) preclude use of alternative tools. The Windows Registry (again from Microsoft) comes to mind. One of the better hacks of this I know of are Unix/Linux compatibility systems which treat the registry as a filesystem interface. This originated with UWIN (from Steve Korn of AT&T and Korn shell fame), and has since been adopted by Cygwin. The ability to grep the registry, process it with scripting tools (sed, awk, perl, etc.), and modify it (using specific commandline utilities offered for the purpose) makes dealing with that particular hairball _slightly_ less annoying. The lack of self-documenting formats for registry values themselves (a trait shared by GNOME's gconf system) is another fatal flaw.
Even packaging formats are subject to this. Red Hat (gee ... aren't they involved with systemd....) designed a binary file format for RPM which requires specific tools to unpack. Joey Hess's 'alien' links to the RPM libraries for this purpose, and a set of Perl tools I'm aware of has to apply specific offsets (varying by RPM version) to extract data from the files. Contrast this with Debian's DEB format: tarballs packed in an ar archive. This can be unpacked with standard shell tools, or busybox.
Putting together modular design, loose coupling, non-binary formats, and standard tools, I've more than once rescued Debian systems that failed to pivot-root from the initrd by breaking into the initrd shell and unpacking and installing DEB packages with shell tools, helped along by an interactive shell, busybox, and the DEB format. I'm thwarted on several levels from a similar recovery on Red Hat systems by the use of a special and explicitly noninteractive shell in their initrds (which is larger than Debian's 'dash' used in the same role), and by the binary format of RPM packages. Working in cramped quarters and difficult situations, I can assure you which system I'd prefer to be working with.
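The recovery scenario described above rests on the fact that a .deb is just an `ar` archive wrapping tarballs. A self-contained sketch, building a toy package first so it runs anywhere (all names and contents here are made up; a real .deb from dpkg-deb has the same three-member layout):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Construct a minimal .deb-shaped archive by hand:
# debian-binary, control.tar.gz, data.tar.gz inside an ar archive.
printf '2.0\n' > debian-binary
mkdir -p pkgroot/usr/share/doc/demo
echo 'hello from demo' > pkgroot/usr/share/doc/demo/README
tar -C pkgroot -czf data.tar.gz .
echo 'Package: demo' > control
tar -czf control.tar.gz control
ar rc demo.deb debian-binary control.tar.gz data.tar.gz

# The rescue step: no dpkg needed, just ar + tar (busybox-grade tools).
ar x demo.deb data.tar.gz
mkdir extracted
tar -C extracted -xzf data.tar.gz
cat extracted/usr/share/doc/demo/README
```

This is exactly the property being contrasted with RPM: the payload comes out with tools that fit on any rescue shell.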
Systemd's violation of these principles is objectionable because it's not necessary (see OpenBSD's shim replacement for functionality, or uselessd, among others), gratuitous (decisions are being deliberately made), and, as your comment above illustrates, the very valid reasons for not doing just this are belittled.
But, ultimately, what happens in tech is as much about people and personalities as it is about actual technical merit. To delude ourselves otherwise is dangerous. When someone claims to be arguing from technical merit, look very closely at their history and probable motivations. There's always more there.
Nothing about systemd removes the basic unix command line. Because he's most definitely not explaining the init system, which wouldn't have been the same from year to year then, or even similar decade to decade.
But that's still a good 25-30 years of work, experience, practices, and smoothing out the rough edges that will be shot down the drains.
Systemd also fundamentally changes the control locus of key features within Linux and how applications, the kernel, and the OS as a whole are constructed and constrained. It puts all of that under the control of a small group with highly evident disdain for any "outside" concerns (in quotes because these are concerns of the larger Linux community, and so most decidedly inside that group), contempt, and a plays-poorly-with-others attitude.
I'm not impressed.
Nor with your comment, FWIW.
The rest of your comment is fear mongering which could be applied to any group of core devs on any OSS project in existence. After all, who controls Debian and its security defaults? Do YOU trust them?
What 1978 Unix did have was security and authentication. The OS was multi-user from the very beginning -- hence the pun in the name: a uniplexed operating system, playing on Multics (Dennis and Ken created a two-user OS to play Space Travel).
As Bruce Perens recently discussed in a set of comments at LWN, the first thing he did as DPL of Debian was decentralize the management of Debian packaging. He recommends a very similar process for Systemd. The Systemd proponents in that discussion aren't particularly taken with the idea.
It's not a matter of fear mongering when the stated goals and practices of Systemd are to intentionally break compatibility with other Unixen, to reject compatibility patches, and to provide "choice" in the form of allowing users the option of any Linux distro on which they can run systemd:
As Jon Corbet noted at LWN in his Grumpy Editor post on the topic, it would greatly behoove systemd leadership and proponents to demonstrate a modicum of gracious victory.
As for Debian's governance, that process has been more than slightly troubled of late, with at least four key departures (Joey Hess, Ian Jackson, Russ Allbery, and Tollef Fog Heen) in just the past couple of weeks. The cabal question was raised by former DPL Bruce Perens in the LWN post linked above. And, frankly, no, I haven't been happy with the recent directions of Debian's Technical Committee. Joey Hess's resignation (as well as those of Ian and Russ) calls into question more than just the specific decisions, but the process as a whole.
Your attempts to smear my own comments, which are based on actual events, facts, and the highly considered views of those with deep and broad experience in the field, are, I'm really sorry to say, far too typical of what I see from systemd proponents (the attacks on Perens in the LWN thread strike a pretty similar tenor).
Something is sick in this process. That more than anything is what's bothering me about it, though I've also grave doubts over the technical direction.
The most important elements to consider about Plan9 are these:
1. Plan9 wasn't Unix (nor was it Linux). It was its own OS, it was absolutely informed by Unix, and tried to learn from mistakes practiced in Unix. Because it wasn't Unix it provided for an independent test bed in which these ideas could be explored without disrupting a large established installed base and user community. And that is a key benefit of branched development. All of these I consider positives of Plan9.
2. It was hampered by an overbearing corporate control and licensing model. It was an ugly stepchild of AT&T's, under a proprietary license. The fact that it was under development kept it from being widely deployed (among other factors), the fact that it had a restricted license meant that other possible collaborators couldn't get involved.
When Linux emerged in the early 1990s, it had a lot of problems -- it was far from the best or most obvious Unix alternative out there (look up ESR's PC Unix guides from that era). But in a world of large proprietary Unices priced far out of the hobbyist's range, a handful of small PC ports of varying quality, and BSD embroiled in its lawsuit with AT&T (speaking of Plan9), Linux was unencumbered, free, and (pretty quickly) available under the GPL. That gave it the critical mass to develop. As with Plan9, it was its own OS, providing a testbed environment for development, but also allowing stable cuts to be made for use in specific deployments as it reached sufficient states of readiness.
Which is to say: the community and development dynamics mattered a lot.
I'm seeing a far more troubled path for Systemd in this regard.
Also of note: in the Debian init system debate, a specific concern raised against upstart, one of the init alternatives, was its own requirement of a developer license grant to Canonical, which was seen as a strong demerit against upstart. As with Plan9, exercising too much proprietary control may well have cost Canonical critical votes in the Debian decision.
I must admit the ever-growing scope of systemd is starting to concern me somewhat (though I've been running it with satisfaction more or less since it became available in Debian experimental).
It was fun for a while, but I grew out of it.
Similarly, though the underlying hardware and code share basically nothing with Unixen of yore, old knowledge is still useful on modern Linux. This commonality of interface is more important than inner workings.
By the way, the most expensive cars by far (and therefore arguably the most desirable) are old to very old. A Ferrari 250 GTO is worth far more than any new car. IIRC the most expensive car ever is a 1929 Bugatti Royale, and you can even drive it.
As for my comment being unwarranted, sysadmin'ing requires learning new tech. If there is an improvement on a tech such that it has mass adoption, learn the tech. It's your job. If you don't like it, change jobs. I'm not saying you should shut up and put up. However, we're far past that stage of valued input and people are still complaining. The decisions have pretty much been made that are going to be made concerning systemd adoption. Yet here I am, reading yet again how systemd was the wrong choice, even though rigorous debate was had and core teams decided it was the best decision. Even though this was the biggest drama piece since that blogger blasted linus for being rude. Here we are with 'radical change' in systemd.
But where the user interacts with the system, things have been remarkably stable. Even the relatively minor changes that have been made were covered with the usual Apple levels of obsession -- skeuomorphic vs. flat design, etc., ad nauseam.
Again, the point being: screw with how things look and how users interact with the system, and you're going to create huge usability costs with little to show for it.
BUT IT WAS THE FIRST SUCH BREAK IN 15 YEARS OF THE GUI, AND IT'S BEEN THE ONLY MAJOR BREAK IN THE PAST 15 YEARS.
I'm also not saying that Aqua hasn't changed at all. It has, the most notable addition I'm aware of being virtual desktops (something NeXTSTEP had in the 1980s). But other than some minor cosmetic changes, and largely invisible-to-the-user under-the-hood updates, the visible UI has NOT changed appreciably.
Contrast that with the disruption that's prevailed in the Microsoft Windows and Linux spaces from 1999 to present. We've gone from the Win98 UI to the candy-cane XP styling, and Metro in Windows, and at least three generations each of KDE and GNOME on Linux, plus a few other desktops which have waxed and waned in popularity.
I've continued to use WindowMaker, and after 17 years, it is, hands down, the one GUI metaphor I've had the longest experience with of any. It's been exceptionally stable, with very few changes. Even minor ones are quite jarring to me, which is somewhat odd to reflect on.
X11 and/or replacements is a whole 'nother discussion, but I'll simply note that the network transparency of X has been hugely underappreciated by many who've sought to upend it (I don't know what the status of Wayland is in this regard).
I think IT managers would prefer it if they didn't have to spend time and money re-training their sysadmins or hiring/firing them to ensure their staff has the skills to use the $NEW_SHINY from $CORPORATE_VENDOR. Skill transference is a boon for customers (see also: "Stop breaking the UI!").
If you want to see the ultimate extent of this, look at the Wii and Wii U. Each game ships with an "IOS": effectively an OS kernel+initrd update package. Every game boots to the newest IOS available, so if one game updates to IOS v6653, then another game that only shipped with IOS v6652 will find the newer version on disk and use it.
However, a game's IOS requirement doesn't just have a version; it also has a slot. Each console has space for 256 individual copies of IOS, each independently versioned. So if two games both use the same IOS slot, the game providing v6653 will overwrite v6652 on disk, and the game that shipped v6652 will boot into v6653. But if the two games provide versions of different IOS slots, their effects on one another are isolated.
You can think of it a bit like the IOS codebase having 256 branches, and each piece of software being able to specify which branch of the kernel it was developed on. It gets the newest kernel released on that branch.
This allows a sort of "move fast and break things" approach to kernel development, where a kernel can be hacked to support new software in a way that breaks old software: you just stick your modified kernel into an as-yet-unused IOS slot, and old software will have nothing to worry about. This approach has resulted in my own (pretty unused) Wii U having ~73 different IOS slots populated with kernels.
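The per-slot resolution rule described above can be sketched in a few lines of shell. The slot and version numbers here are illustrative, not real IOS data:

```shell
# install_ios <slot> <version>: a slot keeps only the newest version it has seen.
install_ios() {
  slot=$1; ver=$2
  eval "cur=\${ios_$slot:-0}"
  if [ "$ver" -gt "$cur" ]; then
    eval "ios_$slot=$ver"
  fi
}

# boot_game <slot>: a game always boots the newest version in its own slot.
boot_game() {
  eval "echo \"booting IOS slot $1, version \$ios_$1\""
}

install_ios 58 6653   # game A ships v6653 into slot 58
install_ios 58 6652   # game B ships older v6652; the slot is NOT downgraded
install_ios 37 5662   # game C uses a different slot, isolated from slot 58

boot_game 58          # prints: booting IOS slot 58, version 6653
boot_game 37          # prints: booting IOS slot 37, version 5662
```

Slot 58's two games share a kernel line; slot 37 never sees their updates, which is the isolation property the "branches" analogy captures.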
Interestingly, if you think about it, this is pretty much a continuation of what Nintendo was allowing developers to do before: shipping random collections of chips in their own cartridges that DMA to the console, effectively creating their own extended console to run upon. Allowing your software to ship its own kernel is basically the software equivalent.
Docker though... man, how different it is and how clean it makes my system feel. I do feel Docker will move toward some kind of Docker-optimized minimal base image that is not Debian or Ubuntu or whatever; those are just a stage so you feel some familiarity.
CoreOS meanwhile: who will ever touch its init system except to auto-start containers? And that, I guess, will in future be done by some nice tool that hides systemd.
Ok, my post does focus on the server side, of course.
But lets say you are building a highly specialized application. You are going to be making quite a few customizations which are far more manageable through a shell scripting environment than by customizing a bunch of binaries.
I assume that Redhat is going to cover a lot of the bases for most users out there. But for those of us in highly customized environments it's going to suck.
Also if busybox is not enough, a minimal systemd system will still be leaner and faster than the equivalent sysV system.
IIRC it's also part of some soon to be shipping vehicle integrations, for in-car entertainment systems and mapping.
Also, it's inevitable that if systemd and software expecting to use it take over more and more aspects of userland and the kernel, vendors will be left with no choice but to use it as well. So "more vendors are switching to systemd" is not a convincing argument either. I like to make my own decisions on the basis of modularity and replaceability (vendor lockin has been a huge burden in other major projects not mentioned in my online persona), not popularity.
is also a current story on HN.
Just an example of how powerful that simple 70's Unix is. Allows features that appear "magical" to thorsten, anyway.
Windows? Wasn't really even an option... until 20 years later. And, of course, Unix really isn't that good, either. But, before you ignore it, please come to feature parity, at least.
Which 'Windows'? Before Windows 2000, there was Windows and Windows NT, the former being more or less just a shell running on top of DOS.
Unix didn't beat Windows. Unix beat VMS and LISP machines and AS/400 and various other minicomputer operating systems. In fact, if we're talking about mainline commercial Unixes, NT started beating the shit out of them in the late 90s - if Unix lovers hadn't had the free ixen (Linux, BSDs) to fall back on it would be a sad state of affairs indeed.
Hey now, I'll have you know AS/400 is still alive and going in my workplace! We also have an entire position just for its programming...
At the same time we are pushing more heterogenous software stacks to production and configuring more specific dependencies for our applications.
It almost seems like you're using cross-platform as a pejorative. ;)
Now we are close to having an OS where you can seriously just expect anything "Linux" to just run. Bad, I guess, to some :P
Right now I have most clients running OpenSUSE, just because I cannot be bothered to fuck with Upstart anymore. Once systemd is in place, the fact zypper is much nicer than apt doesn't make up for the incredible market size difference between Suse and Debian and its children.
Great, so now instead of adopting a packaging system with a solid theoretical foundation, like Nix or Guix, we're going to dump all dependencies into fat binaries and more or less end up with the solution the NeXT people came up with in the 90s. Such progress.
Not to mention that Lennart's proposed package system would depend on btrfs-specific features, adding even more code coupling.
With OpenSUSE Build Service what does Debian server get you? Just wondering.
The Ubuntu LTS cycle is just an optimal compromise in my book. You even get Debian Testing as a good rolling release, Debian Stable as a great server release, Ubuntu Server as an enterprise option, and they all (soon) will be using a common core.
For now I advocate the SUSEs, but between its general obscurity, its dwindling userbase, and the fact that Novell (which has since sold SUSE) backed out of maintaining openSUSE directly, I can't be confident in its future, stable though it has been. And you should not underestimate the Ubuntu mindshare: it means "Linux" software is often Ubuntu-first, repackaged by hobbyists for other distros second.
Fully agree. It seems some people are quite happy with a few xterms in the X-Windows replicating a twm user experience, stuck in the past.
I would also add Oberon, Active Oberon, Singularity, Verve and the current unikernel/library OS research.
> OS I still feel is hiding behind the JVM
Android kind of got us there. Now with Java being compiled to native code, maybe other C++ layers might be replaced in future versions, given how Android team looks at the NDK.
All in all, I want the Xerox PARC and Douglas Engelbart's visions, not the AT&T one.
Predictably, all the blame is laid at systemd's feet.
The current churn is happening because all of Linux's core developers (kernel and userspace) want that change, to push the envelope.
For example, the current cgroup changes are happening because the kernel is deprecating the current cgroup access mechanism. They want a single writer to cgroups. Systemd is in the unfortunate position of complying with that request. Guess what? Soon enough, so will Upstart.
Again with kdbus, the person who made the push is not "evil" Lennart, but Kay Sievers - a long time maintainer of udev.
Systemd is nice. Don't be afraid.
To be clear, I'm not claiming that SysV init is The Best Way. Shell scripts are not the Happiest Place. But I am claiming that systemd is a crummy and overbearing replacement.
It seems like that's part of their mission statement, given comments like this: "Some day, we will have turned the old crap into a real operating system. :)" -- Kay Sievers (https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8)
In fact, OS X's launchd was a direct inspiration for systemd because of how nicely it works there. I've wanted launchd on servers for so many years.
We should not emulate the things we've overtaken and now outsell in the server market. Whatever we were doing before, we were doing something right, and I believe it's the ability to be dynamic and modular.
Service management has been a problem on (Linux) servers for a long time. Just because launchd originates on a desktop doesn't mean it's not a good idea.
About udev, Linus has had multiple serious complaints about udev maintainership since GregKH passed it to Kay. Don't you recall the async firmware loading issue...
The article is right - it's not Linux as we know it anymore for better or worse.
They want to prevent direct access to Cgroups, other than through a single writer. This change is happening regardless of whether you want systemd or not.
He may speak for some subset of sysadmin, but he certainly does not speak for us all.
And from the looks of it, this has been done: https://cgmanager.linuxcontainers.org/ as reported at http://lwn.net/Articles/618411/
systemd may be nice, but it's coming from Red Hat, and cgroups are being changed because the systemd folks wanted it that way, as far as I followed that debate...
Ted Lemon (the author and maintainer of ISC DHCP from its inception to 2003 ) asks for the location of the project's source repo. Sievers replies with a LMGTFY link that doesn't even answer Lemon's question. Lemon politely criticizes Sievers for his rude and unhelpful answer. Sievers fails to even apologize.
Both Sievers and Poettering have pretty serious attitude issues. It's one thing to lambast a peer who frequently fails to meet the potential that they've demonstrated in the past. It's entirely another thing to try to score social points with your callous indifference and blinkered bullheadedness.
 Check the first few comments of: https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8
The project being run by people who hold unreasonable and downright odious views who act like, frankly, utter asshats, is a much more serious problem.
The Kay/Linus debacle is something you can expect to see more of from these fine folks going forward. Mark my words. Ask yourself if you want software developed this way running your OS.
Thank you for doing this :) I love systemd myself, but I still think it's important to have alternatives available; also it makes me very happy to see somebody creating their own choice instead of tearing down other people's choices :D
That being said, it's a little too esoteric for my tastes (among other things: "if you are looking for a stable production system that respects your freedom as a computer user, a good solution at this point is to consider one of more established GNU/Linux distributions.").
Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI? (My current laptop, unfortunately, has UEFI. It's a royal pain, but oh well.)
Yes, we're still in alpha. Not ready for prime time yet.
>Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI?
Gentoo? I use Debian most of the time, which of course uses systemd now.
Not addressing any of your other points, but don't most laptops/computers that ship with UEFI allow you to boot in "legacy BIOS" mode?
Even if you're currently UEFI-booting, I would be seriously surprised if UEFI-support was a requirement for every OS your machine can boot.
And regardless, this is a temporary solution.
Of course, I'm in the favourable position of not having to maintain/administer a bunch of Linux boxes for a living. I can fully understand the frustration of people who built and shipped custom solutions on top of SysV init.
The end of Linux? No, it's the end of Linux as a traditional Unix with lots of arbitrary optional features on top perhaps. We'll get used to it, or switch to something better.
This feels like a non-sequitur. The nice GUI of Mac OS X has nothing to do with launchd (which is the systemd-like portion of Mac OS X).
SysV init is ill-suited to that sort of complex event-oriented management, which (for example) is exactly why Canonical developed Upstart in the first place.
My knee-jerk reaction would be to say "no", but many people who are more involved with this part of Linux seem to think so, I don't feel qualified to challenge that.
I am confident that more radical changes are good in general, as long as the bazaar model allows the best solutions to survive and the worst to be undone if needed.
Speaking of zones and Solaris, if that’s an option for you it’s probably the best of breed stack right now.
Does the author have no idea what's going on with Solaris? Hint: Nothing. Nothing is going on with Solaris, because Oracle doesn't care about Solaris. They closed the source, and now push out the occasional minor update from on high for their enterprise customers. Anyone who is suggesting Solaris as an alternative to Linux at this point in history is simply not credible.
Regardless, it's one example of many where the author exhibits a very poor grasp of...well, everything he talks about. Dunning-Kruger effect is funny that way.
But, if you insist, let's break it down a bit:
"...FreeBSD...also ships with ZFS as part of the kernel and has a jails which is a much more baked technology and implementation than LXC."
Which is an assertion that would require significant citation and specification about the ways in which the author believes jails to be superior in order to be a useful claim. I believe it is an assertion based on ignorance of either Jails or LXC, or the ways those technologies have been used historically and are being used today. For most of the uses I see talked about on HN, LXC is the "more baked" implementation. While Jails has existed for a long time, it was not intended for the purposes we're using LXC for today in Docker and similar deployments. The tools exist, the resource management exists, for LXC and they don't, or are quite rudimentary for jails. To suggest someone choose jails where they are currently using Docker and LXC is to suggest they live with a large variety of limitations and pain points, and in a lot of cases to simply not do what they are currently doing, or to do it in wildly different ways. All to avoid the minor pain that is represented by SystemD for most users.
In short: Jails are not (currently) a reasonable alternative to LXC in that context, and it exhibits some kind of ignorance to suggest them.
Continuing on, despite your suggestion that he is talking about SmartOS or OmniOS, he quite clearly is not. He specifically mentions Solaris while mentioning the others as other options:
"Speaking of zones and Solaris, if that’s an option for you it’s probably the best of breed stack right now. Rich mature OS-level virtualization. SmartOS brings along KVM support for when you HAVE to run Linux but backed by Solaris tech under the hood. There’s also OmniOS as a variant as well."
That paragraph clearly is recommending Solaris, specifically. If you'd like to argue that Solaris is a reasonable alternative for most Linux users, it's a conversation I'm going to opt out of. I'm pretty sure we'd be speaking completely different languages.
"If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it’s finally baked and production ready) can bring you an LXC based ecosystem."
So, CoreOS is the Linux option he recommends? The same CoreOS that uses SystemD? Indeed, it was among the first distros to embrace SystemD with gusto. CoreOS, which is remarkably different from all other Linux distros. All because "Linux is becoming something different than it was"? So, in response to Linux becoming something different, he recommends people switch to something utterly different, like an entirely different operating system (FreeBSD, Solaris(!), etc.) or a Linux distribution that rethinks everything, not just the init system (CoreOS).
All to avoid something being different. It absolutely boggles my mind, and I have hard time responding with anything other than derision; for that, I apologize.
You're right that I haven't been particularly persuasive, and have been quite abrasive. This article just really rubbed me the wrong way.
He says, as he quotes a paragraph that specifically calls out SmartOS and OmniOS -- both of which are under the IllumOS branch of OpenSolaris.
"So, CoreOS is the Linux option he recommends? The same CoreOS that uses SystemD?"
The problem he's raising isn't that linux is going to be different, and if you think it is you need to re-read. Try doing it with your --with-reading-comprehension switch. It being different is just a statement of fact, the problem statement was separate.
CoreOS is not a distribution in the classic sense. It is a platform for deploying containers, and in the use-case they've setup the author clearly believes it will allow you to still be successful in spite of systemd.
Its CoreOS point is stunningly ignorant.
However, Zones and Jails are a good substitute for LXC; they are not yet such a good fit for the Docker way of using containers, which is somewhat different. But for a whole system image type model like LXC they are both great.
SmartOS is doing great things with Linux compatibility, both through emulation and KVM; it is worth looking at.
Also, it hasn't been Open Source in several years. That makes Solaris utterly irrelevant in my world. This blog post is suggesting Linux users move to Solaris. It's just a bizarre recommendation, made as an off-the-cuff assertion with no reasons given.
Also, Oracle. What hacker chooses to be beholden to Oracle?
How about almost every single Java, Scala, Clojure, etc. developer? We are all using Oracle's JVM, and they have been unquestionably a fantastic steward of the Java platform. I know the company deserves a lot of criticism, but remember it is a big company with many different departments.
That is debatable. First off, we're not all using Oracle's JVM :) It's also not at all clear that Oracle has been "a fantastic steward of the Java platform." They don't appear to have totally wrecked it, but that's a low bar for "fantastic."
The irony there is quite impressive.
>Does the author have no idea what's going on with Solaris?
Even the commercial Solaris from Oracle is actively updated and a release was done this year. Anyone who is suggesting Solaris is not an alternative to Linux at this point in history is simply not credible.
The more interesting question is why we have this project and why there is such a buzz.
But the answers don't belong to the realms of system design or engineering; they lie in the more obscure notions of "business strategies" and "sales techniques".
It is a textbook example of how to sell: "show them that they have a problem they didn't realize before, and give them the one and only fix". Fine. Except that the problem doesn't exist outside discussions of "why this product is good - it solves this and that".
Another thing: it is an excellent strategy to "create a niche for oneself". Look, we are the ones who give you a solution (to a problem which doesn't exist). We are the "world leader": look, we have lots of experts, web presence, lines of code.
There are so many examples in "parallel worlds", beginning with eco products or "fair-trade" coffee, you name it.
But you see, lots of people don't need an eco or environmental-friendly daemon which does everything "the right way". No, thank you.
Another thing, back to system design. Windows has the registry, and there are millions of copies of Windows everywhere; the Mac has that settings daemon (from which GNOME's GSettings was copied). That doesn't mean these design decisions are superior to plain old text configs. You could compile them into an "intermediate storage" to gain 5% on application startup times, but this is, again, not a fundamental problem. Being able to use regular expressions on configuration and logs without any restriction is much more fundamental, versus memorizing all the details of all these xxxxctl tools instead of just saying something like
$ grep -R ^sysctl /etc
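To make the contrast concrete, here is a throwaway demo of what plain-text configuration buys you: one generic tool for any ad-hoc question. Every path and setting below is invented for illustration:

```shell
# Build a throwaway /etc-like tree of plain-text configs, then query it
# with nothing but grep. All filenames and keys here are made up.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/etc/ssh" "$tmp/etc/sysctl.d"
printf 'PermitRootLogin no\nPort 22\n'   > "$tmp/etc/ssh/sshd_config"
printf 'net.ipv4.ip_forward = 1\n'       > "$tmp/etc/sysctl.d/99-forward.conf"
printf 'vm.swappiness = 10\n'            > "$tmp/etc/sysctl.d/50-swap.conf"

# One regex across every file, no per-subsystem *ctl tool required:
grep -R '^net\.' "$tmp/etc"        # find all net.* tunables
grep -Rl 'PermitRootLogin' "$tmp/etc"   # which file sets root login policy?
```

The same two commands work unchanged on a real /etc; a binary settings store needs its own dedicated query tool for each of those questions.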
Most of the problems systemd is trying to "solve" do not exist. At least not on BSD, AIX, Solaris, you name it. And, of course, there has never been any problem with syslog, init or cron. They were "good enough".
> Most of the problems systemd is trying to "solve" do not exist. At least not on BSD, AIX, Solaris, you name it.
Don't forget that (open) Solaris revamped the init system, not just the disk/filesystem-system. I think there are arguments to be made for integrated systems like the zones/zfs/SMF.
I don't think systemd is a reasonable way to go about it, and I certainly don't think it is a good fit for "Linux in general". My impression is: Systemd is too big -- will fail.
Wasn't it GNOME that at one time tried to mimic the Windows registry for settings (because binary data on disk: really fast, lol) -- only to go back to ini-style files? (I might be misremembering that one.)
Different designs are fine (see: eg plan9) -- but moving away from fundamental design principles (everything is a (text) file) effectively means abandoning the old system, making a new system.
If systemd was a bit more upfront about "making a new operating system sharing some code with Linux kernel and traditional userland" - rather than trying to sell systemd as "more of the same, just better" -- maybe they'd meet with more positive reception. That a new system is unstable is fine -- just don't expect people to use your new crap in production.
Mac OS X has had launchd for 10 years now.
Solaris 10 replaced sysvinit with SMF.
Don't have any experience with AIX, but the documentation says it has a "System Resource Controller" with a "srcmstr" daemon to manage services, though it looks like it runs on top of something sysvinit-like.
BSDs have generally never used sysvinit, although they do have shell-based service management.
So actually none of the particular examples you cite use purely sysvinit to manage services.
And it's also quite obvious that there are use-cases where reliable and fast service management are very important on servers.
One is containers, where you may run 10000 of them on a single host, and you have to reboot the host some time too, so you don't want to delay the start up of all those containers by loads of slow and racy shell scripts. Or even better, you want to have socket activated containers that don't actually start until they are needed.
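As a rough sketch of what socket activation looks like at the unit-file level (all names and paths below are invented, not taken from any real deployment): systemd holds the listening socket itself, and the matching service starts only when the first client connects.

```ini
# demo.socket -- hypothetical unit; systemd owns the listener
[Socket]
ListenStream=127.0.0.1:8080

[Install]
WantedBy=sockets.target

# demo.service -- hypothetical unit; started lazily on first connection,
# inheriting the already-open socket fd from systemd
[Service]
ExecStart=/usr/local/bin/demo-server
```

With thousands of containers, only the sockets exist at boot; each container's payload starts on demand, which is what makes the startup cost tolerable.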
Another is hyperscale servers: there are now really small ARM and x86 based servers, and you can put 400-500 of those in a single rack. That means a lot more individual servers to admin, and failure modes that are relatively rare in current server rooms will become an order of magnitude less rare, so more robust OS-level service management is helpful.
Systemd is a broken silver bullet for handling the decrease in quality of packaging.
There are problems with sysvinit, and the first one is the missing link between devs and sysadmins in companies: the people known as packagers.
For sysvinit to work well, the shell scripts, permissions, resource locations, and dependency management all have to be done with art and expertise. It is a human job, done by humans who are:
1) a very skilled, rare resource,
2) not identified as needed by companies,
3) stuck with company software QA that is shitty (barely at the "works for me"(tm) level of quality).
Talking about Debian, which is considered the distribution with the most talented packagers, Debian has 2 flaws in this domain:
* too rigid:
Projects go for logical units of packaging that are consistent, and an organization of assets in packages that eases maintenance. Debian has its guidelines that make them «fix» poorly packaged software like LaTeX, Python, Ruby: cutting a language distribution into at least runtime, dev, and extra packages.
Debian packagers are often Debian experts, not upstream software experts, so they first break some stuff (LaTeX is so poorly packaged on Debian it can be considered broken), AND it adds more work.
* too many features:
A typical Linux distro, compared to the BSDs, is packaging fucking more packages in its core, resulting in more work, less attention to detail, and conflicts/overlaps of functionality.
This results in more resources drained from the packagers.
Like how we have 4 shells considered OK for writing shell-related stuff, when they have only one: «sh».
The problem of Linux vs BSD is symbolized by systemd vs sysvinit: Linux is an OS of devops who are super devops but poor coders and sysadmins; BSD is an OS of sysadmins and devs who are good sysadmins and devs, but no devops.
And in 2014 we still lack maintainers, sysadmins and coders of quality.
Linux/Gnome ... FSF projects are not sustainable in these conditions. They think of free software as an infinite resource of benevolence. And they exponentially overblow the work required for maintaining, deploying ... thus they are mathematically doomed to die under their own weight.
I see BSD as a boring, Calvinist-protestant community turned towards humility, doing what is right; and the Linuxes as exuberant Catholic rockstars, overspending the goodwill of developers without thinking of the future.
Being Lutheran, I still dream of THE right OS, one that would be less terse than BSD.
This problem was noticed, and a lot of people have been trying to fix it, for a long time.
For a server it doesn't matter, let alone being "broken". Long delays during boot are grossly exaggerated, because delays are mostly due to network errors/timeouts or file-system checks, and you can do almost nothing "in parallel" without a clean FS or a configured network interface.
Well, for something like Ubuntu for mobile devices they could have adapted some "optimized" sysvinit-script (not /sbin/init) replacement, but, please, don't tell us that everyone needs this. Leave servers and home workstations alone.
And my point was that there have been a whole bunch of prior attempts at changing init, and each one of those had issues with the pre-existing systems.
I am not telling you that everyone needs it, or that it's broken. I'm telling you that the issues didn't suddenly exist just because systemd came along.
Then again, I could be completely wrong, and systemd will end up in every single surviving distro by default, turning GNU/Linux into systemd/Linux. That's when we'll see a true exodus of those opposed to it to other OSes.
Unfortunately Slackware is still a bugaboo when it comes to installing it on the Pi, but once you've got it on there you can image your SD card and have a ready-to-roll distro that's lean and mean.
Linux hasn't really been that way for an incredibly long time, but a large proportion of the userbase still cling to this notion, some for ideological reasons, and some out of sentimentality.
If the core utilities around the kernel and the interfaces they use are well-documented, designed, and modular, then different machine types can pick and choose the components they need, and easily write their own by opening a well-defined API on something in /dev. If systemd continues to take over and alternatives are smothered, very quickly the kernel will become useless to any system where systemd makes no sense, especially non-desktop embedded systems.
* Embedded systems use watchdogs. systemd implements a watchdog supervisor chain, where systemd supervises applications, and the hardware watchdog supervises systemd.
* kdbus: efficient IPC
* networkd: simple network setup, very fast DHCP client
* fast boot times
* handles many complexities, so that embedded developers can focus on their application
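On the watchdog point above: systemd's side of the protocol is that it sets the NOTIFY_SOCKET environment variable to a unix datagram socket and expects "WATCHDOG=1" datagrams within the unit's WatchdogSec interval. A minimal sketch of the client side (an assumption-level reimplementation of what libsystemd's sd_notify() does, not libsystemd itself):

```python
# Sketch of the sd_notify watchdog protocol: the supervisor (systemd) exports
# NOTIFY_SOCKET as a unix datagram socket; the service periodically sends
# "WATCHDOG=1" to prove it is alive. Missing the deadline gets it killed.
import os
import socket


def notify(message: str) -> bool:
    """Send one notification datagram to the supervisor, if one is listening."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under a notify-aware supervisor
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.connect(addr)
        s.send(message.encode())
    return True


def pet_watchdog() -> bool:
    # Call this from the service's main loop, more often than WatchdogSec/2.
    return notify("WATCHDOG=1")
```

The hardware watchdog then supervises systemd itself (RuntimeWatchdogSec= in system.conf), closing the chain down to the board.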
As for the second question: certain parts of systemd can certainly be used on non-systemd systems (such as udev or nss-myhostname). But most would require at least some changes.
In other words, to some it seems like a lot of not-so-tested software replacing software that was well tested. From an outsider's perspective, two things seem problematic:
1. A lot of Linux software worked on the principle of "don't break userland". Systemd appears to break userland here and there.
2. From any such large piece of software replacing well-tested infrastructure, bugs are expected. The problem appears to be that, in many cases, systemd developers push the blame for breakage onto other subsystem devs (sometimes kernel devs, sometimes end-user apps written on top of KDE/GNOME). This is the part which makes a lot of people angry, apparently.
But systemd IS NOT only an init system.
So if systemd 208 (until 213) by default saves core files in the journal, and your core file is bigger than what the systemd devs decided was appropriate in a .c file (around 768MB, IIRC), you lose it. That is _unacceptable_.
And while we (= the company I work for) still have not officially started our evaluation of the platform, if RHEL7(.0) does not have this bug fixed then I will be strongly against supporting it officially, since in case of a crash we would not be able to get, or pass up to the devs, the core file for analysis.
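For what it's worth, later systemd releases moved these limits out of the source and into a config file; a sketch from memory of systemd-coredump's options (check coredump.conf(5) on your actual version before relying on these names):

```
# /etc/systemd/coredump.conf -- option names as I recall them; verify locally
[Coredump]
# Write cores to /var/lib/systemd/coredump instead of the journal
Storage=external
# Raise the per-process core size cap (the old hard-coded cap was ~768MB)
ProcessSizeMax=4G
ExternalSizeMax=4G
```

That would address the "lost core file" complaint, though it doesn't help on releases where the limit is still compiled in.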
Is that not, however, a simple fix? There might be a case of death by a million cuts, but that's true of any new software that replaces existing software.
They'll be supporting RHEL7 (with Systemd 208) in some form until 2027.
I guess the point is end users will shy away from the short term mess and move to another system like FreeBSD.
While I don't know if SystemD is a vastly superior design and implementation to any of the already existing things and I am not sure how much of a stability/security/complexity concern it is, I think that the 'end' of Linux will not be due to SystemD - it has a lot of momentum going for it. It will take something bigger to derail Linux at this point.
All Unix-based systems start by running a single process known as 'init' which is responsible for setting up the system, starting all other programs, and managing various services as they run. The change under contention is the widespread inclusion of a relatively new piece of software called systemd, which replaces the historically popular sysvinit, Ubuntu's alternative known as upstart, and various other competing systems. All of these are different approaches to building an init system.
In contrast to some of the other systems, systemd is written less in terms of traditional Unix-style tools (like pipes and plain text files for storage), choosing instead to build on newer and more elaborate communication interfaces and store in specially designed binary formats. Systemd consists of a large family of interrelated pieces of software, many of which are nominally optional but generally expect to be used together. One point of contention is whether systemd is "too large", as proponents argue that developing these pieces together will increase their quality, while detractors argue that this makes it too difficult to substitute components if necessary and that these pieces should not be part of the same conceptual package.
Additionally, the architectural choices of a large, widely-used package integral to the functioning of a running system will influence the design and assumptions of other pieces of software and even Linux itself. Already, the Linux kernel has incorporated the newer communication systems used by systemd into the operating system itself (the mentioned KDBus), which many believe to be a sign that the inclusion of systemd will change the way that Linux operates and the way programs expect to interact with the kernel and with each other.
An important factor here, whether good or bad, is that these choices make Linux into a very different system than it has been historically, and very different than other Unix variants. Proponents argue that this is a step forward, as the facilities offered by Linux historically might not be appropriate tools for the current uses of Linux. Detractors disagree. Either way, the conclusion is that systemd is a change in the way the operating system is structured and used which has been a major point of contention in some communities.
"Ubuntu plans to take over maintainership (more precisely Martin Pitt
from Canonical), to maintain it as long as they still need it, and will
change the name while doing so."
Now I'm not saying that I like systemd in general - for the reasons the author explains, I don't really like it. Systemd could have been way better if it weren't for the political shit and attempts to take control over distros, the kernel, etc.
That said it has some features I do like. Likewise for kdbus. This is like Chrome vs Firefox. People would like to support Firefox better. But Chrome and Google apps do enough of what they want right now to go with the solution they don't really approve of - I'll say it again: it works.
Of course, whether those other options will be as well supported and developed as systemd remains to be seen.
That said, all of the sysvinit replacements back then (eINIT, initng, depinit, s6, perp, etc.) never made it and were all ignored during their time.
You are certainly no expert on this.
I've been using Gentoo since 1.4... back in 2003, maybe 2004. I vaguely remember when Stable Gentoo was switched to OpenRC, which happened in mid 2011. Unstable Gentoo (which I ran -and still run- on my laptop) switched to OpenRC much earlier [but that date I cannot remember].
What the first topic in the blog post that you linked to is really saying is:
"We Gnome developers still claim that recent Gnomes don't require systemd. We stand by this assertion. Recent Gnomes only require init and system management daemons that behave exactly like systemd in pretty much every aspect; they don't actually require you to have systemd installed.
It's a pity that developers don't want to spend time re-working their init and system management software to be systemd clones. If they did, then the world would finally understand why we Gnome developers continue to assert that Gnome doesn't require systemd."
"For one, in the last stages of GNOME 3.8.0 as release team we specifically approved some patches to allow Canonical to run logind without systemd. Secondly, the last official statement still stands, No hard compile time dep on systemd for “basic functionality”. This is a bit vague but think of session tracking as basic functionality."
It's now the position of the systemd developers that running logind without systemd was never supported, and that distros like Ubuntu should never have tried to do it. I believe they've now broken the ability to do so. It's one of the major reasons Ubuntu had to switch to systemd.
(Oh, and for some context, http://www.freedesktop.org/wiki/Software/systemd/logind/ is the logind DBUS API. Good luck reimplementing that!)
There are many ways to implement the same APIs.
However, I would argue that it's far easier for an end-user to drop in a replacement for a DBUS API implementation than it is to drop in a replacement for a C/C++ API implementation. So... there's that, I guess. :/
"Apparently GDM 3.8 assumes that an init system will also clean up any processes it started. This is what systemd does, but OpenRC didn’t support that. Which means that GDM under OpenRC would leave lingering processes around, making it impossible to restart/shutdown GDM properly. The Gentoo GNOME packagers had to add this ability to OpenRC themselves. Then there were various other small little bugs, details which I already forgot and cannot be bothered to read the IRC logs. "
So apparently there are bugs when using OpenRC with Gnome. That is not to say, Gnome requires systemd (I did not make that claim).
Your statement is not true. Stable Gentoo had its default init system switched to OpenRC in mid 2011. OpenRC has been great for Gentoo.
"Apparently GDM 3.8 assumes that an init system will also clean up any processes it started."
AFAIK, the only Linux init system that behaves in this way is systemd. Expecting this behavior means that one expects one's init system to behave like systemd. It is disingenuous to claim that one's software doesn't require the use of systemd when it relies on process management -and other- behavior that can only be found in systemd.
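The reason "clean up any processes it started" is hard for a traditional init system is the classic daemon double-fork: the grandchild gets reparented away from whatever spawned it, so a supervisor that only tracks the PID it launched loses the real daemon (systemd sidesteps this by tracking the whole cgroup). A minimal sketch of the reparenting:

```python
# Demonstration of why double-forking defeats PID-based supervision: after the
# intermediate child exits, the "daemon" (grandchild) is reparented to init or
# the nearest subreaper -- not to the process that originally spawned it.
import os
import time


def spawn_daemon_and_report_parent() -> int:
    """Double-fork; the grandchild reports its eventual parent PID via a pipe."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # intermediate child
        mypid = os.getpid()
        if os.fork() == 0:                # grandchild: the "daemon"
            os.close(r)
            while os.getppid() == mypid:  # wait until we are reparented
                time.sleep(0.01)
            os.write(w, str(os.getppid()).encode())
            os._exit(0)
        os._exit(0)                       # intermediate child exits at once
    os.close(w)
    os.waitpid(pid, 0)                    # reap the intermediate child
    data = os.read(r, 32)                 # blocks until the daemon reports
    os.close(r)
    return int(data)
```

The returned PID is never the caller's own: the daemon has escaped its original parent, which is exactly the lingering-process problem the Gentoo packagers had to patch around.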
To make an analogy: I write software that makes extensive use of cgroups. If I said:
"My software doesn't require Linux. We could run on *BSD if they'd just implement cgroups, and Windows if they'd just implement POSIX and cgroups. It's a pity that they don't make this effort, and I don't have the bandwidth to help them out, but my software doesn't require you to use Linux to run it."
you would likely accuse me of sophistry; and with good reason!
In essence, systemd has become Gnome's mother...
My beef with the article that the GGP links to is not that GDM now requires such a system (strategic laziness is a virtue!), but that the author refuses to admit that
1) There currently exists only one such runtime system that provides the behavior that GDM relies on.
2) There is absolutely no guarantee that the GDM folks won't come to rely on more systemd implementation detail in the future. Indeed, given the way Gnome development seems to happen, it's almost a certainty that GDM will depend on more and more systemd implementation detail in an entirely ad-hoc manner as time goes on.
Void Linux is a rolling release, has binary packages, uses a very good package manager (even better than pacman), and the community is small but very friendly.
Crux is very minimal, and you have to compile your packages.
I use Void Linux, and am very happy with it.
I am using Arch at the moment but I spent the last 45 minutes trying to fix a network problem which involved fighting with journalctl to see what was actually happening.
Void linux looks like a great alternative.
But enough about the technical distractions. Why is systemd so important? It's certainly not technical quality or design (even if some of the ideas are useful, the implementation is junk). There is a far more important reason, that some of you have noticed parts of, at least tangentially.
From this very thread: "systemd makes a lot of sense for embedded systems". Yes, it does, andor, and it's all because of this: "kdbus: efficient IPC". The thing is, kdbus/dbus isn't really that great for a lot of things - you have to bounce through the kernel at a minimum, and there are encode/decode steps that add some overhead. It might be useful for some types of IPC, but it is replacing what should be a fast and simple library call in many places.
Now, here's where a lot of you are going to start calling me crazy or "obviously wrong", both without actually addressing the key claim, which is probably better explained by stevel over in the Gentoo forums. I encourage reading that post.
The goal with all of this is not technology related at all: the systemd takeover is an attempt to separate Linux and many userspace tools from the GPL, so that software can be used under the LGPL terms instead.
What is the big difference between the GPL and LGPL? Linkage. Linking against a GPL library requires you to follow certain requirements, while the LGPL specifically allows that usage. (k)dbus provides the workaround, by replacing what would be a normal function call into a library with "IPC". It's slower, but so what, computers are way faster than needed. In the end, while you can still choose to release your code as GPL, if everyone has to go through an IPC mechanism to do anything useful, the license requirements that actually apply end up being more like the LGPL's.
Well, if I wanted to release under the LGPL, I would. What I'm not going to do is undermine my choice of license just because a bunch of embedded developers (and others) want to use what were traditionally GPL projects without having to be bound by the copyleft requirements. If this was proprietary software, you would call that kind of behavior "stealing".
(seriously, the linked comment below does a much better job of explaining this)
Besides, Linux is no longer a UNIX-like system. Linux is now only Linux unto itself with UNIX-similarities.
It has been a common thread throughout the systemd mess. Lennart himself is very strongly outspoken against any use of shell scripting (not just in init).
"Myth: systemd is incompatible with shell scripts. This is entirely bogus. We just don't use them for the boot process, because we believe they aren't the best tool for that specific purpose, but that doesn't mean systemd was incompatible with them. You can easily run shell scripts as systemd services, heck, you can run scripts written in any language as systemd services, systemd doesn't care the slightest bit what's inside your executable. Moreover, we heavily use shell scripts for our own purposes, for installing, building, testing systemd. And you can stick your scripts in the early boot process, use them for normal services, you can run them at latest shutdown, there are practically no limits."
* ARP Networks
There are quite a few cheap-ish dedicated server vendors out there that can either be ordered with FreeBSD or provide some means to install it yourself.
Reliable and cheap, even more so if you take 5 minutes to google for a discount coupon.
Would not the incorporation of many loosely-coupled but individually secure mechanisms into a single monolithic mechanism be useful to an entity whose purpose was to monitor communications, view/modify systems unbeknownst to sysadmins and users, etc.? Yes, I'm talking about the NSA et al. I reference the following which also brings up Red Hat's control of Linux:
"Julian Assange: Debian Is Owned By The NSA"
Author is saying that CoreOS is a good solution because you use it with heavily isolated containers. Thus, any use of systemd is unable to screw up hosted applications. For that use case, it makes sense, whereas in more general use, systemd wants to get all up in everything.
Like Canonical changing Ubuntu to use Unity and Mir, so that it was easier to use on tablets and modern PCs. It just doesn't seem to work, and it drove me to Lubuntu, to use LXDE with an XP-like Start Menu UI.
Ubuntu doesn't use SystemD yet, but I got a feeling it will.
I am downloading Fedora 20 because it has SystemD in it, so I get some idea what it is like.
But I think every Linux company wants to become another Apple for some reason. They saw how *BSD Unix went into making Mac OSX, and they want to try and copy that with their Linux distro and right now this SystemD seems like a path to that.
It is like a change from free and open source to commercial Linux. We all saw how Lindows/Linspire tried that and failed.
I might go to ReactOS, HaikuOS, AROS when one of them gets finished to the right level that it is ready for prime time. All they need to do is port Apache2, PHP, MySQL and other stuff to those operating systems to be used in VPS hosts so they can be used as alternative to Linux with SystemD if it ever breaks things.
It is kind of confusing.
Is there a link where one can read about the rationale of why systemd was needed? Perhaps by the person who started off the initial project?
http://0pointer.net/blog/ - is his blog
are you seriously trying to tell me that someone is going to have to explain what a "container" is by comparing with chroots? :D how have you been running your production stack all these years if you didn't have an internet connection under your rock :)
There are some aspects of systemd that are simpler than sysvinit. The mess of double-forking and PID files goes away, and process management is much more reliable. Setting up containers for services is much simpler. Unit files are generally much simpler than a given init script.
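As a sketch of that last point (all names hypothetical), an entire unit file for a simple daemon can be shorter than the start/stop/status boilerplate of a typical init script:

```
# /etc/systemd/system/mydaemon.service -- hypothetical example unit
[Unit]
Description=My daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure
# No double-fork, no PID file: the manager tracks the process (and any
# children it spawns) directly via its cgroup.

[Install]
WantedBy=multi-user.target
```

The daemon just runs in the foreground; daemonization, restart policy, and process tracking are the init system's job.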
I think more useful adages ("boned wisdom for weak teeth" - AB) would be, "Everything should be as simple as possible, but no simpler," and "Simple things should be simple, complex things should be possible." There is an enormous amount of complexity inherent in the problem of initializing and monitoring a modern OS. If your solution to this is actually simple, it is probably wrong or incomplete. How you manage that complexity and what your abstraction layers are are the vital questions.
And FWIW with Gentoo/Portage you can create and install binary packages.
I guess time doesn't equate to knowledge, eh?
(on the wiki, the big Spelling header could not be more explicit)
1. FreeBSD has nothing like cgroups or namespaces. You can't apply cpu or memory limits to a whole jail, only individual processes in that jail.
2. it is early days for virtual network cards and ethernet bridging and jails: you have to recompile kernel to add VIMAGE.
FreeBSD 10 was released only this year. For reference, Heroku began in 2007, using resource limits with LXC... a whopping 7 years ago.
How is it that SystemD is about to dominate the market? Who is driving SystemD adoption, and why?
- Arch Linux
- Debian and Ubuntu in the near future
> Everything I read about SystemD is negative. Negative on the technology, negative on the people who created it. Nothing positive.
There's a vocal group that seems to think it's some Red Hat conspiracy to destroy Linux and take over the world. You don't hear the positives because people either don't care about init systems (as long as their distro continues to work) or are happy with it, and happy programmers are generally silent on the issue.
The fact that the article says that Solaris is the 'best of breed stack right now' and also suggests CoreOS (apparently not knowing they use Systemd) should speak volumes...
I don't hate the registry... I'm just not in a hurry to see Linux adopt it. What's so bad about /etc/app/config or ~/.config/app/config? It works in pretty much every platform (though the paths may be slightly different). Windows does have both roaming and local data directories, which can be pretty nice (if you don't bloat them).
Yep, the Windows registry has its own ACL. Was bitten by that when trying (naively) to move an account's files between Windows installs. Logged in and found myself back at default settings, and changes not applying.
I wonder if this shift in mentality has something to do with the M-I contracts.
I have a hunch that for !RedHat, the motivation to switch comes just as much from not having to fight upstream (if not more so) than from systemd's relative merits. After all, there are plenty of different init systems, but only one of them is coupled to the rest of userspace to a significant degree.
Also, I wouldn't write off Solaris zones :) They're pretty powerful--in some ways, more so than LXC is today (syscall translation comes to mind).
In this case, we have Red Hat, Lennart, Kay, and freedesktop.org working together to force an LGPL RPC loophole to circumvent the GPL. Sounds like an appropriate term.
That well-known paragon of SysV systems 8)
Solaris is based off of Unix System V, as are AIX and HPUX.
You can be based off SysV without utilizing the init system from it
systemd has both good aspects (e.g. faster boot times and removal of the nasty nest of shell scripts) as well as bad ones (incredibly monolithic). There are arguments on both sides of the fence. It's just that you're more likely to hear from people who dislike something than you are to hear from people that like it. (And at this point most of the systemd proponents have given up talking about it because of the incredibly vocal and relentless opposition.)
OpenRC does this quite nicely. Gentoo has been using it as the default init since at least 2011.
What one person finds nasty another may find sexy.
Most distros are yes. Some are not. Personally I like shell scripts and plan to continue using them as long as feasible. They work, and when/if they don't, I can usually find out why.
"Strong consensus" might be good argument for some people. I personally find it a fallacious appeal to popularity.
Redhat, Gnome and Poettering. While no-one can know exactly what they're thinking, it's in Red Hat's (and other commercial Linux vendors') interest for more software to be more tightly coupled to Linux and not run on other OSes. And Gnome is mostly developed by people who work for these commercial Linux vendors (in notable contrast to KDE, which has a wider spread of contributors, including more hobbyists and a number who are supported by government grants, particularly in Europe. Which leads to a different set of incentives).
My impression is that Gnome and associated software is, by and large, the wedge which is driving adoption in other distributions like Debian; previous "faster init" solutions were often available as options but never pushed to default because it wasn't necessary; users who wanted a different init could install one, but all their software would work fine either way.
Systemd is a replacement for the worst part of Linux: init. It was confusing and just barely workable, and has been needing replacement for 5 or more years. There were SEVERAL solutions brought out, and, well, more developers like SystemD than the rest.
As a user on Arch Linux, OpenSUSE and Fedora, it has been rock solid, and I have been able to do things at my knowledge level much more consistently, and at a lower level.
At my place we kept journald and have rsyslog redirect logs to the network, i.e. our /var/log is relatively empty (rsyslog isn't set up to log to disk), so we look at logs on the machine via journald if needed (rarely - since we look at the central log aggregator instead).
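A sketch of that kind of split (hostname hypothetical; journald directive names as documented in journald.conf(5), forwarding rule in classic rsyslog syntax):

```
# /etc/systemd/journald.conf -- keep the journal in RAM only, hand off to syslog
[Journal]
Storage=volatile
ForwardToSyslog=yes

# /etc/rsyslog.conf -- no local file actions, just ship everything to the
# central aggregator (@@ means TCP; a single @ would be UDP)
*.* @@loghost.example.com:514
```

With Storage=volatile the on-box journal is still queryable via journalctl until reboot, while the durable copy lives on the aggregator.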
That's a contradiction. You're running journald even if it's dead weight.
(and I like systemd, at least in theory - and running Fedora 20).
Religiously passionate salesmen, selling an upgrade to your car stereo, who conveniently forget to mention that you also have to replace your whole car.
> Who is driving SystemD adoption,
Kay Sievers, Harald Hoyer, Daniel Mack, Tom Gundersen, David Herrmann, and its creator, Lennart Poettering.
> and why?
One guy wanted to make booting his desktop faster. 
Later it was decided that systemd would be "a big opportunity for Linux standardization. Since it standardizes many interfaces of the system that previously have been differing on every distribution, on every implementation, adopting it helps to work against the balkanization of the Linux interfaces. Choosing systemd means redefining more closely what the Linux platform is about."  Basically, they want to change "how we put together Linux systems." 
Tired of having to write init scripts for every distro, writing forking daemons, and dealing with the too simple syslog.
May be a bias in action. People who like it, or who really don't care much about it (I dislike journalctl, but, apart from that, I'm mostly OK with it) don't waste time writing how not much changed for them and how things continue to work as expected.
This lad seems really enthusiastic about systemd logging and systemctl:
There are a lot of interesting features, but personally I'd prefer to have both binary and text logging. Text logging is for cases where a system goes tits up and you may only have access to basic tools such as grep, vi, etc.
Companies like Red Hat clearly drive systemd adoption, because it solves problems for their customers - that would be large enterprise customers. Many of those who dislike systemd have simpler requirements, so systemd becomes something new they need to learn without providing them any tangible benefits. They didn't see, or care about, many of the problems or features that systemd addresses.
Of course you're going to be negative if a new system, one you didn't need, is forced upon you. If you're happy with it, you're most likely a large company that doesn't blog about systemd, especially if it's something that just works.
At least that's my take on it.
Nevertheless, Red Hat and Canonical and others continue to try to foster desktop linux, on the now-obvious misguided theory that (a) the desktop matters and (b) they can take share from windows and osx installations.
As a result, there's hundreds of terrible paid desktop programmers, and thousands of their users, who are dying every day because, e.g., the last several iterations of wireless networking scripts were written by morons, their graphics libraries are comically bad, etc., etc.
Into this charged mix of total incompetence and frustration comes a small group of mediocre coders with hubris, backing, and political nous. And what they are promising to the long-abused desktop users sounds amazing to them, like wizardly magic, and literally, and almost entirely, boils down to this: freedom from having to deal with the shitty wireless networking script system. No joke. That is the fundamental issue at play, and the driver behind "faster boot times", "socket activation", and all of the other marketing points. If that idiot who wrote the wireless provisioning scripts had been competent, this entire mess would never have happened.
So the desktop linux users and desktop linux developers, who again have been living in a tiny cage being pooped on every day by their own regrettable choices, reach for this solution with the religious fervor of a drowning victim. And since desktop linux developers tend to be the C team, they don't care about good architecture, they just want things to work for them and their very specific desktop linux use case, which objectively and axiomatically, again, has not worked for decades and will never work.
So they band together in unison, following the exciting, energetic, charismatic and opinionated lead developer. And obviously, Red Hat is delighted, because that's their employee, and maybe they get more market share. And they pack Debian with developers, because there's a ton of horrible little desktop linux apps that grant them votes, and shout down the opposition. And they set up an IRC channel, and they brigade every forum with the same nonsensical attacks on the very architecture that made it possible for the internet to happen in the first place. Including, obviously, this very thread.
In actuality, this group of users is vanishingly tiny compared with the linux installed base, which is mostly phones and servers, where the real action is, and which don't need this halfassed dbus nonsense or the accreting blob of carelessly rewritten known-good-daemons. The desktop linux people are chasing a dead target with a terrible design and religious fervor substituted for technical ability. It will be intriguing to watch it play out. FreeBSD is about to get a big positive jolt of people that know what 'good' is.
For me, I plan to stay as far away from systemd as I can; it's an abomination of software design.
His implication about switching to FreeBSD is a good one and I notice a decent influx of Linux admins switching to FreeBSD showing up on every forum I visit.
For example, the author of this blog article does not seem to have contributed a lot of code to any project and is more "just" an admin and not a dev.
To me it looks like they don't like new stuff; they love the Linux world as they knew it and don't want things to get easier or even change.
Sysadmins complaining about it is pretty important, as at the end of the day they are the ones who are maintaining the systems.
Personally, I have a strong dislike for Systemd. I don't believe it's the right replacement for sysvinit however it's not going away so we'll have to try and work around its warts. Also, currently it's buggy as hell so I will use Centos 6/Ubuntu 14.04 for a few more years to see if the many problems are sorted by then.
People who don't want to help out use the most recent stable and have nothing to worry about.
Which software stack do you know that you can pick up instantly, in 20 minutes? I for one cannot understand sysvinit or upstart in 20 minutes, i.e. well enough to develop and debug. How do you debug sysvinit anyway? Echoes I place in the code do not appear on the screen. Can I claim 'Oh shit this is so broken...'?
And yes, LSB init scripts are broken under systemd, exactly because of what you describe (a placed echo doesn't show anything): it redirects the execution through systemd and stores the output in journald.
Try writing (or debugging) an init script on a non-systemd system and see how much easier it is.
systemd also tries to act "smart" and remembers the last state a service was in, which makes developing LSB init scripts on a systemd system ... complicated.
If an init script exits with success (perhaps because the daemon wasn't configured yet) then systemd will remember that, and the next time you issue a 'start' it'll be a no-op, and claim it was successful.
Which leads to countless hours wasted until you figure out what really happened: systemd never even ran your script again. So then you run 'restart' on the init script and all is well again.