Systemd redux: The end of Linux (lusis.org)
235 points by bcantrill on Nov 21, 2014 | 446 comments



Perhaps this is a controversial idea, but is this not just someone finally taking the tried and true Open Source "advice" to heart?

That is, every time I've reported that something is broken, wonky, doesn't work reliably, et cetera, I've been told "Submit a patch," "Write some code," or, worse, "Implement it yourself."

Someone finally got fed up with the haphazard state of affairs in Linux-land. Fed up with the fragmented logging and the sometimes many places you have to look for error logs. Fed up with the many files you have to edit to configure the network correctly (different on every major distribution). Fed up with the half dozen ways to configure X, where X is a function common to every modern operating system.

It seems Lennart has taken the advice and followed through, and distribution maintainers liked it. They liked the idea that someone was taking all this complicated work - this dirty, boring to write and maintain code - and making their lives easier. Why else would nearly every distribution be on board?

Systemd is offering a more compelling solution than anyone else, and if you don't like it, well, you should submit a patch, write some code, or implement it yourself.


I had no opinion on systemd until yesterday. In fact I had a glance or two at the code; it's pretty clean and I liked the rough objectives laid out.

I installed CentOS 7 last night on a machine that's replacing a CentOS 6 box, and was poked in the face with timedatectl and dbus problems for an entire hour, some of which were intermittent. Debugging these issues is a horrific pain; I lost 4 hours on it. I've never lost that much time on a system function before. This is not what I expected, and there is no way I could possibly introduce that to our production environment.

I think that might be why people are slightly sensitive to it.

Yes, you're exactly right, but replacing something with something less stable, more complicated and more difficult to debug isn't rational or good engineering. I'm sure many people will be fed up with systemd much quicker than they were with what was already there.

Not impressed with a community which pushes this as stable, quality software. Voting with my feet: FreeBSD is being trialled instead. WhatsApp throwing a million dollars at it draws a lot of valuable attention and puts it in the business's mindset.

Choice is a valuable aspect of open source too...


You know this is funny, I remember reading comments EXACTLY like this about 3-4 years ago but with pulseaudio in place of systemd.

Pulseaudio was Lennart's previous project. It broke everything in linux sound for a while, everybody moaned and hated it and said it was the worst thing since the crucifixion of Christ.

Yet, name one problem you had with sound on linux in the past year? There are very few. Pulseaudio now just works(tm) and is an unseen, unheard-of part of the plumbing.

If you remember what it was like messing with ALSA and (shudders) OSS before pulseaudio came along, you will agree that the current state of affairs is a million miles better. It used to be really difficult to get more than one application to play sound at a time. I remember compiling sound drivers from source just to get them working. Configuring ALSA config files to get surround sound working was practically a black art. Or creating manual scripts to unmute the sound card on every boot because the driver didn't initialize it properly.

With pulseaudio, I never have to worry about any of that and configuring surround sound takes me two clicks of the mouse.

Lennart did a fantastic job with pulseaudio, he took on a dirty problem that nobody else dared to touch and went through years of criticism to produce a really high quality solution that solved the linux audio problem so well that you don't hear complaints about it anymore.

In light of that, I trust him to do a good job with systemd. It'll be a couple of years of everyone moaning and bitching and whining about it, then one day it will have become a seamless part of the plumbing, everyone will take it for granted and wonder how they ever managed fighting with shell scripts and fragmented init systems before systemd came along.

It's ironic that Lennart Poettering is probably the most abused developer in the entire OSS ecosystem, yet he is one of the people contributing most to it. For our sake, I'm glad he has such a thick skin. If I was him I'd have quit this game long ago.


> Yet, name one problem you had with sound on linux in the past year?

That's just it. Linux sound worked fine for me before Pulseaudio, and FreeBSD sound has always worked perfectly fine for me. In fact, FreeBSD solved sound mixing sooner via /dev/pcm virtualization (while Linux chose to create the Linux-only ALSA instead), and has always had lower observed latency.

Pulseaudio screwed up my audio so badly that for a year I was running the closed source OSSv4 binaries and manually recompiling all the audio libraries to use OSS instead of ALSA/Pulse.

It is not fantastic to push horribly broken code onto the entire Linux userbase while others frantically jump in to help patch and fix the trainwreck.

And we're doing the same thing again with systemd. Instead of having a few years where users can choose between systemd, sysvinit, openrc or upstart, while all of the major bugs are worked out, we're being forced immediately from sysvinit (Wheezy) to systemd (Jessie). I was on Lennart's treadmill with Pulse, I'm not getting on it again with systemd.


WAIT, you NEVER had an audio problem in Linux before PulseAudio? I would have said the weakest link in Linux on the desktop WAS audio.

Now, PulseAudio was released into the wild too soon by too many distros, BUT it has fundamentally fixed what was HORRIBLE in Linux. (Previously a Sound Engineer and Recording Studio owner)

BUT I would say that Systemd is extremely stable and not broken. What people are complaining about is the philosophy aspect.


> WAIT, you NEVER had an audio problem in Linux before PulseAudio?

To be fair, I didn't say I never had Linux audio issues prior to Pulseaudio (whereas I did say that about FreeBSD.)

Back in '98, my SB16 ISA card would only output sound at 8-bit monaural under mikmod, and I could only play CD audio with that passthrough cable between the CD-ROM drive and the sound card. Once I was able to get sound working well enough, the only way I was able to play MIDIs was through Timidity and SoundFont emulation. And until ALSA, there was obviously pain whenever two things wanted to play sound at the same time. This of course was due to the OSSv3 author changing the license before introducing his own audio mixing, and all of those awful sound server daemons (esd et al) never really worked, since there were multiple daemons and each application wanted a different daemon or just wanted to stab right at the OSSv3 ioctls.

But once ALSA was established and working, yes. Audio under Linux at that point worked just fine for me. Pulseaudio was a solution looking for a problem.

> (Previously a Sound Engineer and Recording Studio owner)

I won't claim to be either of these. I like to listen to music while I write code, I'll occasionally watch some movies or play some games, and I want Pidgin to make a chime when someone sends me a message.

In particular, I'm very sensitive to latency in gaming (emulation), but that's about the extent of what I need speaker sound output for.

> What people are complaining about is the philosophy aspect.

To me, the worst part is the backroom politics, the complete disregard for portability, and the lock-in effects of consuming other daemons and services, and making software dependent upon it.

However, I do also object to the design itself, as well as to the developers responsible for working on the project, and the attitude of disdain they present to the community at large.


The thing is that we HAD TO HAVE JACK to overcome latency in Linux, and MAN that was HARD, and once it worked, DON'T mess with it or else 3 hours later you had a broken keyboard, mouse and monitor.

The issue was that ALSA had HUGE latency; using it for anything in recording was just not doable! I had to buy a closed source solution under Windows. Today I could easily do it in Linux.


To clarify, "/dev/pcm virtualization" means FreeBSD does audio mixing and re-sampling in kernel space.


That is correct. Let's look at the simplest form of sound mixing:

    /* A */ sample = (sample_a >> 1) + (sample_b >> 1);  // halves the volume of both streams
    /* B */ sample = max(-32768, min(+32767, sample_a + sample_b));  // clips (saturates) when the sum overflows
Obviously, the algorithms will become fancier (to mix better, to support multiple bit depths and frequency rates, to avoid popping if one stream runs out of samples, etc), but it's still an incredibly basic and perfectly safe bit of code to run.
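
To make that concrete, here's a minimal sketch of variant B applied across two 16-bit PCM buffers (my own illustration, not any particular kernel's code):

    #include <stddef.h>
    #include <stdint.h>

    /* Mix two signed 16-bit PCM streams into out[], saturating on overflow. */
    void mix_s16(const int16_t *a, const int16_t *b, int16_t *out, size_t n) {
        for (size_t i = 0; i < n; i++) {
            int32_t s = (int32_t)a[i] + (int32_t)b[i];  /* widen so the sum can't overflow */
            if (s > INT16_MAX) s = INT16_MAX;           /* clamp to the 16-bit range */
            if (s < INT16_MIN) s = INT16_MIN;
            out[i] = (int16_t)s;
        }
    }

That's the whole trick; everything else is bookkeeping.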

Playing this up as a bogeyman for not being in user-space is FUD, especially when video card drivers also run in kernel space, and are literally thousands upon thousands of times more complex and error-prone. And now the big push is to have kernel mode setting for video cards (even FreeBSD is doing this), which I believe to be a terrible direction to go in.

I have never in my entire life seen a system crash due to audio mixing, but I've personally experienced plenty of video card drivers causing kernel page faults.

If people were even remotely serious about the protection of kernel space (and I certainly wish they were), Minix would be more than a footnote in history. Neither Linux nor the BSDs make serious efforts at microkernel designs. Not even passive attempts to run non-critical device drivers under ring 1. Personally, I'm really rooting for Minix 3 and hope that it takes off more now that it's gained binary compatibility with NetBSD.


Sorry, it was not my intent to "play this up as a bogeyman", and don't know enough about audio to have an opinion on this design decision anyway. (Do audio devices support floating point formats nowadays?)

I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.

About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler... a quick duckduckgo search finds a short note about that on: http://fedoraproject.org/wiki/Features/RadeonDisplayPort

Also, X servers have traditionally needed direct PCI bus access to get the hardware initialized, which means that a buggy X server can hang your PCI bus, so a driver running in user space likely doesn't increase reliability in practice.

It's an interesting question to what extent the limited success of microkernel-based UNIX implementations is due to historical accidents and network effects, and to what extent due to actual technical limitations and the additional complexity of a microkernel architecture.


> Sorry, it was not my intent to "play this up as a bogeyman"

Okay, my apologies as well then. It was hard to get a read from just that one sentence with the word kernel emphasized.

> (Do audio devices support floating point formats nowadays?)

Natively, no. You can be lazy and do it anyway in software mixing though.
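
That is, you can mix in float internally and only convert at the edge when handing samples to the hardware. A rough sketch of that conversion (my illustration):

    #include <math.h>
    #include <stdint.h>

    /* Convert one float sample in [-1.0, +1.0] to signed 16-bit for the device. */
    static int16_t float_to_s16(float x) {
        if (x > 1.0f) x = 1.0f;    /* clamp out-of-range samples first */
        if (x < -1.0f) x = -1.0f;
        return (int16_t)lrintf(x * 32767.0f);
    }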

> I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.

Certainly, it would be ideal if everything non-critical were in user space. But audio in the kernel is probably at the very bottom of the list. Audio mixing is maybe 0.0001% of the kernel code, and is some of the safest, simplest arithmetic code imaginable. It's worrying about the one ant you saw on the counter when your entire house is infested with termites.

> About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler

I don't know if that's true or not, but I am running a DisplayPort monitor (ZR30w) now without KMS, and it works fine. Obviously the video driver is still running in kernel mode, but at least it's a module outside of the kernel itself that runs after my system is booted.

What I'd really like to see is distros and vendors instead relying on UEFI GOP for boot-time mode setting.

> Also X servers have traditionally needed direct PCI bus access to get the hardware initialized

Well, compare it to audio. Eventually even a userland mixer will have to send the samples through some sort of hardware interface. But if your goal is stability, then it would be ideal to get as much code out of the kernel as possible.

> and to what extent due to actual technical limitations and additional complexity of a microkernel architecture.

Certainly nothing is ever perfect. There are so many potential problems with computers. Cosmic rays can flip bits in your RAM if you don't shell out an extra $500 for the premium CPU, mainboard and ECC RAM. Strong enough power surges (lightning) can burn through and destroy absolutely any running computing equipment. Hardware can literally fail and take down your system. Things can overheat, there can be design flaws in the silicon itself, etc.

So I look at it like OpenBSD looks at security. You want to stack all the protections you can. Mirror your drives, use ECC RAM, don't run anything in kernel space you don't have to, try and build as much redundancy and safety as you can into the system. It won't be perfect, but every bit will help increase uptime.

...

So again, sure, audio should preferably be in user space. Just, it's many thousands of times worse that video isn't even trying to do this, and is in fact going in the opposite direction to become more tightly coupled with the kernel.


You're entirely right and I've upvoted you.

However, with one small caveat: servers don't generally have sound cards so the impact of this was relatively low. There aren't that many desktop Linux users out there. I mean I'm a Unix guy at heart and I'm typing this on a Windows laptop. I've never used Linux on the desktop and probably never will.

Now servers do have init processes and we don't really want to spend the next 3-4 years being guinea pigs. I'm quite happy for the vendors to do this behind the scenes or offer it as an alternative but we've got an RHEL+CentOS release with systemd in it already and a Debian with systemd in it just around the corner. A pulseaudio situation, even for 6 months, will result in no small amount of chaos.

I do indeed remember times before even ALSA, when you had to pay OSS for drivers for your Turtle Beach card etc. But that's in the distant past, not right now, and of little relevance. Windows was fine on the desktop then as well, and the sound worked fine out of the box.


> I mean I'm a Unix guy at heart and I'm typing this on a Windows laptop. I've never used Linux on the desktop and probably never will.

Then you're not really a Unix guy at heart.

At home, all I run is Linux, including the laptop my non-geeky wife uses.

For me it was a hard choice. I knew she would object because it would be "different" and she's not really interested in learning a gazillion different computing systems, but on the flip side it meant it was simpler, quicker and less work for me to maintain the computers at home.

Once set up, things just work, and ensuring everything (including flash and other vulnerability vectors) is up to date is one apt-get upgrade away.


Let's say PulseAudio is really good now (I can't disagree). It was initially released 10 years ago, so 6-7 years after its initial release ("3-4 years ago") it was still causing people grief.

I'm not sure that's a great vote of confidence for the road ahead of systemd (given that systemd presumably has a bit more to it than PulseAudio). To quote the article, "I do honestly believe this will end up being the start of a rocky period for Linux".

Ubuntu 14.04 will keep me happy for ~5 years, then I can take another look at what the current state of things are.


Pulseaudio still has problems: sound suddenly being muted, it using the wrong alsamixer settings, sound being garbled until you pass in arcane settings or change it back to Alsa. Granted, the fault might lie in part with the rest of the ecosystem, though I doubt it. But it is sadly far from "just use pulseaudio and everything will work instantly". And when it works, there is no reason to think that Alsa alone wouldn't have worked as well; its configuration was way less brittle and cumbersome than it is described as being now. Mostly it just worked.

It is only since 14.04 that you have a small chance that opting to use pulseaudio is the better choice.

Just trying to get Skype working (which uses pulseaudio) cost me 2 hours last week, which is not at all nice when you have a call starting in 5 minutes.


Seconding this.

Pulseaudio still won't detect the headphone jack on my old Intel board, and Skype on my newer Linux machines will routinely fuck up playback.

One also wonders how much of the PA cleanup was handled by people that weren't Lennart.


Strange, I don't see the issues you're talking about. I use Ubuntu Desktop on a variety of desktops and laptops, and I'd say PA was stable by Ubuntu 12.04. I do Skype and the Google Talk plugin (now Hangouts).


Working in your use case doesn't invalidate someone else's. I've had some systems that pulseaudio has been great for. I have some in which I still don't have fully working sound.


Try using it with JACK and setting up a DAW.. then you'll see the pain.


I've had success by adding these four pre/post startup/stop scripts to qJackCtl: https://wiki.archlinux.org/index.php/PulseAudio/Examples#The...

In particular for problems of getting Youtube (or any browser audio) to work while other apps use JACK directly.

Although on a recent new install it seemed to work without them as well.

One problem is I need to start/stop the qJackCtl thing every time my laptop comes back from sleep, to get sound working again. There must be a way to automate (or, preferably, fix) this, right? Anyone know?


To be fair, the only reason to run Pulseaudio is "everyone else is" - i.e. it's fully glommed into the distro. ALSA and Jack have been stable for a lot of people, even before Lennart decided to tackle 'all the problems'.

But, also to be fair - like you, I maintain my own systems and do not overly depend on the teeming-mass-reality as a derivation of stability. My personal Linux DAW systems, running now for decades, have attained a level of productivity that I would at least hope is represented in the current niveau, vis a vis Popular Linux Distro designed for audio (e.g. pure:dyne, Arch Pro Audio, 64 Studio, UbuntuStudio, et al.) .. for the newcomer, it should of course 'all just work' from boot-up, which I hope is the case. It is for me, anyway: I've expunged pulseaudio from all of my machines, and make do with Jack. My studio uses 48-channels of digital audio, everything-is-a-file .. a working and functional DAW, thousands of plugins, about 12 MIDI devices (synthesizers/effects rack) and so on, and the best thing of all: all source code included. So, yeah .. ;)

EDIT: apropos qjackctl, yeah, apmd:

http://www.tutorialspoint.com/unix_commands/apmd.htm

.. or some such similar thing.


Thank you! I keep meaning to figure out that power-management thing, but keep putting it off because I vaguely don't know where to start. Now I have something, I will dig into it :-D


Might be worth trying ALSA->JACK routing?

http://pastebin.com/iVAjZzTS


Thanks, you just ruined my Friday remembering those days!!!!


Up to this day, while libpulse0 is required by some package, I have no pulse daemon running. Everything uses alsa directly, and I'm using jack for audio work. The impact of pulseaudio is way lower than that of systemd; it never actually affected people producing music, who have no reason to use a pulseaudio sink. Notably, the audio layer is (in)famously able to route anything through anything. Contrast that with systemd, where sysadmins now actually have limited choice in many parts of the system (not only init!).


Your pulseaudio example is naive, because of this: pulseaudio broke audio for many professionals who were already using JACK and ALSA. This is why the upheaval is criticized. What's being done to improve things caters only to the lowest common denominator; it doesn't push the state of the art forward.


> Yet, name one problem you had with sound on linux in the past year?

A month ago with Ubuntu 14.04 - it paired with a bluetooth speaker but would not send any audio to it (A2DP), with no indication as to how to diagnose or correct the issue.


Hey, pulseaudio still does not work for me, and on the rare occasions it manages to get some sound out, my CPU usage is over 10%.


Great example except ... I don't have pulseaudio installed on my system. Not even out of any specific effort to avoid it.

It may be standard on some systems, but not, apparently on debian (it's an "optional" package).

And that's part of the point: stuff that needn't be present shouldn't be. systemd's a whole 'nother ball of wax in that regard.

And yes, I'll even allow that Linux audio has been frustrating over the years. But in my case, problems going away had nothing to do with Lennart's work.


Audio in my Lenovo with Ubuntu is completely erratic. Not going to blame this on Lennart, but it is certainly not a fixed problem. I liked Alsa, though I was working with high end hardware at that point.


>> Pulseaudio was Lennart's previous project. It broke everything in linux sound for a while, everybody moaned and hated it

I've been with Ubuntu since 2007 and went through the PA transition. I agree it is so much better now. Changing audio sources is easy and faster than on Windows 7. By Ubuntu 12.04 this was stable for me. Like changing from speakers to headsets for a meeting, smooth with PA at least on Ubuntu. Until PA I never thought I'd see audio united on Linux.


I still haven't given pulseaudio another try; my first interaction with it was so terrible.


I checked out several free synthesizers recently on an Ubuntu system. None worked on the first try. Though I got half of them working after a while (the others would have needed JACK).

On my Debian system youtube videos stop playing sound once in a while (the video continues), though I suppose that's not pulseaudio's fault (just a general sound problem).

You asked ;-) (I still agree with you that it got better than it once was)


pulseaudio is a continuous disaster for me.


Are you really extrapolating from ONE data point?

In some sense systemd is more stable in that it's fixing some longstanding bugs with sysvinit, but of course it will have some bugs of its own. If you don't want to deal with that, you could skip a release.


No, it's not just one data point; this is the final straw, to use the old phrase. After a few years of serious problems (our most recent being CIFS VFS problems causing panics and mounts locking up on CentOS 6.5, hard locks on RH certified hardware, power management hell and so much incredible churn with no progress) and the sudden "fuck POSIX" approach, it paints a really bad picture of the current state of things.

There is a distinct lack of engineering prowess and quality control. It originates at the core GNU + kernel + freedesktop teams and waterfalls down through the distribution houses.

That's the problem and it's endemic within Linux.


I caution you not to apply the word "engineering" to software, especially in the manner you've apparently been applying it in this thread. You seem to have the idea that there are universal software development practices so well understood that we can make regulation around them. But that isn't how software works. Some practices are known and some are unknown. As working software "engineers", we are all, daily, put in the position of having to figure out for ourselves even what tools and materials to use to make anything.

Imagine you were an architect of buildings. Your day to day job is to design mundane strip malls and gas stations. You have building code on your side for much of the process. As long as you don't violate the regulation, you at worst can only make an inconvenient building, but not a dangerous one.

But imagine instead that you're building a large office building every six months, your clients demand you don't reuse any design principles for your future clients, and you not only lack the building code, but also a quarter of the heavy machinery, half of the tools, and three quarters of the raw materials. I don't just mean you don't have them in stock; I mean nobody has invented them yet. And of the ones that have been invented, we don't even know all of their material properties, to say nothing of what material properties we should be looking for. Will this particular bolt we are using with these particular cross beams hold up to the stresses placed on them? The answer is unknown, and in some cases unknowable.

It's really easy as a user to say, "they should have tested this more". While it's strictly true that more testing may have found your issues ahead of time (presuming the right tests were done), it is inefficient engineering to test things exhaustively. Even mechanical engineering bases a lot on statistical modeling, which will always, always have corner cases that don't match reality.

In the real world, people had to learn the hard way about things like lightning rods and sacrificial electrodes. They didn't come about from "testing during development". They came about from testing live, and seeing which buildings and boats did or did not burn down or sink. That's not bad engineering. That's just the nature of unknown problems.


That sounds like an excuse which I don't accept. I have a formal engineering background and whilst you're fundamentally right, engineering is based upon cumulative experience gained. We have a hell of a lot of experience as a society of writing software that works and is of merchantable quality.

What the general state of affairs shows is the following traits:

1) There is no thought and research going into the design of a piece of software. Ergo, we do not learn from past mistakes.

2) There are isolated individuals writing vast swathes of software which are trusted unconditionally. Ergo, we do not learn the benefit of multiple eyes on a problem, review and discussion.

3) We assume that software is correct from one person's viewpoint and opinion. Ergo, we do not test software properly nor cover those tests with objectives.

4) We work to deadlines, not quality objectives. Ergo, we trade quality for tolerance from others.

In this case someone came along and didn't think about the problem, didn't work with others, assumed they were unconditionally correct and chose tolerance over quality.

To use your analogy, they're now selling stainless steel lightning rods (poor conductivity), are the only vendor of them, are a vocal marketing front and houses are catching fire everywhere.

Or, more specifically, in one example: the entire process described above was followed, and from the author to the distributor, no one even noticed that loginctl doesn't work properly.


Wait, so let me get this straight. You're an engineer-engineer, not a system administrator or a software developer. And you think there is a fundamental problem of qualification in software development, yet you lack qualification in software development.

On your points:

#1 is patently false, to the point of being extremely insulting. You've lost all sympathy from me at this point. Go peddle your baseless opinions somewhere the audience doesn't know better.

#2 is also laughably false, as that is the entire freaking point of open source, and often considered the greatest strength of Linux. You think that because your highly qualified opinion wasn't consulted before you had to spend a whole four hours (OMG) on a problem, no review or discussion was done?

#3 is false again, because software is tested. You use the word "properly", so I will sit here and wait for you to bestow upon us your great wisdom on what we could be doing better.

#4 is false on both presumptions that software is not built to quality standards instead of deadlines, or that other fields are not dictated by deadlines.


10 years in EE (embedded systems, defense industry), then 15 in what we now call devops/architecture. Experience is fine. I have no problems being arrogant about that. I've fallen down a lot more holes than a lot of people and know what I'm talking about.

When I say tested properly, I mean tested completely. If you miss an entire functional unit of the software and a client reports it as broken, it's pretty obvious what the problem is.

Our senior software guys sat down for the other four hours and presented all our findings together and cumulatively said "we're not supporting that shit; we can't trust it".

Regarding #4, it's plain to see that RHEL was released with a broken systemd implementation due to a deadline...


> When I say tested properly, I mean tested completely.

I am still in the learning phase, but even I know that complete testing of any complex software is practically impossible.

So, how do you guarantee completeness in "proper" testing? I know you can't without redefining the word "completely". What's your definition?

Also see Impossibility of Complete Testing[0] by Cem Kaner, co-founder of http://www.associationforsoftwaretesting.org/

[0] http://kaner.com/pdfs/impossible.pdf


Yes you're right.

When your system consists of functions "A, B, C, D", I'd expect to see test suites for "A, B, C, D". In this case there were test suites for "A, B, C". The client found D, therefore the test suite was incomplete.

Now, if a bit of the A, B, C or D suites were missing, that would be different and entirely expected.


You should at least be testing the happy path and most common failure mode of every component of a software system, whether manually or automatically. The most visible components to the user should be tested most. Imagine an ecommerce site where nobody ever tested checkout, or an OS where nobody tested logging in.


Nah, they're pretty much right.

As for testing - notice that when a lot of people here are reporting issues with systemd/pulseaudio, their reports are pretty much dismissed out of hand, or they're told "no, you've done something wrong".

For #2, a lot of times somebody with the right political position (say, Lennart at Redhat) or just the ability to shout louder and longer than anyone else will get something put in, regardless of technical advantage. Don't even try to claim otherwise.


I read him as leveling his complaints against the entire field of software development, and if he was being more specific than that, it was at least as big as the subset who develop Linux.


If there is one thing that HN has taught me it's not to make broad, hyperbolic statements just because I don't like something; especially if it is a divisive issue. There is always a "moron4hire" who will rip you a new one (and rightly so).


It seems to me that OpenBSD stands for everything you want in OS dev, except maybe point 3; Theo seems to really be at the center of everything.


You're 100% correct. In fact my own mail server/web machine uses it. Theo is not right at the centre; there's a large group of people who work together as equals from what I can see. I can't recommend it for our "enterprise use" though, because we need some of the friendlier features that FreeBSD offers, such as ZFS.


I run a mixed environment right now with OpenBSD, FreeBSD, Red Hat, and Windows. I use Red Hat and Windows because certain software requires it (basically they are single-app servers[1]). The OpenBSD servers are all doing basic utilities, and the FreeBSD servers are doing stuff that requires a big file system (ZFS). There are a lot of enterprise tasks that OpenBSD is fine with doing, and I just use FreeBSD for tasks that require the file system or really heavy load.

1) Government contractors are so fun when their software is required for dealing with certain parts of government.


Hmm, I don't see any of that, odd isn't it?


Not really odd. You just haven't poked the bits I have.


it's fixing some longstanding bugs with sysvinit

Honest question: I've been using sysvinit for a very long time and I have no concept of what those bugs might be.


A simple one: PID files.

Assume server with lots of processes.

Service A starts and writes its PID to disk, let's say 123. Lots of processes start and stop as the system goes along and does its work. Service A crashes or stops working, and PID 123 gets reused by a new process. The SysAdmin comes along and hits /etc/init.d/ServiceA restart; the shell script calls kill 123, which is now a totally different process, not at all related to ServiceA. (Sketched in code below.)

etc...

Clean unmounting not depending on timeouts to be high enough.

etc...

Not starting a database before the filesystem with the database files is mounted.

etc...
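
Here's that first race sketched in C; it's roughly what an init script's "kill $(cat /var/run/serviceA.pid)" boils down to, and note that nothing ties the number in the file to the original process:

    #include <signal.h>
    #include <stdio.h>

    /* Roughly what an init script's stop action does with a PID file. */
    int stop_service(const char *pidfile) {
        FILE *f = fopen(pidfile, "r");
        long pid;
        if (!f) return -1;
        if (fscanf(f, "%ld", &pid) != 1) { fclose(f); return -1; }
        fclose(f);
        /* If the service already died, this PID may by now belong to a
           completely unrelated process; we kill whatever owns it today. */
        return kill((pid_t)pid, SIGTERM);
    }

systemd sidesteps this class of bug by tracking each service's processes in a cgroup rather than trusting a number written to a file.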


> Service A starts and writes its PID to disk, let's say 123. Lots of processes start and stop as the system goes along and does its work. Service A crashes or stops working, and PID 123 gets reused by a new process. The SysAdmin comes along and hits /etc/init.d/ServiceA restart; the shell script calls kill 123, which is now a totally different process, not at all related to ServiceA.

I created a specialized FUSE filesystem to deal with this. Processes create PID files in it, but when they die, the filesystem automatically removes them.

Code: https://github.com/jcnelson/runfs
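
The core liveness test behind it is simple; a hypothetical simplification of the idea (not the actual runfs code):

    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>

    /* Is the process that created this PID file still alive?
       kill(pid, 0) delivers no signal; it only checks for existence. */
    static int creator_alive(pid_t pid) {
        return kill(pid, 0) == 0 || errno == EPERM;
    }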


Nice to see a different solution than cgroups for this

The Readme is rather sparse; could you add an example of how to use it from an init shell script?


I still think 64-bit non-reused PIDs are the best long-term solution. There are other PID race conditions. (Not having PID files deleted on reboot is a different issue, of course.)

Although the Capsicum model (in FreeBSD, slowly getting into Linux), where you can have file descriptors for processes, is another approach.


(although POSIX does, I believe, require pid_t to be a signed integer type, which is an issue)


> Not starting a database before the filesystem with the database files is mounted.

This was solved decades ago with numbered init symlinks:

  K20postgresql
  S20postgresql


This was solved in the same way that assembly languages are Turing complete: it's true, but it's not useful. You need a compiler to output those meaningless numbers. What happens when you install something after booting? Who starts it? On a long-running system those files are useless. Which ones have you run? Are they idempotent? Systemd attempts to solve all of these problems and more.


Don't those correspond to runlevels? I can boot to runlevel 5 and not have all file systems present and working, e.g. an NFS filesystem might not be available.

So I would say it has been hacked around for decades, not cleanly solved. But I am not the best informed here, so please add more details about how numbered init symlinks guarantee the file system being there before a service is started.


It hardly does anything for FreeBSD in the enterprise world. Companies cannot afford to support Windows Server, Linux AND another flavor of UNIX (FreeBSD). They are already dumping Solaris/AIX/HP-UX as much as they can so environments are a bit more homogeneous (and easier to support). There is no point in onboarding FreeBSD for what they perceive as minor technical differences (that, truth be told, are overshadowed by dozens of layers of bureaucracy and any efficiency benefit is completely lost).

It took a long time and a giant ecosystem to get where Linux is today at big enterprises. OSes are commodities in that space. They are not commodities in many other spaces though (e.g. startups, HPC, science, etc).


> They are already dumping Solaris/AIX/HP-UX as much as they can so environments are a bit more homogeneous (and easier to support).

Whoever is pushing for this is an idiot then. Verisign, for example, has had 100% DNS uptime for the .net, .org, and .com root servers for ~15 years because of their mixed environments. In every one of their POPs they tend to have at least two racks of equipment with:

   * 2 different brands of load balancers
   * 2 different brands of firewalls
   * 2 different brands of switches
   * 2 different brands of servers
    ** servers are from different hardware generations
   * 2 different OSes (Linux and FreeBSD)
   * a choice from 3 different DNS server software
This is all pretty much randomized at each location. As a result, a bug in one piece of the stack (hardware, software, driver, security, etc) will not take down their service completely.

This is how you run a reliable global-scale service. Anyone who plays the "it's just easier if we all use ____" is in for a big surprise when their entire infrastructure is at risk due to one bug.


Yet Google and Facebook keep using Linux for all their servers.


Facebook also uses PHP, the most reviled language around here. Google has a developer team to rewrite components and adapt the Linux kernel, consider that.


I don't recall seeing Facebook or Google have 15 years of uninterrupted service.


It's an unfair comparison in any case: DNS is "trivial" to keep up compared to even small fractions of Facebook and Google's infrastructure.

As long as a sufficient fraction of servers at a sufficient fraction of Verisign's clusters has an uncorrupted set of data and is able to serve responses, Verisign's TLD zones remain up.

Pretty much the only thing that can go wrong in that case, assuming you have safeguarded the integrity of the zone, is bugs in components outside their direct control.

It makes 100% sense for them to focus their efforts on ensuring diversity, because the class of problems it can solve for them makes up an unusually high percentage of the possible failure classes. The nature of the service also means that most of the potential problems diversity can cause are likely to take out only some proportion of their capacity, still leaving them with a functional system. So the potential benefit of a heterogeneous setup is higher for them than for most, and the potential risks are lower for them than for most.

For Google and Facebook, the systems are so much more complex that the tradeoffs are vastly different.


It's equally trivial if you serve your web infrastructure off of load balanced caches.


Sort of (ignoring that web browsers don't retry failed requests), if all you are serving is static/cacheable content.

Which excludes the vast majority of functionality of Google/Facebook, and most other major web properties.


I disagree.

There are very few heterogeneous systems in the enterprise. That is an objective, but the main thing is that we deliver what we're paid to deliver by choosing an appropriate platform. We have Solaris, zSeries, Linux and Windows. We just got rid of AIX.

As for minor differences, FreeBSD has a lot of much bigger wins than people realise at first glimpse. The differences are far from minor. For example:

ZFS, dtrace, rctl, a scary good IP stack, virtio support, documentation that doesn't suck, a POSIX base, LLVM/clang, a MAC framework that doesn't suck, OpenBSM, CARP and a pile more. Oh plus an automated deployment story that is pretty tidy.

Sure we can replicate some of these on CentOS 7 for example with similar tech but the above are a million times more cohesive.


I installed CentOS 7 with no prior systemd experience, had no problems, and found .service files way neater than sysvinit bash scripts. I also liked having meaningful names in the log/journal rather than 'local3'.


I had no problems the second time as well.

Unfortunately when the first and second time differ even though identical (recorded) steps were performed, one has to ask the question: why and can I trust it?


Maybe it was a hardware problem on your end?

My rule of thumb is "search for the problem on Google. If nothing comes up, maybe something is wrong on my end".

Did you find any results or reported bugs similar to what you experienced?


The hardware is known good on CentOS 6.5. In fact it was high end HP kit that we pulled out of production.

Yes, there were other mentions of it, with notes that it was fixed in a later systemd drop, which we can't deploy because RH/CentOS don't ship it. I think one of our guys raised a case with RH, but I was dragged off onto something else then.


To be fair, pulseaudio did expose bugs in alsa-drivers, so a "known good" configuration could stop working when "upgraded" to use PA.

I have my share of reservations about systemd (and PA), but thought that it might be worth pointing out that "known good" hardware A with software X, doesn't have to mean hardware A is all good, just that A has no bugs/errors not exposed when running X. So Y comes along (new kernel, drivers?) with entirely new code - and suddenly things behave erratically.


I've been a longtime linux sysadmin (back into the '90s) and have run into similar issues as you. I've never had as many problems with basic system stuff as I have since testing some of the new systemd-defaulting distros. For instance, the centos/rhel 7 boxes I've tried have erratic problems - sometimes not setting the hostname, sometimes services don't start, sometimes services I've disabled _do_ start. It's making me really think about shifting to FreeBSD (or, god help me, back to Solaris).


Out of curiosity, what ended up being the root cause of the timedatectl problem?


Absolutely no idea. It just went away spontaneously which is even more worrying as that suggests the system is non-deterministic.

I don't have the error on my phone which I'm on at the moment but it threw a dbus error with no debug info.


If you don't know what the problem was why are you so convinced it was systemd? Could be the kernel, could be udev, dbus, filesystem, or hardware (sorry, there is never 100% "known good" hardware).


It must be a hardware problem on your side, dude. Simple as that.

I imagine 10 years ago you would be the person complaining that GCC segfaults randomly while compiling the Linux kernel, complaining that it's not "tested completely", while the segfault was caused by the CPU overheating (not cooled properly) and flaky memory (causing bits to flip).


Dude, prove it. The system should be able to exonerate itself by detecting and reporting those problems (and other systems do). At the very least, point to some actual evidence. This is the "engineering" process that the parent poster referred to elsewhere.

Just because a problem is unusual, intermittent, or only affects one person doesn't mean it's not a regular old software bug. And in my experience, it almost always is. And once you do debug it, you often (but not always) understand why it was intermittent, under what conditions it happened, and why you were the only person that saw it.


Known good HP DL380 G8 pulled from production, ECC RAM, monitoring, SAS disks, full hardware test and memtest86 pass.

Nope. Not that.

We don't buy crappy hardware, and we don't skip testing it.


I completely agree with you that Lennart scratched an itch, which is the way all good software gets started, and others picked up on it.

Where I think the systemd-naysayers have a valid point is around the tight coupling that has been introduced, and is still being introduced, between systemd and various other components of a fully functional Linux system.

To take your "just submit a patch" example - say N years from now I'm unhappy with some aspect of how systemd works. I can submit a patch, or I can rewrite that whole component from scratch. However, it's entirely possible that the piece I'm unhappy with is so tightly coupled to the rest of systemd that I can't rewrite one component of it without rewriting the rest of systemd, or convincing the systemd maintainers to accept my rewrite and bake it in as the new "official" version of that component.

Where I think the criticism of systemd is valid is that the idea of modularity has taken a backseat, and the APIs between the different components of systemd haven't been very well-thought-out. The informal spec is "whatever systemd does today is correct", which of course destroys any sort of interoperability.

And by way of full disclosure, I'm an Arch user, and run systemd on 4 systems I use everyday - home desktop, home server, work desktop, work laptop. Whatever else I have to say about its design, I use it every day, and actually like the parts of it that I use. eg, the boot time for my desktop is stupidly fast, and if I want to know about some log message, I just run journalctl. I no longer care whether the foo daemon uses syslog, or writes to its own /var/log/foo.log that I should set up rotation for, or handles its own rotation as /var/log/foo/2014-11-20.log, and so on.

And just to play devil's advocate with my own position - there's a certain point where tight coupling makes sense. Linux kernel modules, for example, are tightly coupled to the Linux kernel, and don't work unmodified when compiled against a *BSD or Solaris kernel.


Well, the very distro-specific bunch of scripts in /etc/init.d (or is it /etc/rc.d/init.d/? or /etc/rc.d? or a symlink to ...?) were some kind of tight coupling, too.

Plus: This tight coupling did not exactly replace existing communication features. It created new ones. These are made use of.

Yes, systemd is bringing lots of new functionality under the hood - that is why sysadmins love or loathe it and users mostly don't care. That "tight integration" argument is mostly one that comes from people (please do not take offence, you're weighing it carefully indeed!) who bemoan that other userspace system infrastructure is left behind feature-wise, and from those who love to argue about and against design decisions.


Honestly, I can understand why people are uneasy about this. "Yes, tight coupling is being introduced in many core Linux projects, but don't worry - it's only these shiny NEW features you don't have anywhere else!"

Sounds eerily like "Embrace Extend Extinguish" redux.

Don't get me wrong, I am aware systemd is a technically superior solution. But politically, it is a trainwreck.


> Well, the very distro-specific bunch of scripts in /etc/init.d (or is it /etc/rc.d/init.d/? or /etc/rc.d? or a symlink to ...?) were some kind of tight coupling, too.

Sure, but the coupling was contained. You could still run Gnome on any distro (or on non-linux), whichever way around your init scripts were.


Say whatever you want about shell scripts, but that is not what tight coupling means.


You forgot /etc/init (upstart) :)


Lennart didn't just submit the code and put it out there. He lobbied other projects to hard-depend on it, and lobbied distros to adopt it. Systemd didn't succeed where less poisonous equivalents failed because it was technically superior (it isn't), it succeeded because of shady back-room politics.


How dare he ask other people to use the software he wrote.


I think "ask" here is the wrong word, and you know it.


I'm curious to know what kind of leverage he had over the distros where it wasn't just "ask". Any suggestions on where I can find some info?


Let's say you're right. What kind of "political" leverage did Lennart have to convince distros to use systemd?


All he would've needed was the support of RedHat management, after that everything falls into place: most Gnome contributors work for RedHat, so they can convince Gnome to follow a particular direction and hard-depend on systemd. Then most distributions are under pressure from their userbase to support Gnome (because of a legacy of politics and FUD about the KDE license, because a minimally-configurable environment is popular with the kind of large, inflexible organization that buys expensive support contracts, and because it's actually quite good software) and are therefore also obliged to hard-depend on systemd.

It's in RedHat's interest for software that's currently portable to FreeBSD or especially to Solaris to become tied to Linux. This wouldn't be the first time RedHat has adopted anti-opensource methods out of fear of Oracle - compare their policy of deliberately obfuscating the history of their kernel source.


Thanks for the explanation. I never thought about this before. It makes sense, but it is scary. I hope you're wrong. :)


You can't realistically submit a patch to change the direction that Systemd is going in. For example, they won't accept a patch which removes 95% of the code so that a more modular system can be built.

Submitting a patch implies you agree with the general direction but need a bug fixed or a feature added.


Not only would submitting a patch mean agreeing to their goals (Lennart gets to push the Overton window a bit further); suggesting that we should simply submit patches also presupposes that the systemd cabal would ever accept them. Unless it is perfectly in agreement with their goals - including for the complete software - they probably won't accept it. They don't even accept already-written and tested patches for trivial things like #ifdef-ing a couple of minor fixes so the project can build on a different libc.

Lennart Poettering[1]:

    Humm, I know this will disappoint you, but we are not particularly
    interested in merging patches supporting other libcs, if those are not
    compatible with glibc. We don't want the compatibility kludges in
    systemd, and if a libc which claims to be compatible with glibc actually
    is not, then this should really be fixed in the libc, not worked around
    in systemd.

If they aren't interested in trivial compatibility patches, they certainly aren't going to accept any patch that dares to disrupt their tight integration or questionable design choices.

As for forking the whole thing: remember that when logind was briefly liberated so it could be built as a standalone package, Lennart went and did a big rewrite, so the next version was much more integrated with systemd. When he controls the internal APIs and can change them whenever he wants, a clone will have to be a total replacement right from the start, or it ends up perpetually catching up to changes introduced just to cause breakage.

[1] http://lists.freedesktop.org/archives/systemd-devel/2011-Jul...


His response seems perfectly reasonable to me - even more so after reading that whole exchange.

Why should the Systemd team pay the overhead - in terms of complicating their code - to work around incompatibilities in another libc that will also affect portability of a lot of other Linux software?


The same reason most other projects accept trivial patches like that: it's not actually a cost or complication, and helping compatibility and interoperability in the software ecosystem is a good thing.

We're not talking about asking for some new work to be done. We're not talking about any kind of change to how the project works.

This is about trivial changes like #defining a function name, changes that aren't even included in the build unless you are using that libc. It is actually rather surprising behavior to see in a publicly-developed project. This kind of fix is so incredibly common that we've created tools such as "cmake" and "autoconf" to handle the common cases and make #ifdef-ing easier.


"Trivial" changes like that contributes substantially to making projects hard to read and understand. When there are no better alternatives, that may be warranted, but in this case there is an obvious alternative: Fix the libc implementations that are incompatible with glibc, and at the same time gain the benefit of helping other applications.

I wish more projects would take this line.

Autoconf is the devil. It's a symptom of how broken Unix-y environments have been, and how people were willing to impart a massive maintenance cost of countless application code bases instead of either pushing their vendors to getting things right, or agreeing on common compatibility layers.


Well, in this specific case the patches would have been subtly broken, i.e. replacing a thread-safe call with one that is not. So it was not just ifdef-ing (they even suggested some ways to do that better in the patches, e.g. capability-based ifdefs instead of uClibc-or-not).


It's a matter of perspective. You could also say that glibc is adding incompatibilities by deviating from standards, and now systemd depends on them. I don't consider it "perfectly reasonable" that Gnome, systemd and the Linux kernel are now starting to depend on each other when previously all of these components could be exchanged for others. It's a mischaracterization to say that the systemd "shouldn't have to pay the overhead" of making their code compatible, because they started out with introducing an architecture that promotes this very lock-in to begin with.


glibc is the standard for C libraries to follow on Linux.

In this particular case, mkstemp() is not a viable replacement for mkostemp(). A proper fix is to provide mkostemp() in uClibc, or to compile with a shim that provides it.

Arguing over whether including the shim in Systemd would be acceptable would be a different matter, but parts of the patches as presented were flat out broken.
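
For context, a naive shim would look something like this (my sketch, not the patch in question); the gap between the two calls is exactly the breakage, since another thread can fork+exec in between and inherit the descriptor, which an atomic mkostemp(..., O_CLOEXEC) prevents:

    #include <fcntl.h>
    #include <stdlib.h>

    /* Naive, NOT thread-safe mkostemp() substitute: the descriptor exists
       without FD_CLOEXEC between the mkstemp() and fcntl() calls. */
    int mkostemp_shim(char *template, int flags) {
        int fd = mkstemp(template);
        if (fd >= 0 && (flags & O_CLOEXEC))
            fcntl(fd, F_SETFD, FD_CLOEXEC);
        return fd;
    }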

And the Linux kernel is not starting to depend on systemd or the others. The Linux kernel is moving towards demanding a single cgroups writer, and at the moment Systemd is the main contender in that space.

That Systemd is depending on Linux is unsurprising, given that they stated from the outset exactly that they were unwilling to pay the price of trying to implement generic abstractions rather than taking full advantage of the capabilities Linux offers. You may of course disagree with that decision, but frankly, for a lot of us getting a better init system for the OS we use is more important than getting some idealised variation that the BSD's could use too.

> an architecture that promotes this very lock-in to begin with.

The "architecture that promotes this very lock-in" in this case is "provide functionality that people want so badly they're prepared to introduce dependencies on systemd".

At some point enough is enough, and sub-optimal advances still end up getting adopted because the alternatives are worse. Systemd falls squarely in that category: I agree it'd be nicer if it were presented and introduced in nice small digestible separate chunks with well-defined, standardised APIs, so that people could be confident in their ability to replace the various pieces. But if the alternative is sticking with what came before? I'll pick Systemd, warts and all.

Looking at posts from the Gnome people, the original intent appears to have been to provide a narrow logind shim exactly to make it easier to replace logind/systemd with something else. If someone feels strongly enough to come up with a viable shim or an alternative API that can talk to both systemd and other systems reliably, then I'd expect Gnome to be all over that exactly because they will otherwise have the headache of how to continue to support other platforms.

The problem is that Gnome has already for a long time depended on expectations of user session management that ConsoleKit on top of other init systems has been unable to properly meet, so Gnome has in many scenarios been subtly broken for a long time.


For better or worse, systemd has adopted the OpenBSD approach to portability. Nothing is stopping you from creating a systemd-portable project, similar to how OpenSSH-portable makes OpenSSH usable on non-OpenBSD platforms.

As to logind, it may have been a better choice for the long term to do a separate implementation of the public and stable logind DBus API instead of trying to run the systemd-logind implementation without systemd as PID1, but supposedly whoever did the latter thought it was the best short-term choice.


You can fork it. You can fork all of Debian. But there's numerous reasons that's going nowhere.

Most being: this is not the issue the loudest voices say it is.


Forking a distribution will help by giving an alternative, but it's not as easy as just saying it. Maintaining a distribution is a massive effort that takes a lot of work. Building a team of people with enough time to make that happen isn't something you do overnight. The fact that one hasn't magically appeared since this started has more to do with that than anything else. (Followed closely by people generally waiting to see how this shakes out before they pull the trigger.)

Second - while it will provide an alternative that helps frame the debate, this is not a minor undertaking. With every other distribution caving, maintaining a distribution that does not use Systemd will require a lot of work to keep all of the software out there working properly with whatever alternative init system it chooses to use.

This alternative distro is also going to have to deal with how to solve the init problem. We had some good options in play but I don't believe we'd found the best answer to the problem yet when Lennart came bowling through like a bull in a china shop. So any distribution effort is going to have to take on the role of choosing the best of breed alternative and make the effort to ensure it continues to develop and improve.

This isn't something you take on lightly.


> Systemd is offering a more compelling solution than anyone else, and if you don't like it, well, you should submit a patch, write some code, or implement it yourself.

Or go back to using windows for general use and software development, which is in fact what I've done. It's amazingly sad after many years of being a strong linux supporter, but this has killed linux for me. I see no point in continuing to use it.


That's really not a terrible idea actually. I switch back and forth a lot between the Windows and the Unixy ecosystems, and by far the best end user experience, tailored[1] developer experience and (I'm going to get slated for this) mobile experience is on Windows.

[1] tailored as in platform specific. For cross platform stuff, it's not so good.


Someone created a new account in order to post a pro-windows reply in a systemd thread.

Color me surprised.


[flagged]


Nice, a personal attack at the end. I'll definitely not call you a shill now!


Ok perhaps the personal attack was uncalled for. Please accept my apologies.


Or stop using Linux entirely and move to FreeBSD.


Unfortunately FreeBSD doesn't have a very good MAC system. Capsicum and friends implement only ~10% of what SELinux offers. Also the virtualization is far more basic, and lacks tooling (it's like receiving a huge bag of Legos and instructions on how to build a space shuttle).


IMHO Capsicum has significant potential to provide similar real-world security improvements in a far simpler way than SELinux (once the relevant profiling of daemons is done); there was some work by somebody at Google to port it to Linux and I hope that will be usable sometime soon.


Linus Torvalds said he would never have created Linux if FreeBSD had been available at the time. And the only reason it wasn't available is that it was embroiled in a lawsuit over embedded AT&T UNIX code back in the 1990s(?).


I'm not sure how this is relevant. Saying he wouldn't have created Linux at the time is not the same thing as saying he wouldn't work on it or use it today instead of FreeBSD.


I may be wrong, but I took GP to mean that FreeBSD is a good enough OS to use daily, hence Torvalds wouldn't have had a need to create Linux. Obviously, Torvalds hasn't abandoned Linux in favor of FreeBSD today, and no one said he has or is planning on it.


> I may be wrong, but I took GP to mean that FreeBSD is a good enough OS to use daily,

But that's not at all what Torvalds's quote means. He meant that, if FreeBSD had been available, he would have worked on contributing to that instead and improving it for daily use, rather than building Linux to be used for daily use. (The state of FreeBSD in this hypothetical world has no bearing on the state of FreeBSD today).

In terms of FreeBSD today, while it's possible to use it for daily use, it suffers from even worse driver issues than Linux does, and from all the problems of simply having a much smaller market share (both for contributing developers and users) than Linux does.

This may or may not be 'good enough' for OP's purposes, but it's disingenuous to suggest that Torvalds's hypothetical from the early 1990s implies that FreeBSD is a clean substitute for end-user Linux today.


The thing is though, back then in the early 90s FreeBSD could have been good enough; most x86 computers didn't have a GUI at all, and if they did it was OS/2 or Windows over DOS. Hardware was arguably much simpler too. Linux was created as a response to Minix, not BSD[1], and Minix was just a "teaching" OS, not a daily-driving workstation OS. He didn't initially build Linux for "daily use", but as a hobby project, per his own words.

[1] https://groups.google.com/forum/#!msg/comp.os.minix/dlNtH7RR...


>In terms of FreeBSD today, while it's possible to use it for daily use, it suffers from even worse driver issues than Linux does

No it does not. It has fewer driver issues by far. Because it does not have broken half-assed binary only drivers by obscure vendor X that don't actually work. Unsupported hardware is simply unsupported, rather than broken.


> Because it does not have broken half-assed binary only drivers by obscure vendor X that don't actually work.

I've always found it interesting that Nvidia offers a more complete and stable BSD driver than its GNU/Linux counterpart. That said, AMD/ATI support is abysmal, and even Intel video is lacking compared to GNU/Linux.

> Unsupported hardware is simply unsupported, rather than broken.

That's a matter of interpretation. If FreeBSD doesn't support my hardware, it's the equivalent of being broken for me, given that I can't use it anyway. That said, I try to build or buy the most OS-agnostic workstations possible so my options are always open.


> ... and even Intel video is lacking compared to GNU/Linux.

Not so sure about that, or maybe it depends on the situation. For instance, I've been running FBSD and Linux in VMs (specifically, Hyper-V/Win 8.1 on a Surface Pro 2).

After updating to FBSD 10.1, I decided to try the Lumina DE (from PC-BSD). I've been surprised at the performance of the GUI under the constrained memory and CPU availability. It's about as good as the host (Windows), albeit running minimally demanding applications.

OTOH Linux versions (SUSE, CentOS) have been much more sluggish and GUI usability much lower. I realize this is impressionistic and hardly a deep analysis. Nonetheless, I think it points out that it's risky to make assumptions when circumstances and system requirements are so tremendously variable.


> I've been running FBSD and Linux in VMs

Then you're abstracting away from the video hardware, and not getting the same results as you would on bare metal. The only thing really lacking in Intel video versus Nvidia is proper KMS support; Intel video on FreeBSD works generally well otherwise. The FreeBSD Nvidia-provided driver, while closed source and binary only, is more or less feature complete.


>If FreeBSD doesn't support my hardware, it's the equivalent of being broken for me, given that I can't use it anyway

That's the point. People like to pretend linux has more hardware support, but mostly what it has is broken drivers for obscure buggy hardware that you can't actually use. The "well supported stable hardware that actually works" list is practically identical between them.


And if Linux weren't available, Lennart would most likely be working on systemd for BSD. Whether or not he would be successful in getting it adopted there is unknowable, but it would be possible.

This is why choice is good, and lock-in is so bad.


I don't think it would get adopted in its current form. Systemd is a massive violation of the Principle of Least Astonishment[0].

[0] http://www.unixguide.net/freebsd/faq/16.17.shtml


To avoid violating this principle, the commonly accepted industry best practice is to never change anything at all.


If it ain't broke, don't fix it. If it is broke, make sure people still know how to use it after you're done fixing it.

As an aside, I've heard it theorized that part of the reason Microsoft tends to do massive GUI facelifts every few releases, is to keep the Windows/Office training industry going strong.


"Someone finally got fed up with the haphazard state of affairs in Linux-land"

But wouldn't the path of least resistance be to switch to a project that does not have this "haphazard state of affairs"?

When I originally tried Linux I got fed up within _days_. It is the _relative_ lack of default "configuration" (that is decided by someone else) that makes me stay with FreeBSD and NetBSD. Of course, lack of default configuration is the antithesis of popular Linux distributions. Whenever I have to use one, I spend more time learning how to turn things off than I ever did learning how to turn things on.

The answer to the original question is, I think, "no", switching is probably not the path of least resistance for many Linux users. Because when the Linux user makes that switch, they immediately find that someone has not done everything for them.

And from what I have seen, observing the questions of Linux users who first try FreeBSD or NetBSD, they generally do not like that. It means they have to do some configuration of their own. And even if they are comfortable doing configuration, it means they have to learn things that are different from the "Linux way"; and they inevitably encounter shortcomings that are due to lack of developer resources (read: time).

In doing things for yourself you learn about how things work. The rc.d system that all BSD projects use is coherent and relatively easy to understand. For whatever that is worth.

This debate over systemd seems to cut to the core of the value of learning about how things work. The reader can draw their own conclusions.

Linux is only a kernel and it should still be possible and thus optional to run that kernel with a basic init (or init alternative, e.g., one based on daemontools) and with userland utilities that do not need systemd.

The question I have is how difficult the popular Linux distribution folks are going to make that for their users to do.

And if they do make it difficult, it raises the question, "Why?"


> Why else would nearly every distribution be on board?

This informal fallacy is based on the idea that everything in the world can be distilled to a single answer. The real answer is more complicated. For example:

Red Hat is on board because they pay its creator's salary. So they rely on an individual bias.

Debian is on board because they rely upon 'collective wisdom' and committees to make decisions. So they rely on the bias of group thinking.

Ubuntu is on board because Debian is on board. So they rely on the bias of the other.

Other distributions are using it because 'every other distribution is using it', or they're small enough that it doesn't cause conflicts for their user base, or because it's a GNOME dependency, or because it's just new technology.

--

To make someone think something is a good idea, show them someone else thinks it's a good idea. This is a fact of all human beings' thought processes. Decisions are not based on merit, or logic, or even a quorum; they are based on fallacies created by heuristics. There exists a heuristic in which the more an idea is adopted, the more other people think it is a good idea.

We imagine our thoughts are logical, and that other people also think logically, and that their decisions must be made for a good reason. But in fact, the great majority of all decisions we make are based on guesses; this is how our brains are able to carry out complex calculations and come to decisions in split-seconds.

For example, you might look at systemd and say, "it fixes so many problems! it provides so many features! it standardizes Linux! CLEARLY this is superior. we must adopt it."

For people who care about the purity of the highest technical ideals, this makes sense. For people who care about being able to use their computer, these things don't matter, and systemd's implementation actually makes things worse for them. The changes systemd purports to make are not bad things. It's really just the way in which they did it that is bad.

--

It's like wanting to upgrade your bicycle to four wheels, but requiring the rider to now operate it lying flat down and using mirrors to navigate. The four wheels were a great idea. Using mirrors to navigate? Maybe not so great.

Of course, its creators will turn this inconvenience into a feature, saying "you get to lay down! it's therefore more efficient and easier to use!", completely ignoring how other people want to ride a 4-wheeler.


haphazard? fragmented? many places for error logs? half-a-dozen ways to configure X?

Have you ever run a linux box? How about dozens or hundreds of them in a production environment? I'm guessing no on both counts based on the nonsense you're spewing forth in your post.

If you don't know what you're talking about, it's best to just keep quiet.


Really bored of this nonsense at this point. Take this for example:

If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it’s finally baked and production ready) can bring you an LXC based ecosystem

Systemd is absolutely key to how CoreOS works. It's the basis for the distributed init system it provides — a major selling point.

Taking any of this blog's advice would be harmful. I'd suggest a better approach would be to accept that the majority of distributions have settled on systemd and that generally this decision has not been made by idiots. So it would be worth either understanding what their pain points are and how they can be solved with an alternative to Systemd, or to help solve the issues that are apparently in Systemd yourself.


I agree with that specific advice (minimize the interaction with the base distro). I'm on a quest to isolate every major component of my user experience in containers (including things like browser etc.).

But not because I have anything against Systemd. I love Systemd so far.

It's hilarious that he's proposing CoreOS as an alternative, given that it's one of the most radical rethinks of a Linux distro out there.


For those with reading comprehension skills this isn't funny but rather a natural conclusion. The problem for you is that you're making assumptions about the author that you haven't verified.

The problem here isn't change, or re-thinking linux. The problem is re-inventing the wheel, and doing it poorly.

CoreOS uses systemd, but it's not a distribution in the classic sense -- rather it's a platform for containers. The narrow use-case for systemd here removes some (most? all?) of the concerns.


ah so you're saying that systemd is flexible because it was chosen for a distro where it has very limited remit? so you mean it wasn't some horrible octopus which tried to suck all of coreos into it before they rejected its entanglements?


Many large decisions have been made by idiots, badly. Blindly accepting consensus as a rational choice without a counterpoint has caused many years of suffering for the human race, from wars to bad science to bad medicine.

Whilst I agree that the blog is probably hokum, there's nothing wrong with critical thinking.


I completely agree – but critical thinking involves something along the lines of "here are the things that I think are wrong, this is why I think they're wrong, this is how I think they could be solved."

The answer "let's throw everything out" isn't that useful; likewise, dismissing the considered opinion of lots of people who have been doing this sort of thing for a while needs to be done with some rationality. An empty, bandwagon-jumping appeal like this adds little value and just helps spread more misinformation.


Actually the answer is to be slightly more conservative with the approach. No one has to pick up systemd now for example. Leave it a year and see where it is. The technical merits will be obvious and the technical problems will also be.

Moving all the chess pieces at once, which is what is happening, is not productive, professional, or a sign of experience.


That's not the case though – systemd has been enabled by default by Fedora for three and a half years or so, and has been steadily adopted since then by most other major distros. Not everybody is moving at once, so why would waiting a year make any difference?


This bugzilla query against RHEL 7.0 systemd says otherwise:

https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_s...

A lot of the bug descriptions are quite scarily bad when you consider them in context such as "various loginctl commands not working" etc.


So what you mean is "Systemd still has serious bugs," which is nothing to do with whether or not everybody is moving at once.

That's the point – if systemd has important bugs they should be fixed. Clearly, the groups responsible for the decision have concluded that the tradeoff is worth it, and have accepted that a large, fundamental change will have issues. That's fine – there are a bunch of other distros that have not adopted systemd, which you can use in the meantime if you disagree.


The two are related.

People are shipping production operating systems with systemd that is chock full of bugs.

An all consuming tentacle monster like systemd is fine if you want to dogfood it but to throw at paying customers and/or supporters of your distribution is a little off key.


Linux developers are generally very smart people in my experience. It's consensus of many experienced and smart people that makes it significant.


They are individually I agree, but as a group of people it's not such a good story. It's quite dysfunctional from what I've seen.


I can't comment specifically on the Linux group, but in general you are absolutely correct. As a consultant, I see it daily in my practice. When I talk to people individually, they seem smart/knowledgeable, but as a group, they often make not-so-smart decisions.


That's exactly my point. It's not limited to the Linux core team. We divide into working groups of 2-3 people max to avoid this. Works quite well.


What does this comment even mean? What's wrong with them as a group?


To borrow a quote from a popular movie:

    A person is smart. People are dumb, panicky dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.


I think this is trying to say you are only as strong as the weakest link? Maybe it's trying to say something about groupthink.

I mean, these movie quotes are all cool-sounding but quite shallow.


Answer: Yes.

Just because some thought has to go into the interpretation doesn't make it shallow.


A large group is often dumber than its weakest link.


One thing I've realized about the Linux community through all this systemd flame warring is how unbelievably conservative a large subsection of it is. There's this huge so-called "neckbeard" contingent that views anything architecturally beyond the 1980s as a huge affront to Unix.

IMHO I kind of shrug at this, since Unix was never really all that great to begin with. Unix won because the only commercially viable and well supported alternative was Windows, an OS that was (and in many ways still is) significantly worse especially for server and embedded applications. Everyone rallied around Unix and especially free/open Unix as an alternative, and so here we are.

It's also tough to compete with free, and Unix OSes got a huge boost from both Linux and the various free flavors of BSD. Yet that boost came at the expense of things like BeOS, Plan9, original NeXT, and the OS I still feel is hiding behind the JVM ... which for their day represented fresh ideas that might have gone somewhere.

Ultimately I think the existing Unix paradigm is going to be killed by Docker and mobile OSes that containerize in similar ways, and I'm not sure this is a step forward. It escapes much of the ugliness and the poor permission model of Unix, but it does so by handing virtually everything to the app. Docker containers (and mobile apps) can be thought of as something almost akin to giant statically linked binaries. We're getting more monolithic and coarse-grained.


People who aren't extremely familiar with how the Linux init system works and whose job doesn't include keeping the servers stable don't see why the neckbeards are up in arms about systemd, but there's good reason. Many people's jobs depend on making sure the servers are working, and knowing how the servers work is a big part of their job (and sanity). In its current incarnation, systemd changes the fundamentals of how servers work without much increase in features - large risk and little reward.


I've had huge productivity gains with unit files for systemd over trying to write spaghetti shell code for old sysv. The syntax and features are well documented and writing them is extremely simple. I also don't believe any sysv implementation had crash recovery or socket activation of daemons, both of which are huge feature wins.
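For comparison, a hypothetical unit file (the service name and paths are invented for illustration); this is the whole thing, crash recovery included, where the sysv equivalent would be a page of start/stop/pidfile boilerplate:

    [Unit]
    Description=Example worker daemon (hypothetical)
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/worker --foreground
    # Crash recovery without a separate supervisor process:
    Restart=on-failure
    User=worker

    [Install]
    WantedBy=multi-user.target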


I also don't believe any sysv implementation had crash recovery or socket activation of daemons, both of which are huge feature wins.

That's because there were other components handling those tasks, like inetd and /etc/inittab. I do like having Upstart handle respawning for me, though.


Inetd only did TCP socket activation, not of unix sockets, though.


Inetd only did TCP socket activation, not of unix sockets, though.

False.

http://manpages.ubuntu.com/manpages/hardy/man8/inetd.8.html

    The service name entry is the name of a valid service in the file
    /etc/services. … For UNIX domain sockets this field specifies the
    path name of the socket.


    The protocol must be a valid protocol as given in /etc/protocols.
    Examples might be “unix”, “tcp” or “udp”. … A protocol of “unix”
    is used to specify a socket in the UNIX domain.

xinetd does not appear to support this feature.
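Reading the two quoted fields together, a unix-domain entry in inetd.conf would look something like the line below (the daemon path is invented for illustration; field order as documented in the same manpage):

    /var/run/echo.sock stream unix nowait root /usr/libexec/echod echod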


Oops, you're right, I misread an article about socket activation.


I've been dealing with sysadmin stuff for quite some time (since 2001), and I just won't use distributions without systemd.

The benefits far outweigh the risks (imo obviously)


Meh, I don't mind either way. There's nice stuff in systemd but nothing that's so critical I wouldn't use a sysv-based system (there are pretty good ones).

What generally annoys me are things like supervisor and the other tools people use to "auto restart" services; they don't exactly integrate nicely and put stuff all over the filesystem. I like that systemd includes that and does it mostly properly.


> People who aren't extremely familiar with how the Linux init system works and whose job doesn't include keeping the servers stable don't see why the neckbeards are up in arms about systemd

Hi, I'm a sysadmin who's fed up with neckbeards (most of whom apparently don't know much and refuse to learn) claiming to speak for all sysadmins on this topic.

> large risk and little reward.

It's four years old, and claiming "large risk and little reward" is like listening to someone claim that moving from sendmail to postfix would be a disaster.


The only sysadmin I know with an actual neckbeard (over a foot long) is a 20-year unix/linux admin, and he greatly favours systemd.

Perhaps if you're tired of neckbeards speaking for all sysadmins, you should return the favour and not declare what all neckbeards are saying. A lot of old, experienced admins are for systemd. It's not the young go-getters who are at the top level of distros making the foundational architectural decisions, after all.


Servers is precisely where I want systemd!

There are some things I've wanted reliable and consistent mechanisms for so long: starting/restarting/inspecting services, isolation/resource limiting, socket activation, log collection.
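For what it's worth, those mechanisms boil down to a handful of uniform commands (the unit name here is hypothetical; the commands are stock systemctl/journalctl):

    # start/restart/inspect a service the same way on every systemd distro
    systemctl restart worker.service
    systemctl status worker.service

    # logs for just that service, already collected
    journalctl -u worker.service

    # resource limiting via the unit's cgroup
    systemctl set-property worker.service MemoryLimit=512M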


Server admins have so much new technology to learn and play with to stay relevant. If they feel like learning systemd is a chore, they might be in the wrong business.


All the more reason to be judicious in what new technologies are introduced.

One of the huge benefits of the Unix/Linux, CLI, and Free Software traditions is that they tend to be very strongly preserving of established knowledge. Changes are incremental, usually additive, a reliance on scripting means that interfaces are unlikely to change, and new tools are very frequently drop-in replacements for old.

As specific examples:

I first learned editing under BSD vi in the mid 1980s. In the time since I've learned and used on various PCs (and a few other systems): WordPerfect, WordStar, MacWrite, AmiPro, several iterations of MS Word, the EDT and EVE editors under VAX, the TSO-ISPF editor, and a few others under Unix: emacs, ae, nano, nedit, Abiword, Lyx, and various iterations of what's now LibreOffice. Most of that skill-acquisition is now dead to me -- the tools simply aren't available or aren't useful.

I'm no longer using vi but vim (adopted in the mid-1990s as I switched to Linux), and the basic muscle-memory is the same. And it's an editor I can utilize across a huge number of systems (though I do admit to finding traditional vi / nvi painful).

Similarly, the bash shell is an iteration on the basic Bourne and Korn shells.

ssh is a drop-in replacement for rsh, to the extent that /usr/bin/rsh is typically a symlink to ssh. While the dynamic is slightly different from telnet, it's still pretty similar with a few exceptions.

On the rare occasions when a utility changes its commandline options, you'll virtually always hear about it. The fact that it's so painful (and tends to break decades-old scripts) means it's generally avoided. Authors who make a point of doing this tend to find that people avoid their tools.

A bigger point is that forgetting stuff is often much harder (and more important) than learning stuff. And when you're invalidating long-established patterns, that's really painful.

There's also the fact that we manage technology by managing complexity, and most of us in the field work at the limits of our ability to manage the complexity we're faced with: the basic OS, shells and interpreters, hardware, vendors, hosting providers, management tools, employers, clients, customers, co-workers, engineering and development teams, services, abuse and security concerns. It's a really complex and dynamic field.

Linux has done quite well (with a few notable exceptions) of maintaining a balance between capabilities provided and complexity imposed. One problem is that as systems become more complex, the additional benefits of yet more complexity are lower, and the costs are higher (this is a very general rule, not just specific to Linux, operating systems, or computers).

The question of how to introduce radical change is a key one. I've seen a number of failed attempts to drastically revise existing systems in place -- this almost always fails. Linux itself wasn't introduced in this way -- it emerged as an alternative to "traditional" proprietary Unices, to Big Iron (mainframes, VAX), and to Microsoft's then-new WinNT. Linux ended up dominating virtually all of these categories, but it did so by incrementally beating out the competitors through replacement.

An interesting space where a lot of this comes to a head specifically is in the graphical user interface field. I've noted several times that Apple, notable for a great deal of success in this area, has been exceptionally conservative in its GUI development. It's effectively had two GUIs: the initial Mac System interface and Aqua. Each has had a roughly 15-year lifespan, and yes, there was incremental improvement over the span of both, but the essential base remained the same.

Since the early 1990s, I've watched Unix/Linux go from twm to fvwm, Motif/mwm, VUE/CDE (a "corporate" standard based on Motif plus a desktop), Enlightenment, GNOME, and KDE, and now alternatives such as xfce4 and ... oh, that funky graphics thing Suse's got, as the "primary" desktops. GNOME and KDE themselves have gone through about three major revisions. And there are a number of other "lesser" more minimal desktops as well -- I use one of these, WindowMaker, which is actually based on a late 1980s ancestor of the Aqua interface now used by Apple.

Microsoft's experienced some similar recent tribulations. As has pretty much every online site ever that's done a site redesign.

As jwz has observed: changes to GUIs just don't offer that much win. They're highly disruptive, they're possible because the interfaces generally aren't scripted (other than via automated QA testing systems, but that's another story), but more importantly: the productivity benefits granted users really aren't that significant, especially as regards the cost.

Worse: changing an existing interface leaves users in a no-recourse situation, especially in the case of SAAS. For Linux and systemd, the options are slightly more open in that (for now) it's possible to disable or block systemd from installing in at least some cases. But over the long run, it may be that the only options are voice and exit, as opposed to loyalty (a reference to the book and concept of Exit, Voice, and Loyalty, which I recommend looking up).

So yes: those of us with numerous decades of experience in the field often do have an extremely jaundiced view toward radical change. And with very good reason.

But your comment is really unwarranted.


Wow, great comment -- and one that all who endeavor to innovate in systems should take to heart. As my former colleague Bart Smaalders was fond of saying, "the hardest software to upgrade is the software in our brains"; when inventing new abstraction, it must be done so sparingly and (as much as reasonable) by leveraging extant notions. This isn't merely to allow a technology to be readily understood (though that too, certainly); it also requires thinking in terms of reinvention versus reuse. This thinking enforces a kind of humility: you must learn about the systems that have come before, if only to understand which of their abstractions can be reused. I think it is a perceived lack of this kind of humility in systemd that has been so alienating for those who have a long history with Unix: it's not as if other approaches are being rejected so much as they are not being considered at all.


I think it is a perceived lack of this kind of humility in systemd that has been so alienating for those who have a long history with Unix: it's not as if other approaches are being rejected so much as they are not being considered at all.

I really have the feeling that people are using double standards here, especially when suggesting Solaris or Solaris-derived systems. Since systemd is implementing pretty much what has been in Solaris (SMF) and OS X (launchd) for a while now:

https://docs.oracle.com/cd/E23824_01/html/821-1451/dzhid.htm...

https://developer.apple.com/library/mac/documentation/Darwin...

Also, it is of somewhat questionable ethics that members of the Solaris community submit such troll posts (as others have pointed out, there is not much substance there). It reeks of wanting to destroy Linux' image for your own (Illumos, SmartOS) gain.


This is a rather disingenuous response.

It assumes that this is a troll post - which I don't think is fair. The author has concerns that are legitimate to them, and outright dismissal as a troll, whether or not you agree with them, is petty and judgmental.

Second, you are somehow conflating dislike of systemd with love of sysv init. The cognitive dissonance here only makes sense to me if you believe that systemd is perfectly fine, and think that the only reason people dislike it is because it's different.

However, if someone is recommending a solution that utilizes SMF, is it such a stretch to think that it might not be because they are in love with sysv init, and instead might think that the implementation of systemd is lacking?

I personally like the underlying idea of SystemD - because I like SMF. I do not like the implementation of systemd, and also have reservations about the people helming the project.


SMF is a pita - far, far worse than the process management stuff in systemd.

SMF does not seem to want to own every bit of my Linux machine, however.


I want to know how this even became an argument in the first place.

It's not that I don't like systemd, it's that [insert affiliated party] is way too cocky

It blows my mind to see people regress so far back in their arguments that this becomes an issue of emotions in a technical debate.


It's not a matter of emotions.

It's a matter of having observed similar behavior in other projects which went similarly off the rails.

Poettering's own track record with Pulseaudio comes to mind. There's also the GNOME project, which I identified as actively intelligent-user-hostile around 2004. It's been somewhat gratifying to see that particular perception bear out with time.

There are other projects which have shown similar levels of arrogance, though mostly with more limited and self-contained damage.

And being prickly or hard to deal with has shades. Neither Linus nor Theo de Raadt are pussycats, but both focus very much on technical issues and are generally highly responsive to specific technical complaints. Sure, they make mistakes and bad calls occasionally, but on balance they've tended to get things right.

The attitudes expressed by Poettering and Sievers in particular aren't simply cocky, but contemptuous. And they're getting called on it. Including by Linus.

I could give a shit about personalities themselves, I really could. For the most part I really don't care how socially awkward someone is if they're good at their job. And if they don't start going out of their way to do harm to me or others. Personality disputes in discussions bore the piss out of me.

But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.


I could give a shit about personalities themselves, I really could.

Then stop poisoning the well.

But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.

The problem that I see is that most arguments against systemd are first and foremost about Lennart Poettering. And when technical reasons are brought forward, they can all be summarized as: does not conform to the UNIX philosophy (monolithic, replaces existing tools with tightly-coupled equivalents, binary logs).

I think that a reasonable argument can be made that, with the exception of binary logs, these things are true for many UNIXen. You will find only few people who would say that BSD does not conform to the UNIX philosophy. However, the BSDs have the aforementioned traits as well: developed by one project and tightly coupled (e.g. you cannot just take most BSD utilities and libraries and compile them on Linux or Solaris, it requires serious effort).

People always argued that this was a good trait of the BSDs (and I agree to some extent), because it allows better integration and use of BSD-specific features.

However, when systemd does it, it's suddenly violating the UNIX philosophy.


"Then stop poisoning the well."

I've dithered on whether or not to respond, but this bugs me.

Your response, again, typical of many systemd supporters, looks at the option of responding to the relevant point of my argument (personalities can have relevant technical consequences), and dives into the personality dispute: "stop poisoning the well".

I'm not poisoning the well. I'm pointing out that the well has been poisoned.

The elements of the Unix philosophy which you allude to exist for good reasons, and violating them imposes very high costs. This is a lesson that those of us who've been around for a while, and have multi-platform experience (check on both counts for myself) are well aware of.

Monolithic systems defy ready replacement. Generally you've got to toss the whole mess out. Pluggable systems avoid that. There are instances in which monolithic design does seem to be at the very least hard to avoid, but you'd best be very aware of this and defend your position well. Systemd violates this principle by assuming a gratuitously monolithic nature and explicitly refusing compatibility and modular alternatives.

Tightly-coupled systems are similarly brittle. The classic case of this is probably the Windows platform as a whole. One of the best arguments for loose coupling comes from Steve McConnell's 1990s classic Code Complete (ironically, McConnell was a Microsoft developer). I strongly recommend you read the relevant sections on tight vs. loose coupling.

Binary logs (and binary file formats in general) preclude use of alternative tools. The Windows Registry (again from Microsoft) comes to mind. One of the better hacks of this I know of is Unix/Linux compatibility systems which treat the registry as a filesystem interface. This originated with UWIN (from David Korn of AT&T and Korn shell fame), and has since been adopted by Cygwin. The ability to grep the registry, process it with scripting tools (sed, awk, perl, etc.), and modify it (using specific commandline utilities offered for the purpose) makes dealing with that particular hairball _slightly_ less annoying. The lack of self-documenting formats for registry values themselves (a trait shared by GNOME's gconf system) is another fatal flaw.
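As a sketch of why that interface matters: with Cygwin's /proc/registry mount, stock tools apply directly to the hive (the key path shown is a standard Windows one, used here purely for illustration):

    # read a registry value with cat, no regedit required
    cat "/proc/registry/HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Windows NT/CurrentVersion/ProductName"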

Even packaging formats are subject to this. Red Hat (gee ... aren't they involved with systemd....) designed a binary file format for RPM which requires specific tools to unpack. Joey Hess's 'alien' links to the RPM libraries for this purpose, and a set of Perl tools I'm aware of has to apply specific offsets (varying by RPM version) to extract data from the files. Contrast this with Debian's DEB format: tarballs packed in an ar archive. This can be unpacked with standard shell tools, or busybox.
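To make the DEB claim concrete, this is the whole extraction, with nothing but ar and tar (the package name is hypothetical; the data member's compression suffix varies by release):

    ar x somepackage.deb     # yields debian-binary, control.tar.gz, data.tar.gz
    tar xzf data.tar.gz      # the actual filesystem payload

busybox's ar and tar applets can do the same from a rescue shell.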

Putting together the concepts of monolithic, loosely coupled, non-binary, standard tools, I've more than once rescued Debian systems which failed to pivot-root from initrd by breaking into the initrd shell, unpacking, and installing DEB packages using shell tools, facilitated by the use of an interactive shell, busybox for tools, and the DEB format. I'm thwarted on several levels from a similar recovery option in Red Hat systems due to the use of a special and explicitly noninteractive shell used in initrds (which is larger than Debian's 'dash' used for the same role), and the binary format of RPM packages. Working in cramped quarters and difficult situations, I can assure you of which system I'd prefer to be working with.

Systemd's violation of these principles is objectionable because it's not necessary (see OpenBSD's shim replacement for the functionality, or uselessd, among others) and it's gratuitous (decisions are being deliberately made); and, as your comment above illustrates, the very valid reasons for not doing just this are belittled.


LOL. Exactly. When you don't have any good response based on the merits of the argument, attack the presentation. This tactic is very familiar to those of us who call out sexism or racism: "You should be nicer when asking for your problems to be taken seriously."

But, ultimately, what happens in tech is as much about people and personalities as it is about actual technical merit. To delude ourselves otherwise is dangerous. When someone claims to be arguing from technical merit, look very closely at their history and probable motivations. There's always more there.


Thanks.


Exactly that. You know what's insanely great? When you watch that presentation video from 1978 at AT&T where Ken Thompson explains Unix and types some commands on his VT-52, and you think: all of this is still current knowledge, and all of his explanations still hold true. Just like celestial mechanics or the Pythagorean theorem. We are heirs of this ancient wisdom and this is friggin good, this is culture.


And systemd is changing precisely none of that.

Nothing about systemd removes the basic unix command line. Because he's most definitely not explaining the init system, which wouldn't have been the same from year to year then, or even similar decade to decade.


Systemd does touch numerous parts of Unix as it existed in 1978: logging, authentication, and devices come to mind. But much of what it's interacting with came along afterward: networking, far more services than existed at the time, a much more complex security scope, and more.

But that's still a good 25-30 years of work, experience, practices, and smoothing out of rough edges that will be flushed down the drain.

Systemd also fundamentally changes the control locus of key features within Linux and how applications, the kernel, and the OS as a whole are constructed and constrained. It puts all of that under the control of a small group with highly evident disdain for any "outside" concerns (in quotes as these are concerns of the larger Linux community, and thus most decidedly inside that group), contempt, and a plays-poorly-with-others attitude.

I'm not impressed.

Nor with your comment, FWIW.


Authentication is done with PAM and Kerberos these days - Kerberos is late 1980s, PAM came along in the 90s. Unix evolves and has continued to do so since its inception. udev certainly changed how we do devices.

The rest of your comment is fear mongering which could be applied to any group of core devs on any OSS project in existence. After all, who controls Debian and security defaults? Do YOU trust them?


You're missing the point: there was no networking (outside of UUCP and dial-up connections) in 1978 Unix, so there were large classes of functionality since added which simply didn't exist.

What 1978 Unix did have was security and authentication. The OS was multi-user from the very beginning -- hence the pun in the name: a uniplexed operating system (Dennis and Ken created a two-user OS to play Space Travel).

As Bruce Perens recently discussed in a set of comments at LWN, the first thing he did as DPL of Debian was decentralize the management of Debian packaging. He recommends a very similar process for Systemd. The Systemd proponents in that discussion aren't particularly taken with the idea.

http://lwn.net/Articles/621022/

It's not a matter of fear mongering when the stated goals and practices of Systemd are to intentionally break compatibility with other Unixen, to reject compatibility patches, and to provide "choice" in the form of allowing users the option of any Linux distro on which they can run systemd:

http://imgur.com/r/linux/Is9vjRJ

As Jon Corbet noted at LWN in his Grumpy Editor post on the topic, it would greatly behoove systemd leadership and proponents to demonstrate a modicum of gracious victory.

As for Debian's governance, that process has been more than slightly troubled of late, with at least four key departures (Joey Hess, Ian Jackson, Russ Allbery, and Tollef Fog Heen) in just the past couple of weeks. The cabal question was raised by former DPL Bruce Perens in the LWN post linked above. And, frankly, no, I haven't been happy with the recent directions of Debian's Technical Committee. Joey Hess's resignation (as well as those of Ian and Russ) calls into question more than just the specific decisions, but the process as a whole.

Your attempts to smear my own comments, which are based on actual events, facts, and highly considered views of those with deep and broad experience in the field, are, I'm really sorry to say, far too typical of what I see from systemd proponents (the attacks on Perens in the LWN thread strike a pretty similar tenor).

Something is sick in this process. That more than anything is what's bothering me about it, though I've also grave doubts over the technical direction.


It's an interesting take, but that's not really how software works. Look at Plan9. It isn't POSIX compliant, but it does a lot of things much better than traditional Unix (or nowadays Linux, for that matter). Traditional Unix is not the philosopher's stone. There are plenty of good things about it, but it also comes with a number of dubious design decisions or what is now irrelevant cruft (why are we living with code replicating the behaviour of obsolete hardware in our terminal emulators?). It's not so much the actual implementation that is important, it is the "good parts" of its philosophy that we need to keep.


Plan9 is actually a really good example to bring up, for any number of reasons. I have to admit that I've never used it, though I've read bits about it. There are definitely some ideas in there that I'd like to play with and experience.

The most important elements to consider about Plan9 are these:

1. Plan9 wasn't Unix (nor was it Linux). It was its own OS, but it was absolutely informed by Unix and tried to learn from mistakes practiced in Unix. Because it wasn't Unix it provided for an independent test bed in which these ideas could be explored without disrupting a large established installed base and user community. And that is a key benefit of branched development. All of these I consider positives of Plan9.

2. It was hampered by an overbearing corporate control and licensing model. It was an ugly stepchild of AT&T's, under a proprietary license. The fact that it was under development kept it from being widely deployed (among other factors), the fact that it had a restricted license meant that other possible collaborators couldn't get involved.

When Linux emerged in the early 1990s, it had a lot of problems -- it was far from the best or most obvious Unix alternative out there (look up ESR's PC Unix guides from that era). But in a world of large proprietary Unices priced far out of the hobbyist's range, a handful of small PC ports of varying quality, and BSD which was embroiled in its lawsuit with AT&T (speaking of Plan9), Linux was unencumbered, free, and (pretty quickly) available under the GPL. That gave it the critical mass to develop. As with Plan9, it was its own OS, providing a testbed environment for development, but also allowing stable cuts to be made for use in specific deployments as it reached sufficient states of readiness.

Which is to say: the community and development dynamics mattered a lot.

I'm seeing a far more troubled path for Systemd in this regard.

Also of note: in the Debian init system debate, a specific concern raised against upstart, one of the init alternatives, was its own requirement of a developer license grant to Canonical, which was seen as a strong demerit against upstart. As with Plan9, exercising too much proprietary control may well have cost Canonical critical votes in the Debian decision.


It seems that folks outside of Red Hat do contribute to systemd, if that's your concern. What I could imagine is that some projects under the systemd umbrella will live an independent life, once things stabilize a bit.

I must admit the ever-growing scope of systemd is starting to concern me somewhat (though I've been running it with satisfaction more or less since it became available in Debian experimental).


What can I say, some people in 2014 like to live in 1978.

It was fun for a while, but I grew out of it.


You completely missed the point. It's not about living as in 1978, it's about _not throwing away_ accumulated knowledge. What use is my CP/M knowledge nowadays? None at all. What use is the knowledge I could have of '78 Bourne shell, pipes, signals, vi, ed, awk, grep, man? Not only useful, but still of daily use.


Cars from last century still take me from A to B. It doesn't mean I want to drive one.


New cars have almost exactly the same interface as cars from the 50s or even older ones. People having learned to drive at any time since WWII can drive any brand new car, and nobody has seriously proposed that we switch to a joystick or a brain interface.

Similarly, though the underlying hardware and code share basically nothing with Unixen of yore, old knowledge is still useful on modern Linux. This commonality of interface is more important than inner workings.

By the way, the most expensive cars by far (therefore arguably the most desirable) are old to very old. A Ferrari 250 GTO is way more valuable than any new car. IIRC the most expensive car ever is a 1929 Bugatti Royale, and you can even drive it.


I don't accept that being irritated at 'radical change' (I'm not sure exactly how this is so radical; it's an init system, not a kernel) because you're losing domain knowledge is a good reason. That would imply radical change is necessarily bad and should be avoided everywhere.

Carriages to cars, ice houses to refrigeration, bow and arrow to black powder. Every single one of these radical changes required people to learn something vastly different. With your reasoning we should've waited longer for some intermediate technology to smooth the learning curve. People were getting really good at taking care of horses, now they need auto mechanics? Building an ice house was becoming a science, how do I fix leaking refrigerant? My aim and dexterity with a bow is second to none, but now we're using guns?

Now your first thought might be that all of those things are different than server admin, but are they really? I would suggest the only thing different for server admins is that this is an entirely less radical change than all of those other technologies.

As for my comment being unwarranted, sysadmin'ing requires learning new tech. If there is an improvement on a tech such that it has mass adoption, learn the tech. It's your job. If you don't like it, change jobs. I'm not saying you should shut up and put up. However, we're far past that stage of valued input and people are still complaining. The decisions have pretty much been made that are going to be made concerning systemd adoption. Yet here I am, reading yet again how systemd was the wrong choice, even though rigorous debate was had and core teams decided it was the best decision. Even though this was the biggest drama piece since that blogger blasted linus for being rude. Here we are with 'radical change' in systemd.


I understand your strong opinion, and since I am no sysadmin I have no problem with the technical arguments. But I am not sure I totally agree with your characterization of the slow pace of change by Apple or the wonderful state of Unix/Linux. Aqua was quite a break from the previous GUI, and Apple changed the whole stack at one point, from computer architecture to OS to graphics library. I don't know a more radical change than that for a software company. As for the Linux graphical environment, I can only say that replacing X-win with Wayland is not evolutionary, and it cannot come soon enough. Anyway, hopefully things will quiet down for a while and we can compare and contrast alternatives in the real world.


Also, to be clear, I'm not accusing Apple of failing to innovate elsewhere in its product chain. It clearly has. Since 1999: the iBook, MacBook Pro, Air, and a few iterations of the iMac, just in form factors. There's been a lot of under-the-hood stuff going on as well.

But where the user interacts with the system, things have been remarkably stable. Even the relatively minor changes which have been presented have been covered with the usual Apple levels of obsession -- skeuomorphic vs. flat designs, etc., ad nauseam.

Again the point being: screw with how things are visually and how users interact with the system, and you're going to create huge usability costs with little to show for it.


I'm not saying that the System 9 (I think -- I'm not fully up on my MacOS nomenclature) to Aqua break wasn't big. It was.

BUT IT WAS THE FIRST SUCH BREAK IN 15 YEARS OF THE GUI, AND IT'S BEEN THE ONLY MAJOR BREAK IN THE PAST 15 YEARS.

I'm also not saying that Aqua hasn't changed at all. It has, with the most notable addition that I'm aware of being virtual desktops (something NeXTSTEP had in the 1980s). But other than some minor cosmetic changes, and largely invisible-to-the-user under-the-hood updates, the visible UI has NOT changed appreciably.

Contrast that with the disruption that's prevailed in the Microsoft Windows and Linux spaces from 1999 to present. We've gone from the Win98 UI to the candy-cane XP styling, and Metro in Windows, and at least three generations each of KDE and GNOME on Linux, plus a few other desktops which have waxed and waned in popularity.

I've continued to use WindowMaker, and after 17 years, it is, hands down, the one GUI metaphor I've had the longest experience with of any. It's been exceptionally stable, with very few changes. Even minor ones are quite jarring to me, which is somewhat odd to reflect on.

X11 and/or replacements is a whole 'nother discussion, but I'll simply note that the network transparency of X has been hugely underappreciated by many who've sought to upend it (I don't know what the status of Wayland is in this regard).


> If they feel like learning systemd is a chore, they might be in the wrong business.

I think IT managers would prefer it if they didn't have to spend time and money re-training their sysadmins or hiring/firing them to ensure their staff has the skills to use the $NEW_SHINY from $CORPORATE_VENDOR. Skill transference is a boon for customers (see also: "Stop breaking the UI!").


> it escapes much of the ugliness and the poor permission model of Unix, but it does so by handing virtually everything to the app.

If you want to see the ultimate extent of this, look at the Wii and Wii U. Each game ships with an "IOS": effectively an OS kernel+initrd update package. Every game boots to the newest IOS available, so if one game updates to IOS v6653, then another game that only shipped with IOS v6652 will find the newer version on disk and use it.

However, a game's IOS requirement doesn't just have a version; it also has a slot. Each console has space for 256 individual copies of IOS, which are each independently versioned. So if two games both use IOS[58], then the game providing v6653 will overwrite v6652 on disk, and then the game providing v6652 will boot into v6653. But if one game is providing a version of IOS[58], and the other is providing a version of IOS[61], then their effects on one-another are isolated.

You can think of it a bit like the IOS codebase having 256 branches, and each piece of software being able to specify which branch of the kernel it was developed on. It gets the newest kernel released on that branch.

This allows a sort of "move fast and break things" approach to kernel development, where a kernel can be hacked to support new software in a way that breaks old software: you just stick your modified kernel into an as-yet-unused IOS slot, and old software will have nothing to worry about. This approach has resulted in my own (pretty unused) Wii U having ~73 different IOS slots populated with kernels.

Interestingly, if you think about it, this is pretty much a continuation of what Nintendo was allowing developers to do before: shipping random collections of chips in their own cartridges that DMA to the console, effectively creating their own extended console to run upon. Allowing your software to ship its own kernel is basically the software equivalent.


This is how the Wii works, but the Wii U has a more traditional kernel model. This is required due to the system's multitasking abilities (run "mini" apps like the browser or download manager alongside a normal app/game). The only time the Wii U uses the IOS model is when it boots into vWii mode to run Wii apps, which also disables the new features of the Wii U software.


I completely agree. Having played with Docker on CoreOS for some weeks now, I see that it will push a much bigger and different change than systemd, which on my Arch box was just an update with no real problems; I just had to learn some new commands.

Docker though... man, how different it is and how clean it makes my system feel. I do feel that Docker will move towards some kind of Docker-optimized minimal images that are not Debian or Ubuntu or whatever; those are just a stage so you feel some familiarity.

CoreOS meanwhile, who will ever touch its init system except to auto-start containers? Which will be done by a nice tool that hides systemd in the future, I guess.

Ok, my post does focus on the server side of course.


Well, here's the basic deal. If we're talking about common servers, common desktops, etc., then systemd is an excellent replacement. It covers the base of users quite well.

But lets say you are building a highly specialized application. You are going to be making quite a few customizations which are far more manageable through a shell scripting environment than by customizing a bunch of binaries.

I assume that Red Hat is going to cover a lot of the bases for most users out there. But for those of us in highly customized environments it's going to suck.


The status quo for such projects is to use Busybox, and I reckon that will continue for projects where systemd etc. is too much.


There are other options, in particular the musl libc-based distros like Alpine Linux and Sabotage (where you can use Busybox but don't have to). They also feel much more like the traditional BSDs - musl libc plus pkgsrc is very close to a BSD...


Systemd is too much, but often busybox is not enough. Plus if everything starts conforming to systemd, busybox will have to become like systemd to stay compatible.


I doubt you want to run Gnome on a system where you have to use busybox.

Also if busybox is not enough, a minimal systemd system will still be leaner and faster than the equivalent sysV system.

http://events.linuxfoundation.org/sites/events/files/slides/...


It's not just GNOME; an embedded device that acts as a USB host needs something like udev, and as I recall reading, systemd has swallowed the hardware-plugging notification system, or was working on it. The success of systemd in getting single-user kernel patches that only make sense for desktops and systemd is most concerning, though. Linux is great because of its modularity; previously, if init didn't make sense you could switch to OpenRC, Upstart, or others, without having to change the way you do logging, DHCP, hardware plug events, etc.


That's just speculation until you clarify. Can you be more specific?


Think of anything that isn't a web server, file server, database server, or desktop.


Jolla uses systemd on their phones and tablets right now.

IIRC it's also part of some soon to be shipping vehicle integrations, for in-car entertainment systems and mapping.


That doesn't make systemd right for the other hundreds/thousands of embedded Linux systems. If you recall my response elsewhere in the thread, "Company X uses it" is unconvincing.

Also, it's inevitable that if systemd and software expecting to use it take over more and more aspects of userland and the kernel, vendors will be left with no choice but to use it as well. So "more vendors are switching to systemd" is not a convincing argument either. I like to make my own decisions on the basis of modularity and replaceability (vendor lockin has been a huge burden in other major projects not mentioned in my online persona), not popularity.


The person was asking for an example, that is all. I can't even speak to whether the Jolla devices are any good - only that it is shown to be done, in shipping hardware, right now, not some time in the future.


http://thorstenball.com/blog/2014/11/20/unicorn-unix-magic-t...

is also a current story on HN.

Just an example of how powerful that simple '70s Unix is. It allows features that appear "magical" - to Thorsten, anyway.

Windows? Wasn't really even an option... until 20 years later. And, of course, Unix really isn't that good, either. But, before you ignore it, please come to feature parity, at least.


> Windows? Wasn't really even an option... until 20 years later.

Which 'Windows'? Before Windows 2000, there was Windows and Windows NT, the former being more or less just a shell running on top of DOS.


> Unix won because the only commercially viable and well supported alternative was Windows,

Unix didn't beat Windows. Unix beat VMS and LISP machines and AS/400 and various other minicomputer operating systems. In fact, if we're talking about mainline commercial Unixes, NT started beating the shit out of them in the late 90s - if Unix lovers hadn't had the free ixen (Linux, BSDs) to fall back on it would be a sad state of affairs indeed.


>AS/400

Hey now, I'll have you know AS/400 is still alive and going at my workplace! We also have an entire position just for its programming...


> We're getting more monolithic and coarse-grained.

At the same time we are pushing more heterogenous software stacks to production and configuring more specific dependencies for our applications.

It almost seems like you're using cross-platform as a pejorative. ;)


Yeah, we need to go back to when distro A and B had totally different versions of everything, since it was so much work to get things working.

Now we are close to having an OS where you can seriously just expect anything "Linux" to just run. Bad, I guess, to some :P


Seriously, once the systemd convergence is over, I can finally start advocating Ubuntu on workstations everywhere because it will finally have commonality with server infrastructure. The last frontier after that is package format convergence, and Lennart has said repeatedly he intends to use the systemd monoculture to push a common package format, which is a really good thing for me.

Right now I have most clients running OpenSUSE, just because I cannot be bothered to fuck with Upstart anymore. Once systemd is in place, the fact that zypper is much nicer than apt doesn't make up for the incredible market-size difference between SUSE and Debian and its children.


>The last frontier after that is package format convergence, and Lennart has said repeatedly he intends to use the systemd monoculture to push a common package format, which is a really good thing for me.

Great, so now instead of adopting a package system with a solid theoretical foundation like Nix or Guix, we're going to dump all dependencies into fat binaries and more or less end up with the solution the NeXT people came up with in the 90s. Such progress.

EDIT:

Not to mention that Lennart's proposed package system[1] would depend on btrfs-specific features, adding even more code coupling.

[1]http://0pointer.net/blog/revisiting-how-we-put-together-linu...


I was so sure it would happen... https://news.ycombinator.com/item?id=8203859 - called it just before the blog post. I wish I was wrong.


I also run OpenSUSE, and I think that zypper is best in class and the OpenSUSE Build Service is the killer app for SUSE, so I don't know why these are not a HUGE seller of SUSE servers.

Compared with the OpenSUSE Build Service, what does a Debian server get you? Just wondering.


And why not RHEL/CentOS and/or Fedora? I'm sure you have your reasons, but I find it odd you didn't even bother to mention it, when it's a rather large part of the market.


Software availability and versioning in the Red Hat ecosystem sucks. Either you are using Fedora, where software is usually just frozen bleeding edge à la Manjaro, where breakage does happen and cannot be accepted in production, or you are running upwards of 5-year-old versions of software.

The Ubuntu LTS cycle is just an optimal compromise in my book. You even get Debian Testing as a good rolling release, Debian Stable as a great server release, Ubuntu Server as an enterprise option, and they all (soon) will be using a common core.

For now I advocate the SUSEs, but while it has been stable, between its general obscurity, the dwindling userbase, and the fact that Novell (I know they have since sold SUSE) backed out of maintaining OpenSUSE directly, I can't be confident in its future. You should not underestimate the Ubuntu mindshare, because it means "Linux" software is often Ubuntu first, repackaged by hobbyists for other distros second.


> One thing I've realized about the Linux community through all this systemd flame warring is how unbelievably conservative a large subsection of it is. There's this huge so-called "neckbeard" continent that views anything architecturally beyond the 1980s as a huge affront to Unix.

Fully agree. It seems some people are quite happy with a few xterms in X replicating a twm user experience, stuck in the past.

I would also add Oberon, Active Oberon, Singularity, Verve and the current unikernel/library OS research.

> OS I still feel is hiding behind the JVM

Android kind of got us there. Now, with Java being compiled to native code, maybe other C++ layers might be replaced in future versions, given how the Android team looks at the NDK.

All in all, I want the Xerox PARC and Douglas Engelbart's visions, not the AT&T one.


>single parent hierarchy for namespaces

Predictably, all the blame is laid at systemd's feet.

The current churn is happening because Linux's core developers (kernel and user space) want that change... to push the envelope.

For example, the current changes in cgroup namespaces are happening because the kernel is mandating that the current cgroup access mechanism be deprecated. They want a single writer to cgroups. Systemd is in the unfortunate position of complying with that request. Guess what? Soon enough, so will Upstart.

Again with kdbus, the person who made the push is not "evil" Lennart, but Kay Sievers - a long time maintainer of udev.

Systemd is nice. Don't be afraid.

http://www.lambdacurry.com/systemd-nice-dont-afraid/


No. Rebuilding the entire userspace set of services to be a systemd cluster is not nice. It's essentially redoing the traditional Linux approach, which has worked relatively well for years. There are a number of things that could be split out and made more modular - c.f., uselessd for a more in-depth analysis.

To be clear, I'm not claiming that SysV init is The Best Way. Shell scripts are not the Happiest Place. But I am claiming that systemd is a crummy and overbearing replacement.


> It's essentially redoing the traditional Linux approach

It seems like that's part of their mission statement, given comments like this: "Some day, we will have turned the old crap into a real operating system. :)" -- Kay Sievers (https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8)


Yeah, and they seem to be using Windows as their model of what a "real operating system" looks like.


Both Windows and OS X have a unified management mechanism.

In fact, OS X's launchd was a direct inspiration for systemd because of how nicely it works there. I've wanted launchd on servers for so many years.


OSX is not an appropriate choice for a server, and Windows has limited reputation in my mind since they put the GUI in kernel space.

We should not emulate the things we've overtaken and now outrank in server market share. Something we've been doing before must have been right, and I believe it's the ability to be dynamic and modular.


Windows is actually used on servers quite a bit, but that's beside the point.

Service management has been a problem on (Linux) servers for a long time. Just because launchd originates on a desktop doesn't mean it's not a good idea.


If there were a couple of competing alternatives to manage your cgroups, sysadmins might not be so peeved. We're used to installing an alternative syslog daemon or cron daemon rather easily. But it's systemd or the highway, in the 3 or 4 most popular distros.

About udev, Linus has had multiple serious complaints about udev maintainership since GregKH passed it to Kay. Don't you recall the async firmware loading issue...


Here's a thought: try remembering you don't speak for "sysadmins", you speak for yourself.


I think he speaks for sysadmins. If you are using cgroups at the moment, you can write scripts for them. It's a mounted filesystem. The change forces you to use systemd for cgroups, as only systemd will be able to write to cgroups. The argument is: if you don't like systemd, implement an alternative that does this for you. Same for kdbus and udev, netlink...
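For example, today a script can carve out a cgroup with nothing but the mounted filesystem (cgroup v1 layout shown; mount points vary by distro):

    # create a group, cap its CPU weight, move the current shell into it
    mkdir /sys/fs/cgroup/cpu/throttled
    echo 256 > /sys/fs/cgroup/cpu/throttled/cpu.shares
    echo $$ > /sys/fs/cgroup/cpu/throttled/tasks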

The article is right - it's not Linux as we know it anymore for better or worse.


That is the change the kernel wants, not systemd.

They want to prevent direct access to Cgroups, other than through a single writer. This change is happening regardless of whether you want systemd or not.


You are right. I remembered reading an article from 2013 that gave the impression that the changes were related to systemd, and then it shed a different light on the issues (https://lwn.net/Articles/555922/), but it looks like the features are mostly back and in better shape: http://lwn.net/Articles/601840/


He doesn't speak for sysadmins. I'm a sysadmin, and I love Systemd.

He may speak for some subset of sysadmin, but he certainly does not speak for us all.


> The change forces you to use systemd for cgroups as only systemd is able to write to cgroups. The argument is: If you don't like systemd implement an alternative that does this for you.

And from the looks of it, this has been done: https://cgmanager.linuxcontainers.org/ as reported at http://lwn.net/Articles/618411/


At best he might speak for some sysadmins. His position does not reflect that of all sysadmins, so it's wrong to pretend it does.


I'm pretty sure it's Greg KH who's running kdbus these days [0]. At least, he's the one who submitted the patch to lkml [1].

[0] https://github.com/gregkh/kdbus [1] https://lkml.org/lkml/2014/10/29/854


You mean this Kay Sievers? http://www.theregister.co.uk/2014/04/05/torvalds_sievers_dus... - He's basically Lennart's buddy at Red Hat - at least that is my impression from far away...

systemd may be nice, but it's coming from Red Hat, and cgroups are being changed because the systemd folks wanted it that way, as far as I followed that debate...


Yes, we are veering dangerously close to Godwin's Law: not only are Lennart and systemd evil, but so is everyone who agreed with or contributed to it.


Sievers appears to be an unreasonable dick. Gundersen announces to the world that systemd's DHCP client is pretty damn cool. In the same post, he puts out a call for interested volunteers.

Ted Lemon (the author and maintainer of ISC DHCP from its inception to 2003 [0]) asks for the location of the project's source repo. Sievers replies with a LMGTFY link that doesn't even answer Lemon's question. Lemon politely criticizes Sievers for his rude and unhelpful answer. Sievers fails to even apologize.[1]

Both Sievers and Poettering have pretty serious attitude issues. It's one thing to lambast a peer who frequently fails to meet the potential that they've demonstrated in the past. It's entirely another thing to try to score social points with your callous indifference and blinkered bullheadedness.

[0] https://www.isc.org/downloads/dhcp/

[1] Check the first few comments of: https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8


What you've just mentioned here is what scares me most about systemd. Not its tight coupling, not its bugs, not its philosophy - all of these things are arguable and in most cases fixable.

The project being run by people who hold unreasonable and downright odious views who act like, frankly, utter asshats, is a much more serious problem.

The Kay/Linus debacle is something you can expect to see more of from these fine folks going forward. Mark my words. Ask yourself if you want software developed this way running your OS.


I am not sure if you're referring to this, but I have seen so many instances where an individual joins a community and then systematically tears it apart by calmly and coolly promoting ideas that ~51% kind of like and ~49% absolutely hate, through a combination of personality cult and back-room coalition building. I have seen this happen in forums, IRC channels, RPG groups, businesses. It is a particularly insidious type of toxic personality because you can't fix the problem by excising them without alienating and losing a large chunk of your community.


Been there and done that.. what's a community to do in such a case?


It's more like everyone knows Sievers is under Poettering's umbrella ever since the 'debug' kernel flag debacle, wherein Kay was banned from contributing to the kernel; then, instead of addressing the issue himself, Poettering came out and made that infamous "the kernel is just an implementation detail" blog post to defend him... https://plus.google.com/+LennartPoetteringTheOneAndOnly/post...


No need for personal attacks. I just think the article has some merit, and your comment gives the impression that these people are not connected. I just wanted to point that out.


No, cgroups are being changed because the kernel maintainers want to deprecate some stuff and systemd is in the best position to provide the new services.


Look at how many people actually contribute to systemd. It's far more than just Red Hat folks.


And if you run gitstats on systemd, you'll see that just 10 people are responsible for over 90% of the code.


But the patches that didn't fit the roadmap dictated by Red Hat have been rejected.


I'm quite happy to be a part of the development team for GNU Guix, a distro that is not using systemd. I'm not a systemd hater, but it's definitely not for me and I'm not thrilled with the direction that development is going. It's a shame that sysvinit and friends are so bad that using systemd is the best option we have right now. Maybe GNU dmd will be able to stand up to it someday.


> I'm quite happy to be a part of the development team for GNU Guix, a distro that is not using systemd.

Thank you for doing this :) I love systemd myself, but I still think it's important to have alternatives available; also it makes me very happy to see somebody creating their own choice instead of tearing down other people's choices :D


Thank you for doing this.

That being said, it's a little too esoteric for my tastes (among other things: "if you are looking for a stable production system that respects your freedom as a computer user, a good solution at this point is to consider one of more established GNU/Linux distributions.").

Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI? (My current laptop, unfortunately, has UEFI. It's a royal pain, but oh well.)


>it's a little too esoteric for my tastes

Yes, we're still in alpha. Not ready for prime time yet.

>Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI?

Gentoo? I use Debian most of the time, which of course uses systemd now.


> Gentoo doesn't officially support UEFI


Ah, didn't know that. Thanks!


> Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI?

Not addressing any of your other points: don't most laptops/computers that ship with UEFI allow you to set them to boot in "legacy BIOS" mode?

Even if you're currently UEFI-booting, I would be seriously surprised if UEFI support was a requirement for every OS your machine can boot.


I dual boot with Windows 8 for games. Windows 8 will work without UEFI, but I haven't found any information about how to downgrade an existing UEFI-based windows 8 partition to legacy bios, or if it is safe/feasible to do so. In particular, the non-existent bootloader. (Windows, as usual, seems to take the approach of "wipe + reformat", which is not exactly optimal.) If you have any ideas, feel free to let me know.

And regardless, this is a temporary solution.


I'm looking forward to switching to Guix as soon as it reaches beta. All this POSIX breakage and LGPL exploitation is making me double down on GNU.


Glad you are interested! If you are brave, you can try out the distro and report the issues you run into. It would be very helpful to us. BTW, I'm typing this from my standalone Guix machine. Eating my own dog food.


I have mixed feelings about systemd: like many others, I've spent a lot of time with traditional Unix systems (25 years) and like the simplicity and stability commonly associated with them and familiar tools/conventions. On the other hand, OS X showed what a modern system built on Unix should feel and look like, while Linux never got anywhere near (not only on the GUI layer). If it takes unreasonable people to make progress (like in the famous G.B. Shaw quote), then let's sit back and see what happens. Perhaps Linux will shed more cruft and become simpler to develop for rather than harder.

Of course, I'm in the favourable position of not having to maintain/administer a bunch of Linux boxes for a living. I can fully understand the frustration of people who built and shipped custom solutions on top of SysV init.

The end of Linux? No, it's the end of Linux as a traditional Unix with lots of arbitrary optional features on top perhaps. We'll get used to it, or switch to something better.


On the other hand, OS X showed what a modern system built on Unix should feel and look like, while Linux never got anywhere near (not only on the GUI layer).

This feels like a non-sequitur. The nice GUI of Mac OS X has nothing to do with launchd (which is the systemd-like portion of Mac OS X).


Are you kidding me? Launchd helps coordinate a massive selection of on-demand GUI-related services and their prerequisites. (254 services on my Mavericks install as of writing)

SysV init is ill-suited to that sort of complex event-oriented management, which (for example) is exactly why Canonical developed Upstart in the first place.


Progress is good. But is systemd progress?


> is systemd progress?

My knee-jerk reaction would be to say "no", but many people who are more involved with this part of Linux seem to think so, I don't feel qualified to challenge that.

I am confident that more radical changes are good in general, as long as the bazaar model allows the best solutions to survive and the worst to be undone if needed.


The ignorance in this post runs deep.

For example:

Speaking of zones and Solaris, if that’s an option for you it’s probably the best of breed stack right now.

Does the author have no idea what's going on with Solaris? Hint: Nothing. Nothing is going on with Solaris, because Oracle doesn't care about Solaris. They closed the source, and now push out the occasional minor update from on high for their enterprise customers. Anyone who is suggesting Solaris as an alternative to Linux at this point in history is simply not credible.


He's actually suggesting SmartOS[1][2] and OmniOS[3], which are both illumos distros that are very much alive, having forked from OpenSolaris over four years ago.[4]

[1] https://smartos.org/

[2] https://news.ycombinator.com/item?id=8571961

[3] http://omnios.omniti.com/

[4] https://www.youtube.com/watch?v=-zRN7XLCRhc


That's probably a wrong choice, too, for most Linux users (though there are some reasons a reasonable, and technically competent, person might choose an Illumos system, if you're doing it because a random crank on the Internet tells you to, you probably don't know enough to understand those reasons and the quite large tradeoffs you'd be making).

Regardless, it's one example of many where the author exhibits a very poor grasp of...well, everything he talks about. Dunning-Kruger effect is funny that way.


You realize that you're just talking in ad hominem circles? You acknowledge that there are reasons that a "reasonable and technically competent person might choose an illumos system"; is it as least possible that someone advocating that choice might not be merely "a random crank on the Internet"? And given that this is potentially a reasonable choice, how does advocating it represent "a very poor grasp of [...] everything"? If there are specific technical arguments to make here, please make them; the repeated personal attack is unwarranted -- and unpersuasive.


Yes, I find the whole article so full of wrong that I didn't think it needed more than simply calling it "wrong". Though seeing how many upvotes it now has, I guess I was wrong.

But, if you insist, let's break it down a bit:

"...FreeBSD...also ships with ZFS as part of the kernel and has a jails which is a much more baked technology and implementation than LXC."

Which is an assertion that would require significant citation and specification about the ways in which the author believes jails to be superior in order to be a useful claim. I believe it is an assertion based on ignorance of either Jails or LXC, or the ways those technologies have been used historically and are being used today. For most of the uses I see talked about on HN, LXC is the "more baked" implementation. While Jails has existed for a long time, it was not intended for the purposes we're using LXC for today in Docker and similar deployments. The tools exist, the resource management exists, for LXC and they don't, or are quite rudimentary for jails. To suggest someone choose jails where they are currently using Docker and LXC is to suggest they live with a large variety of limitations and pain points, and in a lot of cases to simply not do what they are currently doing, or to do it in wildly different ways. All to avoid the minor pain that is represented by SystemD for most users.

In short: Jails are not (currently) a reasonable alternative to LXC in that context, and it exhibits some kind of ignorance to suggest them.

Continuing on, despite your suggestion that he is talking about SmartOS or OmniOS, he quite clearly is not. He specifically mentions Solaris while mentioning the others as other options:

"Speaking of zones and Solaris, if that’s an option for you it’s probably the best of breed stack right now. Rich mature OS-level virtualization. SmartOS brings along KVM support for when you HAVE to run Linux but backed by Solaris tech under the hood. There’s also OmniOS as a variant as well."

That paragraph clearly is recommending Solaris, specifically. If you'd like to argue that Solaris is a reasonable alternative for most Linux users, it's a conversation I'm going to opt out of. I'm pretty sure we'd be speaking completely different languages.

"If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it’s finally baked and production ready) can bring you an LXC based ecosystem."

So, CoreOS is the Linux option he recommends? The same CoreOS that uses SystemD? Indeed, it was among the first distros to embrace SystemD with gusto. CoreOS, which is remarkably different from all other Linux distros. All because "Linux is becoming something different than it was"? So, in response to Linux becoming something different, he recommends people switch to something that is utterly different, like an entirely different operating system (FreeBSD, Solaris(!), etc.) or a Linux distribution that rethinks everything, not just the init system (CoreOS).

All to avoid something being different. It absolutely boggles my mind, and I have hard time responding with anything other than derision; for that, I apologize.

You're right that I haven't been particularly persuasive, and have been quite abrasive. This article just really rubbed me the wrong way.


"That paragraph clearly is recommending Solaris, specifically."

He says, as he quotes a paragraph that specifically calls out SmartOS and OmniOS -- both of which are under the illumos branch of OpenSolaris.

"So, CoreOS is the Linux option he recommends? The same CoreOS that uses SystemD?"

The problem he's raising isn't that Linux is going to be different, and if you think it is, you need to re-read. Try doing it with your --with-reading-comprehension switch. It being different is just a statement of fact; the problem statement was separate.

CoreOS is not a distribution in the classic sense. It is a platform for deploying containers, and in the use case they've set up, the author clearly believes it will allow you to still be successful in spite of systemd.


It is a terrible article, I think we can all agree.

Its CoreOS point is stunningly ignorant.

However, Zones and Jails are a good substitute for LXC; they are not yet such a good fit for the Docker way of using containers, which is somewhat different. But for a whole-system-image model like LXC they are both great.

SmartOS is doing great things with Linux compatibility, both through emulation and KVM; it is worth looking at.


Curious logic you have. "It doesn't have a constant stream of fixes. That proves it's no good. Things that need fixing are better than things that don't need fixing".


Software maintenance matters. A lot. Particularly for networked systems.

Also, it hasn't been Open Source in several years. That makes Solaris utterly irrelevant in my world. This blog post is suggesting Linux users move to Solaris. It's just a bizarre recommendation, made as an off-the-cuff assertion with no reasons given.

Also, Oracle. What hacker chooses to be beholden to Oracle?


> What hacker chooses to be beholden to Oracle ?

How about almost every single Java, Scala, Clojure, etc. developer. We are all using Oracle's JVM, and they have been unquestionably a fantastic steward of the Java platform. I know the company deserves a lot of criticism, but remember it is a big company with many different departments.


We are all using Oracle's JVM…they have been unquestionably a fantastic steward of the Java platform

That is debatable. First off, we're not all using Oracle's JVM :) It's also not at all clear that Oracle has been "a fantastic steward of the Java platform." They don't appear to have totally wrecked it, but that's a low bar for "fantastic."


Fair enough. I haven't followed Java in many years (long before the Oracle acquisition), so I don't have any point of reference. In my mind, and in my experience, Oracle make Microsoft seem like pretty nice guys.


Ellison seems to have had a pretty solid inferiority complex when it comes to Gates. He must be pleased with how Oracle has now dethroned Microsoft as the Evil Empire, seeing as how even the most hardened old Microsoft haters (I count myself in that group) now for the most part have at most a slightly tepid lingering dislike for Microsoft, while Oracle has taken the exact opposite path.


I don't know about clojure, but Scala ships with openjdk. It's possible to use the oracle JVM, but that's not default. They rely on oracle for bytecode/behaviour spec and not much more. Personally I'd like to see more bytecode effort being directed towards supporting non-java languages on the JVM, so I'm not saying the status quo is ideal, but it's not that bad.


Solaris is dead but Illumos lives on kinda.


>The ignorance in this post runs deep.

The irony there is quite impressive.

>Does the author have no idea what's going on with Solaris?

Do you?

http://wiki.illumos.org/display/illumos/illumos+Home

https://smartos.org/

http://openindiana.org/

http://omnios.omniti.com/

Even the commercial Solaris from Oracle is actively updated and a release was done this year. Anyone who is suggesting Solaris is not an alternative to Linux at this point in history is simply not credible.


Yeah, actually, I do. I maintain a software project with a thousand or so Solaris installations.


Going against the "divide et impera" principle, "do just one thing, and do it well", "everything is a text file" (configurations and logs) and "pipes" as a very natural idiom - in a UNIX-like system which has been built upon these principles - is just bad engineering, shallow understanding and, perhaps, too high an ambition of knowing better how to fix what isn't broken.

The more interesting question is why we have this project and why there is such a buzz.

But the answers don't belong to the realms of system design or engineering; they lie in the more obscure notions of "business strategies" and "sales techniques".

It is a textbook example of how to sell: "show them that they have a problem they didn't realize before, and give them the one and only fix". Fine. Except that the problem doesn't exist outside discussions of "why this product is good - it solves this and that".

Another thing is that it is an excellent strategy to "create a niche for oneself". Look, we are the ones who give you a solution (to a problem which doesn't exist). We are the "world leader" - look, we have lots of experts, web presence, lines of code.

There are so many examples in "parallel worlds", beginning with eco products or "fair-trade" coffee, you name it.

But you see, lots of people don't need an eco or environmental-friendly daemon which does everything "the right way". No, thank you.

Another thing, back to system design. If Windows has a "registry" and there are millions of copies of Windows everywhere, or Mac has that settings daemon (from which GNOME's GSettings was copied), it doesn't mean that these design decisions are superior to plain old text configs. You could compile them and have an "intermediate storage" to gain 5% more efficiency in application startup times, but this is, again, not a fundamental problem. Being able to use regular expressions on configuration and logs without any restriction is much more fundamental. And memorizing all the details of all these xxxxctl tools, instead of saying something like

   $ grep -R ^sysctl /etc  
or whatever I need, is just an additional unnecessary burden.

Most of the problems systemd is trying to "solve" do not exist. At least not on BSD, AIX, Solaris, you name it. And, of course, there has never been any problem with syslog, init or cron. They were "good enough".


I don't support a switch to systemd at this time (I use Debian, and I'd be happy to live with pretty much any other init system for at least one more release - the obvious choice would be to just stick with SysV init and see what systemd looks like in a year or so. The downside is that if the distro were to move anyway (to support all those people who want GNOME for some reason ;-) - then we lose out on a year or two of testing systemd).

Anyway:

> Most of the problems systemd is trying to "solve" does not exist. At least not BSD, AIX, Solaris, you name it.

Don't forget that (open) Solaris revamped the init system, not just the disk/filesystem-system. I think there are arguments to be made for integrated systems like the zones/zfs/SMF.

I don't think systemd is a reasonable way to go about it, and I certainly don't think it is a good fit for "Linux in general". My impression is: Systemd is too big -- will fail.

Wasn't it GNOME that at one time tried to mimic the Windows registry for settings (because binary data on disk: really fast, lol) -- only to go back to ini-style files? (I might be misremembering that one.)

Different designs are fine (see: eg plan9) -- but moving away from fundamental design principles (everything is a (text) file) effectively means abandoning the old system, making a new system.

If systemd were a bit more upfront about "making a new operating system sharing some code with the Linux kernel and traditional userland" - rather than trying to sell systemd as "more of the same, just better" - maybe they'd meet with a more positive reception. That a new system is unstable is fine - just don't expect people to use your new crap in production.


If the problems with sysvinit are all imaginary then why has every other UNIX replaced it?

Mac OS X has launchd for 10 years now.

Solaris 10 replaced sysvinit with SMF.

Don't have any experience with AIX but the documentation says it has a "System Resource Controller" with a "srcmstr" daemon to manage services; though it looks like it runs on top of something sysvinit-like:

http://www-01.ibm.com/support/knowledgecenter/ssw_aix_61/com...

BSDs have generally never used sysvinit, although they do have shell-based service management.

So actually none of the particular examples you cite use purely sysvinit to manage services.

And it's also quite obvious that there are use cases where reliable and fast service management is very important on servers.

One is containers, where you may run 10,000 of them on a single host, and you have to reboot the host some time too, so you don't want to delay the startup of all those containers with loads of slow and racy shell scripts. Or even better, you want socket-activated containers that don't actually start until they are needed.

https://www.getpantheon.com/blog/pantheon-running-over-50000...
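A minimal sketch of what that looks like (unit names hypothetical, directives real): systemd holds the listening socket, and the service behind it is only started when the first connection arrives.

    # container-web.socket (hypothetical)
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # container-web.service (hypothetical) -- only started on first connection
    [Service]
    ExecStart=/usr/bin/start-my-container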

Another is hyperscale servers: there are now really small ARM- and x86-based servers, and you can put 400-500 of those in a single rack. That means a lot more individual servers to admin, and failure modes that are relatively rare in current server rooms will become an order of magnitude less rare, so more robust OS-level service management is helpful.


Problem 0: Yes, sysvinit has problems, but systemd brings even more problems (journalctl, PID 1, dependency hell++, nondeterministic bootup process).

Systemd is a broken silver bullet for handling the decrease in quality of packaging.

There are problems with sysvinit: the first one is the missing link between devs and sysadmins in companies - the people known as packagers.

For sysvinit to work well, the shell scripts, permissions, resource locations and dependency management have to be done with art and expertise. It is a human job, done by humans who are:

1) a very skilled, rare resource, 2) not identified as needed by companies, 3) working where software QA is shitty (it barely reaches the "works for me" (tm) level of quality).

Talking about Debian, which is considered the distribution with the most talented packagers, Debian has 2 flaws in this domain:

* too rigid: upstream projects go for logical units of packaging that are consistent, and an organization of assets in packages that eases maintenance. Debian has its guidelines that make packagers «fix» poorly packaged software like LaTeX, Python, Ruby: cutting a language distribution into at least runtime, dev and extra packages. Debian packagers are often Debian experts, not upstream software experts, and they first break some stuff (LaTeX is so poorly packaged on Debian it can be considered broken), AND it adds more work.

* too many features: typical Linux distros, compared to the BSDs, package far more software in their core, resulting in more work, less attention to detail, and conflicts/overlaps of functionality. This drains more resources from the packagers. For example, we have 4 shells considered OK for writing shell-related stuff, when they have only one: «sh».

The problem of Linux vs BSD is symbolized by systemd vs sysvinit: Linux is an OS of devops who are super devops but poor coders and sysadmins; BSD is an OS of sysadmins and devs who are good sysadmins and devs, but no devops.

And in 2014 we still lack maintainers, sysadmins and coders of quality.

Linux/GNOME/FSF projects are not sustainable in these conditions. They think of free software as an infinite resource of benevolence. And they exponentially overblow the work required for maintaining and deploying... thus they are mathematically doomed to die under their own weight.

I see BSD as a boring, Calvinist Protestant community turned towards humility, doing what is right, and the Linuxes as exuberant Catholic rockstars overspending the goodwill of developers without thinking of the future.

Being Lutheran, I still dream of THE right OS, one that would be less terse than BSD.


> Except that the problem doesn't exist outside discussions of

except https://en.wikipedia.org/wiki/Init#Replacements_for_init

This problem was noticed, and people have been trying to fix it, for a long time.


Traditionally, one of the major drawbacks of init is that it starts tasks serially, waiting for each to finish loading before moving on to the next. When startup processes end up I/O blocked, this can result in long delays during boot.

For a server it doesn't matter, let alone being "broken". "Long delays during boot" is grossly exaggerated, because delays are mostly due to network errors/timeouts or file-system checks, and you can do almost nothing "in parallel" without a clean FS or a configured network interface.

Well, for something like Ubuntu for mobile devices they could have adopted some "optimized" replacement for the sysvinit scripts (not /sbin/init itself), but, please, don't tell us that everyone needs this. Leave servers and home workstations alone.


It matters with servers too: billing per hour/minute/second, etc.

And my point was that there have been a whole bunch of prior attempts at changing init; each one of those had issues with the pre-existing systems.

I am not telling you that everyone needs it, or that it's broken. I'm telling you that the issues didn't suddenly come into existence with systemd.


I think the article got one thing wrong: that sooner or later every GNU/Linux distro will switch to systemd. I can think of three that most likely will not: Slackware, Crux, and Gentoo. Granted, the more that systemd binds itself to formerly modular userspace utilities and apps, the more pressure on those distros to make the switch, or else fork every systemd-dependent userspace app and risk being left behind. But I have a feeling there will be a schism in Debian, resulting in a fork or (less likely) dropping systemd altogether, before too much longer. That might just be the catalyst to move away from it in other major distros.

Then again, I could be completely wrong, and systemd will end up in every single surviving distro by default, turning GNU/Linux into systemd/Linux. That's when we'll see a true exodus of those opposed to it to other OSes.


I'm also interested to see how things play out with OpenWRT. High-end home routers are starting to get a lot more powerful with the transition to ARM SoCs and NAND flash, but they won't be able to run any software that depends on the systemd ecosystem while OpenWRT is still supporting the massive install base of hardware that systemd can't fit on.


Ditto for small ARM devices like the Raspberry Pi. When the RPi version of Arch Linux went to systemd, the system actually got slower in my experience. On such an already limited device, it was a quantifiable difference; before systemd, Arch was the fastest GNU/Linux distro for the Pi by leaps and bounds. After systemd, even Raspbian feels faster than Arch, and now Slackware is the reigning king of speed for it (again, among Linux distros).

Unfortunately Slackware is still a bugaboo when it comes to installing it on the Pi, but once you've got it on there you can image your SD card and have a ready-to-roll distro that's lean and mean.


Good point. What is the systemd approach to embedded systems in general?


Can someone put in layman's terms what the change entails and why it would be the end of Linux?


It's not the end of Linux. It's the final death of the idea that Linux should just idly continue to act like a clone of some vague Unix of old because it was better when men were men and their computers had obscure RISC processors with billions of registers.

Linux hasn't really been that way for an incredibly long time, but a large proportion of the userbase still cling to this notion, some for ideological reasons, and some out of sentimentality.


Systemd is most problematic because of its own-everything monolithic nature, and the changing of kernel interfaces to match. The vast majority of Linux systems are not desktops, and probably not servers either. The kernel and the core utilities around it (like init) should be designed with multiple implementations and all machine types in mind. This means phones, set top boxes, TVs, routers, automation hardware, supercomputers, and more.

If the core utilities around the kernel and the interfaces they use are well documented, well designed, and modular, then different machine types can pick and choose the components they need, and easily write their own by opening a well-defined API on something in /dev. If systemd continues to take over and alternatives are smothered, the kernel will very quickly become useless to any system where systemd makes no sense, especially non-desktop embedded systems.


systemd makes a lot of sense for embedded systems:

* Embedded systems use watchdogs. systemd implements a watchdog supervisor chain, where systemd supervises applications, and the hardware watchdog supervises systemd (see the sketch after this list).

* kdbus: efficient IPC

* networkd: simple network setup, very fast DHCP client

* fast boot times

* handles many complexities, so that embedded developers can focus on their application
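A rough sketch of that watchdog chain in config form (the options are real systemd ones; the service name is made up):

    # /etc/systemd/system.conf -- the hardware watchdog supervises systemd:
    # systemd must pet /dev/watchdog every 20s or the board resets
    [Manager]
    RuntimeWatchdogSec=20s

    # sensor-daemon.service (hypothetical) -- systemd supervises the app:
    # the app must call sd_notify(0, "WATCHDOG=1") within every 10s window
    [Service]
    Type=notify
    ExecStart=/usr/bin/sensor-daemon
    WatchdogSec=10s
    Restart=on-failure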


That all sounds fantastic, but what do you do if just one component of systemd doesn't work for you? Or if you only want to use one or two components of systemd?


Regarding the first question: Then you use something else instead of that component. For instance, the existence of networkd does not preclude the use of (say) dhcpcd or your own network setup scripts. Of course, there are a few components that are more central (e.g. journald, udevd). In that case you file a bug report and/or fix it. Same as when one component of the kernel (which is vastly larger than systemd) doesn't work for you.
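For example, swapping out networkd for a standalone DHCP client is plain service management (assuming your distro ships a dhcpcd unit):

    # stop and disable systemd's network daemon...
    systemctl stop systemd-networkd.service
    systemctl disable systemd-networkd.service

    # ...and run a standalone DHCP client instead
    systemctl enable dhcpcd.service
    systemctl start dhcpcd.service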

As for the second question: certain parts of systemd can certainly be used on non-systemd systems (such as udev or nss-myhostname). But most would require at least some changes.


We have been selling embedded devices with systemd Debian since this spring and they work just fine. And it's not some $5 board made once and then sold as is. It is a big and very advanced device in the hundred-thousand-dollar bracket, and all of them are constantly improved and upgraded. And they will be maintained for many more years.


Jolla uses systemd on their hardware right now, for tablets and phones.


Your comment is the very definition of FUD. You have no facts, just baseless fear.


I developed and marketed my own embedded Linux products. It would have been difficult to do some of the customizations I did if the kernel, DHCP server, logging system, HAL/udev, etc. were all adapted to the one systemd way of doing things with dbus.


You should educate yourself about FUD. I recommend the Halloween documents, where they explain the infeasibility of FUD tactics against an open-source project: if someone tries it, they won't get anywhere.


Why did systemd decide to ALSO take over logging and store them in a binary format, though?


If you read over the Wikipedia page on systemd, you will find out that it does not just replace the `init` system, but a whole lot more. Systemd is a collection of 64 binaries which manage the login daemon, networking (DHCP), etc.
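You can get a rough feel for that scope on any systemd distro (path varies; Debian uses /lib/systemd):

    # each systemd-* file here is a separate daemon or helper from the project
    ls /usr/lib/systemd/systemd-* | wc -l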

In other words, to some it seems like a lot of not-so-well-tested software replacing software that was well tested. From an outsider's perspective, two things seem problematic:

1. A lot of Linux software worked on the principle of "don't break userland". Systemd appears to break userland here and there.

2. From any such large piece of software replacing well-tested infrastructure, bugs are expected. The problem appears to be that in many cases systemd developers push the blame for breakage onto other subsystem devs (sometimes kernel devs, sometimes the end-user apps written on top of KDE/GNOME). This is the part which apparently makes a lot of people angry.


It was tested enough to ship in Red Hat Enterprise Linux 7.


Upstart was tested enough to ship in RHEL6. It does not mean it works perfectly and has no bugs.

But systemd IS NOT only an init system.

So if systemd 208 (until 213) by default saves core files in the journal, and your core file is bigger than what the systemd devs decided was appropriate in a .c file (around 768MB, IIRC), you lose it. That is _unacceptable_.

And while we (= the company I work for) still have not officially started our evaluation of the platform, if RHEL7(.0) does not have this bug fixed then I will be strongly against supporting it officially, since in case of a crash we would not be able to get the core file, or pass it up to the devs, for analysis.
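(Later systemd releases did make these limits configurable instead of hard-coding them; roughly, the knobs you would want live in coredump.conf - option names real, values illustrative:)

    # /etc/systemd/coredump.conf
    [Coredump]
    Storage=external     # write cores to /var/lib/systemd/coredump, not the journal
    ProcessSizeMax=2G    # don't silently drop large cores
    ExternalSizeMax=2G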


> So if systemd 208 (until 213) by default saves core files in the journal, and your core file is bigger than what the systemd devs decided was appropriate in a .c file (around 768MB, IIRC), you lose it. That is _unacceptable_.

Is that not, however, a simple fix? There might be a case of death by a million cuts, but that's true of any new software that replaces existing software.


RH still offers support for RHEL6, and will do so for years to come. It will be interesting to see how quickly 7 gets adopted outside of cloud instances.


They support much further back than that. Red Hat still offers support for RHEL4 and will do so until March 31, 2017. It was originally released in 2005.

They'll be supporting RHEL7 (with Systemd 208) in some form until 2027


Lennart Poettering works for Red Hat, btw.


So? That doesn't make systemd automatically stable enough to be used on Red Hat's flagship product.


This line of reasoning is drifting dangerously close to the conformist call of "nobody ever got fired for buying IBM" or in this case, RedHat. If systemd turns out to be a mistake in RHEL, it wouldn't be the first time something enterprisey did something wrong. We (Linux users) can't afford to let enterprise thinking, which is often more political than technical, dominate the community.


I sometimes wonder if the whole systemd push is a "gotcha" in the direction of Oracle and their RHEL clone.


I don't agree with the article that it is the 'end' of Linux, whatever that may mean, but the article argues that systemd is changing what Linux stood for - simplicity, never breaking userspace, keeping as many things in user space as practical, and keeping kernel-version dependencies on userspace to a bare minimum - and in addition is rewriting a lot of battle-hardened software like resolvd, and in doing so it will introduce instability and security issues.

I guess the point is end users will shy away from the short term mess and move to another system like FreeBSD.

While I don't know if SystemD is a vastly superior design and implementation to any of the already existing things and I am not sure how much of a stability/security/complexity concern it is, I think that the 'end' of Linux will not be due to SystemD - it has a lot of momentum going for it. It will take something bigger to derail Linux at this point.


It's important to note that the blog post doesn't mean, "Linux is done for," but rather, "Linux is going to be a different kind of system than it has been historically." Some people think this is a good thing, and some think it is not. EDIT: I want to emphasize that I am trying to write this comment neutrally, not endorsing either side, so while my biases may sneak in, my intention is to be fair to both sides. Both sides have arguments in their favour, and I'd hope people consider both before coming to a conclusion.

All Unix-based systems start by running a single process known as 'init' which is responsible for setting up the system, starting all other programs, and managing various services as they run. The change under contention is the widespread inclusion of a relatively new piece of software called systemd, which replaces the historically popular sysvinit, Ubuntu's alternative known as upstart, and various other competing systems. All of these are different approaches to building an init system.

In contrast to some of the other systems, systemd is written less in terms of traditional Unix-style tools (like pipes and plain text files for storage), choosing instead to build on newer and more elaborate communication interfaces and store in specially designed binary formats. Systemd consists of a large family of interrelated pieces of software, many of which are nominally optional but generally expect to be used together. One point of contention is whether systemd is "too large", as proponents argue that developing these pieces together will increase their quality, while detractors argue that this makes it too difficult to substitute components if necessary and that these pieces should not be part of the same conceptual package.
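The contrast is easiest to see with logging (the commands are real; the service is just an example). The traditional stack leaves plain text behind for any tool to read, while systemd's journald stores binary records that are queried through its own tool:

    # traditional syslog: plain text, any tool works
    grep sshd /var/log/auth.log

    # systemd journal: binary storage, queried via journalctl
    journalctl -u sshd.service --since "1 hour ago"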

Additionally, the architectural choices of a large, widely-used package integral to the functioning of a running system will influence the design and assumptions of other pieces of software and even Linux itself. Already, the Linux kernel has incorporated the newer communication systems used by systemd into the operating system itself (the mentioned KDBus), which many believe to be a sign that the inclusion of systemd will change the way that Linux operates and the way programs expect to interact with the kernel and with each other.

An important factor here, whether good or bad, is that these choices make Linux into a very different system than it has been historically, and very different than other Unix variants. Proponents argue that this is a step forward, as the facilities offered by Linux historically might not be appropriate tools for the current uses of Linux. Detractors disagree. Either way, the conclusion is that systemd is a change in the way the operating system is structured and used which has been a major point of contention in some communities.


I know we don't have access to the OS X code in its entirety, but could one argue that they actually did change how init works relative to FreeBSD? And perhaps that is one of the reasons the OS is stable? It's not a stretch to assume that the Linux folks are going for more control over init for specifically this reason.


They did change init, but it is not about stability; the main reason was support for hotplug and dynamically adding and removing services. You expect stuff to happen when you plug in a new device and so on, and that is integrated into "init" - so it is not so much an "init" as an ongoing service that deals with events as they happen.


I think he means, "the end of Linux as we know it", but that doesn't have the hyperbolic ring of the current title.


TL;DR: systemd is winning the developer mindshare. Things like GNOME are being built to require systemd, thus most things will need systemd.


Gnome needs a dbus api that currently only systemd supports, which is not quite the same thing.


With the irony that Poettering was the guy to pull the plug on the alternative. An alternative that could live on top of any init out there.


Lennart's view is that he did all he could to avoid breaking using GNOME with ConsoleKit: http://lwn.net/Articles/621182/


While at the same time giving every indication that consolekit was dead.

http://erickoegel.wordpress.com/2014/10/20/consolekit2/comme...


The mail from 2012 that is linked from your URL contains this paragraph; I wonder what happened to that plan?

"Ubuntu plans to take over maintainership (more precisely Martin Pitt from Canonical), to maintain it as long as they still need it, and will change the name while doing so."


I tried to find out some time back. All I could find was an empty ConsoleKit project in Martin Pitt's name over at Launchpad. The mailing list didn't seem to hold much more info until Poettering closed it because he didn't want spam...


We've already upgraded to RHEL7 here. People didn't know systemd too well, but it works (I'm the only guy who knew systemd; heck, people didn't even know of journald).

Now, I'm not saying that I like systemd in general - for the reasons the author explains, I don't really like it. Systemd could have been way better if it weren't for the political shit and the attempts to take control over distros, the kernel, etc.

That said, it has some features I do like. Likewise for kdbus. This is like Chrome vs Firefox: people would like to support Firefox better, but Chrome and Google apps do enough of what they want right now that they go with the solution they don't really approve of. I'll say it again: it works.


It has been said before, but it deserves mention, there are alternatives to Systemd out there, like OpenRC which has been around for a while, and other, newer projects like Uselessd.

Of course, whether those other options will be as well supported and developed as systemd remains to be seen.


I am very happy with my Gentoo system and OpenRC - it works extremely well. Modulo jokes about "burn-in test OS". :)


There have been at least ten or so sysvinit alternatives that existed before systemd. Some newer ones have appeared since then, the most promising of which is nosh.

That said, none of the sysvinit replacements back then (eINIT, initng, depinit, s6, perp, etc.) ever made it; they were all ignored in their time.


I am no expert on this but apparently when Gentoo developers tried integrating OpenRC they ran into too many bugs.

http://blogs.gnome.org/ovitters/2013/09/25/gnome-and-loginds...


> ...apparently when Gentoo developers tried integrating OpenRC they ran into too many bugs.

You are certainly no expert on this.

I've been using Gentoo since 1.4... back in 2003, maybe 2004. I vaguely remember when Stable Gentoo was switched to OpenRC, which happened in mid 2011. Unstable Gentoo (which I ran -and still run- on my laptop) switched to OpenRC much earlier [but that date I cannot remember].

What the first topic in the blog post that you linked to is really saying is:

"We Gnome developers still claim that recent Gnomes don't require systemd. We stand by this assertion. Recent Gnomes only require init and system management daemons that behave exactly like systemd in pretty much every aspect; they don't actually require you to have systemd installed.

It's a pity that developers don't want to spend time re-working their init and system management software to be systemd clones. If they did, then the world would finally understand why we Gnome developers continue to assert that Gnome doesn't require systemd."


I've always found this part particularly hilarious/misleading in retrospect:

"For one, in the last stages of GNOME 3.8.0 as release team we specifically approved some patches to allow Canonical to run logind without systemd. Secondly, the last official statement still stands, No hard compile time dep on systemd for “basic functionality”. This is a bit vague but think of session tracking as basic functionality."

It's now the position of the systemd developers that running logind without systemd was never supported, and that distros like Ubuntu should never have tried to do it. I believe they've now broken the ability to do so. It's one of the major reasons Ubuntu had to switch to systemd.

(Oh, and for some context, http://www.freedesktop.org/wiki/Software/systemd/logind/ is the logind DBUS API. Good luck reimplementing that!)
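To get a sense of how big that API surface is, you can dump it from a live system with busctl (a real systemd tool; shown here purely as illustration):

    # list every interface, method, property and signal logind exposes
    busctl introspect org.freedesktop.login1 /org/freedesktop/login1

A drop-in replacement has to answer all of those calls with compatible semantics.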


That's the fundamental problem with DBus-exposed interfaces: they're APIs that get marketed as abstractions. APIs don't leave much room for different interpretations, meaning you'll end up rewriting the component you're trying to avoid using in the first place.


I'm not sure what you're saying here. If you want to avoid using a particular piece of software, but need to adhere to its API, you're going to either reimplement parts of it, or use someone else's reimplementation. There's really no way around that. It's a fundamental problem shared by all software.


Exactly. I'm (obliquely) affirming your argument that the commonly thrown-around excuse of "you don't need systemd, you just need something that implements its DBus interfaces" is a distinction without a difference.


Is a system running Linux + Wine no different than Windows, just because it implements the same Win32 API? Is FreeBSD the same as Linux, since they both implement POSIX?

There are many ways to implement the same APIs.


Wine is not reliable enough for mission-critical software. Nor is running a nominally POSIX-compliant program that hasn't been tested on FreeBSD. If a project claims not to hard-depend on systemd, I would not trust that unless it's actually, y'know, tested on something that isn't systemd.


Ah. I largely agree that it is a distinction without a difference.

However, I would argue that it's far easier for an end-user to drop in a replacement for a DBUS API implementation than it is to drop in a replacement for a C/C++ API implementation. So... there's that, I guess. :/


from same article:

"Apparently GDM 3.8 assumes that an init system will also clean up any processes it started. This is what systemd does, but OpenRC didn’t support that. Which means that GDM under OpenRC would leave lingering processes around, making it impossible to restart/shutdown GDM properly. The Gentoo GNOME packagers had to add this ability to OpenRC themselves. Then there were various other small little bugs, details which I already forgot and cannot be bothered to read the IRC logs. "

So apparently there are bugs when using OpenRC with Gnome. That is not to say, Gnome requires systemd (I did not make that claim).
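For context on what "clean up any processes it started" means in systemd terms: each service runs in its own cgroup, and systemd's default kill mode sweeps the whole group when the unit stops. A minimal sketch (KillMode= is a real systemd directive; the service itself is hypothetical):

    [Service]
    ExecStart=/usr/libexec/example-session-daemon
    # control-group is the default: on stop, kill every process left
    # in the unit's cgroup, not just the main PID
    KillMode=control-group

An init system without cgroup tracking has no equivalent way to find the stragglers, which is the gap the Gentoo packagers had to fill in OpenRC.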


Unless you're ovitters, I never claimed that you claimed that Gnome requires systemd. You did -however- say that "...when Gentoo developers tried integrating OpenRC they ran into too many bugs."

Your statement is not true. Stable Gentoo had its default init system switched to OpenRC in mid 2011. OpenRC has been great for Gentoo.

"Apparently GDM 3.8 assumes that an init system will also clean up any processes it started."

AFAIK, the only Linux init system that behaves in this way is systemd. Expecting this behavior means that one expects one's init system to behave like systemd. It is disingenuous to claim that one's software doesn't require the use of systemd when it relies on process management -and other- behavior that can only be found in systemd.

To make an analogy: I write software that makes extensive use of cgroups. If I said:

"My software doesn't require Linux. We could run on *BSD if they'd just implement cgroups, and Windows if they'd just implement POSIX and cgroups. It's a pity that they don't make this effort, and I don't have the bandwidth to help them out, but my software doesn't require you to use Linux to run it."

you would likely accuse me of sophistry; and with good reason!


I find myself reminded of a poster at a school toilet: "your mother does not work here, clean up after yourself".


I'm afraid that I have no idea what you're trying to say. Would you elaborate?


Gnome apparently assumes someone else is supposed to clean up after them, and so leave processes behind.

In essence, systemd has become Gnome's mother...


As anyone who's used a language at least as modern as C will tell you, it's awfully nice when a runtime system will help you clean up when your task terminates.

My beef with the article that the GGP links to is not that GDM now requires such a system (strategic laziness is a virtue!), but that the author refuses to admit that

1) There currently exists only one such runtime system that provides the behavior that GDM relies on.

2) There is absolutely no guarantee that the GDM folks won't come to rely on more systemd implementation detail in the future. Indeed, given the way Gnome development seems to happen, it's almost a certainty that GDM will depend on more and more systemd implementation detail in an entirely ad-hoc manner as time goes on.


When one is unable to provide a cogent response, a parable makes a sufficient substitute.


Two distros that don't use systemd:

http://www.voidlinux.eu/

http://crux.nu/

Void Linux is a rolling release, has binary packages, uses a very good package manager (even better than pacman), and the community is small but very friendly.

Crux is very minimal, and you have to compile your packages.

I use Void Linux, and am very happy with it.


Thanks for that information.

I am using Arch at the moment but I spent the last 45 minutes trying to fix a network problem which involved fighting with journalctl to see what was actually happening.

Void Linux looks like a great alternative.


I assure you Void is a great alternative to Arch. I was with Arch until systemd, and Void feels like home, only smaller.


Wow. The UNIX-haters are out in force. Apparently proven, real-world tested code is no longer a relevant feature to a lot of people. Well, "Those who do not understand Unix are condemned to reinvent it, poorly."

But enough about the technical distractions. Why is systemd so important? It's certainly not technical quality or design (even if some of the ideas are useful, the implementation is junk). There is a far more important reason, that some of you have noticed parts of, at least tangentially.

From this very thread: "systemd makes a lot of sense for embedded systems". Yes, it does, and it's all because of this: "kdbus: efficient IPC". The thing is, kdbus/dbus isn't really that great for a lot of things - you have to bounce through the kernel at a minimum, and there are encode/decode steps that add some overhead. It might be useful for some types of IPC, but it is replacing what should be a fast and simple library call in many places.

Now, here's where a lot of you are going to start calling me crazy or "obviously wrong", both without actually addressing the key claim, which is probably better explained by stevel over in the Gentoo forums[1]. I encourage reading that post.

The goal with all of this is not technology related at all: the systemd takeover is an attempt to separate Linux and many userspace tools from the GPL, so that software can be used under the LGPL terms instead.

What is the big difference between the GPL and the LGPL? Linkage. Linking to a GPL library requires you to follow certain requirements, while the LGPL specifically allows that usage. (k)dbus provides the workaround, by replacing what would be a normal function call into a library with IPC. It's slower, but so what, computers are way faster than needed. In the end, while you can still choose to release your code as GPL, if consumers have to use an IPC mechanism to do anything useful, the license requirements that actually apply end up being more like the LGPL's.

Well, if I wanted to release under the LGPL, I would. What I'm not going to do is undermine my choice of license just because a bunch of embedded developers (and others) want to use what were traditionally GPL projects without being bound by the copyleft requirements. If this were proprietary software, you would call that kind of behavior "stealing".

(seriously, the linked comment below does a much better job of explaining this)

[1] http://forums.gentoo.org/viewtopic-p-7645524.html#7645524
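To make the "library call becomes IPC" claim concrete: under D-Bus, what could be an in-process function call turns into a serialized message round trip through the bus (and, with kdbus, through the kernel). For example, querying systemd's manager from the shell (busctl and ListUnits are real; this illustrates the mechanism, not a measurement of the overhead):

    busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
        org.freedesktop.systemd1.Manager ListUnits

Each such call is marshalled, dispatched, and unmarshalled rather than linked and called directly.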


Where do you get UNIX-haters? I read systemd haters which begets Linux haters but UNIX? No.

Besides, Linux is no longer a UNIX-like system. Linux is now only Linux unto itself with UNIX-similarities.


Just look at this very thread - there are numerous posts trashing the "old ways" of UNIX.

It has been a common thread throughout the systemd mess. Lennart himself is very strongly outspoken against any use of shell scripting (not just in init).


Sigh. Obligatory reference to http://0pointer.de/blog/projects/the-biggest-myths.html:

"Myth: systemd is incompatible with shell scripts. This is entirely bogus. We just don't use them for the boot process, because we believe they aren't the best tool for that specific purpose, but that doesn't mean systemd was incompatible with them. You can easily run shell scripts as systemd services, heck, you can run scripts written in any language as systemd services, systemd doesn't care the slightest bit what's inside your executable. Moreover, we heavily use shell scripts for our own purposes, for installing, building, testing systemd. And you can stick your scripts in the early boot process, use them for normal services, you can run them at latest shutdown, there are practically no limits."


Note that your quote doesn't contradict pdkl95's statement. Nowhere did they say that systemd prevents you from starting shell scripts, but that the developers have a (strong?) revulsion towards them.


The real reason people are pissed about systemd is that it's intentionally designed to take over the entire system. Conveniently, it is also maintained by Red Hat. So yes, it is a hostile takeover of Linux.


Are there any reliable VPS providers that offer FreeBSD?


There are quite a few. Some decent ones I have heard good things about:

* rootbsd.net

* ARP Networks

* Vultr

There are quite a few cheap-ish dedicated server vendors out there that can either be ordered with FreeBSD or provide some means to install it yourself.


I have had good luck with arpnetworks. So far only noticed one outage in three years of use. Have had good network and CPU performance. They don't do tech support for things inside your operating system, but they are always helpful when I ask them to do things on their end.


I would recommend http://www.ramnode.com/

Reliable and cheap, even more so if you take 5 minutes to google for a discount coupon.


tilaa.com for EU servers, I like them a lot- IRC support and a general "no bullshit" feel.


I recommend Vultr. I run OpenBSD, not FreeBSD, but it is just a standard KVM VM you can run whatever you want on, and unlike most BSD-offering VPS providers, they aren't double the price of DO and friends.


The description of systemd - monolithic, taking on function of other things - sounds like the X server. Look what's happening now with Wayland, and look at how hard it's been to get there (hint Wayland is only possible as a result of all the driver architecture changes over the last 5-8 years). It's hard to undo this kind of thing once it's entrenched, but it can be done.


Look how many platforms X supports, how many paradigm shifts it has survived. systemd already has a less flexible architecture by design than that ancient system, so in the future it'll be difficult to untangle.


Does systemd increase the security of the OS or reduce it? For example, does it allow currently-separated processes to dip into each others' memory space? I would appreciate a birds-eye view from someone familiar with the security ramifications of systemd.

Would not the incorporation of many loosely-coupled but individually secure mechanisms into a single monolithic mechanism be useful to an entity whose purpose was to monitor communications, view/modify systems unbeknownst to sysadmins and users, etc.? Yes, I'm talking about the NSA et al. I reference the following which also brings up Red Hat's control of Linux:

"Julian Assange: Debian Is Owned By The NSA"

http://igurublog.wordpress.com/2014/04/08/julian-assange-deb...


He rants about systemd, then talks about liking CoreOS, which uses systemd for everything. He is contradicting himself, but I'm genuinely not sure of why.


Read it again:

Author is saying that CoreOS is a good solution because you use it with heavily isolated containers. Thus, any use of systemd is unable to screw up hosted applications. For that use case, it makes sense, whereas in more general use, systemd wants to get all up in everything.


He responds to that here as well: http://blog.lusis.org/blog/2014/11/21/a-few-things/


I think that SystemD is an attempt to put Linux on par with Mac OSX and Windows 8.X, and attempts to do that haven't worked in the past.

Like Canonical changing Ubuntu to use Unity and Mir so that it was easier to use on tablets and modern PCs. It just doesn't seem to work, and it drove me to Lubuntu, where I use LXDE with an XP-like Start Menu UI.

Ubuntu doesn't use SystemD yet, but I got a feeling it will.

I am downloading Fedora 20 because it has SystemD in it, so I get some idea what it is like.

But I think every Linux company wants to become another Apple for some reason. They saw how *BSD Unix went into making Mac OSX, and they want to try and copy that with their Linux distro and right now this SystemD seems like a path to that.

It is like a change from free and open source to commercial Linux. We all saw how Lindows/Linspire tried that and failed.

I might go to ReactOS, HaikuOS, or AROS when one of them is finished to the point that it's ready for prime time. All they need to do is port Apache2, PHP, MySQL and other stuff to those operating systems so they can be used in VPS hosts as an alternative to Linux with SystemD, if it ever breaks things.


Which Ubuntu release did you find to be using Mir as a default?


I think it was on a tablet, not on the desktop by default yet.

It is kind of confusing.


New to this entire brouhaha, but I'm concerned because my entire production stack runs on Linux. At some point devops is not going to be able to do things the old way.

Is there a link where one can read about the rationale of why systemd was needed? Perhaps by the person who started off the initial project?


Is this a joke? - "my entire production..." you're a devops engineer? - "link where one can read about the rationale..." with good grammar? - "why systemd was needed..." OK, you lost me there - "the person who started off the initial project?" and you've been hiding under a rock for the past 10 years??

http://0pointer.net/blog/ - is his blog

Are you seriously trying to tell me that someone is going to have to explain what a "container" is by comparing it with chroots? :D How have you been running your production stack all these years if you didn't have an internet connection under your rock :)


As a rule of thumb, it's good to distrust products when you know that the politics behind them have been loud enough to be heard widely. But the sharp razor which will decide if systemd is better is simplicity. Those who have switched: is systemd simpler than sysvinit?


Simplicity is not necessarily a good measure of suitability. See this: http://mjg59.dreamwidth.org/2414.html

There are some aspects of systemd that are simpler than sysvinit. The mess of double-forking and PID files goes away, and process management is much more reliable. Setting up containers for services is much simpler. Unit files are generally much simpler than a given init script.

I think more useful adages ("boned wisdom for weak teeth" - AB) would be, "Everything should be as simple as possible, but no simpler," and "Simple things should be simple, complex things should be possible." There is an enormous amount of complexity inherent in the problem of initializing and monitoring a modern OS. If your solution to this is actually simple, it is probably wrong or incomplete. How you manage that complexity and what your abstraction layers are are the vital questions.
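To illustrate the unit-file point: a daemon that would need a long init script full of start/stop/status cases and PID-file bookkeeping reduces to something like this (the directives are real; the daemon and its flag are hypothetical):

    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    # run in the foreground; systemd supervises the process directly,
    # so no double-fork and no PID file
    ExecStart=/usr/sbin/exampled --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target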


Yes, declarative unit files are simpler than Turing complete shell scripts.


I used Gentoo for 10 years, and I can't figure out how to get a Systemd configuration working with XFCE. Indeed, most of the existing documentation has greatly slowed me down.


FWIW, my friend has run Ubuntu for 10 years and he cannot understand Gentoo for the life of him. He thinks it is absurd that people want to keep recompiling stuff.


People don't want to recompile stuff, they want to configure the software to their liking. Basically it's very enjoyable if you're a developer.

And FWIW with Gentoo/Portage you can create and install binary packages.


That's pretty sad, I was able to get it up-and-running on my first Gentoo installation.

I guess time doesn't equate to knowledge, eh?


Haha, well, it's running but not quite right. If it saves me any face, it worked out of the box in KDE!


https://news.ycombinator.com/item?id=8641839 is a much needed response to this discussion.


It seems like most systemd problems boil down to: shit this will take more than half an hour to understand, I'm a lazy admin and don't feel like learning something new.


The author responds to a number of the comments here: http://blog.lusis.org/blog/2014/11/21/a-few-things/


You know the blog is just clickbait when it capitalizes the name of the project wrong.

(on the wiki, the big Spelling header could not be more explicit)


> Freebsd has jails which is a much more baked technology and implementation than LXC

Disagree!

1. FreeBSD has nothing like cgroups or namespaces. You can't apply cpu or memory limits to a whole jail, only individual processes in that jail.

2. It is early days for virtual network cards and ethernet bridging in jails: you have to recompile the kernel to add VIMAGE.


You've been able to apply cpu and memory limits to entire jails since 10.0 was released through the rctl mechanism.
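For anyone curious, rctl rules can address a whole jail by name. A sketch assuming a hypothetical jail called www (the rctl rule syntax is real FreeBSD 10 usage):

    # cap the jail's total memory use at 1 GB
    rctl -a jail:www:memoryuse:deny=1g
    # limit the jail to roughly 50% of one CPU
    rctl -a jail:www:pcpu:deny=50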


Thanks. I don't think it changes my point - various attempts have been made to implement this feature since FreeBSD 6, almost 9 years ago; for some reason they were incomplete or ignored.

FreeBSD 10 was released only this year. For reference, Heroku began in 2007, using resource limits with LXC... a whopping 7 years ago.


Everything I read about SystemD is negative. Negative on the technology, negative on the people who created it. Nothing positive.

How is it that SystemD is about to dominate the market? Who is driving SystemD adoption, and why?


It's so terrible (literally the devil incarnate, sacrifice your offspring now) that it's already been adopted by:

- Arch Linux

- Fedora/RHEL

- openSUSE/SLES

- Mageia

- NixOS

- CoreOS

- Sabayon

- Debian and Ubuntu in the near future

> Everything I read about SystemD is negative. Negative on the technology, negative on the people who created it. Nothing positive.

There's a vocal group that seems to think it's some Red Hat conspiracy to destroy Linux and take over the world. You don't hear the positives because people either don't care about init systems (as long as their distro continues to work) or are happy with it, and happy programmers are generally silent on the issue.

The fact that the article says that Solaris is the 'best of breed stack right now' and also suggests CoreOS (apparently not knowing they use Systemd) should speak volumes...


Well, in fairness, I did have a RH chap tell me to my face several weeks ago that Linux needs to replicate the Windows registry model and stop using the /etc text file approach. So I have a certain sympathy for the idea of Red Hat conspiracy (what is a company anyway but a conspiracy to produce work & products for hire?).


ROFLMAO... more and more applications on Windows are moving AWAY from the registry. Other than install/uninstall notations created by the installer, very few recent apps use the registry. Most use the global ProgramData or the user's AppData paths for configuration. It simply works.

I don't hate the registry... I'm just not in a hurry to see Linux adopt it. What's so bad about /etc/app/config or ~/.config/app/config? It works in pretty much every platform (though the paths may be slightly different). Windows does have both roaming and local data directories, which can be pretty nice (if you don't bloat them).


I was rather unhappy when Samba took this approach as well, ages ago (mimicking the registry as a configuration store, not just as a hack to talk to Windows systems).


Only "reason" i could think of for going registry is so that it could be wrapped in an ACL.

Yep, the Windows registry has its own ACLs. I was bitten by that when trying (naively) to move an account's files between Windows installs: I logged in and found myself back at default settings, with changes not applying.

I wonder if this shift in mentality has something to do with the M-I contracts.


One thing that needs further investigation is the degree to which distros adopted systemd because they'd rather not maintain patches for udev, logind, KDE, GNOME, Wayland, etc. to keep them decoupled.

I have a hunch that for !RedHat, the motivation to switch comes just as much from not having to fight upstream (if not more so) as from systemd's relative merits. After all, there are plenty of different init systems, but only one of them is coupled to the rest of userspace to a significant degree.

Also, I wouldn't write off Solaris zones :) They're pretty powerful--in some ways, more so than LXC is today (syscall translation comes to mind).


Solaris does not use systemd. It uses SMF. I'll leave it to others to debate their individual merits, but I personally like the Solaris system a lot.


SMF is pretty obviously the inspiration for a lot of things in systemd, though. Whether one or the other is better is up for debate, but they're both based on the same idea.


Conspiracy: a group of people working in secret to commit a crime.

In this case, we have Red Hat, Lennart, Kay, and freedesktop.org working together to force an LGPL RPC loophole to circumvent the GPL. Sounds like an appropriate term.


> The fact that the article says that Solaris is the 'best of breed stack right now'

That well-known paragon of SysV systems 8)


I'm a bit confused, was that sarcasm? Doesn't Solaris use SMF and not SysV?


Yep, I was agreeing through the medium of sarcasm. The latest (11) version of Solaris is also XML-ifying all the things; it's starting (in terms of XML config manipulated from tools) to look a lot more like AIX than SunOS.


There's SysV Unix, and SysV init.

Solaris is based off of Unix System V, as are AIX and HPUX.

You can be based off SysV without utilizing the init system from it.


It hinges upon the fact that people are more likely to be vocal when they don't like something than when they do. Consider online reviews: are you more likely to review a product when it's worked fine as you expected or when it died a month after you got it?

systemd has both good aspects (e.g. faster boot times and removal of the nasty nest of shell scripts) and bad ones (it's incredibly monolithic). There are arguments on both sides of the fence. It's just that you're more likely to hear from people who dislike something than from people who like it. (And at this point most of the systemd proponents have given up talking about it because of the incredibly vocal and relentless opposition.)


> (e.g. faster boot times and removal of the nasty nest of shell scripts)

OpenRC does this quite nicely. Gentoo has been using it as the default init since at least 2011.


"removal of the nasty nest of shell scripts"

What one person finds nasty another may find sexy.


Then feel free to use a shell-based init. Meanwhile, there is a strong consensus among Linux distro maintainers that shell scripts were not sexy and systemd unit files were, and thus everyone has switched or is switching.


"Everyone" has not switched nor is "everyone" switching.

Most distros are yes. Some are not. Personally I like shell scripts and plan to continue using them as long as feasible. They work, and when/if they don't, I can usually find out why.

"Strong consensus" might be good argument for some people. I personally find it a fallacious appeal to popularity.


Additionally, I would argue that distros are switching because of Gnome, not because they like systemd so much.


I'd like to point out that logind can be used without systemd using systemd-shim. So you can use fully featured GNOME without systemd.


Yea it's called a fetish


> How is it that SystemD is about to dominate the market? Who is driving SystemD adoption, and why?

Red Hat, Gnome and Poettering. While no one can know exactly what they're thinking, it's in Red Hat's (and other commercial Linux vendors') interest for more software to be tightly coupled to Linux and not run on other OSes. And Gnome is mostly developed by people who work for these commercial Linux vendors (in notable contrast to KDE, which has a wider spread of contributors, including more hobbyists and a number who are supported by government grants, particularly in Europe - which leads to a different set of incentives).

My impression is that Gnome and associated software is, by and large, the wedge which is driving adoption in other distributions like Debian; previous "faster init" solutions were often available as options but never pushed to default because it wasn't necessary; users who wanted a different init could install one, but all their software would work fine either way.


Of course the good people of KDE will take a stand against the evil corporations and would never ever consider using that vile systemd-logind abomination!

http://blog.martin-graesslin.com/blog/2014/10/libinput-integ...


Will KDE support systemd on platforms where it makes sense? Of course. Will KDE drop support for non-systemd systems the way Gnome has? I find that virtually unimaginable.


Linux and systemd work, and work well. The people who use it like it, and we don't have many complaints, besides the fact that logs are no longer text files and we have to use journalctl. The strong critics usually go the route of "It remains to be seen if this is a good or bad thing."

systemd is a replacement for the worst part of Linux: init. It was confusing, just barely workable, and had been needing replacement for 5 or more years. There were SEVERAL solutions brought out, and, well, more developers like systemd than the rest.

As a user of Arch Linux, OpenSUSE and Fedora, I've found it rock solid, and I have been able to do things at my knowledge level much more consistently and at a lower level.


You don't have to use journald. You can forward from journald to syslog (e.g. rsyslog), and that works well; it's easy to do. You can even disable journald's own logging and just have rsyslog (or the logger of your choice).

At my place we kept both journald and rsyslog, with rsyslog redirecting logs to the network. I.e., our /var/log is relatively empty (rsyslog isn't set up to log to disk), so we look at logs on the machine via journald if needed (rarely, since we look at the central log aggregator instead).
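For reference, the relevant knobs live in /etc/systemd/journald.conf. A minimal sketch of the setup described above (Storage= and ForwardToSyslog= are real journald options):

    [Journal]
    # keep the journal in RAM only, so /var/log stays mostly empty
    Storage=volatile
    # hand every message to the local syslog daemon (e.g. rsyslog)
    ForwardToSyslog=yes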


> you dont have to use journald. you can send back from journald to syslog

That's a contradiction. You're running journald even if it's dead weight.


And if journald goes belly up, no logs for syslog...


As if syslog cannot go belly up as well.


Let's assume for the sake of argument that journald (a new codebase) is about as stable as syslog (implementations of which are several years old and well tested). Layering one on top of the other, you have now doubled the chance of one of them going belly up.


When have you ever seen syslog go belly up?


You are clearly too young to remember. But don't worry, journald will get there.


That's the entire point. Syslog has years of stability and reliability. You don't just throw that away, especially for such an important role.


Can you guarantee it is 100% failure-proof on all the UNIX variants that have one running?


And if syslog goes belly up no logs. What's your point?


So now instead of syslog we have journald AND syslog - we only doubled the risk of something going boom and taking away our ability to log. I don't think that's nice.


You used to have one component that needed to fail before you started losing logs. Now you have two.


Technically, it's really just a socket dup.


Whether to use sane/standard logging should be the first thing an installer asks before installing a systemd-based OS. In fact, just make separate .isos for this.

(And I like systemd, at least in theory - I'm running Fedora 20.)


openSUSE and Debian both continue to log to their respective syslog implementations.


> How is it that SystemD is about to dominate the market?

Religiously passionate salesmen, selling an upgrade to your car stereo, who conveniently forget to mention that you also have to replace your whole car.

> Who is driving SystemD adoption,

Kay Sievers, Harald Hoyer, Daniel Mack, Tom Gundersen, David Herrmann, and its creator, Lennart Poettering [1].

> and why?

One guy wanted to make booting his desktop faster. [2]

--

Later it was decided that systemd would be "a big opportunity for Linux standardization. Since it standardizes many interfaces of the system that previously have been differing on every distribution, on every implementation, adopting it helps to work against the balkanization of the Linux interfaces. Choosing systemd means redefining more closely what the Linux platform is about." [3] Basically, they want to change "how we put together Linux systems." [4]

[1] https://en.wikipedia.org/wiki/Lennart_Poettering

[2] http://0pointer.de/blog/projects/systemd.html

[3] http://0pointer.de/blog/projects/why.html

[4] http://0pointer.net/blog/revisiting-how-we-put-together-linu...


People like me. I've been waiting for something like systemd to come to Linux since I started using it (back in 2000).

Tired of having to write init scripts for every distro, writing forking daemons, and dealing with the too-simple syslog.


> Everything I read about SystemD is negative

It may be a bias in action. People who like it, or who really don't care much about it (I dislike journalctl, but apart from that I'm mostly OK with it), don't waste time writing about how little changed for them and how things continue to work as expected.


There does appear to be a lot of negativity around systemd, such as people giving out about binary logging. I'm sure there's got to be a way to enable text logging along with binary logging.

This lad seems really enthusiastic about systemd logging and systemctl:

http://0pointer.de/blog/projects/journalctl.html

There are a lot of interesting features, but personally I'd prefer to have both binary and text logging. Text logging matters in cases where a system goes tits up and you may only have access to basic tools such as grep, vi, etc.
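For reference, a few of the journal queries that post demonstrates (all real journalctl flags; nginx.service is just an example unit):

    journalctl -u nginx.service       # logs for a single unit
    journalctl -f                     # follow, like tail -f
    journalctl -p err --since today   # today's error-priority messages

And the both-formats wish is doable: setting ForwardToSyslog=yes in journald.conf keeps plain-text syslog files alongside the binary journal.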


Yes, the author of systemd/systemctl is very enthusiastic about his projects. Why did you point that out?


Why is everything on the news always negative? Is everyone a crime peddling pedophile rapist murderer drug addict? My point is that negative drama sells headlines.


I didn't really like systemd initially, but now I'm more in the whatever-it's-fine camp.

Companies like Red Hat clearly drive systemd adoption because it solves problems for their customers - that is, large enterprise customers. Many of those who dislike systemd have simpler requirements, so systemd becomes something new they need to learn without providing them any tangible benefits. They didn't see or care about many of the problems or features that systemd addresses.

Of course you're going to be negative if a new system, one you didn't need, is forced upon you. If you're happy with it, you're most likely a large company that doesn't blog about systemd, especially when it's something that just works.

At least that's my take on it.


It's the same way Dropbox was "trivial", the iPad "useless", and so on. Once the non-devs adopt it, you will see the positive side.


It's necessary to understand that desktop linux has been total shit always, and this has caused incredible pain over the years to people who, against all reason, continue to try to use it. It turns out the bazaar can't create a polished user-friendly product, period.

Nevertheless, Red Hat and Canonical and others continue to try to foster desktop linux, on the now-obvious misguided theory that (a) the desktop matters and (b) they can take share from windows and osx installations.

As a result, there are hundreds of terrible paid desktop programmers, and thousands of their users, who are dying every day because, e.g., the last several iterations of wireless networking scripts were written by morons, their graphics libraries are comically bad, etc., etc.

Into this charged mix of total incompetence and frustration comes a small group of mediocre coders with hubris, backing, and political nous. And what they are promising to the long-abused desktop users sounds amazing to them, like wizardly magic, and literally, and almost entirely, boils down to this: freedom from having to deal with the shitty wireless networking script system. No joke. That is the fundamental issue at play, and the driver behind "faster boot times", "socket activation", and all of the other marketing points. If that idiot who wrote the wireless provisioning scripts had been competent, this entire mess would never have happened.

So the desktop linux users and desktop linux developers, who again have been living in a tiny cage being pooped on every day by their own regrettable choices, reach for this solution with the religious fervor of a drowning victim. And since desktop linux developers tend to be the C team, they don't care about good architecture, they just want things to work for them and their very specific desktop linux use case, which objectively and axiomatically, again, has not worked for decades and will never work.

So they band together in unison, following the exciting, energetic, charismatic and opinionated lead developer. And obviously, Red Hat is delighted, because that's their employee, and maybe they get more market share. And they pack Debian with developers, because there's a ton of horrible little desktop linux apps that grant them votes, and shout down the opposition. And they set up an IRC channel, and they brigade every forum with the same nonsensical attacks on the very architecture that made it possible for the internet to happen in the first place. Including, obviously, this very thread.

In actuality, this group of users is vanishingly tiny compared with the linux installed base, which is mostly phones and servers, where the real action is, and which don't need this halfassed dbus nonsense or the accreting blob of carelessly rewritten known-good-daemons. The desktop linux people are chasing a dead target with a terrible design and religious fervor substituted for technical ability. It will be intriguing to watch it play out. FreeBSD is about to get a big positive jolt of people that know what 'good' is.


Red Hat has more influence than people believed. systemd is entirely driven by Red Hat, which controls a bunch of other software and said "we're breaking all the software we have so it only works with systemd." Distro maintainers would have an immense amount of work on their hands forking everything to try to avoid systemd.


My impression is that a very small number of people (core maintainers) actually drive adoption of technologies - persuade them & you have altered the cruise ship.

For me, I plan to stay as far away from systemd as I can; it's an abomination of software design.


I've been saying some of these things since systemd came out and typically get downvoted into oblivion, yet here this article makes HN's front page.

His implication about switching to FreeBSD is a good one and I notice a decent influx of Linux admins switching to FreeBSD showing up on every forum I visit.


For me it seems that most people complaining/crying out loud come from a system admin perspective.

For example, the author of this blog article does not seem to have contributed a lot of code to any project and is more "just" an admin and not a dev.

To me it looks like they don't like new stuff, love the Linux world as they knew it, and don't want things to get easier or even change.


>for me it seems that most people complaining/crying out loud come from a system admin perspective.

Sysadmins complaining about it is pretty important, as at the end of the day they are the ones who are maintaining the systems.

Personally, I have a strong dislike for systemd. I don't believe it's the right replacement for sysvinit; however, it's not going away, so we'll have to try to work around its warts. Also, it's currently buggy as hell, so I will use CentOS 6/Ubuntu 14.04 for a few more years and see if the many problems are sorted out by then.


That's exactly as intended. People who want to help out systemd will use the latest software, report bugs and fix them, because, you know, that's how free software works.

People who don't want to help out use the most recent stable release and have nothing to worry about.


TBH with the current direction systemd is going in I'll probably end up using one of the BSDs. However, I doubt I'd be able to use a *BSD in a production environment as I am a contractor and all of my clients only want Linux.


I don't think they would complain if things became easier. All I hear about is problems with systemd, and having to solve problems (and learn how systemd and all its components interact so you can debug it) is not easier.


Yeah, they all spend like 20 mins and come out crying without spending at least a week understanding it.

Which software stack do you know that you instantly get in 20 mins? I for one cannot understand sysvinit or upstart in 20 mins (i.e. well enough to develop and debug). How do you debug sysvinit anyway? Echos I place in the code do not appear on the screen. Can I claim "Oh shit, this is so broken..."?


You can start the daemon manually to check why it fails, or check logs, or put echo in the code like you say, or run the script with -x; there are many possibilities. sysvinit is quite easy to get, because you just apply what you already know (running shell scripts, running shell scripts with -x, reading (log) files, etc.), and there isn't a layer in front of you that you don't know how to debug or trace.

And yes, LSB init scripts are broken under systemd, exactly because of what you describe (placed echos don't show): it redirects the execution through systemd and stores the output in journald. Try writing (or debugging) an init script on a non-systemd system and see how much easier it is.

systemd also tries to act "smart" and remembers the last state a service was in, which makes developing LSB init scripts on a systemd system... complicated. If an init script exits with success (perhaps because the daemon wasn't configured yet), then systemd will remember that, and the next time you issue a 'start' it'll be a no-op that claims success. Which leads to countless hours wasted until you figure out what really happened: systemd never even ran your script again. So then you run 'restart' on the init script and all is well again.
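To make the failure mode concrete, a sketch of the sequence described above (the systemctl commands are real; the 'foo' service is hypothetical):

    /etc/init.d/foo start     # exits 0 even though foo wasn't configured yet
    systemctl start foo       # no-op: systemd remembers foo as already active
    systemctl status foo      # reports active even though nothing is running
    systemctl restart foo     # forces a real stop/start cycle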


Complaining and crying from a system admin perspective is pretty significant.


Agree. However, admins and devs often have competing interests. Admins want stability and ease of maintenance. Devs want features, agility, and speed of development. Admins often disregard or downplay technical debt in light of "has been working in production for X years". Devs often don't care enough about backwards compatibility.



