Systemd redux: The end of Linux (lusis.org)
235 points by bcantrill on Nov 21, 2014 | 446 comments



Perhaps this is a controversial idea, but is this not just someone finally taking the tried and true Open Source "advice" to heart?

That is, every time I've reported that something is broken, wonky, doesn't work reliably, et cetera, I've been told "Submit a patch", "Write some code", or worse, "Implement it yourself."

Someone finally got fed up with the haphazard state of affairs in Linux-land. Fed up with the fragmented and scattered places you have to look for error logs. Fed up with the many files you have to edit to configure the network correctly (different on every major distribution). Fed up with the half dozen ways to configure X, where X is a function common to every modern operating system.

It seems Lennart has taken the advice and followed through, and distribution maintainers liked it. They liked the idea that someone was taking all this complicated work - this dirty, boring to write and maintain code - and making their lives easier. Why else would nearly every distribution be on board?

Systemd is offering a more compelling solution than anyone else, and if you don't like it, well, you should submit a patch, write some code, or implement it yourself.


I had no opinion on systemd until yesterday. In fact I had a glance or two at the code, and it's pretty clean, and I liked the rough objectives laid out.

I installed CentOS 7 last night on a machine that's replacing a CentOS 6 box, and was poked in the face with timedatectl and dbus problems for an entire hour, some of which were intermittent. Debugging these issues is a horrific pain; I lost 4 hours on it. I've never lost that much time on a system function before. This is not what I expected, and there is no way I could possibly introduce that to our production environment.

I think that might be why people are slightly sensitive to it.

Yes, you're exactly right, but replacing something with something less stable, more complicated and more difficult to debug isn't rational or good engineering. I'm sure many people will be fed up with systemd much quicker than with what was already there.

Not impressed with a community which pushes this as stable, quality software. Voting with my feet: FreeBSD is being trialled instead. WhatsApp throwing a million dollars at it draws a lot of valuable attention and puts it on businesses' radar.

Choice is a valuable aspect of open source too...


You know this is funny, I remember reading comments EXACTLY like this about 3-4 years ago but with pulseaudio in place of systemd.

Pulseaudio was Lennart's previous project. It broke everything in linux sound for a while, everybody moaned and hated it and said it was the worst thing since the crucifixion of Christ.

Yet, name one problem you had with sound on linux in the past year? There are very few. Pulseaudio now just works(tm) and is an unseen, unheard-of part of the plumbing.

If you remember what it was like messing with ALSA and (shudders) OSS before pulseaudio came along you will agree that the current state of affairs is a million miles better. It used to be really difficult to get more than one application to be able to play sound at a time. I remember compiling sound drivers from source just to get them working. Configuring ALSA config files to get surround sound working was practically a black art. Creating manual scripts that unmute the sound card on every boot because the driver didn't initialize it properly.

With pulseaudio, I never have to worry about any of that and configuring surround sound takes me two clicks of the mouse.

Lennart did a fantastic job with pulseaudio, he took on a dirty problem that nobody else dared to touch and went through years of criticism to produce a really high quality solution that solved the linux audio problem so well that you don't hear complaints about it anymore.

In light of that, I trust him to do a good job with systemd. It'll be a couple of years of everyone moaning and bitching and whining about it, then one day it will have become a seamless part of the plumbing, everyone will take it for granted and wonder how they ever managed fighting with shell scripts and fragmented init systems before systemd came along.

It's ironic that Lennart Poettering is probably the most abused developer in the entire OSS ecosystem, yet he is one of the people contributing most to it. For our sake, I'm glad he has such a thick skin. If I was him I'd have quit this game long ago.


> Yet, name one problem you had with sound on linux in the past year?

That's just it. Linux sound worked fine for me before Pulseaudio, and FreeBSD sound has always worked perfectly fine for me. In fact, FreeBSD solved sound mixing sooner via /dev/pcm virtualization (while Linux chose to create the Linux-only ALSA instead), and has always had lower observed latency.

Pulseaudio screwed up my audio so badly that for a year I was running the closed source OSSv4 binaries and manually recompiling all the audio libraries to use OSS instead of ALSA/Pulse.

It is not fantastic to push horribly broken code onto the entire Linux userbase while others frantically jump in to help patch and fix the trainwreck.

And we're doing the same thing again with systemd. Instead of having a few years where users can choose between systemd, sysvinit, openrc or upstart, while all of the major bugs are worked out, we're being forced immediately from sysvinit (Wheezy) to systemd (Jessie). I was on Lennart's treadmill with Pulse, I'm not getting on it again with systemd.


WAIT you NEVER had an audio problem in Linux before PulseAudio? I would have said the weakest link in Linux on desktop WAS audio.

Now PulseAudio was released into the wild too soon by too many distros BUT it has fundamentally fixed what was HORRIBLE in Linux. (Previously a Sound Engineer and Recording Studio owner)

BUT I would say that Systemd is extremely stable and not broken. What people are complaining about is the philosophy aspect.


> WAIT you NEVER had an audio problem in Linux before PulseAudio?

To be fair, I didn't say I never had Linux audio issues prior to Pulseaudio (whereas I did say that about FreeBSD.)

Back in '98, my SB16 ISA card would only output sound at 8-bit monaural under mikmod, and I could only play CD-audio with that passthrough cable between the CD-ROM drive and the sound card. Once I was able to get sound working well enough, the only way I was able to play MIDIs was through Timidity and Soundfont emulation. And until ALSA, there was obviously pain whenever two things would want to play sound at the same time. This of course was due to the OSSv3 author changing the license before introducing his own audio mixing, and all of those awful sound server daemons (esd et al) never really worked, since there were multiple daemons and each application wanted different daemons or just wanted to stab right at the OSSv3 ioctl's.

But once ALSA was established and working, yes. Audio under Linux at that point worked just fine for me. Pulseaudio was a solution looking for a problem.

> (Previously a Sound Engineer and Recording Studio owner)

I won't claim to be either of these. I like to listen to music while I write code, I'll occasionally watch some movies or play some games, and I want Pidgin to make a chime when someone sends me a message.

In particular, I'm very sensitive to latency in gaming (emulation), but that's about the extent of what I need speaker sound output for.

> What people are complaining about is the philosophy aspect.

To me, the worst part is the backroom politics, the complete disregard for portability, and the lock-in effects of consuming other daemons and services and of making software dependent upon it.

However, I do also object to the design itself, as well as to the developers responsible for working on the project, and the attitude of disdain they present to the community at large.


The thing is that we HAD TO HAVE JACK to overcome latency in Linux, and MAN that was HARD, and once it worked DON'T mess with it or else 3 hours later you had a broken keyboard, mouse and monitor.

The issue was that ALSA had HUGE latency; using it for anything in recording was just not doable! I had to buy a closed-source solution under Windows. Today I could easily do it in Linux.


To clarify, "/dev/pcm virtualization" means FreeBSD does audio mixing and re-sampling in kernel space.


That is correct. Let's look at the simplest form of sound mixing:

    /* A */ sample = (sample_a >> 1) + (sample_b >> 1);              // halve each stream first; can't overflow, but both are 50% quieter
    /* B */ sample = max(-32768, min(+32767, sample_a + sample_b));  // sum, then clamp to the 16-bit range; clips when both streams peak
Obviously, the algorithms will become fancier (to mix better, to support multiple bit depths and frequency rates, to avoid popping if one stream runs out of samples, etc), but it's still an incredibly basic and perfectly safe bit of code to run.
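As a concrete illustration, here is approach B applied to whole buffers (a minimal sketch in C; the function name and the fixed 16-bit width are just for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Mix two 16-bit PCM buffers into `out`, clamping to the sample range.
       Real mixers also resample, handle other bit depths, and cope with
       streams that run dry -- this shows only the core arithmetic. */
    static void mix_s16(int16_t *out, const int16_t *a, const int16_t *b, size_t n) {
        for (size_t i = 0; i < n; i++) {
            int32_t s = (int32_t)a[i] + (int32_t)b[i];  /* widen so the sum cannot overflow */
            if (s > INT16_MAX) s = INT16_MAX;           /* clamp positive peaks */
            if (s < INT16_MIN) s = INT16_MIN;           /* clamp negative peaks */
            out[i] = (int16_t)s;
        }
    }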

Playing this up as a bogeyman for not being in user-space is FUD, especially when video card drivers also run in kernel space, and are literally thousands upon thousands of times more complex and error-prone. And now the big push is to have kernel mode setting for video cards (even FreeBSD is doing this), which I believe to be a terrible direction to go in.

I have never in my entire life seen a system crash due to audio mixing, but I've personally experienced plenty of video card drivers causing kernel page faults.

If people were even remotely serious about the protection of kernel space (and I certainly wish they were), Minix would be more than a footnote in history. Neither Linux nor the BSDs make serious efforts at microkernel designs. Not even passive attempts to run non-critical device drivers under ring 1. Personally, I'm really rooting for Minix 3 and hope that it takes off more now that it's gained binary compatibility with NetBSD.


Sorry, it was not my intent to "play this up as a bogeyman", and don't know enough about audio to have an opinion on this design decision anyway. (Do audio devices support floating point formats nowadays?)

I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.

About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler... a quick duckduckgo search finds a short note about that on: http://fedoraproject.org/wiki/Features/RadeonDisplayPort

Also X servers have traditionally needed direct PCI bus access to get the hardware initialized, which means that a buggy X server can hang your PCI bus so the driver running in user space likely doesn't increase reliability in practice.

It's an interesting question to what extent the limited success of microkernel-based UNIX implementations is due to historical accidents and network effects, and to what extent due to actual technical limitations and additional complexity of a microkernel architecture.


> Sorry, it was not my intent to "play this up as a bogeyman"

Okay, my apologies as well then. It was hard to get a read from just that one sentence with the word kernel emphasized.

> (Do audio devices support floating point formats nowadays?)

Natively, no. You can be lazy and do it anyway in software mixing though.

> I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.

Certainly, it would be ideal if everything non-critical were in user space. But audio in the kernel is probably at the very bottom of the list. Audio mixing is maybe 0.0001% of the kernel code, and is some of the safest, simplest arithmetic code imaginable. It's worrying about the one ant you saw on the counter when your entire house is infested with termites.

> About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler

I don't know if that's true or not, but I am running a DisplayPort monitor (ZR30w) now without KMS, and it works fine. Obviously the video driver is still running in kernel mode, but at least it's a module outside of the kernel itself that runs after my system is booted.

What I'd really like to see is distros and vendors instead relying on UEFI GOP for boot-time mode setting.

> Also X servers have traditionally needed direct PCI bus access to get the hardware initialized

Well, compare it to audio. Eventually even a userland mixer will have to send the samples through some sort of hardware interface. But if your goal is stability, then it would be ideal to get as much code out of the kernel as possible.

> and to what extent due to actual technical limitations and additional complexity of a microkernel architecture.

Certainly nothing is ever perfect. There are so many potential problems with computers. Cosmic rays can flip bits in your RAM if you don't shell out an extra $500 for the premium CPU, mainboard and ECC RAM. Strong enough power surges (lightning) can burn through and destroy absolutely any running computing equipment. Hardware can literally fail and take down your system. Things can overheat, there can be design flaws in the silicon itself, etc.

So I look at it like OpenBSD looks at security. You want to stack all the protections you can. Mirror your drives, use ECC RAM, don't run anything in kernel space you don't have to, try and build as much redundancy and safety as you can into the system. It won't be perfect, but every bit will help increase uptime.

...

So again, sure, audio should preferably be in user space. Just, it's many thousands of times worse that video isn't even trying to do this, and is in fact going in the opposite direction to become more tightly coupled with the kernel.


You're entirely right and I've upvoted you.

However, with one small caveat: servers don't generally have sound cards so the impact of this was relatively low. There aren't that many desktop Linux users out there. I mean I'm a Unix guy at heart and I'm typing this on a Windows laptop. I've never used Linux on the desktop and probably never will.

Now servers do have init processes and we don't really want to spend the next 3-4 years being guinea pigs. I'm quite happy for the vendors to do this behind the scenes or offer it as an alternative but we've got an RHEL+CentOS release with systemd in it already and a Debian with systemd in it just around the corner. A pulseaudio situation, even for 6 months, will result in no small amount of chaos.

I do indeed remember times before even ALSA when you had to pay OSS for drivers for your Turtle Beach card etc. But that's in the distant past, not right now, and of little relevance. Windows was fine on the desktop then as well, and the sound worked fine out of the box.


> I mean I'm a Unix guy at heart and I'm typing this on a Windows laptop. I've never used Linux on the desktop and probably never will.

Then you're not really a Unix guy at heart.

At home, all I run is Linux, including the laptop my non-geeky wife uses.

For me it was a hard choice. I knew she would object because it would be "different" and she's not really interested in learning a gazillion different computing systems, but on the flip side it meant it was simpler, quicker and less work for me to maintain the computers at home.

Once set up, things just work, and ensuring everything (including flash and other vulnerability vectors) is up to date is one apt-get upgrade away.


Let's say PulseAudio is really good now (I can't disagree). It was initially released 10 years ago. So 6-7 years after initial release ("3-4 years ago") it was causing people grief.

I'm not sure that's a great vote of confidence for the road ahead of systemd (given that systemd presumably has a bit more to it than PulseAudio). To quote the article, "I do honestly believe this will end up being the start of a rocky period for Linux".

Ubuntu 14.04 will keep me happy for ~5 years, then I can take another look at the state of things.


Pulseaudio still has problems: sound suddenly being muted, it using the wrong alsamixer settings, sound being garbled until you pass in arcane settings or change it back to ALSA. Granted, the fault might lie more with the rest of the ecosystem, though I doubt it. But it is sadly far from "just use pulseaudio and everything will work instantly". And when it does work, there is no reason to think that ALSA alone wouldn't have worked as well; the configuration was way less brittle and cumbersome than it is described as being now. Mostly it just worked.

It is only since 14.04 that you have a small chance that opting to use pulseaudio is the better choice.

Just trying to get Skype working (which uses pulseaudio) cost me 2 hours last week, which is not at all nice when you have a call starting in 5 minutes.


Seconding this.

Pulseaudio still won't detect the headphone jack on my old Intel board, and Skype on my newer Linux machines will routinely fuck up playback.

One also wonders how much of the PA cleanup was handled by people who weren't Lennart.


Strange, I don't see the issues you're talking about. I use Ubuntu Desktop on a variety of desktops and laptops, and I'd say PA was stable by Ubuntu 12.04. I do Skype and the Google Talk plugin (now Hangouts).


Working in your use case doesn't invalidate someone else's. I've had some systems that pulseaudio has been great for. I have some in which I still don't have fully working sound.


Try using it with JACK and setting up a DAW.. then you'll see the pain.


I've had success by adding these four pre/post startup/stop scripts to qJackCtl: https://wiki.archlinux.org/index.php/PulseAudio/Examples#The...

In particular for problems of getting Youtube (or any browser audio) to work while other apps use JACK directly.

Although on a recent new install it seemed to work without them as well.
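(If I remember right, those pre/post scripts essentially load and unload PulseAudio's JACK bridge around JACK's lifetime - roughly pactl load-module module-jack-sink after JACK starts, and the matching pactl unload-module before it stops. I'm citing the module names from memory, so treat that as an assumption and check the wiki page.)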

One problem is I need to start/stop the qJackCtl thing every time my laptop comes back from sleep, to get sound working again. There must be a way to automate (or, preferably, fix) this, right? Anyone know?


To be fair, the only reason to run Pulseaudio is "everyone else is" - i.e. it's fully glommed into the distro. ALSA and JACK have been stable for a lot of people, even before Lennart decided to tackle 'all the problems'.

But, also to be fair - like you, I maintain my own systems and do not overly depend on the teeming-mass-reality as a derivation of stability. My personal Linux DAW systems, running now for decades, have attained a level of productivity that I would at least hope is represented in the current niveau, vis a vis Popular Linux Distro designed for audio (e.g. pure:dyne, Arch Pro Audio, 64 Studio, UbuntuStudio, et al.) .. for the newcomer, it should of course 'all just work' from boot-up, which I hope is the case. It is for me, anyway: I've expunged pulseaudio from all of my machines, and make do with Jack. My studio uses 48-channels of digital audio, everything-is-a-file .. a working and functional DAW, thousands of plugins, about 12 MIDI devices (synthesizers/effects rack) and so on, and the best thing of all: all source code included. So, yeah .. ;)

EDIT: apropos qjackctl, yeah, apmd:

http://www.tutorialspoint.com/unix_commands/apmd.htm

.. or some such similar thing.


Thank you! I keep meaning to figure out that power-management thing, but keep putting it off because I vaguely don't know where to start. Now I have something, I will dig into it :-D


Might be worth trying ALSA->JACK routing?

http://pastebin.com/iVAjZzTS


Thanks, you just ruined my Friday remembering those days!!!!


To this day, while libpulse0 is required by some package, I have no PulseAudio daemon running. Everything is using ALSA directly, and I'm using JACK for audio work. The impact of pulseaudio is way lower than systemd's, and it never actually impacted anyone producing music who has no reason to use a pulseaudio sink. Notably, the audio layer is (in)famous for being able to route anything through anything. Contrast with systemd, where sysadmins now have limited choice in many parts of the system (not only init!).


Your pulseaudio example is naive because of this: pulseaudio broke audio for many professionals who were already using JACK and ALSA. This is why the upheaval is criticized. What's being done to improve things caters only to the lowest common denominator; it doesn't push the state of the art forward.


> Yet, name one problem you had with sound on linux in the past year?

A month ago, with Ubuntu 14.04: it paired with a Bluetooth speaker but would not send any audio to it (A2DP), with no indication as to how to diagnose or correct the issue.


Hey, pulseaudio still does not work for me, and on the rare occasions it manages to get some sound out, my CPU usage is over 10%.


Great example except ... I don't have pulseaudio installed on my system. Not even out of any specific effort to avoid it.

It may be standard on some systems, but not, apparently, on Debian (it's an "optional" package).

And that's part of the point: stuff that needn't be present shouldn't be. systemd's a whole 'nother ball of wax in that regard.

And yes, I'll even allow that Linux audio has been frustrating over the years. But in my case, problems going away had nothing to do with Lennart's work.


Audio on my Lenovo with Ubuntu is completely erratic. I'm not going to blame this on Lennart, but it is certainly not a fixed problem. I liked ALSA, though I was working with high-end hardware at that point.


>> Pulseaudio was Lennart's previous project. It broke everything in linux sound for a while, everybody moaned and hated it

I've been with Ubuntu since 2007 and went through the PA transition. I agree it is so much better now. Changing audio sources is easy and faster than on Windows 7; by Ubuntu 12.04 this was stable for me. Like changing from speakers to a headset for a meeting: smooth with PA, at least on Ubuntu. Until PA I never thought I'd see audio unified on Linux.


I still haven't given pulseaudio another try; my first interaction with it was that terrible.


I checked out several free synthesizers recently on an Ubuntu system. None worked on the first try, though I got half of them working after a while (the others would have needed JACK).

On my Debian system youtube videos stop playing sound once in a while (the video continues), though I suppose it's not pulseaudio's fault (so just a general sound problem).

You asked ;-) (I still agree with you that it got better than it once was)


pulseaudio is a continuous disaster for me.


Are you really extrapolating from ONE data point?

In some sense systemd is more stable in that it's fixing some longstanding bugs with sysvinit, but of course it will have some bugs of it own. If you don't want to deal with that, you could skip a release.


No, it's not just one data point; this is the final straw, to use the old phrase. After a few years of serious problems (our most recent being CIFS VFS problems causing panics and mounts locking up on CentOS 6.5, hard locks on RH certified hardware, and power management hell), so much incredible churn with no progress, and the sudden "fuck POSIX" approach, it paints a really bad picture of the current state of things.

There is a distinct lack of engineering prowess and quality control. It originates at the core GNU + kernel + freedesktop teams and waterfalls down through the distribution houses.

That's the problem and it's endemic within Linux.


I caution you not to apply the word "engineering" to software, especially in the manner you've been applying it in this thread. You seem to have the idea that there are universal software development practices so well understood that we can build regulation around them. But that isn't how software works. There are some practices that are known and some that are unknown. As working software "engineers", we are all, daily, put in the position of having to figure out for ourselves even what tools and materials to use to make anything.

Imagine you were an architect of buildings. Your day-to-day job is to design mundane strip malls and gas stations. You have building code on your side for much of the process. As long as you don't violate the regulations, you can at worst make an inconvenient building, but not a dangerous one.

But imagine instead that you're building a large office building every six months, your clients demand you not reuse any design principles with your future clients, and you lack not only the building code, but also a quarter of the heavy machinery, half of the tools, and three-quarters of the raw materials. I don't just mean you don't have them in stock, I mean nobody has invented them yet. And of the ones that have been invented, we don't know all of their material properties, to say nothing of what material properties we should be looking for. Will this particular bolt we are using with these particular cross beams hold up to the stresses placed on them? The answer is unknown, and in some cases unknowable.

It's really easy as a user to say, "they should have tested this more". While it's strictly true that more testing might have found your issues ahead of time (presuming the right tests were run), it is inefficient engineering to test things exhaustively. Even mechanical engineering bases a lot on statistical modeling, which will always, always have corner cases that don't match reality.

In the real world, people had to learn the hard way about things like lightning rods and sacrificial electrodes. They didn't come about from "testing during development". They came about from testing live, and seeing which buildings and boats did or did not burn down or sink. That's not bad engineering. That's just the nature of unknown problems.


That sounds like an excuse which I don't accept. I have a formal engineering background and whilst you're fundamentally right, engineering is based upon cumulative experience gained. We have a hell of a lot of experience as a society of writing software that works and is of merchantable quality.

What the general state of affairs shows is the following traits:

1) There is no thought and research going into the design of a piece of software. Ergo, we do not learn from past mistakes.

2) There are isolated individuals writing vast swathes of software which are trusted unconditionally. Ergo, we do not learn the benefit of multiple eyes on a problem, review and discussion.

3) We assume that software is correct from one person's viewpoint and opinion. Ergo, we do not test software properly nor cover those tests with objectives.

4) We work to deadlines, not quality objectives. Ergo, we trade quality for tolerance from others.

In this case someone came along and didn't think about the problem, didn't work with others, assumed they were unconditionally correct and chose tolerance over quality.

To use your analogy, they're now selling stainless steel lightning rods (poor conductivity), they're the only vendor of them, they run a vocal marketing front, and houses are catching fire everywhere.

Or more specifically, in one example: through the entire process, from the author to the distributor, no one even noticed that loginctl doesn't work properly.


Wait, so let me get this straight. You're an engineer-engineer, not a system administrator or a software developer. And you think there is a fundamental problem of qualification in software development, yet you lack qualification in software development.

On your points:

#1 is patently false, to the point of being extremely insulting. You've lost all sympathy from me at this point. Go peddle your baseless opinions somewhere the audience doesn't know better.

#2 is also laughably false, as that is the entire freaking point of open source, and often considered the greatest strength of Linux. You think that because your highly qualified opinion wasn't consulted before you had to spend a whole four hours, OMG, on a problem, that means no review or discussion was done?

#3 is false again, because software is tested. You use the word "properly", so I will sit here and wait for you to bestow upon us your great wisdom on what we could be doing better.

#4 is false on both presumptions: that software is not built to quality standards instead of deadlines, and that other fields are not dictated by deadlines.


10 years in EE (embedded systems, defense industry), then 15 in what we now call devops/architecture. Experience is fine. I have no problems being arrogant about that. I've fallen down a lot more holes than a lot of people and know what I'm talking about.

When I say tested properly, I mean tested completely. If you miss an entire functional unit of the software and a client reports it as broken, it's pretty obvious what the problem is.

Our senior software guys sat down for the other four hours and presented all our findings together and cumulatively said "we're not supporting that shit; we can't trust it".

Regarding #4, it's plain to see that RHEL was released with a broken systemd implementation due to a deadline...


> When I say tested properly, I mean tested completely.

I am still in the learning phase, but even I know that complete testing of any complex software is practically impossible.

So, how do you guarantee completeness in "proper" testing? I know you can't without redefining the word "completely". What's your definition?

Also see Impossibility of Complete Testing[0] by Cem Kaner, co-founder of http://www.associationforsoftwaretesting.org/

[0] http://kaner.com/pdfs/impossible.pdf


Yes you're right.

When your system consists of functions "A, B, C, D", I'd expect to see test suites for "A, B, C, D". In this case there were test suites for "A, B, C". The client found D, therefore the test suite was incomplete.

Now if a bit of A, B, C or D suites were missing that would be different and entirely expected.


You should at least be testing the happy path and most common failure mode of every component of a software system, whether manually or automatically. The most visible components to the user should be tested most. Imagine an ecommerce site where nobody ever tested checkout, or an OS where nobody tested logging in.


Nah, they're pretty much right.

As for testing--notice that when a lot of people here are reporting issues with systemd/pulseaudio, their reports are pretty much dismissed out of hand, or they're told "no, you've done something wrong".

For #2, a lot of times somebody with the right political position (say, Lennart at Redhat) or just the ability to shout louder and longer than anyone else will get something put in, regardless of technical advantage. Don't even try to claim otherwise.


I read him as leveling his complaints against the entire field of software development, and if he was being more specific than that, it was at least as big as the subset who develop Linux.


If there is one thing that HN has taught me it's not to make broad, hyperbolic statements just because I don't like something; especially if it is a divisive issue. There is always a "moron4hire" who will rip you a new one (and rightly so).


It seems to me that OpenBSD stands for everything you want in OS dev, except maybe point 3; Theo seems really at the center of everything.


You're 100% correct. In fact my own mail server/web machine uses it. Theo is not right at the centre; there's a large group of people who work together as equals, from what I can see. I can't recommend it for our "enterprise use" though, because we need some of the friendlier features that FreeBSD offers, such as ZFS.


I run a mixed environment right now with OpenBSD, FreeBSD, Red Hat, and Windows. I use Red Hat and Windows because certain software requires it (basically they are single app servers[1]). The OpenBSD servers are all doing basic utilities and the FreeBSD servers are doing stuff that requires a big file system (ZFS). There are a lot of enterprise tasks that OpenBSD is fine with doing, and I just use FreeBSD for tasks that require the file system or really heavy load.

1) Government contractors are so fun when they get their software required for dealing with certain parts of government.


Hmm, I don't see any of that. Odd, isn't it?


Not really odd. You just haven't poked the bits I have.


> it's fixing some longstanding bugs with sysvinit

Honest question: I've been using sysvinit for a very long time and I have no concept of what those bugs might be.


A simple one: PID files.

Assume a server with lots of processes.

Service A starts and writes its PID to disk, let's say 123. Lots of processes start and stop as the system goes about its work. Service A crashes or stops working, and PID 123 gets reused by a new process. A sysadmin comes along and hits /etc/init.d/ServiceA restart; the shell script calls kill 123, which is now a totally different process not at all related to ServiceA (see the sketch below).

etc...

Clean unmounting that doesn't depend on timeouts being set high enough.

etc...

Not starting a database before the filesystem with the database files is mounted.

etc...
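To make that PID race concrete, here's roughly what a sysvinit-style stop action boils down to (a minimal sketch in C; the pidfile path and service name are hypothetical):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Roughly what "/etc/init.d/servicea stop" does under the hood. */
    int stop_service(void) {
        FILE *f = fopen("/var/run/servicea.pid", "r");
        if (!f)
            return -1;                 /* no pidfile: nothing to stop? */
        int pid = 0;
        int ok = fscanf(f, "%d", &pid);
        fclose(f);
        if (ok != 1)
            return -1;
        /* The race: nothing verifies that `pid` still belongs to Service A.
           If the service died and the kernel recycled the PID, this SIGTERM
           lands on an unrelated process. */
        return kill((pid_t)pid, SIGTERM);
    }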


> Service A starts and writes its PID to disk, let's say 123. Lots of processes start and stop as the system goes about its work. Service A crashes or stops working, and PID 123 gets reused by a new process. A sysadmin comes along and hits /etc/init.d/ServiceA restart; the shell script calls kill 123, which is now a totally different process not at all related to ServiceA.

I created a specialized FUSE filesystem to deal with this. Processes create PID files in it, but when they die, the filesystem automatically removes them.

Code: https://github.com/jcnelson/runfs


Nice to see a solution other than cgroups for this.

The Readme is rather sparse; could you add an example of how to use it from an init shell script?


I still think 64-bit non-reused PIDs are the best long-term solution. There are other PID race conditions. (Not having PID files deleted on reboot is a different issue, of course.)

The Capsicum model (in FreeBSD, slowly getting into Linux), where you can have file descriptors for processes, is another, different approach.


(although POSIX does, I believe, require pid_t to be an int, which is an issue)


> Not starting a database before the filesystem with the database files is mounted.

This was solved decades ago with numbered init symlinks:

  K20postgresql
  S20postgresql
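(For context: in a runlevel directory such as /etc/rc3.d, init runs the S-prefixed links in ascending numeric order at boot and the K-prefixed ones at shutdown, so giving the mount script a lower number than the database makes it run first. A hypothetical listing:)

      S10network -> ../init.d/network
      S15netfs -> ../init.d/netfs
      S20postgresql -> ../init.d/postgresql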


This was solved in the same way that assembly languages are Turing complete: it's true, but it's not useful. You need a compiler to output those meaningless numbers. What happens when you install something after booting? Who starts it? On a long-running system those files are useless. Which ones have you run? Are they idempotent? Systemd attempts to solve all of these problems and more.
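For comparison, a systemd unit declares its ordering instead of encoding it in a filename. A minimal sketch (the service name and paths are hypothetical):

    # /etc/systemd/system/exampledb.service -- hypothetical unit
    [Unit]
    Description=Example database
    # Don't start until the filesystem holding the data is mounted:
    RequiresMountsFor=/var/lib/exampledb

    [Service]
    ExecStart=/usr/bin/exampledb --data /var/lib/exampledb
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Install it after boot and systemctl enable exampledb picks it up; nothing else needs renumbering.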


Don't those correspond to runlevels? I can boot to runlevel 5 and not have all file systems present and working, e.g. an NFS filesystem might not be available.

So I would say it has been hacked around for decades, not cleanly solved. But I am not the best informed here, so please add more details about how numbered init symlinks guarantee the file system being there before a service is started.


It hardly does anything for FreeBSD in the enterprise world. Companies cannot afford to support Windows Server, Linux AND another flavor of UNIX (FreeBSD). They are already dumping Solaris/AIX/HP-UX as much as they can so environments are a bit more homogeneous (and easier to support). There is no point in onboarding FreeBSD for what they perceive as minor technical differences (that, truth be told, are overshadowed by dozens of layers of bureaucracy and any efficiency benefit is completely lost).

It took a long time and a giant ecosystem to get where Linux is today at big enterprises. OSes are commodities in that space. They are not commodities in many other spaces though (e.g. startups, HPC, science, etc).


> They are already dumping Solaris/AIX/HP-UX as much as they can so environments are a bit more homogeneous (and easier to support).

Whoever is pushing for this is an idiot then. Verisign, for example, has had 100% DNS uptime for the .net, .org, and .com root servers for ~15 years because of their mixed environments. In every one of their POPs they tend to have at least two racks of equipment with:

   * 2 different brands of load balancers
   * 2 different brands of firewalls
   * 2 different brands of switches
   * 2 different brands of servers
    ** servers are from different hardware generations
   * 2 different OSes (Linux and FreeBSD)
   * a choice from 3 different DNS server software
This is all pretty much randomized at each location. As a result, a bug in one piece of the stack (hardware, software, driver, security, etc) will not take down their service completely.

This is how you run a reliable global-scale service. Anyone who plays the "it's just easier if we all use ____" is in for a big surprise when their entire infrastructure is at risk due to one bug.


Yet Google and Facebook keep using Linux for all their servers.


Facebook also uses PHP, the most reviled language around here. Google has a developer team to rewrite components and adapt the Linux kernel, consider that.


I don't recall seeing Facebook or Google have 15 years of uninterrupted service.


It's an unfair comparison in any case: DNS is "trivial" to keep up compared to even small fractions of Facebook and Google's infrastructure.

As long as a sufficient fraction of servers at a sufficient fraction of Verisign's clusters has an uncorrupted set of data and is able to serve responses, Verisign's TLD zones remain up.

Pretty much the only thing that can go wrong in that case, assuming you have safeguarded the integrity of the zone, is bugs in components outside their direct control.

It makes 100% sense for them to focus their efforts on ensuring diversity, because the class of problems it can solve for them makes up an unusually high percentage of the possible failure classes. The nature of the service also means that most of the problems diversity could cause are likely to take out only some proportion of their capacity, still leaving them with a functional system. So the potential benefit of a heterogeneous setup is higher for them than for most, and the potential risks are lower for them than for most.

For Google and Facebook, the systems are so much more complex that the tradeoffs are vastly different.


It's equally trivial if you serve your web infrastructure off of load balanced caches.


Sort of (ignoring that web browsers don't retry failed requests), if all you are serving is static/cacheable content.

Which excludes the vast majority of functionality of Google/Facebook, and most other major web properties.


I disagree.

There are very few heterogeneous systems in the enterprise. That is an objective, but the main thing is that we deliver what we're paid to deliver by choosing an appropriate platform. We have Solaris, zSeries, Linux and Windows. We just got rid of AIX.

As for minor differences, FreeBSD has a lot of much bigger wins than people realise at first glimpse. The differences are far from minor. For example:

ZFS, dtrace, rctl, a scary good IP stack, virtio support, documentation that doesn't suck, a POSIX base, LLVM/clang, a MAC framework that doesn't suck, OpenBSM, CARP and a pile more. Oh plus an automated deployment story that is pretty tidy.

Sure we can replicate some of these on CentOS 7 for example with similar tech but the above are a million times more cohesive.


I installed CentOS 7 with no prior systemd experience, had no problems, found .service files way neater than sysvinit bash scripts, and liked having meaningful names in the log/journal rather than 'local3'.


I had no problems the second time as well.

Unfortunately, when the first and second time differ even though identical (recorded) steps were performed, one has to ask: why, and can I trust it?


Maybe it was a hardware problem on your end?

My rule of thumb is "search for the problem on Google. If nothing comes up, maybe something is wrong on my end".

Did you find any results or reported bugs similar to what you experienced?


The hardware is known good on CentOS 6.5. In fact it was high end HP kit that we pulled out of production.

Yes, there were other mentions of it, with notes that it was fixed in a later systemd drop, which we can't deploy because RH/CentOS don't ship it. I think one of our guys raised a case with RH, but I was dragged off onto something else.


To be fair, pulseaudio did expose bugs in alsa-drivers, so a "known good" configuration could stop working when "upgraded" to use PA.

I have my share of reservations about systemd (and PA), but thought it might be worth pointing out that "known good" hardware A with software X doesn't have to mean hardware A is all good, just that A has no bugs/errors that aren't exposed when running X. So Y comes along (new kernel, drivers?) with entirely new code - and suddenly things behave erratically.


I've been a longtime Linux sysadmin (back into the 90's) and have run into similar issues as you. I've never had as many problems with basic system stuff as I have since testing some of the new systemd-defaulting distros. For instance, the CentOS/RHEL 7 boxes I've tried have erratic problems - sometimes not setting the hostname, sometimes services don't start, sometimes services I've disabled _do_ start. It's making me really think about shifting to FreeBSD (or, god help me, back to Solaris).


Out of curiosity, what ended up being the root cause of the timedatectl problem?


Absolutely no idea. It just went away spontaneously which is even more worrying as that suggests the system is non-deterministic.

I don't have the error handy (I'm on my phone at the moment), but it threw a dbus error with no debug info.


If you don't know what the problem was, why are you so convinced it was systemd? It could be the kernel, udev, dbus, the filesystem, or hardware (sorry, there is never 100% "known good" hardware).


It must be a hardware problem on your side, dude. Simple as that.

I imagine 10 years ago you would have been the person complaining that GCC segfaults randomly while compiling the Linux kernel, complaining that it's not "tested completely" - while the segfault was actually caused by the CPU overheating (not cooled properly) and flaky memory (causing bits to flip).


Dude, prove it. The system should be able to exonerate itself by detecting and reporting those problems (and other systems do). At the very least, point to some actual evidence. This is the "engineering" process that the parent poster referred to elsewhere.

Just because a problem is unusual, intermittent, or only affects one person doesn't mean it's not a regular old software bug. And in my experience, it almost always is. And once you do debug it, you often (but not always) understand why it was intermittent, under what conditions it happened, and why you were the only person that saw it.


Known good HP DL380 G8 pulled from production, ECC RAM, monitoring, SAS disks, full hardware test and memtest86 pass.

Nope. Not that.

We don't buy crappy hardware or not test it.


I completely agree with you that Lennart scratched an itch, which is the way all good software gets started, and others picked up on it.

Where I think the systemd-naysayers have a valid point is around the tight coupling that has been introduced, and is still being introduced, between systemd and various other components of a fully functional Linux system.

To take your "just submit a patch" example - say N years from now I'm unhappy with some aspect of how systemd works. I can submit a patch, or I can rewrite that whole component from scratch. However, it's entirely possible that the piece I'm unhappy with is so tightly coupled to the rest of systemd that I can't rewrite one component of it without rewriting the rest of systemd, or convincing the systemd maintainers to accept my rewrite and bake it in as the new "official" version of that component.

Where I think the criticism of systemd is valid is that the idea of modularity has taken a backseat, and the APIs between the different components of systemd haven't been very well-thought-out. The informal spec is "whatever systemd does today is correct", which of course destroys any sort of interoperability.

And by way of full disclosure, I'm an Arch user, and run systemd on 4 systems I use everyday - home desktop, home server, work desktop, work laptop. Whatever else I have to say about its design, I use it every day, and actually like the parts of it that I use. eg, the boot time for my desktop is stupidly fast, and if I want to know about some log message, I just run journalctl. I no longer care whether the foo daemon uses syslog, or writes to its own /var/log/foo.log that I should set up rotation for, or handles its own rotation as /var/log/foo/2014-11-20.log, and so on.

And just to play devil's advocate with my own position - there's a certain point where tight coupling makes sense. Linux kernel modules, for example, are tightly coupled to the Linux kernel, and don't work unmodified when compiled against a *BSD or Solaris kernel.


Well, the very distro-specific bunch of scripts in /etc/init.d (or is it /etc/rc.d/init.d/? or /etc/rc.d? or a symlink to ...?) were some kind of tight coupling, too.

Plus: This tight coupling did not exactly replace existing communication features. It created new ones. These are made use of.

Yes, systemd is bringing lots of new functionality under the hood - that is why sysadmins love or loathe it and users mostly don't care. That "tight integration" argument mostly comes from people (please do not take offence, you're weighing it carefully indeed!) who bemoan that other userspace system infrastructure is left behind feature-wise, and from those who love to argue about and against design decisions.


Honestly, I can understand why people are uneasy about this. "Yes, tight coupling is being introduced in many core Linux projects, but don't worry - it's only these shiny NEW features you don't have anywhere else!"

Sounds eerily like "Embrace Extend Extinguish" redux.

Don't get me wrong, I am aware systemd is a technically superior solution. But politically, it is a trainwreck.


> Well, the very distro-specific bunch of scripts in /etc/init.d (or is it /etc/rc.d/init.d/? or /etc/rc.d? or a symlink to ...?) were some kind of tight coupling, too.

Sure, but the coupling was contained. You could still run Gnome on any distro (or on non-linux), whichever way around your init scripts were.


Say whatever you want about shell scripts, but that is not what tight coupling means.


You forgot /etc/init (upstart) :)


Lennart didn't just submit the code and put it out there. He lobbied other projects to hard-depend on it, and lobbied distros to adopt it. Systemd didn't succeed where less poisonous equivalents failed because it was technically superior (it isn't), it succeeded because of shady back-room politics.


How dare he ask other people to use the software he wrote.


I think "ask" here is the wrong word, and you know it.


I'm curious to know what kind of leverage he had over the distros where it wasn't just "ask". Any suggestions on where I can find some info?


Let's say you're right. What kind of "political" leverage did Lennart have to convince distros to use systemd?


All he would've needed was the support of RedHat management, after that everything falls into place: most Gnome contributors work for RedHat, so they can convince Gnome to follow a particular direction and hard-depend on systemd. Then most distributions are under pressure from their userbase to support Gnome (because of a legacy of politics and FUD about the KDE license, because a minimally-configurable environment is popular with the kind of large, inflexible organization that buys expensive support contracts, and because it's actually quite good software) and are therefore also obliged to hard-depend on systemd.

It's in RedHat's interest for software that's currently portable to FreeBSD or especially to Solaris to become tied to Linux. This wouldn't be the first time RedHat has adopted anti-opensource methods out of fear of Oracle - compare their policy of deliberately obfuscating the history of their kernel source.


Thanks for the explanation. I never thought about this before. It makes sense, but it is scary. I hope you're wrong. :)


You can't realistically submit a patch to change the direction that Systemd is going in. For example, they won't accept a patch which removes 95% of the code so that a more modular system can be built.

Submitting a patch implies you agree with the general direction but need a bug fixed or a feature added.


Not only would submitting a patch be agreeing to their goals (Lennart gets to push the Overton window a bit further), suggesting that we should simply submit patches presupposes that the systemd cabal would ever accept them. Unless it is perfectly in agreement with their goals - including the complete software - they probably won't accept it. They don't even accept already written and tested patches for trivial things like #ifdef-ing a couple minor fixes so the project can build on a different libc.

Lennart Poettering[1]:

    Humm, I know this will disappoint you, but we are not particularly
    interested in merging patches supporting other libcs, if those are not
    compatible with glibc. We don't want the compatibility kludges in
    systemd, and if a libc which claims to be compatible with glibc actually
    is not, then this should really be fixed in the libc, not worked around
    in systemd.

If they aren't interested in trivial compatibility patches, they certainly aren't going to accept any patch that dares to disrupt their tight integration or questionable design choices.

As for forking the whole thing, remember that when logind was briefly liberated so it could be built as a standalone package, Lennart went and did a big rewrite so the next version was much more integrated with systemd. When he controls the internal APIs and can change them whenever he wants, a clone will have to be a total replacement right from the start, or it ends up perpetually having to catch up to the changes that will be introduced just to cause breakage.

[1] http://lists.freedesktop.org/archives/systemd-devel/2011-Jul...


His response seems perfectly reasonable to me - even more so after reading that whole exchange.

Why should the Systemd team pay the overhead - in terms of complicating their code - to work around incompatibilities in another libc that will also affect portability of a lot of other Linux software?


The same reason most other projects accept trivial patches like that: it's not actually a cost or complication, and helping compatibility and interoperability in the software ecosystem is a good thing.

We're not talking about asking for some new work to be done. We're not talking about any kind of change to how the project works.

This is about trivial changes like #defining a function name that isn't even included in the build unless you're using that libc. It is actually rather surprising behavior to see in a publicly-developed project. This kind of fix is incredibly common; we've created tools such as "cmake" and "autoconf" to handle the common cases and make #ifdef-ing easier.


"Trivial" changes like that contributes substantially to making projects hard to read and understand. When there are no better alternatives, that may be warranted, but in this case there is an obvious alternative: Fix the libc implementations that are incompatible with glibc, and at the same time gain the benefit of helping other applications.

I wish more projects would take this line.

Autoconf is the devil. It's a symptom of how broken Unix-y environments have been, and of how people were willing to impose a massive maintenance cost on countless application code bases instead of either pushing their vendors to get things right or agreeing on common compatibility layers.


Well, in this specific case the patches would have been subtly broken, i.e. replacing a thread-safe call with one that is not. So it was not just #ifdef-ing (they even suggest some ways to do that better in the patches, e.g. capability-based #ifdefs instead of uClibc-or-not).


It's a matter of perspective. You could also say that glibc is adding incompatibilities by deviating from standards, and now systemd depends on them. I don't consider it "perfectly reasonable" that Gnome, systemd and the Linux kernel are now starting to depend on each other when previously all of these components could be exchanged for others. It's a mischaracterization to say that the systemd "shouldn't have to pay the overhead" of making their code compatible, because they started out with introducing an architecture that promotes this very lock-in to begin with.


glibc is the standard for C libraries to follow on Linux.

In this particular case, mkstemp() is not a viable replacement for mkostemp(). A proper fix is to provide mkostemp() in uClibc, or to compile with a shim that provides it.
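As an illustration, such a shim might look roughly like this (a sketch; note that the two-step flag handling is not atomic, which is exactly the kind of subtle breakage the submitted patches were criticized for):

    #include <fcntl.h>
    #include <stdlib.h>

    /* Hypothetical mkostemp() fallback for a libc that only has mkstemp().
       Setting FD_CLOEXEC after the open is not atomic with respect to a
       concurrent fork/exec, unlike a native O_CLOEXEC-aware mkostemp(). */
    int mkostemp_compat(char *template, int flags) {
        int fd = mkstemp(template);
        if (fd < 0)
            return -1;
        if (flags & O_CLOEXEC)
            fcntl(fd, F_SETFD, FD_CLOEXEC);
        return fd;
    }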

Arguing over whether including the shim in Systemd would be acceptable would be a different matter, but parts of the patches as presented were flat out broken.

And the Linux kernel is not starting to depend on systemd or the others. The Linux kernel is moving towards demanding a single cgroups writer, and at the moment Systemd is the main contender in that space.

That Systemd depends on Linux is unsurprising, given that they stated from the outset that they were unwilling to pay the price of implementing generic abstractions rather than taking full advantage of the capabilities Linux offers. You may of course disagree with that decision, but frankly, for a lot of us getting a better init system for the OS we use is more important than getting some idealised variation that the BSDs could use too.

> an architecture that promotes this very lock-in to begin with.

The "architecture that promotes this very lock-in" in this case is "provide functionality that people want so badly they're prepared to introduce dependencies on systemd".

At some point enough is enough, and sub-optimal advances still end up getting adopted because the alternatives are worse. Systemd falls squarely in that category: I agree it'd be nicer if it were presented and introduced in nice small digestible separate chunks with well defined, standardised APIs, so that people could be confident in the ability to replace the various pieces. But if the alternative is staying with what came before? I'll pick Systemd, warts and all.

Looking at posts from the Gnome people, the original intent appears to have been to provide a narrow logind shim exactly to make it easier to replace logind/systemd with something else. If someone feels strongly enough to come up with a viable shim or an alternative API that can talk to both systemd and other systems reliably, then I'd expect Gnome to be all over that exactly because they will otherwise have the headache of how to continue to support other platforms.

The problem is that Gnome has for a long time depended on expectations of user session management that ConsoleKit on top of other init systems has been unable to properly meet, so Gnome has in many scenarios been subtly broken for a long time.


For better or worse, systemd has adopted the OpenBSD approach to portability. Nothing is stopping you from creating a systemd-portable project, similar to how OpenSSH-portable makes OpenSSH usable on non-OpenBSD platforms.

As to logind, it may have been a better choice for the long term to do a separate implementation of the public and stable logind DBus API instead of trying to run the systemd-logind implementation without systemd as PID1, but supposedly whoever did the latter thought it was the best short-term choice.


You can fork it. You can fork all of Debian. But there's numerous reasons that's going nowhere.

Most being: this is not the issue the loudest voices say it is.


Forking a distribution will help by giving an alternative, but it's not as easy as just saying it. Maintaining a distribution is a massive effort that takes a lot of work. Building a team of people with enough time to make that happen isn't something you do overnight. The fact that one hasn't magically appeared since this started has more to do with that than anything else. (Followed closely by people generally waiting to see how this shakes out before they pull the trigger.)

Second - while it will provide an alternative that helps frame the debate, this is not a minor undertaking. With every other distribution caving, maintaining a distribution that does not use Systemd will require a lot of work to keep all of the software out there working properly with whatever alternative init system it chooses to use.

This alternative distro is also going to have to deal with how to solve the init problem. We had some good options in play but I don't believe we'd found the best answer to the problem yet when Lennart came bowling through like a bull in a china shop. So any distribution effort is going to have to take on the role of choosing the best of breed alternative and make the effort to ensure it continues to develop and improve.

This isn't something you take on lightly.


> Systemd is offering a more compelling solution than anyone else, and if you don't like it, well, you should submit a patch, write some code, or implement it yourself.

Or go back to using Windows for general use and software development, which is in fact what I've done. It's amazingly sad after many years of being a strong Linux supporter, but this has killed Linux for me. I see no point in continuing to use it.


That's really not a terrible idea actually. I switch back and forth a lot between the Windows and the Unixy ecosystems, and by far the best end user experience, tailored[1] developer experience and (I'm going to get slated for this) mobile experience is on Windows.

[1] tailored as in platform specific. For cross platform stuff, it's not so good.


Someone created a new account in order to post a pro-windows reply in a systemd thread.

Color me surprised.


[flagged]


Nice, a personal attack at the end. I'll definitely not call you a shill now!


Ok perhaps the personal attack was uncalled for. Please accept my apologies.


Or stop using Linux entirely and move to FreeBSD.


Unfortunately FreeBSD doesn't have a very good MAC system. Capsicum and friends implement only ~10% of what SELinux offers. Also the virtualization is far more basic, and lacks tooling (it's like receiving a huge bag of Legos and instructions on how to build a space shuttle).


IMHO Capsicum has significant potential to provide similar real-world security improvements in a far simpler way than SELinux (once the relevant profiling of daemons is done); there was some work by somebody at Google to port it to Linux, and I hope that will be usable sometime soon.


Linus Torvalds said he would never have created Linux if FreeBSD had been available at the time. And the only reason it wasn't available is that it was embroiled in a lawsuit over embedded AT&T UNIX code back in the early 1990s.


I'm not sure how this is relevant. Saying he wouldn't have created Linux at the time is not the same thing as saying he wouldn't work on it or use it today instead of FreeBSD.


I may be wrong, but I took GP to mean that FreeBSD is a good enough OS to use daily, hence Torvalds wouldn't have had a need to create Linux. Obviously, Torvalds hasn't abandoned Linux in favor of FreeBSD today, and no one said he has or is planning on it.


> I may be wrong, but I took GP to mean that FreeBSD is a good enough OS to use daily,

But that's not at all what Torvalds's quote means. He meant that, if FreeBSD had been available, he would have worked on contributing to that instead and improving it for daily use, rather than building Linux to be used for daily use. (The state of FreeBSD in this hypothetical world has no bearing on the state of FreeBSD today).

In terms of FreeBSD today, while it's possible to use it for daily use, it suffers from even worse driver issues than Linux does, and all the problems that come with having a much smaller market share (both for contributing developers and users).

This may or may not be 'good enough' for OP's purposes, but it's disingenuous to suggest that Torvalds's hypothetical from the early 1990s implies that FreeBSD is a clean substitute for end-user Linux today.


The thing is though, back then in the early 90s FreeBSD could have been good enough; most x86 computers didn't have a GUI at all, and if they did it was OS/2 or Windows over DOS. Hardware was arguably much simpler too. Linux was created as a response to Minix, not BSD[1], and Minix was just a "teaching" OS, not a daily-driver workstation OS. He didn't initially build Linux for "daily use", but as a hobby project, per his own words.

[1] https://groups.google.com/forum/#!msg/comp.os.minix/dlNtH7RR...


>In terms of FreeBSD today, while it's possible to use it for daily use, it suffers from even worse driver issues than Linux does

No it does not. It has fewer driver issues by far. Because it does not have broken half-assed binary only drivers by obscure vendor X that don't actually work. Unsupported hardware is simply unsupported, rather than broken.


> Because it does not have broken half-assed binary only drivers by obscure vendor X that don't actually work.

I've always found it interesting that Nvidia offers a more complete and stable BSD driver than its GNU/Linux counterpart. That said, AMD/ATI support is abysmal, and even Intel video is lacking compared to GNU/Linux.

> Unsupported hardware is simply unsupported, rather than broken.

That's a matter of interpretation. If FreeBSD doesn't support my hardware, it's the equivalent of being broken for me, given that I can't use it anyway. That said, I try to build or buy the most OS-agnostic workstations possible so my options are always open.


> ... and even Intel video is lacking compared to GNU/Linux.

Not so sure about that, or maybe it depends on the situation. For instance, I've been running FBSD and Linux in VMs (specifically, Hyper-V/Win 8.1 on a Surface Pro 2).

After updating to FBSD 10.1, I decided to try the Lumina DE (from PC-BSD). I've been surprised at the performance of the GUI under the constrained memory and CPU availability. It's about as good as the host (Windows), albeit running minimally demanding applications.

OTOH Linux versions (SUSE, CentOS) have been much more sluggish and GUI usability much lower. I realize this is impressionistic and hardly a deep analysis. Nonetheless, I think it points out that it's risky to make assumptions when circumstances and system requirements are so tremendously variable.


> I've been running FBSD and Linux in VMs

Then you're abstracting away from the video hardware, and not getting the same results as you would on bare metal. The only thing really lacking in Intel video versus Nvidia is proper KMS support; Intel video on FreeBSD works generally well otherwise. The FreeBSD Nvidia-provided driver, while closed source and binary only, is more or less feature complete.


>If FreeBSD doesn't support my hardware, it's the equivalent of being broken for me, given that I can't use it anyway

That's the point. People like to pretend Linux has more hardware support, but mostly what it has is broken drivers for obscure buggy hardware that you can't actually use. The "well supported stable hardware that actually works" list is practically identical between them.


And if Linux weren't available, Lennart would most likely be working on systemd for BSD. Whether or not he would be successful in getting it adopted there is unknowable, but it would be possible.

This is why choice is good, and lock-in is so bad.


I don't think it would get adopted in its current form. Systemd is a massive violation of the Principle of Least Astonishment[0].

[0] http://www.unixguide.net/freebsd/faq/16.17.shtml


To avoid violating this principle, the commonly accepted industry best practice is to never change anything at all.


If it ain't broke, don't fix it. If it is broke, make sure people still know how to use it after you're done fixing it.

As an aside, I've heard it theorized that part of the reason Microsoft tends to do massive GUI facelifts every few releases, is to keep the Windows/Office training industry going strong.


"Someone finally got fed up with the haphazard state of affairs in Linux-land"

But wouldn't the path of least resistance be to switch to a project that does not have this "haphazard state of affairs"?

When I originally tried Linux I got fed up within _days_. It is the _relative_ lack of default "configuration" (that is decided by someone else) that makes me stay with FreeBSD and NetBSD. Of course, lack of default configuration is the antithesis of popular Linux distributions. Whenever I have to use one, I spend more time learning how to turn things off than I ever did learning how to turn things on.

The answer to the original question is, I think, "no", switching is probably not the path of least resistance for many Linux users. Because when the Linux user makes that switch, they immediately find that someone has not done everything for them.

And from what I have seen, observing the questions of Linux users who first try FreeBSD or NetBSD, they generally do not like that. It means they have to do some configuration of their own. And even if they are comfortable doing configuration, it means they have to learn things that are different from the "Linux way"; and they inevitably encounter shortcomings that are due to lack of developer resources (read: time).

In doing things for yourself you learn about how things work. The rc.d system that all BSD projects use is coherent and relatively easy to understand. For whatever that is worth.

This debate over systemd seems to cut to the core of the value of learning about how things work. The reader can draw their own conclusions.

Linux is only a kernel, and it should still be possible, and thus optional, to run that kernel with a basic init (or an init alternative, e.g., one based on daemontools) and with userland utilities that do not need systemd.

The question I have is how difficult the popular Linux distribution folks are going to make that for their users to do.

And if they do make it difficult, it raises the question, "Why?"


> Why else would nearly every distribution be on board?

This question rests on an informal fallacy: the idea that everything in the world can be distilled to a single answer. The real answer is more complicated. For example:

Red Hat is on board because they pay its creator's salary. So they rely on an individual bias.

Debian is on board because they rely upon 'collective wisdom' and committees to make decisions. So they rely on the bias of group thinking.

Ubuntu is on board because Debian is on board. So they rely on the bias of the other.

Other distributions are using it because 'every other distribution is using it', or they're small enough that it doesn't cause conflicts for its use base, or because it's a GNOME dependency, or because it's just new technology.

--

To make someone think something is a good idea, show them someone else thinks it's a good idea. This is a fact of all human beings' thought processes. Decisions are not based on merit, or logic, or even a quorum; they are based on fallacies created by heuristics. There exists a heuristic by which the more an idea is adopted, the more other people think it is a good idea.

We imagine our thoughts are logical, and that other people also think logically, and that their decisions must be made for a good reason. But in fact, the great majority of all decisions we make are based on guesses; this is how our brains are able to carry out complex calculations and come to decisions in split-seconds.

For example, you might look at systemd and say, "it fixes so many problems! it provides so many features! it standardizes Linux! CLEARLY this is superior. we must adopt it."

For people who care about the purity of the highest technical ideals, this makes sense. For people who care about being able to use their computer, these things don't matter, and the systemd implementation actually makes things worse for them. The changes systemd purports to make are not bad things. It's really just the way in which they were made that is bad.

--

It's like wanting to upgrade your bicycle to four wheels, but requiring the rider now operate it lying flat down and using mirrors to navigate. The four wheels was a great idea. Using mirrors to navigate? Maybe not so great.

Of course, its creators will turn this inconvenience into a feature, saying "you get to lie down! it's therefore more efficient and easier to use!", completely ignoring how other people want to ride a 4-wheeler.


haphazard? fragmented? many places for error logs? half-a-dozen ways to configure X?

Have you ever run a linux box? How about dozens or hundreds of them in a production environment? I'm guessing no on both counts based on the nonsense you're spewing forth in your post.

If you don't know what you're talking about, it's best to just keep quiet.


Really bored of this nonsense at this point. Take this for example:

If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it’s finally baked and production ready) can bring you an LXC based ecosystem

Systemd is absolutely key to how CoreOS works. It's the basis for the distributed init system it provides — a major selling point.

Taking any of this blog's advice would be harmful. I'd suggest a better approach would be to accept that the majority of distributions have settled on systemd, and that generally this decision has not been made by idiots. So it would be worth either understanding what their pain points are and how they can be solved with an alternative to Systemd, or helping to fix the issues that Systemd apparently has yourself.


I agree with that specific advice (minimize the interaction with the base distro). I'm on a quest to isolate every major component of my user experience in containers (including things like browser etc.).

But not because I have anything against Systemd. I love Systemd so far.

It's hilarious that he's proposing CoreOS as an alternative, given that it's one of the most radical rethinks of a Linux distro out there.


For those with reading comprehension skills this isn't funny but rather a natural conclusion. The problem for you is that you're making assumptions about the author that you haven't verified.

The problem here isn't change, or re-thinking linux. The problem is re-inventing the wheel, and doing it poorly.

CoreOS uses systemd, but it's not a distribution in the classic sense -- rather it's a platform for containers. The narrow use-case for systemd here removes some (most? all?) of the concerns.


ah so you're saying that systemd is flexible because it was chosen for a distro where it has very limited remit? so you mean it wasn't some horrible octopus which tried to suck all of coreos into it before they rejected its entanglements?


Many large decisions have been made by idiots, badly. Blindly accepting consensus as a rational choice, without a counterpoint, has caused many years of suffering for the human race, from wars to bad science to bad medicine.

Whilst I agree that the blog is probably hokum, there's nothing wrong with critical thinking.


I completely agree – but critical thinking involves something along the lines of "here are the things that I think are wrong, this is why I think they're wrong, this is how I think they could be solved."

The answer "let's throw everything out" isn't that useful; likewise, dismissing the considered opinion of lots of people who have been doing this sort of thing for a while needs to be done with some rationality. An empty, bandwagon-jumping appeal like this adds little value and just helps spread more misinformation.


Actually the answer is to be slightly more conservative with the approach. No one has to pick up systemd now for example. Leave it a year and see where it is. The technical merits will be obvious and the technical problems will also be.

Moving all the chess pieces at once, which is what is happening, is not productive, professional, or a sign of experience.


That's not the case though – systemd has been enabled by default by Fedora for three and a half years or so, and has been steadily adopted since then by most other major distros. Not everybody is moving at once, so why would waiting a year make any difference?


This bugzilla query against RHEL 7.0 systemd says otherwise:

https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_s...

A lot of the bug descriptions are quite scarily bad when you consider them in context such as "various loginctl commands not working" etc.


So what you mean is "Systemd still has serious bugs," which is nothing to do with whether or not everybody is moving at once.

That's the point – if systemd has important bugs they should be fixed. Clearly, the groups responsible for the decision have concluded that the tradeoff is worth it, and have accepted that a large, fundamental change will have issues. That's fine – there are a bunch of other distros that have not adopted systemd, which you can use in the meantime if you disagree.


The two are related.

People are shipping production operating systems with systemd that is chock full of bugs.

An all consuming tentacle monster like systemd is fine if you want to dogfood it but to throw at paying customers and/or supporters of your distribution is a little off key.


Linux developers are generally very smart people in my experience. It's consensus of many experienced and smart people that makes it significant.


They are individually I agree, but as a group of people it's not such a good story. It's quite dysfunctional from what I've seen.


I can't comment specifically on the Linux group, but in general you are absolutely correct. As a consultant, I see it daily in my practice. When I talk to people individually, they seem smart/knowledgeable, but as a group, they often make not-so-smart decisions.


That's exactly my point. It's not limited to the Linux core team. We divide into working groups of 2-3 people max to avoid this. Works quite well.


What does this comment even mean? What's wrong with them as a group?


To borrow a quote from a popular movie:

    A person is smart. People are dumb, panicky dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.


I think this is trying to say you are as strong as the weakest link? Maybe it's trying to say something about group think.

I mean all these movie quotes are all cool sounding but are quite shallow.


Answer: Yes.

Just because some thought has to go into the interpretation doesn't make it shallow.


A large group is often dumber than its weakest link.


One thing I've realized about the Linux community through all this systemd flame warring is how unbelievably conservative a large subsection of it is. There's this huge so-called "neckbeard" contingent that views anything architecturally beyond the 1980s as a huge affront to Unix.

IMHO I kind of shrug at this, since Unix was never really all that great to begin with. Unix won because the only commercially viable and well supported alternative was Windows, an OS that was (and in many ways still is) significantly worse especially for server and embedded applications. Everyone rallied around Unix and especially free/open Unix as an alternative, and so here we are.

It's also tough to compete with free, and Unix OSes got a huge boost from both Linux and the various free flavors of BSD. Yet that boost came at the expense of things like BeOS, Plan9, original NeXT, and the OS I still feel is hiding behind the JVM ... which for their day represented fresh ideas that might have gone somewhere.

Ultimately I think the existing Unix paradigm is going to be killed by Docker and mobile OSes that containerize in similar ways, and I'm not sure this is a step forward. It escapes much of the ugliness and the poor permission model of Unix, but it does so by handing virtually everything to the app. Docker containers (and mobile apps) can be thought of as something almost akin to giant statically linked binaries. We're getting more monolithic and coarse-grained.


People who aren't extremely familiar with how the Linux init system works and whose job doesn't include keeping the servers stable don't see why the neckbeards are up in arms about systemd, but there's good reason. Many people's jobs depend on making sure the servers are working, and knowing how the servers work is a big part of their job (and sanity). In its current incarnation, systemd changes the fundamentals of how servers work without much increase in features - large risk and little reward.


I've had huge productivity gains with unit files for systemd over trying to write spaghetti shell code for old sysv. The syntax and features are well documented and writing them is extremely simple. I also don't believe any sysv implementation had crash recovery or socket activation of daemons, both of which are huge feature wins.
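For comparison, a complete service definition can be as small as this (a minimal sketch; the unit and binary names are made up):

    [Unit]
    Description=Example daemon

    [Service]
    ExecStart=/usr/local/bin/exampled
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

That one Restart= line replaces the respawn loops and PID-file juggling a sysv script would need, and adding a matching .socket unit gets you socket activation.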


I also don't believe any sysv implementation had crash recovery or socket activation of daemons, both of which are huge feature wins.

That's because there were other components handling those tasks, like inetd and /etc/inittab. I do like having Upstart handle respawning for me, though.
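For example, a single /etc/inittab line (daemon name and path made up for illustration) was how sysvinit did respawn-on-crash:

    ex:2345:respawn:/usr/local/bin/exampled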


Inetd only did TCP socket activation, not of unix sockets, though.


Inetd only did TCP socket activation, not of unix sockets, though.

False.

http://manpages.ubuntu.com/manpages/hardy/man8/inetd.8.html

    The service name entry is the name of a valid service in the file
    /etc/services. … For UNIX domain sockets this field specifies the
    path name of the socket.


    The protocol must be a valid protocol as given in /etc/protocols.
    Examples might be “unix”, “tcp” or “udp”. … A protocol of “unix”
    is used to specify a socket in the UNIX domain.
xinetd does not appear to support this feature.
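Going by the manpage excerpt above, an /etc/inetd.conf entry for a unix-domain socket would look something like this (path and daemon names are made-up examples):

    /var/run/example.sock stream unix nowait root /usr/local/libexec/exampled exampled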


Oops, you're right, I misread an article about socket activation.


> People who aren't extremely familiar with how the Linux init system works and whose job doesn't include keeping the servers stable don't see why the neckbeards are up in arms about systemd

Hi, I'm a sysadmin who's fed up with neckbeards (most of whom apparently don't know much and refuse to learn) claiming to speak for all sysadmins on this topic.

> large risk and little reward.

It's four years old, and claiming "large risk and little reward" is like listening to someone claim that moving from sendmail to postfix would be a disaster.


The only sysadmin I know with an actual neckbeard (over a foot long) is a 20-year unix/linux admin, and he greatly favours systemd.

Perhaps if you're tired of neckbeards speaking for all sysadmins, you should return the favour and not declare what all neckbeards are saying. A lot of old, experienced admins are for systemd. It's not the young go-getters who are at the top level of distros making the foundational architectural decisions, after all.


I've been dealing with sysadmin stuff for quite some time (since 2001), and I just won't use distributions without systemd.

The benefits far outweigh the risks (imo obviously)


Meh, I don't mind either way. There's nice stuff in systemd, but nothing that's so critical I wouldn't use a sysv-based system (there are pretty good ones).

What generally annoys me are things like supervisor and the other tools people use to "auto restart" services; these don't integrate nicely and put stuff all over the filesystem/etc. I like that systemd includes that and does it mostly properly.


Servers is precisely where I want systemd!

There are some things I've wanted reliable and consistent mechanisms for so long: starting/restarting/inspecting services, isolation/resource limiting, socket activation, log collection.
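For instance, the day-to-day mechanics reduce to a handful of uniform commands (nginx here is just a stand-in for any unit):

    systemctl restart nginx.service   # start/stop/restart any service the same way
    systemctl status nginx.service    # state, main PID, and recent log lines
    journalctl -u nginx.service       # all collected logs for that one unit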


Server admins have so much new technology to learn and play with to stay relevant. If they feel like learning systemd is a chore, they might be in the wrong business.


All the more reason to be judicious in what new technologies are introduced.

One of the huge benefits of the Unix/Linux, CLI, and Free Software traditions is that they tend to be very strongly preserving of established knowledge. Changes are incremental, usually additive, a reliance on scripting means that interfaces are unlikely to change, and new tools are very frequently drop-in replacements for old.

As specific examples:

I first learned editing under BSD vi in the mid 1980s. In the time since I've learned and used on various PCs (and a few other systems): WordPerfect, WordStar, MacWrite, AmiPro, several iterations of MS Word, the EDT and EVE editors under VAX/VMS, the TSO-ISPF editor, and a few others under Unix: emacs, ae, nano, nedit, Abiword, Lyx, and various iterations of what's now LibreOffice. Most of that skill-acquisition is now dead to me -- the tools simply aren't available or aren't useful.

I'm no longer using vi but vim (adopted in the mid 1990s as I switched to Linux), yet the basic muscle-memory is the same. And it's an editor I can utilize across a huge number of systems (though I do admit to finding traditional vi / nvi painful).

Similarly, the bash shell is an iteration on the basic Bourne and Korn shells.

ssh is a drop-in replacement for rsh, to the extent that /usr/bin/rsh is typically a symlink to ssh. While the dynamic is slightly different from telnet, it's still pretty similar with a few exceptions.

On the rare occasions when a utility changes its commandline options, you'll virtually always hear about it. The fact that it's so painful (and tends to break decades-old scripts) means it's generally avoided. Authors who make a point of doing this tend to find that people avoid their tools.

A bigger point is that forgetting stuff is often much harder (and more important) than learning stuff. And when you're invalidating long-established patterns, that's really painful.

There's also the fact that we manage technology by managing complexity, and most of us in the field work at the limits of our ability to manage the complexity we're faced with: the basic OS, shells and interpreters, hardware, vendors, hosting providers, management tools, employers, clients, customers, co-workers, engineering and development teams, services, abuse and security concerns. It's a really complex and dynamic field.

Linux has done quite well (with a few notable exceptions) at maintaining a balance between capabilities provided and complexity imposed. One problem is that as systems become more complex, the additional benefits of yet more complexity are lower, and the costs are higher (this is a very general rule, not just specific to Linux, operating systems, or computers).

The question of how to introduce radical change is a key one. I've seen a number of failed attempts to drastically revise existing systems in place -- this almost always fails. Linux itself wasn't introduced in this way -- it emerged as an alternative to both "traditional" proprietary Unices, to Big Iron (mainframes, VAX), and Microsoft's then-new WinNT. Linux ended up dominating virtually all of these categories, but it did so by incrementally beating out the competitors through replacement.

An interesting space where a lot of this comes to a head specifically is in the graphical user interface field. I've noted several times that Apple, notable for a great deal of success in this area, has been exceptionally conservative in its GUI development. It's effectively had two GUIs, the initial Mac System interface, and Aqua. Each has had a roughly 15 year lifespan, and yes, there was incremental improvement over the span of both, but the essential base remained the same.

Since the early 1990s, I've watched Unix/Linux go from twm to fvwm, Motif/mwm, VUE/CDE (a "corporate" standard based on Motif plus a desktop), Enlightenment, GNOME, and KDE, and now alternatives such as xfce4 and ... oh, that funky graphics thing Suse's got, as the "primary" desktops. GNOME and KDE themselves have gone through about three major revisions. And there are a number of other "lesser", more minimal desktops as well -- I use one of these, WindowMaker, which is actually based on a late 1980s ancestor of the Aqua interface now used by Apple.

Microsoft's experienced some similar recent tribulations. As has pretty much every online site ever that's done a site redesign.

As jwz has observed: changes to GUIs just don't offer that much win. They're highly disruptive, they're possible because the interfaces generally aren't scripted (other than via automated QA testing systems, but that's another story), but more importantly: the productivity benefits granted users really aren't that significant, especially regards the cost.

Worse: changing an existing interface leaves users in a no-recourse situation, especially in the case of SAAS. For Linux and systemd, the options are slightly more open in that (for now) it's possible to disable or block systemd from installing in at least some cases. But over the long run, it may be that the only options are voice and exit, as opposed to loyalty (a reference to Hirschman's book and concept, Exit, Voice, and Loyalty, which I recommend looking up).

So yes: those of us with numerous decades of experience in the field often do have an extremely jaundiced view toward radical change. And with very good reason.

But your comment is really unwarranted.


Wow, great comment -- and one that all who endeavor to innovate in systems should take to heart. As my former colleague Bart Smaalders was fond of saying, "the hardest software to upgrade is the software in our brains"; when inventing new abstraction, it must be done so sparingly and (as much as reasonable) by leveraging extant notions. This isn't merely to allow a technology to be readily understood (though that too, certainly); it also requires thinking in terms of reinvention versus reuse. This thinking enforces a kind of humility: you must learn about the systems that have come before, if only to understand which of their abstractions can be reused. I think it is a perceived lack of this kind of humility in systemd that has been so alienating for those who have a long history with Unix: it's not as if other approaches are being rejected so much as they are not being considered at all.


I think it is a perceived lack of this kind of humility in systemd that has been so alienating for those who have a long history with Unix: it's not as if other approaches are being rejected so much as they are not being considered at all.

I really have the feeling that people are using double standards here, especially when suggesting Solaris or Solaris-derived systems. Since systemd is implementing pretty much what has been in Solaris (SMF) and OS X (launchd) for a while now:

https://docs.oracle.com/cd/E23824_01/html/821-1451/dzhid.htm...

https://developer.apple.com/library/mac/documentation/Darwin...

Also, it is of somewhat questionable ethics that members of the Solaris community submit such troll posts (as others have pointed out, there is not much substance there). It reeks of wanting to destroy Linux' image for your own (Illumos, SmartOS) gain.


This is a rather disingenuous response.

It assumes that this is a troll post - which I don't think is fair. The author has concerns that are legitimate to them, and outright dismissal as a troll, whether or not you agree with them, is petty and judgmental.

Second, you are somehow conflating dislike of systemd with love of sysv init. The cognitive dissonance here only makes sense to me if you believe that systemd is perfectly fine, and think that the only reason people dislike it is because it's different.

However, if someone is recommending a solution that utilizes SMF, is it such a stretch to think that it might not be because they are in love with sysv init, and instead might think that the implementation of systemd is lacking?

I personally like the underlying idea of SystemD - because I like SMF. I do not like the implementation of systemd, and also have reservations about the people helming the project.


SMF is a pita - far, far worse than the process management stuff in systemd.

SMF does not seem to want to own every bit of my Linux machine, however.


I want to know how this even became an argument in the first place.

It's not that I don't like systemd, it's that [insert affiliated party] is way too cocky

It blows my mind to see people regress so far that this became an issue of emotions in a technical debate.


It's not a matter of emotions.

It's a matter of having observed similar behavior in other projects which went similarly off the rails.

Poettering's own track record with Pulseaudio comes to mind. There's also the GNOME project, which I identified as actively intelligent-user-hostile around 2004. It's been somewhat gratifying to see that particular perception bear out with time.

There are other projects which have shown similar levels of arrogance, though mostly with more limited and self-contained damage.

And being prickly or hard to deal with has shades. Neither Linus nor Theo de Raadt are pussycats, but both focus very much on technical issues and are generally highly responsive to specific technical complaints. Sure, they make mistakes and bad calls occasionally, but on balance they've tended to get things right.

The attitudes expressed by Poettering and Sievers in particular aren't simply cocky, but contemptuous. And they're getting called on it. Including by Linus.

I could give a shit about personalities themselves, I really could. For the most part I really don't care how socially awkward someone is if they're good at their job. And if they don't start going out of their way to do harm to me or others. Personality disputes in discussions bore the piss out of me.

But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.


I could give a shit about personalities themselves, I really could.

Then stop poisoning the well.

But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.

The problem that I see is that most arguments against systemd are first and foremost about Lennart Poettering. And when technical reasons are brought forward, they can all be summarized as: does not conform to the UNIX philosophy (monolithic, replaces existing tools with tightly-coupled equivalents, binary logs).

I think that a reasonable argument can be made that, with the exception of binary logs, these things are true for many UNIXen. You will find only few people who would say that BSD does not conform to the UNIX philosophy. However, the BSDs have the aforementioned traits as well: developed by one project and tightly coupled (e.g. you cannot just take most BSD utilities and libraries and compile them on Linux or Solaris, it requires serious effort).

People always argued that this was a good trait of the BSDs (and I agree to some extent), because it allows better integration and use of BSD-specific features.

However, when systemd does it, it's suddenly violating the UNIX philosophy.


"Then stop poisoning the well."

I've dithered on whether or not to respond, but this bugs me.

Your response, again, typically of many systemd supporters, looks at the option of responding to the relevant points of my argument (personalities can have relevant technical consequences), and dives to the personality dispute "stop poisoning the well".

I'm not poisoning the well. I'm pointing out that the well has been poisoned.

The elements of the Unix philosophy which you allude to exist for good reasons, and violating them imposes very high costs. This is a lesson that those of us who've been around for a while, and have multi-platform experience (check on both counts for myself) are well aware of.

Monolithic systems transcend ready replacement. Generally you've got to toss the whole mess out. Pluggable systems avoid that. There are instances in which monolithic design does seem to be at the very least hard to avoid, but you'd best be very aware of this and defend your position well. Systemd violates this principle by adopting a gratuitously monolithic design and explicitly refusing compatibility and modular alternatives.

Tightly-coupled systems are similarly brittle. The classic case of this is probably the Windows platform as a whole. Among the best arguments for loose coupling comes from Steve McConnell's 1990s classic Code Complete (ironically, McConnell was a Microsoft developer). I strongly recommend you read the relevant sections on tight vs. loose coupling.

Binary logs (and binary file formats in general) preclude use of alternative tools. The Windows Registry (again from Microsoft) comes to mind. One of the better hacks of this I know of are Unix/Linux compatibility systems which treat the registry as a filesystem interface. This originated with UWIN (from Steve Korn of AT&T and Korn shell fame), and has since been adopted by Cygwin. The ability to grep the registry, process it with scripting tools (sed, awk, perl, etc.), and modify it (using specific commandline utilities offered for the purpose) makes dealing with that particular hairball _slightly_ less annoying. The lack of self-documenting formats for registry values themselves (a trait shared by GNOME's gconf system) is another fatal flaw.

Even packaging formats are subject to this. Red Hat (gee ... aren't they involved with systemd....) designed a binary file format for RPM which requires specific tools to unpack. Joey Hess's 'alien' links to the RPM libraries for this purpose, and a set of Perl tools I'm aware of has to apply specific offsets (varying by RPM version) to extract data from the files. Contrast this with Debian's DEB format: tarballs packed in an ar archive. This can be unpacked with standard shell tools, or busybox.
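To make that concrete, unpacking a .deb needs nothing beyond ar and tar (filename is a placeholder; the payload may be data.tar.gz or data.tar.xz depending on the package):

    ar x package.deb      # yields debian-binary, control.tar.*, data.tar.*
    tar xf data.tar.gz    # unpack the payload with plain tar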

Putting together the concepts above (modularity, loose coupling, non-binary formats, standard tools), I've more than once rescued Debian systems which failed to pivot-root from initrd by breaking into the initrd shell, then unpacking and installing DEB packages using shell tools, facilitated by the use of an interactive shell, busybox for tools, and the DEB format. I'm thwarted on several levels from a similar recovery on Red Hat systems due to the use of a special and explicitly noninteractive shell in initrds (which is larger than Debian's 'dash' used for the same role), and the binary format of RPM packages. Working in cramped quarters and difficult situations, I can assure you of which system I'd prefer to be working with.

Systemd's violation of these principles is objectionable because it's not necessary (see OpenBSD's shim replacement for functionality, or uselessd, among others), gratuitous (decisions are being deliberately made), and, as your comment above illustrates, the very valid reasons for not doing just this are belittled.


LOL. Exactly. When you don't have any good response based on the merits of the argument, attack the presentation. This tactic is very familiar to those of us who call out sexism or racism: "You should be nicer when asking for your problems to be taken seriously."

But, ultimately, what happens in tech is as much about people and personalities as it is about actual technical merit. To delude ourselves otherwise is dangerous. When someone claims to be arguing from technical merit, look very closely at their history and probable motivations. There's always more there.


Thanks.


Exactly that. You know what's insanely great? When you watch that presentation video from 1978 at AT&T where Ken Thompson explains Unix and types some commands on his VT-52, and you think: all of this is still current knowledge, and all of his explanations still hold true. Just like celestial mechanics or the Pythagorean theorem. We are heirs of this ancient wisdom and this is friggin good, this is culture.


And systemd is changing precisely none of that.

Nothing about systemd removes the basic unix command line. Because he's most definitely not explaining the init system, which wouldn't have been the same from year to year then, or even similar decade to decade.


Systemd does touch numerous parts of Unix as it existed in 1978: logging, authentication, and devices come to mind. But much of what it's interacting with came along afterward: networking, far more services than existed at the time, a much more complex security scope, and more.

But that's still a good 25-30 years of work, experience, practices, and smoothed-out rough edges that will be poured down the drain.

Systemd also fundamentally changes the control locus of key features within Linux, and how applications, the kernel, and the OS as a whole are constructed and constrained. It puts all of that under the control of a small group with highly evident disdain for any "outside" concerns (in quotes as these are concerns of the larger Linux community, and thus most decidedly inside that group), contempt, and a plays-poorly-with-others attitude.

I'm not impressed.

Nor with your comment, FWIW.


Authentication is done with PAM and Kerberos these days - Kerberos is late 1980s, PAM came along in the 90s. Unix evolves and has continued to do so since its inception. udev certainly changed how we do devices.

The rest of your comment is fear mongering which could be applied to any group of core devs on any OSS project in existence. After all who controls Debian and security defaults? Do YOU trust them?


You're missing the point: there was no networking (outside of UUCP and dial-up connections) in 1978 Unix, so there were large classes of functionality since added which simply didn't exist.

What 1978 Unix did have was security and authentication. The OS was multi-user from the very beginning -- hence the pun in the name: a uniplexed operating system (Dennis and Ken created a two-user OS to play Space Travel).

As Bruce Perens recently discussed in a set of comments at LWN, the first thing he did as DPL of Debian was decentralize the management of Debian packaging. He recommends a very similar process for Systemd. The Systemd proponents in that discussion aren't particularly taken with the idea.

http://lwn.net/Articles/621022/

It's not a matter of fear mongering when the stated goals and practices of Systemd are to intentionally break compatibility with other Unixen, to reject compatibility patches, and to provide "choice" in the form of allowing users the option of any Linux distro on which they can run systemd:

http://imgur.com/r/linux/Is9vjRJ

As Jon Corbet noted at LWN in his Grumpy Editor post on the topic, it would greatly behoove systemd leadership and proponents to demonstrate a modicum of gracious victory.

As for Debian's governance, that process has been more than slightly troubled of late, with at least four key departures (Joey Hess, Ian Jackson, Russ Allbery, and Tollef Fog Heen) in the past couple of weeks alone. The cabal question was raised by former DPL Bruce Perens in the LWN post linked above. And, frankly, no, I haven't been happy with the recent directions of Debian's Technical Committee. Joey Hess's resignation (as well as those of Ian and Russ) calls into question more than just the specific decisions, but the process as a whole.

Your attempts to smear my own comments, which are based on actual events, facts, and the highly considered views of those with deep and broad experience in the field, are, I'm really sorry to say, far too typical of what I see from systemd proponents (the attacks on Perens in the LWN thread strike a pretty similar tenor).

Something is sick in this process. That more than anything is what's bothering me about it, though I've also grave doubts over the technical direction.


It's an interesting take, but that's not really how software works. Look at Plan9. It isn't POSIX compliant, but it does a lot of things much better than traditional Unix (or nowadays Linux, for that matter). Traditional Unix is not the philosopher's stone. There are plenty of good things about it, but it also comes with a number of dubious design decisions and what is now irrelevant cruft (why are we living with code replicating the behaviour of obsolete hardware in our terminal emulators?). It's not so much the actual implementation that is important; it is the "good parts" of its philosophy that we need to keep.


Plan9 is actually a really good example to bring up, for any number of reasons. I have to admit that I've never used it, though I've read bits about it. There are definitely some ideas in there that I'd like to play with and experience.

The most important elements to consider about Plan9 are these:

1. Plan9 wasn't Unix (nor was it Linux). It was its own OS, it was absolutely informed by Unix, and tried to learn from mistakes practiced in Unix. Because it wasn't Unix it provided for an independent test bed in which these ideas could be explored without disrupting a large established installed base and user community. And that is a key benefit of branched development. All of these I consider positives of Plan9.

2. It was hampered by an overbearing corporate control and licensing model. It was an ugly stepchild of AT&T's, under a proprietary license. The fact that it was under development kept it from being widely deployed (among other factors), the fact that it had a restricted license meant that other possible collaborators couldn't get involved.

When Linux emerged in the early 1990s, it had a lot of problems -- it was far from the best or most obvious Unix alternative out there (look up ESR's PC Unix guides from that era). But in a world of large proprietary Unices priced far out of the hobbyist's range, a handful of small PC ports of varying quality, and BSD embroiled in its lawsuit with AT&T (speaking of Plan9), Linux was unencumbered, free, and (pretty quickly) available under the GPL. That gave it the critical mass to develop. As with Plan9, it was its own OS, providing a testbed environment for development, but also allowing stable cuts to be made for use in specific deployments as it reached sufficient states of readiness.

Which is to say: the community and development dynamics mattered a lot.

I'm seeing a far more troubled path for Systemd in this regard.

Also of note: in the Debian init system debate, a specific concern raised against upstart, one of the init alternatives, was its own requirement of a developer license grant to Canonical, which was seen as a strong demerit against upstart. As with Plan9, exercising too much proprietary control may well have cost Canonical critical votes in the Debian decision.


It seems that folks outside of Red Hat do contribute to systemd, if that's your concern. What I could imagine is that some projects under the systemd umbrella will live an independent life, once things stabilize a bit.

I must admit the ever-growing scope of systemd is starting to concern me somewhat (though I've been running it with satisfaction more or less since it became available in Debian experimental).


What can I say, some people in 2014 like to live in 1978.

It was fun for a while, but I grew out of it.


You completely missed the point. It's not about living as in 1978, it's about _not throwing away_ accumulated knowledge. What use is my CP/M knowledge nowadays? None at all. What use is the knowledge I could have of '78 Bourne shell, pipes, signals, vi, ed, awk, grep, man? Not only useful, but still of daily use.


Cars from last century still take me from A to B. It doesn't mean I want to drive one.


New cars have almost exactly the same interface as cars from the 50s or even older ones. People who learned to drive at any time since WWII can drive any brand new car, and nobody has seriously proposed that we switch to a joystick or a brain interface.

Similarly, though the underlying hardware and code share basically nothing with Unixen of yore, old knowledge is still useful on modern Linux. This commonality of interface is more important than inner workings.

By the way, the most expensive cars by far (therefore arguably the most desirable) are old to very old. A Ferrari 250 GTO is way more valuable than any new car. IIRC the most expensive car ever is a 1929 Bugatti Royale, and you can even drive it.


I don't accept that being irritated at 'radical change' (I'm not sure exactly how this is so radical; it's an init system, not a kernel) because you're losing domain knowledge is a good reason. That would imply radical change is necessarily bad and should be avoided everywhere. Carriages to cars, ice houses to refrigeration, bow and arrow to black powder. Every single one of these radical changes required people to learn something vastly different. With your reasoning we should've waited longer for some intermediate technology to smooth the learning curve. People were getting really good at taking care of horses, now they need auto mechanics? Building an ice house was becoming a science, how do I fix leaking refrigerant? My aim and dexterity with a bow is second to none, but now we're using guns? Now your first thought might be that all of those things are different than server admin, but are they really? I would suggest the only thing different for server admins is that it's an entirely less radical change than all of these other technologies.

As for my comment being unwarranted: sysadmin'ing requires learning new tech. If there is an improvement on a tech such that it has mass adoption, learn the tech. It's your job. If you don't like it, change jobs. I'm not saying you should shut up and put up. However, we're far past the stage of valued input and people are still complaining. The decisions that are going to be made concerning systemd adoption have pretty much been made. Yet here I am, reading yet again how systemd was the wrong choice, even though rigorous debate was had and core teams decided it was the best decision. Even though this was the biggest drama piece since that blogger blasted Linus for being rude. Here we are with 'radical change' in systemd.


I understand your strong opinion, and since I am no sysadmin I have no quarrel with the technical arguments. But I am not sure I totally agree with your characterization of the slow pace of change at Apple or the wonderful state of Unix/Linux. Aqua was quite a break from the previous GUI, and Apple at one point changed the whole stack, from computer architecture to OS to graphics library. I don't know a more radical change than that for a software company. As for the Linux graphics environment, I can only say that replacing X with Wayland is not evolutionary, and it cannot come soon enough. Anyway, hopefully things will quiet down for a while and we can compare and contrast alternatives in the real world.


Also, to be clear, I'm not accusing Apple of failing to innovate elsewhere in its product chain. It clearly has. Since 1999: the iBook, MacBook Pro, Air, and a few iterations of the iMac, just in form factors. There's been a lot of under-the-hood stuff going on as well.

But where the user interacts with the system, things have been remarkably stable. Even the relatively minor changes which have been presented have been covered with the usual Apple levels of obsession -- skeuomorphic vs. flat designs, etc., ad nauseum.

Again the point being: screw with how things are visually and how users interact with the system, and you're going to create huge usability costs with little to show for it.


I'm not saying that the System 9 (I think -- I'm not fully up on my MacOS nomenclature) to Aqua break wasn't big. It was.

BUT IT WAS THE FIRST SUCH BREAK IN 15 YEARS OF THE GUI, AND IT'S BEEN THE ONLY MAJOR BREAK IN THE PAST 15 YEARS.

I'm also not saying that Aqua hasn't changed at all. It has, the most notable addition I'm aware of being virtual desktops (something NeXTSTEP had in the 1980s). But other than some minor cosmetic changes, and largely invisible-to-the-user under-the-hood updates, the visible UI has NOT changed appreciably.

Contrast that with the disruption that's prevailed in the Microsoft Windows and Linux spaces from 1999 to present. We've gone from the Win98 UI to the candy-cane XP styling to Metro in Windows, and through at least three generations each of KDE and GNOME on Linux, plus a few other desktops which have waxed and waned in popularity.

I've continued to use WindowMaker, and after 17 years, it is, hands down, the one GUI metaphor I've had the longest experience with of any. It's been exceptionally stable, with very few changes. Even minor ones are quite jarring to me, which is somewhat odd to reflect on.

X11 and/or replacements is a whole 'nother discussion, but I'll simply note that the network transparency of X has been hugely underappreciated by many who've sought to upend it (I don't know what the status of Wayland is in this regard).


> If they feel like learning systemd is a chore, they might be in the wrong business.

I think IT managers would prefer it if they didn't have to spend time and money re-training their sysadmins or hiring/firing them to ensure their staff has the skills to use the $NEW_SHINY from $CORPORATE_VENDOR. Skill transference is a boon for customers (see also: "Stop breaking the UI!").


> it escapes much of the ugliness and the poor permission model of Unix, but it does so by handing virtually everything to the app.

If you want to see the ultimate extent of this, look at the Wii and Wii U. Each game ships with an "IOS": effectively an OS kernel+initrd update package. Every game boots to the newest IOS available, so if one game updates to IOS v6653, then another game that only shipped with IOS v6652 will find the newer version on disk and use it.

However, a game's IOS requirement doesn't just have a version; it also has a slot. Each console has space for 256 individual copies of IOS, which are each independently versioned. So if two games both use IOS[58], then the game providing v6653 will overwrite v6652 on disk, and then the game providing v6652 will boot into v6653. But if one game is providing a version of IOS[58], and the other is providing a version of IOS[61], then their effects on one-another are isolated.

You can think of it a bit like the IOS codebase having 256 branches, and each piece of software being able to specify which branch of the kernel it was developed on. It gets the newest kernel released on that branch.

This allows a sort of "move fast and break things" approach to kernel development, where a kernel can be hacked to support new software in a way that breaks old software: you just stick your modified kernel into an as-yet-unused IOS slot, and old software will have nothing to worry about. This approach has resulted in my own (pretty unused) Wii U having ~73 different IOS slots populated with kernels.

Interestingly, if you think about it, this is pretty much a continuation of what Nintendo was allowing developers to do before: shipping random collections of chips in their own cartridges that DMA to the console, effectively creating their own extended console to run upon. Allowing your software to ship its own kernel is basically the software equivalent.


This is how the Wii works, but the Wii U has a more traditional kernel model. This is required by the system's multitasking abilities (running "mini" apps like the browser or download manager alongside a normal app/game). The only time the Wii U uses the IOS model is when it boots into vWii mode to run Wii apps, which also disables the new features of the Wii U software.


I completely agree. Having played with Docker on CoreOS for some weeks now, I see that it will push a much bigger and different change than systemd, which on my Arch box was just an update with no real problems; I just had to learn some new commands.

Docker though... man, how different it is and how clean it makes my system feel. I do feel that Docker will move towards some kind of Docker-optimized minimal base images that are not Debian or Ubuntu or whatever; those are just a stage so you feel some familiarity.

CoreOS meanwhile - who will ever touch its init system except to auto-start containers? Which will be done by some nice tool that hides systemd, I guess.

OK, my post does focus on the server side of course.


Well, here's the basic deal. If we're talking about common servers, common desktops, etc., then systemd is an excellent replacement. It covers the base of users quite well.

But lets say you are building a highly specialized application. You are going to be making quite a few customizations which are far more manageable through a shell scripting environment than by customizing a bunch of binaries.

I assume that Red Hat is going to cover a lot of the bases for most users out there. But for those of us in highly customized environments it's going to suck.


The status quo for such projects is to use Busybox, and I reckon that will continue for projects where systemd etc. is too much.


There are other options, in particular the Musl libc-based distros like Alpine Linux and Sabotage (where you can use busybox but don't have to). They also feel much more like the traditional BSDs; Musl libc and pkgsrc are very close to a BSD...


Systemd is too much, but often busybox is not enough. Plus if everything starts conforming to systemd, busybox will have to become like systemd to stay compatible.


I doubt you want to run Gnome on a system where you have to use busybox.

Also, if busybox is not enough, a minimal systemd system will still be leaner and faster than the equivalent SysV system.

http://events.linuxfoundation.org/sites/events/files/slides/...


It's not just GNOME; an embedded device that acts as a USB host needs something like udev, and as I recall, systemd has swallowed the hardware hotplug notification system, or was working on it. The most concerning thing, though, is systemd's success in getting kernel patches merged whose only user is systemd and which only make sense for desktops. Linux is great because of its modularity; previously, if init didn't make sense for you, you could switch to OpenRC, Upstart, or others, without having to change the way you do logging, DHCP, hardware plug events, etc.


That's just speculation until you clarify. Can you be more specific?


Think of anything that isn't a web server, file server, database server, or desktop.


Jolla uses systemd on their phones and tablets right now.

IIRC it's also part of some soon-to-ship vehicle integrations, for in-car entertainment systems and mapping.


That doesn't make systemd right for the other hundreds/thousands of embedded Linux systems. If you recall my response elsewhere in the thread, "Company X uses it" is unconvincing.

Also, it's inevitable that if systemd and software expecting to use it take over more and more aspects of userland and the kernel, vendors will be left with no choice but to use it as well. So "more vendors are switching to systemd" is not a convincing argument either. I like to make my own decisions on the basis of modularity and replaceability (vendor lock-in has been a huge burden in other major projects not mentioned in my online persona), not popularity.


The person was asking for an example, that is all. I can't even speak to whether the Jolla devices are any good, only that it has been done and is in shipping hardware right now, not some time in the future.


http://thorstenball.com/blog/2014/11/20/unicorn-unix-magic-t...

is also a current story on HN.

Just an example of how powerful that simple '70s Unix is. Allows features that appear "magical" to Thorsten, anyway.
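For anyone who hasn't read the post: as I remember it, most of the "magic" is plain fork(2). Unicorn loads the app once in a master process and forks workers that inherit the loaded state. A stripped-down sketch in C (worker count is arbitrary, error handling omitted):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WORKERS 4   /* arbitrary */

    int main(void) {
        /* Imagine the expensive part (loading the whole application)
         * happens here, once, in the master process. */
        printf("master %d: app loaded\n", (int)getpid());

        for (int i = 0; i < WORKERS; i++) {
            if (fork() == 0) {
                /* Each worker starts life as a copy of the fully loaded
                 * master: the preload "magic" in a nutshell. */
                printf("worker %d: serving requests\n", (int)getpid());
                _exit(0);
            }
        }
        while (wait(NULL) > 0)   /* master reaps its workers */
            ;
        return 0;
    }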

Windows? Wasn't really even an option... until 20 years later. And, of course, Unix really isn't that good, either. But, before you ignore it, please come to feature parity, at least.


> Windows? Wasn't really even an option... until 20 years later.

Which 'Windows'? Before Windows 2000, there was Windows and Windows NT, the former being more or less just a shell running on top of DOS.


> Unix won because the only commercially viable and well supported alternative was Windows,

Unix didn't beat Windows. Unix beat VMS and LISP machines and AS/400 and various other minicomputer operating systems. In fact, if we're talking about mainline commercial Unixes, NT started beating the shit out of them in the late 90s - if Unix lovers hadn't had the free ixen (Linux, BSDs) to fall back on it would be a sad state of affairs indeed.


>AS/400

Hey now, I'll have you know AS/400 is still alive and going in my workplace! We also have an entire position just for its programming...


> We're getting more monolithic and coarse-grained.

At the same time we are pushing more heterogeneous software stacks to production and configuring more specific dependencies for our applications.

It almost seems like you're using cross-platform as a pejorative. ;)


Yeah, we need to go back to when distros A and B had totally different versions of everything, since it was so much work to get things working.

Now we are close to having an OS where you can seriously just expect anything "Linux" to just run. Bad, I guess, to some :P


Seriously, once the systemd convergence is over, I can start advocating Ubuntu on workstations everywhere because it will finally have commonality with server infrastructure. The last frontier after that is package format convergence, and Lennart has said repeatedly he intends to use the systemd monoculture to push a common package format, which is a really good thing for me.

Right now I have most clients running OpenSUSE, just because I cannot be bothered to fuck with Upstart anymore. Once systemd is in place, the fact that zypper is much nicer than apt doesn't make up for the incredible market-size difference between SUSE and Debian and its children.


>The last frontier after that is package format convergence, and Lennart has said repeatedly he intends to use the systemd monoculture to push a common package format, which is a really good thing for me.

Great, so now instead of adopting a package system with a solid theoretical foundation like Nix or Guix, we're going to dump all dependencies into fat binaries and more or less end up with the solution the NeXT people came up with in the 90s. Such progress.

EDIT:

Not to mention that Lennart's proposed package system[1] would depend on btrfs-specific features, adding even more code coupling.

[1]http://0pointer.net/blog/revisiting-how-we-put-together-linu...


I was so sure it would happen... https://news.ycombinator.com/item?id=8203859 - called it just before the blog post. I wish I was wrong.


I also run OpenSUSE, and I think zypper is best in class and the OpenSUSE Build Service is the killer app for SUSE. I don't know why these aren't a HUGE seller of SUSE servers.

Compared with the OpenSUSE Build Service, what does a Debian server get you? Just wondering.


And why not RHEL/CentOS and/or Fedora? I'm sure you have your reasons, but I find it odd you didn't even bother to mention it, when it's a rather large part of the market.


Software availability and versioning in the Red Hat ecosystem sucks. Either you are using Fedora, where software is usually just frozen bleeding edge (much like Manjaro), where breakage does happen and cannot be accepted in production, or you are running upwards of five-year-old versions of software.

The Ubuntu LTS cycle is just an optimal compromise in my book. You even get Debian Testing as a good rolling release, Debian Stable as a great server release, Ubuntu Server as an enterprise option, and they all (soon) will be using a common core.

For now I advocate the SUSEs, but while they have been stable, their general obscurity, the dwindling user base, and the fact that Novell (I know they have since sold SUSE) backed out of maintaining OpenSUSE directly mean I can't be confident in their future. You should not underestimate the Ubuntu mindshare, because it means "Linux" software is often Ubuntu first, repackaged by hobbyists for other distros second.


> One thing I've realized about the Linux community through all this systemd flame warring is how unbelievably conservative a large subsection of it is. There's this huge so-called "neckbeard" continent that views anything architecturally beyond the 1980s as a huge affront to Unix.

Fully agree. It seems some people are quite happy with a few xterms in the X Window System replicating a twm user experience, stuck in the past.

I would also add Oberon, Active Oberon, Singularity, Verve and the current unikernel/library OS research.

> OS I still feel is hiding behind the JVM

Android kind of got us there. Now with Java being compiled to native code, maybe other C++ layers might be replaced in future versions, given how the Android team looks at the NDK.

All in all, I want the Xerox PARC and Douglas Engelbart's visions, not the AT&T one.


>single parent hierarchy for namespaces

Predictably, all the blame is laid at systemd's feet.

The current churn is happening because all of Linux's core developers (kernel and user space) want that change... to push the envelope.

For example, the current changes to cgroup namespaces are happening because the kernel is mandating that the current cgroup access mechanism be deprecated. They want a single writer to cgroups. Systemd is in the unfortunate position of complying with that request. Guess what? Soon enough, so will Upstart.

Again, with kdbus, the person who made the push is not "evil" Lennart but Kay Sievers, a long-time maintainer of udev.

Systemd is nice. Don't be afraid.

http://www.lambdacurry.com/systemd-nice-dont-afraid/


No. Rebuilding the entire userspace set of services to be a systemd cluster is not nice. It's essentially redoing the traditional Linux approach, which has worked relatively well for years. There are a number of things that could be split out and made more modular; cf. uselessd for a more in-depth analysis.

To be clear, I'm not claiming that SysV init is The Best Way. Shell scripts are not the Happiest Place. But I am claiming that systemd is a crummy and overbearing replacement.


> It's essentially redoing the traditional Linux approach

It seems like that's part of their mission statement, given comments like this: "Some day, we will have turned the old crap into a real operating system. :)" -- Kay Sievers (https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8)


Yeah, and they seem to be using Windows as their model of what a "real operating system" looks like.


Both Windows and OS X have a unified management mechanism.

In fact, OS X's launchd was a direct inspiration for systemd because of how nicely it works there. I've wanted launchd on servers for so many years.


OS X is not an appropriate choice for a server, and Windows has a limited reputation in my mind since they put the GUI in kernel space.

We should not emulate the things we've overtaken and hold a higher server market share than. Something we've been doing before must be right, and I believe it's the ability to be dynamic and modular.


Windows is actually used on servers quite a bit, but that's beside the point.

Service management has been a problem on (Linux) servers for a long time. Just because launchd originates on a desktop doesn't mean it's not a good idea.


If there were a couple of competing alternatives to manage your cgroups, sysadmins might not be so peeved. We're used to installing an alternative syslog daemon or cron daemon rather easily. But it's systemd or the highway, in the 3 or 4 most popular distros.

As for udev, Linus has had multiple serious complaints about udev maintainership since Greg KH passed it to Kay. Don't you recall the async firmware loading issue...


Here's a thought: try remembering you don't speak for "sysadmins"; you speak for yourself.


I think he speaks for sysadmins. If you are using cgroups at the moment, you can write scripts for them. It's a mounted filesystem. The change forces you to use systemd for cgroups as only systemd is able to write to cgroups. The argument is: If you don't like systemd implement an alternative that does this for you. Same for kdbus and udev, netlink...
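To illustrate the parent's point about cgroups being just a mounted filesystem today: under cgroup v1 you can create a group, set a limit, and enroll a process with nothing but a mkdir and two file writes. A sketch in C (assumes the memory controller is mounted at the usual path; "mygroup" is an invented name, and you'd need root):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch only: cgroup v1, memory controller assumed to be mounted
     * at /sys/fs/cgroup/memory. "mygroup" is a made-up group name. */
    int main(void) {
        /* Creating a directory creates the cgroup. */
        mkdir("/sys/fs/cgroup/memory/mygroup", 0755);

        /* Set a 256 MiB memory limit by writing a file... */
        FILE *f = fopen("/sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes", "w");
        if (!f) { perror("fopen"); return 1; }
        fprintf(f, "%lld", 256LL * 1024 * 1024);
        fclose(f);

        /* ...and move this process into the group by writing its PID. */
        f = fopen("/sys/fs/cgroup/memory/mygroup/tasks", "w");
        if (!f) { perror("fopen"); return 1; }
        fprintf(f, "%d", (int)getpid());
        fclose(f);
        return 0;
    }

The single-writer model being discussed means direct writes like these would instead have to go through a broker such as systemd (or cgmanager, linked below).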

The article is right: it's not Linux as we know it anymore, for better or worse.


That is the change that the kernel wants, not systemd.

They want to prevent direct access to Cgroups, other than through a single writer. This change is happening regardless of whether you want systemd or not.


You are right. I remembered reading an article from 2013 that gave the impression that the changes were related to systemd, and then it shed a different light on the issues (https://lwn.net/Articles/555922/), but it looks like the features are mostly back and in better shape: http://lwn.net/Articles/601840/


He doesn't speak for sysadmins. I'm a sysadmin, and I love Systemd.

He may speak for some subset of sysadmins, but he certainly does not speak for us all.


> The change forces you to use systemd for cgroups as only systemd is able to write to cgroups. The argument is: If you don't like systemd implement an alternative that does this for you.

And from the looks of it, this has been done: https://cgmanager.linuxcontainers.org/ as reported at http://lwn.net/Articles/618411/


At best he might speak for some sysadmins. His position does not reflect that of all sysadmins, so it's wrong to pretend it does.


I'm pretty sure it's Greg KH who's running kdbus these days [0]. At least, he's the one who submitted the patch to lkml [1].

[0] https://github.com/gregkh/kdbus [1] https://lkml.org/lkml/2014/10/29/854


You mean this Kay Sievers? http://www.theregister.co.uk/2014/04/05/torvalds_sievers_dus... - He's basically Lennart's buddy at Red Hat; at least that is my impression from far away.

systemd may be nice, but it's coming from Red Hat, and cgroups are being changed because the systemd folks wanted it that way, as far as I followed that debate...


Yes, we are veering dangerously close to Godwin's Law: not only are Lennart and systemd evil, but so is everyone who agreed with or contributed to them.


Sievers appears to be an unreasonable dick. Gundersen announces to the world that systemd's DHCP client is pretty damn cool. In the same post, he puts out a call for interested volunteers.

Ted Lemon (the author and maintainer of ISC DHCP from its inception to 2003 [0]) asks for the location of the project's source repo. Sievers replies with a LMGTFY link that doesn't even answer Lemon's question. Lemon politely criticizes Sievers for his rude and unhelpful answer. Sievers fails to even apologize.[1]

Both Sievers and Poettering have pretty serious attitude issues. It's one thing to lambast a peer who frequently fails to meet the potential that they've demonstrated in the past. It's entirely another thing to try to score social points with your callous indifference and blinkered bullheadedness.

[0] https://www.isc.org/downloads/dhcp/

[1] Check the first few comments of: https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8


What you've just mentioned here is what scares me most about systemd. Not its tight coupling, not its bugs, not its philosophy - all of these things are arguable and in most cases fixable.

The project being run by people who hold unreasonable and downright odious views and who act like, frankly, utter asshats is a much more serious problem.

The Kay/Linus debacle is something you can expect to see more of from these fine folks going forward. Mark my words. Ask yourself if you want software developed this way running your OS.


I am not sure if you're referring to this, but I have seen so many instances where an individual joins a community and then systematically tears it apart by calmly and coolly promoting ideas that ~51% kind of like and ~49% absolutely hate, through a combination of personality cult and back-room coalition building. I have seen this happen in forums, IRC channels, RPG groups, businesses. It is a particularly insidious type of toxic personality because you can't fix the problem by excising them without alienating and losing a large chunk of your community.


Been there and done that.. what's a community to do in such a case?


It's more like everyone knows Sievers is under Poettering's umbrella ever since the 'debug' kernel flag debacle wherein Kay was banned from contributing to the kernel, and then instead of addressing the issue himself, Poettering came out and made that infamous "the kernel is just an implementation detail" blog post to defend him... https://plus.google.com/+LennartPoetteringTheOneAndOnly/post...


No need for personal attacks. I just think the article has some merit, and your comment gives the impression that these people are not connected. I just wanted to point that out.


No, cgroups are being changed because the kernel maintainers want to deprecate some stuff and systemd is in the best position to provide the new services.


Look at how many people actually contribute to systemd. It's far more than just Red Hat folks.


And if you run gitstats on systemd, you'll see that just 10 people are responsible for over 90% of the code.


But the patches that didn't fit the roadmap dictated by Red Hat have been rejected.


I'm quite happy to be a part of the development team for GNU Guix, a distro that is not using systemd. I'm not a systemd hater, but it's definitely not for me and I'm not thrilled with the direction that development is going. It's a shame that sysvinit and friends are so bad that using systemd is the best option we have right now. Maybe GNU dmd will be able to stand up to it someday.


> I'm quite happy to be a part of the development team for GNU Guix, a distro that is not using systemd.

Thank you for doing this :) I love systemd myself, but I still think it's important to have alternatives available; also it makes me very happy to see somebody creating their own choice instead of tearing down other people's choices :D


Thank you for doing this.

That being said, it's a little too esoteric for my tastes (among other things: "if you are looking for a stable production system that respects your freedom as a computer user, a good solution at this point is to consider one of more established GNU/Linux distributions.").

Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI? (My current laptop, unfortunately, has UEFI. It's a royal pain, but oh well.)


>it's a little too esoteric for my tastes

Yes, we're still in alpha. Not ready for prime time yet.

>Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI?

Gentoo? I use Debian most of the time, which of course uses systemd now.


> Gentoo doesn't officially support UEFI


Ah, didn't know that. Thanks!


> Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI?

Not addressing any of your other points: don't most laptops/computers that ship with UEFI allow you to set them to boot in "legacy BIOS" mode?

Even if you're currently UEFI-booting, I would be seriously surprised if UEFI support were a requirement for every OS your machine can boot.


I dual boot with Windows 8 for games. Windows 8 will work without UEFI, but I haven't found any information about how to downgrade an existing UEFI-based Windows 8 partition to legacy BIOS, or whether it is safe/feasible to do so; in particular, the non-existent bootloader. (Windows, as usual, seems to take the approach of "wipe + reformat", which is not exactly optimal.) If you have any ideas, feel free to let me know.

And regardless, this is a temporary solution.


I'm looking forward to switching to Guix as soon as it reaches beta. All this POSIX breakage and LGPL exploitation is making me double down on GNU.


Glad you are interested! If you are brave, you can try out the distro and report the issues you run into. It would be very helpful to us. BTW, I'm typing this from my standalone Guix machine. Eating my own dog food.

