That is, every time I've reported something is broken, wonky, doesn't work reliably, et cetera, I've been told, "Submit a patch," "Write some code," or, worse, "Implement it yourself."
Someone finally got fed up with the haphazard state of affairs in Linux-land. Fed up with the fragmented and sometimes many places you have to look for error logs. Fed up with the many files you have to edit to configure the network correctly (different on every major distribution). Fed up with the half dozen ways to configure X, where X is a common function to every modern operating system.
It seems Lennart has taken the advice and followed through, and distribution maintainers liked it. They liked the idea that someone was taking all this complicated work - this dirty, boring to write and maintain code - and making their lives easier. Why else would nearly every distribution be on board?
Systemd is offering a more compelling solution than anyone else, and if you don't like it, well, you should submit a patch, write some code, or implement it yourself.
I installed CentOS 7 on a machine last night that we're replacing CentOS 6 on and was poked in the face with timedatectl and dbus problems for an entire hour, some of which were intermittent. Debugging these issues is a horrific pain. I lost 4 hours on it. I've never lost that much time on a system function before. This is not what I expected and there is no way I could possibly introduce that to our production environment.
I think that might be why people are slightly sensitive to it.
Yes, you're exactly right, but replacing something with something less stable, more complicated, and more difficult to debug isn't rational or good engineering. I'm sure many people will be fed up with systemd much quicker than they ever were with what was already there.
Not impressed with a community which pushes this as stable, quality software. Voting with my feet: FreeBSD is being trialled instead. WhatsApp throwing a million dollars at it draws a lot of valuable attention and puts it in the business's mindset.
Choice is as much of a valuable aspect of open source too...
Pulseaudio was Lennart's previous project. It broke everything in linux sound for a while, everybody moaned and hated it and said it was the worst thing since the crucifixion of Christ.
Yet, name one problem you've had with sound on linux in the past year. There are very few. Pulseaudio now just works(tm) and is an unseen, unheard part of the plumbing.
If you remember what it was like messing with ALSA and (shudders) OSS before pulseaudio came along, you will agree that the current state of affairs is a million miles better. It used to be really difficult to get more than one application to be able to play sound at a time. I remember compiling sound drivers from source just to get them working. Configuring ALSA config files to get surround sound working was practically a black art. Creating manual scripts that unmute the sound card on every boot because the driver didn't initialize it properly.
With pulseaudio, I never have to worry about any of that and configuring surround sound takes me two clicks of the mouse.
Lennart did a fantastic job with pulseaudio, he took on a dirty problem that nobody else dared to touch and went through years of criticism to produce a really high quality solution that solved the linux audio problem so well that you don't hear complaints about it anymore.
In light of that, I trust him to do a good job with systemd. It'll be a couple of years of everyone moaning and bitching and whining about it, then one day it will have become a seamless part of the plumbing, everyone will take it for granted and wonder how they ever managed fighting with shell scripts and fragmented init systems before systemd came along.
It's ironic that Lennart Poettering is probably the most abused developer in the entire OSS ecosystem, yet he is one of the people contributing most to it. For our sake, I'm glad he has such a thick skin. If I was him I'd have quit this game long ago.
That's just it. Linux sound worked fine for me before Pulseaudio, and FreeBSD sound has always worked perfectly fine for me. In fact, FreeBSD solved sound mixing sooner via /dev/pcm virtualization (while Linux chose to create the Linux-only ALSA instead), and has always had lower observed latency.
Pulseaudio screwed up my audio so badly that for a year I was running the closed source OSSv4 binaries and manually recompiling all the audio libraries to use OSS instead of ALSA/Pulse.
It is not fantastic to push horribly broken code onto the entire Linux userbase while others frantically jump in to help patch and fix the trainwreck.
And we're doing the same thing again with systemd. Instead of having a few years where users can choose between systemd, sysvinit, openrc or upstart, while all of the major bugs are worked out, we're being forced immediately from sysvinit (Wheezy) to systemd (Jessie). I was on Lennart's treadmill with Pulse, I'm not getting on it again with systemd.
Now PulseAudio was released into the wild too soon by too many distros BUT it has fundamentally fixed what was HORRIBLE in Linux. (Previously a Sound Engineer and Recording Studio owner)
BUT I would say that Systemd is extremely stable and not broken. What people are complaining about is the philosophy aspect.
To be fair, I didn't say I never had Linux audio issues prior to Pulseaudio (whereas I did say that about FreeBSD.)
Back in '98, my SB16 ISA card would only output sound at 8-bit monaural under mikmod, and I could only play CD-audio with that passthrough cable between the CD-ROM drive and the sound card. Once I was able to get sound working well enough, the only way I was able to play MIDIs was through Timidity and Soundfont emulation. And until ALSA, there was obviously pain whenever two things would want to play sound at the same time. This of course was due to the OSSv3 author changing the license before introducing his own audio mixing, and all of those awful sound server daemons (esd et al) never really worked, since there were multiple daemons and each application wanted different daemons or just wanted to stab right at the OSSv3 ioctls.
But once ALSA was established and working, yes. Audio under Linux at that point worked just fine for me. Pulseaudio was a solution looking for a problem.
> (Previously a Sound Engineer and Recording Studio owner)
I won't claim to be either of these. I like to listen to music while I write code, I'll occasionally watch some movies or play some games, and I want Pidgin to make a chime when someone sends me a message.
In particular, I'm very sensitive to latency in gaming (emulation), but that's about the extent of what I need speaker sound output for.
> What people are complaining about is the philosophy aspect.
To me, the worst part is the backroom politics, the complete disregard for portability, and the lock-in effects of consuming other daemons and services and making other software dependent upon it.
However, I do also object to the design itself, as well as to the developers responsible for working on the project, and the attitude of disdain they present to the community at large.
The issue was that ALSA had HUGE latency; using it for anything in recording was just not doable! I had to buy a closed-source solution under Windows. Today I could easily do it in Linux.
/* A */ sample = (sample_a >> 1) + (sample_b >> 1); //halves the volume of both A and B
/* B */ sample = max(-32768, min(+32767, sample_a + sample_b)); //distorts (clips) when the sum overflows the 16-bit range
Playing this up as a bogeyman for not being in user-space is FUD, especially when video card drivers also run in kernel space, and are literally thousands upon thousands of times more complex and error-prone. And now the big push is to have kernel mode setting for video cards (even FreeBSD is doing this), which I believe to be a terrible direction to go in.
I have never in my entire life seen a system crash due to audio mixing, but I've personally experienced plenty of video card drivers causing kernel page faults.
If people were even remotely serious about the protection of kernel space (and I certainly wish they were), Minix would be more than a footnote in history. Neither Linux nor the BSDs make serious efforts at microkernel designs. Not even passive attempts to run non-critical device drivers under ring 1. Personally, I'm really rooting for Minix 3 and hope that it takes off more now that it's gained binary compatibility with NetBSD.
I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.
About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler... a quick duckduckgo search finds a short note about that on:
Also X servers have traditionally needed direct PCI bus access to get the hardware initialized, which means that a buggy X server can hang your PCI bus so the driver running in user space likely doesn't increase reliability in practice.
It's an interesting question to what extent the limited success of microkernel-based UNIX implementations is due to historical accidents and network effects, and to what extent due to actual technical limitations and the additional complexity of a microkernel architecture.
Okay, my apologies as well then. It was hard to get a read from just that one sentence with the word kernel emphasized.
> (Do audio devices support floating point formats nowadays?)
Natively, no. You can be lazy and do it anyway in software mixing though.
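For instance (a minimal sketch, names made up): a software mixer can promote each 16-bit sample to float, sum, and clamp only once on the way back to the integer format the hardware wants.

#include <stdint.h>

/* mix two 16-bit streams via float: no integer overflow mid-sum,
   and only a single clamp at the edge of the output range */
static int16_t mix2(int16_t a, int16_t b)
{
    float sum = (float)a + (float)b;
    if (sum > 32767.0f)  sum = 32767.0f;
    if (sum < -32768.0f) sum = -32768.0f;
    return (int16_t)sum;
}

That keeps the full volume of both streams (unlike variant A above) and only distorts at the extremes, like variant B.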
> I just find it amusing how a monolithic design of doing all audio stuff in the kernel is held up by some as an example of reliability and as superior to a more modular design that is more in line with the UNIX philosophy.
Certainly, it would be ideal if everything non-critical were in user space. But audio in the kernel is probably at the very bottom of the list. Audio mixing is maybe 0.0001% of the kernel code, and is some of the safest, simplest arithmetic code imaginable. It's like worrying about the one ant you saw on the counter when your entire house is infested with termites.
> About the KMS however, I've heard that DisplayPort link training has latency requirements that are difficult to meet in anything but a kernel interrupt handler
I don't know if that's true or not, but I am running a DisplayPort monitor (ZR30w) now without KMS, and it works fine. Obviously the video driver is still running in kernel mode, but at least it's a module outside of the kernel itself that runs after my system is booted.
What I'd really like to see is distros and vendors instead relying on UEFI GOP for boot-time mode setting.
> Also X servers have traditionally needed direct PCI bus access to get the hardware initialized
Well, compare it to audio. Eventually even a userland mixer will have to send the samples through some sort of hardware interface. But if your goal is stability, then it would be ideal to get as much code out of the kernel as possible.
> and to what extent due to actual technical limitations and additional complexity of a microkernel architecture.
Certainly nothing is ever perfect. There are so many potential problems with computers. Cosmic rays can flip bits in your RAM if you don't shell out an extra $500 for the premium CPU, mainboard and ECC RAM. Strong enough power surges (lightning) can burn through and destroy absolutely any running computing equipment. Hardware can literally fail and take down your system. Things can overheat, there can be design flaws in the silicon itself, etc.
So I look at it like OpenBSD looks at security. You want to stack all the protections you can. Mirror your drives, use ECC RAM, don't run anything in kernel space you don't have to, try and build as much redundancy and safety as you can into the system. It won't be perfect, but every bit will help increase uptime.
So again, sure, audio should preferably be in user space. Just, it's many thousands of times worse that video isn't even trying to do this, and is in fact going in the opposite direction to become more tightly coupled with the kernel.
However, with one small caveat: servers don't generally have sound cards so the impact of this was relatively low. There aren't that many desktop Linux users out there. I mean I'm a Unix guy at heart and I'm typing this on a Windows laptop. I've never used Linux on the desktop and probably never will.
Now servers do have init processes and we don't really want to spend the next 3-4 years being guinea pigs. I'm quite happy for the vendors to do this behind the scenes or offer it as an alternative but we've got an RHEL+CentOS release with systemd in it already and a Debian with systemd in it just around the corner. A pulseaudio situation, even for 6 months, will result in no small amount of chaos.
I do indeed remember times before even ALSA when you had to pay OSS for drivers for your turtle beach card etc. But that's in the distant past, not right now and of little relevance. Windows was fine on the desktop then as well and the sound worked fine out of the box.
Then you're not really a Unix guy at heart.
At home, all I run is Linux, including the laptop my non-geeky wife uses.
For me it was a hard choice. I knew she would object because it would be "different" and she's not really interested in learning a gazillion different computing systems, but on the flip side it meant it was simpler, quicker and less work for me to maintain the computers at home.
Once setup things just work, and ensuring everything (including flash and other vulnerability vectors) is up to date is one apt-get upgrade away.
I'm not sure that's a great vote of confidence for the road ahead of systemd (given that systemd presumably has a bit more to it than PulseAudio). To quote the article, "I do honestly believe this will end up being the start of a rocky period for Linux".
Ubuntu 14.04 will keep me happy for ~5 years, then I can take another look at what the current state of things are.
It is only since 14.04 that there's even a small chance that opting to use pulseaudio is the better choice.
Just trying to get Skype working (which uses pulseaudio) cost me 2 hours last week, which is not at all nice when you have a call starting in 5 minutes.
Pulseaudio still won't detect the headphone jack on my old intel board, and Skype on my newer Linux machines will routinely fuck up playback.
One also wonders how much of the PA cleanup was handled by people that weren't Lennart.
In particular for problems of getting Youtube (or any browser audio) to work while other apps use JACK directly.
Although on a recent new install it seemed to work without them as well.
One problem is I need to start/stop the qJackCtl thing every time my laptop comes back from sleep, to get sound working again. There must be a way to automate (or, preferably, fix) this, right? Anyone know?
But, also to be fair - like you, I maintain my own systems and do not overly depend on the teeming-mass-reality as a derivation of stability. My personal Linux DAW systems, running now for decades, have attained a level of productivity that I would at least hope is represented in the current niveau, vis a vis Popular Linux Distro designed for audio (e.g. pure:dyne, Arch Pro Audio, 64 Studio, UbuntuStudio, et al.) .. for the newcomer, it should of course 'all just work' from boot-up, which I hope is the case. It is for me, anyway: I've expunged pulseaudio from all of my machines, and make do with Jack. My studio uses 48-channels of digital audio, everything-is-a-file .. a working and functional DAW, thousands of plugins, about 12 MIDI devices (synthesizers/effects rack) and so on, and the best thing of all: all source code included. So, yeah .. ;)
EDIT: apropos qjackctl, yeah, apmd:
.. or some such similar thing.
A month ago, with Ubuntu 14.04: I paired a bluetooth speaker, but it would not send any audio to it (A2DP), with no indication as to how to diagnose or correct the issue.
It may be standard on some systems, but not, apparently, on debian (it's an "optional" package).
And that's part of the point: stuff that needn't be present shouldn't be. systemd's a whole 'nother ball of wax in that regard.
And yes, I'll even allow that Linux audio has been frustrating over the years. But in my case, problems going away had nothing to do with Lennart's work.
I've been with Ubuntu since 2007 and went through the PA transition. I agree it is so much better now. Changing audio sources is easy and faster than on Windows 7. By Ubuntu 12.04 this was stable for me. Like changing from speakers to headsets for a meeting, smooth with PA at least on Ubuntu. Until PA I never thought I'd see audio united on Linux.
On my Debian system youtube videos stop playing sound once in a while (video continues), though I suppose it's not pulseaudio's fault (so just a general sound problem).
You asked ;-) (I still agree with you that it got better than it once was)
In some sense systemd is more stable in that it's fixing some longstanding bugs with sysvinit, but of course it will have some bugs of its own. If you don't want to deal with that, you could skip a release.
There is a distinct lack of engineering prowess and quality control. It originates at the core GNU + kernel + freedesktop teams and waterfalls down through the distribution houses.
That's the problem and it's endemic within Linux.
Imagine you were an architect of buildings. Your day to day job is to design mundane strip malls and gas stations. You have building code on your side for much of the process. As long as you don't violate the regulation, you at worst can only make an inconvenient building, but not a dangerous one.
But imagine instead that you're building a large office building every six months, your clients demand you don't reuse any design principles for your future clients, and you not only lacked the building code, but also 1/4th of the heavy machine equipment, 1/2 of the tools, and 3/4ths of the raw materials. I don't just mean you don't have them in stock, I mean nobody has invented them yet. And of the ones that have been invented, we don't even know all of their material properties, to say nothing of which material properties we should be looking for. Will this particular bolt we are using with these particular cross beams hold up to the stresses placed on them? The answer is unknown, and in some cases unknowable.
It's really easy as a user to say, "they should have tested this more". While strictly true that more testing may have found your issues ahead of time (presuming the right tests were done), it is inefficient engineering to exhaustively test things. Even mechanical engineering bases a lot on statistical modeling, which will always, always have corner cases that don't match reality.
In the real world, people had to learn the hard way about things like lightning rods and sacrificial electrodes. They didn't come about from "testing during development". They came about from testing live, and seeing which buildings and boats did or did not burn down or sink. That's not bad engineering. That's just the nature of unknown problems.
What the general state of affairs shows is the following traits:
1) There is no thought and research going into the design of a piece of software. Ergo, we do not learn from past mistakes.
2) There are isolated individuals writing vast swathes of software which are trusted unconditionally. Ergo, we do not learn the benefit of multiple eyes on a problem, review and discussion.
3) We assume that software is correct from one person's viewpoint and opinion. Ergo, we do not test software properly nor cover those tests with objectives.
4) We work to deadlines, not quality objectives. Ergo, we trade quality for tolerance from others.
In this case someone came along and didn't think about the problem, didn't work with others, assumed they were unconditionally correct and chose tolerance over quality.
To use your analogy, they're now selling stainless steel lightning rods (poor conductivity), are the only vendor of them, are a vocal marketing front and houses are catching fire everywhere.
Or, more specifically, in one example: through the entire process, from the author down to the distributor, no one even noticed that loginctl doesn't work properly.
On your points:
#1 is patently false, to the point of being extremely insulting. You've lost all sympathy from me at this point. Go peddle your baseless opinions somewhere the audience doesn't know better.
#2 is also laughably false, as that is the entire freaking point of open source and often considered the greatest strength of Linux. You think that because your highly qualified opinion wasn't consulted before you had to spend a whole four hours (OMG) on a problem, no review or discussion was done?
#3 is false again, because software is tested. You use the word "properly", so I will sit here and wait for you to bestow upon us your great wisdom on what we could be doing better.
#4 is false on both presumptions that software is not built to quality standards instead of deadlines, or that other fields are not dictated by deadlines.
When I say tested properly, I mean tested completely. If you miss an entire functional unit of the software and a client reports it as broken, it's pretty obvious what the problem is.
Our senior software guys sat down for the other four hours and presented all our findings together and cumulatively said "we're not supporting that shit; we can't trust it".
Regarding #4, it's plain to see that RHEL was released with a broken systemd implementation due to a deadline...
I am still in the learning phase, but even I know that complete testing of any complex software is practically impossible.
So, how do you guarantee completeness in "proper" testing? I know you can't without redefining the word "completely". What's your definition?
Also see Impossibility of Complete Testing by Cem Kaner, co-founder of http://www.associationforsoftwaretesting.org/
When your system consists of functions "A, B, C, D", I'd expect to see test suites for "A, B, C, D". In this case there were test suites for "A, B, C". The client found D therefore the test suite was incomplete.
Now if a bit of A, B, C or D suites were missing that would be different and entirely expected.
As for testing--notice that when a lot of people here are reporting issues with systemd/pulseaudio, their reports are pretty much dismissed out of hand, or they're told "no, you've done something wrong".
For #2, a lot of times somebody with the right political position (say, Lennart at Redhat) or just the ability to shout louder and longer than anyone else will get something put in, regardless of technical advantage. Don't even try to claim otherwise.
1) Government contractors are so fun when their software is required to deal with certain parts of government.
Honest question: I've been using sysvinit for a very long time and I have no concept of what those bugs might be.
Assume server with lots of processes.
Service A starts and writes its PID to disk, let's say 123.
Lots of processes start and stop as the system goes along and does its work.
Service A crashes/stops working
PID 123 gets reused by a new process
SysAdmin comes and hits /etc/init.d/ServiceA restart
The shell script calls kill on the PID read from the stale pidfile (123), which now belongs to a totally different process, not at all related to ServiceA (sketched in C below).
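To make the race concrete, a minimal sketch in C (hypothetical pidfile path; this is what the restart logic boils down to):

#include <signal.h>
#include <stdio.h>

int main(void)
{
    long pid;
    FILE *f = fopen("/var/run/serviceA.pid", "r"); /* written when ServiceA started */

    if (!f || fscanf(f, "%ld", &pid) != 1)
        return 1;
    /* Nothing ties this number to ServiceA anymore: if ServiceA died
       and the kernel recycled PID 123, this signals an unrelated process. */
    kill((pid_t)pid, SIGTERM);
    return 0;
}

systemd sidesteps this by tracking services through cgroups instead of stored PIDs.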
Clean unmounting that doesn't depend on timeouts being set high enough.
Not starting a database before the filesystem with the database files is mounted.
I created a specialized FUSE filesystem to deal with this. Processes create PID files in it, but when they die, the filesystem automatically removes them.
The Readme is rather sparse; could you add an example of how to use it from an init shell script?
Although the Capsicum model (in FreeBSD, and slowly getting into Linux), where you can have file descriptors for processes, is another, different model.
This was solved decades ago with numbered init symlinks:
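For those who haven't seen it, the convention looks roughly like this (made-up names and numbers; real layouts vary by distro):

/etc/rc3.d/S10mountfs  -> ../init.d/mountfs
/etc/rc3.d/S20network  -> ../init.d/network
/etc/rc3.d/S90database -> ../init.d/database

init runs the S?? scripts in lexical order when entering the runlevel, so the mount script has at least been run before the database script starts.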
So I would say it has been hacked around for decades. Not cleanly solved. But I am not the best informed here, so please add more details about how numbered init symlinks guarantee the file system being there before a service is started.
It took a long time and a giant ecosystem to get where Linux is today at big enterprises. OSes are commodities in that space. They are not commodities in many other spaces though (e.g. startups, HPC, science, etc).
Whoever is pushing for this is an idiot then. Verisign, for example, has had 100% DNS uptime for the .net, .org, and .com root servers for ~15 years because of their mixed environments. In every one of their POPs they tend to have at least two racks of equipment with:
* 2 different brands of load balancers
* 2 different brands of firewalls
* 2 different brands of switches
* 2 different brands of servers
** servers are from different hardware generations
* 2 different OSes (Linux and FreeBSD)
* a choice from 3 different DNS server software
This is how you run a reliable global-scale service. Anyone who plays the "it's just easier if we all use ____" is in for a big surprise when their entire infrastructure is at risk due to one bug.
As long as a sufficient fraction of servers at a sufficient fraction of Verisign's clusters has an uncorrupted set of data and is able to serve responses, Verisign's TLD zones remain up.
Pretty much the only thing that can go wrong in that case, assuming you have safeguarded the integrity of the zone, is bugs in components outside their direct control.
It makes 100% sense for them to focus their efforts on ensuring diversity, because the class of problems diversity can solve makes up an unusually high percentage of their possible failure classes. The nature of the service also means that most of the problems diversity guards against would only take out some proportion of capacity, leaving a still-functional system. So the potential benefit of a heterogeneous setup is higher for them than for most, and the potential risks are lower.
For Google and Facebook, the systems are so much more complex that the tradeoffs are vastly different.
Which excludes the vast majority of functionality of Google/Facebook, and most other major web properties.
There are very few heterogeneous systems in the enterprise. That is an objective, but the main thing is that we deliver what we're paid to deliver by choosing an appropriate platform. We have Solaris, zSeries, Linux and Windows. We just got rid of AIX.
As for minor differences, FreeBSD has a lot of much bigger wins than people realise at first glimpse. The differences are far from minor. For example:
ZFS, dtrace, rctl, a scary good IP stack, virtio support, documentation that doesn't suck, a POSIX base, LLVM/clang, a MAC framework that doesn't suck, OpenBSM, CARP and a pile more. Oh plus an automated deployment story that is pretty tidy.
Sure we can replicate some of these on CentOS 7 for example with similar tech but the above are a million times more cohesive.
Unfortunately, when the first and second attempts differ even though identical (recorded) steps were performed, one has to ask: why, and can I trust it?
My rule of thumb is "search for the problem on Google. If nothing comes up, maybe something is wrong on my end".
Did you find any results or reported bugs similar to what you experienced?
Yes there were other mentions of it with notes to it being fixed in a later systemd drop, which we can't deploy because RH/CentOS don't ship it. I think one of our guys raised a case with RH but I was dragged off onto something else then.
I have my share of reservations about systemd (and PA), but thought that it might be worth pointing out that "known good" hardware A with software X, doesn't have to mean hardware A is all good, just that A has no bugs/errors not exposed when running X. So Y comes along (new kernel, drivers?) with entirely new code - and suddenly things behave erratically.
I don't have the error on my phone which I'm on at the moment but it threw a dbus error with no debug info.
I imagine 10 years ago you would be the person complaining that GCC segfaults randomly while compiling the Linux kernel, complaining that it's not "tested completely", while the segfault was actually caused by the CPU overheating (not cooled properly) and flaky memory (causing bits to flip).
Just because a problem is unusual, intermittent, or only affects one person doesn't mean it's not a regular old software bug. And in my experience, it almost always is. And once you do debug it, you often (but not always) understand why it was intermittent, under what conditions it happened, and why you were the only person that saw it.
Nope. Not that.
We don't buy crappy hardware or not test it.
Where I think the systemd-naysayers have a valid point is around the tight coupling that has been introduced, and is still being introduced, between systemd and various other components of a fully functional Linux system.
To take your "just submit a patch" example - say N years from now I'm unhappy with some aspect of how systemd works. I can submit a patch, or I can rewrite that whole component from scratch. However, it's entirely possible that the piece I'm unhappy with is so tightly coupled to the rest of systemd that I can't rewrite one component of it without rewriting the rest of systemd, or convincing the systemd maintainers to accept my rewrite and bake it in as the new "official" version of that component.
Where I think the criticism of systemd is valid is that the idea of modularity has taken a backseat, and the APIs between the different components of systemd haven't been very well-thought-out. The informal spec is "whatever systemd does today is correct", which of course destroys any sort of interoperability.
And by way of full disclosure, I'm an Arch user, and run systemd on 4 systems I use everyday - home desktop, home server, work desktop, work laptop. Whatever else I have to say about its design, I use it every day, and actually like the parts of it that I use. eg, the boot time for my desktop is stupidly fast, and if I want to know about some log message, I just run journalctl. I no longer care whether the foo daemon uses syslog, or writes to its own /var/log/foo.log that I should set up rotation for, or handles its own rotation as /var/log/foo/2014-11-20.log, and so on.
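For illustration, the kind of queries I mean (foo.service standing in for a real unit name):

journalctl -u foo.service --since yesterday
journalctl -b -p err

The first pulls recent logs for a single service; the second shows everything of priority err or worse from the current boot.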
And just to play devil's advocate with my own position - there's a certain point where tight coupling makes sense. Linux kernel modules, for example, are tightly coupled to the Linux kernel, and don't work unmodified when compiled against a *BSD or Solaris kernel.
Plus: This tight coupling did not exactly replace existing communication features. It created new ones. These are made use of.
Yes, systemd is bringing lots of new functionality. Under the hood - that is why sysadmins love or loathe it and users mostly don't care. That "tight integration" argument is mostly one that comes from people (please do not take offence, you're weighting it carefully indeed!) who bemoan that other userspace system infrastructure is left behind feature-wise. And those who love to argue about and against design decisions.
Sounds eerily like "Embrace Extend Extinguish" redux.
Don't get me wrong, I am aware systemd is a technically superior solution. But politically, it is a trainwreck.
Sure, but the coupling was contained. You could still run Gnome on any distro (or on non-linux), whichever way around your init scripts were.
It's in RedHat's interest for software that's currently portable to FreeBSD or especially to Solaris to become tied to Linux. This wouldn't be the first time RedHat has adopted anti-opensource methods out of fear of Oracle - compare their policy of deliberately obfuscating the history of their kernel source.
Submitting a patch, implies you agree with the general direction but need a bug fixed or a feature added.
Humm, I know this will disappoint you, but we are not particularly interested in merging patches supporting other libcs, if those are not compatible with glibc. We don't want the compatibility kludges in systemd, and if a libc which claims to be compatible with glibc actually is not, then this should really be fixed in the libc, not worked around
As for forking the whole thing, remember that logind was briefly liberated so it could be built as a standalone package; Lennart then went and did a big rewrite, so the next version was much more integrated with systemd. When he controls the internal APIs and can change them whenever he wants, a clone will have to be a total replacement right from the start, or it ends up perpetually having to catch up to changes that are introduced just to cause breakage.
Why should the Systemd team pay the overhead - in terms of complicating their code - to work around incompatibilities in another libc that will also affect portability of a lot of other Linux software?
We're not talking about asking for some new work to be done. We're not talking about any kind of change to how the project works.
This is about trivial changes, like #defining a function name, that aren't even included in the build unless you were using that libc. It is actually rather surprising behavior to see in a publicly-developed project. This kind of fix is so incredibly common that we've created tools such as "cmake" and "autoconf" to handle the common cases and make the #ifdef-ing easier.
I wish more projects would take this line.
Autoconf is the devil. It's a symptom of how broken Unix-y environments have been, and how people were willing to impose a massive maintenance cost on countless application code bases instead of either pushing their vendors to get things right, or agreeing on common compatibility layers.
In this particular case, mkstemp() is not a viable replacement for mkostemp(). A proper fix is to provide mkostemp() in uClibc, or to compile with a shim that provides it.
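For illustration, a minimal sketch of such a shim (hypothetical name; a real one would use a proper entropy source rather than rand()):

#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>

/* Build mkostemp() on open(2) so that extra flags like O_CLOEXEC are
   applied atomically, which is exactly what mkstemp()+fcntl() cannot do. */
static int my_mkostemp(char *template, int flags)
{
    static const char chars[] =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    size_t len = strlen(template);

    if (len < 6 || strcmp(template + len - 6, "XXXXXX") != 0) {
        errno = EINVAL;
        return -1;
    }
    for (int tries = 0; tries < 100; tries++) {
        for (int i = 0; i < 6; i++)
            template[len - 6 + i] = chars[rand() % (sizeof(chars) - 1)];
        int fd = open(template, O_RDWR | O_CREAT | O_EXCL | flags, 0600);
        if (fd >= 0 || errno != EEXIST)
            return fd;
    }
    errno = EEXIST;
    return -1;
}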
Arguing over whether including the shim in Systemd would be acceptable would be a different matter, but parts of the patches as presented were flat out broken.
And the Linux kernel is not starting to depend on systemd or the others. The Linux kernel is moving towards demanding a single cgroups writer, and at the moment Systemd is the main contender in that space.
That Systemd is depending on Linux is unsurprising, given that they stated from the outset exactly that they were unwilling to pay the price of trying to implement generic abstractions rather than taking full advantage of the capabilities Linux offers. You may of course disagree with that decision, but frankly, for a lot of us getting a better init system for the OS we use is more important than getting some idealised variation that the BSD's could use too.
> an architecture that promotes this very lock-in to begin with.
The "architecture that promotes this very lock-in" in this case is "provide functionality that people want so badly they're prepared to introduce dependencies on systemd".
At some point enough is enough, and sub-optimal advances still end up getting adopted because the alternatives are worse. Systemd falls squarely in that category: I agree it'd be nicer if it was presented and introduced in nice small digestible separate chunks with well defined, standardised APIs so that people could be confident in the ability to replace the various pieces. But if the alternative is remaining with the alternatives? I'll pick Systemd, warts and all.
Looking at posts from the Gnome people, the original intent appears to have been to provide a narrow logind shim exactly to make it easier to replace logind/systemd with something else. If someone feels strongly enough to come up with a viable shim or an alternative API that can talk to both systemd and other systems reliably, then I'd expect Gnome to be all over that exactly because they will otherwise have the headache of how to continue to support other platforms.
The problem is that Gnome has already depended, for a long time, on expectations of user session management that ConsoleKit on top of other init systems has been unable to properly meet, so Gnome has in many scenarios been subtly broken for a long time.
As to logind, it may have been a better choice for the long term to do a separate implementation of the public and stable logind DBus API instead of trying to run the systemd-logind implementation without systemd as PID1, but supposedly whoever did the latter thought it was the best short-term choice.
Most being: this is not the issue the loudest voices say it is.
Second - while it will provide an alternative that helps frame the debate, this is not a minor undertaking. With every other distribution caving, maintaining a distribution that does not use Systemd will require a lot of work to keep all of the software out there working properly with whatever alternative init system it chooses to use.
This alternative distro is also going to have to deal with how to solve the init problem. We had some good options in play but I don't believe we'd found the best answer to the problem yet when Lennart came bowling through like a bull in a china shop. So any distribution effort is going to have to take on the role of choosing the best of breed alternative and make the effort to ensure it continues to develop and improve.
This isn't something you take on lightly.
Or go back to using windows for general use and software development, which is in fact what I've done. It's amazingly sad after many years of being a strong linux supporter, but this has killed linux for me. I see no point in continuing to use it.
Tailored as in platform-specific. For cross-platform stuff, it's not so good.
Color me surprised.
But that's not at all what Torvalds's quote means. He meant that, if FreeBSD had been available, he would have worked on contributing to that instead and improving it for daily use, rather than building Linux to be used for daily use. (The state of FreeBSD in this hypothetical world has no bearing on the state of FreeBSD today).
In terms of FreeBSD today, while it's possible to use it for daily use, it suffers from even worse driver issues than Linux does, and from all the problems of simply having a much smaller market share (both for contributing developers and users) than Linux does.
This may or may not be 'good enough' for OP's purposes, but it's disingenuous to suggest that Torvalds's hypothetical from the early 1990s implies that FreeBSD is a clean substitute for end-user Linux today.
No it does not. It has fewer driver issues by far. Because it does not have broken half-assed binary only drivers by obscure vendor X that don't actually work. Unsupported hardware is simply unsupported, rather than broken.
I've always found it interesting that Nvidia offers a more complete and stable BSD driver than its GNU/Linux counterpart. That said, AMD/ATI support is abysmal, and even Intel video is lacking compared to GNU/Linux.
> Unsupported hardware is simply unsupported, rather than broken.
That's a matter of interpretation. If FreeBSD doesn't support my hardware, it's the equivalent of being broken for me, given that I can't use it anyway. That said, I try to build or buy the most OS-agnostic workstations possible so my options are always open.
Not so sure about that, or maybe it depends on the situation. For instance, I've been running FBSD and Linux in VMs (specifically, Hyper-V/Win 8.1 on a Surface Pro 2).
After updating to FBSD 10.1, I decided to try the Lumina DE (from PC-BSD). I've been surprised at the performance of the GUI under the constrained memory and CPU availability. It's about as good as the host (Windows), albeit running minimally demanding applications.
OTOH Linux versions (SUSE, CentOS) have been much more sluggish and GUI usability much lower. I realize this is impressionistic and hardly a deep analysis. Nonetheless, I think it points out that it's risky to make assumptions when circumstances and system requirements are so tremendously variable.
Then you're abstracting away from the video hardware, and not getting the same results as you would on bare metal. The only thing really lacking in Intel video versus Nvidia is proper KMS support; Intel video on FreeBSD works generally well otherwise. The FreeBSD Nvidia-provided driver, while closed source and binary only, is more or less feature complete.
That's the point. People like to pretend linux has more hardware support, but mostly what it has is broken drivers for obscure buggy hardware that you can't actually use. The "well supported stable hardware that actually works" list is practically identical between them.
This is why choice is good, and lock-in is so bad.
As an aside, I've heard it theorized that part of the reason Microsoft tends to do massive GUI facelifts every few releases, is to keep the Windows/Office training industry going strong.
But wouldn't the path of least resistance be to switch to a project that does not have this "haphazard state of affairs"?
When I originally tried Linux I got fed up within _days_. It is the _relative_ lack of default "configuration" (that is decided by someone else) that makes me stay with FreeBSD and NetBSD. Of course, lack of default configuration is the antithesis of popular Linux distributions. Whenever I have to use one, I spend more time learning how to turn things off than I ever did learning how to turn things on.
The answer to the original question is, I think, "no", switching is probably not the path of least resistance for many Linux users. Because when the Linux user makes that switch, they immediately find that someone has not done everything for them.
And from what I have seen, observing the questions of Linux users who first try FreeBSD or NetBSD, they generally do not like that. It means they have to do some configuration of their own. And even if they are comfortable doing configuration, it means they have to learn things that are different from the "Linux way"; and they inevitably encounter shortcomings that are due to lack of developer resources (read: time).
In doing things for yourself you learn about how things work. The rc.d system that all BSD projects use is coherent and relatively easy to understand. For whatever that is worth.
This debate over systemd seems to cut to the core of the value of learning about how things work. The reader can draw their own conclusions.
Linux is only a kernel, and it should still be possible, and thus optional, to run that kernel with a basic init (or init alternative, e.g., one based on daemontools) and with userland utilities that do not need systemd.
The question I have is how difficult the popular Linux distribution folks are going to make that for their users to do.
And if they do make it difficult, it raises the question, "Why?"
This informal fallacy is based on the idea that everything in the world can be distilled to a single answer. The real answer is more complicated. For example:
Red Hat is on board because they pay its creator's salary. So they rely on an individual bias.
Debian is on board because they rely upon 'collective wisdom' and committees to make decisions. So they rely on the bias of group thinking.
Ubuntu is on board because Debian is on board. So they rely on the bias of the other.
Other distributions are using it because 'every other distribution is using it', or they're small enough that it doesn't cause conflicts for its use base, or because it's a GNOME dependency, or because it's just new technology.
To make someone think something is a good idea, show them someone else thinks it's a good idea. This is a fact of all human beings' thought processes. Decisions are not based on merit, or logic, or even a quorum; it is based on fallacies created by heuristics. There exists a heuristic in which the more an idea is adopted, the more other people think it is a good idea.
We imagine our thoughts are logical, and that other people also think logically, and that their decisions must be made for a good reason. But in fact, the great majority of all decisions we make are based on guesses; this is how our brains are able to carry out complex calculations and come to decisions in split-seconds.
For example, you might look at systemd and say, "it fixes so many problems! it provides so many features! it standardizes Linux! CLEARLY this is superior. we must adopt it."
For people who care about the purity of the highest technical ideals, this makes sense. For people who care about being able to use their computer, these things don't matter, and the systemd implementation actually makes things worse for them. The changes systemd purports to make are not bad things. It's really just the way in which they did it that is bad.
It's like wanting to upgrade your bicycle to four wheels, but requiring the rider now operate it lying flat down and using mirrors to navigate. The four wheels was a great idea. Using mirrors to navigate? Maybe not so great.
Of course, its creators will turn this inconvenience into a feature, saying "you get to lay down! it's therefore more efficient and easier to use!", completely ignoring how other people want to ride a 4-wheeler.
Have you ever run a linux box? How about dozens or hundreds of them in a production environment? I'm guessing no on both counts based on the nonsense you're spewing forth in your post.
If you don't know what you're talking about, it's best to just keep quiet.
If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it’s finally baked and production ready) can bring you an LXC based ecosystem
Systemd is absolutely key to how CoreOS works. It's the basis for the distributed init system it provides — a major selling point.
Taking any of this blog's advice would be harmful. I'd suggest a better approach would be to accept that the majority of distributions have settled on systemd and that generally this decision has not been made by idiots. So it would be worth either understanding what their pain points are and how they can be solved with an alternative to Systemd, or to help solve the issues that are apparently in Systemd yourself.
But not because I have anything against Systemd. I love Systemd so far.
It's hilarious that he's proposing CoreOS as an alternative, given that it's one of the most radical rethinks of a Linux distro out there.
The problem here isn't change, or re-thinking linux. The problem is re-inventing the wheel, and doing it poorly.
CoreOS uses systemd, but it's not a distribution in the classic sense -- rather it's a platform for containers. The narrow use-case for systemd here removes some (most? all?) of the concerns.
Whilst I agree that the blog is probably hokum, there's nothing wrong with critical thinking.
The answer "let's throw everything out" isn't that useful; likewise, dismissing the considered opinion of lots of people who have been doing this sort of thing for a while needs to be done with some rationality. An empty, bandwagon-jumping appeal like this adds little value and just helps spread more misinformation.
Moving all the chess pieces at once, which is what is happening, is not productive, professional, or a sign of experience.
A lot of the bug descriptions are quite scarily bad when you consider them in context such as "various loginctl commands not working" etc.
That's the point – if systemd has important bugs they should be fixed. Clearly, the groups responsible for the decision have concluded that the tradeoff is worth it, and have accepted that a large, fundamental change will have issues. That's fine – there are a bunch of other distros that have not adopted systemd, which you can use in the meantime if you disagree.
People are shipping production operating systems with systemd that is chock full of bugs.
An all consuming tentacle monster like systemd is fine if you want to dogfood it but to throw at paying customers and/or supporters of your distribution is a little off key.
A person is smart. People are dumb, panicky, dangerous animals and you know it. Fifteen hundred years ago everybody knew the Earth was the center of the universe. Five hundred years ago, everybody knew the Earth was flat, and fifteen minutes ago, you knew that humans were alone on this planet. Imagine what you'll know tomorrow.
I mean, all these movie quotes are cool-sounding but quite shallow.
Just because some thought has to go into the interpretation doesn't make it shallow.
IMHO I kind of shrug at this, since Unix was never really all that great to begin with. Unix won because the only commercially viable and well supported alternative was Windows, an OS that was (and in many ways still is) significantly worse especially for server and embedded applications. Everyone rallied around Unix and especially free/open Unix as an alternative, and so here we are.
It's also tough to compete with free, and Unix OSes got a huge boost from both Linux and the various free flavors of BSD. Yet that boost came at the expense of things like BeOS, Plan9, original NeXT, and the OS I still feel is hiding behind the JVM ... which for their day represented fresh ideas that might have gone somewhere.
Ultimately I think the existing Unix paradigm is going to be killed by Docker and mobile OSes that containerize in similar ways, and I'm not sure this is a step forward. It escapes much of the ugliness and the poor permission model of Unix, but it does so by handing virtually everything to the app. Docker containers (and mobile apps) can be thought of as something almost akin to giant statically linked binaries. We're getting more monolithic and coarse-grained.
That's because there were other components handling those tasks, like inetd and /etc/inittab. I do like having Upstart handle respawning for me, though.
The service name entry is the name of a valid service in the file /etc/services. … For UNIX domain sockets this field specifies the path name of the socket.
The protocol must be a valid protocol as given in /etc/protocols. Examples might be “unix”, “tcp” or “udp”. … A protocol of “unix” is used to specify a socket in the UNIX domain.
Hi, I'm a sysadmin who's fed up with neckbeards (most of whom apparently don't know much and refuse to learn) claiming to speak for all sysadmins on this topic.
> large risk and little reward.
It's four years old, and claiming "large risk and little reward" is like listening to someone claim that moving from sendmail to postfix would be a disaster.
Perhaps if you're tired of neckbeards speaking for all sysadmins, you should return the favour and not declare what all neckbeards are saying. A lot of old, experienced admins are for systemd. It's not the young go-getters who are at the top level of distros making the foundational architectural decisions, after all.
The benefits far outweigh the risks (imo obviously)
What generally annoys me are things like supervisor and the other tools people use to "auto restart" services; these don't exactly integrate nicely, and they put stuff all over the filesystem, etc. I like that systemd includes that and does it mostly properly.
There are some things I've wanted reliable and consistent mechanisms for so long: starting/restarting/inspecting services, isolation/resource limiting, socket activation, log collection.
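For instance, a minimal unit file sketch (hypothetical service name; directive names as of systemd circa 2014) covering the restart and resource-limiting parts:

[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/local/bin/exampled
# built-in supervisor-style restart
Restart=on-failure
# resource limiting via cgroups
CPUQuota=50%
MemoryLimit=512M

[Install]
WantedBy=multi-user.target

With that in place, systemctl status exampled covers the inspecting, and journalctl -u exampled the log collection.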
One of the huge benefits of the Unix/Linux, CLI, and Free Software traditions is that they tend to be very strongly preserving of established knowledge. Changes are incremental, usually additive, a reliance on scripting means that interfaces are unlikely to change, and new tools are very frequently drop-in replacements for old.
As specific examples:
I first learned editing under BSD vi in the mid 1980s. In the time since I've learned and used on various PCs (and a few other systems): WordPerfect, WordStar, MacWrite, AmiPro, several iterations of MS Word, the EDT and EVE editors under VAX, the TSO-ISPF editor, and a few others under Unix: emacs, ae, nano, nedit, Abiword, Lyx, and various iterations of what's now LibreOffice. Most of that skill-acquisition is now dead to me -- the tools simply aren't available or aren't useful.
I'm no longer using vi, but vim (adopted in the mid 1990s as I switched to Linux), but the basic muscle-memory is the same. And it's an editor I can utilize across a huge number of systems (though I do admit to finding traditional vi / nvi painful).
Similarly, the bash shell is an iteration on the basic Bourne and Korn shells.
ssh is a drop-in replacement for rsh, to the extent that /usr/bin/rsh is typically a symlink to ssh. While the dynamic is slightly different from telnet, it's still pretty similar with a few exceptions.
On the rare occasions when a utility changes its command-line options, you'll virtually always hear about it. The fact that it's so painful (and tends to break decades-old scripts) means it's generally avoided. Authors who make a point of doing this tend to find that people avoid their tools.
A bigger point is that forgetting stuff is often much harder (and more important) than learning stuff. And when you're invalidating long-established patterns, that's really painful.
There's also the fact that we manage technology by managing complexity, and most of us in the field work at the limits of our ability to manage the complexity we're faced with: the basic OS, shells and interpreters, hardware, vendors, hosting providers, management tools, employers, clients, customers, co-workers, engineering and development teams, services, abuse and security concerns. It's a really complex and dynamic field.
Linux has done quite well (with a few notable exceptions) at maintaining a balance between capabilities provided and complexity imposed. One problem is that as systems become more complex, the additional benefits of yet more complexity are lower, and the costs are higher (this is a very general rule, not just specific to Linux, operating systems, or computers).
The question of how to introduce radical change is a key one. I've seen a number of failed attempts to drastically revise existing systems in place -- this almost always fails. Linux itself wasn't introduced in this way -- it emerged as an alternative to both "traditional" proprietary Unices, to Big Iron (mainframes, VAX), and Microsoft's then-new WinNT. Linux ended up dominating virtually all of these categories, but it did so by incrementally beating out the competitors through replacement.
An interesting space where a lot of this comes to a head specifically is in the graphical user interface field. I've noted several times that Apple, notable for a great deal of success in this area, has been exceptionally conservative in its GUI development. It's effectively had two GUIs, the initial Mac System interface, and Aqua. Each has had a roughly 15 year lifespan, and yes, there was incremental improvement over the span of both, but the essential base remained the same.
Since the early 1990s, I've watched Unix/Linux go from twm to fvwm, Motif/mwm, VUE/CDE (a "corporate" standard based on Motif plus a desktop), Enlightenment, GNOME, and KDE, and now alternatives such as xfce4 and ... oh, that funky graphics thing Suse's got, as the "primary" desktops. GNOME and KDE themselves have gone through about three major revisions. And there are a number of other "lesser" more minimal desktops as well -- I use one of these, WindowMaker, which is actually based on a late 1980s ancestor of the Aqua interface now used by Apple.
Microsoft's experienced some similar recent tribulations. As has pretty much every online site ever that's done a site redesign.
As jwz has observed: changes to GUIs just don't offer that much win. They're highly disruptive, they're possible because the interfaces generally aren't scripted (other than via automated QA testing systems, but that's another story), but more importantly: the productivity benefits granted users really aren't that significant, especially regards the cost.
Worse: changing an existing interface leaves users in a no-recourse situation, especially in the case of SAAS. For Linux and systemd, the options are slightly more open in that (for now) it's possible to disable or block systemd from installing in at least some cases. But over the long run, it may be that the only options are voice and exit, as opposed to loyalty (a reference to the book and concept of Exit, Voice, and Loyalty, which I recommend looking up).
So yes: those of us with numerous decades of experience in the field often do have an extremely jaundiced view toward radical change. And with very good reason.
But your comment is really unwarranted.
I really have the feeling that people are using double standards here, especially when suggesting Solaris or Solaris-derived systems. Since systemd is implementing pretty much what has been in Solaris (SMF) and OS X (launchd) for a while now:
Also, it is of somewhat questionable ethics that members of the Solaris community submit such troll posts (as others have pointed out, there is not much substance there). It reeks of wanting to damage Linux's image for your own (Illumos, SmartOS) gain.
First, that assumes this is a troll post -- which I don't think is fair. The author has concerns that are legitimate to them, and dismissing them outright as a troll, whether or not you agree, is petty and judgmental.
Second, you are somehow conflating dislike of systemd with love of sysv init. The cognitive dissonance here only makes sense to me if you believe that systemd is perfectly fine, and think that the only reason people dislike it is that it's different.
However, if someone is recommending a solution that utilizes SMF, is it such a stretch to think that they might not be in love with sysv init, but instead think that the implementation of systemd is lacking?
I personally like the underlying idea of systemd - because I like SMF. I do not like the implementation of systemd, and I also have reservations about the people helming the project.
SMF does not seem to want to own every bit of my Linux machine, however.
It's not that I don't like systemd, it's that [insert affiliated party] is way too cocky.
It blows my mind to see people regress so far as to argue that this is an issue of emotions in a technical debate.
It's a matter of having observed similar behavior in other projects which went similarly off the rails.
Poettering's own track record with Pulseaudio comes to mind. There's also the GNOME project, which I identified as actively hostile to intelligent users around 2004. It's been somewhat gratifying to see that particular perception borne out with time.
There are other projects which have shown similar levels of arrogance, though mostly with more limited and self-contained damage.
And being prickly or hard to deal with comes in shades. Neither Linus nor Theo de Raadt is a pussycat, but both focus very much on technical issues and are generally highly responsive to specific technical complaints. Sure, they make mistakes and bad calls occasionally, but on balance they've tended to get things right.
The attitudes expressed by Poettering and Sievers in particular aren't simply cocky, but contemptuous. And they're getting called on it. Including by Linus.
I couldn't give a shit about personalities themselves, I really couldn't. For the most part I really don't care how socially awkward someone is if they're good at their job and don't go out of their way to do harm to me or others. Personality disputes in discussions bore the piss out of me.
But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.
Then stop poisoning the well.
> But I'm also not blind to technical failings with roots in personality traits. And those are what I'm seeing in the systemd crowd and leadership.
The problem I see is that most arguments against systemd are first and foremost about Lennart Poettering. And when technical reasons are brought forward, they can all be summarized as: does not conform to the UNIX philosophy (monolithic, replaces existing tools with tightly-coupled equivalents, binary logs).
I think a reasonable argument can be made that, with the exception of binary logs, these things are true of many UNIXen. You will find only a few people who would say that BSD does not conform to the UNIX philosophy. However, the BSDs have the aforementioned traits as well: developed by one project and tightly coupled (e.g. you cannot just take most BSD utilities and libraries and compile them on Linux or Solaris; it requires serious effort).
People have always argued that this is a good trait of the BSDs (and I agree to some extent), because it allows better integration and use of BSD-specific features.
However, when systemd does it, it's suddenly violating the UNIX philosophy.
I've dithered on whether or not to respond, but this bugs me.
Your response, typical of many systemd supporters, passes over the option of responding to the relevant point of my argument (personalities can have real technical consequences) and dives straight into the personality dispute: "stop poisoning the well."
I'm not poisoning the well. I'm pointing out that the well has been poisoned.
The elements of the Unix philosophy which you allude to exist for good reasons, and violating them imposes very high costs. This is a lesson that those of us who've been around for a while, and have multi-platform experience (check on both counts for myself) are well aware of.
Monolithic systems defy ready replacement. Generally you've got to toss the whole mess out. Pluggable systems avoid that. There are instances in which a monolithic design does seem to be at the very least hard to avoid, but you'd best be very aware of this and defend your position well. Systemd violates this principle by adopting a gratuitously monolithic design and explicitly refusing compatibility and modular alternatives.
Tightly-coupled systems are similarly brittle. The classic case of this is probably the Windows platform as a whole. One of the best arguments for loose coupling comes from Steve McConnell's 1990s classic Code Complete (ironically, McConnell was a Microsoft developer). I strongly recommend you read the relevant sections on tight vs. loose coupling.
Binary logs (and binary file formats in general) preclude the use of alternative tools. The Windows Registry (again from Microsoft) comes to mind. One of the better hacks of this I know of is the Unix/Linux compatibility systems which present the registry through a filesystem interface. This originated with UWIN (from David Korn of AT&T and Korn shell fame), and has since been adopted by Cygwin. The ability to grep the registry, process it with scripting tools (sed, awk, perl, etc.), and modify it (using specific command-line utilities offered for the purpose) makes dealing with that particular hairball _slightly_ less annoying. The lack of self-documenting formats for registry values themselves (a trait shared by GNOME's gconf system) is another fatal flaw.
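Under Cygwin, for example, the registry appears as a /proc/registry tree, so ordinary tools work on it directly -- a quick sketch, with the key path chosen purely for illustration:

    # Browse and search the Windows registry with standard shell tools.
    ls /proc/registry/HKEY_LOCAL_MACHINE/SOFTWARE
    grep -ri "proxy" \
      "/proc/registry/HKEY_CURRENT_USER/Software/Microsoft/Windows/CurrentVersion/Internet Settings"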
Even packaging formats are subject to this. Red Hat (gee ... aren't they involved with systemd....) designed a binary file format for RPM which requires specific tools to unpack. Joey Hess's 'alien' links to the RPM libraries for this purpose, and a set of Perl tools I'm aware of has to apply specific offsets (varying by RPM version) to extract data from the files. Contrast this with Debian's DEB format: tarballs packed in an ar archive. This can be unpacked with standard shell tools, or busybox.
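To make the contrast concrete, a sketch -- the package file names are hypothetical, and the compression inside a .deb varies (gzip here; xz on newer packages):

    # Unpack a .deb with nothing beyond ar and tar (both present in busybox):
    ar x hello_1.0_amd64.deb        # yields debian-binary, control.tar.gz, data.tar.gz
    tar tzf data.tar.gz             # list the payload
    tar xzf data.tar.gz -C /mnt     # lay the files onto a target root
    # The RPM equivalent requires format-aware tools:
    # rpm2cpio hello-1.0.rpm | cpio -idmv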
Putting together the concepts of modular design, loose coupling, non-binary formats, and standard tools: I've more than once rescued Debian systems which failed to pivot-root from the initrd by breaking into the initrd shell and unpacking and installing DEB packages with shell tools, facilitated by an interactive shell, busybox, and the DEB format. I'm thwarted on several levels from a similar recovery on Red Hat systems by the special, explicitly noninteractive shell used in their initrds (which is larger than the 'dash' Debian uses for the same role) and by the binary format of RPM packages. Working in cramped quarters and difficult situations, I can assure you of which system I'd prefer to be working with.
Systemd's violation of these principles is objectionable because it's not necessary (see OpenBSD's shim replacement for functionality, or uselessd, among others), gratuitous (decisions are being deliberately made), and, as your comment above illustrates, the very valid reasons for not doing just this are belittled.
But, ultimately, what happens in tech is as much about people and personalities as it is about actual technical merit. To delude ourselves otherwise is dangerous. When someone claims to be arguing from technical merit, look very closely at their history and probable motivations. There's always more there.
Nothing about systemd removes the basic Unix command line. And he's most definitely not describing the init system, which hasn't been the same from year to year, let alone similar from decade to decade.
But that's still a good 25-30 years of work, experience, practice, and smoothing out of rough edges that will be flushed down the drain.
Systemd also fundamentally changes the locus of control of key features within Linux, and how applications, the kernel, and the OS as a whole are constructed and constrained. It puts all of that under the control of a small group exhibiting highly evident disdain for any "outside" concerns (in quotes, as these concerns come from the larger Linux community and are most decidedly inside that group), contempt, and a plays-poorly-with-others attitude.
I'm not impressed.
Nor with your comment, FWIW.
The rest of your comment is fear mongering which could be applied to any group of core devs on any OSS project in existence. After all who controls Debian and security defaults? Do YOU trust them?
What 1978 Unix did have was security and authentication. The OS was multi-user from the very beginning -- hence the pun in the name: a uniplexed system, as against Multics (Dennis and Ken created a two-user OS to play Space Travel).
As Bruce Perens recently discussed in a set of comments at LWN, the first thing he did as DPL of Debian was decentralize the management of Debian packaging. He recommends a very similar process for Systemd. The Systemd proponents in that discussion aren't particularly taken with the idea.
It's not a matter of fear mongering when the stated goals and practices of Systemd are to intentionally break compatibility with other Unixen, to reject compatibility patches, and to provide "choice" in the form of allowing users the option of any Linux distro on which they can run systemd.
As Jon Corbet noted at LWN in his Grumpy Editor post on the topic, it would greatly behoove systemd leadership and proponents to demonstrate a modicum of gracious victory.
As for Debian's governance, that process has been more than slightly troubled of late, with at least four key departures (Joey Hess, Ian Jackson, Russ Allbery, and Tollef Fog Heen) in the past couple of weeks alone. The cabal question was raised by former DPL Bruce Perens in the LWN post linked above. And, frankly, no, I haven't been happy with the recent directions of Debian's Technical Committee. Joey Hess's resignation (as well as those of Ian and Russ) calls into question more than just the specific decisions; it calls into question the process as a whole.
Your attempts to smear my comments -- which are based on actual events, facts, and the considered views of people with deep and broad experience in the field -- are, I'm really sorry to say, far too typical of what I see from systemd proponents (the attacks on Perens in the LWN thread strike a pretty similar tenor).
Something is sick in this process. That, more than anything, is what's bothering me about it, though I also have grave doubts over the technical direction.
The most important elements to consider about Plan9 are these:
1. Plan9 wasn't Unix (nor was it Linux). It was its own OS, absolutely informed by Unix, and it tried to learn from mistakes made in Unix. Because it wasn't Unix, it provided an independent test bed in which these ideas could be explored without disrupting a large established installed base and user community. That is a key benefit of branched development. All of these I consider positives of Plan9.
2. It was hampered by overbearing corporate control and an onerous licensing model. It was an ugly stepchild of AT&T's, under a proprietary license. The fact that it remained perpetually under development kept it from being widely deployed (among other factors), and the fact that it had a restricted license meant that other potential collaborators couldn't get involved.
When Linux emerged in the early 1990s, it had a lot of problems -- it was far from the best or most obvious Unix alternative out there (look up ESR's PC Unix guides from that era). But in a world of large proprietary Unices priced far out of the hobbyist's range, a handful of small PC ports of varying quality, and a BSD embroiled in its lawsuit with AT&T (speaking of Plan9), Linux was unencumbered, free, and (pretty quickly) available under the GPL. That gave it the critical mass to develop. As with Plan9, it was its own OS, providing a testbed environment for development, but also allowing stable cuts to be made for specific deployments as it reached sufficient states of readiness.
Which is to say: the community and development dynamics mattered a lot.
I'm seeing a far more troubled path for Systemd in this regard.
Also of note: in the Debian init system debate, a specific concern raised against upstart, one of the init alternatives, was its requirement that developers grant a license to Canonical, which was seen as a strong demerit. As with Plan9, exercising too much proprietary control may well have cost Canonical critical votes in the Debian decision.
I must admit the ever-growing scope of systemd is starting to concern me somewhat (though I've been running it with satisfaction more or less since it became available in Debian experimental).
It was fun for a while, but I grew out of it.
Similarly, though the underlying hardware and code share basically nothing with Unixen of yore, old knowledge is still useful on modern Linux. This commonality of interface is more important than inner workings.
By the way, the most expensive cars by far (therefore arguably the most desirable) are old to very old. A Ferrari 250 GTO is way more valuable than any new car. IIRC the most expensive car ever is a 1929 Bugatti Royale, and you can even drive it.
As for my comment being unwarranted: sysadmin'ing requires learning new tech. If there is an improvement on a tech such that it has mass adoption, learn the tech. It's your job. If you don't like it, change jobs. I'm not saying you should shut up and put up. However, we're far past the stage of valued input, and people are still complaining. The decisions that are going to be made concerning systemd adoption have pretty much been made. Yet here I am, reading yet again how systemd was the wrong choice, even though rigorous debate was had and core teams decided it was the best option. Even though this was the biggest drama piece since that blogger blasted Linus for being rude. Here we are with 'radical change' in systemd.
But where the user interacts with the system, things have been remarkably stable. Even the relatively minor changes which have been introduced have been covered with the usual Apple levels of obsession -- skeuomorphic vs. flat designs, etc., ad nauseam.
Again, the point being: screw with how things look and how users interact with the system, and you're going to create huge usability costs with little to show for it.
BUT IT WAS THE FIRST SUCH BREAK IN THE GUI IN 15 YEARS, AND IT'S BEEN THE ONLY MAJOR BREAK IN THAT TIME.
I'm also not saying that Aqua hasn't changed at all. It has, the most notable addition I'm aware of being virtual desktops (something NeXTSTEP had in the 1980s). But other than some minor cosmetic changes and largely invisible-to-the-user under-the-hood updates, the visible UI has NOT changed appreciably.
Contrast that with the disruption that's prevailed in the Microsoft Windows and Linux spaces from 1999 to the present. We've gone from the Win98 UI to the candy-cane XP styling to Metro in Windows, and through at least three generations each of KDE and GNOME on Linux, plus a few other desktops which have waxed and waned in popularity.
I've continued to use WindowMaker, and after 17 years it is, hands down, the GUI metaphor I've had the longest experience with. It's been exceptionally stable, with very few changes. Even minor ones are quite jarring to me, which is somewhat odd to reflect on.
X11 and/or replacements is a whole 'nother discussion, but I'll simply note that the network transparency of X has been hugely underappreciated by many who've sought to upend it (I don't know what the status of Wayland is in this regard).
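(For anyone who hasn't seen it, the network transparency amounts to this -- hostnames hypothetical:

    # The client runs on the remote machine; the window draws on yours.
    ssh -X build-box xterm
    # Old-school direct connection, X over TCP -- usually disabled these days:
    # DISPLAY=mydesktop:0 xclock

No VNC, no pixel-scraping; the remote application simply speaks the X protocol to your local display.)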
I think IT managers would prefer it if they didn't have to spend time and money re-training their sysadmins or hiring/firing them to ensure their staff has the skills to use the $NEW_SHINY from $CORPORATE_VENDOR. Skill transference is a boon for customers (see also: "Stop breaking the UI!").
If you want to see the ultimate extent of this, look at the Wii and Wii U. Each game ships with an "IOS": effectively an OS kernel+initrd update package. Every game boots to the newest IOS available, so if one game updates to IOS v6653, then another game that only shipped with IOS v6652 will find the newer version on disk and use it.
However, a game's IOS requirement doesn't just have a version; it also has a slot. Each console has space for 256 individual copies of IOS, which are each independently versioned. So if two games both use the same IOS slot, then the game providing v6653 will overwrite v6652 on disk, and the game that shipped v6652 will boot into v6653. But if the two games provide their versions of IOS in different slots, their effects on one another are isolated.
You can think of it a bit like the IOS codebase having 256 branches, and each piece of software being able to specify which branch of the kernel it was developed on. It gets the newest kernel released on that branch.
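A toy model of those slot semantics, as a sketch -- all slot and version numbers are made up for illustration:

    # ios_install SLOT VERSION: a game ships a kernel for a slot; the
    # console keeps only the newest version seen for that slot.
    ios_install() {
        mkdir -p ./ios
        cur=$(cat "./ios/$1" 2>/dev/null || echo 0)
        [ "$2" -gt "$cur" ] && echo "$2" > "./ios/$1"
    }
    # ios_boot SLOT: a game asks for its slot and gets the newest version on it.
    ios_boot() { cat "./ios/$1"; }

    ios_install 58 6652    # game A ships v6652 in slot 58
    ios_install 58 6653    # game B ships v6653, same slot: upgraded in place
    ios_install 61 1024    # game C uses slot 61, isolated from slot 58
    ios_boot 58            # game A now boots v6653, the newest on its "branch"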
This allows a sort of "move fast and break things" approach to kernel development, where a kernel can be hacked to support new software in a way that breaks old software: you just stick your modified kernel into an as-yet-unused IOS slot, and old software will have nothing to worry about. This approach has resulted in my own (pretty unused) Wii U having ~73 different IOS slots populated with kernels.
Interestingly, if you think about it, this is pretty much a continuation of what Nintendo allowed developers to do before: shipping random collections of chips in their own cartridges that DMA'd to the console, effectively creating their own extended console to run on. Allowing your software to ship its own kernel is basically the software equivalent.
Docker, though... man, how different it is, and how clean it makes my system feel. I do feel that Docker will move toward some kind of Docker-optimized minimal base images that are not Debian or Ubuntu or whatever; those are just a stage so you feel some familiarity.
CoreOS, meanwhile -- who will ever touch its init system except to auto-start containers? Which, I guess, will eventually be done by a nice tool that hides systemd.
OK, my post does focus on the server side, of course.
But let's say you are building a highly specialized application. You are going to be making quite a few customizations, which are far more manageable through a shell scripting environment than by customizing a bunch of binaries.
I assume that Red Hat is going to cover a lot of the bases for most users out there. But for those of us in highly customized environments, it's going to suck.
Also, if busybox is not enough, a minimal systemd system will still be leaner and faster than the equivalent SysV system.
IIRC it's also part of some soon-to-be-shipping vehicle integrations, for in-car entertainment systems and mapping.
Also, it's inevitable that if systemd and software expecting to use it take over more and more aspects of userland and the kernel, vendors will be left with no choice but to use it as well. So "more vendors are switching to systemd" is not a convincing argument either. I like to make my own decisions on the basis of modularity and replaceability (vendor lock-in has been a huge burden in other major projects not mentioned in my online persona), not popularity.
is also a current story on HN.
Just an example of how powerful that simple 70's Unix is. Allows features that appear "magical" to thorsten, anyway.
Windows? Wasn't really even an option... until 20 years later. And, of course, Unix really isn't that good, either. But, before you ignore it, please come to feature parity, at least.
Which 'Windows'? Before Windows 2000, there was Windows and Windows NT, the former being more or less just a shell running on top of DOS.
Unix didn't beat Windows. Unix beat VMS and LISP machines and AS/400 and various other minicomputer operating systems. In fact, if we're talking about mainline commercial Unixes, NT started beating the shit out of them in the late 90s -- if Unix lovers hadn't had the free Unixen (Linux, the BSDs) to fall back on, it would be a sad state of affairs indeed.
Hey now, I'll have you know AS/400 is still alive and going in my workplace! We also have an entire position just for its programming...
At the same time, we are pushing more heterogeneous software stacks to production and configuring more specific dependencies for our applications.
It almost seems like you're using cross-platform as a pejorative. ;)
Now we are close to having an OS where you can seriously just expect anything "Linux" to just run. Bad, I guess, to some :P
Right now I have most clients running OpenSUSE, just because I cannot be bothered to fuck with Upstart anymore. Once systemd is in place, the fact that zypper is much nicer than apt doesn't make up for the incredible market-size difference between Suse and Debian and its children.
Great, so now instead of adopting a package system with a solid theoretical foundation like Nix or Guix, we're going to dump all dependencies into fat binaries and more or less end up with the solution the NeXT people came up with in the 90s. Such progress.
Not to mention that Lennart's proposed package system would depend on btrfs-specific features, adding even more code coupling.
With OpenSUSE Build Service what does Debian server get you? Just wondering.
The Ubuntu LTS cycle is just an optimal compromise in my book. You even get Debian Testing as a good rolling release, Debian Stable as a great server release, Ubuntu Server as an enterprise option, and they all (soon) will be using a common core.
For now I advocate the SUSEs, but while OpenSUSE has been stable, its general obscurity, its dwindling user base, and the fact that Novell (I know they have since sold SUSE) backed out of maintaining OpenSUSE directly mean I can't be confident in its future. You should not underestimate the Ubuntu mindshare: it means "Linux" software is often Ubuntu-first, repackaged by hobbyists for other distros second.
Fully agree. It seems some people are quite happy with a few xterms under X replicating a twm user experience, stuck in the past.
I would also add Oberon, Active Oberon, Singularity, Verve and the current unikernel/library OS research.
> OS I still feel is hiding behind the JVM
Android kind of got us there. Now, with Java being compiled to native code, maybe other C++ layers might be replaced in future versions, given how the Android team looks at the NDK.
All in all, I want the Xerox PARC and Douglas Engelbart's visions, not the AT&T one.
Predictably, all the blame is laid at systemd's feet.
The current churn is happening because all of Linux's core developers (kernel and user space) want that change... to push the envelope.
For example, the current change in cgroup namespaces is happening because the kernel is mandating that the current cgroup access mechanism be deprecated. The kernel developers want a single writer to cgroups. Systemd is in the unfortunate position of complying with that request. Guess what? Soon enough, so will Upstart.
Again, with kdbus, the person who made the push is not "evil" Lennart, but Kay Sievers -- a long-time maintainer of udev.
Systemd is nice. Don't be afraid.
To be clear, I'm not claiming that SysV init is The Best Way. Shell scripts are not the Happiest Place. But I am claiming that systemd is a crummy and overbearing replacement.
It seems like that's part of their mission statement, given comments like this: "Some day, we will have turned the old crap into a real operating system. :)" -- Kay Sievers (https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8)
In fact, OS X's launchd was a direct inspiration for systemd because of how nicely it works there. I've wanted launchd on servers for so many years.
We should not emulate the things we've overtaken and have a higher server market share than. Something about what we were doing before was right, and I believe it's the ability to be dynamic and modular.
Service management has been a problem on (Linux) servers for a long time. Just because launchd originates on a desktop doesn't mean it's not a good idea.
About udev: Linus has had multiple serious complaints about udev maintainership since Greg KH passed it to Kay. Don't you recall the async firmware loading issue...
The article is right - it's not Linux as we know it anymore for better or worse.
They want to prevent direct access to cgroups, other than through a single writer. This change is happening regardless of whether you want systemd or not.
He may speak for some subset of sysadmin, but he certainly does not speak for us all.
And from the looks of it, this has been done: https://cgmanager.linuxcontainers.org/ as reported at http://lwn.net/Articles/618411/
systemd may be nice, but it's coming from Red Hat, and cgroups are changing because the systemd folks wanted it that way, as far as I followed that debate...
Ted Lemon (the author and maintainer of ISC DHCP from its inception to 2003) asks for the location of the project's source repo. Sievers replies with a LMGTFY link that doesn't even answer Lemon's question. Lemon politely criticizes Sievers for his rude and unhelpful answer. Sievers fails to even apologize.
Both Sievers and Poettering have pretty serious attitude issues. It's one thing to lambast a peer who frequently fails to meet the potential that they've demonstrated in the past. It's entirely another thing to try to score social points with your callous indifference and blinkered bullheadedness.
Check the first few comments of: https://plus.google.com/+TomGundersen/posts/eztZWbwmxM8
The project being run by people who hold unreasonable and downright odious views, and who act like, frankly, utter asshats, is a much more serious problem.
The Kay/Linus debacle is something you can expect to see more of from these fine folks going forward. Mark my words. Ask yourself if you want software developed this way running your OS.
Thank you for doing this :) I love systemd myself, but I still think it's important to have alternatives available; also it makes me very happy to see somebody creating their own choice instead of tearing down other people's choices :D
That being said, it's a little too esoteric for my tastes (among other things: "if you are looking for a stable production system that respects your freedom as a computer user, a good solution at this point is to consider one of more established GNU/Linux distributions.").
Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI? (My current laptop, unfortunately, has UEFI. It's a royal pain, but oh well.)
Yes, we're still in alpha. Not ready for prime time yet.
>Do you know of any Linux distros that a) don't use systemd, b) are vaguely active / supported, and c) run on UEFI?
Gentoo? I use Debian most of the time, which of course uses systemd now.
Not addressing any of your other points: don't most laptops/computers that ship with UEFI allow you to set them to boot in "legacy BIOS" mode?
Even if you're currently UEFI-booting, I would be seriously surprised if UEFI-support was a requirement for every OS your machine can boot.
And regardless, this is a temporary solution.