Hacker News

I still don't understand why systemd sucks

That's a loaded question, and I'm only qualified to answer from my perspective and experience. My biggest gripe with it has always been that it is alpha-quality software, even today, that has a central role in an otherwise mature OS ecosystem. It has been widely adopted (some would say forced or tricked into adoption by a few distros) and therefore all the major Linux distributions are now running at an alpha level while its creators try to figure out exactly what they want it to be. That was the state of Linux in the late 90s, a state that it overcame during the 2000s, but now it's regressing again.

First it was "just an init to replace SysV", something I could get behind, and back in 2012 or so I was actually excited about it. Then it started growing, replacing individual components of GNU/Linux with a monolithic mega-app that has more in common with Windows NT based OSes than with anything UNIX-like. Gone is the philosophy of "do one thing and do it well", replaced with "do everything no matter the quality of the results".

I've been a Slackware user since I started messing with Linux in the late 90s, and these days I find it getting faster and better while mainstream Linux distros slow down and grow more and more bugs. One of my benchmark systems for observing the growing bloat of modern OSes is an Atom based netbook from around 2010. It shipped with Windows 7 Starter, which it ran acceptably but not great.

Recently I tested Windows 10, Slackware 14.2, Ubuntu 14.04, Ubuntu 16.04, Debian unstable, OpenBSD, and Elementary OS Loki on it. Slackware was the fastest OS on it by a wide margin, followed by OpenBSD, then Debian, Ubuntu 14.04, Elementary, Windows, and Ubuntu 16.04 dead last. Guess which of those (not counting Windows) do not have systemd? Yep, Slackware and OpenBSD. Maybe it's a coincidence, but given how Ubuntu 16.04 on my modern workstation gets progressively slower with each systemd update, whereas Slackware on the same machine continues to chug along with no issues, that's telling.

All of that said, systemd was and maybe still is a good idea, if only they can stop trying to reinvent the wheel and instead fix the spokes they broke along the way. I can't say I'm happy about eroding the UNIX philosophy from Linux, but if systemd is the future of Linux then it damn well needs to be a stable future.

The irony of eroding the UNIX philosophy from Linux is that most real UNIX systems, meaning AIX, HP-UX, Solaris, NeXTSTEP (cough, macOS), Tru64,... do have something similar to systemd.

Sometimes shouting "UNIX philosophy" in GNU/Linux forums reminds me of emigrants who keep alive traditions of their home countries that have long gone out of fashion back home.

The sarcastic irony is that Solaris engineers implemented a fully functional systemd(8) long before systemd(8) existed, by designing and implementing SMF, which went on to break world records for startup and shutdown speed on what is now an ancient AMD Opteron system (I think it was either a v20z or a v40z). I wanted to include a reference to the Slashdot article from back then, but try as I might, I can't find it any more.

XML and CDDL, yuck and yuck.

Go drool some more over Cantrill, will you?

No, you're actually wrong, IIRC.

The other systems still fundamentally do init's job, as well as managing services. They DON'T do everything, including:

- replace cron

- handle dynamic device files (udev's job)

- set the hostname

- replace syslog

- replace inetd

- anything I forgot

AIX, HP-UX, Solaris and NeXTSTEP were not written by the original authors of Unix and its philosophy. Linux has always been closer to the philosophy than many of these, actually. So much so that it has imported concepts from the successor of Unix, Plan 9. Linux's procfs, which exposes system information as files within the filesystem, is a concept taken from Plan 9, the OS that people like Ken Thompson and Rob Pike envisioned as the future of OSes and the replacement for Unix.

Those "traditions" you're speaking of are not only not outdated, they were never fully realized to their ideal outside of Plan 9, which attempted to make everything accessible through file APIs.

The suckless crowd is not about reproducing the original Unix. They are about carrying the torch of that philosophy, and the original Unix was just the beginning, not an end in itself. Here's an example of software from suckless that follows Plan 9: http://tools.suckless.org/ii/

Actually the only thing I find positive about Plan9 is that it gave birth to Inferno and Limbo, both of which don't have much to do with UNIX philosophy.

Those who worship Plan 9 as the UNIX culture should actually be aware of what its authors think about UNIX.

"I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!

I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."


For me at least, unix philosophy is Thompson, Ritchie, Raymond, Stevens, etc., not some obscure commercial things.

Which validates my point of long gone traditions.

The world isn't a PDP-11 anymore.

Commercial Unixes haven't been relevant for decades. Unix is Linux these days.

How about the second largest desktop OS in the world, OS X (or now macOS)? That's BSD.

OS X is actually a certified UNIX: http://www.opengroup.org/openbrand/register/

and systemd was modelled partially after its launchd

Mac OS X is not BSD: https://wiki.freebsd.org/Myths

Actually not from the point of view of our enterprise customers, the same that still happily pay for mainframes.

I don't believe Mac OS X/OS X/macOS's launchd is as complicated as systemd.

Ubuntu 16.04 is not meant for lightweight machines - for example, the Unity desktop assumes you have 3D acceleration (which sucks for using in a VM). It's not systemd that makes your atom netbook slow (well, assuming you're using Unity...)

Re: systemd itself, I couldn't care less about the bells and whistles, but every time I go back to fiddle with a sysv init script, I yearn for either upstart or systemd...
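For anyone who hasn't made the jump, the contrast is stark: a SysV script needs hand-written start/stop/status/restart cases, PID-file handling, and LSB headers, while a minimal unit file covers all of that declaratively. A sketch, with the service name and binary path made up for illustration:

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

systemd supervises the process itself, so there is no PID file to track and no daemonizing boilerplate in the program or the script.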

Ubuntu actually has a low graphics mode so that it can be used in VMs.


I stand corrected. Maybe I'm thinking of 14.04?

Nope, Xfce on all the Linux distros on that machine, except for Elementary. I was surprised to find that Elementary's Pantheon was faster than Xfce on Ubuntu 16.04.

Besides, it wasn't a test of DE performance alone, it was a combination of factors including boot time, script run time, video encode/decode, build from source time, and so on. Yes, DE performance was also a metric, and for fun I did load Unity on both 14.04 and 16.04 just to see what would happen. If I were basing it on DE performance alone and used the default DE for each distro, both Ubuntu versions would be the slowest by far.

Also, 3D acceleration was not an issue, the Intel video hardware in that machine is fully accelerated in Linux and OpenBSD.

It didn't just start growing. There was talk of it being the base of the OS (excluding the kernel) since 2012 (as mentioned here http://0pointer.net/blog/projects/systemd-update-3.html).

> We have been working hard to turn systemd into the most viable set of components to build operating systems, appliances and devices from, and make it the best choice for servers, for desktops and for embedded environments alike. I think we have a really convincing set of features now, but we are actively working on making it even better.

I'm pretty sure I saw other posts, but my googlefu is a bit weak.

So the expansion was in the plan from nearly the beginning (for good or ill).

Thanks for that. When I had first heard of it back in 2012, it was right after getting my first Raspberry Pi, and a friend had suggested trying to port systemd to it to improve boot speed. At that time, all I was able to find out about systemd was that it was a faster init. There was nothing I saw back then about the authors wanting to replace all of GNU with it. It was several months later, after the update to systemd broke my Arch installation, that I started reading about how it's growing too fast and rather than focus on code quality and stability, the authors were rushing to make it this huge replacement for GNU.

Since then I've followed its progress, and while my overall impression remains slightly negative, I'm hoping it improves to the point that it is stable and mature enough for daily use. Until then, I happily run Slackware for serious work and Windows 10 for games.

> There was talk of it being the base of the OS

To be fair, that's pretty much the definition of an init system, innit?

When Linux came along in the mid-90s, most commercial Unixes had left behind the Unix philosophy, with their own integrated, object oriented desktop environments and sophisticated administration tools. Only Xenix, the engine that powered many an auto shop's rinky-dink five-user database setup, stuck with the model of text terminals and CLI administration with simple tools.

Of course Linux took off, and it sort of reset everything back to stone knives and bearskins. But systemd itself is modelled on Solaris SMF, which is world-class industrial grade service management for large server deployments.

Appeals to the "Unix Philosophy" are the province of reactionary greybeards. Unix philosophy means nothing in the modern era.

Out of curiosity, what were you using to measure the speed of the various operating systems you mentioned?

For CLI stuff (compiling, file operations etc) it's the time command, for video decode/encode it's built into ffmpeg, and for graphical stuff it's mostly subjective. There's honestly not a ton of difference on most of the CLI stuff since the hardware is the same, but it is measurable. As for the DE, let's just say that Xfce under Slackware and OpenBSD is quick and peppy while Xfce under Debian-based distros is anything but. Ubuntu seemed to be the slowest for that test, and Elementary's Pantheon desktop is a mixed bag. I have considered running the Phoronix test suite for a more accurate result.
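As a rough illustration of that kind of CLI timing (the file name and size here are arbitrary), `time` plus a synthetic workload is enough:

```shell
# Generate a fixed-size input, then time two CPU-bound passes over it.
# `time` prints real (wall-clock), user, and sys times on stderr.
dd if=/dev/urandom of=sample.bin bs=1M count=64 2>/dev/null
time gzip -kf sample.bin      # compression pass (-k keeps the original)
time sha256sum sample.bin     # hashing pass
```

Running each a few times and averaging smooths out cache effects; the Phoronix Test Suite automates exactly that kind of repetition.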

Also note that I did have to tweak OpenBSD a little to get it on par with Slackware on the desktop, though the stock install is still faster than the more "modern" Linuxen for most tasks.

And for those who wonder why I do all of this: It's a hobby. It's more fun than watching TV on my off days, and it keeps me up to date on the latest goings-on in the OS world.

>There's honestly not a ton of difference on most of the CLI stuff since the hardware is the same, but it is measurable.

This is what I was after. I can't imagine ffmpeg running slower just because of systemd or unity. But yeah, if you're running on a 2010 netbook I wouldn't be surprised if it ran better under Xfce.

I'm an Ubuntu LTS user. Compared to Upstart in 14.04, Systemd is an improvement. ;)

I am also an Ubuntu LTS user, but more a developer than a system administrator.

I have migrated from 14.04 LTS to 16.04 recently. I am using a NAS drive. After my do-release-upgrade -d, the internet was not working anymore because of a systemd circular-dependency problem. I had to learn how to create systemd configuration files to describe remote filesystem mounts. It was not easy to find documentation on systemd.
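For reference, a remote-filesystem mount unit that declares its network dependency explicitly looks roughly like this (the NFS share and mount point are invented; note the unit file name must be derived from the mount path):

```ini
# /etc/systemd/system/mnt-nas.mount -- hypothetical NFS mount
[Unit]
Description=NAS share
Wants=network-online.target
After=network-online.target

[Mount]
What=nas.local:/export/data
Where=/mnt/nas
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target
```

An fstab entry with the `_netdev` option should achieve the same ordering, since systemd generates mount units from fstab.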

When my computer enters sleep mode, I can wake it with a press on Enter. The next time it enters sleep mode, I cannot wake it up anymore.

My system used to boot in high resolution. Now it is using huge fonts that make the boot messages impossible to read (25 lines on a 23" screen!). I still do not know how to fix it.

It may not be only the fault of systemd, but migration from 14.04 LTS to 16.04 LTS was a very bad experience for me.

Ubuntu upgrades almost always suck, but the upgrade from 14.04 to 16.04 was the worst I ever saw. Nothing worked, my system was broken beyond rescue. Pulseaudio all over again.

Upstart solves the same sysv-init problems that were the motivation for systemd.

And systemd should be an improvement - it started after upstart, and hit production well after upstart :)

> gets progressively slower with each systemd update

Windows 10 will not be left behind! I mean, ahead!

Microsoft recently pushed out the Anniversary Update, which made at least my Win10 laptop noticeably - as in extra 10 or 15 seconds - slower waking up, and generally more sluggish here and there.

(How convenient, 400 million PCs need an upgrade now. Mwahahaha.)

I am puzzled that you had an Atom based netbook from around 2010 that shipped with any kind of Windows 10.

That was a typo, it shipped with Windows 7 Starter. Fixed now, thanks! :-)

> Ubuntu 16.04 on my modern workstation gets progressively slower with each systemd update

I haven't noticed any slowdown on my Ubuntu 16.04. I wouldn't even know about the controversy except that people keep mentioning it.


Please don't post like this here. We ask that you comment civilly and substantively or not at all.


You know, it's funny, I never said that. In fact I said that once it matures, systemd is actually a good idea.

Perhaps you're projecting a bit?

static linux isn't really a reaction to systemd. What it is a reaction to is exemplified both by what the blurb on its web page spends most of its time on, and indeed by its very name: dynamic linking.

"Executing statically linked executables is much faster" ... "Statically linked executables are portable" ... "Statically linked executables use less disk space" ... "Statically linked executables consume less memory" -- http://wayback.archive.org/web/20090525150626/http://blog.ga...

I refuse to believe that disk usage is less: a dynamically linked executable can leverage other libraries in its dependency list, loading them at run time, and other executables can use them too.

For static executables, the same dependent library will have to be linked into all binaries. Maintenance is a pain in the neck.

> I refuse to believe that disk usage is less: a dynamically linked executable can leverage other libraries in its dependency list, loading them at run time, and other executables can use them too.

If I remember correctly, the argument goes something like this: modern compilers, i.e. something as recent as the Plan 9 toolchain or a GCC version from this millennium, usually compile in only the necessary code with static linking, and not whole libraries. With dynamic linking, you always have to load the whole library into memory, which supposedly pays off only with heavily used libraries such as libc (e.g. think about how many libraries used by Firefox/Chromium are used by other programs).

So the hope is (combined with a general drive toward small programs) that, since text pages are shared between processes and statically linked programs include only the absolutely necessary code, you end up with a smaller memory footprint. (I'm not sure whether you save disk space, but I don't think that would be a problem nowadays. Heck, look at Go binaries.)

And I guess the linker could do more whole-program optimization on a statically linked program, since all the code is available.

> For static executables, the same dependent library will have to be linked into all binaries. Maintenance is a pain in the neck.

Generally you would want to have a proper build system. In the case of StaLi, they have one global git repository (/.git). An update is simply "git pull && make install".

I don't know if this process is slower or faster than binary updates, but if they strive for small programs/binaries, then I guess it doesn't matter as much.

Source-based distributions, such as Gentoo, have the advantage that you don't have to wait for someone to publish an upgraded binary, you can compile it yourself, instead. This might give you a slight edge for security vulnerabilities.

> you always have to load the whole library into memory

Not really. You do have to mmap it, but it can be demand-paged (executables are handled this way on most modern systems, which is why compressed executables are usually a bad idea). IIRC, what saves time is mostly not having to do the actual linking part where the references are resolved. This can be precomputed and stashed in the binary (an optimization well-known to Gentoo+KDE users), but that confuses some package managers, breaks some uses of dlopen()/dlsym(), and has issues with ASLR.

Modern compilers?!

Static linking was already like that in MS-DOS compilers.

> Modern compilers?

I was being partially ironic. It seems that the common assumption is still that you (statically) link in the whole library. Then, of course, binaries get really huge. But when you link in only what's necessary, the overhead is probably relatively small (when was the last time you used all of libc?).

The other thing is (which you can see in this thread, as well) that people seem to think you can only do things the way we are doing them now, without ever questioning whether these things are still appropriate and how they originally came into existence. ("There has to be dynamic linking", "we have to use virtual memory", "there have to be at least 5 levels of caches", etc.)

To my knowledge, all the reasons regarding saving space, security, and maintenance were all made up after the fact (and aren't necessarily true, even (or especially) with modern implementations). Originally, dynamic linking was intended for swapping in code at runtime (was it Multics or OS/360?), which you can't do anymore today.

Furthermore, dynamic linking (as it is done today) is really complex. In contrast, static linking is much simpler (=> fewer bugs/security holes). I think we should reconsider if the overhead is worth it or not (do you really care whether your binaries make up 100MB or 200MB on your 1TB HDD?).

For embedded devices: yes, space does matter, but you probably don't run a full-fledged Ubuntu desktop on your IoT device, anyway. You use different approaches (e.g. busybox, buildroot, etc.).

Because people like to complain more than they like to actually build a usable alternative.

Edit: here's a great example from one of the links in the other comment:

suckless complaining about "sysv removed" in systemd. The link takes you to this changelog entry:

"The support for SysV and LSB init scripts has been removed from the systemd daemon itself. Instead, it is now implemented as a generator that creates native systemd units from these scripts when needed. This enables us to remove a substantial amount of legacy code from PID 1, following the fact that many distributions only ship a very small number of LSB/SysV init scripts nowadays."

So, code was removed from the init daemon itself and moved into a standalone utility that does one specific job.

Systemd is now both being blamed for bloating init, and for splitting functionality out into a separate tool that does one thing.

runit on void linux runs great. it's tiny, easy to understand and very fast. and it doesn't try to take everything over.

> Because people like to complain more than they like to actually build a usable alternative.

More like people have had perfectly usable alternatives but now the hivemind is more or less forcing something else onto them. I don't need to build a new init system, I have one that works, thank you. Please don't give me systemd.

The one I had did not work so I am happy to have systemd now.

They built many. runit, s6, nosh, bsdinit, openrc.

But yeah, it is being blamed for bloating init, because for one thing, cron isn't init's job. And that's just the start.

> cron isn't init's job.

Which is why other platforms started moving from cron to init years ago?


> Note: Although it is still supported, cron is not a recommended solution. It has been deprecated in favor of launchd. [1]


> cron has had a long reign as the arbiter of scheduled system tasks on Unix systems. However, it has some critical flaws that make its use somewhat fraught. [...] cron also lacks validation, error handling, dependency management, and a host of other features. [...] The Periodic Restarter is a delegated restarter, at svc:/system/svc/periodic-restarter:default, that allows the creation of SMF services that represent scheduled or periodic tasks. [2]

[1]: https://developer.apple.com/library/content/documentation/Ma...

[2]: https://blogs.oracle.com/SolarisSMF/entry/cron_begone_predic...
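On the systemd side, the same move away from cron is the timer unit. A sketch (unit names and schedule invented), which activates a matching backup.service when it fires:

```ini
# /etc/systemd/system/backup.timer -- hypothetical replacement for a crontab entry
[Unit]
Description=Hourly backup timer

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

`systemctl enable --now backup.timer` starts it, and `systemctl list-timers` shows the schedule; unlike cron, failures and output end up in the journal attached to the service.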

Okay, so I got that one wrong. I still think it's a bad idea, but I did get that point wrong, true enough.

However, the other points I've made are still correct and accurate.

Here are a pair of links that you may have missed under the "Don’t use systemd (read more about why it sucks)" line item.



The first link reads like a sort of 95 Theses, while the second is a dissertation on why people will never get along on the subject of systemd.

I take the attribution in the first link (references to "Führerbunker" and "Führer") to mean that the author is comparing Lennart Poettering to Hitler. That's not funny, it's just very, very inappropriate.

I didn't write the linked content, nor did I choose the links.

If you feel strongly about this, consider contacting the authors of the story.

I didn't mean to imply that you did; sorry if that came out wrong.

Hating systemd is like hating Hillary Clinton at this point. It's well past time to suck it up and make peace with your next init system/President because the only viable alternative(s) are far worse.

Fortunately, picking an OS and a distro is not like voting in US presidential elections. In particular, there are more than two viable options.

And...you can actually choose.

Not only does your vote count, when it comes down to it, yours is the only vote that counts.

Eh, arguably from ecosystem effects, other people's votes count a lot too. I don't think I'd want to be the sole user of best init system in the world!

True, but you could if you wanted to.

American democracy: suck it up, you don't really have a choice.

As someone who has found runit to meet my needs I don't think I have any reason to make peace with systemd.

I feel like part of what people object to about systemd is the 'one true way to linux' thing.

Right now what I'm objecting to with systemd is that this system replaces syslog, has been created and driven by the enterprise linux distro, with full-time experienced linux devs, and has been released and used in production for years...

... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.

That it "replaces" syslog (you can still have it forward to syslog if you insist) is one of the best parts. After getting used to journald, I have no desire to ever go back to dealing with syslog.

> ... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.

What are those "dirty hacks"? You can trivially use logstash or similar, or you can forward log entries to a remote syslog-compatible endpoint. Incidentally, that's the same thing people usually do with syslog.
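Concretely, a stock journald can hand everything to the local syslog daemon with one setting, and relaying to a central server is then whatever that daemon already does:

```ini
# /etc/systemd/journald.conf
[Journal]
ForwardToSyslog=yes
```

Newer systemd versions also ship systemd-journal-upload and systemd-journal-remote for shipping the journal itself over HTTP, though whether those are built in depends on the distro.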

Is switching to a BSD the move to Canada option? I know several people who have migrated to various BSDs because of the ugliness of systemd.

Come on in, the water's fine here! Honestly, I use FreeBSD/OpenBSD for everything I need and anytime I have to deal with some linux monstrosity it's like taking a day trip from Toronto to Detroit.

Maybe they're far worse to you..

You also have the option of creating something yourself if everything sucks so much.

I cannot even begin to describe how silly your comment is. Since when were politicians even comparable to programs? Do we "elect" an init system, as one nation united under Torvalds?

I'm hoping I was just trolled by an HN-flavored Markov Chain.

> Since when were politicians even comparable to programs?

Since people learned the power of the metaphor.

> Do we "elect" an init system

For some distros? Sure. By its nature, Linux, GNU and the open source software that goes into the ecosystem allow people to create new distributions, or choose one of the many that exist. This choice is, in some small way, like a vote. If systemd were really that bad, enough people would work around it to make its adoption much more problematic.

If you want more than that, some distributions literally vote on features like this, and have voted specifically on systemd[1].

> I cannot even begin to describe how silly your comment is. ... I'm hoping I was just trolled by an HN-flavored Markov Chain.

That doesn't seem very constructive.

1: https://lists.debian.org/debian-ctte/2014/02/msg00294.html

I do retract my complaint about comparing politicians to programs. In its place, I complain about the process of electing a President being different from voting on an init system.

The most important point here is that distributions vote on which init system they elect. We are not all electing one init system to rule them all, across Linux. Distributions are nation-states of varying size that follow similar but sometimes incompatible rules, all derived from the same core tenets and program. So we're electing governors from the same political parties, more or less.

I think telling people to suck it up and just accept systemd as their one true init system is just silly. Regardless about how you feel about Clinton, there are always reasons to use something else.

If you need a barebones system, or something for experimentation, or something that is hardened at the price of flexibility, that is an applicable choice, and one you can make from the comfort of your own home. You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.

And I could go on and on. But you're right; I suppose I could, in the end, begin to describe how silly that comment was. Even if the explanation ended up being really unwieldy and not my best writing. It might not have been wholly constructive either, but we're generally all here to have a good time.

My point is, it's a silly, leaky metaphor. And telling people to suck it up and use an actually useful tool in the comments for a distribution that's written as an elitist hobby project is similarly silly. These people aren't picketing your Debian or Arch systemd parties. They're just doing their own dang thing.

All metaphors and similes are leaky. The point is to focus on the ways it works and doesn't work, because each has the possibility to expand your thinking on a topic. The original comparison could have only worked in a singular facet, yet that would still make it a valid, correct and possibly useful simile. Here you've expanded on some ways the two things are different, which is also generally the point of using an analogy, in that it promotes that thinking as well.

> You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.

Well, you can (in that you can fork the rules and structures), it's just that finding the resources (people and location) to make use of this new government is hard, because we are currently resource constrained. In the past, when land was plentiful, this happened. It happened to some extent with the Pilgrims (although it was mostly a separation from the prior church, not the government, though I don't doubt it was also viewed as a partial separation from the government due to the distances involved). If we start colonizing Mars at some point, I'm pretty sure there will be some more separatist movements and forking of governments.

Another way to look at this is that you can fork the government right now, you just can't supersede the rights of the current government you are part of. To follow the resource and forking metaphor, you can virtualize governments to your heart's content, but in cases where your rules conflict with the host government, you can emulate the result but you can't enforce it. That is, Ring 0 doesn't care what you think you can do, the rules are the rules.

Actually, it was a simile, not a metaphor.

Yeah, I'm aware, and actually thought of that while writing the comment, and specifically chose metaphor. I think it still worked better to use metaphor because I think that's the more common way to relate the items in question, and being the more abstract of the two, metaphors obviously allow for similes.

In the Linux ecosystem, generally you use whatever the majority supports, or if you use an alternative you assume responsibility for supporting it yourself. Since the majority of distros, and soon the majority of upstream, are supporting systemd, what do you think is going to be used by most commercial Linux deployments?

> Do we "elect" an init system, as one nation united under Torvalds?

Debian selected it via an election.

See https://news.ycombinator.com/item?id=11834348 for an interesting contrast.

don't bring politics in here.

the god damn stuff is everywhere and it doesn't need to be.

That's not true. Unlike presidential elections, we don't all have to make the same choice. openrc, runit, s6, nosh, bsdinit... there are plenty of choices that are better.


You can't comment like this here. We ban accounts that continue to violate the guidelines this way.


Oh, I didn't drink the liberal koolaid. Feel free to ban my account.

Because it's a Windows monolithic approach to startup, shutdown and dependency management, as well as being a poor copy of Solaris' service management facility, smf(5).
