First it was "just an init to replace SysV", something I could get behind, and back in 2012 or so I was actually excited about it. Then it started growing, replacing individual components of GNU/Linux with a monolithic mega-app that has more in common with Windows NT-based OSes than with anything UNIX-like. Gone is the philosophy of "do one thing and do it well", replaced with "do everything, no matter the quality of the results".
I've been a Slackware user since I started messing with Linux in the late 90s, and these days I find it getting faster and better while mainstream Linux distros slow down and grow more and more bugs. One of my benchmark systems for observing the growing bloat of modern OSes is an Atom-based netbook from around 2010. It shipped with Windows 7 Starter, which it ran acceptably, if not well.
Recently I tested Windows 10, Slackware 14.2, Ubuntu 14.04, Ubuntu 16.04, Debian unstable, OpenBSD, and Elementary OS Loki on it. Slackware was the fastest OS on it by a wide margin, followed by OpenBSD, then Debian, Ubuntu 14.04, Elementary, Windows, and Ubuntu 16.04 dead last. Guess which of those (not counting Windows) do not have systemd? Yep, Slackware and OpenBSD. Maybe it's a coincidence, but given how Ubuntu 16.04 on my modern workstation gets progressively slower with each systemd update, whereas Slackware on the same machine continues to chug along with no issues, that's telling.
All of that said, systemd was and maybe still is a good idea, if only they would stop trying to reinvent the wheel and instead fix the spokes they broke along the way. I can't say I'm happy about the erosion of the UNIX philosophy in Linux, but if systemd is the future of Linux then it damn well needs to be a stable future.
Sometimes shouting "UNIX philosophy" in GNU/Linux forums reminds me of emigrants who keep alive traditions of their home countries that have long gone out of fashion back home.
Go drool some more over Cantrill, will you?
The other systems still fundamentally do init's job, as well as managing services. What they DON'T do is everything else, such as:
- handle dynamic device files (udev's job)
- set the hostname
- anything I forgot
Those "traditions" you're speaking of not only are not outdated; they were never fully realized to their ideal outside of Plan 9, which attempted to make everything accessible through file APIs.
The suckless crowd is not about reproducing the original Unix. They are about carrying the torch of that philosophy; the original Unix was just the beginning, not an end in itself.
Here's an example of software from suckless that follows Plan 9:
Those who worship Plan 9 as the pinnacle of UNIX culture should be aware of what its authors actually think about UNIX.
"I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!
I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy. "
The world isn't a PDP-11 anymore.
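Incidentally, the limit Pike complains about is easy to query yourself; this assumes a Linux box with the standard POSIX `getconf` utility:

```shell
# POSIX only guarantees 4096 bytes for the argument list; Linux allows far more.
getconf ARG_MAX

# On modern Linux the ceiling is derived from the stack rlimit
# (argv + environment may use up to 1/4 of it):
ulimit -s
```

So the limit is no longer a fixed 512-byte block, but it is still a limit, which is Pike's point.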
Re: systemd itself, I couldn't care less about the bells and whistles, but every time I go back to fiddle with a SysV init script, I yearn for either upstart or systemd...
Besides, it wasn't a test of DE performance alone, it was a combination of factors including boot time, script run time, video encode/decode, build from source time, and so on. Yes, DE performance was also a metric, and for fun I did load Unity on both 14.04 and 16.04 just to see what would happen. If I were basing it on DE performance alone and used the default DE for each distro, both Ubuntu versions would be the slowest by far.
Also, 3D acceleration was not an issue, the Intel video hardware in that machine is fully accelerated in Linux and OpenBSD.
> We have been working hard to turn systemd into the most viable set of components to build operating systems, appliances and devices from, and make it the best choice for servers, for desktops and for embedded environments alike. I think we have a really convincing set of features now, but we are actively working on making it even better.
I'm pretty sure I've seen other posts, but my google-fu is a bit weak.
So the expansion was part of the plan from nearly the beginning (for good or ill).
Since then I've followed its progress, and while my overall impression remains slightly negative, I'm hoping it improves to the point that it is stable and mature enough for daily use. Until then, I happily run Slackware for serious work and Windows 10 for games.
To be fair, that's pretty much the definition of an init system, innit?
Of course Linux took off, and it sort of reset everything back to stone knives and bearskins. But systemd itself is modelled on Solaris SMF, which is world-class industrial grade service management for large server deployments.
Appeals to the "Unix Philosophy" are the province of reactionary greybeards. Unix philosophy means nothing in the modern era.
Also note that I did have to tweak OpenBSD a little to get it on par with Slackware on the desktop, though the stock install is still faster than the more "modern" Linuxen for most tasks.
And for those who wonder why I do all of this: It's a hobby. It's more fun than watching TV on my off days, and it keeps me up to date on the latest goings-on in the OS world.
This is what I was after. I can't imagine ffmpeg running slower just because of systemd or unity. But yeah, if you're running on a 2010 netbook I wouldn't be surprised if it ran better under Xfce.
I migrated from 14.04 LTS to 16.04 recently. I am using a NAS drive. After my do-release-upgrade -d, the network no longer worked because of a systemd dependency-cycle problem. I had to learn how to write systemd unit files to describe remote filesystem mounts, and documentation on systemd was not easy to find.
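For anyone hitting the same wall: a remote filesystem can be described with a `.mount` unit. This is a minimal sketch assuming an NFS share; the server address, export path, and mount point are all made up:

```ini
# /etc/systemd/system/mnt-nas.mount
# The file name must match the mount point: /mnt/nas -> mnt-nas.mount
[Unit]
Description=NAS share
Wants=network-online.target
After=network-online.target

[Mount]
What=192.168.1.10:/export/share
Where=/mnt/nas
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now mnt-nas.mount`. The `_netdev` option plus the `network-online.target` ordering is what keeps a network mount from being attempted before the network is up, which is one common source of boot-time dependency cycles.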
When my computer enters sleep mode, I can wake it with a press of Enter. The next time it enters sleep mode, I cannot wake it up at all.
My system used to boot in high resolution. Now it uses huge fonts that make the boot messages impossible to read (25 lines on a 23" screen!). I still do not know how to fix it.
It may not be only the fault of systemd, but migration from 14.04 LTS to 16.04 LTS was a very bad experience for me.
And systemd should be an improvement - it started after upstart, and hit production well after upstart :)
Windows 10 will not be left behind! I mean, ahead!
Microsoft recently pushed out the Anniversary Update, which made at least my Win10 laptop noticeably slower waking up - as in an extra 10 or 15 seconds - and generally more sluggish here and there.
(How convenient, 400 million PCs need an upgrade now. Mwahahaha.)
I haven't noticed any slowdown on my Ubuntu 16.04. I wouldn't even know about the controversy except that people keep mentioning it.
You know, it's funny, I never said that, in fact I said once it matures systemd is actually a good idea.
Perhaps you're projecting a bit?
"Executing statically linked executables is much faster" ...
"Statically linked executables are portable" ...
"Statically linked executables use less disk space" ...
"Statically linked executables consume less memory" -- http://wayback.archive.org/web/20090525150626/http://blog.ga...
For static executables, the same dependent library has to be linked into every binary. Maintenance is a pain in the neck.
If I remember correctly, the argument goes something like this: modern compilers, i.e. something as recent as the Plan 9 toolchain or a GCC version from this millennium, usually compile in only the necessary code with static linking, not whole libraries. With dynamic linking, you always have to load the whole library into memory, which supposedly pays off only with heavily used libraries such as libc (e.g. think about how many libraries used by Firefox/Chromium are also used by other programs).
So the hope is (combined with a general striving for small programs) that, since text pages are shared between processes and statically linked programs include only the absolutely necessary code, you end up with a smaller memory footprint.
(I'm not sure whether you save disk space, but I don't think that would be a problem nowadays. Heck, look at Go binaries.)
And I guess the linker could do more whole-program optimization on a statically linked program, since all the code is available.
> For static executables, the same dependent library has to be linked into every binary. Maintenance is a pain in the neck.
Generally you would want a proper build system. In the case of StaLi, they have one global git repository (/.git). An update is simply "git pull && make install".
I don't know if this process is slower or faster than binary updates, but if they strive for small programs/binaries, then I guess it doesn't matter as much.
Source-based distributions, such as Gentoo, have the advantage that you don't have to wait for someone to publish an upgraded binary; you can compile it yourself instead. This might give you a slight edge in responding to security vulnerabilities.
Not really. You do have to mmap it, but it can be demand-paged (executables are handled this way on most modern systems, which is why compressed executables are usually a bad idea). IIRC, what saves time is mostly not having to do the actual linking part where the references are resolved. This can be precomputed and stashed in the binary (an optimization well-known to Gentoo+KDE users), but that confuses some package managers, breaks some uses of dlopen()/dlsym(), and has issues with ASLR.
Static linking was already like that in MS-DOS compilers.
I was being partially ironic. The common assumption still seems to be that you (statically) link in the whole library. Then, of course, binaries get really huge. But when you link in only what's necessary, the overhead is probably relatively small (when was the last time you used all of libc?).
The other thing is (as you can see in this thread, as well) that people seem to think you can do things only the way we are doing them now, without ever questioning whether these things are still appropriate and how they originally came into existence. ("There has to be dynamic linking", "we have to use virtual memory", "there have to be at least 5 levels of caches", etc.)
To my knowledge, the reasons regarding saving space, security, and maintenance were all made up after the fact (and aren't necessarily true, even (or especially) with modern implementations). Originally, dynamic linking was intended for swapping in code at runtime (was it Multics or OS/360?), which you can't really do anymore today.
Furthermore, dynamic linking (as it is done today) is really complex. In contrast, static linking is much simpler (=> fewer bugs/security holes). I think we should reconsider if the overhead is worth it or not (do you really care whether your binaries make up 100MB or 200MB on your 1TB HDD?).
For embedded devices: yes, space does matter, but you probably don't run a full-fledged Ubuntu desktop on your IoT device anyway. You use different approaches (e.g. busybox, buildroot, etc.).
Edit: here's a great example from one of the links in the other comment:
suckless complaining about "sysv removed" in systemd. Link takes you to this changelog entry:
"The support for SysV and LSB init scripts has been removed
from the systemd daemon itself. Instead, it is now
implemented as a generator that creates native systemd units
from these scripts when needed. This enables us to remove a
substantial amount of legacy code from PID 1, following the
fact that many distributions only ship a very small number
of LSB/SysV init scripts nowadays."
So, code was removed from the init daemon itself and moved into a standalone utility that does one specific job.
Systemd is now both being blamed for bloating init, and for splitting functionality out into a separate tool that does one thing.
More like people have had perfectly usable alternatives but now the hivemind is more or less forcing something else onto them. I don't need to build a new init system, I have one that works, thank you. Please don't give me systemd.
But yeah, it is being blamed for bloating init, because for one thing, cron isn't init's job. And that's just the start.
Which is why other platforms started moving from cron to init years ago?
> Note: Although it is still supported, cron is not a recommended solution. It has been deprecated in favor of launchd. 
> cron has had a long reign as the arbiter of scheduled system tasks on Unix systems. However, it has some critical flaws that make its use somewhat fraught. [...] cron also lacks validation, error handling, dependency management, and a host of other features. [...] The Periodic Restarter is a delegated restarter, at svc:/system/svc/periodic-restarter:default, that allows the creation of SMF services that represent scheduled or periodic tasks.
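For the record, systemd's equivalent is timer units. A hypothetical nightly job might look like this pair of files (the unit names and the script path are made up):

```ini
# /etc/systemd/system/nightly-report.service  (the unit the timer fires)
[Unit]
Description=Generate the nightly report

[Service]
Type=oneshot
ExecStart=/usr/local/bin/make-report

# /etc/systemd/system/nightly-report.timer  (the schedule itself)
[Unit]
Description=Run nightly-report at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now nightly-report.timer`. Unlike a crontab line, the job can declare dependencies, its output lands in the journal, and `Persistent=true` catches up on runs missed while the machine was off - exactly the "validation, error handling, dependency management" gap the SMF docs complain about.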
However, the other points I've made are still correct and accurate.
The first link reads like a sort of 95 Theses, while the second is a dissertation on why people will never get along on the subject of systemd.
If you feel strongly about this, consider contacting the authors of the story.
Not only does your vote count, when it comes down to it, yours is the only vote that counts.
I feel like part of what people object to about systemd is the 'one true way to linux' thing.
... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.
> ... and still doesn't have functional centralised logging ability. People have to use dirty hacks to make it work. This is my current headache.
What are those "dirty hacks"? You can trivially use logstash or similar or you can forward log entries to a remote syslog-compatible endpoint. Incidentally the same that people usually do with syslog.
You also have the option of creating something yourself if everything sucks so much.
I'm hoping I was just trolled by an HN-flavored Markov Chain.
Since people learned the power of the metaphor.
> Do we "elect" a init system
For some distros? Sure. By its nature, Linux, GNU and the open-source software that goes into the ecosystem allow people to create new distributions, or choose one of the many that exist. This choice is, in some small way, like a vote. If systemd were really that bad, enough people would work around it to make its adoption much more problematic.
If you want more than that, some distributions literally vote on features like this, and have voted specifically on systemd.
> I cannot even begin to describe how silly your comment is. ... I'm hoping I was just trolled by an HN-flavored Markov Chain.
That doesn't seem very constructive.
The most important point here is that distributions vote on which init system they elect. We are not all electing one init system to rule them all, across Linux. Distributions are nation-states of varying size that follow similar but sometimes incompatible rules, all derived from the same core tenets and program. So we're electing governors from the same political parties, more or less.
I think telling people to suck it up and just accept systemd as their one true init system is silly. Regardless of how you feel about Clinton, there are always reasons to use something else.
If you need a barebones system, or something for experimentation, or something that is hardened at the price of flexibility, that is an applicable choice, and one you can make from the comfort of your own home. You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.
And I could go on and on. But you're right; I suppose I could, in the end, begin to describe how silly that comment was. Even if the explanation ended up being really unwieldy and not my best writing. It might not have been wholly constructive either, but we're generally all here to have a good time.
My point is, it's a silly, leaky metaphor. And telling people to suck it up and use an actually useful tool in the comments for a distribution that's written as an elitist hobby project is similarly silly. These people aren't picketing your Debian or Arch systemd parties. They're just doing their own dang thing.
> You can't fork the US or an individual state in the same way you can download a different distro to your Raspberry Pi.
Well, you can (in that you can fork the rules and structures); it's just that finding the resources (people and location) to make use of this new government is hard, because we are currently resource-constrained. In the past, when land was plentiful, this happened. It happened to some extent with the Pilgrims (although it was mostly a separation from the prior church, not the government - though I don't doubt it was also viewed as a partial separation from the government, given the distances involved). If we start colonizing Mars at some point, I'm pretty sure there will be some more separatist movements and forkings of government.
Another way to look at this is that you can fork the government right now, you just can't supersede the rights of the current government you are part of. To follow the resource and forking metaphor, you can virtualize governments to your heart's content, but in cases where your rules conflict with the host government, you can emulate the result but you can't enforce it. That is, Ring 0 doesn't care what you think you can do, the rules are the rules.
Debian selected it via an election.
The goddamn stuff is everywhere and it doesn't need to be.