And the stated objective? "To reduce boot times".
The best way to reduce boot times is to not boot. The reason I reboot systems is to return them to a known good state (or, very rarely, to perform a kernel upgrade).
On server hardware, I perform boots infrequently, and really, really, really want them to work right.
On end-user hardware, I perform boots infrequently, preferring to use suspend/restore to quiesce my systems (suspend to RAM, occasionally suspend to disk). That is a process which I'd like to have very thoroughly debugged and which shouldn't give me any unhappy surprises (say: crashing my video/interactive session, or losing track of drivers/hardware, especially wireless).
Systemd is the wrong answer to the wrong problem.
Much better written than I could put it:
Systemd loses the huge transparency of shell scripting, and puts you in the position of needing to acquire a novel skill at the one time you least need to be learning and most need to be applying: when your systems won't boot straight.
I'm also not much surprised that Red Hat, who've had such a historic problem with consistency and reliable dependency management within their packaging system (as compared to Debian/Ubuntu) are proponents of this technology (hint: it's not the package format, it's the policy, or lack thereof). And now Arch.
For launchd, the service description is a few declarative entries: on OS X, ssh.plist is 37 lines, but only because XML plists are really verbose; it could be half that in a saner format. On my Debian system, /etc/init.d/ssh is 167 lines of almost entirely boilerplate shell script that has to be maintained separately for each service (and that isn't even enough to make the script standalone; it invokes the 1400 line start-stop-daemon). The only thing simpler about SysV init is that it's the legacy everything is compatible with: the simplicity of shell scripts doesn't hold up when you need over 100 lines for a simple daemon.
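For comparison, the same service as a declarative systemd unit would be on the order of a dozen lines. A sketch (paths and options are illustrative, not any distro's actual shipped unit):

```ini
# sshd.service -- illustrative, not a distro's real unit file
[Unit]
Description=OpenSSH server daemon
After=network.target

[Service]
ExecStart=/usr/sbin/sshd -D
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

About 12 lines of configuration, versus 167 lines of boilerplate shell.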
launchd itself is many thousands of lines of code (too much?), but it provides cron and inetd-like services (i.e. generalized on-demand services - it is really nice to know that a daemon has zero effect on my system, no pages that had to be loaded from disk, when it's not being used, but still operates efficiently when it comes under load; this also makes the implementation for the daemon simpler in some cases), as well as automatic process termination/restarting. Its service-on-demand focused dependency model is nondeterministic in the same way that systemd is (?), but it's completely reliable, since it's standard by now so everything is designed to work with it.
Of course I usually use suspend and restore, but making rebooting really fast makes the system more fun to use.
And yes, I'm talking about launchd, not systemd, but from what I've heard systemd is pretty similar in design and goals.
That's a failure on Debian's part, not a fundamental flaw of init. Guess what the equivalent looks like on OpenBSD?
More importantly, init only handles starting and stopping services; it doesn't manage them, e.g. restarting them when they crash. Systemd can do that. The socket activation stuff also allows one to potentially save resources by not starting services until they're really needed.
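A sketch of what those two features look like in practice (the service and socket names here are hypothetical; `Restart=` and `ListenStream=` are real systemd directives):

```ini
# foo.service -- supervised: systemd restarts it if it crashes
[Service]
ExecStart=/usr/local/bin/food
Restart=on-failure

# foo.socket (a separate file) -- systemd holds the listening
# socket and only starts foo.service on the first connection
[Socket]
ListenStream=8080
```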
The best way to reduce boot times is to not boot? Have you ever heard of "laptops" and "average users"? Even on my servers, a shorter boot time is welcome.
My wish list goes something like this: better wireless drivers, improved sleep/suspend/hibernate/resume, better power management, a better package manager, more up-to-date applications ...
At the very, very bottom of that list -- the very last item, so far at the bottom of the list that it's in danger of falling off entirely -- is "faster boot times".
Dredmorbius is spot on, at least for me and my daily usage and the couple dozen or so servers that I'm responsible for. If things are so pooched that I have to reboot it, then it doesn't really matter to me anymore whether it takes 30 seconds or a minute to start up. I would much prefer not having to reboot it in the first place.
Since Chakra was (I think) forked from Arch Linux, I'll have to check and see if they're gonna do this too.
I hope not.
(edit: none of this is intended as a criticism of Chakra's development team, who have been doing an amazing job of putting together a system that, despite its warts, I genuinely enjoy using every day.)
systemd can also ensure that only services you actually use get started. For example, printing is done as a server on Linux (CUPS), so systemd can ensure it doesn't start until you need it. This reduces power consumption.
Because of the way systemd manages services it can also do a better job of isolating them and dealing with unexpected issues. For example if the print server crashes, or someone attacks it while in Starbucks you'll be better off. (Its chrooting is easier to use, as well as the way things are put into control groups.)
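That isolation can be requested declaratively in the unit file. The directives below are real systemd options; the jail path is made up for illustration:

```ini
[Service]
ExecStart=/usr/sbin/cupsd -f
RootDirectory=/var/jail/cups   # chroot the daemon, no wrapper script needed
CPUShares=512                  # cgroup CPU weight relative to other services
```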
All the things you list require developer time and attention. If systemd lets developers spend less time on startup scripts, then they will have more time to devote to the things on your list. (If you've ever had to write startup scripts you'll know how long it takes to develop and debug them.)
There's already hotplug support, xinetd, ifupdown's pre/post up/down stanzas, and the like (though networkmanager's screwing that bit up wonderfully). Chroot jails too. I'm not saying that these are perfect (and some are a very pale shadow of perfect indeed), but they're independent of init.
Systemd mashes a whole bunch of crap into one place. Most of which I really don't want to have to worry about.
Now, if Arch and Fedora want to serve as test beds for this stuff -- and either perfect them or reject them as nonviable, well. Yeah, I suppose I can live with that. Though I'm definitely not a fan.
You see, there's a few things here.
For my own systems, I really like not having to fuck with useless shit. Currently I'm managing networking manually on my laptop as NetworkMangler has gone to crap again. So I run "ifconfig" and "route" from a root shell (yay for shell history and recursive reverse search).
For servers, part of my performance evaluation is based on how many nines I can deliver. Not having shit get fucked up does really nice things to my nines. Having shit change does crap things to my nines. I like my nines. I really hate change. It's an ops thing. Where I've got to have change, I like to have it compartmentalized, modularized, with loosely-linked parts and well-defined interfaces.
Startup scripts are very much a solved problem. Debian gives you a nice template in /etc/init.d/skeleton. Play with it. Yes, I've written startup scripts.
You may enjoy micro-managing your networking etc - good for you. Some of us don't like doing that. To give one example of stuff that certainly doesn't just work, I was trying to run Squid on my (Ubuntu) laptop and it certainly can't handle state changes well, and neither can Ubuntu's ifup/down and init system. I often ended up having to manually do stuff that the system should have been able to handle well.
I'm personally delighted with systemd's functionality - the way it captures output from services would have saved me hours in the past, with services that wouldn't start up cleanly while providing no useful information as to why.
(Separately: my kingdom for a simple caching web proxy server)
As a result, there are far fewer differences now between distributions, which means configuration becomes easier.
In any case, if you really care about things not changing, then I assume you're using a distribution which doesn't change this suddenly. So I don't see why you're so awfully negative.
But forget about us, we're already converts, we don't matter. My grandfather (93 years old) is also a daily laptop linux user. When he presses that power button, that laptop better be booted and ready /yesterday/. And when he pushes it again, it better be off before he closes the lid. Slow startup and shutdown times are simply not an acceptable user experience; they are literally the difference between enjoying and wanting to use the computer, and not wanting to bother with it.
And don't think for a minute he's going to learn about suspend, hibernate, power savings, battery life, or whatever. It's just not going to happen. His laptop lives in the closet, so it's going to be off (either by his doing, or the battery running out). When he sees something on tv and wants to read about it, he takes the laptop out, plugs it in, and turns it on. If it's not ready for him when he's ready for it (i.e. now) then he just won't use it.
However, since I've got that sucker booting from power button to firefox home page load complete in under 7 seconds, he uses it all the time. And it's amazing how it enriches his life. You simply can't get computer use to penetrate into lives like his without fast booting and an easy user experience.
Light press: hybrid suspend suspends to RAM, also saves state to disk -- system spins down quickly and, so long as it's not been hibernating long enough to drain battery, restores in a second or so. Longer and it will do a boot/restore from disk.
Long press: powerdown.
Many devices have separate "suspend" and "poweroff" hardware (or soft controls) as well.
The OS and tools do all the magic bits.
To the non-enthusiast / casual user, closing the lid, pressing the power button, doing a system shutdown, inactivity sleep timeout, and the battery running out are all the same thing: the computer was "on", now it's "off". Asking someone like this to think about how the reason it came to be "off" affects how fast it will be ready for them later is a fool's errand. It needs to be fast in every circumstance.
Normal people just want to get something done. They judge their computer by how easy it is to use and how fast it responds to what they do. That includes cold boots, launching programs, and loading web pages. Even if they're doing something "the wrong way", they will still judge it with the same criteria and the same harshness. I want my grandfather to use Linux because I can quickly help him and fix things from afar, and because there are very few ways for him to mess it up. He uses it because he really thinks it's better than Windows, and that's purely because it's fast and easy, every way he uses it.
For the record, I set it up so the power button does a shutdown, and everything else results in a hybrid sleep. What he understands is that he can shut it down if he wants; otherwise, no matter what happens (lid closed or not), everything will be the way he left it, even if he forgets about it for a few days or doesn't charge it.
That kind of simplicity is what allows people to think of linux as something they can use, not just some super complicated tool for "hackers" and "computer geniuses". I'm not saying it should be dumbed down or have options removed, but I am saying that making it enjoyable for everyone results in more people using it, and that benefits us all.
Honestly, I think the SSD has the most to do with it.
Use a different distro. None of these is a problem on a modern distro with reasonably modern hardware.
And then contrast that with the dollar figure for consultant / employee / remote hands time to figure out WTF went wrong?
There are numerous systems for managing services: monit is the best known; mon and several proprietary systems also exist. Nagios can tell you if a service is running or not (though it doesn't handle the start/stop logic).
These are small details and extensions on top of the existing SysV init foundation.
Ubuntu's boot time is already down to 8.6 seconds -- a restore from suspend is barely less than that (and restore from disk is considerably longer), though both restores preserve user state. You know, what applications / files you had open, and what was in them when you left off, positions of windows on your desktop. All that jazz.
The socket management is kind of nifty, but doesn't add a whole lot that xinetd didn't already offer (systemd does allow multi-socket services and d-bus-initiated services). I'm not convinced these couldn't be hacked into xinetd while preserving the simplicity and stability of init.
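For reference, an on-demand sshd under xinetd is roughly this (an illustrative stanza, not copied from any particular system):

```
# /etc/xinetd.d/ssh -- illustrative on-demand sshd configuration
service ssh
{
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = root
    server      = /usr/sbin/sshd
    server_args = -i
}
```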
My desktop state (and its preservation) is worth a lot more than fast boot.
Yes. I've heard of inane gratuitous questions. As I said: if you're forcing average users to reboot with any frequency, you're Doing It Wrong.
Monit does a thing that approximates managing a process, for certain values of "approximates", "managing", and "process". Supervisory process management is one of Linux's absolute weakest points. I cut my teeth on fault-tolerant HA minicomputers, and it pains me to think that 30 years later, we still don't have a way to say "make sure apache is always running. period."
As a great blog pointed out, there is exactly one process that KNOWS when a service has stopped running, and it doesn't need .pid files or polling or anything else to tell it: process 1.
I'm not a systemd advocate - I don't know enough about it, and we're using Ubuntu so I'll end up learning upstart anyway - but read this, it's way more eloquent than I can be:
Init can and does manage processes. Somewhat crudely, mostly via the 'respawn' directive. One thing it isn't particularly good at is telling if a process is doing something useful (say, serving out web pages successfully), but it will let you know that it's running. There was a semi-popular hack some years back to run sshd out of init (via respawn) to ensure you always had an SSH daemon on your box (Dustin mentions this). The downside is that while it will ensure sshd is running, it doesn't give you much flexibility over the process (you've got to edit inittab and 'init q' to make changes).
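The inittab hack in question is a single line (runlevels and the entry name are illustrative):

```
# /etc/inittab -- respawn sshd whenever it exits
ss:2345:respawn:/usr/sbin/sshd -D
# apply changes with: init q
```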
What monit and kin can do, above and beyond process-level monitoring, is check that the service attributes of a process are sane. That a webserver, say, kicks out a 200 OK response rather than a 4xx or 5xx error, and restart the service if this isn't the case. Checking for correct operation can be more useful than simply verifying a process is running (though going too far overboard in defining "correctness" can also cause problems).
For realtime/HA tools, attacking things on the single-system level is probably the wrong way to roll. You want a load balancer in front of multiple hosts with response detection -- is host A still up or not? Whether or not this ties into mitigation (restart) or alerting (notifications to staff) is another matter.
There are also places other than init you can watch things from. /proc contains within it multitudes, including a lot of interesting/useful process state. Daemons can be written with control/monitoring sockets instrumented directly into themselves. Debuggers, strace, ltrace, dtrace, and systemtap all provide resolution inside a running process/thread. Creating something sane, effective, efficient, and sufficient out of all these tools ... interesting problem.
Well, Ubuntu doesn't use SysV init either.
Also, shell scripts rock for an init system language. It's a language that almost everyone knows and can debug without being a CS major. The only reason you 'have no idea what happened' is because the scripts are written poorly, and code in any language would be hard to debug if it's written poorly.
Fork and exec, seriously? You're worried about functions that take microseconds to finish? Look again - the huge sleep cycles waiting for drivers to finish initializing take up a lot more time.
I have written my own init systems three times in three languages, and examined countless distros' versions. Trust me, shell is the best compromise.
ls -alh | wc -l returns 89. Subtracting the "..", ".", and "total" lines, that's 86 init scripts.
Big O for 86 scripts is 86 * n, which simplifies to "n". I'm not concerned.
Yeah, that's not how computational complexity works.
As far as I can see your arguments are 1) you boot your systems infrequently, so any work in that area isn't valuable 2) socket-based activation is somehow not predictable 3) you're familiar with shell scripting, so a change that replaces shell scripting with something else must be bad. 4) and then you throw in some unclear Red Hat FUD for no apparent reason. None of those sound convincing to me.
What is giving rise to the need to reboot them when you do?
Upstart and systemd provide tons and tons of other features though. Restarting of crashed processes, dependencies, etc. They also generally have much simpler config files instead of startup scripts. I don't know how many crappy startup scripts I've seen over the years, when in practice "set these environment variables, execute this program as this user with these arguments" is 95+% of what's needed.
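That 95% case, written as a unit file (the service name and values here are hypothetical; `User=` and `Environment=` are real systemd directives):

```ini
[Service]
User=www-data
Environment=LOGLEVEL=info
ExecStart=/usr/local/bin/mydaemon --log-level ${LOGLEVEL}
```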
Much much much more straightforward to have some specially formatted comments (?! hahaha, that's the UNIX spirit!) to determine the boot priority, and then source some files to read some arbitrary variables and construct the command line that you're interested in running, with complete abstraction.
The other functionality may be nice, but 1) it's got no place in init and 2) it really complicates a key piece of system infrastructure. Complexity and change are the twin enemies of stability. As an old-fart ops type, with scars on my hide and notches on my belt, I really hate both change and complexity. They mess with my nines.
Arch and Fedora are relatively wide of my usual ambit, but I've learned in my years to be wary of what others ask for -- you may get it and have to live with the consequences (see: GNOME).
So. Yeah, I'm pretty skeptical.
This results in way more users and developers looking at systemd. As a result, fewer bugs.
But since the Mac mini does not have a battery, S3 sleep mode does not survive unplugging the device. And since suspend-to-disk is not supported by the OS I run, shutting down is the only option.
P.S., I would have preferred something like a Mac mini, but with a small battery that powers S3 sleep mode. Sadly, I could not find anything like that on the market.
P.P.S., I run OS X on it. If I were to switch to Linux, would suspend-to-disk work reliably?
I'm a fan of small form-factor systems, though I suspect we'll start seeing these as G3 tablets (where the iPad was G1, and the current Android-and-others are G2). Which is to say, devices with integrated display and battery, to which other peripherals may be attached (physically or wirelessly, say, by Bluetooth). That said, we're not there yet.
And yes, small form-factor PCs (CPU, no battery, no display) are pretty slick. I'm something of a fan of the FitPC offerings: http://www.fit-pc.com/web/purchase/order-direct-fit-pc3/ (Googling "small form factor" will show you numerous other vendors).
I used a similar configuration under Linux for a time, and as of the mid-2000s found suspend-to-disk worked pretty reliably, though not perfectly. In the past 4-5 years on laptops and desktops, I've had very few problems, mostly traceable to display drivers.
Systemd also does much more than that and handles stuff like daemonization and socket creation, so that these things don't need to be re-implemented in every program that requires them.
Bash scripts are overly verbose, repetitive, and awkward in comparison to unit files.
And you can always use sysvinit if you still aren't convinced, just Arch will be optimised for systemd.
Frankly, it's getting old.
If not being portable means way less time spent on development, then some people might choose that. Good for them.
DEC. Oh, wait, that didn't work out so well, now did it?
Software has bugs.
Core, deep systems software has subtle bugs, or hidden bugs, or emergent bugs, or any of a whole host of things.
If arch and fedora want to ride this tiger, I guess they can.
Again: init is really, really stable stuff.
Add in hooks to journald, d-bus, and the equivalent of an xinetd replacement/upgrade. Too much change.
And a Really Bad Attitude from the developer. My experience (a few decades of beating around on various tech at various scales) says this doesn't bode well.
On the other hand, this would be nice: "There is no tool that will print out a dependency map." It's also pretty trivial to implement with a little shell script and graphviz.
systemctl dot | dot -Tsvg > systemd.svg
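For SysV systems, a rough sketch of that shell-plus-graphviz idea, assuming Debian-style LSB "Required-Start:" headers in the scripts (the directory is a parameter so it isn't hard-wired to /etc/init.d):

```shell
# emit_deps DIR: print a Graphviz digraph of init-script dependencies,
# read from LSB "# Required-Start:" header lines in the scripts in DIR.
emit_deps() {
    dir=$1
    echo "digraph deps {"
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        svc=${f##*/}
        for dep in $(sed -n 's/^# Required-Start:[[:space:]]*//p' "$f"); do
            printf '  "%s" -> "%s";\n' "$svc" "$dep"
        done
    done
    echo "}"
}

# usage: emit_deps /etc/init.d | dot -Tsvg > deps.svg
```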
Juliusz Chroboczek: A few observations about systemd
Editorial / discussion at LWN:
And for the record: I'm not particularly much a fan of upstart, but it annoys me somewhat less than systemd.
I understand that this feature isn't important to dredmorbius, but to some of us this type of improvement is fantastic.
I work with a fair number of embedded systems myself. Most avoid full boots where possible.
I haven't seen or heard of any embedded apps which hibernate to flash. That would not be terribly fast, and would wear out the flash quickly.
A local capacitor might provide the latent power to support sleep state. Or you could provision flash with enough ECC and reserve capacity (a 16 GB microSD drive fits on my pinkie nail) to survive years. Might even make swapping the storage a regular maintenance item, say 5-year cycle. Figure a high-end duty-cycle of 10 starts/day, 365 days/year -- that's 3650 read/write cycles a year. Even if that's a 100x low estimate, we're talking 365,000 cycles/year (that's assuming 1000 starts/day). As of 2003, AMD were discussing 1,000,000 cycle lifetimes for flash storage: http://www.spansion.com/Support/Application%20Notes/AMD%20DL...
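Running the duty-cycle arithmetic above (a trivial sanity check, nothing more):

```shell
# Back-of-envelope flash wear arithmetic from the estimate above
starts_per_day=10
cycles_per_year=$((starts_per_day * 365))    # 3650
pessimistic=$((cycles_per_year * 100))       # 365000, the 100x-low case
echo "$cycles_per_year cycles/year; $pessimistic if off by 100x"
```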
Actually, in five years, controller technology would likely advance enough that, provided your unit production count is high enough, you'd just swap the entire controller for a new component with enhanced capabilities.
What about those that can't debug when a shell script breaks?
Your answer is going to be that they have no business administering a server where a shell script is an integral part of the system working.
Conversely, someone who can't debug a systemd-based system that won't start has no business administering a server where systemd is an integral part of the system working. If your system uses systemd, then you're going to need to learn a new tool. Get used to it.
On which point, specifically: when Debian breaks during initrd execution, the system is dumped to a shell, "dash", a POSIX-compliant shell. It doesn't have all the niceties of bash, but it's usable.
When a Red Hat system breaks during initrd execution, the system shell doesn't handle terminal IO. You literally can't even fucking talk to the damned thing. It's a scripting-only shell.
The kicker: the RHEL initrd shell is larger than dash.
Guess which of these two systems is easier to troubleshoot / debug / rescue in a pinch?
the system shell doesn't handle terminal IO
the RHEL initrd
dpkg-reconfigure -p low dash
Are they more able to debug when a systemd setup breaks? If not, it seems like a moot point to bring up. They're hosed either way.
Although I do have to say, I like the systemd model. The use of sockets to do process activation and thus doing away with almost all of the need for dependency management is pretty cool. I haven't used it enough to pass judgement, but the concept has the potential to be a good deal simpler than the init hackery we have now.
Yes, because no programming skills are necessary for editing systemd unit files.
Here are some reasons:
* Despite years of work on it, I find sleep on laptops on Linux is still flaky. My Thinkpad T420s failed to wake up about once a week (on Ubuntu), so I tend to shut down.
* I like having a clean desktop when I start on a morning. If I keep sleeping my machine, I just tend to gather up programs. Of course, you could argue I should get more sorted, but I don't really want to.
* One other problem you have is to do with Linux being used on both servers and desktops. I can see your problem. Personally, if my machine ever got in such a mess that it couldn't boot, I'd just reinstall, regardless of what had broken. I suspect most people are the same. However, I can understand if you want to be able to edit how your machine starts up, and fix it when it breaks.
2. You can bounce your X session. No need to reboot the full box (me? I prefer saved state).
3. My servers may be anywhere from several feet from me (stuffed into a closet with limited access and a crap POS keyboard and monitor) to tens to thousands of miles away. With varying values of ILOM / remote hands / virtual media support. "Reinstall" isn't generally a highly tenable operation. Being able to handle issues without having to dedicate one or more staff days to travel and unavailability for other tasks really sucks productivity down.
I can confirm that. After the 3rd time such a thing happened to me I switched to Ubuntu. Arch is a testing bed for people who like screwing around with Linux. Nothing against that but from time to time I'd like to be able to do actual work on my workstation :)
FWIW, Fedora moved to systemd a year or so ago.
Standard on wheezy. Allows for parallel launch of services.
I'll spend more time in hardware init (especially on servers) and fsck (even just journal replays) than in service startup, for the most part. Even my servers (minimal services starting) take a while to come up, mostly due to the actual workload stack coming up. Then caches get to warm up and all that jazz.
Boot time is still a very small part of this.
Boot times are only the third point mentioned:
Systemd has an overall better design than SysV, lots of useful administrative features, and provides quicker boot-up
I do know that there were issues maybe 10+ years ago. Bringing things up that were solved 10+ years ago is a bit pointless.
Also, your systemd summary is inaccurate. You give the impression you just don't like Red Hat, because you don't really say anything concrete about systemd aside from some very generic remarks.
Yawn, this again. Give an example, please.
My philosophy is different. Make a change to a server, reboot.
The goal is to eliminate surprises if the server is restarted unexpectedly. I'd rather have them during the maintenance window than at 03:30 after a power outage.
Anyway - to systemd.
I was appalled when we moved up to Solaris 10 and the SMF facility started to replace init scripts. It felt wrong.
I adapted. It's not wrong, it's just different. Better in some respects: you can still use bash scripts, but you have better control over them, a standardized way of managing things.
Now we're abandoning Solaris for Linux and ... I'm appalled that the default method on Linux is still ... init scripts. And a hodge-podge of stuff like djb's daemontools, systemd, etc., all with competing fanboys and advocates.
AT&T ran into a little restart issue, as I recall, in 1990 when a software upgrade gone wrong crashed much of the phone network. Among the problems were that most of the switches had been upgraded in place, many over decades, and there had never been a cold-boot restart. There was some uncertainty as to whether the system would start up properly or not.
While long uptimes are nice, I generally prefer seeing a few reboots annually just to be sure things will come up right. There's a balance between "restart for every change" and "restart regularly enough to not be surprised at 3am".
Now it's other stuff: with Chef and automated system management, repeated 'apt-get update' runs, which even with local caches and other tricks add about 120-150s per startup.
Having been on both the packaging side and the admin side, I can't imagine not abandoning the daemonize-and-PID-file paradigm. The number of packages I've seen with init scripts that don't properly stop or start the daemon, or don't check the pid file and/or subsys lock file; or daemons that don't properly chdir, or don't release an errant file descriptor: it makes me want to scream. Add in the process monitoring and full lifecycle management, and it just seems like a no-brainer decision.
There's a lot of noise coming from people saying that their laptop doesn't need it, booting isn't that slow, etc; the driving force isn't targeting laptops/desktop, it's targeting the largest use of Linux -- servers. The process management is the big win, and boot time is just a bonus.
At which point actual workload stack initialization (webserver, application server, database, caches) generally takes additional time. Depending on where you're starting from, a few seconds to many minutes or hours (DB init/restore/replication from snapshots/backups/master).
Again: the few seconds you're going to save swapping out really stable infrastructure 1) isn't the problem and 2) introduces change and complexity (and hence uncertainty and unreliability) to a very critical system component.
If the systemd team wants to drag goalposts all across the field, that's fine. I'm just going to note their original location.
If you want to build a better xinetd, or a better SysV-init-based dependency system (insserv), or an alternative (upstart), then do it. OK, upstart also fucks with init, but with a lot less whack than systemd.
As to the "I've seen poorly written init scripts" argument: on my distro of choice (Debian), package maintainers do a very good job of providing sane scripts (which are a lot easier to follow than RH's scripts, something I noticed when first cutting over to Debian), in part because the distro provides a solid SysV-init-based process with 18 years of evolution behind it, and a policy that tends to iron out occasional bouts of dumbth.
I believe though that systemd is actually targeted at the desktop. :)
"Poettering sucks! PulseAudio!"
That's not much of a technical argument against systemd, now is it? Pretty much everyone who complains about PulseAudio doesn't even know what it is; they just blame it when their audio doesn't work (usually for some unrelated reason).
"It's not deterministic!"
You're probably talking about the socket activation. That part is plenty deterministic - a message comes in for a service, that service gets started. Are the messages coming in not deterministic enough for you? You can add your own unit file that starts the service at boot, and you can even control what starts before and after the service.
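If you do want deterministic boot-time startup, a unit along these lines pins it down (the service name is hypothetical; the directives are real systemd options):

```ini
# bar.service -- started at boot, with explicit ordering
[Unit]
After=network.target
Before=multi-user.target

[Service]
ExecStart=/usr/local/bin/bard

[Install]
WantedBy=multi-user.target
```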
"Shell scripts are so simple!"
You know what's simpler than a shell script? A unit file. They are also more consistent. The various shell scripts are all written by different package maintainers and are ridiculously diverse. Some are full-featured init scripts that can send signals to the service to make it do stuff; others can't even restart the service. Unit files are so simple that it's pretty hard for any different styles to really matter.
Also, the method for enabling a service is different in all the distributions with shell scripts. Debian is update-rc.d; Red Hat is chkconfig; Arch is vi /etc/rc.conf. With everything moving to systemd, we finally have one way: systemctl.
You don't need a special program to read logs; you can use whatever syslog daemon you want; you just don't have to. I really enjoy the journal's ability to filter the logs precisely by all sorts of fields.
Actually, it's just the opposite of "replacement of files with APIs". A program's command-line is an API that you call using a program; that program is being replaced by a file.
I don't think you've actually used systemd if you claim that it "sucks the fun out of computing". I find the "systemd-analyze" and "systemctl dot" tools to be a lot of fun.
systemd opens a lot of doors for potential new errors. I agree sysvinit sucks but worse is better in this case. Ideally an init system would be a lean and smart turing complete scripting language and every feature is implemented on top of it.
> Ideally an init system would be a lean and smart turing complete scripting language and every feature is implemented on top of it.
You would probably really like NCD as an init system. I was considering doing that in an embedded system I make until systemd came around.
Several tens of thousands of lines of C code are a lot less hackable than a few lines of shell script.
> 2) When was the last time you hacked on an init script?
A few months ago, writing an intelligent battery monitor for my notebook.
NCD looks great, btw, but it is not an init system.
C compiler? Not so much. Servers (security risk, more moving parts), embedded (space/power requirements).
Perhaps this is the main problem - the people pushing technology are not willing to acknowledge that people have different use cases to them. There are complaints, and they are valid - you just do not see them as valid, because they don't concern you.
Since you brought PulseAudio up, there's a very legitimate, common complaint about it - latency. Yes, the people who care most about audio - the creators and audiophiles - are pushed aside as "not relevant," because Lennart is only interested in "appealing to the majority," not some minority fringe groups.
Really, the only reasonable solution for some people is Jack - but any hope of bridging the gap between Jack and Pulse is lost in a barrage of people pushing for a "single audio solution" - and prematurely claiming Pulse as the victor.
As a result, people write applications to only use PulseAudio, which are then unusable to someone using Jack (aside from hacks to make pulse act as a jack client). Professional audio software is still basically required to code for both Jack and Pulse/ALSA or whatnot - we're nowhere near a single audio solution.
But don't tell that to Poettering - because he is adamant that his solution is the only solution, and anyone who it doesn't suit is just playing with toys.
That's literally how he referred to Debian's choice not to push systemd, because Debian targets multiple kernels, which systemd does not support. (Which, btw, Arch does too - although perhaps not the same people.)
Have you still yet to see a valid complaint? Of course kFreeBSD is not a valid complaint to you, because you've never used it, and never plan to - it's of no concern for you.
If we only cared about what's popular, Linux would not be what it is now - and you'd be on Windows. Linux is not the end-all solution to every problem anyway, as other lesser popular kernels have some great technology in them which is lacking in Linux. It's the only solution for Red Hat though - so you can see why they're happy to push such agenda. If you're not a Red Hat customer, your opinion is invalid.
Arch was built with a different mentality - the one of personal freedom to do what the hell you want with your desktop, not for the benefit of some company. You are free to use or reject any software you don't want.
Well, not any more. Pushing systemd on users breaks that mentality - because the choice is stripped from you. The choice is already there in Arch though - and has been for a while. If you want systemd, you can use it. If you don't want it, don't bother. Moving the other way is not really possible though - because if you build your system around systemd, you can't revert back (without taking the time to rewrite everything that depends on systemd.)
The dependency problem in itself is a complaint against systemd. Should udev users be forced to use systemd for example? Normally we would introduce another layer of abstraction to our code - such that we can share common code between systemd-udev and non-systemd-udev, and have a solution where everyone wins - the systemd users benefit from improvements in systemd integration, whilst everyone benefits from improvements to udev which aren't systemd dependent. This is programming 101.
Well, not if you're the package maintainer and have more political motives. Any proposal to introduce such a split with common code will probably be met with: NO, we're not interested - It's too much work - It's pointless supporting non-systemd - Fork it.
A fork it will be - and because the fork will be much less popular - it will obviously be a toy.
Just to be clear - I'm not against systemd and Pulse from a technical point of view, and I can very clearly see the advantages they have over alternatives, for linux. I'm not really against fragmentation either.
What I am against though, is the politics of it. The constant pushing of systemd down everyone's throat like it's a fucking panacea. One day saying "don't use it if you don't want," and the next blogging "you're fucking idiots for not using it." (Lennart's approach to Ubuntu.)
I brought it up primarily as an example of what is NOT relevant to systemd.
> But don't tell that to Poettering - because he is adamant that his solution is the only solution, and anyone who it doesn't suit is just playing with toys.
That's not what he said about PulseAudio. He specifically mentions the need for the other APIs and that the situation sucks. http://youtu.be/9UnEV9SPuw8
> That's literally how he referred to Debian's choice not to push systemd, because Debian targets multiple kernels, which systemd does not support. (Which, btw, Arch does too - although perhaps not the same people.)
The advantages the Linux kernel provides (particularly cgroups) are so useful that it would be stupid not to use them just because they don't exist everywhere. So either stick with your current init system, make a similar init system that is compatible with unit files (which are extremely simple), or add the necessary features to the BSD kernel. I'm not seeing a valid complaint here because systemd doesn't affect you unless you want it to.
>Arch was built with a different mentality - the one of personal freedom to do what the hell you want with your desktop, not for the benefit of some company. You are free to use or reject any software you don't want.
> Well, not any more. Pushing systemd on users breaks that mentality - because the choice is stripped from you. The choice is already there in Arch though - and has been for a while. If you want systemd, you can use it. If you don't want it, don't bother. Moving the other way is not really possible though - because if you build your system around systemd, you can't revert back (without taking the time to rewrite everything that depends on systemd.)
There is no "everything that depends on systemd" other than the init process itself. You say this choice is stripped from you, but what you want is to force the Arch developers to maintain init scripts. Take responsibility for it yourself - get together with other people who want those init scripts maintained and maintain them. You won't get things you want by just demanding that the world give it to you.
> The dependency problem in itself is a complaint against systemd. Should udev users be forced to use systemd for example? Normally we would introduce another layer of abstraction to our code - such that we can share common code between systemd-udev and non-systemd-udev, and have a solution where everyone wins - the systemd users benefit from improvements in systemd integration, whilst everyone benefits from improvements to udev which aren't systemd dependent. This is programming 101.
Where are these systemd-dependent parts of udev? They don't exist. There is no need for an abstraction layer to deal with the differences because there are no differences from a user perspective.
>What I am against though, is the politics of it. The constant pushing of systemd down everyone's throat like it's a fucking panacea.
Really, you're more political than anyone I've seen on the pro-systemd side of things.
> One day saying "don't use it if you don't want," and the next blogging "you're fucking idiots for not using it." (Lennart's approach to Ubuntu.)
Are people not allowed to express an opinion?
1. Give a graph to the computer
2. The computer makes the graph a reality
Puppet, Chef, Cfengine come at it from an on-disk direction.
Upstart, SMF, systemd, launchd come at it from a runtime direction.
They're still talking past each other. And it's annoying.
What I would really like is a system that does both as first-class citizens. I may be waiting a while.
There's a whole bunch of tools groping awkwardly in a single direction here:
1. Give a graph to the computer
2. The computer makes the graph a reality
The problem is in determining whether the dependencies of a node in the graph are satisfied. Make does this by comparing file times. All the other 'graph resolving systems' you mentioned do something different. Obviously the solution is to abstract the dependency test from the graph itself. Then all these systems become pretty much the same.
Make still requires a manual step. I'm thinking of systems that do this themselves. Active control systems, constantly comparing the state of the world to the reference graph.
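The "abstract the dependency test from the graph" idea can be sketched in a few lines. This is a hypothetical illustration, not any real tool's API: a resolver where the "is this node satisfied?" check is a pluggable function, so the same walk can model make (file mtimes), an init system (is the service running?), or a config tool (is the file on disk?).

```python
# Minimal sketch of a graph resolver with a pluggable satisfaction
# test. Dependencies are realized depth-first before their dependents.
def resolve(graph, satisfied, build, node, seen=None):
    """Satisfy all dependencies of `node`, then `node` itself."""
    if seen is None:
        seen = set()
    if node in seen:
        return
    seen.add(node)
    for dep in graph.get(node, []):
        resolve(graph, satisfied, build, dep, seen)
    if not satisfied(node):
        build(node)

# Toy "world" state standing in for running services / files on disk.
world = {"network"}
graph = {"webapp": ["database", "network"], "database": ["network"]}
order = []

resolve(graph,
        satisfied=lambda n: n in world,
        build=lambda n: (world.add(n), order.append(n)),
        node="webapp")

print(order)  # ['database', 'webapp'] - network was already satisfied
```

An "active control system" in the sense above would simply re-run this resolution continuously, rebuilding any node whose satisfaction test starts failing.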
I don't want the unix tradition. The unix tradition is a pain in the rear end to actually administer. I want a single management framework with a single DSL that does it all.
I don't care much for the desktop, but I do care for servers.
Boot times on servers are irrelevant. Reducing them brings little benefit. Servers shouldn't have to be rebooted often enough for it to matter, and besides, servers spend most of their boot time POSTing and initializing firmware for the various cards - in many cases, twice as long as it takes to boot a normal SysV init Linux install.
Not only that, but it is much more important to be able to effectively troubleshoot boot problems than to get some dubious features that nobody really felt missing for all these decades.
I don't care if Fedora or Arch do this, but I do care if more server-oriented distributions do. I still haven't gotten over the fact that RHEL6 now gives you the option of using NetworkManager (bleh) or configuring interfaces through (badly designed) configuration files. What's wrong with the old system-config-network?
But at the same time systemd really does do much more than a classic "launch some stuff and call wait()" style init. And that stuff is pretty nice -- in a systemd world, no one needs to worry about writing a "daemon" any more. Any program that sits in a loop writing to standard output can be started, stopped and syslog'd.
And making this work isn't bad at all. You configure systemd with straightforward .ini file syntax and clear fields (e.g. "ExecStart=/path/to/my/program").
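A sketch of what that looks like in practice (the program path is hypothetical) - a plain loop that writes to standard output becomes a managed, journal-logged service:

```ini
# my-loop.service - any stdout-writing program run as a supervised
# service; systemd handles daemonizing, restarting, and logging.
[Unit]
Description=Plain program run as a service

[Service]
ExecStart=/usr/local/bin/my-loop
Restart=always
StandardOutput=journal

[Install]
WantedBy=multi-user.target
```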
Basically it's complicated in structure (and the task of porting a whole distro to it strikes me as pretty scary) but simple in interface, and that's pretty much the right place to be. Most "systemd is too hard!" rants don't survive long past the initial implementation phase.
Thanks for posting this, this is a great analysis. It gets right to the heart of many of the disagreements I have with system design orthodoxy.
I think it's exactly the wrong place to be.
And here are the design docs of systemd:
Overall I think systemd is a much better implementation toward the same goal. Some distros have already switched from upstart to it - not that upstart isn't an extremely popular choice as well.
Currently (Friday 3:48 Pacific time), this thread is at the top.
Edit: Lots of +1s in the early thread messages; some elaboration surrounding migration issues in later emails.
BSD's rc system is fine. Sometimes the scripts are too verbose. But the whole idea is that the system is simple enough to understand that you can write your own scripts - more concisely, if you wish - without needing to read a book (e.g. Linux from Scratch). Keep most things disabled by default and let the user turn stuff on as they need it.
I recently used Debian's live USB, the rescue version, for a little while and was amazed at how much stuff is turned on by default. I guess if you understand each and every choice that's been made for you it's OK. But if not, that approach is not very conducive to learning.
As for Apple, never mind all the XML fluff, good luck trying to understand what's going on behind the scenes with their computers anymore. They can't even manage to let you have an nsswitch.conf or equivalent.
This is not the case on BSD systems (generally an integrated whole, though they've got package management) or on RPM-based distros (poorer package management leading very frequently to a "kitchen sink" installation paradigm).
Yeah. RHEL's even got a package you can install to enable/disable postfix vs ... oh, whatever the default MTA is, I can't keep track (smail still? I know they've moved off of sendmail, right? Right?).
Could you give some examples of those lawsuits?
Announcement will surely be made shortly :)