The reason people use UNIX-like systems is that they work reliably. In order to make a complex system work reliably, it needs to be easy to fix. In order to fix a system, a person needs to understand it as well as be able to make changes to it. And in order to understand a system, it helps very much if that system is straightforward and lucidly verbose.

I hope systemd will live or die on its merits; I fear that it will take over via politicking.




It already has taken over despite being mediocre at best.

Mostly I feel that it got an undue jumpstart thanks to Red Hat trying too hard to be bleeding edge, and Upstart/OpenRC not having as high-profile a backer.


To be fair, Upstart was pretty awful in the early going. It seems to be a lot more reliable and predictable now, but that's a fairly recent development for us (when we upgraded to Ubuntu 14.04 LTS).


The golden rule of technology hype: a technology only starts being functional once people start abandoning it.


The problem is that some projects now depend on systemd behaviour, and the transition hasn't been smooth at all in Debian even if you want to stay with sysvinit. I'm still on sysvinit with systemd-shim, and things have already started to break in KDE, mostly related to authentication (can't mount USB devices, can't manage VPN connections, etc.). In the end I'm afraid Debian users might not have much choice: either use systemd as PID 1, or use another distribution (like Gentoo, etc.) that works without systemd.


The fact that upstart is covered under Canonical's developer contribution agreement and copyright license counted materially against it in the Debian debate.

Canonical's insistence on control and ownership ended up torpedoing its project. Which is really quite sad.


It is interesting to look at how different Linux distributions approach this problem, as well as things like connection management. Many do seem to be moving towards things like systemd. Angstrom Linux for the BeagleBone even bundled something called connman for managing network connections. Attempting to navigate the path of systemd plus connman to get networking to work the way I wanted was a pain. Some of the more interesting "solutions" that turned up when searching essentially recommended wedging SysV-style init scripts into the systemd framework.

I wised up and moved over to using the Debian distribution in this case. Fewer moving parts trying to make things work the way they thought they should.


Connman has a lot of promise, and I like the theory and design of it, but found it very frustrating and incomplete in practice. Basic functionality like "connect to my wireless on startup, and keep on trying to connect if you don't succeed right away" is missing.

Also, it's being rapidly iterated and there isn't a PPA for Ubuntu, which sucks.


Potential, yes, but with devices like the BeagleBoard, where you are often also using a WiFi adapter that is not super strong, the frustrations grow.

Fewer moving parts means easier debugging.


> In order to fix a system, a person needs to understand it as well as be able to make a change in it.

So if I follow you, it would be easier to grok the whole system if all the code was in different places?


Opaque C code that sits in lots of scattered binaries, plus some end-user documentation in man pages, doesn't help you much if you have to debug an issue. It's okay from a user perspective, but with systemd (or other complex systems that rely on this principle) you need to start reading C code, start gdb, deal with dbus... if it's a toolbox of scripts, a `grep -r <error>` is often the first step towards understanding, learning something, and fixing the problem.
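
For instance, with a script-based rc system that first pass might look something like this (the error string and paths are just placeholders):

    # Search the init scripts for the message that showed up on boot:
    grep -r "some error message" /etc/init.d /etc/rc.d 2>/dev/null
    # The hit is usually a plain shell script you can read and edit directly.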

This is more difficult if you have a lot of abstractions and binaries lying around. You need to start reading (often nonexistent) documentation and abstract C-code...

It likely does not matter for 95% of users, and I've rarely had to do something like this, but you do lose some control as a developer/sysadmin. For some it's important, as their productivity and job depend on solving such issues fast; others will never have this problem...

I've never had a problem with systemd, though. But if you do, it's difficult to fix it on your own.


> Opaque C code that sits in 20 binaries and some man pages doesn't help you much if you have to debug an issue. It's okay from a user perspective, but with systemd (or other systems that rely on this principle) you need to start reading C code, start gdb, deal with dbus... if it's a toolbox of scripts, a `grep -r <error>` is often the first step towards understanding, learning something, and fixing the problem.

What's opaque about Free and Open Source C code?

Some might argue (this guy included) that statically typed, statically analyzed C code will result in fewer people having to debug their system code than the equivalent code written in a particular variant of shell code.


> What's opaque about Free and Open Source C code?

Everything, if you are a sysadmin/developer trying to fix an issue. First you need debugging symbols to pinpoint the problem, then you need to read the source... it all takes time. E.g. you need to learn about dbus-monitor and dbus calls, and need to grasp some internal concepts of systemd if something goes wrong. It takes time and patience you usually don't have or don't want to spend on such details. For comparison, FreeBSD's rc is only shell scripts: http://www.freebsd.org/cgi/man.cgi?rc(8)
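
To illustrate the contrast (sshd is just an example service here):

    # On FreeBSD the service logic is an ordinary shell script you can read:
    less /etc/rc.d/sshd
    # and trace directly when it misbehaves:
    sh -x /etc/rc.d/sshd restart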

I don't want to say that one is better than the other, but the latter is for most folks far easier to debug and modify than the former. However, as I've said, it only really matters for a few people. But I can understand that they are not particularly happy about this new complexity. And bugs happen.


> I don't want to say that one is better than the other, but the latter is for most folks far easier to debug and modify than the former.

Generally bad form to make claims on behalf of "most folks" since you are in fact a single person. It's totally a valid argument if you say this on your own behalf.

And yes, bugs happen. But fixing the C code is, in my experience, much easier than tracking bugs down in `bash -x`. Especially when dealing with race conditions between services/triggers/device initialization.


>And yes, bugs happen. But fixing the C code is, in my experience, much easier than tracking bugs down in `bash -x`. Especially when dealing with race conditions between services/triggers/device initialization.

You are being stubborn just for the sake of winning an argument. Interpreted languages are easier to debug. They have many flaws; debuggability isn't one of them. Heck, my servers don't even have gcc/gdb. Good luck trying to debug systemd in my production environments...


Yes. Shell scripts are their own unique kind of hell. I'm really speaking on my own behalf here. I can read shell well enough to follow and debug issues in it. However, digging into the internals of systemd and its interactions with dbus and other binaries is opaque to me.

Maybe it's just a different perspective: as a developer, systemd likely eases a lot of pain and makes otherwise problematic and error-prone things easy, but as a sysadmin who mostly deals with servers, it sometimes feels like forced, unnecessary complexity that can introduce difficult-to-debug issues.


As a sysadmin, I'll take systemd units over SysV init scripts any day. They tend to be shorter and simpler to read, and I don't have to worry about race conditions or services not restarting correctly due to varying daemonization techniques.
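
For what it's worth, writing and enabling a minimal unit takes roughly this much (foo.service and /usr/sbin/food are made-up names):

    # Drop a unit file in place and let systemd pick it up:
    cat <<'EOF' > /etc/systemd/system/foo.service
    [Unit]
    Description=Example foo daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/food
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable foo.service && systemctl start foo.service

The equivalent SysV script usually needs its own start/stop/status handling and PID-file bookkeeping on top of that.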


Yes, I didn't intend to argue about that. For that, systemd is perfect and I like using it too. I mean problems like a hanging boot in an lxc container, where I once got only a red error message that something went wrong (and not a lot on Google about it at the time). Where do you go from there? It's certainly possible, but it's a lot of work.

I'm not saying that's the norm, and I'm not saying this happens often, but if you build custom stuff and do "strange" things, it's easier to know what's going on if you grasp the complete system. That is more difficult with systemd.

I believe it's a valid criticism, and I realize 95% of users never need to care about this. However, it's still a valid point if you build complex systems that are not "off the shelf".


> fixing the C code, is in my experience, much easier

Really?

Did you account for the many very subtle ways you can run into what the C language defines as "undefined behavior"? I have only met a few programmers who truly understand that can of worms. Way too many don't even know that compilers exploit these parts of the spec, despite programming in C for many years.

http://blog.regehr.org/archives/213

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

That's just the cases where it is totally legal for the compiler to output random noise - or output nothing at all - instead of what the C code says locally. These are some of the nastiest "gotchas" I've seen in any language.[1] Even the best C programmers are occasionally bitten by this class of bug.

I still like C (a lot), but it is not easy. It's just so very annoying and time consuming to track down a bug happening in "foo.c" that is actually caused by a variable in "bar.c" not getting updated waaaaaay earlier, because some bit of code in "quux.c" was skipped over due to undefined behavior. Especially when it becomes a heisenbug because that particular optimization is turned off in debug builds.[2]
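
A tiny sketch of that optimization-dependence (the file name and flags are made up; the exact output depends on your compiler):

    # Signed overflow is undefined behavior; ub.c is a contrived example.
    cat <<'EOF' > ub.c
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        /* An optimizing compiler may assume x + 1 > x always holds,
           because signed overflow is undefined, and drop the else branch. */
        if (x + 1 > x)
            puts("check kept");
        else
            puts("overflow observed");
        return 0;
    }
    EOF
    cc -O0 ub.c -o ub-debug && ./ub-debug   # often prints "overflow observed"
    cc -O2 ub.c -o ub-opt   && ./ub-opt     # may print "check kept"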

Bourne [Again] shell has its own share of quirks and "gotchas", but they are usually easy to investigate, and they are generally easy to avoid once you've written a couple scripts.

[1] There are other important classes of bug; I'm just using undefined behavior as an example because of how amazingly subtle it can be and how many serious security bugs it has caused.

[2] Before anybody complains that behavior involving 3 files like that is bad design, consider that A) this happens all the time in real world C, and B) I agree. Which is why many of us are against systemd, which adds complicated interactions like this on purpose as a way to force vertical integration.


> At first you need debugging symbols to pinpoint the problem

These days there is really no reason you can't have the debug symbols already around. But you've got a fine point there, the Bourne shell debugger is much more convenient and easy to come by, and it makes postmortem analysis with core files trivial... ;-)

> then you need to read the source..

As much as with any other system: you need to do that with them all.

> E.g. you need to learn about dbus-monitor and dbus calls

Yes... and if not, you have to learn about whatever other mechanism is used to provide encapsulation and separation of concerns between the components of the system...

> It takes time and patience you usally don't have or don't want to spend on such details.

This really boils down to, "I'm already really familiar with this other system...". It's a legit argument for why you might not use systemd. It's not a terribly legit argument for why systemd is bad.

The rc system doesn't address a fraction of the problem and actually makes a number of things worse. Heck, the rc man page you linked to links to four or more other components of the system, including the voluminous "simple because it is shell" rc.conf.


I completely agree with you.


My machine with systemd (FC20) doesn't boot at all unless the systemd log level is set to debug on the kernel command line. Even then it takes about 5 minutes to boot. Luckily I don't need to reboot often, but every single time there's a small fear that some upgrade has made systemd crap out even worse, and the system won't boot at all.
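
For reference, the switch I mean is a kernel command line option; adding a log target (my addition here, not strictly required) helps keep the messages visible:

    # Appended to the kernel command line in the bootloader:
    systemd.log_level=debug systemd.log_target=kmsg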

How do you debug a complete black box, where turning on debugging partly fixes the problem? You really don't. This is literally the worst debugging experience I've had in 20 years of using and maintaining Linux systems -- and that includes trying to do these things with much less knowledge and only limited internet access back in the early days.

I love the ideas behind systemd. It's too bad, even if not surprising, that the implementation is a flaming pile of garbage.


Reminds me of the joyful early days of moving from grub to lilo. Grub was (is) more fragile due to being more complex -- but at least the grub shell gives more information than lilo failing at "LI"...

With Grub it always seemed like the improved features made up for the added complexity; I'm not convinced about systemd.


Even Grub isn't all flowery. I have a background of using what I think is now called grub-legacy. Then I ended up running Debian, with newfangled Grub. I needed to change some kernel parameter (IIRC), but all I could find was a mess of undocumented scripts saying "don't touch this". I don't know where the documentation went, and it seemed needlessly complicated to configure. Why couldn't I say `man grub` and learn all I need about it? I had other issues with Debian, but the last straw was when, during a routine package update, it decided to install a new version of Grub... and the next time, it wouldn't boot anymore. Why was it so complicated in the first place? Why did Debian have to fix it if it wasn't broken? Why did it fail to do it right? I don't know, I don't really care... all I know is that needless complexity and churn caused trouble, again.

So I'm no longer using Grub or Debian. And my bootloader is simple. I've installed it once, and never touched it afterwards. It's possible to configure it a little, but there's no need for it. So what if it has fewer features. It only needs to load the damn kernel... and it works. I'm happy.


The config files you want are /etc/default/grub and possibly also /etc/grub.d/ - though you're right, this doesn't seem to be documented anywhere obvious like in the man page. Gentoo installs its own version of /etc/default/grub with comments and examples but Debian may not be so helpful.
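
On Debian the usual dance is roughly this (the example parameter is just illustrative):

    # Edit the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, e.g.:
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    # then regenerate the config that the /etc/grub.d/ scripts produce:
    update-grub   # a wrapper around grub-mkconfig -o /boot/grub/grub.cfg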


I'm pretty sure I poked in those very files, and some of them gave me the impression that they are generated by a script (hence "don't touch this"). Others gave me the impression they are read by some undocumented script. I still don't know which script.

But it's been a while.


It was messy (to a certain extent it still is). The reasons for GRUB(2) are mostly UEFI and multiboot support (e.g. BSDs, Windows NT derivatives like modern Windows). Grub fails the test of making simple things (as) simple (as possible). But it does support booting into Space Invaders. So there's a trade-off there, and I agree, it's not entirely clear that much was gained from moving off of LILO...


    info grub


Might be obvious, but that should be "moving from lilo to grub", not the other way around...


> What's opaque about Free and Open Source C code?

Opaque code is opaque. What does it have to do with Free or Open Source?


Actually, the nice thing about systemd being built around dbus is that you can track most of what is going on with it simply by tracking the flow of messages on dbus.
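
For example (the match rule just narrows the noise, and you may need root):

    # Watch the system bus, filtered to traffic from systemd's manager:
    dbus-monitor --system "sender='org.freedesktop.systemd1'"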

If you find C-code "opaque", you're already kind of screwed in the Unix world...


> So if I follow you, it would be easier to grok the whole system if all the code was in different places?

Yes. Separated into documented, self-contained modules with well-defined behavior. When your logger breaks, you fix your logger, not your init process. When your devices are not discovered, you debug udev, not muddle through code riddled with NTP syncing and so on.
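
For instance, device discovery can be inspected on its own (/dev/sda is just an example device):

    # Watch kernel uevents and udev's processing of them, independent of init:
    udevadm monitor --kernel --udev
    # Ask udev what it knows about one device:
    udevadm info --name=/dev/sda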


The code is necessarily in different places, whether it's within a project or between projects. The difference is having an API that can be found without having to read all the code, and is well-defined and somewhat stable.

It's very "UNIX" to implement things as communicating processes rather than RPC or procedure-call-within-monolith.


Yes. See "decoupling", "big ball of mud".


yep



