
Systemd Is Not Magic Security Dust - walterbell
https://www.agwa.name/blog/post/systemd_is_not_magic_security_dust
======
bkor
Systemd makes it easy to add additional limitations to every service you're
running, so you can more easily restrict all your services (NTP client, DNS
caching server, etc.).

It adds an easy-to-use method to restrict all of the daemons and services that
someone might be running. Further, because the unit files are often shipped
with the daemon/service, it'll do this by default on every install, not just
for the ones which have been looked at by a sysadmin. So a standard RHEL
server by default has more restrictions than before.

The examples are pretty unconvincing IMO. The article talks about an email
server and a web server, and systemd provides an easy method to keep one
service from reading the other. But it completely ignores that an attacker
will not necessarily target that specific service: you could just as well
compromise the NTP client and then move from there to another service.

Obviously systemd is just one small layer, and security is not a yes/no
thing. But why then talk specifically about systemd? The article seems to
suggest that it's better not to use these abilities, which goes against the
usual defence-in-depth approach IMO.

~~~
qwertyuiop924
>Systemd makes it easy to add additional limitations to every service you're
running

So does chroot. And so do jails. And POSIX capabilities. And cgroups...

~~~
bkor
It seems you skipped the bit where I said that those restrictions are often
shipped by upstream and as such applied to everyone by default. It's one thing
to lock down your own machine(s); systemd allows locking down loads of
services for everyone.

It's not perfect, and it doesn't do everything. But is your suggestion really
not to make use of systemd's abilities because maybe something else could be
used instead? How does that not go against the defence-in-depth theory?

Try using filesystem capabilities instead: apply a security update that
replaces the binary, and the capability is gone. Further, that was a change
made only to your machine(s).
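
A concrete sketch of what I mean (binary path hypothetical):

    # grant the binary a capability, stored as an extended attribute
    setcap cap_net_bind_service=+ep /usr/local/bin/mydaemon

    # a package update replaces the binary, and the xattr with it:
    getcap /usr/local/bin/mydaemon    # now prints nothing

Whereas a CapabilityBoundingSet= or AmbientCapabilities= line in the unit file
survives an upgrade of the binary.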

It seems much easier to improve the security on all fronts instead of going
for some idealistic super safe solution that either takes ages or doesn't
happen.

Example: it's pretty nice to restrict that Jabber daemon written in Java from
accessing too much of the filesystem.
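
A minimal sketch of the kind of stanza I mean (paths and values are
illustrative, not any particular package's unit):

    [Service]
    # mount /usr, /boot and /etc read-only for this service
    ProtectSystem=full
    # make home directories inaccessible to the daemon
    ProtectHome=true
    # give the service its own private /tmp
    PrivateTmp=true
    # the only place the daemon is allowed to write
    ReadWritePaths=/var/lib/jabberd
    # the service and its children can never gain new privileges
    NoNewPrivileges=true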

~~~
qwertyuiop924
Well, POSIX capabilities can be packaged upstream to do that. So can
Capsicum, for that matter. And so can chroot...

~~~
bandrami
If only there were a free POSIX-ish OS that did privilege separation and
chroot for daemons by default. Perhaps some enterprising Canadian developer
could base it on the BSD kernel.

~~~
qwertyuiop924
Well then, there's your proof.

------
martius
The author of this blog post is publicly insulting David Strauss (the systemd
contributor to whom he responds) on Twitter:
[https://twitter.com/__agwa/status/782643469034528769](https://twitter.com/__agwa/status/782643469034528769)

~~~
deong
To be fair, he's doing so while referencing a response from that developer
that was just as insulting. I don't want to get into a debate over whether two
wrongs make a right, but the flow was "Systemd is not good", followed by a
reply of "you're just a child throwing a tantrum", _then_ followed by the
insulting tweets.

------
DyslexicAtheist
I quite like systemd on my newer notebooks, hate it on servers. Different
users (DWH != 1-person desktop installation) have different requirements.

@munin[0] gave a pretty good illustration of why both the pro- and
anti-systemd camps might be right about their reasons:

→ So, systemd is kind of a perfect microcosm of all the 'problematic' behavior
in tech, all at the same time. It's a project that is dedicated to novelty and
a specific set of goals - mostly speed-centric - above all else. And, to its
credit, it -does- make certain linux systems boot faster than other, competing
init systems. However. First, the way in which systemd has progressed has
taken specific advantage of certain problems in the open-source community.
Namely, that there are many projects that have little or no attention paid to
them, despite being infrastructurally critical.

→ So there is little desire by various linux distributors to make the effort
to maintain them - and when someone shows up offering to replace and maintain
that functionality, taking the responsibility off the already overworked
maintainers' plate? That's attractive.

→ Open source maintainers are always in a deep technical deficit trying to get
these old bits and pieces maintained, so they're eager to get the help, and
don't look too closely at what his 'help' entails.

→ And unfortunately, what this 'help' entails is the highly toxic systemd
community - and I use 'community' in the loosest sense, because it really
works out to being a sort of cult, spearheaded by a specific individual with
the ultimate power of acceptance or rejection over anyone else's participation
in the project.

→ Which is really unfortunate, because that guy's got an ego the size of
Manhattan. And he continually refuses to take criticism over his design and
implementation choices - take a look at what's left of any bug reports.

→ I say "what's left" because he has a history of purging anything he
considers to be "un-useful" \- that is, critical - from the archives

→ Now, certainly, you -could- go ahead and fork the project! Except now you
have a giant codebase and no community to work with to fix it. You could
always convince the distros to abandon it! ...except you're now up against a
bunch of overworked people who, frankly, won't care. And, worse, now that it's
a de-facto 'standard' in the linux world, you have a whole lot of
institutional inertia to work against to try to replace it, and - unlike when
they replaced init - a dedicated group of people who are utterly convinced
they're doing the right things advocating against rolling back the changes.
And, worse, there -are- some good points. SysVInit -is- grody as all get-out.

→ However, because of their dedication to novelty above all else, they're
making not only all the same mistakes sysvinit had to learn...

→ [ And because of ego, rejecting these mistakes as being 'un-useful
criticism' ]

→ But they're making whole new kinds of mistakes - things like
[https://cfp.systemd.io/en/systemdconf_2016/public/events/21](https://cfp.systemd.io/en/systemdconf_2016/public/events/21)
… which is just -staggeringly- WTF.

→ It's this whole fun trend of "fail fast" that, sure, looks great in a
startup producing some new kind of app for making your phone go yawp

→ But it's not -really- a very good model for infrastructural-type concerns -
the things that need to be, by reason of their importance, conservative and
slow to change. You want your infra to be -reliable-, not "full-featured" -
at least if you've any sense.

→ So: You have the trend-chasing guys who show up to solve "all your problems"
at the cost of making mistakes that could be seen a mile away. Eventually
leaving you far worse off and making a huge mess that will be -extremely-
expensive to fix when a reckoning comes. All in the name of some "change"
that's needed from the status quo and without understanding why the status was
quo. What's the solution here? Well, for one thing, take a look at the fine
print when Mephistopheles shows up offering to take care of all your problems.
It's too late for Linux - most Linuces are pretty much doomed at this point
into becoming utter travesties that make WinMe look reasonable. Though the
Devuan and Alpine folks seem to have some good impetus behind them.

→ Consider carefully what consequences are going to show up from adopting
these new and nifty 'features'.

→ Consider that there is a very large difference in requirements between core
infrastructure and user-facing things.

→ [ Because user-facing things can fail fast and be updated fast, but core
infra is much more expensive and time-consuming to do either ]

[0] Source:
[https://twitter.com/munin/status/781257878321582080](https://twitter.com/munin/status/781257878321582080)

Sorry about the above format; I was too lazy to do a 'storify' or
'tweetlonger'. Also hoping he puts that into a blog.

~~~
bandrami
Can't stand it on servers. It takes an absurd amount of effort just to get
deterministic, non-parallel service initiation. And the whole "systemctl start
foo.service && systemctl status foo.service" bit to make sure it actually
started is such a regression it's mind-boggling. Plus it hates the fact that I
use a static /dev tree (the only way I've found to keep my sanity on a
server).

On my laptop, it's fine, I guess, though even there I aesthetically dislike
parallel service initiation; I'm glad Jessie still lets you replace it with a
real init system, though I wish epoch [1] were packaged.

[1] [http://universe2.us/epoch.html](http://universe2.us/epoch.html)

~~~
parenthephobia
> _deterministic non-parallel service initiation_

When is this a problem?

> _systemctl start foo.service && systemctl status foo.service_

If you need to do that, the problem is with _foo.service_, not systemd. If
foo.service fails to start in the right way, then "systemctl start foo" _will_
fail. This is true of any init system: if a service starts but then bails out
after success has been reported to the user, it's too late to change that.
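
For a daemon that cooperates, systemd can even make "systemctl start" block
until the service reports readiness; a minimal sketch (names hypothetical):

    [Service]
    # "systemctl start" waits until the daemon calls
    # sd_notify(0, "READY=1"); if that never happens, the start
    # command itself reports failure
    Type=notify
    ExecStart=/usr/sbin/foo-daemon
    # give up and fail the start after 30 seconds
    TimeoutStartSec=30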

> static /dev tree

What's insanity-causing about a dynamic /dev tree "on a server"?

~~~
bandrami
> When is this a problem?

You're joking, right?

Sysadmins _need_ a deterministic boot order, period. If I want services to
activate on a request I'll use inetd, but I haven't wanted that in years.

> If you need to do that, the problem is with foo.service, not systemd.

This isn't about assigning blame; this is about my server being usable. If a
service fails to start, I need the command that called it to fail also.

> What's insanity-causing about a dynamic /dev tree "on a server"?

Again, it's non-deterministic. Worse yet, it's declaratively configured in a
million and one udev rules instead of one imperative run control script (which
would at least be better, though still a bad idea). I need the devices on my
servers to always have the same names and numbers, and obviously a static tree
which doesn't expose any surface for my error or somebody else's attack is
strictly better than a configurable system that does expose those surfaces.

My run control scripts start the services that need to be started, in the
order they need to be started, one at a time, because at 3 am with the alarms
going off, that is transparent to me or to whoever has replaced me after the
tragic bus accident. Sysadmins get this, which is why so much of the pushback
against systemd came from us. Distribution maintainers love systemd, and I get
that, but I'm not a distribution maintainer. So now my site-local imperative
run control scripts replace an upstream declarative config system rather than
a distro-maintained set of run control scripts. C'est la vie.

~~~
justinsaccount
> My run control scripts start the services that need to be started, in the
> order they need to be started, one at a time, because at 3 am with the
> alarms going off, that is transparent to me or to whoever has replaced me
> after the tragic bus accident.

If your services need to be started in a particular order to work, your
services are broken.

I've seen this sort of setup over and over again. Some custom
'startservices.pl' script that starts services "one at a time" in the "right
order" like application server -> web server.

Then one day the DBA restarts the database that lives on another box; the app
server crashes and the entire site is down.

So you get paged at 3am and run your startservices.pl script to fix the site.
Great job, pat yourself on the back.

Meanwhile, a site run by admins that use process supervision had a 5 second
outage until the app server process was restarted automatically.

You can tell systemd that one service depends on another, but it shouldn't
really be needed. This doesn't even have anything to do with systemd. You can
do the same thing using runit.
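
If you do want the ordering, it's one stanza in the unit file (unit names
hypothetical):

    [Unit]
    # pull in the app server, and start this unit only after it
    Requires=app.service
    After=app.service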

> Sysadmins get this, which is why so much of the pushback against systemd
> came from us.

..."us". No. You don't speak for everyone.

~~~
bandrami
> If your services need to be started in a particular order to work your
> services are broken.

Umm... that's possibly the silliest thing I've read on HN, which is going a
ways.

I don't know about you, but I like for my web server to come up _after_ the
NFS share it reads from is mounted. YMMV.

~~~
justinsaccount
> I like for my web server to come up after the NFS share it reads from is
> mounted.

Why does this matter? Your load balancer or service discovery layer should
detect that the web server is not functioning properly and take it out of
rotation. What do you do when your NFS server has an outage?

Even if you DID care about that sort of thing, systemd has a RequiresMountsFor
option:

    
    
    RequiresMountsFor=
        Takes a space-separated list of absolute paths. Automatically
        adds dependencies of type Requires= and After= for all mount
        units required to access the specified path.

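So the web server's unit would just carry (path hypothetical):

    [Unit]
    RequiresMountsFor=/web/root
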
Or with something like runit you would just do

    
    
    #!/bin/sh
    # web/run
    # if the NFS mount isn't there yet, bail out; runsv re-runs
    # this script, so it keeps retrying until the mount shows up
    if [ ! -d /web/root ] ; then
      exit 2
    fi

    # .. normal start commands here (exec the web server)
    
    

That way, if the box comes up before the NFS server, the web server process
will still properly start on its own as soon as NFS comes back up, because
runsv keeps re-running the script until it stops exiting.

I'm sure your hacked up scripts handle this scenario or anything else that can
go wrong. Or maybe you get paged at 3am every few days when something breaks.
YMMV.

~~~
bandrami
Hm. You seem to have a lot invested in convincing me that the systems I've
been using for over a decade don't actually work. _shrug_.

Yes, I agree there are complicated ways that systemd and other rc systems can,
largely, emulate the flexibility I get from writing a bash script to start the
services I want started. I just don't really want them.

~~~
justinsaccount
> You seem to have a lot invested in convincing me that the systems I've been
> using for over a decade don't actually work.

Yes, I used to work with people like you. "What's wrong with this method?
We've been using these scripts for 10 years!"

And every time the database restarts, the app server crashes and the site
goes down. And for 10 years this has been seen as perfectly normal. After all,
it's so simple! At 3am all someone has to do is log in and run some
site-specific bash scripts that someone hacked together 10 years ago.

~~~
bandrami
> And every time the database restarts the app server crashes and the site
> goes down.

Nope. I've also never had these phantom poorly behaved forking daemons leaving
orphan processes all over the process space that people claim drove them to
systemd. Like I said, YMMV. What I've built works really, really well, and can
be picked up tomorrow by anybody who knows sh.

