
Mastering systemd: Securing and sandboxing applications and services - kureikain
https://www.redhat.com/sysadmin/mastering-systemd
======
proactivesvcs
The official documentation on these directives was of great value when I
started looking into unit file hardening. There were a few minor cases where I
had to, or felt the need to go elsewhere for deeper explanation, but for the
most part it was readable and comprehensive.

I was able to understand the changes that I made and, while testing carefully,
few unexpected problems arose.

The changes I applied as a result mean the unit files now score around 1.5
from "systemd-analyze security". Considering I approached the process with
almost no knowledge of systemd, this speaks volumes about the quality of the
documentation, its timeliness and practical relevance, and the fruit that can
be borne of excellent documentation.
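For readers curious what that hardening looks like in practice: a minimal sketch using directives from the systemd.exec man page (the service name and path are hypothetical, and the exact set of directives depends on what the application actually needs):

```ini
# /etc/systemd/system/myapp.service -- hypothetical hardened service.
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes             # run as a transient, unprivileged user
ProtectSystem=strict        # mount /usr, /boot, /etc read-only
ProtectHome=yes             # hide /home, /root, /run/user
PrivateTmp=yes              # private /tmp and /var/tmp
PrivateDevices=yes          # minimal /dev, no raw device access
NoNewPrivileges=yes         # block setuid/setgid privilege escalation
CapabilityBoundingSet=      # drop all capabilities
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
SystemCallFilter=@system-service
```

The score mentioned above comes from running `systemd-analyze security myapp.service`, which reports an exposure level between 0 (well sandboxed) and 10 (wide open).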

~~~
lovelettr
Was that the freedesktop.org wiki?

~~~
proactivesvcs
Primarily the systemd.exec man page at
[https://www.freedesktop.org/software/systemd/man/systemd.exe...](https://www.freedesktop.org/software/systemd/man/systemd.exec.html)
, as well as the accompanying unit and service file man pages for more general
reference.

------
thu2111
I have to admit, having recently spent some time learning it and updating my
Linux skills I really don't understand why systemd seems to have provoked so
much hate and controversy in the Linux world. So far I really like it. It
makes a Linux server feel more like something designed rather than
incrementally patched together out of hacky shell scripts written 20 years
ago.

The unit config files are small and simple. Every line makes sense. Things are
coherent - learning how to start a service at bootup means you've partly
learned how to configure the new equivalent of cron jobs. The same
configuration works at the system and per-user level. You can quickly locate
logs. I didn't quite like the command-line interface at first (e.g., why
"systemctl" and not "service"? And I always forget whether the service name
comes first or second). But the basic functionality is all there.
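As an illustration of the cron-equivalence point above, a timer unit pairs with a service of the same name (the unit name and schedule here are made up):

```ini
# backup.timer -- hypothetical; runs the matching backup.service daily,
# roughly replacing a "0 3 * * *" cron entry.
[Unit]
Description=Nightly backup

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true   # catch up if the machine was off at 03:00

[Install]
WantedBy=timers.target
```

Enabled with `systemctl enable --now backup.timer` (or `systemctl --user ...` for the per-user case), and its output lands in the journal, viewable via `journalctl -u backup.service`.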

I realise Docker is all the rage at the moment but I've found it kind of flaky
and complicated. For my own purposes systemd feels about the right level of
abstraction. It's not picky about where software comes from, it just manages
it. You can unzip a tarball and make it run isolated, depend on other
services, and a whole lot of other neat things. If you've learned it on one
distro, you've learned it for the rest, unlike SysV init. And it seems to
constantly gain more useful features that are all pretty easy to configure as
well.
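A sketch of the tarball scenario described above, with assumed paths and service names: a unit that runs an unpacked binary, declares dependencies on other services, and opts into isolation:

```ini
# /etc/systemd/system/webapp.service -- hypothetical; runs a binary
# unpacked from a tarball, after the network and a database are up.
[Unit]
Description=Example web app from a tarball
After=network-online.target postgresql.service
Wants=network-online.target
Requires=postgresql.service

[Service]
ExecStart=/opt/webapp/bin/webapp
WorkingDirectory=/opt/webapp
DynamicUser=yes     # isolated, throwaway user; no useradd needed
ProtectSystem=full
Restart=on-failure

[Install]
WantedBy=multi-user.target
```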

~~~
castillar76
Purely my perspective here, as a 25-year user of Linux and BSD (for context).
On the one hand, I very much agree with you that Systemd brings a lot to the
table. The files are much easier to work with, the service ordering and
integration is logical and works well (to the extent I've beaten on it), and I
can't deny that a faster boot sequence is helpful for things that boot often
like container images. It's been a lot easier writing systemd config files for
my new services than it was to write init.d boot scripts, for sure, and the
integration of systemctl is really nice: one command does all the service
things from info to disabling.

The flip-side for me, the one that continues to get under my skin, is the
approach of the systemd project. It's the habit systemd has of simply
subsuming all other system functions into itself (DHCP client? Sure! DNS
client? Mine now! Logging? We handle that now. Firewall control? ALL SHALL BE
ASSIMILATED.). If the systemd versions of those functions were both obvious in
presence and easy to swap for more fully functional replacements when I
wanted, I'd feel better about it. But I keep running across cases where system
functions that have worked one way since Linus was in knee-pants have suddenly
been replaced behind the scenes with a systemd module whose configuration
files and knobs aren't obvious or well documented, which is
difficult-to-impossible to uninstall, and whose disabling and replacement
leads to cascading failures.

Worse, I keep seeing security issues brought up to the systemd devs and then
tossed aside with "well, just don't do that" or "how is that even a problem".
It's not pervasive or constant, but it's steady enough to be worrying.
Obviously not every security issue raised will be top priority, but it
concerns me how much of my systems are being subsumed by a project that seems
to prioritize "do all the things now" over "do things securely".

Compare that to the approach taken by, say, OpenBSD, which has also been
steadily replacing long-standing system bits with their own custom-developed
pieces. Their approach has been "we will provide basic functionality that is
iron-clad secure", while leaving you the ability to swap in something else for
stuff like OpenSMTPd without breaking your system. And yes, Theo can be just
as much an <unprintable> as Poettering, but I'm a lot less worried about the
outputs of his work for the above reasons.

Ultimately I think systemd is a good way forward, but it needs someone else to
take over the project, rein it in, and keep it focused on being good at what
it does rather than trying to be all things everywhere. Or, alternately, it
needs to just implement its own kernel and go off to be SystemdOS v1, which
seems to be the trajectory it's on right now.

~~~
dTal
>Worse, I keep seeing security issues brought up to the systemd devs and then
tossed aside with "well, just don't do that" or "how is that even a problem".
It's not pervasive or constant, but it's steady enough to be worrying.
Obviously not every security issue raised will be top priority, but it
concerns me how much of my systems are being subsumed by a project that seems
to prioritize "do all the things now" over "do things securely".

I would be even harsher than that. It's not just security issues that earn
"don't do that" from systemd devs - it's everything that doesn't fit their
narrowly imagined use cases. You don't even get "do all the things now"; you
just get "do this particular thing now". Generally with no regard for POSIX.
And if you want the old behaviour back, expect to boil the oceans. Exhibit A:
[https://news.ycombinator.com/item?id=19023885](https://news.ycombinator.com/item?id=19023885)

~~~
jcranmer
> Generally with no regard for POSIX.

To be fair, if I had to pick the heavily-used specification I'd most like to
see ground into dust and rewritten from scratch, it's POSIX. There are several
misfeatures that can't be easily undone (fork, and its maddening interaction
with file descriptors, for one).

I also strongly dislike the shell-based model of development that people
usually appeal to for POSIX. Shell makes for a crappy language (witness how
you effectively have to ban spaces in your filesystem paths to make things
work). Stringification of identifiers makes time-of-check-time-of-use attacks
possible. I suspect it's also a driving factor for some of the misfeatures,
because terminal programs and the shell need to implicitly share a lot more OS
resources, so programs end up doing weird things like passing all open files
to your children by default.

Were I to write my own operating system in 2020, I'd not think at all about
POSIX until I finished the design, and relegate it to a compatibility layer
for people who want to write programs as if it were 1970. Amusingly, when I
looked up Fuchsia last week, it does seem that they designed the OS APIs along
some of the ideas I had (e.g., ditching signals; handle-based API), so maybe
there is some hope for a better-than-POSIX future world.

~~~
zbentley
> fork, and its maddening interaction with file descriptors

What's maddening?

------
fpoling
This sandboxing for services provides isolation similar to various container
runtimes. Plus, thanks to the integration with systemd, things like live
updates without dropping a single connection are possible to implement with
straightforward application code.

~~~
thu2111
If I understand Docker correctly, it's not actually intended to be a sandbox
and wasn't designed as such (e.g. the daemon runs as root, or at least used
to). It's not clear to me what the threat model for running untrusted Docker
images is, or how you'd know what the expected set of permissions were except
by reading a README.

Whereas this feature is explicitly a sandboxing feature, and the needed
permissions are enumerated by the service file.

~~~
colechristensen
Cgroups limit the impact anything inside the container can do to anything
outside the container.

It doesn't matter that the daemon runs as root; it starts processes in a way
that prevents them from interacting with other daemons, filesystems, and
other resources.

You don't quite understand docker correctly :)

~~~
TheDong
It's not cgroups, but rather namespaces and seccomp (and apparmor/selinux on
some distros) that sandbox the processes inside the container.

cgroups are used mostly for resource limits, not for sandboxing (aka
namespacing).

docker by default does have a slightly more lax security posture than systemd
or lxc (i.e. a default set of capabilities that isn't explicitly enumerated
and a focus on UX over tweaking them, no user namespaces by default, etc.),
though you're right that it is largely meant to be a secure sandbox for
untrusted containers, as long as you know what you're doing.

~~~
colechristensen
Ah, I was under the impression that namespacing was a part of cgroups in
general.

~~~
TheDong
To quote Jessie's blog post [0]: "containers were not a top level design, they
are something we build from Linux primitives [Linux namespaces and cgroups]".

cgroups can be used without namespaces, and the reverse is also true. Both of
them are part of linux container implementations (like lxc and docker), but
for an easy example, systemd uses cgroups for every service, and only uses
namespaces for ones you very explicitly turn them on for.
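The split described above is visible directly in unit options: cgroup-backed resource controls come from systemd.resource-control(5), while namespace-based isolation comes from systemd.exec(5) and stays off unless requested. A hypothetical fragment showing both, with made-up limits:

```ini
# Hypothetical [Service] fragment. The first group maps onto cgroup
# controllers (limits); the second creates namespaces (isolation).
[Service]
# cgroups: every service gets its own cgroup; these set limits on it
MemoryMax=512M
CPUQuota=50%
TasksMax=128

# namespaces: only applied because we explicitly ask for them
PrivateTmp=yes        # mount namespace with a private /tmp
PrivateNetwork=yes    # fresh network namespace, loopback only
```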

Don't quote me on this, but I also think cgroups landed in the kernel many
years before namespaces did.

[0]: [https://blog.jessfraz.com/post/containers-zones-jails-vms/](https://blog.jessfraz.com/post/containers-zones-jails-vms/)

------
david_draco
This is cool. But how do I use this sandboxing if I want to run a desktop GUI
application?

~~~
reddotX
you use snapcraft [https://snapcraft.io/](https://snapcraft.io/)

~~~
ATsch
Not to start a flamewar, but I'd like to point out why I think snap is not
only inferior to flatpak technically but actually a threat to the linux
desktop:

Snap is very deliberately centralized, with a single hard-coded repo URL. The
server is also closed-source. This is because snap's somewhat transparent
primary goal is to give Canonical central control of app installation across
all linux distributions. The plan from there will include taking cuts of sales
revenue and publishing fees. The pieces for this (like DRM) are falling into
place.

Flatpak, because it isn't born out of such a business model, supports an
arbitrary number of user-defined repos, which are trivial to host because they
are static folders and can be installed with a single-click via a
`flatpakrepo` url.

This is on top of other advantages such as upstream support from the likes of
GNOME, support for sharing code between apps using "frameworks", supporting
themes, using namespaces instead of modified AppArmor, p2p support, a better
permission system, etc.
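For reference, a `.flatpakrepo` file is itself just a small ini file describing a remote (Flathub shown as a familiar example; the GPG key is a placeholder here, not the real value):

```ini
# flathub.flatpakrepo -- opening this in GNOME Software, or passing it
# to `flatpak remote-add --from`, adds the remote in one step.
[Flatpak Repo]
Title=Flathub
Url=https://dl.flathub.org/repo/
Homepage=https://flathub.org/
GPGKey=<base64-encoded signing key omitted>
```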

------
LockAndLol
What I'd like to know is whether it's possible to run GUI applications in
their own containers. From what I understand about X, if a GUI app runs in the
same context as the DE, it will have access to all other windows, the
clipboard, etc.

That makes me think that Xephyr is mandatory in order to run an app in a
container, but I haven't found a satisfactorily easy way to do so. Would
systemd be the easy solution I'm looking for?

~~~
mook
Firejail, mentioned elsewhere in the thread, should do that correctly.
Personally, though, I've been using Docker (substitute systemd-nspawn or
whatever you like) with xpra; I'm not sure it's as secure, but it should block
accidental snooping while still supporting clipboard transfers.

It appears that the relevant developers are pushing towards using Wayland for
more secure remote windowing, but I do not know what state it's in.

------
m23khan
Nice - wondering if these are applicable to CentOS as well, given that CentOS
is touted as the freeware version of RHEL.

~~~
akeck
I expect that's the case. For the most part, CentOS has feature parity with
RHEL.

~~~
nightfly
CentOS is RHEL without the RHEL branding.

~~~
kyuudou
Also, you can update and upgrade without an RHN subscription. Technically,
it's a complete recompile of the source code used for RHEL into CentOS rpm
packages.

