
System Down: A systemd-journald exploit - gmueckl
https://www.openwall.com/lists/oss-security/2019/01/09/3
======
segfaultbuserr
Yet another proof of the following:

1\. It's reasonable to claim that amd64 (x86_64) is more secure than x86.
x86_64 has a larger address space, thus higher ASLR entropy. The exploit needs
10 minutes to crack ASLR on x86, but 70 minutes on amd64. If an alerting
system has been deployed on the server (the attacker needs to keep crashing
systemd-journald throughout this process), the extra time buys defenders a
chance to respond. In other cases, it makes exploitation infeasible.

2\. CFLAGS hardening works. Together with ASLR, it is the last line of defense
for all C programs. As long as there are C programs running, patching all
memory corruption bugs is impossible; mitigation techniques and sandbox-based
isolation are the only two ways to limit the damage. All hardening flags
should be turned on by all distributions unless there is a specific reason not
to. Fedora has enabled "-fstack-clash-protection" since Fedora 28
([https://fedoraproject.org/wiki/Changes/HardeningFlags28](https://fedoraproject.org/wiki/Changes/HardeningFlags28)).

If you are releasing a C program on Linux, please consider the following
flags:

    
    
        -D_FORTIFY_SOURCE=2         glibc hardening
        -Wp,-D_GLIBCXX_ASSERTIONS   libstdc++ hardening
        -fstack-protector-strong    stack smashing protection
        -fstack-clash-protection    stack clash protection
        -fPIE -pie                  better ASLR protection
        -Wl,-z,noexecstack          don't allow code on the stack
        -Wl,-z,relro                read-only relocations (ELF hardening)
        -Wl,-z,now                  immediate symbol binding (ELF hardening)
    

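For instance, a hypothetical invocation pulling these together (`server.c` is
a placeholder; note that `_FORTIFY_SOURCE` needs at least `-O1` to do
anything, and `_GLIBCXX_ASSERTIONS` only affects C++ code):

    
    
        gcc -O2 -D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS \
            -fstack-protector-strong -fstack-clash-protection \
            -fPIE -pie \
            -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now \
            -o server server.c
    
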
Major Linux distributions, including Fedora, Debian, Arch Linux, and openSUSE,
are already doing this. Similarly, Firefox and Chromium build with many of
these flags. Unfortunately, Debian did not use `-fstack-clash-protection`
(the flag was only added in GCC 8) and got hit by the exploit.

For a more comprehensive review, check

* Recommended compiler and linker flags for GCC:

 [https://developers.redhat.com/blog/2018/03/21/compiler-and-linker-flags-gcc/](https://developers.redhat.com/blog/2018/03/21/compiler-and-linker-flags-gcc/)

* Debian Hardening

 [https://wiki.debian.org/Hardening](https://wiki.debian.org/Hardening)

~~~
lmm
"Proof" suggests a level of absolute confidence that this example certainly
does not give.

> The exploit needs 10 minutes to crack ASLR on x86, but 70 minutes on amd64.

Is there any realistic threat model under which the difference between 10
minutes and 70 minutes is the difference between "insecure" and "secure"?

> Using mitigation techniques and sandbox-based isolation are the only two
> ways to limit the damage.

I'm not at all convinced that mitigation techniques represent a real
improvement in security, because by definition a mitigation technique is not
backed by a solid model. If you're letting an attacker control the
modification of memory that your security model assumes isn't modifiable, how
confident can you be that ad-hoc mitigations against the exploitation paths
you could think of cover all the possible ones? E.g. I can
remember a time when ASLR was touted as a solution to C's endemic security
vulnerabilities; now cracking ASLR as part of vulnerability exploitation is
routine, as seen here. Mitigations _appear_ to give a security improvement
because an app with mitigations is no longer the low-hanging fruit, but I
suspect this is a case of "you don't have to outrun the bear": as long as
there are C programs without mitigations, attackers will go after those first.
That's different from saying that mitigations provide substantial protection.

~~~
dTal
> Is there any realistic threat model under which the difference between 10
> minutes and 70 minutes is the difference between "insecure" and "secure"?

How about an intrusion detection system that flags up a human response? 10
minutes is hardly any time at all to respond, an hour gives you a chance to
roll out of bed.

~~~
wstuartcl
I guess, as long as the IDS senses the attack in progress quickly -- my gut is
this type of attack would be hard to detect until the outcome was achieved.
More likely the initial entry would be the detected event(s) -- in which case
yeah the extra time gives some safety net.

In either case, it still feels like pulling everything into systemd creates a
much harder-to-protect attack surface. Why should init care if your logger
crashes, let alone go down with it? I am not an anti-systemd person, but I
honestly do see the tradeoffs of the "let me do it all" architecture as a huge
penalty.

~~~
viraptor
> Why should init care if your logger crashes

It cares in the same way it cares about all the other processes. There's
nothing systemd-specific here. The journald service is configured to restart
on crash, the same as many other services.

It's not taking down init when journald crashes, either.

~~~
dane-pgp
> There's nothing systemd-specific here.

Well, except journald itself.

------
pdkl95
> If we send a large "native" message to /run/systemd/journal/socket ... the
> maximum size of a "native" entry is 768MB

Why does journald allow such large messages over a socket? That alone might
enable a denial-of-service attack; are real messages delayed/blocked if
someone spawns a bunch of processes that each send 768MB of junk to that
socket?

    
    
        commit c4aa09b06f835c91cea9e021df4c3605cff2318d
        Date:   Mon Apr 8 20:32:03 2013 +0200
        ...
        -#define ENTRY_SIZE_MAX (1024*1024*64)
        -#define DATA_SIZE_MAX (1024*1024*64)
        ...
        +#define ENTRY_SIZE_MAX (1024*1024*768)
        +#define DATA_SIZE_MAX (1024*1024*768)
    

WTF? Why would you need to increase the max size so much? What are you
intending to send over that socket?! Oh. From the commit[1], the missing lines
at the 2nd "...":

    
    
        +/* Make sure not to make this smaller than the maximum coredump
        + * size. See COREDUMP_MAX in coredump.c */
    

Why would you send coredumps over a socket? Just write them to a file and send
the file's path to journald. Increasing the max message size 1200% just to
avoid writing a core file is crazy.

[1]
[https://github.com/systemd/systemd/commit/c4aa09b06f835c91ce...](https://github.com/systemd/systemd/commit/c4aa09b06f835c91cea9e021df4c3605cff2318d)

~~~
groestl
Playing the devil's advocate here: not every box has a writeable filesystem.

~~~
pdkl95
Or the only writable filesystem could be a tiny tmpfs/ramdisk. In both
situations, do you even _want_ core dumps?

~~~
loeg
[https://linux.die.net/man/8/netdump](https://linux.die.net/man/8/netdump)

[https://www.freebsd.org/cgi/man.cgi?query=netdump](https://www.freebsd.org/cgi/man.cgi?query=netdump)

------
ape4
Top comment from reddit thread: FWIW distros that use -fstack-clash-protection
to compile systemd, including recent Fedora and OpenSUSE, aren't vulnerable.

[https://www.reddit.com/r/linux/comments/aeac8g/systemd_earns...](https://www.reddit.com/r/linux/comments/aeac8g/systemd_earns_three_cves_can_be_used_to_gain/)

~~~
nailer
Also in the article:

> To the best of our knowledge, all systemd-based Linux distributions are
> vulnerable, but SUSE Linux Enterprise 15, openSUSE Leap 15.0, and Fedora 28
> and 29 are not exploitable because their user space is compiled with GCC's
> -fstack-clash-protection.

~~~
krylon
Does that include openSUSE Tumbleweed?

------
gmueckl
These are three long-standing exploitable bugs in systemd's journald which can
be used to gain local root privileges. Most systemd-based distributions are
vulnerable and, from the looks of it, may have been for years.

~~~
nightfly
link?

~~~
mgalgs
I believe he's giving a synopsis of TFA

------
kazinator
[https://wiki.sei.cmu.edu/confluence/display/c/MEM05-C.+Avoid...](https://wiki.sei.cmu.edu/confluence/display/c/MEM05-C.+Avoid+large+stack+allocations)

> _This compliant solution replaces the VLA with a call to malloc(). If
> malloc() fails, the return value can be checked to prevent the program from
> terminating abnormally._

Wishful thinking. On overcommit-enabled VM systems (pretty much every
GNU/Linux desktop and server by default), malloc will cheerfully return a
valid-looking pointer to virtual memory even if the system is out of memory.
Your process will get a fatal signal when it later tries to use the memory. It
will not be prevented from terminating abnormally.

Any page of the memory could blow up. Maybe the first three pages get a frame
just fine, but when the process touches the fourth, it faults.

The reason to use malloc instead of the stack is that it goes awry only when
the system is out of memory, not when the stack is out of room.
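A minimal demonstration of the overcommit behavior (assuming Linux's default
heuristic overcommit policy and a machine with less than 64 GiB of RAM plus
swap):

    
    
        #include <stdio.h>
        #include <stdlib.h>
        
        int main(void)
        {
            /* Reserve 64 GiB in 1 GiB chunks. Under the default
             * overcommit policy each malloc() here typically succeeds:
             * only address space is reserved, no physical frames yet. */
            enum { CHUNKS = 64 };
            static char *p[CHUNKS];
            for (int i = 0; i < CHUNKS; i++) {
                p[i] = malloc((size_t)1 << 30);
                if (p[i] == NULL) {        /* the checkable failure path */
                    perror("malloc");
                    return 1;
                }
            }
            puts("64 GiB 'allocated' successfully");
        
            /* The uncheckable failure: touching the pages forces them
             * to be backed by frames. When the system runs out, the OOM
             * killer delivers SIGKILL mid-loop -- no error code ever
             * reaches the program. */
            for (int i = 0; i < CHUNKS; i++)
                for (size_t off = 0; off < ((size_t)1 << 30); off += 4096)
                    p[i][off] = 1;
        
            puts("never reached on most machines");
            return 0;
        }
    
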

~~~
geofft
But the only failure case there is a DoS. You cannot get memory corruption
from overcommit the way you can from a stack clash, and therefore you cannot
get arbitrary code execution. That seems like a huge reason to prefer
overcommit.

~~~
kazinator
Stack clashes are protected with guard pages. Guard pages can be skipped, but
not by code like strjoina which fills in all of the bytes.

[https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt](https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt)

The reporters do claim to have a root exploit that they will publish later,
which will be interesting (if it still applies on systems with robust guard
pages).

The strings are copied into the alloca area from the lower address to the
higher, so initially, the stpcpy calls might be corrupting memory belonging to
an adjacent mapping and not to the stack. Eventually the loop has to cross
back into the stack, and it cannot do so without writing into a guard page.

Situations where that is still exploitable can be contrived. For instance, the
program sets up a SIGSEGV handler to execute on an alternate stack using
sigaltstack. When the stack guard page is hit, that handler executes
successfully, the program continues somehow (e.g. by a recovering longjmp that
rewinds the stack) and then falls victim to the corruption.

One thing we can do is, after calling alloca, walk over the memory in reverse
order (higher addresses to lower) in strides of 4096 bytes, touching a byte at
every increment. This way we hit the guard page (on any system that has a page
size of at least 4096).
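A sketch of that probing idea as a macro (hypothetical, not from systemd; it
assumes a downward-growing stack and GNU C statement expressions, and must be
a macro because memory from alloca() inside a helper function would be
released when that helper returns):

    
    
        #include <alloca.h>
        #include <stddef.h>
        
        /* alloca(), then touch one byte per 4096-byte stride from the
         * highest address down to the lowest. If the allocation jumped
         * over the stack guard page, one of these probes lands in the
         * guard page and faults immediately, instead of letting later
         * writes silently corrupt an adjacent mapping. */
        #define alloca_probed(n) ({                                  \
                size_t n_ = (n);                                     \
                char *p_ = alloca(n_);                               \
                for (size_t off_ = n_; off_ > 0;                     \
                     off_ = off_ > 4096 ? off_ - 4096 : 0)           \
                        *(volatile char *)(p_ + off_ - 1) = 0;       \
                (void *)p_;                                          \
        })
    
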

~~~
geofft
There are a bunch of other cases where guard pages don't help you (as long as
you're filling in the bytes from the far end of the stack first, as you
probably are on a stacks-grow-down architecture). If you start overwriting an
area of memory that's being used by another thread, even if you eventually
segfault, you've still corrupted memory. That can be a problem if the process
has multiple threads and you can corrupt another thread's memory, or if you
overwrite mmaped data and so the writes get committed to disk even if the
program proceeds to crash and restart.

In any case, those + your SEGV handler case are ones where stack clash attacks
are exploitable to corrupt memory or hijack control flow. The OOM killer
_never_ corrupts memory or permits hijacking control flow; it only kills the
process if it's in an irrecoverable situation. So overcommit is much safer.

The technique you describe is sometimes called "stack probing" (or confusingly
"stack checking") and is what GCC's -fstack-clash-protection implements, which
is why this isn't exploitable on OSes which compile systemd with that option.

~~~
kazinator
This is true; if it's under threads and we hop a guard page, we are screwed,
due to the race condition; by the time we hit that guard page, the other
thread has used the corrupt stack.

I have seen this sort of thing happen! A third-party routing stack was used in
a system with thread stacks reduced to 64K. The third-party stack turned out
to have debugging macros producing "{ char msg[8192]; ...}" declarations for
sprintf. Twice the size of a page; it hopped right into another thread's
stack.

(Don't tell me systemd has threads in it, ouch!)

------
sus_007
If you're curious to read further criticism of systemd, then without-
systemd[1] has got you covered.

[1]: [http://without-systemd.org/wiki/index.php/Main_Page](http://without-systemd.org/wiki/index.php/Main_Page)

~~~
blablabla123
Yeah, I know that one. It puzzles me how systemd has such broad support among
distributions and end users. It redoes decades of system management in new
ways that don't exactly fit the picture.

We all know that some parts of Linux suck, but this can't seriously be the
solution. Solving things the Windows way. ;)

~~~
yarrel
SystemD doesn't have broad support among end users. It has weary resignation
and a few people who don't know how grep works.

~~~
adontz
Honestly, if systemd is really that bad, why has everyone adopted it?

We can blame Red Hat, as Lennart's employer, for forcing systemd into their
family of distributions (Red Hat, CentOS, Fedora, CoreOS). But why
Debian/Ubuntu? Why Arch Linux and Gentoo, which were never targeted at
"people who don't know how grep works"? Why SUSE? Either systemd brings more
good than harm, or we have to blame the Illuminati and the Masons.

~~~
insertcredit
You are using logical fallacies in your argument.

First, not _everyone_ has adopted it (loaded language). Google, which controls
the vast majority of Linux systems on the planet, has not. GNU has not. Others
[1] have not.

Second, the critique against systemd is substantial and solid enough to stand
on its own regardless of popularity. Popularity does not imply quality; you
should read "Worse is Better" by Richard Gabriel. Politics, network effects,
and an octopus-like architecture that imposes itself via ever-increasing
interdependencies are reasonable explanations for systemd's adoption. For a
distribution provider or package maintainer, it has gotten to the point where
it's easier to go along with systemd than to fight it, since the latter
option means extra work. This is a really sad state of affairs.

[1] [http://without-systemd.org/wiki/index.php/Main_Page](http://without-systemd.org/wiki/index.php/Main_Page)

~~~
adontz
OK, my bad, I admit. Not everyone. _Most_.

The most popular distributions, to be clear: Red Hat, Fedora, CoreOS, CentOS,
Debian, Ubuntu, and Arch, including Kubuntu, Xubuntu, Fedora KDE, and all
other members of these respected families, account for at least 75% of all
installations (data from different sources vary).

The octopus-like architecture cannot be the reason it was adopted in the
first place. People don't like dropping familiar/stable tools. Besides, it
was not so octopus-like in its first versions.

Politics and network effects sound to me like a conspiracy theory. Sorry, but
I really do not believe there is someone powerful enough to make Red Hat,
Debian, SUSE and Canonical, to name a few, harm themselves in one and the
same, very specific way.

The problems solved by systemd exist. Systemd was not the only project trying
to solve them; it was the most successful and most widely adopted. There was
Upstart. Remember Upstart? So, honestly, just reverting to SysV init is not
an option; it's burying your head in the sand. Systemd is not perfect. It
never was. SysV is just worse.

I look at [http://without-systemd.org/wiki/index.php/Arguments_against_systemd](http://without-systemd.org/wiki/index.php/Arguments_against_systemd)
and see that most arguments against systemd either a) ignore the obvious fact
that systemd is not a single program but a suite of programs that play nice
together, are optimized to exchange data in effective ways, keep
configuration in a similar manner, etc. (you cannot compare systemd to initd,
just as you cannot compare Atom to nano); b) are simply nostalgic; or c) are
somewhat valid, but again, systemd is not perfect, it's just much better than
SysV initd. That's why it was adopted, not because of politics.

~~~
dane-pgp
> I really do not believe that there is someone so powerful to make RedHat,
> Debian, SUSE and Canonical, to name a few, to harm themselves in one and the
> same, very specific way.

Let's go through these one by one then.

* RedHat could certainly have adopted it because they saw it as a way to take control of the development of a central piece of GNU/Linux software architecture. An init system is the one piece of software (other than a kernel) that you can't run two of at the same time on the same bare metal, so this is obviously a tempting piece of real estate to capture to provide a competitive advantage in a commoditised landscape.

* SUSE didn't want to be seen to be left behind with "old fashioned" sysvinit, and didn't have the resources to invest in their own competing init system, especially after Canonical had already thrown their own resources at Upstart. Siding with the RPM distro over the DEB based one was also an obvious choice.

* Debian had a contentious debate about which init system should be the default (and, in practice, after choosing systemd, the only) fully supported init system. The decision was placed in the hands of the Technical Committee, who were split down the middle between choosing systemd or Upstart. The tie was resolved by a single vote, that of the committee's chairman, Bdale Garbee:

[https://lwn.net/Articles/585363/](https://lwn.net/Articles/585363/)

He is, no doubt, an honourable man, but he is also a cheerleader for HPE:

[https://www.linux.com/NEWS/LINUX-LEADER-BDALE-GARBEE-TOUTS-POTENTIAL-HPES-NEWEST-OPEN-SOURCE-PROJECT](https://www.linux.com/NEWS/LINUX-LEADER-BDALE-GARBEE-TOUTS-POTENTIAL-HPES-NEWEST-OPEN-SOURCE-PROJECT)

despite SUSE being HPE's preferred Linux distro:

[https://www.zdnet.com/article/sweet-suse-hpe-snags-itself-a-linux-distro/](https://www.zdnet.com/article/sweet-suse-hpe-snags-itself-a-linux-distro/)

* Canonical (that is, Ubuntu) went with systemd shortly after the Debian vote, once it became clear that single-handedly supporting Upstart was an unsustainable option for the company, especially as packages were starting to add dependencies on systemd:

[https://www.zdnet.com/article/after-linux-civil-war-ubuntu-to-adopt-systemd/](https://www.zdnet.com/article/after-linux-civil-war-ubuntu-to-adopt-systemd/)

* With all these top tier distros succumbing to systemd, more and more packages started to depend on it as the init system, to the point that it became all but impossible for another distro to ship packages that didn't depend on systemd in its base system.

This is exactly the sort of slow creeping spread that systemd is notorious
for, using the momentum gained from each small victory to help crush bigger
and bigger targets, until it is unavoidable.

The worst part, though, is the historical revisionism, and the suggestion that
everyone just accepted systemd and abandoned all the software it replaces,
based purely on the merits of systemd. Most people had to accept systemd
whether they liked it or not. systemd is not a "suite of programs which play
nice together", it is a suite of programs which only play nicely together, and
which bully all the other programs into submission, despite systemd's
technical flaws.

------
AdmiralAsshat
First CVE I've seen to reference a System of a Down song.

(Every subsection starts with an SOAD song quote)

~~~
nailer
Off topic, but SOAD were really one of the most enduring metal acts of the
early 2000s. If you remember them, Needle Drop recently did a classic review
of Toxicity - enjoy the nostalgia!
[https://www.youtube.com/watch?v=-jI1ofec02A](https://www.youtube.com/watch?v=-jI1ofec02A)

------
_emacsomancer_
systemd is interesting as an innovative init/daemon-manager(+), but I remain
convinced it's not ideal as the de facto default Linux init. Surely the
default init should be something more manageable (i.e., with a smaller attack
surface) like runit, or OpenRC (in combination with something, perhaps
runit), or s6, etc.

~~~
chousuke
This attack doesn't touch the init system's attack surface, though. It breaks
journald, which is a separate program that you don't even have to run (syslog
daemons still work).

Calling this a bug in the init system is like blaming sysvinit for bugs in,
e.g., rsyslog.

~~~
insertcredit
This is disingenuous; systemd-journald is the default in every systemd-using
distribution I am aware of. The philosophy of systemd is all about tight
coupling and forcing its singular vision on end users. When that vision falls
apart, you cannot claim it is not really a systemd problem just because, in
theory, you could have gone out of your way and done something that is not
encouraged.

------
Twirrim
I might be reading the article wrong, but it reads like they're critical of
the use of alloca? I've never really written C or the like; what is dangerous
about alloca?

~~~
pilif
It allocates on the stack, which is inherently limited in size, and
allocations grow in the direction of the "juicy" stuff.

As so often with the C library, there is little bounds checking going on
either (in fact, the behavior once you overflow the stack is explicitly
undefined, so who knows what the hell is going to happen).

If you somehow let the user specify the number of bytes you're going to
allocate on the stack, then that's an exploitable issue.
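A sketch of that dangerous pattern (hypothetical names; this mirrors the
shape of the journald bug rather than its exact code):

    
    
        #include <alloca.h>
        #include <string.h>
        
        /* DANGEROUS: 'len' comes straight from the client. A huge value
         * moves the stack pointer far below the stack's guard page, so
         * the copy below writes into whatever mapping lives there. */
        void handle_message(const char *msg, size_t len)
        {
            char *buf = alloca(len + 1);   /* no upper bound check! */
            memcpy(buf, msg, len);
            buf[len] = '\0';
            /* ... parse buf ... */
        }
    
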

~~~
Twirrim
Ah, OK. So it is truly horrifying that it's in heavy use in journald, where
users inevitably have that sort of power.

------
lousken
Was systemd ever properly audited? This is exactly why I use Devuan on my
home server.

~~~
vesinisa
Despite being a security-critical component, systemd doesn't even seem to have
a proper security announcement process. For example, there was a remote code
execution vulnerability discovered in its DHCPv6 client in October
(CVE-2018-15688).[1] The issue was fixed with a GitHub PR that failed to
mention that the changes fix a remote code execution vulnerability:
[https://github.com/systemd/systemd/pull/10518](https://github.com/systemd/systemd/pull/10518)

They also have a rather informal release process. Their "release notes" are
just a very long NEWS file[2] in the git repo, with trivial and critical
changes mishmashed together. And for some reason, to this date there is no
mention of the DHCPv6 remote exec vulnerability fixed in the latest release.

I must say I don't feel too good about this project's attitude towards
security. Compare this to e.g. Apache.

[1]
[https://www.theregister.co.uk/2018/10/26/systemd_dhcpv6_rce/](https://www.theregister.co.uk/2018/10/26/systemd_dhcpv6_rce/)

[2]
[https://github.com/systemd/systemd/blob/master/NEWS](https://github.com/systemd/systemd/blob/master/NEWS)

~~~
dalai
That was not the first time either:

[https://github.com/systemd/systemd/issues/5144](https://github.com/systemd/systemd/issues/5144)

~~~
dijit
Or relying on other people to report the issues so that people patch.

[https://www.openwall.com/lists/oss-security/2017/01/24/4](https://www.openwall.com/lists/oss-security/2017/01/24/4)

------
kevin_thibedeau
Is Coverity being run against systemd? Would it not have found these issues?

~~~
uniformlyrandom
Usefulness of static code analysis beyond linting is greatly overstated (by
people selling static code analysis tools).

~~~
kevin_thibedeau
Coverity isn't useless. It finds these kinds of buffer overflow errors easily.
It's shameful that a prominent keystone FOSS project is using such outdated
coding practices in the first place. Not using the free tooling available for
such projects is doubly so.

~~~
tssva
Systemd uses Semmle LGTM and QL for static code analysis, which is a good
thing since Coverity Scan is currently down without an ETA for restoration.

~~~
techslave
the flaws have existed for years.

------
eikenberry
I don't see any distributions with security releases ready. Did they not
notify the affected distributions before making this public? I thought that
was standard procedure these days.

~~~
kasabali
It's in the linked page:

2018-11-26: Advisory sent to Red Hat Product Security (as recommended by
[https://github.com/systemd/systemd/blob/master/docs/CONTRIBUTING.md#security-vulnerability-reports](https://github.com/systemd/systemd/blob/master/docs/CONTRIBUTING.md#security-vulnerability-reports)).

2018-12-26: Advisory and patches sent to linux-distros@...nwall.

2019-01-09: Coordinated Release Date (6:00 PM UTC).

------
throw2016
There is a pattern of designing for the most niche use cases and imposing
complexity on everyone by default. Shouldn't it be the other way around,
letting those with advanced needs accept the complexity? Presumably they have
the know-how.

There are multiple examples of this, including the binary journal for those
with extreme auditing needs, optimizing for laptop wifi networks when Linux
predominantly runs on servers, and the ironically named 'predictable network
names' that are anything but predictable. These issues get hand-waved away
because of some fringe use case, with the additional overhead nonchalantly
imposed on all users.

------
egberts1
I still use the ISC DHCP client, because enterprise DHCP uses some esoteric
DHCP options that systemd's client cannot handle to this day.

------
posix_me_less
It's been two weeks since the GNU/Linux distributors were notified of this,
and still no fixes are available in Red Hat/CentOS/Debian.

Any ideas about how to mitigate this until official fixes are available?
Would it help to block auto-restart of journald? Any idea how to do that?

------
jononor
Are there static analysis tools that catch such problems? User controlled
buffer length for alloca.

------
upofadown
> On a Debian stable (9.5), our proof of concept wins this race and gains eip
> control after a dozen tries (systemd automatically restarts journald after
> each crash):

Seems like a good example of why automatically restarting things that crash is
a bad idea.

~~~
loeg
As with anything, there are tradeoffs.

A dozen tries is too few to really make a difference, but one thing that can
be done at the service-management layer without abandoning restarts entirely
is a delayed restart, or exponential backoff.

You could also imagine a per-service option for auto-restart. Paranoid
organizations with plenty of on-call engineers could disable auto-restart, if
they were convinced the engineers wouldn't just rig up their own auto-restart
to avoid the call.
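For what it's worth, systemd already exposes knobs like this per service; a
sketch of a drop-in override (directive names from systemd.service(5) and
systemd.unit(5); the service name is a placeholder):

    
    
        # /etc/systemd/system/example.service.d/restart-policy.conf
        [Unit]
        # Stop restarting if the service crashes 5 times in 10 minutes.
        StartLimitIntervalSec=600
        StartLimitBurst=5
        
        [Service]
        Restart=on-failure
        # Delay each restart by 30 seconds instead of respawning
        # instantly, slowing down any crash-until-it-works attack.
        RestartSec=30
    
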

------
zzzcpan
I suppose prlimit'ing the stack size of systemd-journald could also work
against this exploit (since clash protection works), without recompiling
anything.

------
codedokode
I think that putting a hard limit on alloca allocation sizes, somewhere
around 1-10 KB, could help prevent this type of exploit. It is also possible
to write a wrapper that chooses between alloca() and malloc()/free()
depending on the size; a sketch follows.
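A minimal sketch of that idea (hypothetical names; these must be macros
rather than functions, because memory from alloca() inside a wrapper
function would be released when the wrapper returns):

    
    
        #include <alloca.h>
        #include <stdlib.h>
        
        #define STACK_ALLOC_MAX 4096   /* hard cap for stack allocations */
        
        /* Small sizes come from the stack (fast, freed automatically on
         * function return); larger ones fall back to malloc() and must
         * be released with smart_free(). Caveats: the malloc() path can
         * return NULL, and 'n' is evaluated twice, so pass a variable. */
        #define smart_alloc(n) \
                ((n) <= STACK_ALLOC_MAX ? alloca(n) : malloc(n))
        #define smart_free(p, n) \
                do { if ((n) > STACK_ALLOC_MAX) free(p); } while (0)
    

One caveat worth a comment in real code: alloca'd memory accumulates until
the enclosing function returns, so calling such a macro in a loop can still
grow the stack without bound.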

------
z3t4
How can you check whether your systemd is patched? This sounds very serious.

------
insertcredit
I refuse to use systemd to this day. It's unbelievably complex and became
established through political power play rather than any sort of merit, which
is without a doubt not what I expected to see in the Linux ecosystem.

~~~
na85
Me too. Ironically, systemd is why I bought a Mac.

It's so hard to find a linux distro that actually works on a laptop but
doesn't use systemd, so I eventually just gave up on Linux entirely.

That left me with BSD, Windows, and macOS.

I tried FreeBSD and OpenBSD, but performance was atrocious and noticeably
worse in almost every way than Debian and Win7 on the same hardware. I'm sure
BSD makes a good server OS, but it's just not mature enough for serious
laptop work unless I were willing to keep the machine plugged in 99% of the
time. And what's the point of a laptop if the OS performs so poorly that it
won't even last a 4-hour flight?

I similarly seem to be the last guy on earth who still remembers the bad old
days of Microsoft's dominance, so I won't pay for Windows.

The winner by default is therefore Apple.

~~~
pedroaraujo
You bought a Mac because of systemd? Isn't that some sort of overreaction?

~~~
gpm
I don't particularly think so. The amount of my time that systemd has wasted
is insane. Buying a mac would have saved me all that time... if only I liked
their hardware more.

~~~
setquk
systemd has wasted a ton of my time as well.

Then again, I just bought a new MacBook Air, and the keyboard gave out after
two weeks and gave me a rash.

Can't win at all.

------
dabockster
I knew it was a bad exploit when I saw stpcpy() being used in the code. Always
use the "n" versions (e.g. stpncpy()) and measure your sizes!

~~~
kevin_thibedeau
stpncpy() is still brain dead for copying null-terminated strings. Regardless
of your viewpoint on truncating strings, the zero-pad behavior is wasteful and
rarely necessary.

~~~
dabockster
My point stands. C/C++ aren’t memory safe. You can’t just copy stuff around in
memory and expect the compiler to fix it for you when the sizes don’t match.
This isn’t Python or Node.

I’m seriously curious as to why the non-n versions are still allowed to
compile. The dangers seem way too real.

~~~
loeg
C and C++ give you low level tools, in a variety of senses. For backwards-
compatibility reasons, which is a big part of the value of languages like C,
C++, Go, and Java, removing _anything_ is difficult or impossible. We barely
got rid of gets(3) in C11, which is and has always been _impossible_ to use
safely.

In contrast, it is at least _possible_ to use the non-n versions of string
routines safely. In addition, the 'n' versions have deleterious side effects
that make their adoption unappealing:

* zero padding to n

* no nul termination for strings of length n or longer
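A quick demo of those two side effects (strncpy shares them with stpncpy):

    
    
        #include <stdio.h>
        #include <string.h>
        
        int main(void)
        {
            char buf[8];
        
            /* Pitfall 1: no nul termination when the source is >= n
             * bytes long. buf now holds "01234567" with NO trailing
             * '\0'; printing it as-is would read past the array. */
            strncpy(buf, "0123456789", sizeof(buf));
            buf[sizeof(buf) - 1] = '\0';   /* the usual manual fix */
            printf("%s\n", buf);           /* prints "0123456" */
        
            /* Pitfall 2: zero padding. Copying a 2-byte string into a
             * 64-byte buffer writes 62 padding '\0' bytes -- harmless
             * here, wasteful when the destination is megabytes. */
            char big[64];
            strncpy(big, "hi", sizeof(big));
            printf("%s\n", big);
        
            return 0;
        }
    
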

The BSD 'l' versions (strlcpy, etc) don't have either of those first two
problems, but do have:

* not in standard C, nor POSIX;

* as a result, glibc still refuses to implement them

In addition, some people will inevitably complain that both 'n' and 'l'
variants inevitably truncate source strings if they don't fit, and that
therefore no one should use either of them, you should just perfectly
calculate lengths and use the unchecked ones without making mistakes. I can't
tell if these people are delusional or trolling, but realistically,
programmers make mistakes, and using unchecked string routines is a common
source of buffer overflow; this is usually a worse problem than string
truncation.

Your point doesn't stand.

