
Don't Leave Coredumps on Web Servers - hannob
https://blog.hboeck.de/archives/887-Dont-leave-Coredumps-on-Web-Servers.html
======
Xoros
"PHP used to crash relatively often." ?

In 15 years of hosting hundreds of PHP sites, I've never seen a single
coredump. Is it just me?

~~~
andreareina
No OS I've used in the last ~10 years left a core dump by default. Maybe
longer, but I didn't know what core dumps were then, so I wouldn't know what I
was looking at if they did.

~~~
cyphar
Almost all Unixes generate coredumps by default: illumos, FreeBSD, and yes,
GNU/Linux. The reason you might not have seen coredumps in a cores/
directory is that most GNU/Linux distributions configure the kernel so that
all new coredumps are sent to a processing script (systemd ships a coredump
handler), which means no coredump file appears in the current directory.
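On Linux you can see which of the two behaviours is in effect by reading
kernel.core_pattern; a pattern starting with "|" pipes the dump to a handler
program instead of writing a file. A quick check (the systemd pattern shown in
the comment is typical, not guaranteed on every distribution):

```shell
# Linux: show where the kernel sends coredumps.  A pattern starting
# with "|" pipes the dump to a handler program instead of writing a
# file into the crashing process's working directory.
cat /proc/sys/kernel/core_pattern
# Typical output on a systemd distribution (varies):
#   |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
# A plain pattern such as "core" would instead drop a file named
# "core" into the crashing process's current directory.
```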

------
cpantoga
I just want to point out that the article says you should disable coredumps
by putting "* soft core 0" in your limits.conf file. You should actually put
"* hard core 0" in your limits.conf file. The soft limit is user-configurable,
meaning applications with code that changes the ulimit for coredumps will
still create coredumps. You can test this yourself by typing "ulimit -c
unlimited" in your terminal -- you will now be able to create coredumps of
unlimited size. Limits have a soft and a hard value. The soft value is the
user-configurable maximum, meaning users can reconfigure it. The hard value
is the system limit: if you set the hard value to 0, no user will be able to
change it.
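The difference is easy to demonstrate: lowering a soft limit is always
permitted, but raising it is only permitted up to the hard limit. A sketch
(the limits.conf line matches the comment above; the shell commands just
illustrate soft vs. hard):

```shell
# /etc/security/limits.conf -- to disable coredumps for everyone, use:
#   *  hard  core  0
# With only "*  soft  core  0", a user (or an application) can undo it.
# Demonstration in a throwaway subshell:
(
  ulimit -S -c 0                  # lower the soft limit: always permitted
  echo "soft limit: $(ulimit -S -c)"
  ulimit -S -c hard && echo "raised back up to the hard limit"
)
# If the *hard* limit were 0, that last command could not raise anything.
```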

~~~
hannob
I didn't go into the details because, well, that was more of a side aspect of
the blog post.

Technically you're right. However, how you set these things depends on your
needs. Let's assume you have a server that several people develop software
on. You may want to allow them to create core dumps for debugging purposes.
Whether you set a hard or a soft limit thus depends on your use case.

I'd say usually a soft limit is good enough. Yes, this means users can lift
the limit; however, if they play with the ulimit coredump setting, I assume
they know what a coredump is, so it should be okay.

~~~
cuckcuckspruce
>Let's assume you have a server where several people develop software on. You
may want to allow them to create core dumps for debugging purposes.

Then set the hard core size limit for the user you're deploying as (you don't
deploy as your development user, right?) and then set the soft limit for
everyone else.
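In limits.conf syntax that split could look like this (the "deploy" account
name is a placeholder, not something from the article):

```
# /etc/security/limits.conf
# Hypothetical deployment account: coredumps hard-disabled.
deploy  hard  core  0
# Everyone else: off by default, but raisable via "ulimit -c" for debugging.
*       soft  core  0
```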

------
Terr_
If you want to test your coredump configuration for webpages (as opposed to
the command-line interpreter) here's a little suicidal script for triggering
it on demand:

[https://gist.github.com/DHager/a63d36dade21150dd86d](https://gist.github.com/DHager/a63d36dade21150dd86d)

~~~
jwilk
It's "kill -QUIT", not "kill -SIGQUIT". (But "kill -3" works too.)

~~~
anamexis
At least for GNU, it is valid to prefix the signal specification with SIG.

cf.
[https://github.com/coreutils/coreutils/blob/master/src/opera...](https://github.com/coreutils/coreutils/blob/master/src/operand2sig.c#L69)

Appears to be the case for BSD as well:
[https://github.com/freebsd/freebsd/blob/master/bin/kill/kill...](https://github.com/freebsd/freebsd/blob/master/bin/kill/kill.c#L163)

~~~
jwilk
The SIG prefix doesn't work in dash (the default shell in Debian).

POSIX¹ says:

 _An early proposal also required symbolic signal_names to be recognized with
or without the SIG prefix. Historical versions of kill have not written the
SIG prefix for the -l option and have not recognized the SIG prefix on
signal_names. Since neither applications portability nor ease-of-use would be
improved by requiring this extension, it is no longer required._

¹
[http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ki...](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/kill.html#tag_20_64_18)

------
JdeBP
By phrasing it as applicable to "web servers" in general, this article
carries the implicit assumption that _all_ HTTP servers "expose secret
information" in core dumps. This is not the case. There still exist WWW
sites where the HTTP server's code is entirely open, the content being
published is entirely open, and no password or other user-credentials
mechanism is used by the HTTP server.

Not all WWW sites in the world are complex constructions of server-side
scripting languages, plug-ins, cookies, user account authentication, and
"business logic". Some serve up static content with code that has been around
for decades and in the public domain for at least one decade. A core dump does
not expose secret information on such sites, because _there is no_ secret
information to expose.

It's just untidy. (-:

Of course, it is a lot harder to cause these HTTP servers to dump core in the
first place. The article's note about PHP being a source of core dumps again
carries the implicit assumption that PHP is involved in all WWW servers. It
is not. Moreover, good practice for at least one such HTTP server is to run
it under the aegis of an unprivileged, dedicated-purpose user account that
has _no ownership of, nor write access to_, any filesystem object anywhere
under its changed root. Thus it cannot create a core dump file irrespective
of core dump size resource limits.

------
sbahra
Or use backtrace.io to deal with it. It'll transfer the core dump or a
derivative to a centralized server.

Transparency: co-founder.

------
MichaelBurge
You can "Control + \" your application to generate a core dump if you want to
test it.

~~~
zuck9
It didn't work for me on macOS. I tried it on `yes` and `sleep 5` but it
simply quit the program. No files were saved.

~~~
krylon
It is possible to set a maximum size for core dump files via limits.conf /
ulimit. If this limit is 0, it effectively disables core dumps. Call ulimit -a
from a shell to see the current maximum size.

The linked article also explains how to change where core dumps are saved.
Maybe your system is configured to save them in a specific folder (I have no
clue what macOS' default settings for core dumps are!).
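For example, in bash (the exact limit names in the ulimit -a listing vary by
shell):

```shell
# List all current limits; the relevant line is "core file size":
ulimit -a | grep -i core
# Query just the coredump size limit ("0" = disabled, "unlimited" = no cap):
ulimit -c
```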

~~~
tomsmeding
I have a Mac, and my default setting is a ulimit of zero. Makes sense,
because macOS has its own way of creating crash reports with the
CrashReporter daemon.

------
cmurf
systemd has a coredump capture facility, managed by the coredumpctl command.
The coredumps are found in /var/lib/systemd/coredump/

I wonder how that relates to this article?
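For reference, the basic coredumpctl workflow looks like this (these are real
subcommands; whether the captured dumps pose the article's exposure risk
depends on who can read /var/lib/systemd/coredump):

```
coredumpctl list            # show captured coredumps
coredumpctl info            # metadata for the most recent one
coredumpctl dump -o core    # extract the most recent dump to ./core
coredumpctl debug           # open it in gdb
```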

------
MatthewWilkes
I'm a little surprised at the abuse contact step; is that really more
effective than emailing security@ and webmaster@ for the domains affected?

------
fxlv
If you really do use coredumps for debugging, one option would be to
automagically upload them to a 'debug' server. Unless you really want to
debug in prod.
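One way to wire that up is the kernel's core_pattern pipe mechanism; a
hypothetical sketch (the handler path and debug-server URL are made up, and
the handler would need hardening before real use):

```
# /etc/sysctl.d/50-coredump.conf
# Pipe every coredump to a handler instead of writing it locally:
kernel.core_pattern=|/usr/local/bin/core-forward %e %p

# /usr/local/bin/core-forward (hypothetical handler script):
#!/bin/sh
# The kernel streams the dump on stdin; %e (executable name) and
# %p (pid) arrive as $1 and $2.
exec curl --silent --upload-file - \
    "https://debug.internal.example/cores/$1.$2"
```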

