Hacker News
Don't Leave Coredumps on Web Servers (hboeck.de)
87 points by hannob on June 16, 2017 | 32 comments



"PHP used to crash relatively often." ?

In 15 years of hosting hundreds of PHP sites, I've never seen a single coredump. Is it just me?


No OS I've used in the last ~10 years left a core dump by default. Maybe longer, but I didn't know what core dumps were then, so I wouldn't know what I was looking at if they did.


Almost all Unixes generate coredumps by default: illumos, FreeBSD, and yes, GNU/Linux. The reason you might not have seen coredumps in a cores/ directory is that most GNU/Linux distributions configure the kernel to pipe new coredumps to a processing script (systemd ships a coredump handler), so no coredump file ever lands in the current directory.
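If you want to check this on your own machine, the kernel's core_pattern sysctl shows whether dumps go to a file or get piped to a handler (the systemd-coredump line below is just one example of what you might see; the exact handler varies by distribution):

```shell
# Where does this kernel send core dumps? A leading "|" means they are
# piped to a handler program instead of being written to a file.
cat /proc/sys/kernel/core_pattern
# e.g. on a systemd-based distro:
#   |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
# vs. the kernel's built-in default, which writes a file named "core"
# in the crashing process's current directory:
#   core
```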


A bunch of the Unixes I've worked with do, it seems like. And at least some of the Linux systems on my own computers (I'm almost positive that this was my first contact with the idea of a core dump).


Do you have coredumps enabled on your hosting servers? As noted in the blog post, the major distributions have a configuration that forwards coredumps to crash-handling tools rather than storing them directly. Others simply have coredumps disabled by default.


It depends on how you are running it, in my experience. We're forced, due to ISP restrictions, to use FastCGI. I've not tested this theory, so YMMV, but I've had more trouble doing it that way than when using PHP as a module. We've seen less of it over the past few years, but it happened regularly enough in the past to make "grep for coredumps" a regular task.


It's easy to make PHP segfault. All you have to do is create an infinite recursion.


Unless you use xdebug.max_nesting_level


Xdebug in production you say?


It's not hard to make PHP segfault, and in the course of doing web development and hosting you will need to use core files to debug opaque issues with GDB.


I've had to do this ~3 times in the last couple years. In all cases it was extensions not playing well (xcache, gmagick/imagick). I actually couldn't find a single version of gmagick/imagick that tore down properly in PHP 7.0.x (it broke when PHPUnit launched extra-process tests, among other places).


I've always found the cause to be extensions as well. Usually a version conflict or something. It's so difficult to debug though. Always takes a while just to get everything you need to finally dig in.


I've experienced one or two over twelve years, usually due to a new version and always caught in dev. PHP crashes are incredibly rare.


I just want to point out that the article says you should disable coredumps by putting "* soft core 0" in your limits.conf file. You should actually put "* hard core 0" there instead. The soft limit is only a user-configurable default, so an application that raises its own coredump ulimit will still produce coredumps. You can test this yourself by typing "ulimit -c unlimited" in your terminal -- you will now be able to create coredumps of unlimited size. Limits come in a soft and a hard value: the soft value is the user-adjustable cap, which users can raise; the hard value is the system ceiling, so if you set the hard value to 0 no unprivileged user can change it.


I didn't go into the details, because, well, that was more a side aspect of the blogpost.

Technically you're right. However, how you set these things depends on your needs. Let's assume you have a server that several people develop software on. You may want to allow them to create core dumps for debugging purposes. Whether you set a hard or a soft limit thus depends on your use case.

I'd say usually setting a soft limit is good enough. Yes, this means users can lift the limit, however if they play with the ulimit coredump setting I assume they know what a coredump is, thus it should be okay.


>Let's assume you have a server that several people develop software on. You may want to allow them to create core dumps for debugging purposes.

Then set the hard core size limit for the user you're deploying as (you don't deploy as your development user, right?) and then set the soft limit for everyone else.
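In limits.conf terms that might look something like this (the user name "deploy" is made up for illustration; pam_limits applies the most specific matching entry, so the per-user line wins over the wildcard):

```
# /etc/security/limits.conf
# deployment user: core dumps impossible, regardless of ulimit calls
deploy    hard    core    0
# everyone else: off by default, but developers can opt back in
# with `ulimit -c unlimited`
*         soft    core    0
*         hard    core    unlimited
```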


Fair enough, I just wanted to point out the difference between soft and hard limits.


If you want to test your coredump configuration for webpages (as opposed to the command-line interpreter) here's a little suicidal script for triggering it on demand:

https://gist.github.com/DHager/a63d36dade21150dd86d


It's "kill -QUIT", not "kill -SIGQUIT". (But "kill -3" works too.)


At least for GNU, it is valid to prefix the signal specification with SIG.

cf. https://github.com/coreutils/coreutils/blob/master/src/opera...

Appears to be the case for BSD as well: https://github.com/freebsd/freebsd/blob/master/bin/kill/kill...


The SIG prefix doesn't work in dash (the default shell in Debian).

POSIX¹ says:

An early proposal also required symbolic signal_names to be recognized with or without the SIG prefix. Historical versions of kill have not written the SIG prefix for the -l option and have not recognized the SIG prefix on signal_names. Since neither applications portability nor ease-of-use would be improved by requiring this extension, it is no longer required.

¹ http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ki...


By phrasing it as applicable to "web servers" in general, this article entails an implicit assumption that all HTTP servers "expose secret information" in core dumps. This is not the case. There still exist WWW sites where the HTTP server's code is entirely open, the content being published is entirely open, and no password or other user credentials mechanism is used by the HTTP server.

Not all WWW sites in the world are complex constructions of server-side scripting languages, plug-ins, cookies, user account authentication, and "business logic". Some serve up static content with code that has been around for decades and in the public domain for at least one decade. A core dump does not expose secret information on such sites, because there is no secret information to expose.

It's just untidy. (-:

Of course, it is a lot harder to cause these HTTP servers to dump core in the first place. The article's note about PHP being a source of core dumps likewise implicitly assumes that PHP is involved in all WWW servers. It is not. Moreover, good practice for at least one such HTTP server is to run it under the aegis of an unprivileged, dedicated user account that has neither ownership of nor write access to any filesystem object under its changed root. Thus it cannot create a core dump file, irrespective of core dump size resource limits.


Or use backtrace.io to deal with it. It'll transfer the core dump or a derivative to a centralized server.

Transparency: co-founder.


You can "Control + \" your application to generate a core dump if you want to test it.


It didn't work for me on macOS. I tried it on `yes` and `sleep 5` but it simply quit the program. No files were saved.


It is possible to set a maximum size for core dump files via limits.conf / ulimit. If this limit is 0, it effectively disables core dumps. Call ulimit -a from a shell to see the current maximum size.

The linked article also explains how to change where core dumps are saved. Maybe your system is configured to save them in a specific folder (I have no clue what macOS' default settings for core dumps are!).
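For reference, the relevant ulimit incantations (the same on Linux and macOS, modulo where the dump ends up):

```shell
ulimit -c               # just the core file size limit; "0" means disabled
ulimit -a               # all limits, including "core file size"
ulimit -c unlimited     # enable core dumps for this shell and its children
```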


I have a Mac, and the default there is a core ulimit of zero. Makes sense, because macOS has its own way of creating crash reports, via the CrashReporter daemon.


More specifically it sends SIGQUIT.


TIL


systemd has a coredump capture facility, managed by the coredumpctl command. The coredumps are found in /var/lib/systemd/coredump/

I wonder how that relates to this article?


I'm a little surprised at the abuse contact step. Is that really more effective than emailing security@ and webmaster@ for the affected domains?


If you really do use coredumps for debugging, one option would be to automagically upload them to a 'debug' server. Unless you really want to debug in prod.



