scvalex1's comments

Everything's ok until you try to write to more memory than is available. At that point, either your program segfaults, or Linux's Out-Of-Memory (OOM) Killer is invoked and kills a random process (not necessarily the offending process).
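For concreteness, here's a minimal sketch of that behaviour, assuming the default Linux overcommit setting (vm.overcommit_memory = 0): malloc() keeps succeeding well past the memory that's actually available, and the failure only surfaces when the pages are touched, at which point the OOM killer runs instead of malloc returning NULL.

    /* Sketch only: repeatedly allocate 1 GiB and touch it. Under the default
     * overcommit policy, malloc() rarely returns NULL; instead some process
     * (not necessarily this one) gets killed once real pages run out. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const size_t chunk = 1024UL * 1024 * 1024;   /* 1 GiB per allocation */
        for (size_t total = 0; ; total += chunk) {
            char *p = malloc(chunk);
            if (p == NULL) {            /* the "clean" failure you almost never see */
                fprintf(stderr, "malloc failed after %zu GiB\n", total >> 30);
                return 1;
            }
            memset(p, 0xff, chunk);     /* touching the pages forces real allocation */
            printf("touched %zu GiB\n", (total + chunk) >> 30);
        }
    }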

-----

rosser 532 days ago | link

(not necessarily the offending process)

It's terrible about this with PostgreSQL. The OOM Killer tends to thump the postmaster, not the offending backend.

-----

pyre 531 days ago | link

It's not something that's easy to develop a general solution to. Maybe the process that's using a lot of RAM is not runaway, and is mission-critical. Maybe it is a runaway process and something else is mission-critical.

The most 'fair' way to go about it is to just kill a random process. That skirts the whole issue.

-----

EdiX 532 days ago | link

You should instruct the OOM Killer to not kill PostgreSQL.
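One way to do that on reasonably recent kernels (not necessarily what EdiX had in mind) is to lower the process's badness score via /proc/<pid>/oom_score_adj; writing -1000 exempts it entirely. A rough sketch, with a placeholder PID, and note that writing a negative value needs root or CAP_SYS_RESOURCE:

    /* Sketch: exempt a given process (e.g. the postmaster) from the OOM killer
     * by writing -1000 to its oom_score_adj. The PID here is a placeholder. */
    #include <stdio.h>

    int main(void) {
        long pid = 1234;                          /* hypothetical postmaster PID */
        char path[64];
        snprintf(path, sizeof path, "/proc/%ld/oom_score_adj", pid);

        FILE *f = fopen(path, "w");
        if (f == NULL) { perror("fopen"); return 1; }
        fprintf(f, "-1000\n");                    /* -1000 = never select this process */
        return fclose(f) == 0 ? 0 : 1;
    }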

Memory is a global resource. Even if you didn't overcommit, the process getting the allocation failure is not guaranteed to be the misbehaving one; it's just the unlucky process that made the first allocation after all the memory was gone. (This is also why handling allocation errors in your application is usually impossible, even without an overcommitting kernel.)

-----

