Everything's ok until you try to write to more memory than is available. At that point, either your program gets an allocation failure or segfaults, or Linux's Out-Of-Memory (OOM) Killer is invoked and kills a process chosen by heuristic — not necessarily the offending one.
This isn't a problem with an easy general solution. Maybe the process using a lot of RAM isn't runaway at all and is mission-critical. Maybe it is a runaway process, and something else on the box is mission-critical.
The most 'fair' way to go about it is to just kill a more-or-less arbitrary process. That sidesteps the whole issue.
You should instruct the OOM Killer to not kill PostgreSQL.
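On Linux the standard knob for this is the per-process `oom_score_adj` file in procfs. A minimal sketch — the `postgres` process name and the `-900` value are illustrative, and lowering a score requires root:

```shell
#!/bin/sh
# oom_score_adj ranges from -1000 (exempt from the OOM killer)
# to +1000 (preferred victim). Lowering it requires root, so the
# postgres loop below is illustrative only.
for pid in $(pgrep -x postgres); do
    echo -900 > "/proc/$pid/oom_score_adj"
done

# Raising your own score needs no privilege; mark the current
# shell as a preferred victim just to see the knob in action:
echo 1000 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj
```

Children inherit the value across fork/exec, so setting it once on the postmaster covers its backend processes too.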
Memory is a global resource. Even if you didn't overcommit, the process that gets the allocation failure is not guaranteed to be the misbehaving one; it's just the unlucky process that made the first allocation after all the memory was gone. (This is also why handling allocation errors in your application is usually pointless, even without an overcommitting kernel.)
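Whether the kernel overcommits at all is itself a policy setting you can inspect (a quick check using the standard Linux procfs path):

```shell
# vm.overcommit_memory policy:
#   0 = heuristic overcommit (the default),
#   1 = always overcommit,
#   2 = never overcommit (allocations fail up front instead of
#       the OOM killer firing later)
cat /proc/sys/vm/overcommit_memory
```

With mode 2 (plus `vm.overcommit_ratio`), `malloc` really does return NULL when memory is exhausted, which at least makes allocation-failure handling meaningful again.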