That would keep the system running happily even if processes leak arbitrary amounts of memory.
1. You don't have enough memory for what you're trying to do
2. Your software has a memory leak
Add more memory to the system or fix your software. Why is this fancy feature needed at all? It's simpler and safer just to reboot and have a fresh start if you're out of memory. Set `panic_on_oom`, job done.
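For reference, the "just reboot" approach sketched above, as sysctl configuration on a stock Linux kernel (values per the kernel's sysctl documentation; the `99-oom.conf` filename is an arbitrary choice):

```shell
# Panic whenever the OOM killer would otherwise run:
#   0 = never (default), 1 = panic unless the allocation was constrained
#   by a cpuset/mempolicy, 2 = always panic.
sysctl vm.panic_on_oom=2

# Reboot 10 seconds after a panic instead of hanging.
sysctl kernel.panic=10

# Persist both settings across reboots.
echo 'vm.panic_on_oom = 2' >> /etc/sysctl.d/99-oom.conf
echo 'kernel.panic = 10'   >> /etc/sysctl.d/99-oom.conf
```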
To me, that sounds a bit like saying: "Compressing backups? If you can't store them as they are, you don't have enough storage for what you're trying to do." With finite resources, regardless of whether it's storage, RAM or CPU cores, it makes sense to look for options to utilize them more efficiently.
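The analogy holds up in miniature: trading CPU time for space is exactly what compression does. A quick illustration with Python's `zlib` (the ratio shown is typical for redundant data like logs, not a guarantee):

```python
import zlib

# Repetitive data -- logs, backups, many in-memory pages -- compresses well.
data = b"2024-01-01 INFO request handled in 12ms\n" * 10_000

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)

print(f"original:   {len(data):>8} bytes")
print(f"compressed: {len(compressed):>8} bytes")
print(f"ratio:      {ratio:.1f}x")

# Decompression recovers the original bit for bit -- the cost is CPU, not data.
assert zlib.decompress(compressed) == data
```

Swap-on-zram and zswap make essentially this trade at the page level: spend some CPU on (de)compression to stretch the RAM you have.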
Not everyone can acquire all the RAM for the problems they'd want to tackle, so solutions like this can provide tangible benefits with sometimes-acceptable drawbacks (the mentioned CPU usage). Personally, I feel that VPSes offering me more effective memory would be a good thing, since most of the software I use is memory-constrained rather than CPU-constrained, though I understand why opinions could differ.
As for memory leaks and rather liberal memory usage - I feel that it'll be inevitable for as long as the industry uses Java, .NET, Ruby, Python, Node and most other technologies with a high abstraction level (and VM runtimes, often coupled with GC). Yet not everyone is keen on writing their business apps in C++ or Rust.
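To put a rough number on how "liberal" these runtimes are: in CPython, even a tiny integer carries object-header overhead that a C `int` wouldn't (exact sizes vary by version and platform; these are typical 64-bit figures):

```python
import sys

# A C int is 4 bytes. A CPython int object also carries a reference
# count, a type pointer, and a size field before any digits of the value.
print(sys.getsizeof(0))           # typically 24-28 bytes on 64-bit CPython

# Containers add a machine pointer per slot on top of the objects
# they reference -- roughly 8 KB here for the list structure alone.
print(sys.getsizeof([0] * 1000))
```

Multiply that overhead across millions of objects and the baseline footprint of a high-level-runtime service dwarfs the equivalent C or Rust program, leak or no leak.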
- Modern UNIXes usually can’t communicate, and modern applications usually can’t handle, out-of-memory conditions, because of overcommit, which is necessary due to the pervasive use of `fork()` - ingenious and easy to use, but it lends itself poorly to resource accounting. Compare with Symbian, which was very awkward to program in, but was resilient enough to run the radio stack and user apps on the same processor.
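A sketch of why `fork()` forces the issue (Unix-only; the 64 MB figure is illustrative): the child nominally needs a copy of every writable page the parent owns, even if it will `exec` or exit immediately, so strict accounting would have to reserve all of it up front.

```python
import os

# The parent holds ~64 MB of writable (copy-on-write-able) memory.
big = bytearray(64 * 1024 * 1024)

pid = os.fork()
if pid == 0:
    # Child: under strict accounting, this fork would have had to commit
    # another 64 MB, even though the child touches none of it and exits
    # immediately -- the common fork-then-exec pattern. Overcommit lets
    # the kernel skip that reservation and share pages copy-on-write.
    os._exit(0)

_, status = os.waitpid(pid, 0)
print("child exited cleanly:", os.WIFEXITED(status))
```

The price of that convenience is that an allocation "succeeds" without any promise the pages exist, so failure arrives later, asynchronously, as the OOM killer rather than as a `NULL` from `malloc` the application could have handled.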
- While the monolithic-kernel ideal is to put all caches and other discretionary memory consumers in the kernel, that is of course impossible to achieve. Everything on your system has buffers, caches, or un-garbage-collected memory it could afford to lose, but the system can’t communicate or cooperate with itself to actually make use of that. Cooperative resource usage among mutually untrusting processes that have different ideas of how important things are is a hard and mostly unsolved problem.
So any buggy app (or just a `make -j`) can crash the system?
Yeah, no thanks.
Yes. This isn't the 80s anymore, nobody runs multi-seat mainframes. Any critical system should be designed to be resilient enough that a machine can reboot without downtime.