Hacker News

Yes, I believe swap makes some workloads faster.

The goal of virtual memory is to apportion physical memory to the things that need it most -- keep frequently used data in RAM, and page out things which aren't frequently used. This makes things go faster. When you disable swap, you constrain the VM's ability to do this -- now it must keep ALL anonymous memory in RAM, and it will page out file-backed memory instead, even if that file-backed memory is much hotter.
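You can watch this trade-off directly on any Linux box with a few standard diagnostics (no root needed; `vmstat` is in the procps package on most distros):

```shell
free -h                                   # totals for RAM, page cache, and swap
grep -E 'VmRSS|VmSwap' /proc/self/status  # per-process: resident vs swapped-out memory
vmstat 1 3                                # si/so columns = pages swapped in/out per second
```

If `si`/`so` stay near zero under load, swap isn't hurting you even when some of it is in use -- cold pages sitting in swap are exactly the intended behavior.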

I came to this conclusion by following the kernel community, starting with the "why swap at all" flamewar on LKML. See this response from Nick Piggin, a fairly prominent kernel developer: http://marc.info/?t=108555368800003&r=1&w=2 Nothing I've read from the horse's mouth has refuted this since then. This is true even on systems with gobs of memory.

You're worried about systems which grind to a halt under memory pressure, which is unquestionably a concern. The thing is, disabling swap doesn't fix this. As soon as you're paging out important file-backed pages (like libc), your system is going to grind to a halt anyway, and disabling swap can't prevent that (same point from a VM developer here http://marc.info/?l=linux-kernel&m=108557438107853&w=2). To really give your prod systems a safety net, you need to (a) lock important memory (like, say, SSH and libc) into RAM or (b) constrain processes which hog memory with (e.g.) memory cgroups. IMHO cgroups/containers are a better solution.
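A minimal sketch of option (b), assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges; the group name `hog` and the workload binary are hypothetical, and the limits are illustrative:

```shell
#!/bin/sh
# Sketch only: assumes cgroup v2 at /sys/fs/cgroup, run as root.
mkdir -p /sys/fs/cgroup/hog
echo 512M > /sys/fs/cgroup/hog/memory.max    # hard cap: OOM-kill inside the group, not the whole box
echo 384M > /sys/fs/cgroup/hog/memory.high   # soft cap: throttle and reclaim before hitting the hard cap
echo $$   > /sys/fs/cgroup/hog/cgroup.procs  # move this shell (and its children) into the group
exec ./memory-hungry-daemon                  # hypothetical workload, now bounded
```

This way a runaway process blows up inside its own sandbox instead of forcing the kernel to evict everyone else's hot pages.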




I understand the idea, but I don't have a lot of faith in the kernel to make the right decision under memory pressure--and neither do server application developers, hence the proliferation of massive userland disk caches in processes. The amount of allocated memory that the kernel can find to discard on a no-swap system is fairly small, so I'm somewhat relying on the rogue program overrunning the point of blowing out the caches and going all the way to panicking the system.

Ideal tuning would probably also reserve some decent amount of space for file caches and slab, but I'm not aware of any setting that does that.

It's an entirely different story on laptops and development servers, where the workload varies widely, may contain large idle heap allocations worth swapping, and manually configuring memory usage isn't practical.


>I understand the idea, but I don't have a lot of faith in the kernel to make the right decision under memory pressure--and neither do server application developers, hence the proliferation of massive userland disk caches in processes.

There is a school of thought which thinks especially server devs should just trust the operating system in this regard. One notable person of that school is phk of FreeBSD and Varnish fame: https://www.varnish-cache.org/trac/wiki/ArchitectNotes


I don't disagree with swap being helpful in some (maybe most?) cases.

But there are cases where it clearly does more harm than good. I had a PostgreSQL database server with a lot of load on it. The server had loads of RAM, more than what PostgreSQL had been configured to use plus the actual database size on disk. Even so, Linux one day decided to swap out parts of the database's memory -- I assume because that part was very rarely used and the kernel decided something else would be more useful to have in memory. When queries came in that touched that part of the database, they had huge latency compared to what was expected.

Maybe I'm misremembering, and maybe there was some way of preventing that from happening while still having swap enabled on the server.


vm.swappiness [1] could be what you're looking for. I find the default value of 60 leaves my desktop more prone to swapping than I would like (but then Firefox consuming >15 GB of RAM leaves it little choice).

[1] http://en.wikipedia.org/wiki/Swappiness
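For reference, the knob can be read and set like so; lowering it biases the kernel toward reclaiming page cache instead of swapping out anonymous memory (setting it requires root, 10 is just an illustrative value, and the sysctl.d filename is hypothetical):

```shell
cat /proc/sys/vm/swappiness    # current value; 60 is the usual default
sysctl -w vm.swappiness=10     # lower = prefer dropping page cache over swapping (root)
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf   # persist across reboots
```

Note this is a relative preference, not a guarantee -- under enough pressure the kernel can still swap out cold anonymous pages, which may be what happened to your database.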



