In defence of swap: common misconceptions (chrisdown.name)
150 points by c4urself on Jan 14, 2018 | 148 comments



In my experience, a misbehaving Linux system that's out of RAM and has swap to spare will be unusably slow. The process of switching to a tty, logging in, and killing whatever the offending process is can easily take a good 15 minutes. Xorg will just freeze. Oh, and hopefully you know what process it is, else good luck running `top`.

Until this is fixed, I'll just keep running my systems with very small amounts of swap (say, 512MB in a system with 16GB of RAM). I'd rather the OOM killer kick in than have to REISUB or hold down the power button.

Some benchmarks with regards to the performance claims would be nice.


> In my experience, a misbehaving linux system that's out of RAM and has swap to spare will be unusably slow.

Yeah, this is basically the main drawback of swap. I tried to address this somewhat in the article and the conclusion:

> Swap can make a system slower to OOM kill, since it provides another, slower source of memory to thrash on in out of memory situations – the OOM killer is only used by the kernel as a last resort, after things have already become monumentally screwed. The solutions here depend on your system:

> - You can opportunistically change the system workload depending on cgroup-local or global memory pressure. This prevents getting into these situations in the first place, but solid memory pressure metrics are lacking throughout the history of Unix. Hopefully this should be better soon with the addition of refault detection.

> - You can bias reclaiming (and thus swapping) away from certain processes per-cgroup using memory.low, allowing you to protect critical daemons without disabling swap entirely.

Have a go setting a reasonable memory.low on applications that require low latency/high responsiveness and seeing what the results are -- in this case, that's probably Xorg, your WM, and dbus.
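For anyone who wants to try that, here's a rough sketch using the raw cgroup v2 interface (run as root; the path, the 512M value and $PID_OF_XORG are all placeholders, and this assumes nothing else is already managing the hierarchy):

  echo +memory > /sys/fs/cgroup/cgroup.subtree_control    # enable the memory controller for children
  mkdir /sys/fs/cgroup/critical
  echo 512M > /sys/fs/cgroup/critical/memory.low           # reclaim avoids this group below 512M
  echo $PID_OF_XORG > /sys/fs/cgroup/critical/cgroup.procs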


And a multigigabyte brick of a web browser.


You can use Alt+SysRq+f to manually call oom_kill.


On many distros, this is disabled by default because there's a chance that the OOM killer will hit something important, like the screen lock. For Ubuntu, enable it in /etc/sysctl.d/10-magic-sysrq.conf.
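For reference, enabling it looks something like this (kernel.sysrq is a bitmask; 1 enables all SysRq functions, and 64 is the specific bit that allows signalling/OOM-killing processes):

  echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/10-magic-sysrq.conf
  sudo sysctl --system   # reload sysctl settings from all config files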


If you’re using logind, this isn’t a problem – if the screenlocker dies, it will be restarted up to 3 times (without revealing screen content), and if it is killed once more, your screen just won’t unlock (you can then unlock via a tty by logging into that and using loginctl unlock-session).

If the daemon responsible for this is killed, all your sessions will simply be killed.

In no situation will your screen unlock due to a process being killed (in contrast to the pre-logind world, where, if the screenlocker dies, your screen is free for all, as there the locker is just a fullscreen window)


In 2005 I was able to run Linux on 512MB RAM _without_ swap (on purpose - every day) without issues. Today it will bark at me on 8GB of RAM for not having swap enabled.


I'm running on an 8GB Linux box without swap and never even come close to running out of memory. If I don't have any VMs running, then it's pretty unusual for me to use much more than 1-2 gigs. It's interesting... one of my colleagues has serious problems with performance because he keeps running out of memory -- and I don't think he's doing anything unusual.

I think there is something wrong with some of the major distros. I got really fed up with Ubuntu because of random junk running without my approval and eventually migrated to Arch simply because I have a lot more control over configuration. I don't mean to trash one distro over another because each one has its strengths and weaknesses, but I've been surprised at how bloated the average Linux install is these days. I'd love it if there was more attention paid to it.


> I'm running on an 8GB Linux box without swap and never even come close to running out of memory. If I don't have any VMs running, then it's pretty unusual for me to use much more than 1-2 gigs. It's interesting... one of my colleagues has serious problems with performance because he keeps running out of memory -- and I don't think he's doing anything unusual.

64gb here and 40gb used. Firefox alone uses 2 gigs with a mere ~30 tabs.


I run KDE, Firefox, Slack (browser based app), Chromium, VS Code (browser based app) (plus Gvim, shell etc) on Arch and it's currently steady at 3.9 GB (of 16)

Edit: Also be sure it's actually used: https://www.linuxatemyram.com/


Browsers have a tendency to use some percentage of available RAM. Firefox using 40gigs on a 64gig machine doesn't mean it'll try to use 40gigs on an 8gig machine.


What function does the 0.5GB swap have?


I just wish Linux distro installers would make opting out of swap the default option; no, I don't want to swap on my SSD. The last time I installed a distro, I still had to select the manual option for partitioning.

With an 8 Gig stick in my NUC, for normal desktop usage it never goes above 3.


And even then, they provision an absurd amount of it. I just did a fresh Ubuntu install. On a machine with 32 gigs of RAM, it creates a 32 gig swap partition by default!


They probably reason that you want your system to be able to hibernate, plus storage is cheap.

I have 8GB of swap for the 8GB in my laptop, for that reason.

On my desktop with 16GB, the 2GB of swap it has is sometimes too little, and everything grinds to a halt.


32GB of SSD isn't that cheap!


A last-ditch safety buffer, to induce the slowdown so that you'll recognise that RAM is running low and hopefully prevent you from actually running out completely.


By that time, my system is usually responding to simple keypresses with latencies >1min... :|


I actually did this on an old laptop: set up 200MB of swap for 4GB of RAM.

And it caused huge problems for me: it would run out of swap while having plenty of free memory and then go cripplingly slow.


Set the swappiness lower.


That only delays the inevitable. Try setting overcommit to 2 and ratio to 100, then note apps crashing.
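For anyone who wants to try that experiment, these are the knobs in question (a sketch, run as root; sysctl -w only lasts until reboot):

  sysctl -w vm.overcommit_memory=2    # 2 = refuse allocations that would exceed the commit limit
  sysctl -w vm.overcommit_ratio=100   # commit limit = swap + 100% of physical RAM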


It doesn't go lower than 0. Swapping can still hose your machine when swappiness is set to 0.


The point is that a system doesn't have to misbehave to allocate more memory than the total RAM. And in those cases, there is a very good reason to have swap space, and swapping won't impact the performance of the system - rather the opposite.


Sure it does misbehave. The memory allocation failures should be handled properly, and by that I mean not by crashing. Very few applications should require memory use beyond current free RAM - especially not the JVM, a JavaScript VM, a web browser, or even a video player. Yet this silly heuristic in Linux lets it happen.


No, that is not correct. If you have any idle processes, their memory can safely be swapped out without impacting performance. The user should not be forced to quit programs as soon as they become idle. Also, as described in the article, a program often allocates pages which also become unused and can be swapped out. In an ideal world, a program would not allocate pages which become idle, but that happens with complex software (and often depends on the user interaction, and thus is not completely predictable). Swapping out idle pages is a very simple solution to make more memory available for the active processes.


Very few people on this thread read and understood the article. The point isn't working with data sets larger than RAM. The point is making better use of the RAM you do have by taking pages you'll almost never touch and spilling them to disk so that there's more room in RAM for pages you will touch.

Banning swap is like making self-storage companies illegal and forcing everyone to hold all possessions in their homes. Sure, you'd be able to get to grandma's half broken kitschy dog coaster that you can't bring yourself to throw away, but it would also be harder to fit and find your own stuff, the stuff you need all the time.

If you find yourself driving to and from the self storage place every day, you probably need a bigger home. But self storage is plenty useful even if you almost never visit it.


The issue is that the current OOM killer doesn't support this usage at all.

To extend the analogy: what do you do if grandma comes and fills your house with stuff? You need space to work, so you go and drop it off at the self storage place, but what if she just keeps filling your house up?

The OOM killer will do absolutely nothing until both your house and the whole self storage place are totally full. By that point, you've spent a huge amount of time just driving to and from self storage, so you haven't had time to do any actual work; it would probably have been better to tell grandma that you don't want any more stuff once she filled up your house for the first time.


Well, it doesn't help that when grandma calls and asks whether you have room for more stuff, the Linux kernel responds on your behalf, "Yes, of course I have room. I live in a TARDIS." And you then do all that driving to the self-storage facility to maintain the illusion as long as you can. I really don't like overcommit.

Anyway, I agree with you that this behavior is annoying, but I think it ought to be possible to fix it (e.g., with memory cgroups or something like Android's lmkd) without giving up on the idea of spilling infrequently-accessed private dirty pages to disk.


The analogy is now getting in the way, rather than helping to clarify.


One problem with relying on the OOM killer in general is that the OOM killer is only invoked in moments of extreme starvation of memory. We really have no ability currently in Linux (or basically any operating system using overcommit) to determine when we're truly "out of memory", so the main metric used is our success or failure to reclaim enough pages to meet new demands.

As for the analogy -- there are metrics you can use today to bat away grandma before she starts hoarding too much. We have metrics for how much grandma is putting in the house (memory.stat), at what rate we kick our own stuff out of the house just to appease grandma, but then we realise we removed stuff we actually need (memory.stat -> workingset_refault), and similar. Using this and Johannes' recent work on memdelay (see https://patchwork.kernel.org/patch/10027103/ for some recent discussion), it's possible to see memory pressure before it actually impacts the system and drives things into swap.
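As a concrete example, on a cgroup v2 machine you can already watch the refault counters mentioned above (the cgroup path here is only illustrative):

  grep -E '^workingset_(refault|activate|nodereclaim)' /sys/fs/cgroup/system.slice/memory.stat
  grep workingset /proc/vmstat   # the same counters, system-wide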


> One problem with relying on the OOM killer in general is that the OOM killer is only invoked in moments of extreme starvation of memory. We really have no ability currently in Linux (or basically any operating system using overcommit) to determine when we're truly "out of memory", so the main metric used is our success or failure to reclaim enough pages to meet new demands.

The problem with relying on swap instead of the OOM killer is that, instead of the OOM killer, the user gets invoked in moments of extreme starvation of memory and the whole machine gets rebooted. The OOM killer is far gentler; it only kills processes until the extreme starvation is resolved.


Well, just don't allow overcommit, problem solved.


Disallowing overcommit still doesn't solve the whole problem: you can just burn all of RAM and all of swap in commit charge, then swap. Another failure mode is excessive paging IO causing the kernel to spill private memory to the swap file prematurely, preferring instead to fill RAM with dirty disk-backed pages that only later get written out to disk. When your system is in this state, accessing even actively used pages (say, your window manager's heap) might incur a slow hard fault.

(OSes have gotten a little more resilient against this scenario over the years, but it illustrates the issue.)

Memory pool balancing is a really hard control theory problem! I don't blame some people for taking the RAM size efficiency hit and just turning off swap entirely. I just think it's a shame to have to resort to that extreme.


I wonder if ARC can be used for replacement policy:

https://en.wikipedia.org/wiki/Adaptive_replacement_cache

Although I guess it's patent encumbered...


> Very few people on this thread read and understood the article.

Hmm. I read the article and I think I understood it. However, in my experience, you run out of RAM if and only if your working set is too big. In my experience, all involved find it desirable to reduce the size of the working set as quickly as possible. Your experience seems to differ.

> The point isn't working with data sets larger than RAM. The point is making better use of the RAM you do have by taking pages you'll almost never touch and spilling them to disk so that there's more room in RAM for pages you will touch.

Your reasoning is too sloppy. It supports neither your blanket statements nor your pained analogy.

You appear to presuppose that:

(1) The kernel can predict which pages the user will "almost never touch."

(2) Mispredicting which pages will be "almost never touched" is of relatively low cost.

(3) Swapping pages that the user will "almost never touch" to disk frees up an appreciable amount of RAM.

(4) When pulling those pages back from disk, the work held up is, on average, less important than whatever we got to do with the RAM in the meantime.

I disagree with (1). Like I said elsewhere in the comments on this article, the kernel cannot reliably predict whether a process will "almost never touch" a given page. The kernel does not have sufficiently detailed knowledge of the process's purpose or access patterns.

I also disagree with (2). The consequences of getting these predictions wrong seem to be very bad. When lots of mispredictions happen in a tight cluster, the kernel and all running processes will be stopped when the user forcibly bounces the machine. If you let the OOM killer run instead of swapping, the kernel stays up and only a few running processes die. Having a working set whose size is larger than RAM but smaller than RAM + swap seems to be a recipe for a very long cluster of such mispredictions and a human intervention.

I am curious to hear about workloads where (3) occurs. (Non-latency-sensitive Java code that doesn't churn objects too fast? You've allocated a heap of a certain size, and the half or so that's free doesn't get disturbed too much.)

Regarding (4), even if the kernel could reliably predict cold pages, "page will almost never be touched" isn't necessarily the right criterion for swapping a page to disk. What if reading from the page will be on the critical path for something users do care about, such as logging in and killing a misbehaving process?


> (1) The kernel can predict which pages the user will "almost never touch."

> I disagree with (1). [...] The kernel does not have sufficiently detailed knowledge of the process's purpose or access patterns.

You're in for quite a surprise, particularly on desktop. I have a number of processes with some pages swapped out, and I see no impact on interacting with the said processes. Firefox, gDesklets, a volume changer, and several instances of rxvt are among them.

> (2) Mispredicting which pages will be "almost never touched" is of relatively low cost.

> I also disagree with (2). The consequences of getting these predictions wrong seem to be very bad.

Only in the case of repeated mispredictions, which only happens if you are really low on RAM and well on the way to invoking the OOM killer anyway. With (1) being quite accurate (mainly because swapping out unused pages is not that aggressive), (2) magically becomes true as well.


> You're in for quite a surprise, particularly on desktop. I have a number of processes with some pages swapped out, and I see no impact on interacting with the said processes. Firefox, gDesklets, a volume changer, and several instances of rxvt are among them.

Is an appreciable amount of RAM freed up here? I was under the impression that Firefox churned through whatever it allocated (garbage-collected Javascript VM) and rxvt had a very small footprint, most of which is code shared among all of your rxvt instances.

>> I also disagree with (2). The consequences of getting these predictions wrong seem to be very bad.

> Only in the case of repeated mispredictions, which only happens if you are really low on RAM and well on the way to invoking the OOM killer anyway.

Even if things were as rosy as you suggest, isn't that my point? Better that the OOM killer cleans something up than I bounce the machine and clean everything up. That said, the OOM killer won't necessarily run anytime soon:

I just spun up a VM with 1GB of memory and 1GB of swap. 'time ssh guest echo hello' from the host usually takes anywhere from 140ms to 1.2s. I wrote a C program that allocates a gigabyte (in two 512MB pieces) and churns it through swap by writing to random bytes. 'time ssh guest echo hello' now takes 4-8 seconds. The oom killer didn't run once in the five minutes I ran the swap-churning process. Setting /proc/sys/vm/swappiness to 0 didn't change the symptoms; 'time ssh guest echo hello' still takes 4-8 seconds. This is on Linux 4.9.65.

If I crank the number of churning threads up from 1 to 8, the one 'time ssh guest echo hello' I tried took 33 seconds. I am not patient enough to see what happens with 64 churning threads, which is entirely reasonable, but I would expect the latency involved in rescuing the machine to cause any reasonable administrator to simply bounce it.

In this workload, the kernel is consistently failing to predict which pages are unimportant; the mispredictions are expensive; the RAM saved by swapping out bash, sshd, and killall (or whatever) is negligible; and the important work of allowing remote login to diagnose and clean up the mess is held up unconscionably long to make room for what, in practical instances, is a user error.

I did a 'swapoff -a' and ran the same C program and it gets killed almost immediately.
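If anyone wants to reproduce a similar anonymous-memory churn without writing the C program, something along these lines should behave comparably (assuming stress-ng is installed; check its manual, these flags are from memory):

  # keep ~1 GB of anonymous memory constantly re-dirtied on the 1 GB RAM / 1 GB swap guest
  stress-ng --vm 1 --vm-bytes 1g --vm-keep --timeout 300s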


> self storage is plenty useful

With self-storage rising to over $300 per month, it's more cost effective to take the stuff to the dump and buy it again if it is ever needed.


Well, it depends where you live. In Buffalo, you can get self-storage for around $25/month, if internets are to be believed.


"Very few people on this thread read and understood the article."

I started to read the article, and then thought, "I know this, who doesn't know this?" and stopped.

"The point is making better use of the RAM you do have by taking pages you'll almost never touch and spilling them to disk so that there's more room in RAM for pages you will touch."

Exactly. Who with any technical experience in this day and age doesn't understand that? Are there really people trying to argue against swap?


> Exactly. Who with any technical experience in this day and age doesn't understand that

You're on a site infamous for the comment "I switch to Node when I want to be close to the metal".


Is this a joke or was there actually such a comment? If so, can you link it?

To someone like me, who usually lives somewhere between C++ and shader code, it sounds a bit too strange to be true.


I think this was the origin: https://news.ycombinator.com/item?id=2710383


In theory swap is useful, in practice it can be less so https://news.ycombinator.com/item?id=16147634


If I hit my thumb with a hammer, that doesn't mean the hammer isn't useful. The edge cases with swap are also entirely useless arguments against swap.


Feel free to explain it to me.

" Under no/low memory contention

[...]

Without swap: We cannot swap out rarely-used anonymous memory, as it’s locked in memory. While this may not immediately present as a problem, on some workloads this may represent a non-trivial drop in performance due to stale, anonymous pages taking space away from more important use."

Now imagine that I have no memory contention. In other words I've got 8 Gigs of memory and I have never run out of memory. The OOM killer has never run. I've never even come close. How exactly is this representing a non-trivial drop in performance?

To be fair, if I put some of my long running processes into swap, I could cache more files, but I really don't see how this represents a statistically significant improvement. I honestly can't think of anything else.

If you sometimes run out of memory (or even get close), then you should have some swap. This seems fairly obvious to me. Relying on the OOM killer to "clean things up" is pretty dubious. But was there ever any serious argument to do this? I've literally never heard of that before.

I'd be very happy to hear something enlightening about this, but I didn't see anything in the article (perhaps I missed it).


> If you sometimes run out of memory (or even get close), then you should have some swap. This seems fairly obvious to me. Relying on the OOM killer to "clean things up" is pretty dubious. But was there ever any serious argument to do this? I've literally never heard of that before.

Why does that seem obvious to you? With swap, running low on memory is game over. Without swap, the OOM killer runs. You can call the OOM killer dubious, graceless, or any number of other things, but it gets the system responsive again without doing as much damage as the human intervention that's otherwise required.


I mean, it really depends on your application how non-trivial the performance improvement will be, but this statement isn't theoretical -- memory bound systems are a major case where being able to transfer out cold pages to swap can be a big win. In such systems, having optimal efficiency is all about having this balancing act between overall memory use without causing excessive memory pressure -- swap can not only reduce pressure, but often is able to allow reclaiming enough pages that we can increase application performance when memory is the constraining factor.


The real question is why those pages are being held in RAM. If they're needed, swapping them out will induce latency. If they're a leak or not needed, the application should be fixed to not allocate swathes of RAM it does not use.


There are some systems which are memory-bound by nature, not as a consequence of poor optimisation, so it's not really as simple as "needed" or "not needed". As a basic example, in compression, more memory available means that we can use a larger window size, and therefore have the opportunity to achieve higher compression ratios. There are plenty of more complex examples -- a lot of mapreduce work can be made more efficient with more memory available, for example.


Indeed. None of the above are typically used (as in most of the time) on desktop systems, where swap is the most problematic. As for compression, the only engine I know of that wants more than 128 MB of RAM is lrzip and other rzip derivatives.

Common offenders that bog down the system in swap for me as a developer are the web browser, the JVM (Android) and Electron-based apps (two messengers).

I would also like a source that substantiates the claim that using swap in map-reduce workloads actually helps. Or perhaps in database workloads. Or on any machine with a relatively fixed workload.


I'm a big fan of determinism and service uniformity. Having that rarely used but response-time-critical function/data/whatever swapped out increases service time variation at best, and at the very least complicates all worst-case response time calculations.

I understand from the land of JIT compilers, garbage collectors, and oversubscribed everything that this is not much of a substantial concern as these features are already traded away.

The swap may be the best case in a bad situation. I would argue along the lines of don't be in a bad situation...

I'm looking at you 8 of 16 GB used on cold boot Mac laptops... Looking at you with indignation and rancor Chrome.


One of the article's points is that running without swap doesn't necessarily alleviate that. The rarely-used code pages of your rarely-used but response time critical daemon can just as easily be dropped from the page cache and have to be refaulted in from disk, and in fact that's more likely if there isn't swap available to stow the dirty anonymous pages from the cron daemon that wakes up once a day or whatever.

The solution for your rarely-used but response time critical daemon is for it to mlock() its critical data and code pages into memory, which works regardless of whether or not you have swap available. (Or, alternatively, use one of the cgroup controllers that the article alludes to, to give the critical daemon and related processes memory unaffected by memory pressure elsewhere in the system).
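On systemd machines the cgroup route can be as simple as setting a property on the unit (a sketch; mydaemon.service is a placeholder, and MemoryLow= needs the unified cgroup v2 hierarchy):

  systemctl set-property --runtime mydaemon.service MemoryLow=256M
  # or persistently: systemctl edit mydaemon.service and add MemoryLow=256M under [Service]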


No they cannot. Code that is loaded into memory stays in memory when there is no swap. (Mlock is to prevent swap out or compression - major faults.) The exception would be lazy library loading. BIND_NOW is your friend in this case.

Essentially having no swap is similar to having everything mlocked - no major faults can happen except with mmapped files which will just use direct disk IO.

If you mean disk caches, when have you seen a multigigabyte executable?


Code that is loaded into memory stays in memory when there is no swap.

That is not true. In the normal case (absent debugging, JIT, self-modifying code etc), pages of executable code are clean, shared mappings so they do not interact with swap at all.

As clean, shared mappings they are eligible to be dropped from the page cache in the same way as other clean file mapped pages.

(Your executable code pages essentially are mmapped files. )


> Under temporary spikes in memory usage
>
> With swap: We’re more resilient to temporary spikes, but in cases of severe memory starvation, the period from memory thrashing beginning to the OOM killer may be prolonged. We have more visibility into the instigators of memory pressure and can act on them more reasonably, and can perform a controlled intervention.

Somehow that doesn't resonate with my experience. I tend to remember the cases where I can't even SSH into the box, because the fork in sshd takes minutes, as does spawning the login shell.

I'd really like some way to have swap, but still let the OOM killer loose on the biggest memory hog when the system slows down to a crawl. I haven't found that magic configuration yet.


Well the article does suggest a mechanism to apply.

As for the problem with SSH and login: You might well find that it is not the fork that is the problem. You might well be surprised at how much chaff is run by a login shell, or even by non-login shells.

A case in point: I recently reduced the load on a server system that involved lots of SCP activity by noticing that, thanks to RedHat bug #810161, every SCP session, even though it was a non-login non-interactive shell, was first running a program to enumerate the PCI bus, to fix a problem with a Cirrus graphics adapter card that the machine did not have, on a desktop environment that the machine did not have. This was driven by /etc/bashrc sourcing /etc/profile.d/* .

* https://github.com/FedoraKDE/kde-settings/blob/F-26/etc/prof...


There's a userspace daemon that does that: https://github.com/rfjakob/earlyoom (I am the author)
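A typical invocation looks something like this (a sketch; check the README for the exact flag semantics):

  # SIGTERM the biggest offender when available RAM and free swap both drop below 10%
  earlyoom -m 10 -s 10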


Swap iotime quotas maybe? I suspect the solution is a lot more complicated though - what I really want is a way to wall off my UI so it stays responsive during swap thrashing and lets me react to the situation.


I remember not being able to SSH into the box too, but this stopped being the case (for me at least) some five years ago - I was able to log in to heavily swap-thrashing boxes without any problem. I thought that something had changed in contemporary Linux distros. Either the login shell was given much more priority or something like that.


Even for those who understand this well, it's historically been really hard to coerce the Linux kernel into applying the right swap policies to your application.

As the author notes, much of this has been improved by cgroups, and there have always been big hammers like mlock(), but even with those things it can be hard to prevent memory thrashing in extreme cases. I've seen swap disabled completely, as a last resort, by people who understood how it worked, because of that.

It's always seemed to me that this was mainly a problem of the kernel configuration being too opaque. Why can't you configure on a system-wide basis that you can use swap e.g. only for anonymous pages and for nothing else?

Similarly it would be nice to have a facility for the OOMkiller to call out to userspace (it would need dedicated reserved memory for this) to ask a configured user program "what should I kill?". You might also want to do that when you have 10G left, not 0 bytes.


Android has this last bit. As explained at https://www.youtube.com/watch?v=ikZ8_mRotT4&t=2145 (linked in the article), though, Linux does not presently have the facilities to determine when you have "10G left" in a way that applies across all system configurations.


Swap is only used for anonymous pages (well, and dirty private file pages, which are basically the same thing).


I recently reenabled swap on my Windows machine due to frequent OOM, even with 16GB of RAM while playing Overwatch and browsing on Firefox. It seems like both of these programs allocate vast swaths of memory but then do not actually use that memory very heavily. After I turned swap back on, I did not notice any degradation in performance but my system stability skyrocketed.


Windows has vastly different policies for RAM allocation and commit than Linux. Windows basically does not overcommit, while Linux not only does it by default but quite depends on it for various loads to work properly. In consequence, the userspace has a tendency to handle RAM differently, but there is no magic: if programs are trying to allocate twice the amount of RAM and then only use half of it, Windows with a swap that can be large enough will work perfectly, while without swap the allocation will fail. Under Linux, the situation is less clear: without swap it will succeed (well, it depends on the fine details of overcommit that are selected, but you get the idea), although if you really then use all that RAM, the OOM killer will start to more or less "randomly" kill "any" process to cope with the lack of RAM (as a last resort measure though; the caches and buffers are flushed before that, etc.)


Windows also has two syscalls - one to reserve memory and one to commit memory. You can reserve+commit at once, but you can also just reserve a chunk of virtual memory that you commit at a later time. Accessing pages that aren't committed is a segfault. So you can say "I might need up to X contiguous bytes of virtual memory" and then commit as you go. IIRC, Windows will let you over-reserve, but not over-commit.

Edit: sorry, not two syscalls, it's an option to the malloc-equivalent - VirtualAlloc


Without wanting to impose my method or reasoning here, I run my dev machine without swap, and I'd rather have the same for the cluster machines I access.

This is for academic use only. I know how much RAM my machine has, and if I oom, it usually isn't because I tried to squeeze in just a tiny bit too much data, but rather because I made some stupid mistake and keep allocating small chunks of memory very rapidly. On a system with even a moderate amount of swap, this makes everything grind to a halt, and it is usually much faster to just reboot the machine and deal with the problems later in the unlikely event that rebooting actually causes problems.


If you're running a single-(or close to)-purpose machine, then setting an explicit memory usage limit on the main app could give you even better/faster results.


We've disabled swap (by not configuring a swap partition) on every server we've deployed since 2009. It's a little irritating to have to manually remove the swap partition from various Linux' "server" default install options even today. Of course this means I'm still installing on bare metal that I own, so...dinosaur.


I'd be more convinced by the argument that swap shouldn't be thought of as slow RAM if the author addressed the fact that it's generally known as 'virtual memory'—and it has been since at least System 370, so it's not simply a later misconception: http://pages.cs.wisc.edu/~stjones/proj/vm_reading/ibmrd2505M... . Instead the article just omits the term 'virtual memory' completely, and pretty conspicuously.

I also think that a convincing case for swap would have to discuss the concepts of latency, interactivity, and (soft) real-time performance, things that largely weren't to the fore in the salad days of the 370 family or the VAX. Virtual memory is the TCP of local storage.


That is not the argument.

The article actually says, four times over, that it should not be thought of as emergency memory. It's not emergency memory; it's ordinary memory that should see use as part of an everyday memory hierarchy.

And if you are going to question the terminology, the elephant in the room that you have missed is calling paging swapping. (-:


Despite the repeated use of the word 'emergency' this is not quite obvious. For example in

> many people just see it as a kind of “slow extra memory” for use in emergencies

the scare quotes are around 'slow extra memory' not 'emergencies'. Now granted in the last bullet point of the conclusion it affirms that VM is a source of slow memory, but earlier it uses 'memory' where it's referring specifically to RAM, for example

> Without swap: Anonymous pages are locked into memory as they have nowhere to go.

Really the main reason my original comment was rubbish is that I took the article far too much as a general discussion of swap when, as it said, it's largely about how much swap to enable on a given Linux system running some already-determined software.


I have an older Chromebook (c720), which is really quite memory starved (2GB RAM), and have experienced ChromeOS completely frying the SSD simply through prolonged tab-heavy swapping.

Now, I've replaced the SSD and installed a non-Google Linux distro, and would like to limit the amount of swapping Firefox can do.

I had been planning to simply use cgroups' memory features to limit the amount of memory consumed by Firefox processes, but if I understand the article correctly (which I admit I didn't read in full detail), I should also be able to tune swapping to limit the actual amount of swapping that takes place, avoiding a drastic uptick in SSD wear whenever I open too many tabs.

That, and perhaps a Firefox extension that suspends background tabs in memory (which I've used before with a certain amount of effectiveness in the pre-WebExtension days).
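Back to the cgroup idea: one way to cap both memory and swap for Firefox, assuming a recent systemd on the unified cgroup v2 hierarchy (property names per systemd.resource-control; the sizes are just examples):

  # launch Firefox in its own transient scope with a RAM ceiling and a swap ceiling
  systemd-run --user --scope -p MemoryMax=2G -p MemorySwapMax=256M firefox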


ChromeOS does not use an on-disk swap partition. Your SSD died just because cheap SSDs like those typically found in Chromebooks die early. :(

ChromeOS uses zram instead of physical swap, which works quite well, even on 2GB models. Zram is available in any Linux distro, being built into the kernel, and is also the default configuration in GalliumOS (Ubuntu+Xfce for Chromebooks, most of which are less broadly compatible than your PEPPY).
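If you want the same thing on a regular distro, a minimal manual sketch looks roughly like this (most distros also have a package, e.g. zram-config, or a unit that does it for you):

  modprobe zram
  echo lz4 > /sys/block/zram0/comp_algorithm   # optional; lzo is the usual default
  echo 2G > /sys/block/zram0/disksize          # uncompressed size of the swap device
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0                     # higher priority than any disk-backed swap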


Firefox has a couple options which may help you get by. I can't guarantee these will fix everything but they are worth experimenting with.

about:memory has various options, including a 'minimize memory usage' button and profiling tools.

about:preferences has Privacy & Security > Cached Web Content > Override automatic cache management (select and set at 500MB, 1GB, or whatever works best).


Yep, you can adjust the max amount of cache it will store in RAM and on disk (separately).


I don't see any option to separate that in the settings page, is it hidden behind a flag somewhere?


In about:config search for browser.cache.memory.capacity and browser.cache.disk.capacity. They are in kilobytes. To enable or disable either of those caches, use browser.cache.disk.enable and browser.cache.memory.enable.


Thanks!


Thank you, I had not known about about:memory. Interesting.


Setting the `swappiness` value might be of use to you.

Setting a lower value (min is 0, default is 60 iirc, max is 100, so the default is above the halfway point) reduces the likelihood of the kernel swapping. The lower the number, the fuller RAM needs to be before the kernel will start swapping. Hence a lower number means less swapping happens, and therefore less SSD wear.
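Concretely, something like this (10 is just an example value; only the sysctl.d file survives a reboot):

  sysctl vm.swappiness                  # show the current value
  sudo sysctl -w vm.swappiness=10       # lower it for this boot
  echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf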


Thanks, I should simply start with that.

Along with a hard limit set by cgroups so that browser tabs start being killed in order to stop the swap from being overwhelmed when memory is being used well beyond its capacity.

(I'd be interested to know if 'swappiness' already effectively implements the following system-wide, but here goes.)

Now that I think of it, it's not so much the quantity of disk storage being consumed by swap as the number of write / delete transactions. One thus wonders if an approach in which the browser somehow favors a small number of tabs to keep in RAM, and then dumps the state of the remaining tabs to disk, might be just as effective, but without worrying about the growth of swap. Then, when the user opens a tab that had been saved to disk, if we crudely assume that open tabs are roughly comparable in memory use, we can take care of the whole affair of 'swapping' in a single exchange between memory and disk, where we dump the state of the in-memory tab in order to make way for restoring the saved one.

(Or did I just reinvent what swappiness does already? From your description of swappiness, I'm inclined to guess the answer is no. This approach strikes me more akin to using the swapfile as a filesystem, and keeping just a small number of tabs paged into RAM.)


Swap was a great idea, but its time is gone. Swap doesn't make sense anymore, hard drives have not scaled and kept up with the improvements in RAM.

In the Pentium 1 era EDO RAM maxed out at 256MB/s and hard disk xfer was 10MB/s. Common RAM size was 16MB.

In today's era DDR4 maxes out at 32GB/s and hard disk xfer is 500 MB/s. Common RAM size is 16GB.

RAM xfer rate has grown 320x. RAM capacity has grown 100x. Disk xfer rate has grown 50x.

Swap is no longer a useful tool.


The purpose of swap is not running things out of it. The purpose is shoveling unused data out of memory. And for that it doesn't need to be particularly fast.

Swap has a lot less purpose in a world without memory leaks and extraneous functions. But in practice it's quite good at getting several gigabytes of unnecessary data out of the way, so ram can be used properly.

Swap, well-used, should only take up a few percent of the drive's bandwidth.


> The purpose of swap is not running things out of it. The purpose is shoveling unused data out of memory. And for that it doesn't need to be particularly fast.

You can't detect a priori whether data is "unused." If you guess wrong a few times in a row, you get the familiar pattern where your Linux box is unresponsive to everything and needs to be bounced.

If you could detect whether data is rarely used, swap still isn't necessary. Applications can mmap() a file and use that region for "rarely used data" if such is known in advance.

Extraneous functions should be backed by the executable in the common case. In the JIT case, they probably won't be JITted anyway.

I still think the OOM killer is less intrusive than swapping to disk. It kills some, but not all, of the processes on the machine. The system pretty reliably comes back to life in less time than it takes a human to diagnose the problem and bounce the system. As a bonus, no human needs to get involved.


> Applications can mmap() a file and use that region for "rarely used data" if such is known in advance.

They could, but that's a lot like just making swap be manual.

> Extraneous functions should be backed by the executable in the common case.

I don't mean the code itself, I mean all the data it builds up for something that isn't needed.


> They could, but that's a lot like just making swap be manual.

Sure. Isn't that a good thing? Rarely-used data can be swapped out to storage allocated for the purpose, as you desire, and too-large working sets don't have to hose the machine.

> I don't mean the code itself, I mean all the data it builds up for something that isn't needed.

I guess I don't often run programs that waste large amounts of memory for no reason. Either my working set fits or it's too large, and swap only matters if it's too large.

Is Java the typical beneficiary here?


This reply was a much better explanation than the entire original article, or to summarize “Because memory leaks”.


Swap may or may not be obsolete, but I don't find this particular argument on the subject convincing.

When dealing with swap, continuous transfer rate of the hard disk is not the relevant metric; seek time is.

In my experience, when the system starts needing to read a lot of pages back in from swap, it tends to do so in more or less random order. It reads a small amount of data, then it seeks to a new position, then reads another, and so on.

(I also find the argument unconvincing for a separate reason: I think hard drives are somewhat obsolete. Even my home system has flash instead of swap. And flash has a massively better seek time.)


There have been times when I have needed to do something which requires more memory than is physically available on the machine. Without swap, those tasks would have been literally impossible to do without upgrading the machine. With swap, it just takes a little longer - but still much shorter than it would take to order more RAM sticks.

One of those times was on a 512MB RAM VPS, where I needed to compile something - you don't want a 512MB VPS to do a lot of compilation, but in that one instance, I was very glad I could easily just make a swap file and get on with it. The other time was on my laptop with 8 GB of RAM.

Also, even ignoring the times where swap makes possible a task which would otherwise have been impossible, you flat out ignored the content of the article. Did you even read it? If you have a long running task which allocates a lot of memory, then proceeds to only very rarely use that memory (or maybe it only needs that memory when it shuts down, or just forgot to free that memory), swap allows the system to swap out that memory and instead do something useful with it, like caching files or not killing processes. It doesn't matter that the disk is slower than RAM, if the swapped-out memory is rarely or never accessed.


Well, they wouldn't have been literally impossible. Anything that can be done with swap can be done by writing the data to disk and then reading it back. That's all swap is doing.


Ok, you are technically correct; I _could_ have changed GCC to cache stuff to disk instead of keeping everything in RAM (which would probably have been a big refactor), but I think we both know that's not really practical.


32GB/s is 128× 256MB/s, not 320×, and 16GB is 1024× 16MB. The latter, I think, is the reason swap has gotten less useful. It seems even modern programmers don’t know what to do with all that RAM (possibly because, at top speed, it takes 8 times as long to fill it all up (about 20 times as long if the data has to come from a hard disk)).


    hard disk xfer is 500 MB/s
You need better disks. The one in the laptop I'm typing this on can do 2GB/s, which greatly weakens your comparison.


They won't be able to achieve anything like 2GB/s when handling reads from swap, because they are likely to be lots of small chunks of reads - the pages being swapped in are often 4k and will be very difficult to predict, so bulk I/O will be unlikely. Same goes for writes, with the added proviso that your SSDs will have far worse small file write performance than reads, which is probably the 2GB/s headline figure.

A better statistic to use is how many IOPS the disk can handle.


If I may ask, which disks are you using?


the highest end NVMe SSDs can sustain those multi-gigabyte/second reads. https://smile.amazon.com/Samsung-950-PRO-Internal-MZ-V5P512B... for instance.

not sure even they could pull off those speeds with random reads, though.


I think when most people say "hard disk", they usually mean rotating discs that use magnetism to store data. That is what I took tmyklebu's question to mean, since I too have never heard of a HDD reaching anywhere near 2 GB/s.


> I think when most people say "hard disk", they usually mean rotating discs that use magnetism to store data.

while i would personally avoid referring to an SSD as a "hard disk", i was attempting to interpret the original claim in the most charitable possible fashion, since it was utterly absurd if interpreted strictly.


Either way, I learned something. I can picture an array of 20 disks sustaining 2GB/s, but you aren't going to fit 20 disks into a laptop. I didn't realise a high-end SSD could get there, or even how much better regular SSDs are for throughput. (That cost per TB, though!)


This is an informative and well written article, but seems incomplete in this day and age. In public cloud environments, network attached storage is far more prevalent, so the swap story may be different there (I honestly don't know though). Since the author works at Facebook, he probably lacks experience in this regard.


Every cloud provider I've worked with (okay, so AWS :P) gives you ephemeral local storage. Obviously you don't swap onto a network drive.


Even on AWS they're phasing out local storage on new instance types: https://ec2instances.info/ (search for ec2 only, but it's the majority of new instance families)


Modern AWS instance types are EBS-only.


With the exception of the High IO types (I2/I3). They still get it and the newer instances get NVMe SSDs. In other words they are making it a feature of certain types that would benefit from it.


For example, F1 instances have NVMe local instance storage.

https://aws.amazon.com/ec2/instance-types/f1/


Huh. You're right; it seems for the newer instance types only c1.medium and m1.small get swap mounts. That seems like a mistake by AWS; but I guess you can use an M3 instead of an M5.


Well, the default Kubernetes install (kubeadm) will actually fail to install when swap is enabled (even worse, you can force it to ignore that, but kubelet will then fail to start while swap is enabled).


Does hibernating via a swap file work reasonably well yet? I haven't had a chance to try this out yet, but that's the main reason I still have a swap partition on my laptop.


Well done mate - you are the first person to mention this here. It was also only briefly mentioned in the article.

Yes, hibernation does work well and it requires swap. Personally, I set a swap partition equal to RAM + 512MB on systems that I want to hibernate on.

Linux also supports swap files and this might be handy: https://wiki.debian.org/Hibernation/Hibernate_Without_Swap_P...
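Roughly what the linked page boils down to for a swap file (a sketch; exact steps vary by distro and bootloader):

  filefrag -v /swapfile   # note the first physical_offset value
  # then boot with kernel parameters along the lines of
  #   resume=/dev/<device holding the filesystem> resume_offset=<that physical_offset>
  # and regenerate your bootloader config / initramfs.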


On Windows I have found it necessary to disable swap to keep myself efficient. Many times I've had applications decide to allocate massive amounts of memory, which leads to my system slowing down with tons of swap activity. In nearly all cases, I didn't want my system to try its best to handle these massive memory requests; rather, it should have just killed the offending application. Often in these failure scenarios the swap goes nuts and my computer becomes so unresponsive that it takes a long time to even get to kill the bad actor.

Thus I disabled swap and I never had these unresponsive issues. I run with 32GB of ram so generally well behaved applications never run into memory issues.

Some applications that would cause issues: too many VirtualBox instances that use more than available memory, or a text editor that chokes trying to open a >1GB text file (looking at you, the new JS-based editors).


Windows is terrible at swapping. As soon as you hit max RAM performance suffers a lot. Even if you never use inactive apps.


Yep. When I was using Win7 with 2GB of RAM, I was basically managing it manually - as soon as memory usage hit 90-95%, I was shutting apps down. If I let it swap, it would often hang completely.


For many years now I have been using RAM compression instead of swap on desktops/laptops. I particularly like zram, but zswap is also great if you are hitting hardware limits.

The difference with swap is that the computer doesn't get unresponsive, it just slows down a bit. And RAM compression still buys some time before the OOM killer hits.


Count me among the believers in running w/ swap. Here's all it takes to provide Linux with a little swap space:

  fallocate -l 8G /swapfile
  chmod 0600 /swapfile
  mkswap /swapfile
  swapon /swapfile
Add an entry in /etc/fstab & you're done. "This little trick" made all the difference on a compute cluster I managed, where each node contained 96G of RAM. It's much more pleasant to monitor swap usage than the OOMKiller and related kernel parameters.
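For completeness, the /etc/fstab entry is typically a single line like:

  /swapfile  none  swap  sw  0  0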


How long before desktop OSs manage memory like mobile ones, automatically shutting down background apps that aren't being used, so that the system remains responsive no matter what?

Or, worst case, if two tasks that need 8GB each are running on a machine with 8GB memory, kill one, let the other finish, and restart the first one. Or, less ambitious, freeze one, swap it out to disk, let the other finish, and only then resume the frozen app.

Desktop OSs are so primitive at memory management, forcing the complexity onto the user.


> How long before desktop OSs manage memory like mobile ones, automatically shutting down background apps

Once desktop systems and applications support required APIs to handle saving state before being shut down.


> Once desktop systems and applications support required APIs to handle saving state before being shut down.

SIGTERM and friends? :-)

If your application is just dropping state on the floor as a result of having an intentionally trappable signal being sent to it or its children, that seems like a bug.


I feel like the real metric I'd like to see from swap usage is: how much time did I spend waiting to swap in a page, how many extra cache pages it gave me, and what's the cache hit ratio. If the big purpose is to allow those extra few pages to be available, then either it's worth doing or it's not - there should be an objective way to look at this. Unfortunately only the second and third parts are easily available. The first... maybe via systemtap?


> how much time did I spend waiting to swap-in a page

You can do this with eBPF/BCC by using funclatency (https://github.com/iovisor/bcc/blob/master/tools/funclatency...) to trace swap related kernel calls. It depends on exactly what you want, but take a look at mm/swap.c and you'll probably find a function which results in the semantics you want.
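For example, something along these lines should give a latency histogram for reading pages back in from swap (the function name is kernel-version dependent; swap_readpage lives in mm/page_io.c on 4.x kernels):

  # -u reports latencies in microseconds
  ./funclatency -u swap_readpage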


So, in other words, if you have enough memory for your workload that you won't run out, there's no benefit to having swap space (i.e. you've wasted money on memory you don't need).

But if you DO have swap space, there won't be a performance hit (at least not under Linux) because it will only swap out some rarely used pages and then sit there doing nothing.

So, in the general case, it's better to have it and not need it than need it and not have it.


> So, in other words, if you have enough memory for your workload that you won't run out, there's no benefit to having swap space (i.e. you've wasted money on memory you don't need).

No, that's the opposite. If you have enough memory for your workload that you won't run out, swapping lets you use more memory for disk cache (instead of keeping unaccessed anonymous pages in real ram).

Unless by "won't run out" you mean "never have to throw away a disk cache page", which seems very unrealistic.


> If you have enough memory for your workload that you won't run out, swapping lets you use more memory for disk cache

Except that it doesn't happen in practice, on my systems anyway. If you have plenty of memory, you can keep all your programs in it and as much as the system wants to cache and still not run out.

The theory says that swap effectively buys you some memory to spend on more important things (than what the system chooses to page out). So does buying more memory.

> Unless by "won't run out" you mean "never have to throw away a disk cache page", which seems very unrealistic.

I have an instance of top running on my desktops & laptops all the time. I never see cache using up all of the memory.

                  total        used        free      shared  buff/cache   available
    Mem:           7848        1523        5827          53         497        6033


Are you talking desktop workflows here? If this is a service, why are you paying for RAM that never gets used? Get a smaller instance.


> Are you talking desktop workflows here?

Yes [I explicitly spelled out desktops & laptops] but it's been true on my servers too.

> If this is a service, why are you paying for RAM that never gets used? Get a smaller instance.

Not all providers are so flexible. I may want a server with more CPU & traffic and some additional disk space. Going there gets me more RAM too. It turns out these services are constrained by the real hardware. If I'm getting a box with enough CPU for my needs, well they are not going to a shop to buy that special box for me, they use what they have and what they have comes with plenty of RAM too.


If you, however, want to get back to work as quickly as possible if one of your calculations erroneously uses too much memory, having swap will prolong the system-is-frozen period before the OOM killer solves the problem.


> the OOM killer "solves" the problem.

I fixed this for you. At least there are still some OSes that don't consider this a "solution."


In defence against swap on my personal computer:

-PCs have a lot of RAM now

-When you allocate that much memory it's usually a bug in your own code like a size_t that overflowed. I never saw programs I would actually want to use try to allocate that much

-When using swap instead of ram, everything becomes so slow that you're screwed anyway. The UI doesn't even respond fast enough to kill whatever tries to use all that memory.

-How common is a situation where you need more memory than your ram size yet less than ram+swap size in a useful way? Usually if something needs a lot, it's really a lot (and as mentioned above not desirable)

-Added complexity of making extra partition

-Added complexity if you want to use full disk encryption

-I do the opposite of using disk as ram: I put /tmp in a ramdisk of a few gigs

-Disks are slow and fast SSDs are expensive, so you wouldn't want to sacrifice their space (maybe if this changes some day...)


Regarding the complexity of making an extra partition: I completely agree, having a separate partition for swap is annoying. Luckily, Linux supports swap files; on all my systems (where I had to manually set up swap), I just create a file called /swapfile.

I imagine this would solve the full disk encryption complexity too.


The reasons I couldn't live happily without swap on my development machine:

- MacBook Pros don't have a lot of RAM. Still. (Let's not get into that whole discussion, as valid as it may be; that is far from the biggest issue I have with current MacBooks...)

- MacBook Pros have no user facing complexity to use swap or to have it encrypted. Heck, they even give you a middle ground as well by default of compressed RAM.

- I have to run a lot of services to run our development stack. Most of the time I'm only using a few but don't want to have to go manually start and stop services all the time. Also, even within a service typically access to most of the memory isn't required for every request. Swap handles this quite well.

- There are parts of our development stack that, simply put, are extremely bloated in terms of memory use. Four gig webpack process, I'm looking at you. Yup, that is ridiculous and should be fixed and maybe there is a fix out there we haven't figured out ... but I don't care, isn't my problem, and I don't have to fight it because it seldom accesses most of that bloated memory so it lives great in swap.

- I like being able to switch to working on something else without having to shut down and restart all the applications I'm using as required for the new task. Swap is a great fit for that. For example, if I have to switch from developing code to analyzing a large Java heap dump, which requires lots of memory, I don't have to go shut everything down, it just quickly gets paged out to swap and comes back when I need it. I don't care if it takes an extra 30 seconds for switch between these two tasks, it already takes much longer than that to load the heap dump regardless.

I think people often give swap a bad name because they never see what it does well for them unless they go looking for it; they only notice it when they are asking their machine to do something whose working set is actually too big for memory, and then they blame swap.

That said, no need for swap if you can live fine without it. I just don't think that is good general advice.


> I never saw programs I would actually want to use try to allocate that much

What about applications that have memory-bound performance characteristics? In these cases, saving a bit of memory often directly translates into throughput, which translates into $$$.

This isn't theoretical: a bunch of services which I've run in the past, and run currently, literally make more money because of swap. By using memory more efficiently and monitoring memory pressure metrics instead of just "freeness" (which is not really measurable anyway), we allow more efficient use of the machine overall.
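For anyone wondering what "memory pressure metrics" look like in practice: on kernels with PSI support (4.20+), they can be read straight out of /proc; a minimal sketch:

    cat /proc/pressure/memory
    # prints two lines of the form:
    #   some avg10=... avg60=... avg300=... total=...
    #   full avg10=... avg60=... avg300=... total=...
    # "some": share of time at least one task was stalled waiting on memory
    # "full": share of time all non-idle tasks were stalled; total is cumulative stall time in microseconds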


Exactly! Also, the person who goes from 32G of RAM to 256G of RAM is going to run without swap.


I have 4GiB swapfiles on the cluster nodes I manage, which have 512GiB RAM.

It's hardly used, around 300MiB at present, probably by things like the mail daemon. It's been useful to have a very slow node, which I can SSH into (after 10 minutes) and kill a chosen process, rather than a dead/OOMed node. But I think the difference is marginal, and perhaps 512MiB would have been a more appropriate size for the swap files.

(Swappiness is set to 1.)
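For anyone wanting to reproduce this, swappiness is just a sysctl; the config file name below is illustrative:

    sysctl vm.swappiness=1                                     # as root, takes effect immediately
    echo 'vm.swappiness=1' > /etc/sysctl.d/99-swappiness.conf  # persists across reboots
    cat /proc/sys/vm/swappiness                                # verify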


We run without swap, and haven't noticed an issue. Did you see any practical benefit to running swap vs not?


Other than a couple of occasions where things have run slowly, giving me time to kill the process I choose, I've not noticed any benefit.

Picking a random machine, there is 600MB of swap used. "top" shows where about 50MB of it is going (Hadoop daemons, systemd bits), but I don't know what the rest is. I guess it could backfire, since logind is swapped out, and I might want to log in on the serial console if the machine is very busy.
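If you do want to see what the rest is, per-process swap usage is exposed as the VmSwap field in /proc/<pid>/status; a rough one-liner (run as root to see every process):

    # prints swapped-out kB and process name, largest first
    awk '/^Name:/ {name=$2} /^VmSwap:/ {print $2, name}' /proc/[0-9]*/status 2>/dev/null | sort -rn | head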


The system should be tuned so that, under excessive pressure from requests, it starts to turn requests away before running out of RAM. Having a small amount of swap lets you get closer to the limit of RAM use without risking the OOM killer taking something out and leaving the system in an undefined state. Also, you can swap out a fair bit of the system that you don't use and leave it there, giving you more RAM to use for processing requests.


All of this boils down to "because buying more disk is cheaper than buying more RAM" and "avoid memory contention."

The author discusses the situation as if the quantity of RAM is fixed and swap can be added (or not). But that isn't the only possibility — you can also add more RAM (it's just expensive). For the same number of GB of RAM+swap vs just RAM, there is no reason to prefer the option with swap.


Sure, but why wouldn't you run some swap with your much bigger ram anyway?

In the end the core idea is: sometimes you have anonymous memory that is accessed so rarely that you'd rather have an extra disk cache page. If you assume that the kernel is not paging out memory that you actually use when not under pressure, swap doesn't hurt you.


> Sure, but why wouldn't you run some swap with your much bigger ram anyway?

If you don't need it, you don't need it. The other question is: how much swap exactly should I have? And why wouldn't I just add that much RAM instead?

> In the end the core idea is: sometimes you have anonymous memory that is accessed so rarely that you'd rather have an extra disk cache page.

That's the theory. In practice I always have more than enough RAM for all the cached pages the system wants to cache. On my laptop right now (booted today), I have 500MB of cached pages and 5.8 gigabytes of free memory. On my server (booted 499 days ago) I have 700MB of cached pages and 6 gigabytes of free memory.
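For anyone wanting to check the same numbers on their own box, `free -h` is the quickest way; a minimal sketch:

    free -h
    # the "buff/cache" column is the page cache figure quoted above;
    # "available" is the kernel's estimate of how much memory can be used without swapping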

If I were running out of memory [be it for cache or applications], I'd prefer to have more RAM than add swap. Yes, I keep calling it emergency memory.

> If you assume that the kernel is not paging out memory that you actually use when not under pressure, swap doesn't hurt you.

1) Bad assumption 2) it doesn't help you either, so why bother? Actually I might have a use for that disk space. In that case the swap just hurts.


>If you don't need it, you don't need it. The other question is: how much swap exactly should I have? And why wouldn't I just add that much RAM instead?

Because adding RAM costs money; adding swap space is just a config setting.

It looks to me like your boxes are idle. Sure, if you're not doing any work, it doesn't matter...

>1) Bad assumption 2) it doesn't help you either, so why bother? Actually I might have a use for that disk space. In that case the swap just hurts.

1) Is it? Can you source that claim somehow? 2) It does help you, that's the point: it makes your disk reads faster. 3) Your answer to needing memory is 'buy more RAM', but you 'may have a use for that disk space'? Buy more disk.


> 1) Is it? Can you source that claim somehow?

Only my anecdote. I've had systems become annoyingly sluggish because the OS decided I no longer needed something and paged it out, even though I had plenty of RAM. Turns out I needed that something.

> 2) It does help you, that's the point. It makes your disk reads faster

I just gave my numbers. The systems are not caching nearly as much as I have ram. These numbers come from systems I use every day; they are not idle.

> 3) Your answer to needing memory is 'buy more ram', but you 'may have a use for that disk space'? Buy more disk.

Why are people so hell-bent on telling me I should use swap that doesn't actually help me at all? Yes, I buy as much disk as I need, and I'm not putting unnecessary swap files or partitions on it. Yes, I also buy as much RAM as I need.

The only real justification I see for swap here is that it's cheaper -- poor man's RAM. I call that emergency memory for when you can't have enough RAM. If I have enough memory, swap is completely pointless.


> you can also add more RAM (it's just expensive)

Or just impossible

Many laptops still have remarkably low maximum-RAM limits. The ones I have here (Dell SME & corporate types) are 4GB and 8GB. I live in constant fear of the solid-blue disk-access lamp.

And when I boot up after a swap-thrash, I am scolded for an unclean shutdown :(


You have not given an actual example of "impossible." A computer that can hold more RAM than 4-8 GB is possible to obtain, at some cost. You may object to the cost ("expensive"), but it is not impossible.


Well, considering that current Intel laptop chipsets support only 16GB of RAM, this is pretty much a hard limit. Yes, you can switch to desktop hardware, but that usually means heavier devices with shorter battery life. So there are some very practical limits on increasing the RAM.


Agreed, in most situations.

One counter example:

If your processes, in sum, tend to A) access many disk locations spanning a large total amount of disk space, and B) hold a lot of underused data in RAM.

This isn't totally impossible. Maybe an Ethereum node with a script doing a bunch of data reads, running side by side with hundreds of Chrome tabs, few of which are regularly accessed. (Totally hypothetical, of course...)

Swapping some rarely used RAM out so the OS can buffer disk into RAM seems like a reasonable approach (although maybe even more reasonable is: close some tabs).

Your point still stands that more RAM gives strictly equal or better performance in this scenario, but you might be able to get an equivalent performance boost much more cheaply with some swap space for the underused RAM. Also, upgrading past 32GB of RAM starts to veer from expensive to impossible on a laptop.


The author assumes that you actually have some disk space to spare, which is not always the case.

I wonder whether, instead of using 4GB of RAM, setting up 3GB of RAM + 1GB of swap space on a RAM disk would result in much wiser OOM killer decisions (or better stability).

Showing the superiority of such a configuration would probably convince all the swap sceptics.


In that case, you should use zram or zswap; that's what Chromebooks do to avoid swapping to their limited-performance flash storage, despite having only 2GB or 4GB of RAM.
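For the curious, a manual zram setup is only a few commands; a minimal sketch with an illustrative 1G size (Chrome OS and some distros ship their own automation for this):

    modprobe zram                        # creates /dev/zram0
    echo 1G > /sys/block/zram0/disksize  # compressed, RAM-backed block device
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0             # higher priority than any disk-backed swap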


My use case for swap: 10 dev environments on a cheap-ass AWS server, where each environment is about 5 Docker images.

They are slower, but to give every branch a fully usable test system is pretty awesome. No reason to pay through the butt for RAM for tier-1 dev environments. You can also have a premium dev environment for the develop branch on a different server.



