
In defence of swap: common misconceptions - c4urself
https://chrisdown.name/2018/01/02/in-defence-of-swap.html
======
cosarara97
In my experience, a misbehaving linux system that's out of RAM and has swap to
spare will be unusably slow. The process of switching to a tty, logging in,
and killing whatever the offending process is can easily take a good 15
minutes. Xorg will just freeze. Oh, and hopefully you know what process it is,
else good luck running `top`.

Until this is fixed, I'll just keep running my systems with very small amounts
of swap (say, 512MB in a system with 16GB of RAM). I'd rather the OOM killer
kick in than have to REISUB or hold down the power button.

Some benchmarks with regards to the performance claims would be nice.

~~~
kzrdude
What function does the 0.5GB swap have?

~~~
petecox
I just wish Linux distro installers would make opting out the default option;
no, I don't want to swap on my SSD. The last time I installed a distro, I
still had to select the manual option for partitioning.

With an 8 Gig stick in my NUC, for normal desktop usage it never goes above 3.

~~~
Karunamon
And even then; they provision an absurd amount of it. I just did a fresh
Ubuntu install. On a machine with 32 gigs ram, it creates a 32 gig swap
partition by default!

~~~
KozmoNau7
They probably reason that you want your system to be able to hibernate, plus
storage is cheap.

I have 8GB of swap for the 8GB in my laptop, for that reason.

On my desktop with 16GB, the 2GB of swap it has is sometimes too little, and
everything grinds to a halt.

~~~
vinw
32GB of SSD isn't that cheap!

------
quotemstr
Very few people on this thread read and understood the article. The point
isn't working with data sets larger than RAM. The point is making better use
of the RAM you do have by taking pages you'll almost never touch and spilling
them to disk so that there's more room in RAM for pages you _will_ touch.

Banning swap is like making self-storage companies illegal and forcing
everyone to hold all possessions in their homes. Sure, you'd be able to get to
grandma's half broken kitschy dog coaster that you can't bring yourself to
throw away, but it would also be harder to fit and find your own stuff,
the stuff you need all the time.

If you find yourself driving to and from the self storage place every day, you
probably need a bigger home. But self storage is plenty useful even if you
almost never visit it.

~~~
anowlcalledjosh
The issue is that the current OOM killer doesn't support this usage at all.

To extend the analogy: what do you do if grandma comes and fills your house
with stuff? You need space to work, so you go and drop it off at the self
storage place, but what if she just keeps filling your house up?

The OOM killer will do absolutely nothing until both your house and the whole
self storage place are totally full. By that point, you've spent a huge amount
of time just driving to and from self storage, so you haven't had time to do
any actual work; it would probably have been better to tell grandma that you
don't want any more stuff once she filled up your house for the first time.

~~~
quotemstr
Well, it doesn't help that when grandma calls and asks whether you have room
for more stuff, the Linux kernel responds on your behalf, "Yes, of course I
have room. I live in a TARDIS." And then you do all the driving to the self-
storage facility to maintain the illusion as long as you can. I really don't
like overcommit.

Anyway, I agree with you that this behavior is annoying, but I think it ought
to be possible to fix it (e.g., with memory cgroups or something like
Android's lmkd) without giving up on the idea of spilling infrequently-
accessed private dirty pages to disk.

~~~
MaxBarraclough
The analogy is now getting in the way, rather than helping to clarify.

------
wahB4vai
I'm a big fan of determinism and service uniformity. Having that rarely-used
but response-time-critical function/data/whatever swapped out increases
service time variation and, at the least, complicates all worst-case response
time calculations.

I understand that, coming from the land of JIT compilers, garbage collectors,
and oversubscribed everything, this is not much of a concern, as these
properties have already been traded away.

The swap may be the best case in a bad situation. I would argue along the
lines of don't be in a bad situation...

I'm looking at you 8 of 16 GB used on cold boot Mac laptops... Looking at you
with indignation and rancor Chrome.

~~~
caf
One of the article's points is that running without swap doesn't necessarily
alleviate that. The rarely-used code pages of your rarely-used but response
time critical daemon can just as easily be dropped from the page cache and
have to be refaulted in from disk, and in fact that's _more_ likely if there
isn't swap available to stow the dirty anonymous pages from the cron daemon
that wakes up once a day or whatever.

The solution for your rarely-used but response time critical daemon is for it
to mlock() its critical data and code pages into memory, which works
regardless of whether or not you have swap available. (Or, alternatively, use
one of the cgroup controllers that the article alludes to, to give the
critical daemon and related processes memory unaffected by memory pressure
elsewhere in the system).
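The cgroup approach caf alludes to can be sketched with systemd's cgroup v2
properties; this is a config fragment, and "critical.service" is a made-up
unit name, not anything from the article:

```shell
# Reserve memory for a latency-critical service so its pages are
# reclaimed last under system-wide memory pressure (cgroup v2 memory.low).
systemctl set-property critical.service MemoryLow=512M

# Inspect the resulting cgroup setting:
cat /sys/fs/cgroup/system.slice/critical.service/memory.low
```

Unlike mlock(), this protects the whole unit (code, data, and page cache)
without the application having to cooperate.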

~~~
AstralStorm
No, they cannot. Code that is loaded into memory stays in memory when there is
no swap. (mlock is to prevent swap-out or compression - major faults.) The
exception would be lazy library loading. BIND_NOW is your friend in this case.

Essentially having no swap is similar to having everything mlocked - no major
faults can happen except with mmapped files which will just use direct disk
IO.

If you mean disk caches, when have you seen a multigigabyte executable?

~~~
caf
_Code that is loaded into memory stays in memory when there is no swap._

That is not true. In the normal case (absent debugging, JIT, self-modifying
code etc), pages of executable code are clean, shared mappings so they do not
interact with swap at all.

As clean, shared mappings they are eligible to be dropped from the page cache
in the same way as other clean file mapped pages.

(Your executable code pages essentially _are_ mmapped files. )

------
perlgeek
> Under temporary spikes in memory usage > With swap: We’re more resilient to
> temporary spikes, but in cases of severe memory starvation, the period from
> memory thrashing beginning to the OOM killer may be prolonged. We have more
> visibility into the instigators of memory pressure and can act on them more
> reasonably, and can perform a controlled intervention.

Somehow that doesn't resonate with my experience. I tend to remember the cases
where I can't even SSH into the box, because the fork in sshd takes minutes,
as does spawning the login shell.

I'd really like some way to have swap, but still loosen the OOM killer on the
biggest memory hog when the system slows down to a crawl. I haven't found that
magic configuration yet.

~~~
JdeBP
Well the article does suggest a mechanism to apply.

As for the problem with SSH and login: You might well find that it is _not_
the fork that is the problem. You might well be surprised at how much chaff is
run by a login shell, or even by non-login shells.

A case in point: I recently reduced the load on a server system that involved
lots of SCP activity by noticing that, thanks to RedHat bug #810161, _every
SCP session even though it was a non-login non-interactive shell_ was first
running a program to enumerate the PCI bus, to fix a problem with a Cirrus
graphics adapter card that the machine did not have on a desktop environment
that the machine did not have. This was driven by /etc/bashrc sourcing
/etc/profile.d/* .

* [https://github.com/FedoraKDE/kde-settings/blob/F-26/etc/prof...](https://github.com/FedoraKDE/kde-settings/blob/F-26/etc/profile.d/qt-graphicssystem.sh)

------
avar
Even for those who understand this well, it's historically been really hard to
coerce the Linux kernel into applying the right swap policies to your
application.

As the author notes, much of this has been improved by cgroups, and there have
always been big hammers like mlock(), but even with those it can be hard to
prevent memory thrashing in extreme cases. I've seen people who understood how
swap worked disable it completely as a last resort because of that.

It's always seemed to me that this was mainly a problem of the kernel
configuration being too opaque. Why can't you configure on a system-wide basis
that you can use swap e.g. only for anonymous pages and for nothing else?

Similarly it would be nice to have a facility for the OOMkiller to call out to
userspace (it would need dedicated reserved memory for this) to ask a
configured user program "what should I kill?". You might also want to do that
when you have 10G left, not 0 bytes.
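Some of that configurability does exist today, though it is scattered; a
sketch of the current knobs (the cgroup name "mygroup" is hypothetical):

```shell
# System-wide: vm.swappiness biases reclaim between anonymous pages
# and the page cache.
cat /proc/sys/vm/swappiness

# Per-workload with cgroup v2, swap can be denied to one group entirely
# ("mygroup" is a hypothetical cgroup you created):
#   echo 0 > /sys/fs/cgroup/mygroup/memory.swap.max
```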

~~~
Hello71
Android has this last bit. As explained at
[https://www.youtube.com/watch?v=ikZ8_mRotT4&t=2145](https://www.youtube.com/watch?v=ikZ8_mRotT4&t=2145)
(linked in the article), though, Linux does not presently have the facilities
to determine when you have "10G left" in a way that applies across all system
configurations.

------
slaymaker1907
I recently reenabled swap on my Windows machine due to frequent OOM, even with
16GB of RAM while playing Overwatch and browsing on Firefox. It seems like
both of these programs allocate vast swaths of memory but then do not actually
use that memory very heavily. After I turned swap back on, I did not notice
any degradation in performance but my system stability skyrocketed.

~~~
wilun
Windows has vastly different policies for RAM allocation and commit than
Linux. Windows basically does not overcommit, while Linux not only does so by
default but quite depends on it for various workloads to function properly. As
a consequence, userspace tends to handle RAM differently, but there is no
magic: if programs allocate twice the amount of RAM and then only use half of
it, Windows with a large enough swap will work perfectly, while without swap
the allocation will fail. Under Linux, the situation is less clear: without
swap the allocation will succeed (well, it depends on the fine details of the
overcommit settings in effect, but you get the idea), but if you _really_ then
use all that RAM, the OOM killer will start to more or less "randomly" kill
"any" process to cope with the lack of RAM (as a last-resort measure, though;
caches and buffers are flushed first, etc.)

~~~
Asooka
Windows also has two syscalls - one to reserve memory and one to commit
memory. You can reserve+commit at once, but you can also just reserve a chunk
of virtual memory that you commit at a later time. Accessing pages that aren't
committed is a segfault. So you can say "I might need up to X contiguous bytes
of virtual memory" and then commit as you go. IIRC, Windows will let you over-
reserve, but not over-commit.

Edit: sorry, not two syscalls; it's an option to the malloc-equivalent,
VirtualAlloc.

------
black_puppydog
Without wanting to impose my method or reasoning here, I run my dev machine
without swap, and I'd rather have the same for the cluster machines I access.

This is for academic use only. I know how much RAM my machine has, and if I
oom, it usually isn't because I tried to squeeze in just a tiny bit too much
data, but rather because I made some stupid mistake and keep allocating small
chunks of memory very rapidly. On a system with even a moderate amount of
swap, this makes everything grind to a halt, and it is usually much faster to
just reboot the machine and deal with the problems later _in the unlikely
event that rebooting actually causes problems_.

~~~
viraptor
If you're running a single-purpose (or close to it) machine, then setting an
explicit memory usage limit on the main app could give you even better/faster
results.
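One blunt, swap-agnostic way to do that is an address-space rlimit, so runaway
allocation fails fast instead of dragging the whole box into thrashing (the
512 MiB figure and "your-app" are illustrative):

```shell
# Cap a child shell's virtual address space at 512 MiB; allocation
# beyond that fails with ENOMEM instead of eating into RAM+swap.
# ulimit -v takes kilobytes, so this prints the cap: 524288.
( ulimit -v $((512 * 1024)); ulimit -v )
# In practice you'd exec the real program under the limit, e.g.:
# ( ulimit -v $((512 * 1024)); exec your-app )
```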

------
dboreham
We've disabled swap (by not configuring a swap partition) on every server
we've deployed since 2009. It's a little irritating to have to manually remove
the swap partition from various Linux' "server" default install options even
today. Of course this means I'm still installing on bare metal that I own,
so...dinosaur.

------
leoc
I'd be more convinced by the argument that swap shouldn't be thought of as
slow RAM if the author addressed the fact that it's generally known as
'virtual memory'—and it has been since at least System 370, so it's not simply
a later misconception:
[http://pages.cs.wisc.edu/~stjones/proj/vm_reading/ibmrd2505M...](http://pages.cs.wisc.edu/~stjones/proj/vm_reading/ibmrd2505M.pdf)
. Instead the article just omits the term 'virtual memory' completely, and
pretty conspicuously.

I also think that a convincing case for swap would have to discuss the
concepts of latency, interactivity, and (soft) real-time performance, things
that largely weren't to the fore in the salad days of the 370 family or the
VAX. Virtual memory is the TCP of local storage.

~~~
JdeBP
That is not the argument.

The article actually says, four times over, that it should not be thought of
as _emergency memory_. It's not emergency memory; it's ordinary memory that
should see use as part of an everyday memory hierarchy.

And if you _are_ going to question the terminology, the elephant in the room
that you have missed is calling paging swapping. (-:

~~~
leoc
Despite the repeated use of the word 'emergency' this is not quite obvious.
For example in

> many people just see it as a kind of “slow extra memory” for use in
> emergencies

the scare quotes are around 'slow extra memory' not 'emergencies'. Now granted
in the last bullet point of the conclusion it affirms that VM is a source of
slow memory, but earlier it uses 'memory' where it's referring specifically to
RAM, for example

> Without swap: Anonymous pages are locked into memory as they have nowhere to
> go.

Really the main reason my original comment was rubbish is that I took the
article far too much as a general discussion of swap when, as it said, it's
largely about how much swap to enable on a given Linux system running some
already-determined software.

------
nicklaf
I have an older Chromebook (c720), which is really quite memory starved (2GB
RAM), and have experienced ChromeOS completely frying the SSD simply through
prolonged tab-heavy swapping.

Now, I've replaced the SSD and installed a non-Google Linux distro, and would
like to limit the amount of swapping Firefox can do.

I had been planning to simply use cgroups' memory features to limit the amount
of memory consumed by Firefox processes, but if I am to understand the article
(which I admit I didn't read in full detail), I should also be able to tune
swapping to limit the actual amount of swapping that takes place, avoiding a
drastic uptick in SSD wear whenever I open too many tabs.

That, and perhaps a Firefox extension that suspends background tabs in memory
(which I've used before with a certain amount of effectiveness in the pre-
WebExtension days).
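The cgroup idea can be sketched with systemd's transient units (a config
fragment; the 1500M figure is illustrative, not a recommendation):

```shell
# Launch Firefox in its own transient cgroup with a hard memory
# ceiling and swap denied to it entirely (cgroup v2):
systemd-run --user --scope -p MemoryMax=1500M -p MemorySwapMax=0 firefox
```

MemorySwapMax=0 in particular addresses the SSD-wear concern: the browser can
be killed or reclaimed, but never swapped.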

~~~
tombrossman
Firefox has a couple options which may help you get by. I can't guarantee
these will fix everything but they are worth experimenting with.

about:memory has various options, including a 'minimize memory usage' button
and profiling tools.

about:preferences has Privacy & Security > Cached Web Content > Override
automatic cache management (select and set at 500MB, 1GB, or whatever works
best).

~~~
javitury
Yep, you can adjust the max amount of cache it will store in ram and
disk(separately).

~~~
vanderZwan
I don't see any option to separate that in the settings page, is it hidden
behind a flag somewhere?

~~~
javitury
In about:config search for browser.cache.memory.capacity and
browser.cache.disk.capacity. They are in kilobytes. To enable or disable any of
those caches use browser.cache.disk.enable and browser.cache.memory.enable.

~~~
vanderZwan
Thanks!

------
CoconutPilot
Swap was a great idea, but its time is gone. Swap doesn't make sense anymore,
hard drives have not scaled and kept up with the improvements in RAM.

In the Pentium 1 era EDO RAM maxed out at 256MB/s and hard disk xfer was
10MB/s. Common RAM size was 16MB.

In today's era DDR4 maxes out at 32GB/s and hard disk xfer is 500 MB/s. Common
RAM size is 16GB.

RAM xfer rate has grown 128x. RAM capacity has grown 1000x. Disk xfer rate
has grown 50x.

Swap is no longer a useful tool.

~~~
Dylan16807
The purpose of swap is not running things out of it. The purpose is shoveling
unused data out of memory. And for that it doesn't need to be particularly
fast.

Swap has a lot less purpose in a world without memory leaks and extraneous
functions. But in practice it's quite good at getting several gigabytes of
unnecessary data out of the way, so ram can be used properly.

Swap, well-used, should only take up a few percent of the drive's bandwidth.

~~~
tmyklebu
> The purpose of swap is not running things out of it. The purpose is
> shoveling unused data out of memory. And for that it doesn't need to be
> particularly fast.

You can't detect a priori whether data is "unused." If you guess wrong a few
times in a row, you get the familiar pattern where your Linux box is
unresponsive to everything and needs to be bounced.

If you could detect whether data is rarely used, swap still isn't necessary.
Applications can mmap() a file and use that region for "rarely used data" if
such is known in advance.

Extraneous functions should be backed by the executable in the common case. In
the JIT case, they probably won't be JITted anyway.

I still think the OOM killer is less intrusive than swapping to disk. It kills
some, but not all, of the processes on the machine. The system
pretty reliably comes back to life in less time than it takes a human to
diagnose the problem and bounce the system. As a bonus, no human needs to get
involved.

~~~
Dylan16807
> Applications can mmap() a file and use that region for "rarely used data" if
> such is known in advance.

They could, but that's a lot like just making swap be manual.

> Extraneous functions should be backed by the executable in the common case.

I don't mean the code itself, I mean all the data it builds up for something
that isn't needed.

~~~
tmyklebu
> They could, but that's a lot like just making swap be manual.

Sure. Isn't that a good thing? Rarely-used data can be swapped out to storage
allocated for the purpose, as you desire, and too-large working sets don't
have to hose the machine.

> I don't mean the code itself, I mean all the data it builds up for something
> that isn't needed.

I guess I don't often run programs that waste large amounts of memory for no
reason. Either my working set fits or it's too large, and swap only matters if
it's too large.

Is Java the typical beneficiary here?

------
ohazi
Does hibernating via a swap file work reasonably well yet? I haven't had a
chance to try this out yet, but that's the main reason I still have a swap
partition on my laptop.

~~~
gerdesj
Well done mate - you are the first person to mention this here. It was also
only briefly mentioned in the article.

Yes, hibernation does work well and it requires swap. Personally, I set a swap
partition equal to RAM + 512MB on systems that I want to hibernate on.

Linux also supports swap files and this might be handy:
[https://wiki.debian.org/Hibernation/Hibernate_Without_Swap_P...](https://wiki.debian.org/Hibernation/Hibernate_Without_Swap_Partition)
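The swap-file route sketched on that wiki page boils down to telling the
kernel where the file physically lives (device names and offsets below are
illustrative; check your distro's documentation before relying on this):

```shell
# Hibernating to a swap *file* rather than a partition:
swapon /swapfile
filefrag -v /swapfile | head   # note the first "physical_offset" value
# Then boot with kernel parameters along the lines of:
#   resume=<device holding /swapfile> resume_offset=<that physical offset>
```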

------
bhouston
On Windows I have found it necessary to disable swap to keep myself efficient.
Many times I've had applications decide to allocate massive amounts of memory,
leading to my system slowing down with tons of swap activity. In nearly all
cases, I didn't want my system to try its best to handle these massive memory
requests; it should have just killed the offending application. Often in these
failure scenarios the swap goes nuts and my computer becomes so unresponsive
that it takes a long time to even kill the bad actor.

Thus I disabled swap and I never had these unresponsive issues. I run with
32GB of ram so generally well behaved applications never run into memory
issues.

Some applications that would cause issues: too many VirtualBox instances using
more than the available memory, or a text editor that chokes trying to open a
>1GB text file (looking at you, new JS-based editors).

~~~
maxxxxx
Windows is terrible at swapping. As soon as you hit max RAM performance
suffers a lot. Even if you never use inactive apps.

~~~
mnx
Yep. When I was using win7 with 2gb of ram, I was basically managing it
manually - as soon as memory usage hit 90-95%,I was shutting apps down. If I
let it swap, it would often hang completely.

------
whopa
This is an informative and well written article, but seems incomplete in this
day and age. In public cloud environments, network attached storage is far
more prevalent, so the swap story may be different there (I honestly don't
know though). Since the author works at Facebook, he probably lacks experience
in this regard.

~~~
Anderkent
Every cloud provider I've worked with (okay, so AWS :P) gives you ephemeral
local storage. Obviously you don't swap onto a network drive.

~~~
whopa
Modern AWS instance types are EBS-only.

~~~
eikenberry
With the exception of the High IO types (I2/I3). They still get it and the
newer instances get NVMe SSDs. In other words they are making it a feature of
certain types that would benefit from it.

~~~
_msw_
For example, F1 instances have NVMe local instance storage.

[https://aws.amazon.com/ec2/instance-types/f1/](https://aws.amazon.com/ec2/instance-types/f1/)

------
javitury
For many years now I've used RAM compression instead of swap on
desktops/laptops. I particularly like zram, but zswap is also great if you are
hitting hardware limits.

The difference from swap is that the computer doesn't become unresponsive, it
just slows down a bit. And RAM compression still buys some time before the OOM
killer hits.
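For reference, a minimal zram setup looks roughly like this (the sysfs paths
are the kernel's zram interface; the 4G figure and priority are illustrative):

```shell
# Create a compressed-RAM swap device and prefer it over any disk swap.
modprobe zram
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0   # higher priority = used before disk swap
```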

------
ibiza
Count me among the believers in running w/ swap. Here's all it takes to
provide Linux with a little swap space:

    
    
      fallocate -l 8G /swapfile
      chmod 0600 /swapfile
      mkswap /swapfile
      swapon /swapfile
    

Add an entry in /etc/fstab & you're done. "This little trick" made all the
difference on a compute cluster I managed, where each node contained 96G of
RAM. It's much more pleasant to monitor swap usage than the OOMKiller and
related kernel parameters.
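For completeness, the /etc/fstab entry referred to above typically looks like
this (a config fragment; field conventions vary slightly by distro):

```shell
# /etc/fstab line making the swap file permanent across reboots:
/swapfile none swap sw 0 0
```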

------
kartickv
How long before desktop OSs manage memory like mobile ones, automatically
shutting down background apps that aren't being used, so that the system
remains responsive no matter what?

Or, worst case, if two tasks that need 8GB each are running on a machine with
8GB memory, kill one, let the other finish, and restart the first one. Or,
less ambitious, freeze one, swap it out to disk, let the other finish, and
only then resume the frozen app.

Desktop OSs are so primitive at memory management, forcing the complexity onto
the user.
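The "freeze one, let the other finish, resume it" part can already be
approximated by hand with plain job-control signals; in this sketch a
background sleep stands in for the memory-hungry app:

```shell
# Freeze a process so its pages become prime swap-out candidates,
# then resume it later.
sleep 300 &
pid=$!
kill -STOP "$pid"                # freeze it
grep State /proc/"$pid"/status   # shows "State: T (stopped)"
kill -CONT "$pid"                # resume once the other task is done
kill "$pid"
```

What desktop OSs lack is the policy layer that mobile OSs have for doing this
automatically, plus the state-saving contract with applications.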

~~~
vardump
> How long before desktop OSs manage memory like mobile ones, automatically
> shutting down background apps

Once desktop systems and applications support required APIs to handle saving
state before being shut down.

~~~
cdown
> Once desktop systems and applications support required APIs to handle saving
> state before being shut down.

SIGTERM and friends? :-)

If your application is just dropping state on the floor as a result of having
an intentionally trappable signal being sent to it or its children, that seems
like a bug.

------
viraptor
I feel like the real metric I'd like to see from swap usage is: how much time
did I spend waiting to swap in a page, how many extra cache pages it gave me,
and what the cache hit ratio is. If the big purpose is to allow those extra
few pages to be available, then it's either worth doing or not - there should
be an objective way to look at this. Unfortunately only the second and third
parts are easily available. The first... maybe via systemtap?

~~~
cdown
> how much time did I spend waiting to swap-in a page

You can do this with eBPF/BCC by using funclatency
([https://github.com/iovisor/bcc/blob/master/tools/funclatency...](https://github.com/iovisor/bcc/blob/master/tools/funclatency.py))
to trace swap related kernel calls. It depends on exactly what you want, but
take a look at mm/swap.c and you'll probably find a function which results in
the semantics you want.
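For example, with the BCC tools installed, something along these lines should
work (the packaged tool name varies by distro; do_swap_page is the kernel
function that services major faults, including swap-ins):

```shell
# Histogram of time spent servicing page faults that hit swap,
# in microseconds, refreshed every 5 seconds:
sudo funclatency-bpfcc -u -i 5 do_swap_page
```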

------
kstenerud
So, in other words, if you have enough memory for your workload that you won't
run out, there's no benefit to having swap space (i.e. you've wasted money on
memory you don't need).

But if you DO have swap space, there won't be a performance hit (at least not
under Linux) because it will only swap out some rarely used pages and then sit
there doing nothing.

So, in the general case, it's better to have it and not need it than need it
and not have it.

~~~
Anderkent
> So, in other words, if you have enough memory for your workload that you
> won't run out, there's no benefit to having swap space (i.e. you've wasted
> money on memory you don't need).

No, that's the opposite. If you have enough memory for your workload that you
won't run out, swapping lets you use more memory for disk cache (instead of
keeping unaccessed anonymous pages in real ram).

Unless by "won't run out" you mean "never have to throw away a disk cache
page", which seems very unrealistic.

~~~
clarry
> If you have enough memory for your workload that you won't run out, swapping
> lets you use more memory for disk cache

Except that it doesn't happen in practice, on my systems anyway. If you have
plenty of memory, you can keep all your programs in it _and_ as much as the
system wants to cache and still not run out.

The theory says that swap effectively buys you some memory to spend on more
important things (than what the system chooses to page out). So does buying
more memory.

> Unless by "won't run out" you mean "never have to throw away a disk cache
> page", which seems very unrealistic.

I have an instance of top running on my desktops & laptops all the time. I
never see cache using up all of the memory.

    
    
                      total        used        free      shared  buff/cache   available
        Mem:           7848        1523        5827          53         497        6033

~~~
Anderkent
Are you talking desktop workflows here? If this is a service, why are you
paying for RAM that never gets used? Get a smaller instance.

~~~
clarry
> Are you talking desktop workflows here?

Yes [I explicitly spelled out desktops & laptops] but it's been true on my
servers too.

> If this is a service, why are you paying for RAM that never gets used? Get a
> smaller instance.

Not all providers are so flexible. I may want a server with more CPU & traffic
and some additional disk space. Going there gets me more RAM too. It turns out
these services are constrained by the real hardware. If I'm getting a box with
enough CPU for my needs, well they are not going to a shop to buy that special
box for me, they use what they have and what they have comes with plenty of
RAM too.

------
Aardwolf
In defence against swap on my personal computer:

-PCs have a lot of RAM now

-When you allocate that much memory it's usually a bug in your own code like a size_t that overflowed. I never saw programs I would actually want to use try to allocate that much

-When using swap instead of ram, everything becomes so slow that you're screwed anyway. The UI doesn't even respond fast enough to kill whatever tries to use all that memory.

-How common is a situation where you need more memory than your ram size yet less than ram+swap size in a useful way? Usually if something needs a lot, it needs _really_ a lot (and as mentioned above that's not desirable)

-Added complexity of making extra partition

-Added complexity if you want to use full disk encryption

-I do the opposite of using disk as ram: I put /tmp in a ramdisk of a few gigs

-Disks are slow and fast SSDs are expensive, so you wouldn't want to sacrifice their space (maybe if this changes some day...)

~~~
IncRnd
Exactly! Also, the person who goes from 32G of RAM to 256G of RAM is going to
run without swap.

~~~
Symbiote
I have 4GiB swapfiles on the cluster nodes I manage, which have 512GiB RAM.

It's hardly used, around 300MiB at present, probably things like the mail
daemon. It's been useful to have a very slow node, which I can SSH into (after
10 minutes) and kill a chosen process, rather than a dead/OOMed node. But I
think the difference is marginal, and perhaps 512MiB would have been a more
appropriate size for the partition.

(Swappiness is set to 1.)

~~~
IncRnd
We run without swap, and haven't noticed an issue. Did you see any practical
benefit to running swap vs not?

~~~
Symbiote
Other than a couple of occasions where things have run slowly, giving me time
to kill the process _I_ choose, I've not noticed any benefit.

Picking a random machine, there is 600MB of swap used. "top" shows where about
50MB is used (Hadoop daemons, systemd bits) but I don't know what the rest is.
I guess it could backfire, since logind is swapped out, and I might want to
log in on the serial console if the machine is very busy.
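A hedged way to answer "what the rest is": the kernel reports per-process swap
usage in /proc, so it can be summed without any special tooling:

```shell
# List the top swap consumers by reading VmSwap from /proc/<pid>/status.
# Output may be empty on a machine with nothing swapped out.
cat /proc/[0-9]*/status 2>/dev/null |
  awk '/^Name:/ {name=$2} /^VmSwap:/ {if ($2+0 > 0) print $2, "kB", name}' |
  sort -rn | head
```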

------
Neil44
The system should be tuned so that under excessive pressure from requests it
starts to turn requests away before running out of RAM. Having a small amount
of swap will let you get closer to the limit of RAM use without risking OOM
killing something and getting the system into an undefined state. Also you can
swap a fair bit of the system you don't use and leave it there, giving you
more RAM to use for processing requests.

------
loeg
All of this boils down to "because buying more disk is cheaper than buying
more RAM" and "avoid memory contention."

The author discusses the situation as if the quantity of RAM is fixed and swap
can be added (or not). But that isn't the only possibility — you can also add
more RAM (it's just expensive). For the same number of GB of RAM+swap vs just
RAM, there is no reason to prefer the option with swap.

~~~
Anderkent
Sure, but why wouldn't you run some swap with your much bigger ram anyway?

In the end the core idea is: sometimes you have anonymous memory that is
accessed so rarely that you'd rather have an extra disk cache page. If you
assume that the kernel is not paging out memory that you actually use when not
under pressure, swap doesn't hurt you.

~~~
clarry
> Sure, but why wouldn't you run some swap with your much bigger ram anyway?

If you don't need it, you don't need it. The other question is: how much swap
exactly should I have? And why wouldn't I just add that much RAM instead?

> In the end the core idea is: sometimes you have anonymous memory that is
> accessed so rarely that you'd rather have an extra disk cache page.

That's the theory. In practice I always have more than enough RAM for all the
cached pages the system wants to cache. On my laptop right now (booted today),
I have 500MB of cached pages and 5.8 gigabytes of free memory. On my server
(booted 499 days ago) I have 700MB of cached pages and 6 gigabytes of free
memory.

If I were running out of memory [be it for cache or applications], I'd prefer
to have more RAM than add swap. Yes, I keep calling it emergency memory.

> If you assume that the kernel is not paging out memory that you actually use
> when not under pressure, swap doesn't hurt you.

1) Bad assumption 2) it doesn't help you either, so why bother? Actually I
might have a use for that disk space. In that case the swap just hurts.

~~~
Anderkent
> If you don't need it, you don't need it. The other question is: how much swap
> exactly should I have? And why wouldn't I just add that much RAM instead?

Because adding RAM costs money, adding swap space is a config setting.

It looks to me that your boxes are idle. Sure, if you're not doing any work,
it doesn't matter...

> 1) Bad assumption 2) it doesn't help you either, so why bother? Actually I
> might have a use for that disk space. In that case the swap just hurts.

1) Is it? Can you source that claim somehow? 2) It does help you, that's the
point. It makes your disk reads faster 3) Your answer to needing memory is
'buy more ram', but you 'may have a use for that disk space'? Buy more disk.

~~~
clarry
> 1) Is it? Can you source that claim somehow?

Only my anecdote. I've had systems become annoyingly sluggish because the OS
decided I no longer needed something and paged it out, even though I had
plenty of RAM. Turns out I needed that something.

> 2) It does help you, that's the point. It makes your disk reads faster

I just gave my numbers. The systems are not caching nearly as much as I have
ram. These numbers come from systems I use every day; they are not idle.

> 3) Your answer to needing memory is 'buy more ram', but you 'may have a use
> for that disk space'? Buy more disk.

Why are people so hell-bent on telling me I should use swap that doesn't
actually help me at all? Yes, I buy as much disk as I need, and I'm not
putting unnecessary swap files or partitions on them. Yes, I also buy as much
RAM as I need.

The only real justification I see for swap here is that it's cheaper -- poor
man's RAM. I call that emergency memory for when you can't have enough RAM. If
I have enough memory, swap is completely pointless.

------
coding123
My use case for swap: 10 dev environments on a cheap-ass AWS server. Where
each environment is about 5 docker images.

They are slower, but to give every branch a fully usable test system is pretty
awesome. No reason to pay through the butt for RAM for tier-1 dev
environments. You can also have a premium dev environment for the develop
branch on a different server.

