
Do we really need swap on modern systems? - omnibrain
https://www.redhat.com/en/about/blog/do-we-really-need-swap-modern-systems
======
mwpmaybe
My personal rules of thumb for Linux systems. YMMV.

* If you need a low-latency server or workstation and all of your processes are killable (i.e. they can be easily/automatically restarted without data loss): disable swap.

* If you need a low-latency server or workstation and some of your processes are not killable (e.g. databases): enable swap and set vm.swappiness to 0.

* SSD-backed desktops and other servers and workstations: enable swap and set vm.swappiness to 1 (for NAND flash longevity).

* Disk-backed desktops and other servers and workstations: accept the system/distro defaults, typically swap enabled with vm.swappiness set to 60. You can and likely should lower vm.swappiness to 10 or so if you have a ton of RAM relative to your workload.

* If your server or workstation has a mix of killable and non-killable processes, use oom_score_adj to protect the non-killable processes.

* Monitor systems for swap (page-out) activity.
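
The knobs above can be set like this (a sketch; the process name is just an example, and you'd persist the sysctl in /etc/sysctl.d/ rather than set it by hand):

```shell
# Lower swappiness for SSD longevity (rule 3 above)
sudo sysctl vm.swappiness=1

# Protect a non-killable process (e.g. a database) from the OOM killer;
# -1000 exempts that PID entirely, positive values make it a preferred victim
echo -1000 | sudo tee /proc/"$(pidof -s postgres)"/oom_score_adj

# Monitor for page-out activity: the "so" column is pages swapped out per second
vmstat 5
```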

~~~
nisa
If you are on the experimental side:

There are also zram (swap kept in memory, lz4/lzo-compressed) and zswap (a
compressed in-memory cache for swap pages before they hit disk; it needs a
real swap device but compresses pages before they get there).

I run zswap on my desktop and on a few servers; it buys you some time before
the OOM killer arrives, and the system stays responsive a bit longer.

zram is a nice idea but quite a beast in practice (at least on MIPS with 32 MB
of RAM): system CPU constantly at 100% when you actually need it, plus other
quirks. Maybe it has gotten better, or I did something wrong.

But if you need an in-memory compressed block device it's pretty great: you
can just format it with ext4 and have an lz4-compressed tmpfs equivalent.
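
That block-device use can be sketched roughly like this (run as root; device name, algorithm, size, and mount point are all examples, sysfs paths per the kernel zram docs):

```shell
# Load zram and configure device 0 as a 2G lz4-compressed block device.
# The compression algorithm must be chosen before setting disksize.
modprobe zram
echo lz4 > /sys/block/zram0/comp_algorithm
echo 2G > /sys/block/zram0/disksize

# Format and mount it like any other block device
mkfs.ext4 -q /dev/zram0
mount /dev/zram0 /mnt/zram
```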

~~~
nerdponx
First I've heard of either. How would I set these up?

~~~
mxvzr
You can set up zram like this. Typically you'll want to make a service for it
since it needs to run on every boot.

    
      # modprobe zram num_devices=1
      # echo 1G > /sys/block/zram0/disksize
      # mkswap -L zram0 /dev/zram0
      # swapon -p 100 /dev/zram0
    

Official documentation here:
[https://www.kernel.org/doc/Documentation/blockdev/zram.txt](https://www.kernel.org/doc/Documentation/blockdev/zram.txt)

------
Animats
Swapping should have disappeared years ago. At best, it gives the effect of
twice as much memory, in exchange for much slower speed. It was invented when
memory cost a million dollars a megabyte. Costs have declined since then. How
much does doubling the memory cost today?

What seems to keep swap alive is that asking for more memory ("malloc") is a
request that can't be refused. Very few application programs handle an out of
memory condition well. Many modern languages don't handle it at all. Nor is it
customary to check for a "memory tight" condition and have programs restrain
themselves, perhaps by starting fewer tasks in parallel, opening fewer
connections, keeping fewer browser tabs in memory, or something similar.

I've used QNX, the real-time OS, as a desktop system. It doesn't swap. This
makes for very consistent performance. Real-time programs are usually written
to be aware of their memory limits.

Most mobile devices don't swap. So, in that sense, swapping is on the way out.

~~~
AnthonyMouse
> Nor is it customary to check for a "memory tight" condition and have
> programs restrain themselves, perhaps by starting fewer tasks in parallel,
> opening fewer connections, keeping fewer browser tabs in memory, or
> something similar.

These aren't mutually exclusive and are actually complementary with swap.

If you have more than enough memory then swap is unused and therefore
harmless. The question is, what do you do when you run out? Making the system
run slower is almost always better than killing processes at random.

And it gives processes more time to react to a low memory notification before
low turns into none and the killing begins, because it's fine for "low memory"
to mean low physical memory rather than low virtual memory.

It also does the same thing for the user. "Hmm, my system is running slow,
maybe I should close some of these 917 browser tabs" is clearly better than
having the OS kill the browser and then kill it again if you try to restore
the previous session.

~~~
qznc
I cannot remember a single occasion where my desktop recovered after it
started swapping. Every time, the whole system locks up and I need to reboot.
Thus, better to kill some random processes than all of them.

~~~
Razengan
> I cannot remember a single occasion, where my desktop recovered when it
> started swapping.

..which operating system is that?

~~~
qznc
Ubuntu

~~~
pareidolia
Sounds to me like your swap is not swapon'd. I get the same behaviour when I'm
not running swap and memory is depleted.

------
scottlamb
I hate swap. My experience with it is that once a disk-backed machine (as
opposed to SSD) has started swapping, it's essentially unusable until you
manually force all anonymous pages to be paged in by turning off swap ("sudo
swapoff -a" on Linux) or reboot.

My hunch is that the OS is swapping stuff back in stupidly. Once memory is
available, I'd like it to page everything back proactively, preferring stuff
from swap and then from file-backed mmaps. But instead it seems to be purely
reactive, each major page fault requiring a disk seek to page in what's needed
with little if any readahead. Basically the whole VM space remains a minefield
until you stumble over and detonate each mine in your normal operation. Much
better to reboot and have a usable system again.

On my Linux systems, I've turned off swap.
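
For reference, the "force everything back in" trick above amounts to the following (note that swapoff will fail partway if the swapped-out pages don't fit in free RAM):

```shell
# Force all swapped-out anonymous pages back into RAM, then re-enable
# swap so it's available again for the next emergency
sudo swapoff -a && sudo swapon -a
```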

On OS X...last I checked, I wasn't able to find a way to do this. I'd like to
turn off swap entirely, or failing that, have some equivalent way to force all
of swap to be paged in now so I don't have to reboot when I hit swap. Anyone
know of a way?

~~~
outworlder
> My experience with it is that once a disk-backed machine (as opposed to SSD)
> has started swapping, it's essentially unusable until you manually force all
> anonymous pages to be paged in by turning off swap ("sudo swapoff -a" on
> Linux) or reboot.

That depends. If your workload exceeds the amount of available memory, you
will start "thrashing" the disk, and that can make a system unresponsive.

If you happen to launch a large application, or start working with a big file,
unused pages will be evicted to disk to make room and, after some slowdown,
the system should become perfectly usable again. YMMV

On OSX, I don't know a way, but I can't recall the last time I had to reboot
due to RAM/swap issues, even when I was developing apps on a 4GB Macbook Air.
I guess memory compression, which is enabled by default, helps here. Most OSX
systems have very fast SSDs as well.

~~~
tluyben2
On my 2015 MBP with 8GB & SSD, I am often stuck for 10-15 minutes unable to do
anything while thrashing. And I am someone who has Activity Monitor handy. I
do not have this on my much older and weaker Ubuntu X220s doing the same type
of development. Not sure why that is.

~~~
Yaggo
If it's a 3rd-party SSD, have you enabled TRIM? I had to do that for my old
Mac Mini; it made a big difference. (The 2015 MBP of course has a factory-
installed SSD, but maybe this helps someone else.)

[http://osxdaily.com/2015/10/29/use-trimforce-trim-ssd-mac-os-x/](http://osxdaily.com/2015/10/29/use-trimforce-trim-ssd-mac-os-x/)

~~~
tluyben2
It's all original Apple hardware.

------
benibela
Something seems to be seriously wrong with the swap implementation on modern
systems.

20 years ago on Windows 98 it just started swapping, but it was no big deal.
If something became too slow to be usable, you could just press ctrl+alt+del
and kill that swapped program and everything worked fine afterwards.

On the other hand, my modern Linux laptop starts swapping, and it swaps and
swaps and you can do nothing, not even move the mouse, until 30 minutes later
something crashes.

~~~
throwawayish
I have been using various operating systems for a while.

I feel like Linux has, in general, from a UX point of view, the worst
behaviour when swapping and the worst behaviour in general under memory
pressure.

I feel like it has gotten worse over time, which might not be just the kernel
but the general desktop ecosystem. If you require much more memory to move the
mouse or show the task manager equivalent, then the system will be much less
responsive when it thrashes itself.

Honestly, I'd much rather have Linux just crash and reboot; that'd be faster
than its thrashing tantrums.

Luckily, there's earlyoom, which just rampages through town quickly when
memory pressure approaches. Like a reboot (i.e. damage was done), just faster.

In any case, it makes me sad (in a bad way) to see how bad the state of things
is when it comes to the basics of computing, like managing memory.

~~~
tluyben2
Not an excuse for bad implementations, but since I started running i3wm, my
feelings of happiness have increased rapidly. To such an extent that I never
want to run anything else: stability, speed, memory use... It solves (for me)
the issues you have.

~~~
sevensor
i3 is magnificent. The same display seems 10x bigger when using i3. As true
for netbooks as for big desktops. My old x120 dual boots win7, which is
unusably slow and unstable on it. Arch with i3 is still snappy. Unless I'm
running a web browser. Web browsers have gone insane.

~~~
doubleplusgood
I couldn't figure out how to scale i3 to the high DPI on my Yoga 900, with
Wayland on F25.

~~~
sevensor
If you're on Wayland, use Sway instead. It feels so much like i3 that I often
forget it's not i3. Hidpi works pretty well:
[https://github.com/SirCmpwn/sway/issues/797](https://github.com/SirCmpwn/sway/issues/797).
I use this on a Dell Precision with the 4k display.

~~~
doubleplusgood
Thanks!

------
derefr
What I've always been specifically confused about, is if there's any point in
giving a VM a swap partition inside its virtual disk, rather than just giving
it a lot of regular virtual memory (even overcommitting compared to the host's
amount of memory) and then letting the host swap out some of that RAM to _its_
swap partition.

Personally, I've never given VMs swap. I'd rather have memory pressure trigger
horizontal scaling (or perhaps vertical rescaling, for things like DBMS nodes)
than let Individual VMs struggle along under overloaded+degraded conditions.

~~~
tedunangst
Generally yes. In fact, this is why "balloon" drivers exist, to allow the host
to create backpressure and make the guest swap. The guest knows more about
which pages are interesting than the host. If you make the host do the
swapping, it will pick silly things, like the guest's disk cache, to write to
swap.

~~~
scott_s
For clarification to other readers, "Generally yes" was the reply to the
originally posed question, which means the above comment actually disagrees
with the suggested solution. (I had to read both a few times to get this
straight.)

------
sirn
One use of swap in modern systems: hibernation. If you need hibernation, swap
must exist, either as a swap file (pre-allocated, as uswsusp requires a fixed
offset on the disk to resume) or as a partition.

------
lmm
I've been reading these stories for ten years. About 8 years ago I started
taking them seriously and stopped using swap. Turns out not having swap works
much better. I'm amazed how slowly the consensus seems to be moving though.

~~~
problems
Yeah. I've had issues with this on some systems.

On Windows without swap, when you get even remotely low on RAM, things start
going really poorly for some reason: random latency. So even with 16 GB of RAM
I can't disable swap on Windows without some really strange performance
characteristics. I run SSDs, so I really wanted it off, and I just stuffed
more RAM in my box; with 32 GB it isn't a problem.

On Linux however, you can pretty much turn it off and everything will run
smooth until you're actually out and then you lag badly briefly, Linux's oom-
killer does its thing and all is good again within the span of a few seconds.

~~~
jandrese
I've noticed the same thing, Windows just becomes bizarrely cranky if you
disable swap entirely. My solution was to instead leave it on, but limit it to
just a couple of megabytes. That seems to avoid the VM subsystem freakouts
thus far.

~~~
speeder
Sadly, trying to investigate this is quite hard, since people are outright
hostile to questions about it.

If you ASK about swapping on windows, you get people telling you that
"Microsoft engineers are smart, don't disable swap and go <insert expletive
here>" even if you asked something that is NOT about disabling swap.

So, I had this gamer laptop: i7, nVidia GPU, 8 GB of RAM (when most machines
had 2 or 4), but a stupidly slow 5k RPM HDD made for power saving and locked
in "noiseless mode", thus with very slow seeks too (i.e. it moves the heads
slowly to avoid making noise and for aerodynamic reasons).

I noticed that even right after boot, RAM usage would jump to 6 GB and the HDD
would thrash endlessly, making the machine unusable... after some research I
found some interesting posts by MS employees about it:

Windows can "preemptively" use swap, it will write on swap things it thinks
you might need to swap out. Sounds good on paper.

Also, Windows has several caching systems, that will write to "RAM" random
crap.

One day that was particularly bad, I noticed that when I booted, Windows would
immediately attempt to copy to RAM a gigantic binary file containing the sound
files of a game I had played a lot recently. This caused thrashing from
reading the file; then it would attempt to load the other programs it needed,
page them out immediately, and enter some crazy loop of thrashing the I/O
forever... Every time I opened the task manager and looked at the graphs, disk
I/O was constantly maxed out at 100%...

Disabling the VM made the laptop behave better (despite all the bugs Windows
has when you disable the VM).

But what I really wanted was to change how the VM works... I wanted to keep
the VM and the caching but change the settings: for example, I would set it to
NOT page anything out at all unless RAM usage exceeded 80%, and to never
"cache" stuff unless the HDD was actually idle and a good amount of RAM was
free. Sadly, it seems this can't be done. I got no useful answer on Stack
Exchange sites when I asked (but I got a couple of personal messages and
emails full of expletives in many of the places where I asked; for some reason
people get personally offended when the subject is virtual memory).

~~~
Animats
Java on Windows used to have a background service which touched the pages of
the Java components to keep them in memory and make Java performance look
better. This was active even if you hadn't run a Java program in weeks.
OpenOffice once had a similar program. Enough things like that and you can't
get anything done.

~~~
jandrese
That program used to stall your startup something fierce too. It was really
annoying.

------
phil21
This is by far my biggest pet peeve in the space: the "rule of thumb" that you
need 2x RAM as swap. Even 10 years ago this "rule" was ancient and useless,
but it was always a constant challenge educating customers as to why, and that
yes, we really did know better than your uncle Rob.

Once a server hits swap, it's dead. There is no recovering it other than for
exceptional cases. If you are swapping out, you've already lost the battle.

I tend to configure servers with 512MB to 1GB swap simply so the kernel can
swap out a couple hundred MB of pages it never uses - but that's really more
to make people feel better than it really being useful at all.

~~~
thatcks
Rules of thumb involving more swap than RAM probably date from decades ago,
when Unix virtual memory systems were sufficiently primitive that the total
amount of virtual memory you could use was just your swap space, not swap
space plus (most of) RAM.

(The limitation came about because the simple way to handle swapping is to
assign every potentially swappable page of virtual memory a swap address when
you allocate it in the kernel. Then the kernel always knows that there's space
for the page if it ever needs to swap it out and you're never faced with a
situation where you need to swap out a page but there's no swap space left.)

------
gravypod
I wish we had taken the path of EROS [0] rather than "RAM and disk are
separate". A lot of problems stem from that incompatible viewpoint of
computing. Computer science is about hiding complexity under layers of
abstraction that continually provide safer states and constraints for the
things built on top of them. Our abstraction that RAM and disk are separate is
not safer, nor does it provide constraints that are simple to navigate.
Thinking about this the other way, where disk is all you need and memory is
just a write-through cache, is much safer in my opinion and leads to some
really cool application design.

If RAM and disk are the same, then writing a file system is just writing an
in-memory tree. No need to pull data from the disk; just navigate the tree in
your program's memory and pull the blob data out. Want to persist across
reboots, protect against power outages, or save user settings? Just set a
variable and it'll be there.

The benefits far outweigh the costs.

[0] -
[https://web.archive.org/web/20031029002231/http://www.eros-o...](https://web.archive.org/web/20031029002231/http://www.eros-
os.org:80/)

~~~
jandrese
How is this different than just memory mapped files? I guess it happens a
little more automatically, but it doesn't seem to really solve a major problem
that I can see.

~~~
gravypod
Have you ever lost power and lost data from a document you were editing? Has a
server ever crashed in a datacenter, its data corrupted, and now your company
has lost a few hundred thousand to a few million? Have you ever had to wait a
long time for processes to start again after fixing a hardware failure?

These are all problems that were solved on EROS-based systems. They used to do
demos where they would set up a system and have someone start working on some
code or a text document; they'd pull the power plug of the system, plug it
back in, and the user would be right where they left off. No data loss, no
corruption, just back to work.

None of that was handled in user space. That was all opaque and you didn't
have to worry about it at all.

~~~
SteveBash
I've read somewhere that for BeOS demos they used to play a bunch of videos
and music and then unplug/replug, and after boot everything was playing again
from where it left off. I guess they were using the same design for process
persistence.

~~~
gravypod
If "never lose data" isn't a great selling point then I don't really know what
is.

~~~
jandrese
It's a trivial problem if you're willing to run your system entirely off of
the disk. I mean the performance will be unbearably slow, but you'll never
lose your data.

------
mayoff
Is iOS a modern system? Because iOS does not have swap.

> Although OS X supports a backing store, iOS does not. In iPhone
> applications, read-only data that is already on the disk (such as code
> pages) is simply removed from memory and reloaded from disk as needed.
> Writable data is never removed from memory by the operating system. Instead,
> if the amount of free memory drops below a certain threshold, the system
> asks the running applications to free up memory voluntarily to make room for
> new data. Applications that fail to free up enough memory are terminated.

[https://developer.apple.com/library/content/documentation/Pe...](https://developer.apple.com/library/content/documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html)

------
sevensor
My desktop at work has 16G of RAM. I didn't bother setting up swap, and I find
the old guidance (2x RAM) pretty absurd at this point. I've had the OOM-killer
render the system unresponsive a couple of times, but only because I'd written
a program that was leaking memory and I was pushing it to misbehave. If you
really want virtual memory on purpose, you can still set up a memory-mapped
file for your big data structure.

~~~
sddfd
I don't have swap either. On 8GB it is pretty annoying, because a program I
often use frequently overcommits and the system hangs.

Is there any way to tell the OOM killer which program to kill first?

~~~
wyldfire
> Is there any way to tell the OOM killer which program to kill first?

The fun OOM analogy [1] that comes up when people propose different OOM killer
designs:

> An aircraft company discovered that it was cheaper to fly its planes with
> less fuel on board. The planes would be lighter and use less fuel and money
> was saved. On rare occasions however the amount of fuel was insufficient,
> and the plane would crash. This problem was solved by the engineers of the
> company by the development of a special OOF (out-of-fuel) mechanism. In
> emergency cases a passenger was selected and thrown out of the plane. (When
> necessary, the procedure was repeated.) A large body of theory was developed
> and many publications were devoted to the problem of properly selecting the
> victim to be ejected. Should the victim be chosen at random? Or should one
> choose the heaviest person? Or the oldest? Should passengers pay in order
> not to be ejected, so that the victim would be the poorest on board? And if
> for example the heaviest person was chosen, should there be a special
> exception in case that was the pilot? Should first class passengers be
> exempted? Now that the OOF mechanism existed, it would be activated every
> now and then, and eject passengers even when there was no fuel shortage. The
> engineers are still studying precisely how this malfunction is caused.

[1] [https://lwn.net/Articles/104185/](https://lwn.net/Articles/104185/)
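
That said, to answer the literal question: Linux does expose a per-process knob, oom_score_adj (the process name here is just a placeholder):

```shell
# Range is -1000..1000: higher values make the process a preferred
# OOM-killer victim; -1000 exempts it from the OOM killer entirely
echo 1000 | sudo tee /proc/"$(pidof -s my-expendable-daemon)"/oom_score_adj
```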

------
amyjess
I haven't used swap in years, and more recently I've accompanied that by using
earlyoom [0] to start killing processes when RAM usage rises above 90%.

Both changes have made my computers much more usable. Systems should be
designed to fail fast when memory is low instead of slowing down.

[0] [https://github.com/rfjakob/earlyoom](https://github.com/rfjakob/earlyoom)

------
ChuckMcM
One of the things we did at Blekko was treat swap usage as a 'soft' indicator
that something on the system had exceeded its footprint (our machines all had
96 GB of RAM, so it meant something was using too much RAM), and OOM-killer
messages in the log were grounds for taking the machine out, rebooting it, and
looking for a more serious problem (like sometimes things rebooted and had
32 GB less RAM).

That said, the article's recommendation is spot on in terms of making a
conscious decision about how you want your system to behave when it's coming
close to running out of memory. Large swap space was originally the way you
got things that were too big to fit in memory to run at all; now it's a way to
essentially batch-process very large data sets.

------
mixedbit
If Linux has no swap, it doesn't quickly and efficiently kill processes when
memory is exhausted. Instead it first removes executable code from RAM and
reads it back from disk when needed. This is because without swap executable
code is the only thing in RAM that is duplicated on disk and can be removed.
This makes the system completely frozen and unusable.

~~~
rcxdude
This is my experience too. I used to run my desktop without swap, but found
that the experience when running out of memory was even worse than with swap.
Also, there appears to be enough infrequently-used memory that swap gives a
bit more headroom (I will still manage to use up 32 GB of RAM).

------
phire
Last time I tried running a linux system with zero swap, I ran into huge
issues.

It would never actually hit the OOM killer; instead it would just lock up
while it still technically had a few hundred MB of memory free.

From what I can tell, it was stuck in a loop evicting something from cache and
then immediately pulling it back in from disk. Everything was technically
still running, but the UI wasn't responsive enough for me to even kill a
program.

Simply adding 200 MB of swap would change the behaviour enough that the OOM
killer would eventually run.

------
phkahler
I never understood the rule of thumb where swap space was proportional to the
amount of physical RAM. It seems to me it should be the size of your largest
expected allocation (system-wide) minus the amount of physical RAM, or
something like that. If you had a nicely configured system and took out half
the RAM, it doesn't make sense that you'd want _less_ swap space.

~~~
cbhl
I think it made a lot more sense in the mid-90s, where a system would have 32
MB of RAM and people read the RAM requirements of the software they'd buy. So
the size of your largest expected allocation was proportional to the size of
RAM only because if you had more RAM you'd run more RAM-intensive software.

Now, desktops can have 32 GB of RAM, but everyone just uses it to run Chrome.

~~~
fb03
> Now, desktops can have 32 GB of RAM, but everyone just uses it to run
> Chrome.

.... which will happily chew away your 32 GB of RAM if you let it run for
enough time :)

------
jedberg
My feeling on swap is this:

1) If you're ok with one machine dropping out of your system, you don't need
swap.

2) You should never build a system where losing a single machine is a problem.

3) Therefore, you should never need swap.

4) Perhaps there is an exception for a desktop machine, since it doesn't fit
rule 2.

~~~
galdosdi
Tend to agree.

A bit of a side ramble: Unfortunately, sometimes regarding rule 2, you already
have a system where losing a single machine is a problem, and it will take
time and resources to improve or replace it to the point where losing a single
machine isn't a problem, so "in the meantime" you have to accept and support
this.

Also, sometimes "the meantime" is very long. :-(

Also, by the time the system is improved to be more resilient, maybe you'll be
working somewhere else or on something else, and, presto, you'll uncover some
other horrible legacy system in your dependency chain that isn't resilient
either. It seems as if at every organization that has had computers for long
enough, there is an infinite supply of legacy systems.

Point being: unless you only work with brand-new things that themselves only
work with brand-new things, you can't get out of becoming decent at managing
services that aren't properly "any single machine can disappear" resilient.

~~~
jedberg
Sure, dealing with legacy systems might mean messing with swap.

However, as pointed out elsewhere, if you're hitting swap your performance
will be so bad you might as well have lost the machine.

------
zumu
How do you hibernate with no swap? Do you need a special hibernation partition
to write to?

~~~
gravypod
The way I've done it is to create a swap file and set vm.swappiness to 0 so
nothing actually gets paged to it in normal operation. Hibernation forces the
writes, so it will get used on hibernate.
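
A rough sketch of that setup (sizes are examples for a 16 GB machine; you'd additionally need the resume= boot parameters, which vary by distro and by whether it's a file or partition):

```shell
# Create a swap file at least as large as RAM so a hibernation image fits
sudo dd if=/dev/zero of=/swapfile bs=1M count=16384
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Discourage paging to it during normal operation (global setting)
sudo sysctl vm.swappiness=0
```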

------
rkeene2
The main issue I have with running modern Linux without swap is that it can
keep the kernel busy for hours at a time. As the kernel runs low on RAM, it
has to spend more time searching for smaller and smaller chunks of RAM to back
each request; the smaller chunks are more numerous, and the "kswapd" kernel
thread is responsible for this activity. As the system approaches zero free
RAM, kswapd will also try to release less important pages, which takes more
CPU time. Ultimately you reach the point where allocations take a really long
time, and there are lots of allocations.

------
rini17
I recommend using swap together with zswap, and increasing swappiness. Zswap
is available in the mainline kernel. It keeps compressed "swapped-out" pages
in memory (so they are accessible quickly on a page fault) and only
incompressible pages go to disk. Usually most memory is compressible and the
overhead is small, so it is suitable for many workloads. See
[https://wiki.archlinux.org/index.php/Zswap](https://wiki.archlinux.org/index.php/Zswap).
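
On a kernel built with zswap, enabling and tuning it might look like the following (run as root; parameter names per the kernel and Arch wiki docs, values are examples):

```shell
# Enable zswap with lz4 compression and let the compressed pool
# use up to 25% of RAM before spilling to the real swap device
echo Y   > /sys/module/zswap/parameters/enabled
echo lz4 > /sys/module/zswap/parameters/compressor
echo 25  > /sys/module/zswap/parameters/max_pool_percent

# Bias the kernel toward swapping out cold anonymous pages
sysctl vm.swappiness=80
```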

------
vbezhenar
Many applications request memory, do some writes, and never use it again in
typical scenarios, so this memory is effectively wasted. If there's swap, a
smart operating system will swap that memory out and use the physical memory
for more important tasks, e.g. disk caching. So using swap allows for more
efficient memory usage. E.g., on a small server with 21 days of uptime I have
102/1024 MB of memory used and 41 MB in swap, which means I get about 4% of my
memory almost for free.

------
pmontra
My Ubuntu development laptop has been running without swap since I bought it
in early 2014. It's got 16 GB of RAM and sometimes it hits 11 GB of used
memory. No problems whatsoever. If I'll start hitting the memory limit I'll
buy another 16 GB. I've replaced the HDD with a SSD but I don't understand why
I should use it as swap like in the old days of RAM scarcity.

------
kabdib
It's not uncommon for us to buy rack machines with much more RAM than disk.
The disk is almost uninteresting, except that we need a place for an OS to
boot from, and some other legacy things.

I suspect I would be fine with much of our datacenter being diskless (and
putting disk -- ahem, I mean _storage_ -- where it is needed). Local disk is a
headache more often than not.

------
treffer
I have a somewhat different view on swap.

The issue is not swap or swap utilisation; the problem is worst-case latency.
Even for a database, an OOM kill is usually better than a latency hit that
makes it unusably slow.

As a simple example an app might start allocating and use memory in an
infinite loop. How long will that take? How long will your system be
unresponsive?

If you have more swap than you can write in 30 s, you're most likely doing it
wrong (your system can be unresponsive for 60+ s).

Another worst case is allocating all memory, using it, and then performing
random reads throughout the memory space. Your swap-to-RAM ratio determines
how many misses, and thus how much I/O, you are doing instead of direct memory
access. This should stay well below your I/O capacity.

As a result I usually try to use a small swap partition and monitor for swap-
ins, not swap usage or swap-outs.

So that's my thinking around swap mainly due to the fact that I have seen too
many servers causing issues due to swap related latency.
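
The 30-second rule turns into a quick back-of-the-envelope calculation (the throughput figure here is an assumed sustained write speed, not a measurement):

```python
# Cap swap at what the disk can absorb within a latency budget,
# per the rule that more swap than ~30 s of writes is too much.
def max_swap_gib(budget_s: float, write_mb_s: float) -> float:
    """Largest swap size (GiB) writable within budget_s seconds."""
    return budget_s * write_mb_s / 1024

# e.g. a 30 s budget on a disk sustaining 150 MB/s allows ~4.4 GiB of swap
print(f"~{max_swap_gib(30, 150):.1f} GiB")
```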

------
sitkack
If you want to fork/exec from a process using a large fraction of the physical
address space on the machine, you need swap to maintain enough virtual address
space. Needing swap and using swap are different things. How swap interacts
with the process and memory subsystems is poorly understood.

------
xorcist
I tried running systems without swap a few years ago. It wasn't a very good
idea. Most applications are very generous in their memory usage (not to
mention allocation, which is almost always insane), and normally those pages
are swapped out never to be heard from again. So without swap, performance
suffers because fewer pages are available for cache. (And in virtual
environments it gets even worse, since the balloon driver isn't really that
great.) I didn't have time to see it through and abandoned the experiment.

In light of that this recommendation from Red Hat is very interesting. Just a
fifth of memory as swap is probably enough to get real world performance back,
without getting completely stuck when something goes haywire. On large memory
systems it should probably be even less.

------
dancek
> [Recommended amount of swap] depends on the desired behaviour of the system,
> but configuring an amount of 20% of the RAM as swap is usually a good idea.

This sounds like good advice compared to the classic "2x RAM" guideline. Back
in the HDD era when we already had around 8GB RAM I started wondering how long
it would take to actually fill 16GB of swap in terms of raw write speed.

On the other hand SSDs are fast enough that swap might actually make a low-
memory system feel faster.

My current Linux laptop has around the same amount of swap as RAM. Am I
mistaken in thinking that suspend-to-disk saves RAM contents on the swap
partition?

------
adultSwim
What about supporting hibernation? Is that possible via a swap file now?

~~~
kevinmgranger
It depends upon what filesystem you're writing it to, but the answer is mostly
yes.
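
For the curious, a sketch of what "yes" involves on ext4 (the device name and
offset below are placeholders, not real values): the kernel needs to know both
the block device backing the swap file and the file's physical offset on it.

```shell
# Find the swap file's first physical offset on its filesystem:
#   filefrag -v /swapfile      # take the first "physical_offset" value
# Then point the kernel at the backing device and that offset,
# e.g. in /etc/default/grub (values below are placeholders):
GRUB_CMDLINE_LINUX="resume=/dev/sda2 resume_offset=38912"
# ...followed by update-grub (or grub2-mkconfig) and a reboot.
```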

------
beezle
I'm not sure if it is still true on Win 10, but earlier versions started to
perform terribly if you had no swap on the boot partition, regardless of how
much core you had.

~~~
sevensor
This is contrary to my experience in Windows 7. As under Linux, I run Win7
without swap on modern hardware, and I've had no trouble there. Which versions
of Windows have you had trouble with?

~~~
beezle
XP and 7. My current rig had 32GB, though I left the boot swap at 1.5GB when 8
was installed and likewise after the upgrade to 10. My understanding when
researching this (long ago) was that Windows likes to purposely put stuff in
the swap even if there is free RAM.

Even now, with not quite 9GB "in use", 22GB "standby", and 1GB "free", the
paging file is at 1.5% use with a peak of about 3%. Granted, that is tiny on a
fixed 1.5GB file, but for some reason Windows feels the need to drop about
20-50MB in the pagefile.

------
onli
That article takes a system with 2GB of RAM as an example. For a modern system
that is pretty unrealistic; even laptops have more. My system has 12.

I missed a mention of zram. Zram can create ramdisks and compress them. It
can create a compressed swap disk in RAM, basically making your RAM last
longer in case you really run out of memory. In my experience that is a good
alternative to having a bit of swap space as a reserve, as the article
recommends.
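
Setting up zram as swap is only a few commands with util-linux's zramctl (a
sketch, not a tuned config: the 4G size, lz4 algorithm, and swap priority are
arbitrary examples, and everything here needs root):

```shell
# Load the zram module and claim a free compressed RAM block device.
modprobe zram
dev=$(zramctl --find --size 4G --algorithm lz4)   # prints e.g. /dev/zram0

# Format it as swap and enable it with a higher priority than any
# disk-backed swap, so the kernel prefers the in-RAM device.
mkswap "$dev"
swapon --priority 100 "$dev"
```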

------
z3t4
Disk space is cheap. Add a few gigs of swap, or make a swap file! If you just
have a few gigs of HDD disk space left, buy another disk!! Having _some_ swap
is worth it.

One neat thing that swap can be used for is to take sleeping processes, that
might use a lot of memory, and put that memory on disk to free up memory for
other _active_ programs.

How much swap do you need? The square root of your RAM size.
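
Making a swap file is indeed only a few commands (a sketch; the 2G size and
/swapfile path are arbitrary, and activation needs root):

```shell
# Allocate a 2 GiB file, restrict access, and format it as swap.
fallocate -l 2G /swapfile    # on filesystems without fallocate support, use dd
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile             # enable it for this boot

# Make it permanent:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```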

------
virmundi
I've disabled swap on MacOS. Works fine with just 16 GB. That's with Spring
STS and at least one 2GB virtualbox for running ArangoDB. I do use Safari so
it uses less memory than Chrome.

I shut it off because, in my opinion, OSX is pretty shitty at memory
management. It swaps for no good reason. I've had 6 GB free and still using
1.6 GB of swap. That shouldn't happen.

~~~
kalleboo
It swaps out unused memory to free up space for disk cache in order to improve
performance. This is actually a good use of memory.

------
patrickg_zill
Not knowing a huge amount of OS kernel theory, I can still point to Solaris
10 and its handling of swap as being clearly superior. I had a 16GB RAM
server giving me problems... I logged in and it was swapping continuously with
only 2MB, yes 2048KB, free. Yet my ssh session was not overly lagged.

Under Linux, if there is heavy swapping, forget it, nothing will work well.

------
INTPenis
I love how the article essentially says that each situation is different and
then says 20% of RAM is a good rule of thumb.

In my experience, the most commonly used swap configuration is minimal, 500M
perhaps. And vm.swappiness=1.
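
For reference, that tweak is a one-liner (a sketch; the /etc/sysctl.d filename
follows a common convention and is otherwise arbitrary):

```shell
# Read the current value (no root needed):
cat /proc/sys/vm/swappiness

# Apply a new value immediately (root):
sysctl vm.swappiness=1

# Persist it across reboots:
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
```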

I'd say it's more rare to find a system that actually needs swap than one that
can do without it.

I have yet to run into an application that for some reason needed swap around.

------
leereeves
Swap seems like a nice safety valve. Preferable, I think, to having an
important program suddenly shut down mid-use because the system is out of
memory.

~~~
Aardwolf
It must depend on the use case, but I prefer a program that is about to use
swap (usually one where I accidentally allocate a way-too-big buffer) to fail
automatically, rather than having to use the now-unresponsive system UI to
kill it.

~~~
ordu
You are right, it depends. While building Firefox from source the system needs
several gigs of RAM. At the same time, normal functioning of my system does
not need more than 4GB. And for a couple of years I used swap just for such
big /usr/bin/ld processes. Now I have 8GB of RAM and linking FF or LibreOffice
is not an issue anymore.

~~~
jandrese
I remember when it became impossible to build Firefox on our lab machines with
1GB of RAM and 2GB of swap. Even before that it took literally all day to
build.

I got another taste of that lately when I had to build Wireshark from source
on a Raspberry Pi model B thanks to broken packages in the repo. At least it
was the version with 512MB of memory and overclocked. For the most part
Wireshark isn't that bad to build, certainly better than Firefox, but there
are a couple of dissectors that have unreasonably large source files.

------
dredmorbius
Swap isn't the problem. Swapping is.

Question is: is there a way to identify who the culprit is and take
appropriate vengeful action in the instance of swapping? Generally it's "foo
large application", though there's also a very strong tendency for "foo large
application" to be a critical system element -- either OS or application
level.
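
One concrete way to answer the "who" question on Linux: each process reports
its swapped-out footprint as VmSwap in /proc/&lt;pid&gt;/status, so a short loop can
rank the culprits (a sketch):

```shell
# Rank processes by VmSwap (KiB currently swapped out), biggest first.
# Kernel threads have no VmSwap line and are skipped.
for d in /proc/[0-9]*; do
    swap=$(awk '/^VmSwap:/ {print $2}' "$d/status" 2>/dev/null)
    [ -n "$swap" ] && [ "$swap" -gt 0 ] && \
        echo "$swap KiB  $(cat "$d/comm" 2>/dev/null)  (pid ${d#/proc/})"
done | sort -rn
```

Where available, the `smem` tool reports the same per-process swap numbers
with nicer formatting.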

------
Dayshine
So, the argument the article makes is:

1. Swap is slow

2. If using swap, your system starts to thrash

3. If thrashing, you can't close programs to free memory

4. If you can't close programs, you have to wait until the task is killed by
the OS

5. If you have no swap (or very little), you don't have to wait.

Except with an SSD, swap isn't slow enough to cause that issue. So really this
article only seems to apply to servers, not desktops.

~~~
jandrese
It seems like the OS developer's opinion these days is that RAM is cheap,
don't swap. In the old days people cared a lot about swap performance because
RAM was so tight that you were virtually guaranteed to swap at some point.
These days you get 16GB sticks in boxes of Crackerjacks so why would you ever
swap?

Of course the trend of making notebooks thinner by ditching the SODIMM slots
and soldering insufficient amounts of memory to the mobo may reverse this.

------
TorKlingberg
Do current operating systems ever page out memory to swap when not necessary,
in order to make room for more disk cache in RAM?

~~~
saurik
If OSes stopped doing this at some point, I'd be shocked: this is why having
memory and swap is valuable.

~~~
deathanatos
You'll be shocked, then. Linux under the default settings won't swap unless
there is pressure to do so.

Windows will.

It's a tradeoff: if you swap something out while not under pressure, that
_could_ be the thing you next need, resulting in it just getting swapped back
in. Or, maybe not and the extra cache is useful (but if you're not under
pressure, maybe letting go of older cache, instead of swapping something out,
is a better trade: letting go of old cache doesn't require swapping out, since
it's by-definition backed by disk.)

~~~
pvdebbe
Windows' approach is painful if you run it on slow HDDs, like I did 15 years
ago before switching to Linux full time.

------
smarterclayton
In designing Kubernetes, for instance, we document and (mostly) enforce swap
off for many of the reasons laid out here. Kubernetes takes over the
management of host overcommitment, and being able to react correctly to OOM
and near-OOM depends to some degree on having a clear understanding of the
actual memory use on the system.

------
garrybelka
The real issue is not the amount of swap but thrashing.

E.g., several large processes sleeping in memory on a desktop would be fine if
only one or two are used at the same time. OTOH, clustered nodes well tuned
for a single task may not need swap.

In any case, it is a metric for thrashing that should be used to initiate
culling.

------
htns
I have 16 GB of memory with swap off yet still sometimes get lag and freezes
due to low memory. An aggressive OOM killer or a performance watchdog should
be considered a mandatory feature. On desktop I'd much rather have my programs
shut down than get any lag.

------
lazyant
Just a note for people playing with vm.swappiness (= 0, for example): what it
does is different depending on distro and version (as in, with one version = 0
meant "absolutely no swap" and in another version it may mean "try not to
swap").

------
vmarsy
On modern systems with _a lot_ of RAM I found myself doing the opposite of
swap to speed up development: mount a disk in RAM using tmpfs [1] and then
change the ccache [2] directory to that RAM disk. With that setup, I obviously
don't want swap to kick in :) It can make the compilation of C++ programs much
faster.

[1] [http://ubuntublog.org/tutorials/how-to-create-ramdisk-linux.htm](http://ubuntublog.org/tutorials/how-to-create-ramdisk-linux.htm)

[2] [https://linux.die.net/man/1/ccache](https://linux.die.net/man/1/ccache)
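
The setup described above can be sketched like this (the mount point and 8G
size are arbitrary examples; mounting needs root):

```shell
# Mount a RAM-backed filesystem and point ccache's cache at it.
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
export CCACHE_DIR=/mnt/ramdisk/ccache   # ccache honors this variable

# Builds through ccache now read and write the cache at RAM speed.
```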

~~~
pavanky
I have had /tmp as tmpfs for a couple of years since arch switched to this
layout. I ended up having the opposite problem, probably because I never had
enough RAM.

I have only ever enabled swap when compiling a few projects with "-j8" would
take up all the memory. Using fewer threads would usually end up being slower
than more threads + swap.

------
zurn
Swap media is closing in on RAM speed at a fast clip. RAM latency has been
stalled for 30 years and storage latencies are doing very well & improving due
to flash. So from hardware POV, it should be a good time for swapping.

------
tscs37
Personally, I have a 100GB swap partition on my system... why? Because I'm a
filthy tab hoarder. I just put them in the background; if I need them, sure,
it takes a couple seconds, but I don't have to close 'em.

/shrug

~~~
TheGrassyKnoll
Me too (typing this in my 277th tab).

Using Firefox on Bunsen Linux (Debian Jessie derived).

It's only using half of 4GB of RAM, but there's 20MB in swap.

------
intoverflow2
All the laptops at my workplace have the minimum storage (Apple...). It
becomes frustrating when I open Photoshop and almost my entire free space
suddenly vanishes while my 16GB of RAM isn't even 20% utilised.

------
Beltiras
I've had 32 gigs of memory in my desktop rig for more than a decade. Needless
to say, no swap. Some decent points about mitigation like memory priority in
the article. Good read.

~~~
braveo
I have 32G in one machine, and 16G in another. I recently moved over to the
16G machine to do my dev work in, and I run a few VM's in it.

I've found myself wanting to upgrade it to 32G ram, but honestly that's about
the only use case (besides production servers) where I would ever consider
swap, and at that point I consider it a problem of not enough memory rather
than swap being necessary.

------
isaac_is_goat
Can you change swap settings on macOS? I haven't dug in deeply but couldn't
find an option anywhere, and I feel that the default settings are faaaaar too
aggressive.

------
kevin_thibedeau
I haven't used swap in years on an 8GiB machine. I might hit the OOM killer
once a year. It does its job and the system keeps running.

------
I_am_neo
I once swore that if I ever had at least 200MB of RAM I would turn swap off.
Decades and gigabytes of RAM later, that still hasn't happened.

------
crest
These days the primary use of swap is to get the efficiency of overcommitting
RAM without compromising reliability.

------
shmerl
Is it possible to hibernate without swap? That's one of the use cases when
it's actually useful.

------
st4rbuck
Are these rules also applicable to FreeBSD or other BSD-like systems?

~~~
crest
FreeBSD doesn't overcommit the available memory (RAM + swap) by default.

Don't use swap files on FreeBSD, because the file system write paths of UFS
and ZFS potentially allocate memory. Both geom_mirror (software RAID-1) and
geom_eli (disk encryption) are fine, and I would recommend using GELI to
create one-time keys for mirrored swap partitions at boot.
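
On FreeBSD that boils down to a one-line fstab entry: the .eli suffix makes
the rc scripts attach the device through GELI with a fresh one-time key on
every boot (a sketch; the gmirror device path is illustrative):

```shell
# /etc/fstab entry for GELI-encrypted, one-time-key swap on a
# gmirror device (device path is an example):
/dev/mirror/swap.eli	none	swap	sw	0	0
```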

Another good habit to get into is to limit the resources available to your
services to some generous upper bound you expect them to require. The most
flexible way to enforce those restrictions in FreeBSD is hierarchical
resource limits. Use them and monitor resource consumption. That way you get
an early warning before a rogue process drives the system into massive
swapping.

------
kebolio
I've found you definitely need swap if you don't have 8GB of memory. I
personally have nowhere near the amount of patience required to wait for the
OOM killer.

------
mamcx
What about OSX? Is it good to disable swap?

------
tomxor
Over the past 5 years swapping has always been due to a memory leak.

~~~
tomxor
(for me I mean), I should really learn how to disable it.

------
rascul
Just put your swapfile on a ramdisk.

