
Do Not Use Task Manager for Memory Info - aybassiouny
https://mahdytech.com/2019/01/05/task-manager-memory-info/
======
AGoodName
This article correctly states that committed memory is that in use + memory
that's being paged out. Now why would you want to know the committed memory
over the actual physical RAM in use?

I can trivially create an app that memory maps a massive file and will show
several GB of committed memory. This won't be in use of course, memory mapping
files so that the OS will page in/out as required is intentional. Those GB of
committed memory aren't something you should care about. I'd be scared if
someone looked at the committed memory use of a program that correctly uses
mmap and exclaimed "OMG, this uses TBs of RAM!".

Task Manager is doing the right thing here. It's showing you what's actually
paged in and in use right now.
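The effect is easy to demonstrate with a quick sketch (Python on Linux; it reads VmRSS from procfs and assumes a filesystem that supports sparse files):

```python
import mmap
import tempfile

def rss_kib() -> int:
    # Resident set size of this process, from /proc (Linux only).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

size = 1 << 30  # a 1 GiB sparse file: truncate allocates no disk blocks
with tempfile.NamedTemporaryFile() as f:
    f.truncate(size)
    before = rss_kib()
    mm = mmap.mmap(f.fileno(), size, prot=mmap.PROT_READ)
    after = rss_kib()
    # The mapping adds 1 GiB of virtual size, but almost nothing becomes
    # resident until pages are actually faulted in.
    print(f"mapped {size >> 20} MiB; RSS grew by ~{after - before} KiB")
    mm.close()
```

The resident number barely moves; only touching the pages would pull them in.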

~~~
dijit
OOOOOOO, something I actually know!

> Now why would you want to know the committed memory over the actual physical
> RAM in use?

Because in Windows, committed size is accounted against physical RAM plus the
page file. You can commit a lot more than RAM, but watch your page file grow.

malloc() can fail on Windows for this reason. This is not the same on Linux
or any of the BSDs I've tried. :)

I experienced/discovered this last August... sometimes understanding a lot
about Linux can make you blind to the architectural differences Windows has.

I wrote some cross-platform C++ to demonstrate it [0].

[0]:
[https://gist.github.com/dijit/cb2caa1a40d48e03613f5af0e518d6...](https://gist.github.com/dijit/cb2caa1a40d48e03613f5af0e518d626)

~~~
greglindahl
> This is not the same on Linux

Linux lets you choose an overcommit policy.
[https://www.kernel.org/doc/Documentation/vm/overcommit-accounting](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting)
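On a given box you can check which mode is in effect straight from procfs (Linux only; a small sketch in Python):

```python
from pathlib import Path

def overcommit_policy() -> str:
    # vm.overcommit_memory: 0 = heuristic (the default), 1 = always
    # overcommit, 2 = strict accounting against CommitLimit.
    mode = Path("/proc/sys/vm/overcommit_memory").read_text().strip()
    return {"0": "heuristic", "1": "always", "2": "strict"}[mode]

print(overcommit_policy())
```

Under mode 2 (strict), Linux behaves much more like Windows: allocations fail up front when the commit limit would be exceeded.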

~~~
toast0
You can choose an overcommit policy on Linux, but most library developers on
Linux have chosen the default and regularly allocate wide swaths they don't
intend to use.

This is a real pain when moving an application from FreeBSD to Linux, as
effective limits on memory are lost (a ulimit set at ~90% of RAM results in a
malloc failure and a clean crashdump, rather than death by thrashing or an
untrappable OOM kill).

There could maybe be a middle ground where malloc would allocate large chunks
of address space for ease of administration, and then ask the OS to commit
those pages in smaller chunks as needed. Often, there's not a lot you can
do when allocation fails, but it's way more actionable if the failure is
returned from a syscall vs failing when you write to an unbacked page, which
could happen basically anywhere in your program.
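That middle ground can be sketched on Linux with mmap/mprotect (Python via ctypes; the constants are Linux x86-64 values, and note that under default overcommit the "commit" step is only strictly accounted when vm.overcommit_memory=2):

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)  # Linux: resolves libc symbols
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

PROT_NONE, PROT_READ, PROT_WRITE = 0, 1, 2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20  # Linux values

RESERVE = 1 << 30   # reserve 1 GiB of address space up front...
CHUNK = 1 << 20     # ...but only ask the OS to back 1 MiB of it

# Step 1: reserve. PROT_NONE mappings take no commit charge.
base = libc.mmap(None, RESERVE, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
assert base not in (None, ctypes.c_void_p(-1).value), "mmap failed"

# Step 2: commit the first chunk by making it accessible. If memory can't
# be provided, you get an error code here, not a fault on first touch.
assert libc.mprotect(ctypes.c_void_p(base), CHUNK,
                     PROT_READ | PROT_WRITE) == 0, "mprotect failed"

ctypes.memset(ctypes.c_void_p(base), 0, CHUNK)  # safe: this chunk is backed
print("reserved", RESERVE, "bytes; committed", CHUNK)
```

The failure point moves from "somewhere in the program, on a page fault" to one syscall return value you can actually handle.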

------
fencepost
For versions of Windows before 10, I'd agree with this 100%. Windows 10 (or
maybe 8-8.1?) pulled in a lot of functionality from Process Explorer and is
much improved.

For developers, Process Explorer (and ProcMon and a few other utils) is
likely an improvement, but frankly, if you're doing Windows development you
should already have learned about them, and probably some of Nir Sofer's
tools as well (nirsoft.net). For 90% of people (even developers), what
Process Explorer provides probably isn't needed.

Side note, in Process Explorer if you turn on the lower pane (View menu or
Ctrl-L) you can view all handles that a process has open, including file
handles. That can be useful for identifying unrecognized processes.

~~~
rococode
Here's the link to Process Explorer: [https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer](https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer)

It also has an option to replace Task Manager so that it comes up when you do
Ctrl+Shift+Esc, etc.

------
saagarjha
This article seems to be a high-level overview of how virtual memory works,
without mentioning it at all… I find the terminology use rather strange too.
Saying that the virtual address space is “reserved by the OS for each
process” was a bit confusing to me.

~~~
MarkSweep
The article confuses address space and memory reservations, as you note. It
repeats this confusion when it suggests the virtual size column in Process
Explorer reflects the address space[1]. It also suggests that working set,
private bytes, and committed memory are the same thing[2]. The article does
not even live up to its click-baity title: Task Manager's default memory
column of "private working set" is a decent measure of how much memory a
process is uniquely using, and Task Manager can add all the other measures of
memory usage that the article mentions.

For a better explanation of virtual memory in Windows, I recommend Mark
Russinovich's article[3]. His tool VMMap[4] is useful for visualizing the
memory usage of an individual process.

[1]: The reserved memory is really large for 64-bit processes that use Control
Flow Guard: [http://www.alex-ionescu.com/?p=246](http://www.alex-ionescu.com/?p=246)

[2]: Task Manager and Process Explorer add to the confusion by calling the
same memory different things (Process Explorer's "private bytes" number is the
same as Task Manager's "commit size" number on Windows 10 1809).

[3]: [https://blogs.technet.microsoft.com/markrussinovich/2008/11/17/pushing-the-limits-of-windows-virtual-memory/](https://blogs.technet.microsoft.com/markrussinovich/2008/11/17/pushing-the-limits-of-windows-virtual-memory/)

[4]: [https://docs.microsoft.com/en-us/sysinternals/downloads/vmmap](https://docs.microsoft.com/en-us/sysinternals/downloads/vmmap)

------
slenk
We took the site down.

Archive.org has a copy/mirror:
[https://web.archive.org/web/20190106190255/https://mahdytech.com/2019/01/05/task-manager-memory-info/](https://web.archive.org/web/20190106190255/https://mahdytech.com/2019/01/05/task-manager-memory-info/)

~~~
aybassiouny
Thanks for the mirror! It should be back up now.

------
quotemstr
The bigger problem is that there's no good number that captures the memory
impact of a process on modern unified-memory page-cache-ful architectures.
Practically nobody gets this right, and the author of the article himself
glosses over some important details. Every byte of address space is either
allocated and backed by some memory object ("reserved" memory) or it's
unallocated. Commit charge is a measure of the amount of memory that the
system has guaranteed will be available in the worst case, should all
faultable address ranges be faulted, but that's not the same thing as memory
actually being used. For example, if you MapViewOfFile a 1GB file
PAGE_READONLY, you burn very little commit --- just enough for the page
metadata --- but if you change page protections to PAGE_WRITECOPY, then you
incur an extra 1GB of commit charge, even though you still haven't faulted
anything into memory or reserved more address space --- and that's because you
_could_ legally COW-fault every page in that file, and the kernel has to
commit (thus the word) to providing that memory in the worst case.

I work on Linux these days, which is even more confusing, because thanks to
overcommit, most people don't distinguish these different kinds of memory
allocation, even though the distinction between commit and reserved memory
exists on Linux too. (The kernel just lies about satisfying commit charges
unless you tell it not to lie to you. Most people are happy with overcommit's
optimism.)

Anyway, the key thing to realize about modern virtual memory subsystems is
that "memory consumption" is an incoherent concept. You can derive lots of
different numbers from memory management statistics, but each of these numbers
is useful for a _specific purpose_. There is no one number that will give you
an accurate measure of the impact of a particular process in all scenarios.
People constantly say, "Look: just give me a number that I can plot on a
dashboard and drive down over time". No such thing exists.

Task manager has to pick one of these numbers to show users by default, and
its choice, roughly equivalent to Linux Private_Dirty, probably isn't
terrible, since it's a decent proxy for how much RAM you get back if you kill
the process. I don't think total commit charge is as good a choice, since with
a large pagefile (which everyone should have) total commit can be much larger
than total resident memory. Linux PSS is another popular choice, since (unlike
Private_Dirty) it reflects the impact of a program's use of shared memory, but
PSS behaves in perverse ways --- e.g., starting another instance of a
memory-hungry process can make PSS _decrease_, because some pages in this
program are distributed across more processes, increasing the denominator in
the PSS calculation.

Are you worried about running out of page file space? Yes, you want to look at
commit. Are you wondering why you're seeing a large number of page faults
starting a game? Commit won't help you, but RSS might. It really depends on
the situation.

I wouldn't take the advice in the article at face value. If you want to
understand the impact a particular program has on the system's memory
behavior, you need to understand how the virtual memory system actually
behaves, and that's non-trivial.
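To see several of those per-process numbers side by side on Linux, you can pull them out of procfs (a sketch; smaps_rollup needs kernel 4.14+):

```python
def memory_metrics_kib() -> dict:
    # VmSize = address space, VmRSS = resident; Pss and Private_Dirty come
    # from the per-process smaps rollup. All values are in KiB.
    metrics = {}
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, val = line.split()[:2]
                metrics[key.rstrip(":")] = int(val)
    with open("/proc/self/smaps_rollup") as f:
        for line in f:
            if line.startswith(("Pss:", "Private_Dirty:")):
                key, val = line.split()[:2]
                metrics[key.rstrip(":")] = int(val)
    return metrics

m = memory_metrics_kib()
print(m)  # expect VmSize >> VmRSS >= Pss >= Private_Dirty
```

Even for a trivial interpreter process the four numbers diverge wildly, which is the point: each one answers a different question.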

~~~
stdgy
Lovely post, but I now realize that I am rather ignorant of the interplay
between processes and the kernel with regards to memory allocation.

Any ideas on cool stuff to read to remedy this?

~~~
edf825
Drepper's _What every programmer should know about memory_ is a little old and
goes into perhaps unnecessary detail at times, but is a great place to start.

[https://lwn.net/Articles/250967/](https://lwn.net/Articles/250967/)

------
NelsonMinar
Does anyone understand the story behind Process Explorer? For 10 years it's
been an invaluable tool. But despite being developed by Microsoft you have to
install it like third party software. Doesn't even come with an installer! How
come it never got integrated into the main OS?

~~~
NeedMoreTea
It originally started out as something from NTInternals. They were building
small utils with capabilities similar to what they had become used to in their
Unix background. Then they focused on all the low-level tools, which led to
MS buying them.

Not having an installer is actually a bonus. Far too many "simple" Windows
programs seem to need gigabytes of DLLs installed to the Windows folder and
spray themselves all over the system; /Windows bloats massively after a
couple of years of active use. An exe I can keep in a folder, or drop into
the path, and simply delete if no longer useful.

~~~
NelsonMinar
Well, an installer doesn't mean it has to install something bloated. Make a
folder for it in C:\Program Files\, copy over the .exe and .chm, done. I can
do this manually easily enough; I just think it's notable that no one ever
did it for this tool.

------
ed_elliott_asc
David Solomon (author of _Inside Windows NT_) used to call it “task mangler”
for this very reason :)

~~~
aybassiouny
Totally :) If I had a dime for everyone who mixed up working set and private
bytes because of using Task Manager...

------
Krasnol
Nah, if something hangs I use the key combination that starts up Task
Manager, not Perfmon. There I instantly see what the problem is, kill it (or
use a temporary suspend) and go back to work.

This is the most common case where I look into memory, and it will probably
be the case for most Windows users out there who know about Task Manager, so
there is no reason for them to "not use Task Manager" anymore.

I hate those generalizing click-bait headlines... they should at least come
up with an equally strong justification for them.

------
saltyshake
I am surprised by how little thought MS has always put into Task Manager
compared to every other process manager for Windows, including MS's own
Process Explorer.

Even on Windows Server 2016, it doesn't show the per-process memory correctly
for processes using over 128 GB.

I had to learn that the hard way when the overall RAM usage was pretty high
but I couldn't find any individual process using that much. Then I opened
Process Explorer and boom, SQL Server was using over 130 GB (cubes..).

------
qwerty456127
One small tip to whoever designs these apps: count memory in MiB. This is
way more intuitive; apps that consume less than a MiB are extremely rare.
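The conversion is a one-liner; a tiny sketch (Python, with a hypothetical helper name):

```python
def fmt_mib(n_bytes: int) -> str:
    # Display a byte count in binary megabytes (MiB), one decimal place.
    return f"{n_bytes / (1 << 20):,.1f} MiB"

print(fmt_mib(3_221_225_472))  # prints "3,072.0 MiB"
```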

