
Zombie Processes Are Eating Your Memory - janvdberg
https://randomascii.wordpress.com/2018/02/11/zombie-processes-are-eating-your-memory/
======
mwexler
The author mentions CcmExec.exe a million times without explaining it. A bit
of searching reveals that it's part of the Change and Configuration Management
system in MS Systems Management Server
([https://blogs.msdn.microsoft.com/jonathanh/2004/05/27/ccmexe...](https://blogs.msdn.microsoft.com/jonathanh/2004/05/27/ccmexec-exe-and-inventory-exe-what-are-they-and-what-do-they-do/)).
That's why he alludes to most folks not seeing it unless they are on a
corporate-managed machine.

~~~
overcast
Yes, this is part of System Center Configuration Manager, a required client
for deploying software and updates to corporate machines.

------
wwwigham
> Synaptics driver leaked memory whenever a process was created

Funny! I found the same issue last week and ended up just uninstalling the
driver (not sure how permanent that is, but I'd like to think it is). I
thought I was going crazy - why was I hitting the pagefile after 28 days of
uptime while only using around 8 of 32 real GB of memory? I loaded up the RAM
viewer tool and was aghast - hundreds of thousands of zombie processes! Death
by a thousand cuts!

~~~
brucedawson
I guess that Synaptics driver was triply bad. I could see the memory leak in
its process, and I could see the CPU usage it was triggering, but at the time
I did not realize it was creating zombies. Wow.

------
tekkk
Thank god, can't wait to try this out! My Windows always feels enormously
sluggish and memory seems to dissipate soon after reboot. It has been my
biggest gripe with my Lenovo Y700 laptop, and at times it has actually frozen
for a couple of minutes after waking from sleep mode (or quick start, the one
that powers off). Really unbearable, and I've been thinking of buying a new
one (or just using my work MBP even more).

UPDATE: Umm... after spending some time getting my Visual Studio to build the
FindZombieHandles project (a pre-built binary would have been nice), it
immediately exits, so I never get to see all the zombie processes. I don't
know C# and can't really afford to spend time changing the termination to
wait for, say, an Esc key press.

~~~
bonoetmalo
The repo has prebuilt binaries now

~~~
japaget
The binaries don't work for me. When I run as a privileged (admin) user I get:

> Some process names may be missing. 0 total zombie processes. No zombies
> found. Maybe all software is working correctly, but I doubt it. More likely
> the zombie counting process failed for some reason. Please try again. Pass
> -verbose to get full zombie names.

~~~
brucedawson
That happens sometimes. I don't know why.

------
MatmaRex
Interesting. On my machine, Razer mouse drivers (RzSDKService.exe) seem to be
leaking 2 handles every second.

~~~
lostconfused
Bad driver, time to remove it.

~~~
brucedawson
If you could complain to Razer first - maybe on twitter - that would be
helpful.

------
lifeformed
Running it on my machine tells me:

> 0 total zombie processes. No zombies found. Maybe all software is working
> correctly, but I doubt it. More likely the zombie counting process failed
> for some reason. Please try again.

------
obsurveyor
Looks like Creative Cloud leaks handles. Two and a half days uptime, no app
updates in Creative Cloud and it had ~16,700 open handles.

------
sebazzz
Sounds like a potential DoS: just create and delete processes, never freeing
the handle. And the offending process is not easily found by the average user.
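The Windows mechanism in the article is different (zombies there are kept
alive by leaked process handles), but the classic POSIX zombie illustrates
the same accumulation effect. A minimal, Linux-only sketch (it reads
`/proc`, so it won't run on Windows or macOS): a parent that spawns children
and never reaps them leaves a zombie behind for each one.

```python
import os

# Spawn children that exit immediately; the parent deliberately
# never calls waitpid(), so each dead child lingers as a zombie.
zombies = []
for _ in range(10):
    pid = os.fork()
    if pid == 0:
        os._exit(0)        # child exits immediately
    zombies.append(pid)

def proc_state(pid):
    # Field after the "(comm)" section of /proc/<pid>/stat is the state.
    with open(f"/proc/{pid}/stat") as f:
        return f.read().rsplit(")", 1)[1].split()[0]

# Poll until every child has finished exiting.
while any(proc_state(p) != "Z" for p in zombies):
    pass

zombie_count = sum(proc_state(p) == "Z" for p in zombies)
print(zombie_count)  # 10 - one zombie per unreaped child

# Reaping with waitpid() frees the kernel bookkeeping for each one.
for pid in zombies:
    os.waitpid(pid, 0)
```

Scaled up inside a loop, this is exactly the resource-exhaustion scenario
described above, and nothing in Task Manager makes the culprit obvious.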

------
vim_wannabe
Has anyone had an iOS device lose battery really fast, only lasting for less
than a day, until you reboot and everything is fine again?

~~~
khedoros1
No, but I've got one that abruptly went from several days of charge to about 4
hours, at the same time that the TouchID sensor failed. Reboot was the first
thing I tried. Wipe-and-reload was the next.

------
tankenmate

      "I don’t know why zombie processes consume that much RAM,
       but it’s probably because there should never be enough of
       them for that to matter."
    

Wow, I am surprised at the complete breakdown in logic there. It's almost
like saying the cart is in front of the donkey because the donkey doesn't
matter. It's easy enough to find out why, but the OP author never bothers to.
And the answer is normally covered in any reasonable undergrad operating
systems class.

Am I becoming an old curmudgeon or is understanding operating systems just not
seen as necessary these days?

~~~
to3m
I think the logic makes sense. Presumably the thinking is that zombie
processes take up 64KBytes apiece because there aren't enough of them, in
general, to make it worth doing the work to make them take up less.

~~~
tankenmate
No, that is entirely wrong: if the RAM weren't necessary it would be freed,
regardless of how much there was. No one who writes production-grade kernels
just leaves memory hanging around because it's not worth cleaning up (and
certainly not on every process exit).

The reason the memory is retained is so that the parent process can be told
why the process died (e.g. normal exit (exit code 0), failure, killed by a
signal, etc.), and so that enough information is kept for simple diagnostic
purposes. That is why the page table mappings (for the little memory that is
retained) and private data (for the process exit status etc.) are kept.
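The POSIX version of this retention is easy to observe. A hedged, Linux-only
sketch (it reads `/proc`): the kernel keeps the dead child around as a zombie
precisely so that its exit status survives until the parent asks for it with
`waitpid()`, which is also what finally frees the bookkeeping.

```python
import os

pid = os.fork()
if pid == 0:
    os._exit(3)                     # child dies with exit code 3

# Poll until the child has exited; it is now a zombie (state 'Z').
while True:
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().rsplit(")", 1)[1].split()[0]
    if state == "Z":
        break

# waitpid() retrieves the retained exit status and frees the zombie.
_, status = os.waitpid(pid, 0)
print(os.WEXITSTATUS(status))       # 3 - the status the kernel kept
```

The Windows equivalent keeps the exit code reachable via
GetExitCodeProcess as long as any handle to the dead process stays open,
which is why leaked handles pin zombies in memory there.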

~~~
radford-neal
Sure. It's not that there's NO reason for it. But if the kernel writers
thought there'd be lots of them, they would do something to reduce the amount
of memory - e.g. apply some data compression method to them, at least once
they've been around a while (so that there would be no performance impact in
typical usage).

~~~
tankenmate
I have a strong suspicion that the minimum process size on Windows NT is 64
kB (between kernel and user pages, for Intel platforms anyway). So if you
didn't treat a zombie process as "just another process" for management
purposes, you'd have to start adding all sorts of exceptions in some very
low-level code (page tables, scheduler, system statistics and reporting, etc.)
just to clean up after other people who can't or won't code properly. I just
can't see a kernel engineer agreeing that this was a sane thing to do (I can
imagine how someone with a temperament like Linus would respond!).

~~~
radford-neal
Could be. But if it were a really big issue, I'd think that the minimum
process size could be reduced, which might be a useful change in any case.
After all, on the first Unix system I used, 64K was the MAXIMUM process
size...

The point is that there are always tradeoffs, whose resolution is affected by
how important an issue is perceived to be.

