It represents a fundamental misunderstanding of how modern OSes work. That misunderstanding is not the problem; modern OSes are complex pieces of software, and most people shouldn't have to understand them. OSes should just work. The problem comes in when people who don't understand how they work get the itch to "improve" their system.
I'm actually continually amazed that even some technically inclined people misunderstand how memory works in /all/ modern OSes. Recently I had a discussion on Twitter that went something like this: "OS X is horrible, it uses nearly all of my 8 gigs of RAM and my browser is horribly slow!" to which I replied, "it's perfectly normal for the OS to appropriate the RAM in the fashion you see in Activity Monitor; this doesn't mean your machine is slow due to lack of RAM." Unfortunately, even after a lengthy discussion and several forwarded links, I seem to have failed to make the case.
It would be interesting if someone more knowledgeable than I am were to do a write-up explaining memory usage in OS X, Windows, and Linux; it would be an awesome resource to share with curious tinkerers who may be slightly misguided in their understanding of the inner workings of their computers.
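In the meantime, here is a rough Python sketch of the Linux side of that story, for anyone who wants to poke at the numbers themselves. It only reads /proc/meminfo (on OS X the equivalent figures come from vm_stat), and the arithmetic is an approximation of what tools like free report, not an authoritative accounting:

    # Rough sketch: show how much "used" RAM is really just reclaimable cache (Linux only).
    def meminfo_kib():
        """Return /proc/meminfo as a dict of {field: KiB}."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])  # values are reported in kB
        return info

    m = meminfo_kib()
    total = m["MemTotal"]
    free = m["MemFree"]
    cache = m.get("Cached", 0) + m.get("Buffers", 0)

    print(f"total:           {total / 1024:8.0f} MiB")
    print(f"'free':          {free / 1024:8.0f} MiB")
    print(f"cache/buffers:   {cache / 1024:8.0f} MiB  (given back the moment an application asks)")
    print(f"actually in use: {(total - free - cache) / 1024:8.0f} MiB")

The point the sketch makes is the article's point: the cache/buffers line is the bulk of what Activity Monitor-style tools report as "used", and it is memory the kernel hands back on demand.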
On a few occasions, Safari has claimed gigabytes of memory and caused my machine to start thrashing the swap. In that case, the correct answer is to restart Safari, not take a refresher course on virtual memory allocation.
Windows Vista had an issue where copying large files would set off such a huge swap-storm that the OS became completely unresponsive for several minutes. People gave all the same VM excuses then as well, but there was obviously something wrong.
Those aren't excuses, those are reasons: copying large files is the pathological case for just about all caching techniques. You've now kicked out everything useful from your cache with something that you will never use again. It has little to do with virtual memory itself.
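For what it's worth, a copying tool can tell the kernel not to keep those pages around. Here is a minimal Python sketch using posix_fadvise (POSIX/Linux only); bulk_copy is an illustrative name, not something cp or rsync actually exposes, and the per-chunk fsync is there purely so DONTNEED has something it can drop:

    import os

    CHUNK = 1 << 20  # copy in 1 MiB chunks

    def bulk_copy(src_path, dst_path):
        """Copy a large file while hinting the kernel to drop its pages from the cache."""
        src = os.open(src_path, os.O_RDONLY)
        dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            offset = 0
            while True:
                chunk = os.read(src, CHUNK)
                if not chunk:
                    break
                view = memoryview(chunk)
                while view:  # os.write may write less than asked
                    view = view[os.write(dst, view):]
                os.fsync(dst)  # dirty pages must reach disk before DONTNEED can drop them
                # We will never touch these pages again, so don't let them evict
                # data that other programs are actually using.
                os.posix_fadvise(src, offset, len(chunk), os.POSIX_FADV_DONTNEED)
                os.posix_fadvise(dst, offset, len(chunk), os.POSIX_FADV_DONTNEED)
                offset += len(chunk)
        finally:
            os.close(src)
            os.close(dst)

The advice is exactly that, advice: the kernel is free to ignore it, but in practice it keeps a big one-off copy from flushing everything else out of the page cache.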
As for why the browser is horribly slow, try blocking all the unnecessary (read: tracking) Javascript and Flash. Suddenly, the browser is a joy to use again.
Programmers need to remember that users are like this: (from an Apple discussion thread)
… [periodically]'installd' begins using 100% of all 4 CPU cores, my fan goes full speed and the whole computer gets very hot. Seems to happen before Software Update checks for updates. I usually go to the terminal and kill the 'installd' process, which reduces the fan speed and heat to normal within a minute.
I wonder if this guy is also one that says you need to reinstall from scratch every few months to keep things working?
The user's reaction is not surprising. No background daemon should use all 4 cores and be that noticeable. Users don't feel good when the computer starts doing things on its own that they don't expect.
Ubuntu had (or has) the same issue with a daemon that rebuilds an index for the graphical package-administration GUIs (I forget the name). They tried to help themselves with appropriate nice settings, which defused the issue, but on old machines you still need to move the cron job to monthly to have a usable system (a rough sketch of that kind of priority adjustment is below).
You simply can't use the system properly when a background process uses all the resources, and one normally wants to do something other than wait for the system.
I consider such behaviour a bug.
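(For reference, "appropriate nice settings" means roughly the sketch below: push the indexer to the lowest CPU and I/O priority so the foreground stays responsive. This is a Linux-only illustration; ionice is the util-linux tool, the pgrep pattern and process name are just examples, and you need sufficient privileges to touch someone else's process.)

    import os
    import subprocess

    def deprioritize(pid):
        """Push a background process to the lowest CPU and I/O priority (Linux)."""
        os.setpriority(os.PRIO_PROCESS, pid, 19)  # nice 19 = lowest CPU scheduling priority
        # I/O class 3 ("idle") only gets disk time when nothing else wants it.
        subprocess.run(["ionice", "-c", "3", "-p", str(pid)], check=True)

    # Example: deprioritize every process whose command name matches the indexer.
    pids = subprocess.run(["pgrep", "-x", "update-apt-xapian-index"],
                          capture_output=True, text=True).stdout.split()
    for pid in pids:
        deprioritize(int(pid))

Moving the cron job from weekly to monthly, as mentioned above, is the blunter but more permanent fix.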
"You simply can't use the system properly when a background-process uses all ressources. And one normally want do something else than wait for the system. I consider such behaviour a bug."
Agreed, but to add to your point: if your background process is taking up all the resources on my system, that defeats the purpose of it being a background process.
You are probably talking about apt-xapian-index. It rebuilds its indexes once a week, consuming all system resources.
I had an old notebook, which became unusable every weekend. I did not want to buy a new machine, so I simply uninstalled the apt-xapian-index package. :-)
"installd" is a completely undocumented (outside of Apple) daemon that is somehow involved in installing your software (it runs while installing App store programs) and checking to see what updates you might need. It has no user visible existence. No one would ever know about it unless they ran "ps" or something similar. They will find no hint at what it does, or if killing it is safe.
installd is part of the framework that handles installed packages. It's fired up as part of the normal software update process (although it really shouldn't be taking a core for itself). Killing it would probably abort the software update process.
I have seen installd peg my CPU to 100% and have killed it. What would you suggest I do?
One of the most annoying features of modern OSes is when some system process just decides to start going wild, eating memory and CPU. Often I find reinstalling is the only way to fix such things.
Let it finish. It has some thinking to do, maybe crypto checksums? Maybe just a really inefficient corner case in an algorithm. There's nothing wrong with letting your CPU work. If running at 100% makes your machine flaky, it is broken.
If your foreground performance is being impacted too severely (and I haven't seen this from installd; I just noticed and researched it while removing Mac Keeper (malware) from my wife's laptop), then reboot. It's extreme, but it has the best chance of getting your processes shut down cleanly, as opposed to a kill, where you could nail a process in the middle of a state that really does not want to persist. Programmers are a lazy sort; they won't consider the effect of termination at each point in their program. You are hunting for bugs using your live system as bait if you kill a program.
More than weekly (but fairly randomly) I used to find installd would peg my CPU to 100% and just stay there for hours. It doesn't make my machine flaky, but it makes it slow, and I have other things to do.
In the end I just reinstalled my OS, restored files from Time Machine, and everything was fine. I never did figure out why it was misbehaving. I have had a similar problem (once) with Spotlight. Fortunately, there I knew enough to run lsof to find it had gotten stuck in an infinite loop on one particular mp3 file, which I just deleted.
However, my point (which I should probably have been clearer about) is that bits of OSes are known to just start going wild for no reason, and often killing them, and eventually reinstalling, is the only option.
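If you want to see what a runaway process is actually chewing on before you kill it, the lsof trick is easy to script. A trivial sketch, just shelling out to lsof with whatever PID Activity Monitor or ps shows you:

    import subprocess
    import sys

    def open_files(pid):
        """List the files a process currently has open, via lsof."""
        result = subprocess.run(["lsof", "-p", str(pid)],
                                capture_output=True, text=True, check=True)
        return result.stdout.splitlines()

    if __name__ == "__main__":
        for line in open_files(int(sys.argv[1])):
            print(line)

In the Spotlight case above, that is the technique that pointed at the one offending mp3.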
"then reboot. It's extreme, but it has the best chance of getting your processes shut down cleanly as opposed to a kill"
So... how do reboots work on OS X? On every *nix flavor I know, there's one command that just halts the damn machine, damn the torpedoes, and there's one that does it more gracefully, by sending progressively harder-to-ignore signals to ~every process except init, ending with SIGKILL (which is not trappable).
A standard reboot from the GUI should be fine. If you boot up in verbose mode there are some amusing messages that are displayed at the top of the screen during reboot/shut down when a program has to be force quit by the OS.
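If you do have to kill a single process by hand, that same escalation is easy to mimic: ask nicely with SIGTERM first, give the process a grace period to clean up, and only then send the untrappable SIGKILL. A minimal sketch using POSIX signals (the grace period is an arbitrary choice):

    import os
    import signal
    import time

    def terminate(pid, grace_seconds=10):
        """SIGTERM first, SIGKILL only if the process is still around after the grace period."""
        os.kill(pid, signal.SIGTERM)  # trappable: lets the process shut down cleanly
        deadline = time.monotonic() + grace_seconds
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)       # signal 0 delivers nothing; it just checks existence
            except ProcessLookupError:
                return True           # it exited on its own
            time.sleep(0.2)
        os.kill(pid, signal.SIGKILL)  # not trappable: last resort
        return False

That is a gentler default than reaching straight for kill -9, and it is roughly what a clean shutdown does for every process on the system.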
It makes me feel old. As soon as I saw the headline I knew what the story was going to be. Another fresh wave of young faces learning about memory...I just wish they'd stay off the lawn.
It makes me feel a little embarrassed for HN actually. This article might be a revelation for the noobs on the Linux subreddit, but I'd expect the HN crowd to find it pretty pedestrian.
Not only that, but in some cases it is flat-out wrong:
No, disk caching only borrows the ram that applications don't currently want. It will not use swap. If applications want more memory, they just take it back from the disk cache. They will not start swapping.
Try again. You can tune this to some extent with /proc/sys/vm/swappiness, but Linux is loath to abandon the buffer cache and will often choose to swap out old pages instead.
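For the curious, vm.swappiness is just a number sitting in procfs (higher values make the kernel more willing to swap out application pages rather than drop cache). A quick sketch for checking it and, with root, lowering it; the change lasts until reboot unless you also persist it via sysctl.conf:

    SWAPPINESS = "/proc/sys/vm/swappiness"

    def get_swappiness():
        with open(SWAPPINESS) as f:
            return int(f.read())

    def set_swappiness(value):
        """Needs root; takes effect immediately but does not survive a reboot."""
        with open(SWAPPINESS, "w") as f:
            f.write(str(value))

    print("current vm.swappiness:", get_swappiness())
    # set_swappiness(10)  # e.g. prefer dropping cache over swapping application pages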
I have learned this the hard way. For example, on a database machine (where more than 80% of the memory is allocated to the DB's buffer pool), try taking a consistent filesystem snapshot of the DB's data directory and then rsyncing it to another machine. The rsync process will read a ton of data, and Linux will dutifully (and needlessly) try to jam it all into the already full buffer cache. Instead of ejecting the current contents of the buffer cache, Linux will madly start swapping out database pages, trying to preserve the buffer cache.
Some versions of rsync support direct I/O on read to avoid this, but they're not mainstream or readily available on Linux. You can also use iflag=direct with dd to get around the problem.
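In a script you can get the same effect as dd's iflag=direct by opening the source with O_DIRECT, which bypasses the page cache entirely. The catch is the alignment rules: the buffer and the read size generally have to be multiples of the device's block size, which is why this sketch reads into a page-aligned anonymous mmap. Linux-only, the file path is made up, and behavior near a non-aligned end-of-file varies by filesystem, so treat it as a starting point rather than a drop-in tool:

    import hashlib
    import mmap
    import os

    CHUNK = 1 << 20  # 1 MiB: a multiple of any sane logical block size

    def read_without_caching(path, consume):
        """Stream a file through O_DIRECT so its pages never enter the page cache."""
        fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, CHUNK)  # anonymous mmap gives a page-aligned buffer
        try:
            while True:
                n = os.readv(fd, [buf])
                if n == 0:
                    break
                consume(buf[:n])
        finally:
            buf.close()
            os.close(fd)

    # Example: checksum a big database snapshot without disturbing the buffer cache.
    h = hashlib.sha256()
    read_without_caching("/var/backups/db-snapshot.tar", h.update)  # illustrative path
    print(h.hexdigest())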