Back then, I thought the conclusion was that there is nothing broken about OS X memory management, and that with every 'fix' you come up with, you will just introduce another degenerate corner case. The same holds for any OS: trade-offs are made that may have a negative effect in some cases, to the benefit of the general case.
I don't recognize any of his symptoms anyway, and my OS X computers get pretty RAM-heavy use: almost always a Linux VM open, Xcode, Safari with ~10 tabs, iTunes with a few thousand songs, etc.
Just to be sure I read through some of the links he provides that are supposed to explain what is going on and why the fix would be of any help, but nowhere do I see any hard facts that demonstrate what is going on. Only that he 'saw in vm_stat that OS X was swapping out used memory for unused memory'. I'd like to see some actual evidence supporting this statement.
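For what it's worth, collecting that kind of evidence is pretty simple: sample vm_stat while reproducing the problem and watch the pageout column. A rough sketch:

$ vm_stat 5    # print paging statistics every 5 seconds; pageouts climbing while free memory remains would support his claim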
I have two Macs, one brand new, one migrated from SL, and under 10.7 Safari was almost unusable on both until I installed SSDs. If that isn't a negative effect in every possible use case, then I don't know what is. I actually guessed that Lion had unofficially dropped support for HDDs (by removing all caches or something).
Given that Apple has fixed none of my reported bugs in 10.7, but I can't reproduce many of them in 10.8, I wonder if it even makes sense to analyze 10.7 anymore - seems it's a done deal for Apple.
It's like the system discards pages of programs just because the app has been inactive for an hour or so. So when I come back and start the same app, the f*cking rotating HD I have sounds like a bird's nest for far too long.
Edit: Disabled the pager and the system now seems much quieter in terms of disk seek noise when I start apps. Feels like a new machine! :-D
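For anyone wondering, the usual way to do that on Lion is something like the following -- at your own risk, and only with plenty of free RAM to spare:

$ sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
$ sudo rm /private/var/vm/swapfile*       # optional, after a reboot: reclaim the old swap files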
From what I've read, the Windows memory manager does the same thing - after a while, it swaps out unused pages, even if plenty of free memory is available.
I wonder what the logic behind this is - did the engineers assume that the speedup from more free memory being available for disk cache is worth the cost of waiting for a swapped-out page when it's actually needed?
In Lion I get the impression they are just swapped out and thrown away / reused for something else even though there is no real pressure on the VM.
And yes, the logic is sound: it's better to use a bit of swap for an infrequent daemon and have 4-5 megs of memory at the ready if needed than to leave it in place all the time. The "speedup" is not a speedup for your use, it's to allow for better memory management. Which is what the VM subsystem is there for. Second-guessing it all the time just makes its job harder.
Swap use when there is free memory isn't a bad thing. This fetish people have with their OS using swap at times seems to border on the ridiculous. My iMac at home has 16g of memory and 400g of swap used right now (8g active, lots of file cache that'll get purged). Most of the swapped-out memory belongs to things like my ruby+pry REPL, a Clojure REPL I haven't touched for 2 days, and other random things I don't use often enough to warrant keeping them in active RAM. Why SHOULDN'T that memory be reclaimed and at the ready for a new program or some other request? It's just going to get paged out then and likely take longer to do so. The only time it's "wrong" is when I start using those processes again, which takes all of 1-2 seconds.
It's a hard problem, and both OS X and Windows choose about the best solution you can get heuristically.
It is, actually. I find such things unacceptable, be they on desktop or server use cases. I put as much or far more RAM in my systems than they will need, and I expect nothing to be swapped until it's actually full. Many other people do as well, which is why the Linux kernel devs finally started fixing the stupidity several years ago. Time for OS X to catch up.
Windows XP does that. It was a common source of grief. I remember it being mentioned as early as 2004. Since Windows Vista the memory manager doesn't have that problem.
YES. I have a MacBook Pro Core i7 from a little while back with an old-style spinning rust drive, and an 11" MacBook Air Core 2 Duo.
For purely CPU-bound things, sure, the Core i7 beats the pants off the Core 2. Same for video games. For day-to-day use, though, switching between Eclipse, Xcode, Chrome, etc., the Air provides a much more uniform experience. At its best it's far slower than the Pro at its best, but at its worst it's much faster and more responsive. I rarely see beachballs on the Air. I used to see them all the time on the Pro (the Pro has been sitting on a shelf for the past eight months as I switched to working exclusively on my Air, partially for this reason).
So my experience is that something may not be broken, but something definitely isn't set up optimally for users with poor disk performance and high memory/CPU performance.
Left some big files on the old HD, and symlinked them. The disk stays idle in the CD bay until I need it, then spins up.
If I were to do it over again I'd just get a larger SSD and leave the optical drive alone.
It also turns out I rather like working on an 11" screen. Keeps me focussed.
I haven't tried ML yet. My MBP is brand new (bought in January 2012), factory configuration (4GB RAM, Lion).
If you use it 'lightly' (that is, only Safari open) it's a breeze. But of course, it's never only that if you want to do any work.
Frankly, 4 GB should be enough! My previous machine (with 3GB - and Linux) would rarely swap; in fact I could keep swap off and still use a Windows 7 VM. But you can only go so far with an aging CPU.
Some of the slowness can be attributed to Safari/Firefox, sure.
But something really does seem to be wrong. Maybe they really neglected people with spinning disks.
(Yes, I considered buying a MacBook Air, but 128GB was not enough for me and the other options were above my budget.)
I see this sentiment a lot, but I disagree with it. What are "iTunes, Xcode, Safari/Chrome, and a mail client" doing now that they weren't doing four years ago? Is it enough to justify their latest versions feeling less responsive than their versions from four years ago?
iTunes: layers of software for dealing with Wi-Fi sync, Ping social network, iTunes Match, and so on
Mail Client: totally agree with you there...
But I still pretty much agree with you overall...
Things change. My old 2006 iMac core 2 duo feels a bit clunky sometimes these days, but it runs a lot of stuff fine and is actually just as good a machine as it ever was.
Lots of things. Xcode was rewritten and does live AST-based syntax completion, background compiles, etc.
Safari/Chrome have several more features --did Chrome even exist 4 years ago?
Expectations are certainly a big part of the perceptual speed equation. But with OS X, don't underestimate the benefits of keeping your disk less than 90% full. With all the caches, iPhone and iPad backups (over 40 GB in my case), Xcode, sleepimages and swapfiles, installers (Adobe!), SyncServices, etc., a 160GB SSD fills up in no time. When things get slow, getting back below 90% works wonders.
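A quick way to check where you stand (the second path is where iTunes keeps those device backups):

$ df -h /
$ du -sh ~/Library/Application\ Support/MobileSync/Backup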
Currently at 145 pages and growing https://discussions.apple.com/thread/3191630?start=2160&...
Yeah, the 2009 iMac started working the day I put Snow Leopard back on it. I then sold it and warned the owner that upgrading to Lion was at his own risk. At one point I replaced my router with an AirPort Extreme (a useful excuse to buy a new toy) to see if that resolved it. I even moved the iMac right NEXT to the router one day.
We have been through this problem again and again and again, in different OSes, at different times and with different things triggering the various problems.
It usually ends with a "neck-beard" saying with enough authority "look, really, they are doing it right even if it seems totally illogical to you and any brokenness is just your configuration, little man". Which is to say "you might not like senseless disk-thrashing but would you rather have your machine randomly freeze when it got out of memory?" And scratching a little more, it comes down to admitting that memory-allocation is a hard problem in its full generality and they don't teach you that in application-programmer-school, and further that the solutions to it that any of these OSes have are tuned-black-magic-split-the-difference-haphazard affairs.
Consider. Either the machine keeps all your information in memory, or it keeps it both on disk and in memory, and either way the machine hasn't a clue what information is important to you, my friend. It's just data to it. It's not like the computer is intelligent or anything. Why do you think they call it "random-access memory"? The problem of dividing up chunks of memory for application programs to use is as hard as dividing up that hard disk for large and small files to live in, EXCEPT that application programs expect to be handed a chunk of contiguous memory when they call malloc. It's a hard problem even with the powerful tools that have evolved for solving it over the years. So when a given memory management scheme works, it isn't really "fixed", it has just been tuned for the corner cases that are shouting loudest on the help lines.
And yes indeed, it is "funny" how just getting the "simple stuff" to work is a hard problem, i.e. you can find lots of simple examples where the standard solution seems to fail terribly.
Angry how your 100 GB memory machine isn't faster? Look under the hood and you'll find Scotty from Star Trek shouting "Captain, I'm allocating your memory as fast as I can Sir..."
I think you missed a key ingredient to this problem, which is heavy disk reads caused by either spotlight or time machine.
Reading a massive amount of data (that you'll probably not use again anytime soon) has the unfortunate side effect of polluting the disk cache with junk. Now if OS X is anything like Linux in this regard, it is loath to toss out old disk cache (in response to all the incoming junk it's being asked to cache) and will instead start swapping to free up more memory for disk cache.
Linux has /proc/sys/vm/swappiness to control how aggressively it will swap stuff out to preserve precious buffer cache, but I don't think OS X has any such mechanism.
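On the Linux side it's a one-liner to inspect or change (60 is the usual default; lower values make the kernel less eager to swap application pages out in favor of cache):

$ cat /proc/sys/vm/swappiness
$ sudo sysctl -w vm.swappiness=10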
Oh boy, I wish I could say the same. Admittedly I don't shut down on a daily basis, but this didn't use to be a problem in SL. FWIW I've got an old tank of a tower with a video card on its last legs: shutting down invariably leads to ~30 minutes of downtime while the card heats up and reconnects whatever needs reconnecting for both monitors to work.
Since Lion, I've noticed frequent hangs and beachballs when doing even menial tasks: Transmit, Terminal, TextMate, a few tabs in Chrome. If Time Machine starts backing up I can forget about smoothly opening Preview or switching to a largish open TextMate file without beachballing. If I want to use a Win7 Parallels VM, I can't do anything else. Even now as I type this I have an Ubuntu VM sitting at the login screen and it causes the machine to shake off the cobwebs between almost every action.
It's certainly not a bad machine--there's 8GB of RAM, tons of disk space, and a good processor. In fact, on SL I would go multiple months without a reboot, under heavy use, with hardly any problems at all.
Then there is the new i5 MBPro. Cool trick you can do: hook up an external monitor via Thunderbolt and watch as the [left side] dock becomes a mangled mess, with icons mispositioned and triggering the wrong apps--it's like playing a game of whack-a-mole trying to open Terminal to kill -KILL the Dock :)
The i5 has also been less than stellar compared to the older MBPro I sold to buy it in terms of performance.
Though I did improve matters drastically by telling mds not to index my (Linux) MP3 server and my Time Machine drive. That made it go from nearly unusable to just frequently annoying.
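In case anyone wants to do the same, it's one command per volume (the volume names below are just examples), or you can drag the volumes into System Preferences > Spotlight > Privacy:

$ sudo mdutil -i off /Volumes/MP3Server
$ sudo mdutil -i off "/Volumes/Time Machine"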
The other crazy thing I've been seeing is that it routinely takes Chrome minutes to shut down -- in fact, pretty much every time I try to reboot my MBP without shutting down Chrome first, the shutdown process times out trying to exit Chrome.
I too have experienced the Chrome issue enough that I don't even try to close it normally anymore. Force quit is the only way I exit Chrome. Thankfully, the restore tabs functionality works well.
That would be the memory manager's job to decide when to do that. Memory could be kept in the non-free state for longer than it is actually needed, but still be marked internally as available when needed.
OTOH, if you have paging, as you say, then something is wrong, true.
But I don't think that the screenshot shows something wrong.
I read through the comment of a few weeks ago, and I did not see anything conclusive or even anything that would outweigh my subjective impressions that something about Lion on my (stock) 2011 Mac mini is causing unnecessary lack of responsiveness.
You might be uncomfortable with subjective impressions and many pieces of weak evidence, but given the popularity of OS X on this site, not all of us want to wait for what you refer to as "hard facts" before engaging in a discussion of the issue.
For example: Let's take a large Chrome session (~150 tabs spread over several windows), an IDE open somewhere, Spotify, Steam and some background apps, and a small Windows VM.
Generally, Activity Monitor would show that Chrome in this instance would be eating 2-3GB of RAM, the VM would be eating 1GB + change of unpageable ('wired') memory, the random utils & spotify & steam & ide du jour & crap would eat another 1.5GB or so, and, long story short, there's very little 'free' memory left (think <100MB) on this 8GB system but a good >1GB of 'Inactive' memory.
Everyone agrees that Inactive memory should be freed when more memory is required by the system and we're out of Free. My own limited testing shows instead that opening more tabs during normal use to the point where the Free memory is consumed instead results in massive delays across the system as memory is paged out to disk, and the Inactive memory doesn't seem to noticeably change in size.
I don't really know what it's doing, but it will consistently make the system very unpleasant to use for a good 30 seconds until the hard disk stops clicking. (It's unpleasant enough that I try to limit my browser session sizes now, and only run the VM when I need to.)
You don't have to fill up every last bit of RAM before the OS starts swapping, as there could be pinned memory pages or processes that want to do larger allocations that are only held for a short time. If you have less than a few hundred MB of unused RAM with all the stuff you mentioned going on, it only takes some kind of scheduled OS background job to push the OS over the line where it decides it needs to swap in/out.
That said, from the comments some people posted here, it does appear that at least in some situations there seems to be something going on in some versions of OS X Lion. If Snow Leopard and the Mountain Lion preview are unaffected with the exact same usage pattern, maybe there actually is some kind of bug in the OS X memory management. But I'd still like to see some kind of evidence, facts or statistics, as I have never experienced anything like it myself, not even on my MacBook when it still had only 2 GB of RAM.
150 tabs? A VM? An IDE?
That's just A LOT. Of course things will go south. What did you expect, a magic machine that can run everything and whistle away happily at 0% load?
edit: Just for the record. I'm not doing some kind of heavy processing work. Most of the time I've got one Chrome window open with some tabs, email client, macvim and iTerm2. It's not like I'm doing some heavy work. I'm not even running a VM.
I thought it was pretty clear that this isn't a "fix", but there's definitely something wrong here.
People say "Get an SSD", well, I've had 2 SSDs and 4 SSD failures (one drive failed three times, the other once, and its replacement is still going).
So, I'm all spinning rust here. 1.5 Terabytes of rust in my Macbook Pro and the only time I have a beach ball is trying to launch Team Fortress (but I blame valve for that).
I have massive, MASSIVE Final Cut and Aperture libraries. I leave the machine up for weeks. I leave Time Machine running all the time- there isn't even a slowdown when time machine is backing up.
My hard drives are encrypted with full disk encryption, which means not only am I running spinning rust but it's encrypted rust, which means every read has to be decrypted.
No slowdowns or beach balls. Sure, the occasional poorly written program will have a beach ball, and rendering video takes a while, but that's to be expected.
Yet people constantly say that Lion sucks? Really? And they have these more beefy machines with more RAM?
Something doesn't add up here.
Came with Leopard, upgraded to Snow Leopard (not a fresh install) and then an App Store upgrade to Lion.
The machine is snappy as anything, unlike my coworker's brand new quad-core i7 w/ 256GB SSD, which runs Lion like a dog. No idea why, but my humble old MacBook is faster in every way than his shiny new Mac mini.
On my machine with 8 GB of RAM and an uptime of 4 days I have page-outs of only 2 megabytes, and page-ins of 2 gigabytes.
I subscribe to your blog! * starstruck
What he's saying is happening is that the OS is doing this too aggressively, and that it ends up swapping out data that's actually in use in favor of disk data which doesn't really need to be cached, which hurts performance.
By disabling the pager, you make it impossible to move application data to disk at all. This limits the amount of RAM available for disk caching, but if the OS really is caching too aggressively, that will ensure that it can never page out useful application data by mistake.
My experience mirrors yours, in that it really doesn't seem to be a problem on the computers I've used, but that's what he says he's seeing.
Glad you like the blog, but I'm just a regular guy. I put my pants on with a high speed pants installation robot just like everybody else.
That's pretty much the GC algorithm. There's nothing wrong with that mention.
Unifying paging and disk cache memory seems like a good idea, but it actually isn't. It means that if you do a lot of I/O, resident pages (i.e. your programs) can actually get pushed out of memory to free up RAM for the disk cache. This degenerates pretty badly in scenarios like using VMs, since you're also using large sections of mmap'd memory.
This doesn't happen on NT or Linux, because disk cache can only be turned into program memory (i.e. making the disk cache smaller), not the other way around; the policy is "disk cache gets whatever's left over, the memory manager has priority".
Unfortunately, the only thing you can really do about it, is have a machine with a huge amount of RAM, which will kind of help.
No, NT and Linux also have a unified VM. What BSD had pre-UVM was pretty antiquated.
I think you could probably hack something together with DYLD_INSERT_LIBRARIES (OS X's LD_PRELOAD) that would hook the open system call and fcntl F_NOCACHE on the file descriptor before handing it back to the application.
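Mechanically that would look something like the lines below; nocache.c is a hypothetical interposing library you'd still have to write (wrapping open() and calling fcntl(fd, F_NOCACHE, 1) before returning the descriptor), and du is just a stand-in for whatever I/O-heavy process you want to tame:

$ clang -dynamiclib -o nocache.dylib nocache.c
$ DYLD_FORCE_FLAT_NAMESPACE=1 DYLD_INSERT_LIBRARIES=$PWD/nocache.dylib du -sh /some/big/tree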
That they are is just speculation from someone who's taken his experience and projected it onto everybody.
Since he's a person who mucks around with random system settings (like the one in his article) there's no telling what previous damage he's done to cause this problem.
EDIT: I can't get either spotlight or time machine to show any cache polluting behavior at all, at least not in the way that I run them. I used "mdutil -E /" to force a re-index of my disk, and I kicked off an initial time machine backup on a secondary drive I had lying around. I see both backupd and mdworker doing a lot of disk reads using iotop, but top shows my inactive memory not really changing as drastically as I'd expect, like if I were to cat a giant file to /dev/null.
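For anyone who wants to repeat the experiment, the whole thing boils down to something like this (substitute any large file you have lying around for the control read):

$ sudo mdutil -E /                          # erase and rebuild the Spotlight index on the boot volume
$ top -l 0 -s 5 | grep PhysMem              # one memory summary line every 5 seconds while it churns
$ cat /path/to/some/huge/file > /dev/null   # control case: a plain sequential read that should grow the cache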
I hope Apple engineers are looking at this thread.
I use a 1.5TB external drive formatted as exFAT to minimize cross-platform headaches, and whenever the drive is marked dirty (improper shutdown, eject, etc.), OS X will run fsck_exfat on it before I can use it.
fsck_exfat isn't a huge deal -- or wouldn't be, if it didn't have a nasty tendency to leak RAM... the moment you plug in, fsck_exfat's footprint climbs up and up and up, never stopping! Pretty soon it's eaten 8GB out of my 8GB of RAM and poor ol' lappy is unusable.
I can say with authority what happens when you run out of physical RAM in OS X: it hard locks. Nothing works -- no keyboard, no mouse, nothing.
So, if you plug in your large, dirty (you dirty drive, you!) exFAT-formatted external drive with dynamic_paging switched off and let fsck_exfat do its thing, your laptop freezes! Leaving the drive dirty, only to be re-scanned on boot-up... freezing the laptop, leaving the drive dirty, only to be re-scanned on boot-up...
EDIT: this is with Snow Leopard...
chkdsk on Windows manages to clean exFAT volumes just fine without using up 8GB+ of memory.
Like the author, I was shocked at how accustomed I was to waiting for an app to become responsive again. I was trained to wait for the OS to do its business before I could do my work. Now things happen as quickly as I can think to do them; this is how computing should be.
In the meantime, I'm also hedging my bets, and I've gotten very comfortable with Windows 7 for productivity (ok, it's really for gaming) and Ubuntu Linux for web/LAN serving.
It would seem there's a simple solution -- another number on the system monitor displaying how much memory is available for use if needed.
So if my memory is "full" of a bunch of just-in-case stuff, I'll gladly swap it out for real data that a real running process is using. But if it's "full" of data in use by running processes, then I want to think twice about opening a new application. And I want my memory manager to tell me the difference between those two "full" cases.
             total       used       free     shared    buffers     cached
Mem:       2042520    1816496     226024          0     294344     486908
-/+ buffers/cache:    1035244    1007276
Swap:      4194300       8172    4186128
Users of desktop systems clearly don't like this behavior, in fact they'll do crazy things like purging disk cache via cron every minute to try to stop this from happening.
Process A (let's call it Safari) allocated 600MB of memory. Out of this 600MB, it hasn't used 400MB for quite a while (because, for example, it contains data for tabs you haven't looked at for hours). Now, I'm not sure how Darwin does this, but I know for a fact that Windows NT kernels will try to write the contents of in-memory pages to the disk at the first good opportunity; this way they save time when the pages in question really do get paged out to disk. I assume there's a similar mechanism in Darwin. So it's very likely that the 400MB in question is already on the disk. Now the user starts process B (let's call it Final Cut Pro) that reads and writes to the disk very heavily, and typically the same things. It's not an unreasonable thing for the kernel to just drop Safari's 400MB from physical memory and use it for disk caching Final Cut Pro. Throw a few mmaps into the picture and suddenly it's not obvious at all which pages should be in memory and which pages should be on disk for the best user experience.
The problem with this line of reasoning is that a large amount of cache will often not give you much more benefit than a small amount. Indeed, that's the nature of caching: you get most of the benefit from the first bit of cache, but the level of added benefit drops dramatically with more cache.
What if using 400MB of cache for FCP only gave 5% of a net performance advantage over using 40MB of cache? Would it still be worth it to take away that extra 360MB from Safari?
And there's the issue of human psychology: people deal much more easily with a little slowdown spread evenly than with a full-on stop for a short amount of time (even if the full-on stop scenario gives you greater average performance). I'd prefer Aperture run 5% more slowly than it might otherwise, if that meant I never saw a beachball when running Safari.
I tried turning off spotlight (which was taking a very long time to complete) but it did not help.
For me, the problem turned out to be a failing hard drive. After replacing my system hard drive, things returned to normal speed.
I'm just posting this in case it might help someone else.
My friend had installed a new larger drive that was causing the problem, whereas there were no beach balls while booting off the original drive via USB.
She had 4 gigs of RAM which we recently upped to 8gigs which reduced the severity of the problem.
I really, really hope this is something that gets fixed in Mountain Lion. Tasks that should take 20 seconds take 10 minutes or more.
It's good to know she's not crazy.
I won't run any Adobe software after seeing the abuse it did to my machine.
Apple basically gets a free pass if you're running Adobe. This is a company that ships crap.
Also, you're probably starving it of sufficient memory. If Lightroom is up, you're probably out of memory, even with 8GB.
I'd recommend getting rid of Lightroom and going to Aperture, or given aperture is a bit behind the curve, upgrading to 16GB of RAM and seeing what adobe-installed processes and KEXTS you can get rid of.
Upgrading from 4 to 8 gigs last week helped a lot. I'd go to 16 except her MBP won't support it.
I'd love to get her on an SSD but she's on a 1TB drive now and it would be hard for her to fit onto a 512GB SSD (especially now that she's on the D800, with huge video files and 72MB raw photo files).
It's frustrating that it will work fine some of the time and not others, implying that the problem could be fixed with better memory management. I do hope that a serious Adobe competitor arises to force Adobe to make its apps faster and more resource-efficient.
Forget the advice. Lightroom is faster, as noted in every review of both programs. Try Aperture yourself with the demo to find out.
Working with 10+ megapixel images is always going to be slow, and with camera advances it will get worse every time your wife gets a higher-resolution camera -- so comparing it with how it used to be when you had 6MP files is not exactly correct.
More memory and an SSD will definitely help.
Yes, millions of professional designers using Adobe software are idiots. You are just making BS claims with no support whatsoever. Try opening a huge image in Photoshop and any other editor and see which behaves better and faster.
The only "crap" stuff Adobe does is mostly whatever it acquired from Macromedia.
>I'd recommend getting rid of Lightroom and going to Aperture, or given aperture is a bit behind the curve, upgrading to 16GB of RAM and seeing what adobe-installed processes and KEXTS you can get rid of.
And I'd recommend not listening to BS anecdotal suggestions on the internets. Read a couple of professionally done reviews and benchmarks. All state that Lightroom is faster and more efficient than Aperture. Aperture got a little better in the last version, but it's still no match for Lightroom.
(I'm not bashing Apple, I like both. Things are what they are though, and yes, I've tried both of them.)
Thing is: working with freaking huge images, like hundreds of 16 megapixel RAW files, will be slow, whatever you use.
I've set up a cron job to purge frequently; keeps things humming.
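For anyone who wants to copy it, the root crontab entry is about as simple as it gets (every five minutes is just an example interval, and check where purge lives on your system with `which purge`):

*/5 * * * * /usr/bin/purge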
Caches exist for a reason. Deleting them willy-nilly tends to be a bad idea.
The `purge` command has a pretty short man page:
> purge -- force disk cache to be purged (flushed and emptied)
> Purge can be used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc.
[This seems to talk about it more: http://workstuff.tumblr.com/post/19036310553/two-things-that...
On my 10.5 laptop it didn't seem to dramatically decrease the memory marked "inactive".]
I have regularly seen 1-3G of RAM get freed up by a 'purge' (give it ~10 seconds to finish).
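Easy to see for yourself (the exact numbers will obviously vary):

$ vm_stat | grep 'Pages free'
$ purge
$ sleep 10 && vm_stat | grep 'Pages free'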
It's called the Unified Buffer Cache (UBC).
The class of problems described in the original post are not the sort of thing you 'just find' by glancing at kernel source code. The problems described sound like they could be an issue of poorly tuned heuristics/thresholds, or necessitate some extra machinery inside the OS X memory manager that isn't there currently. It's not like you can send Apple a pull request on github.
everyone is all positive about "open source" until they have to dive into a few millions lines of complicated system-level C code and then...
It's a facile point.
> everyone is all positive about "open source" until they have to dive into a few millions lines of complicated system-level C code and then...
Does anyone doubt that 99% of open source users never read a line of the source code which they are using? The point is, they have the opportunity to, and more importantly, the 1% (or whatever) with the skills and resources are able to actually do something about it.
If you don't have the ability to change or examine the source code, then there is little incentive to do any runtime analysis which might illuminate the problem.
become immensely employable.
Bad blocks in the disk, causing the system to beachball frequently due to disk I/O failures when swapping out to disk.
The solution for me was to back up, reformat the disk and zero it out (causing bad sectors to get remapped), and then restore.
Problem resolved. Not that I don't still get inexplicable pinwheels, but nothing like before.
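If you'd rather do it from the command line while booted from another disk, it's something along these lines (the disk identifier is just an example, and it erases the whole disk):

$ diskutil secureErase 0 /dev/disk1       # single-pass zero fill; bad sectors get remapped as the zeros are written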
I opened all of my apps, expecting it to crash miserably: instead, the system started paging as it should, stayed responsive (though slower), and promptly returned to normal once it regained memory.
I don't know what's going on, but I can definitely say that this is how I want my computer to work.
"Buy all your developers SSDs. It makes them more productive."
This typically affects me in low-memory situations, such as less than 100MB of free memory. The effect is most pronounced when switching between browser tabs, which causes a lot of disk usage... pulling all of that data in and out of non-RAM cache.
If you look at Windows 7 memory consumption with the same set of software you use in OS X, you'll notice memory usage is 1/2 or 1/3 on Windows compared to OS X. Maybe someone knows why that is?
I have an MBP with 4GB of RAM and leave programs open all the time. After a few days, it feels very sluggish.
Aside from doubling my memory or changing my habits (i.e. shutting down every night), how do I fix this?
Whether this is unintentional, part of a calculated tradeoff, or a cynical business/tactical decision is another thing.
However, it's still early days. It might just be a "washed car effect."
(Mac Pro 1,1 / 7GB RAM / WD Caviar Black)
Yet it still swaps to disk ALL THE TIME and a new Terminal.app window can take up to 5 seconds to open.
I really don't give a shit how it's not "technically" broken - that's broken from an experience point of view. And I haven't re-installed the OS (this was an App Store upgrade from Snow Leopard) because that's a major pain in the ass as this is an actual workstation used to do actual work.
I can't believe this is actually advice, either - that's what Windows users used to say in the 90s. Anyway, I guess I'm just ranting. OS X is wonderful except for the fact that it sucks at managing memory to keep a system snappy.
That's not swapping. That delay is /usr/bin/login searching the system logs so that it can display the date and time of your last login.
Create a .hushlogin file in your home directory to prevent that.
1. The default use of /usr/libexec/path_helper to manage your $PATH.
2. An accumulation of log files in /var/log/asl.
For (1), I just edit /etc/profile and disable path_helper altogether. I set the PATH manually. (This also allows me to put /usr/local/bin before /usr/bin, which is my preference. I've never understood Apple's default settings for $PATH. They put /usr/local/bin later - which defeats the whole point of installing, say, a newer vim in there.) For (2), a cron or launchd job can take care of it.
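A minimal sketch of that /etc/profile edit -- the commented-out block is roughly what Apple ships, and the PATH line is just my own preference:

# if [ -x /usr/libexec/path_helper ]; then
#     eval `/usr/libexec/path_helper -s`
# fi
export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"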
Really? Are you sure path_helper slows things down?
I'm sure that it did, but not sure that it does. The code you ran isn't quite what /etc/profile does. Here's a run of that on an older machine where I work (see below on versions):
$ time eval `/usr/libexec/path_helper -s`
$ time eval `/usr/libexec/path_helper -s`
0) `login -pf`
1) quietlog = 0
2) if ("-q" in argv) quietlog = 1
3) if (!quietlog) getlastlogxbyname(&lastlog)
4) if (!quietlog) quietlog = access(".hushlogin") == 0
5) dolastlog(quietlog) ->
6) if (!quietlog) printf(lastlog)
You can see from this that the "searching the system logs" (which, to be clear, is going to be really really fast: /var/run/utmpx is a small file with fixed length fields) happens in step #3, before .hushlogin is checked in step #4.
If you wish to verify, you can read the code at the following URL. Note that __APPLE__ and USE_PAM are defined for the OS X distribution of this code, while LOGIN_CAP is not.
Look at the code for getlastlogxbyname(). It does an ASL query for last login, and that's the source of the delay.
(edit: I have gone ahead and verified your statements regarding getlastlogxbyname now being based on ASL. Using that knowledge, and based on st3fan's comments about the output of dtrace, I then used dtruss to verify my own assertion regarding the order of events. The result: .hushlogin in fact only affects the output of "last login"; it does not keep login from getting that information in the first place with ASL. To keep it from doing so you must pass -q, something Terminal does not do.)
The correct way to bypass the ASL query is to set Terminal to open shells with /bin/bash (or your shell of choice) instead of the default login shell. Terminal will still use /usr/bin/login to launch the shell, but it passes the -q switch to prevent the ASL query.
When I dug into the source code a couple of months ago, I inadvertently made both changes (Terminal settings and .hushlogin). Clearly it's the Terminal settings that solved the problem and not .hushlogin. Thanks for clearing it up.
(edit: I have gone ahead and checked: thought_alarm is correct, in that getlastlogxbyname is now using ASL instead of utmpx; however, I have also verified my sequencing assertion with dtrace: .hushlogin has no effect on the usage of ASL, but manually passing -q to login does: it thereby cannot be the source of a .hushlogin-mediated delay.)
However, assuming that is the case for some people, we have to look elsewhere than the last login lookup. There are only a few other usages of quietlog: motd (open file, read it), mail (check environment, stat file), and pam_silent.
The first two are not going to cause any kind of performance issue, so we have to look at pam_silent. This variable is particularly interesting, as it is only set to 0 if -q is not passed (and it is not) and there is no .hushlogin (it is not directly controlled by quietlog).
If it is not 0, then it is left at a default value, which is PAM_SILENT, and is passed to almost every single PAM function. It could very well be that there is some crazy-slow logic in PAM that is activated if you do not set PAM_SILENT.
Given this, someone experiencing this issue might look through the code for PAM to see if anything looks juicy (and this is something that will best be done by someone with this problem, as it could be that they have some shared trait, such as "is using LDAP authentication").
(edit: FWIW, I looked through OpenPAM, and I am not certain I see any actual checks against PAM_SILENT at all; the only mentions of it are for parameter verification: the library makes certain you don't pass unknown flag bits to anything.)
kore:~$ ls -l .hushlogin
-rw-r--r-- 1 jay staff 0 Aug 15 2002 .hushlogin
Am I missing something? `w` on my Linux system takes well below one second:
[burgerbrain@eeepc] ~ % time w
14:11:18 up 19:41, 6 users, load average: 0.27, 0.10, 0.14
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
w 0.01s user 0.05s system 61% cpu 0.096 total
Here the majority of files being opened are in /var/log or Homebrew related.
Also interesting ... creating a .hushlogin did not change much. It still opens about 50 files in /var/log/asl/
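If anyone else wants to watch it happen, opensnoop (a dtrace script that ships with OS X) makes it easy; start it in one window and then open a new Terminal window:

$ sudo opensnoop -n login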
Anyway, instead of running /usr/bin/login, I just use /bin/zsh as my Terminal.app startup command which is much faster.
However, every time I access the file system, even for some tab autocompletion, it takes a few seconds. Even cd'ing into some directory can sometimes take a few seconds. ("Sometimes" ~= it's been more than an hour or so since I last accessed that dir.)
Edit: Maybe it's Time Machine or Spotlight or so which destroys the effectiveness of the VFS cache?
Normally Unixes (Linux included) use a pretty efficient binary file called wtmp for that; I'm surprised if OS X doesn't. The last disk block of that file would contain the last login with overwhelming probability.
There would have to be a lot of seeks, even on a slow laptop rotating HD, to get a 5-second delay: with a 15 ms seek time you get about 333 seeks in 5 seconds.
eklitzke@gnut:~ $ time sudo head -c 1073741824 /dev/sda > /dev/null
Even if the login command does need to sequentially read through logs to find the last login time (and I'm skeptical of that, because that would be a stupid way to implement login), I don't see how that would explain multiple seconds of waiting.
This has never been hard to do on OS X since 10.0 because they followed the Unix convention of separating user data from system files. It is, of course, rarely necessary unless you've used superuser access to seriously muck with things under /System.
I've done this repeatedly over the years when dealing with beta releases & system migrations and it's never taken much longer than the time needed to copy the files.
You should run some dtrace magic to find out what 'it' is. Might be the OS, might be a badly behaving application. Who knows.
I find it too easy to blame the OS for all of this. One poorly written app can cause a lot of performance damage.
Lion has probably been optimized for SSD since Apple is quickly getting rid of spinning disks in their entire line up.
Is it actively paging to disk at times when there is plenty of free RAM? A common complaint on Linux is "I've closed a lot of stuff but it is still using swap" because even if the pages are read back into RAM when next used they are kept in the swap area in case they need to go out again (that way they don't need to be written unless changed, saving some I/O).
Under Linux you can see how much is found in both RAM and on disk with the "SwapCached" entry in /proc/meminfo - it won't stop counting those pages as present in the swap areas until either it runs out of never-used swap space and needs to overwrite them to page out other pages, or the page is changed in memory (at which point the copy on disk is stale so it can't be reused without being updated anyway).
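Quick way to check on a Linux box (a nonzero SwapCached alongside otherwise idle swap is exactly the "still using swap" case described above):

$ grep -E 'SwapCached|SwapTotal|SwapFree' /proc/meminfo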
> and a new Terminal.app window can take up to 5 seconds to open.
Have you monitored system activity at such times to see where the delay is? While it could be due to unnecessary disk I/O it could also be elsewhere such as delayed DNS lookups if anything in your profile scripts does such a thing and there is an issue with your connectivity or DNS configuration.
(I'm not an OS X user and never have been so sorry if these thoughts are irrelevant - but I'm guessing memory management in OS X is similar enough for knowledge of how Linux plays the game not to be completely useless)
At least Windows, having gone through this particular growing pain, is nowadays fairly painless to reinstall. Did OS X ever improve that aspect of the product?
2) The OS reinstall path is identical to the OS upgrade path, making it very well tested. This has been the case since (IIRC) Snow Leopard.
3) The latest few generations of hardware can even (re)install the OS over the internet, meaning you don't even need to carry around media to reinstall. (Assuming you're on a fast connection or are willing to wait.)
It's entirely painless to reinstall OS X.
If you want to replace various bits of the system, reinstall the OS.
These are different scenarios with different use-cases, and I'd argue it's a much worse idea to conflate the behaviours, as in certain other OSes, than OS X's fault for properly treating them as separate operations.
It's a good thing.