I came across this link, which helped me understand the process a bit better: http://kolbusa.livejournal.com/108622.html. So yes, the main saving seems to be that the page table is created in a tight loop rather than ad hoc. Given the number of pages in the scan, it's still going to be a TLB miss for each page, but each miss is just a page-table lookup rather than a fault into the kernel (no context switch).
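
To make the difference concrete, here is a minimal sketch (my own, not from the linked post) that maps the same file twice, once with ordinary demand faulting and once with MAP_POPULATE, and reads one byte per page. The file path is a placeholder:

    /*
     * Sketch: demand faulting vs. MAP_POPULATE.
     * With plain mmap(), every first touch of a page takes a fault;
     * with MAP_POPULATE the kernel fills the page table in one pass
     * at mmap() time. "testfile" is a placeholder path.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    static double touch_pages(const char *path, int extra_flags)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); exit(1); }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); exit(1); }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        char *p = mmap(NULL, st.st_size, PROT_READ,
                       MAP_PRIVATE | extra_flags, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); exit(1); }

        /* Read one byte per page so every page is actually referenced. */
        volatile char sum = 0;
        long page = sysconf(_SC_PAGESIZE);
        for (off_t off = 0; off < st.st_size; off += page)
            sum += p[off];

        clock_gettime(CLOCK_MONOTONIC, &t1);
        munmap(p, st.st_size);
        close(fd);
        (void)sum;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "testfile";  /* placeholder */
        printf("demand faulting : %.3f s\n", touch_pages(path, 0));
        printf("MAP_POPULATE    : %.3f s\n", touch_pages(path, MAP_POPULATE));
        return 0;
    }

Run it twice so both passes hit the page cache; any remaining gap should then come from fault handling rather than disk I/O.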

In my testing, the size of the file (and the mapping) will affect the page size

I'm doubtful of this, although it might depend on how you have "transparent huge pages" configured. But even then, I don't think Linux currently supports huge pages for file-backed memory. I think something else might be happening that causes the difference you see. Maybe it's just that the active set of TLB entries no longer fits in the first-level TLB?
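
One quick way to check whether huge pages could even be in play (a sketch of my own, not something from the thread): read the THP policy from sysfs and try an explicit huge-page mapping, which on Linux works for anonymous memory (or hugetlbfs), not for an ordinary file-backed mmap:

    /*
     * Sketch: inspect the transparent huge page policy and attempt an
     * explicit 2 MiB anonymous huge-page mapping. The mmap call fails
     * unless huge pages have been reserved by the administrator.
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Current THP policy, e.g. "[always] madvise never". */
        FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "r");
        if (f) {
            char buf[128];
            if (fgets(buf, sizeof buf, f))
                printf("THP policy: %s", buf);
            fclose(f);
        }

        /* Explicit huge pages are only available for anonymous mappings. */
        size_t len = 2 * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED)
            perror("mmap(MAP_HUGETLB)");
        else
            munmap(p, len);
        return 0;
    }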

And obviously, MAP_POPULATE is bad if physical memory is getting exhausted.

I'm confused by this, but it does appear to be the case. It seems strange to me that the MAP_POPULATE|MAP_NONBLOCK combination is no longer possible. I was slow to realize this may be closely related to Linus's recent post: https://plus.google.com/+LinusTorvalds/posts/YDKRFDwHwr6
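
If what you were after with MAP_POPULATE|MAP_NONBLOCK was non-blocking readahead, the closest thing I'm aware of today is madvise(MADV_WILLNEED), which hints the kernel to start paging the range in and returns immediately. A minimal sketch (my own assumption about the intent; "data.bin" is a placeholder):

    /*
     * Sketch: asynchronous readahead on a mapping via MADV_WILLNEED
     * instead of the no-longer-usable MAP_POPULATE|MAP_NONBLOCK combo.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);          /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Hint the kernel to start readahead; does not wait for completion. */
        if (madvise(p, st.st_size, MADV_WILLNEED) < 0)
            perror("madvise");

        /* ... later accesses are more likely to hit already-resident pages ... */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }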



