
Two simple test cases:

http://pastie.org/402608 (read)

http://pastie.org/402607 (mmap)

Each opens a 10M file and accesses aligned pages. Depending on how many bytes in the page you ask the mmap() case to touch, mmap ranges from 10x faster to 10x slower for me. Reading straight through without seeking, it's no contest for me; read() wins. But you knew that.
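(Since pasties expire, here's a rough sketch of the shape of the two cases, shown side by side as two functions rather than as the two separate programs, and without the alarm() timing; the file name and the default number of bytes touched per page are placeholders, not the actual pastie code.)

    /* read case: access the first `touch` bytes of each aligned page via pread() */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FILE_SIZE (10 * 1024 * 1024)

    static long touch_with_read(int fd, size_t pagesz, size_t touch)
    {
        char *buf = malloc(pagesz);
        long sum = 0;

        for (off_t off = 0; off + (off_t)pagesz <= FILE_SIZE; off += pagesz) {
            if (pread(fd, buf, touch, off) != (ssize_t)touch) {
                perror("pread");
                exit(1);
            }
            for (size_t i = 0; i < touch; i++)
                sum += buf[i];
        }
        free(buf);
        return sum;
    }

    /* mmap case: touch the same bytes directly through the mapping */
    static long touch_with_mmap(int fd, size_t pagesz, size_t touch)
    {
        char *map = mmap(NULL, FILE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
        long sum = 0;

        if (map == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        for (off_t off = 0; off + (off_t)pagesz <= FILE_SIZE; off += pagesz)
            for (size_t i = 0; i < touch; i++)
                sum += map[off + i];
        munmap(map, FILE_SIZE);
        return sum;
    }

    int main(int argc, char **argv)
    {
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
        size_t touch = argc > 2 ? (size_t)atoi(argv[2]) : 64; /* bytes touched per page */
        int fd = open(argc > 1 ? argv[1] : "testfile", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (touch > pagesz)
            touch = pagesz;
        printf("read() sum: %ld\n", touch_with_read(fd, pagesz, touch));
        printf("mmap() sum: %ld\n", touch_with_mmap(fd, pagesz, touch));
        close(fd);
        return 0;
    }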




Thanks for encouraging me to look at this closer. I was testing with this: http://pastie.org/402890

I was having trouble comparing results, so I combined your two into one, tried to make the cases more parallel, took out the alarm() stuff, and just ran it under oprofile.

My conclusion is that for cases like this, where the file is small enough to remain in cache, there really isn't any meaningful difference between the performance of read() and mmap(). I didn't find any of the 10x differences you found; instead, the mmap() version ranged from about twice as fast for small chunks to roughly equal for full pages.

You might argue that I'm cheating a little bit, since I'm using memcpy() to extract data from the mmap()ed region. When I don't do this, the read() version often comes out up to 10% faster. But I'm doing it so that the code in the loop is more similar between the two cases; I presume that access through a local buf[] can be optimized better.
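(Again as a sketch rather than the actual pastie: the consume() helper, default chunk size, and file name here are placeholders. The point is just that both passes hand the compiler the identical loop over a local buf[], with the mmap() pass paying an explicit memcpy() out of the mapping.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FILE_SIZE (10 * 1024 * 1024)

    /* Both passes feed the same consumer, so the hot loop compiles identically. */
    static long consume(const char *buf, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
        return sum;
    }

    static long read_pass(int fd, char *buf, size_t pagesz, size_t chunk)
    {
        long sum = 0;
        for (off_t off = 0; off + (off_t)pagesz <= FILE_SIZE; off += pagesz) {
            if (pread(fd, buf, chunk, off) != (ssize_t)chunk) { perror("pread"); exit(1); }
            sum += consume(buf, chunk);
        }
        return sum;
    }

    static long mmap_pass(const char *map, char *buf, size_t pagesz, size_t chunk)
    {
        long sum = 0;
        for (off_t off = 0; off + (off_t)pagesz <= FILE_SIZE; off += pagesz) {
            memcpy(buf, map + off, chunk);  /* the "cheat": copy out of the mapping */
            sum += consume(buf, chunk);
        }
        return sum;
    }

    int main(int argc, char **argv)
    {
        size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
        size_t chunk = argc > 2 ? (size_t)atoi(argv[2]) : 256; /* bytes per page */
        int fd = open(argc > 1 ? argv[1] : "testfile", O_RDONLY);
        char *buf = malloc(pagesz);
        char *map;

        if (fd < 0) { perror("open"); return 1; }
        if (chunk > pagesz) chunk = pagesz;
        map = mmap(NULL, FILE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }
        printf("read() sum: %ld\n", read_pass(fd, buf, pagesz, chunk));
        printf("mmap() sum: %ld\n", mmap_pass(map, buf, pagesz, chunk));
        munmap(map, FILE_SIZE);
        free(buf);
        close(fd);
        return 0;
    }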

I'd be interested to know how you constructed the case where read() was 10x faster than mmap(). That doesn't fit my mental model, and if it holds up, I'd like to understand what causes it. For example, even when I switch to linear access, I only see read() coming out about 5% faster.

-----



