
Generally, warming up caches (L1i, L1d, L2, ...) and other hardware state (e.g. the out-of-order engine) so that you get a better picture of how your program will run in the steady state, rather than how it performs on the first iteration with 'cold' caches. The justification is that in most (but not all) software, you should expect a reasonable amount of spatial/temporal locality, and thus for your caches to be useful, so the 'average' performance of your program is best captured by how it runs when 'warm'.

Warm -> populated with the code/data your program uses. Cold -> empty, or filled with stuff your program doesn't use.
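
To make that concrete, here's a minimal sketch of the usual pattern, assuming a hypothetical work() function standing in for the code under test and arbitrary iteration counts: run some untimed warmup iterations so the caches and other hardware state get populated, then only time the steady-state iterations.

    // Minimal sketch of a warmed-up microbenchmark. work() and the iteration
    // counts are placeholders, not anything specific from the discussion above.
    #include <chrono>
    #include <cstdio>

    volatile int sink;  // keeps the compiler from optimizing work() away

    void work() {
        int acc = 0;
        for (int i = 0; i < 1000; ++i)
            acc += i * i;
        sink = acc;
    }

    int main() {
        constexpr int kWarmup = 100;    // untimed: populates I/D caches, predictors, etc.
        constexpr int kTimed  = 10000;  // iterations actually measured

        for (int i = 0; i < kWarmup; ++i)
            work();  // the 'cold' -> 'warm' transition happens here, off the clock

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kTimed; ++i)
            work();
        auto stop = std::chrono::steady_clock::now();

        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
        std::printf("steady-state: %.2f ns/iteration\n", static_cast<double>(ns) / kTimed);
        return 0;
    }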

I do think exception handling is one case where I would question the validity of this approach: if you catch exceptions rarely, it's quite reasonable to expect that the relevant cache lines will have been evicted (e.g. by conflict misses) from your L1i at some point in the interim, and therefore that the L1i (or even L2!) fetch cost is reasonable to include in the benchmark. If, however, you expect your exceptions to fire relatively often, then it again becomes reasonable to warm up the caches before measuring, so it really comes down to the characteristics of your workload.
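
For what it's worth, a quick (hypothetical) way to see the gap is to time the very first throw/catch separately from later ones; the 'cold' number includes pulling the unwinder and handler code in, while the 'warm' number is roughly what a warmed-up benchmark would report. Function names and iteration counts below are made up for illustration.

    // Hypothetical sketch: compare a 'cold' first throw/catch against the
    // average of many subsequent ('warm') ones.
    #include <chrono>
    #include <cstdio>
    #include <stdexcept>

    void boom() { throw std::runtime_error("boom"); }

    long long time_one_throw() {
        auto start = std::chrono::steady_clock::now();
        try {
            boom();
        } catch (const std::exception&) {
            // Swallow the exception: we only care about the cost of getting here.
        }
        auto stop = std::chrono::steady_clock::now();
        return static_cast<long long>(
            std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count());
    }

    int main() {
        std::printf("cold throw/catch: %lld ns\n", time_one_throw());

        constexpr int kIters = 1000;
        long long total = 0;
        for (int i = 0; i < kIters; ++i)
            total += time_one_throw();
        std::printf("warm throw/catch: %lld ns (average over %d)\n", total / kIters, kIters);
        return 0;
    }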


