
Ask HN: Why is something always faster the second time you run it - lucb1e
Try any website, measure how long the HTML takes to load (in Chrome, press F12 and go to the Network tab; in Firefox use Ctrl+Shift+K). Now press F5 and look at the time again. It likely got something like 4 times faster.

This is easily explained because the network needs to find a route, perform a DNS lookup, all that stuff. What my question is about, though: why does the same thing happen on a local server, or even localhost?

Write any PHP script, run it and measure the time in your browser, then run and measure again. It'll go much faster, even though memcache or anything like that is not enabled. If you make PHP (or whatever language) generate a random number to output, you'll see that you get the real output every time, so it's not cached.

The effect resets itself after a while (I'm not sure how long; I guess a few minutes). Is this something the CPU optimizes in its caches? Or is something stored in RAM by either the webserver or the OS? Or does the file handle remain open?

The effect works on every computer and system I've tried, and I'm really wondering what it could be. Perhaps someone knows?
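One of the layers that can make a second run faster, independent of any webserver, is the OS page cache. This is a minimal Python sketch (file path and size are arbitrary) timing two reads of the same file; the second read is usually served from RAM rather than disk:

```python
import os
import tempfile
import time

# Minimal sketch: time two reads of the same file. The first read may
# have to touch the disk; the second is usually served from the OS page
# cache in RAM. (Because we just wrote the file, it may already be
# partly cached, so the gap is larger on a genuinely cold cache.)
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MiB of random data

def timed_read(p):
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

cold, size_a = timed_read(path)
warm, size_b = timed_read(path)
print(f"first read:  {cold * 1000:.2f} ms")
print(f"second read: {warm * 1000:.2f} ms")
os.remove(path)
```

The same mechanism applies to the PHP script itself: the interpreter binary, its libraries, and your script file all end up in the page cache after the first request, which the kernel evicts again when the memory is needed elsewhere.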
======
jameswyse
There are optimisations for repeated execution of the same commands at probably
every level, from the interpreter down to the hardware on the server side, and
on the client side the browser will cache and preload things smartly in the
background.

Usually pressing reload (F5) only downloads modified content though; to bypass
the cache you have to do a force reload (Ctrl+F5, I think!)

~~~
lucb1e
Actually, that's my point: There is no client-side or server-side caching
going on as far as I can see. As I said, the second-time-faster effect even
happens when you make it give a random output.

------
godisdad
Caching and memoization. Caching and memoization everywhere.

~~~
lucb1e
Yeah I guess something like that... But it even works with random output, so
what does it cache/memoize?

Could you give an example?
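One way to see what gets reused even when the output is random: in this Python sketch (names are made up for illustration), the random values differ on every call, but the one-time costs around producing them, such as importing the module and compiling its bytecode, are paid only on the first call, so later calls are often faster:

```python
import time

def timed(fn):
    """Return how long fn() takes, in seconds."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def work():
    # The first call pays the cost of importing the module; after
    # that it is cached in sys.modules. The output itself is random
    # every time, yet everything *around* producing it (module
    # loading, bytecode, allocator and CPU cache warm-up) is reused.
    import random
    return [random.random() for _ in range(100_000)]

first = timed(work)
second = timed(work)
print(f"first call:  {first * 1000:.2f} ms")
print(f"second call: {second * 1000:.2f} ms")
```

The caching happens one level below the output: not the result, but the machinery that produces it.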

