Imagine a 22K page (1.5K segments, with the window doubling every RTT):

after 1 RTT: 3K (2 segments * 1.5K per segment)
after 2 RTT: 3K + 6K = 9K
after 3 RTT: 3K + 6K + 12K = 21K
after 4 RTT: 3K + 6K + 12K + 24K = 45K
So for a client to get a 22K HTML page it takes 4 RTTs... and if you figure 200ms per RTT, that's 800ms just to get the page.
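The growth above can be sketched as a quick shell loop (assuming 1500-byte segments, an initial window of 2 segments, and no loss, as in the table):

```shell
# Classic slow start: the congestion window doubles each RTT,
# so cumulative bytes delivered grow as 3K, 9K, 21K, 45K...
cwnd=2      # initial congestion window, in segments
total=0     # cumulative bytes delivered to the client
for rtt in 1 2 3 4; do
    total=$((total + cwnd * 1500))
    echo "after $rtt RTT: $((total / 1000))K cumulative"
    cwnd=$((cwnd * 2))
done
# After 3 RTTs only 21K has arrived, so a 22K page needs the 4th RTT.
```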
Increase the initial congestion window from 2 segments to 16, and in the first RTT we can send up to a 24K page (16 * 1.5K) in one shot.
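On reasonably recent Linux kernels (2.6.39+), one way to raise the initial congestion window is the per-route initcwnd option in iproute2. This is a sketch, not a drop-in command: the gateway address and interface name here are placeholders for your own setup.

```shell
# Raise the initial congestion window to 16 segments on the default route.
# 192.168.1.1 and eth0 are placeholders -- substitute your gateway/device.
ip route change default via 192.168.1.1 dev eth0 initcwnd 16

# Confirm the option took effect
ip route show default
```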
This is useful for non-keepalive connections. For keepalive connections you might think the window stays open for the whole keepalive time, but that's normally not true. The behavior is controlled by a sysctl called net.ipv4.tcp_slow_start_after_idle, which is on by default.
This causes your keepalive connection to fall back to slow start after the connection has been idle for TCP_TIMEOUT_INIT, which is 3 seconds. Probably not what you want or expect.
So if you are using keepalives for your image server and you want that first image to load faster, set net.ipv4.tcp_slow_start_after_idle = 0 to disable this behavior.
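Making that change looks something like this (requires root; the runtime change is lost on reboot unless you also persist it):

```shell
# Disable falling back to slow start after an idle period (runtime change)
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

# Persist it across reboots
echo 'net.ipv4.tcp_slow_start_after_idle = 0' >> /etc/sysctl.conf
```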
I'd be curious to know if you've measured the real-world performance difference from making this change.
So in the real world, if a user clicks a link to your site, the time to complete the load of the full HTML page (let's say a 22K page) drops from 4 RTTs to a single RTT.
To verify it's doing the right thing, use a packet analyzer, or you can even see the results in something like Firebug. Even with RTTs of 50ms to 100ms, it is a considerable difference to have the page finish loading in 100ms vs 400ms.
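If you go the packet-analyzer route, a capture like the following lets you watch the handshake and count how many data segments arrive back-to-back in the first flight. The interface, port, and hostname are assumptions; adjust them for your server.

```shell
# -nn: skip name/port resolution; -ttt: print deltas between packets,
# which makes the back-to-back first flight easy to spot.
tcpdump -i eth0 -nn -ttt 'tcp port 80 and host www.example.com'
```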
Especially if you have scripts at the bottom of your page (Google Analytics and the like), this means those requests start that much sooner, the page finishes rendering sooner, etc.
In addition to the faster loading time there is another benefit to this.
On a busy server your best friend is the ability to process connections quickly. Having to hold a connection open for 4 RTTs vs 1 RTT starts to make a difference. With this change you release the connection to TIME_WAIT right away instead of tying up a valuable PHP process.
Notice that after the connection setup and the HTTP request, in 1 RTT you basically receive 10 segments in flight (right after each other).
This means the server is not waiting for your ACKs. You can see the ACKs in this picture (but note the timestamps: they are sent immediately but don't arrive at the server until an RTT later).
Without altering slow start, the server would wait to receive an ACK before sending more segments.
There are 2 factors at play here:
1. slow start
2. window size
It doesn't matter what window size is advertised, because slow start normally starts at 2 segments. In this case the client's advertised window is something huge. The advertised 5880 is the server's receive window, which also grows.