Hacker News

We've been increasing slow start on the server for a while now (recompiling the Linux kernel is needed). We increase the initial 2-packet limit to 16. We do this ONLY for our html page in order to get it to the client faster.
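As an aside (not part of the setup described above): reasonably recent kernels also let you raise the initial congestion window per route with iproute2, no recompile needed. The gateway and device below are placeholders; adjust them to your own default route.

```shell
# Placeholder gateway/device -- substitute your own default route values.
# initcwnd raises the server's initial congestion window for new
# connections using that route.
ip route change default via 192.0.2.1 dev eth0 initcwnd 16
ip route show default   # verify the initcwnd option now appears on the route
```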

Imagine a 22K page... (1.5K frames, doubling every RTT)

after 1 RTT : 3K (2 frames * 1.5k per frame)

after 2 RTT : 3K+6K = 9K

after 3 RTT : 3K+6K+12K = 21K

after 4 RTT : 3K+6K+12K+24K = 45K

So for a client to get a 22K html page requires 4 RTTs... if you figure 200ms for an RTT, that's 800ms to get the page. Lame... increase your slow start from 2 frames to 16, so in the first RTT we send up to a 24K page in one shot.
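The arithmetic above can be sketched with a quick loop (numbers assumed to match the example: 1.5K per frame, a 22K page, window doubling each RTT):

```shell
# Slow-start sketch: count RTTs needed to deliver the page for two
# initial window sizes. Assumed numbers: 1500-byte frames, 22K page.
page=$((22 * 1024)); seg=1500
for icw in 2 16; do
  cwnd=$icw; sent=0; rtts=0
  while [ "$sent" -lt "$page" ]; do
    sent=$((sent + cwnd * seg))   # one window's worth of data per RTT
    cwnd=$((cwnd * 2))            # slow start doubles the window each RTT
    rtts=$((rtts + 1))
  done
  echo "initcwnd=$icw: $rtts RTT(s)"
done
```

With an initial window of 2 this counts 4 RTTs (3K, 9K, 21K, 45K cumulative, matching the figures above); with 16 the whole page fits in the first RTT.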

This is useful for non-keepalive connections. For keepalive you may think the window remains open for the whole keepalive time, but that's not normally true. The parameter that controls how long your window stays open is a configurable parameter called net.ipv4.tcp_slow_start_after_idle, which is on by default. This causes your keepalive connection to return to slow start after TCP_TIMEOUT_INIT, which is 3 seconds. Probably not what you want or expect. So if you are using keepalives for your image server and you want that first image to load faster, set net.ipv4.tcp_slow_start_after_idle = 0 to disable this.
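For reference, that sysctl change is just the following (a sketch; add the setting to /etc/sysctl.conf if you want it to survive a reboot):

```shell
# Disable the return to slow start on idle keepalive connections.
sysctl -w net.ipv4.tcp_slow_start_after_idle=0
# Check the current value:
cat /proc/sys/net/ipv4/tcp_slow_start_after_idle
```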

We increase the initial 2 packet limit to 16. We do this ONLY for our html page in order to get it to the client faster.

I'd be curious to know if you've measured the real-world performance difference from making this change.

The motivation for it was to increase real-world performance, so the research, conclusions and implementation were guided by that. This was done on a real site with considerable traffic, 50 to 100 php pages/second. This has been in production on the site for at least a year.

So in the real world, if a user clicks a link to your site, the time to complete the load of the full html page (let's say a 22K page) is reduced to a single RTT instead of 4 RTTs. To check that it's doing the correct thing you can verify with a packet analyzer, or you can even see the results in something like Firebug. Even if you consider RTT times of 50ms to 100ms, it is still a considerable difference to have the page complete loading in 100ms vs 400ms. Especially if you have scripts at the bottom of your page (say Google Analytics and such), this means those requests start that much sooner, the page completes rendering sooner, etc.
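A rough way to do that packet-analyzer check from the client side (the interface and host below are assumptions, not from the original setup):

```shell
# Capture the handshake, the GET, and the server's reply burst for one
# page load. With a raised initial window, the whole page should arrive
# as back-to-back segments within the first RTT after the request,
# before any data ACKs from the client could have reached the server.
tcpdump -n -i eth0 'tcp port 80 and host www.example.com'
```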

In addition to the faster loading time there is another benefit to this.

In a busy server your best friend is the ability to process connections quickly. If you have to hold a connection open for 4 RTTs vs 1 RTT, that starts to make a difference. Implementing this, you release the process and the connection to TIME_WAIT right away instead of tying up a valuable php process.

This kind of change can make a huge difference for two big reasons: 1. Fewer RTTs means lower page load times, as outlined above. Users notice this stuff, esp. when you have keepalives off and each little chunk of ajax/css/js/img requires this to all happen over and over again. 2. Shorter time from open to close on the http socket connection means fewer connections open; if you are running Apache prefork, this makes a big difference to your memory footprint. p.s. Use Nginx ;)

Are you doing this on the site listed in your profile? From what I can see with a 27ms RTT request, there is a 9ms pause after the GET ACK and content -- and then only 4 full packets before an RTT. That's close to the limit of my initial window (5880) before content ACKs arrive. That would seem like a typical result unless/until clients start tuning their initial window as well... which was kind of the point of Google going public on this. Have you seen much aggregate benefit beyond a window increase from 2 to 4?

Here is a pic of the transaction client side. The RTT is about 50ms. This is an initial, non-keepalive request.


Notice that after the connection and the http request, in 1 RTT you basically receive 10 segments in flight (right after each other). This means the server is not waiting for you to ACK. You can see the ACKs in this picture (but note the times: they are sent immediately but don't arrive at the server until an RTT later).

Without altering slow start, the server would wait to receive an ACK before sending more segments.

There are 2 factors at play here: 1. slow start, 2. window size. It doesn't matter what window size is advertised, as slow start normally starts at 2 segments. So in this case the client window is something huge. The advertised 5880 is the server's receive window, which also grows.
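Putting those two factors together: the sender can have at most min(cwnd, advertised receive window) bytes in flight per RTT. A toy calculation with assumed numbers (16-segment initial window, 1460-byte MSS, a 64K client window):

```shell
cwnd=$((16 * 1460))    # initial congestion window in bytes (assumed 16 segments)
rwnd=$((64 * 1024))    # client's advertised receive window (assumed 64K)
limit=$(( cwnd < rwnd ? cwnd : rwnd ))
echo "first-RTT send limit: $limit bytes"   # cwnd is the binding limit here
```

With a big advertised window, the raised congestion window is what actually caps the first-RTT burst, which is the point of the whole change.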
