The motivation for it was to increase real-world performance, so the research, conclusions and implementation were all guided by that goal. This was done on a real site with considerable traffic (50 to 100 PHP pages/second), and it has been in production on the site for at least a year.
So in the real world, if a user clicks a link to your site, the time to complete the load of the full HTML page (let's say a 22 KB page) is reduced to a single RTT instead of 4 RTTs.
To verify that it's doing the right thing you can check with a packet analyzer, or you can even see the results in something like Firebug. Even if you assume RTTs of 50 to 100 ms, it is still a considerable difference to have the page finish loading in 100 ms vs 400 ms.
Especially if you have scripts at the bottom of your page (Google Analytics and the like), this means those requests start that much sooner, the page finishes rendering sooner, and so on.
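The RTT arithmetic above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch assumes a typical MSS of 1460 bytes and classic slow start where the sender's window doubles each round trip; the specific initial-window values are my assumptions, not figures from the original post:

```python
MSS = 1460        # typical Ethernet MSS in bytes (assumption)
PAGE = 22 * 1024  # the ~22 KB page from the example above

def rtts_to_send(page_bytes, initcwnd, mss=MSS):
    """Round trips of data needed, assuming the window doubles each RTT."""
    cwnd = initcwnd
    sent = 0
    rtts = 0
    while sent < page_bytes:
        sent += cwnd * mss
        cwnd *= 2
        rtts += 1
    return rtts

# With an old default initial window of 3 segments, the 22 KB page
# needs 3 round trips of data (4.3 KB, then 8.7 KB, then 17.5 KB).
print(rtts_to_send(PAGE, initcwnd=3))   # 3
# With a window large enough to cover the page (~16 segments),
# everything goes out in the first flight.
print(rtts_to_send(PAGE, initcwnd=16))  # 1
```

Counting the TCP handshake as one extra round trip gives roughly the 4-RTT total described above; the single-RTT figure presumably counts only the data transfer once the request has arrived.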
In addition to the faster load time, there is another benefit to this.
On a busy server, your best friend is the ability to turn connections around quickly. If you have to hold a connection open for 4 RTTs instead of 1 RTT, that starts to add up. With this in place you release the connection to TIME_WAIT right away instead of tying up a valuable PHP process.
This kind of change can make a huge difference for two big reasons:
1. Fewer RTTs mean lower page load times, as outlined above. Users notice this stuff, especially when you have keepalives off and each little chunk of Ajax/CSS/JS/images has to go through the whole dance over and over again.
2. A shorter time from open to close on the HTTP socket means fewer connections open at once; if you are running Apache prefork, this makes a big difference to your memory footprint.
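Point 2 can be made concrete with Little's law (average concurrent connections ≈ arrival rate × time each connection is held). The request rate comes from the post; the 100 ms RTT and the per-process memory figure are illustrative assumptions:

```python
def concurrent_connections(req_per_sec, seconds_held):
    """Little's law: average number of connections open at once."""
    return req_per_sec * seconds_held

RATE = 100   # requests/second, upper end of the post's traffic figure
RTT = 0.1    # 100 ms round-trip time (assumption)

before = concurrent_connections(RATE, 4 * RTT)  # connection held 4 RTTs
after = concurrent_connections(RATE, 1 * RTT)   # connection held 1 RTT
print(before, after)  # 40.0 10.0 connections open on average

# Under Apache prefork, each open connection ties up a whole process.
# At a hypothetical 30 MB per process (not a figure from the post),
# that's ~1.2 GB vs ~300 MB of resident memory for in-flight requests.
MB_PER_PROC = 30
print(before * MB_PER_PROC, after * MB_PER_PROC)  # 1200.0 300.0
```

The absolute numbers depend entirely on the assumed RTT and process size, but the 4x ratio between the two scenarios follows directly from the 4-RTT vs 1-RTT connection lifetime.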
p.s. Use Nginx ;)
I'd be curious to know if you've measured the real-world performance difference from making this change.