Agreed. I think a great option would be for the server to specify in a response header that it doesn't care about any headers for the rest of the keep-alive connection, unless they change.
I've been getting into Comet a bit more lately, and am wondering (if you don't mind telling, of course): 1) what are you currently using on the backend, and 2) did you ever check out Meteor (which I was playing with today)?
I decided to write my own. It gives me much more flexibility, and I'm able to change, adapt, and streamline the backend as much as I like (Java NIO). It peaks at about 10k connections (incoming and outgoing) at busy times at the moment. The backend is about 22k lines of code, but that's everything, not just the base Comet server.
I did have a look at Meteor and some of the other Comet-type servers, but they didn't look like they would be a great solution for what I needed.
It would be much better still for them to implement WebSocket: this is a marginal optimization at best, while adopting WebSocket would be a fundamental improvement and would much more significantly reduce bandwidth, etc.
I'm not sure it's marginal though. Consider a site that has a gallery of 100 thumbnails, each one say 200 bytes in size.
With Firefox 3, the browser would have to send 49,500 bytes of HTTP headers (about 495 bytes per request) just to get those 100 thumbnails, which are only 20,000 bytes in total. That's a ridiculous overhead. And even if you're using deflate or gzip, you're only compressing the 20k of data, not the 49.5k of headers.
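A quick back-of-the-envelope check of the numbers above. The per-request header size (~495 bytes) is an assumption inferred from the totals given, not a measured value:

```python
# Assumed numbers: ~495 bytes of request headers per GET, 100
# thumbnails of ~200 bytes each (as in the gallery example above).
header_bytes_per_request = 495
thumbnails = 100
thumbnail_bytes = 200

total_headers = header_bytes_per_request * thumbnails  # 49,500 bytes
total_payload = thumbnail_bytes * thumbnails           # 20,000 bytes

print(total_headers)                  # 49500
print(total_payload)                  # 20000
print(total_headers / total_payload)  # headers ≈ 2.5x the payload
```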
Sounds like premature optimization to me. You can't "save" bandwidth at the sub-packet level. It's like trying to "save" disk space on files less than a kilobyte in length: you can do the math showing impressive theoretical gains, but other layers of your hardware/software stack process the data in chunks larger than that anyway, so all your effort is for naught.
@patio11: Actually, you can save bandwidth at the sub-packet level. Ethernet is a common link-layer protocol we can examine: it has a variable-length frame, and because it uses time-delimited bit-level framing, the smaller your Ethernet frame, the quicker it can be sent. The only real caveat to axod's article is that you often can't save on the performance of intermediary systems (routers), because they tend to pre-allocate blocks of memory for packets, so no memory is saved when some packets are smaller. But your ISP doesn't meter you at the link layer anyway (or often even at the IP level), so the savings reported by the article are absolutely real in terms of bandwidth costs (as in money).
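To make the "smaller frame, quicker send" point concrete, here's a sketch of per-frame wire time on a serial link. The 100 Mbit/s rate and the fixed preamble/inter-frame-gap costs are standard Ethernet figures, assumed here for illustration:

```python
# Time on the wire for one Ethernet frame at an assumed 100 Mbit/s.
link_bps = 100_000_000

def frame_time_us(frame_bytes: int) -> float:
    # Preamble (8 bytes) and inter-frame gap (12 byte-times) are
    # fixed per-frame costs; the rest scales with frame size.
    return (frame_bytes + 8 + 12) * 8 / link_bps * 1e6

print(frame_time_us(64))    # minimum frame: ~6.72 microseconds
print(frame_time_us(1518))  # maximum frame: ~123 microseconds
```

So a minimum-size frame occupies the link for a small fraction of the time a full-size one does, which is exactly why shaving bytes below the packet level still buys you real transmission time and metered bandwidth.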
Did you skip the bit where I said it would save Mibbit 300GB/month in transfer?
That's worth doing.
I can see the confusion, but you can't compare files with networks. Files are allocated in chunks, so shaving a few bytes off a file doesn't always save you disk space.
However, if you send half as many bytes in your HTTP headers, you save that bandwidth. If you're handling 600 of those HTTP requests a second, shaving off every byte you can is important. That's not premature optimization.
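The multiplier is easy to work out. Assuming a steady 600 requests/second (the figure above), each single byte shaved off per request adds up fast over a month:

```python
# Monthly bandwidth saved per byte shaved off each request,
# at an assumed steady 600 requests/second.
requests_per_second = 600
seconds_per_month = 60 * 60 * 24 * 30  # 30-day month

saved_per_byte = requests_per_second * seconds_per_month
print(saved_per_byte)        # 1,555,200,000 bytes: ~1.55 GB/month per byte
print(saved_per_byte * 200)  # shaving ~200 bytes ≈ 311 GB/month
```

At that rate, trimming a couple hundred bytes of headers per request lands in the same ballpark as the 300GB/month figure mentioned elsewhere in this thread.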
I don't think that's true. Bandwidth is typically the bottleneck, and AFAIK there's no minimum size (other than minimal headers) for HTTP, TCP, or IP protocols.
Ah, good point. With computers, 300 bytes, 300 milliseconds, and other seemingly minor values matter because they are multiplied by large numbers. I guess every bit counts all the time if you have a high-traffic web app.
Using long-lived connections, and sliding more data through as time goes on, avoids some of the overhead of the additional HTTP headers that come with opening new requests and replies.
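A minimal sketch of that idea, using HTTP/1.1 chunked transfer encoding: the headers cross the wire once, then each subsequent message costs only its chunk framing. This is a hypothetical Python handler for illustration, not the custom Java NIO backend described above:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def chunk(data: bytes) -> bytes:
    """Frame one payload as an HTTP/1.1 chunk: hex length, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

class CometHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # needed for persistent connections

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        # Each event pays only its chunk framing (a few bytes), not a
        # fresh set of request/response headers.
        for i in range(3):
            self.wfile.write(chunk(b"event %d\n" % i))
            self.wfile.flush()
        self.wfile.write(b"0\r\n\r\n")  # terminating chunk ends the response

# To run: HTTPServer(("", 8000), CometHandler).serve_forever()
```

Compare the framing cost: `chunk(b"event 0\n")` adds about 5 bytes of overhead, versus hundreds of bytes of headers for a fresh request.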
Sure; I suspect he was referring to the server-modifications section, not the JavaScript. The JS really is a clever idea. I didn't realize that was configurable through JS!
Edit: Just checked the posting history of the OP. He looks like a troll, so I withdraw my comment; he was just being a jerk.
Cool demo, though. Very clever: I wouldn't necessarily have thought about the header size in the sends.
http://cr.yp.to/sarcasm/modest-proposal.txt