

Faster Webpages with Cookieless Domains (With Keynote Testing) - symkat
http://symkat.com/105/cookieless-domains/

======
asuth
We were using cookieless domains at quizlet.com for our static assets (css,
js, images) for a while, but recently turned them off. Some of our users
(enough to be of concern) wrote in saying the site looked all wrong, and on
closer inspection, they weren't receiving our assets.

Because we're used in a lot of school systems, and many school districts have
weird rules (domain whitelists), our static domains were not resolving while
our main quizlet.com was. We decided (for now) it wasn't worth investigating
how many of our users were affected or if we could do a workaround.

We're still using a subdomain off our main domain (a.quizlet.com), which
prevents most cookies from being sent, but not all: host-only cookies stay on
the main host, while anything set with Domain=quizlet.com still rides along to
a.quizlet.com.

Going forward, I'd prefer to start base64-embedding images in css (only for
compliant browsers) and using more sprites to reduce the overall number of
requests.
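
For anyone curious what that looks like as a build step, here's a rough
sketch in TypeScript/Node; the paths and class name are made up, not
Quizlet's actual pipeline:

    // Inline a small image into CSS as a base64 data URI. Only worth
    // doing for small files: the base64 form is ~33% larger than the
    // binary, and inlined images can't be cached separately.
    import { readFileSync } from "fs";

    function toDataUri(path: string, mime: string): string {
      const b64 = readFileSync(path).toString("base64");
      return `data:${mime};base64,${b64}`;
    }

    const icon = toDataUri("icons/star.png", "image/png");
    console.log(`.star { background-image: url("${icon}"); }`);

(The "compliant browsers" caveat is mostly about older IE, which lacks or
caps data URI support.)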

------
jhrobert
What? A few bytes removed from the HTTP request and you double the speed?
There is something I don't understand here.

I suspect the improvement comes almost entirely from raising the effective
connection limit (two hostnames instead of one), rather than from the removal
of a few bytes from each HTTP request.

Unless I missed something...

~~~
symkat
More or less, yes. It's the combination of both: removing those excess bytes
and getting the extra connections.

If you request 10 static files per page and have 256 bytes of cookies, you're
sending 2,560 bytes of cookies, roughly 2.5 KiB, per page load. That's on the
end user's upload bandwidth too, which is typically dramatically less than
their download.

The raising of the effective connection limit also allows more resources to
load at the same time; the combined effect is a good increase in speed. Two
tweaks for the price of one. =)

~~~
asuth
It doesn't quite work like that. TCP works in packets (an Ethernet-sized
packet carries roughly 1460 bytes of payload), so 256 bytes of cookies does
not necessarily translate into an extra packet sent over the network. Of your
10 static requests per page, you might push a few of them over into an extra
packet, but not all of them.
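
To put rough numbers on it, a back-of-the-envelope sketch; the 1460-byte MSS
is a typical Ethernet value, and the request sizes are invented for
illustration:

    // Cookies only cost an extra packet when they push a request
    // across a TCP segment boundary.
    const MSS = 1460; // typical payload per packet (1500 MTU - headers)

    function packets(requestBytes: number): number {
      return Math.ceil(requestBytes / MSS);
    }

    // Hypothetical header sizes for 10 static-asset requests.
    const requests = [400, 450, 500, 1300, 1350, 1400, 600, 700, 800, 900];
    const cookieBytes = 256;

    const before = requests.reduce((n, r) => n + packets(r), 0);
    const after = requests.reduce((n, r) => n + packets(r + cookieBytes), 0);

    // Only the three requests already near the boundary gain a packet:
    // prints "without cookies: 10 packets, with cookies: 13 packets"
    console.log(`without cookies: ${before} packets, with cookies: ${after} packets`);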

It's something worth doing, but it's not the biggest win you can get.

------
birken
A few extra data points about using subdomains vs different domain for static
content (assuming that you serve your site exclusively off www.domain.com)...

OK: Google Analytics - You can use the function
"_setDomainName('www.domain.com')" on your GA tracker to restrict the cookie
domain to only the www (see the snippet below the Quantcast note).

NOT OK: Quantcast - The Quantcast tracking code explicitly forces the cookie
domain to be ".domain.com", and the only way around it would be to alter their
javascript and host it from your own server, which they do not allow (though I
am not sure what the recourse would be). I emailed support about it, and they
said they had no plans to let you specify a different cookie domain.
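
For reference, here's roughly how the GA bit looks in the classic async
(ga.js) snippet; the account ID is a placeholder:

    // Legacy ga.js async command queue. _setDomainName pins the GA
    // cookie to www.domain.com, so it is not sent with requests to
    // static.domain.com.
    const _gaq: unknown[][] = (window as any)._gaq || [];
    _gaq.push(["_setAccount", "UA-XXXXXXX-1"]); // placeholder account ID
    _gaq.push(["_setDomainName", "www.domain.com"]);
    _gaq.push(["_trackPageview"]);
    (window as any)._gaq = _gaq;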

------
ck2
I've played with this and the performance improvement is trivial with modern
browsers/servers.

Way more hype than reality.

Before messing with cookies, when it takes over a second to receive the core
webpage (before the other objects: stylesheets, scripts, images), in Chrome no
less, you need to take a serious look at how you are rendering the page on the
server, i.e. WordPress. You get enough cache misses with WordPress+plugins and
your servers are going to be crying.

The article is also missing some important info for analysis: how large was
each of the transmitted parts? How large was the base webpage? Was gzip
compression used on the server?

Why not use localhost to eliminate transmission variances and prove, with
10,000 cacheless page fetches, whether cookie-less really helps by more than a
few percentage points?
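
A quick sketch of that test (Node 18+ for built-in fetch; the target URL,
cookie payload, and counts are placeholders):

    // Fetch the same page N times with and without a cookie header
    // and compare wall-clock totals. A cache-busting query parameter
    // stands in for truly cacheless fetches.
    const TARGET = "http://localhost:8080/";
    const COOKIE = "session=" + "x".repeat(256);

    async function run(n: number, withCookie: boolean): Promise<number> {
      const headers = withCookie ? { Cookie: COOKIE } : {};
      const start = Date.now();
      for (let i = 0; i < n; i++) {
        const res = await fetch(`${TARGET}?nocache=${i}`, { headers });
        await res.arrayBuffer(); // drain the body so the connection is reused
      }
      return Date.now() - start;
    }

    const bare = await run(10_000, false);
    const cookied = await run(10_000, true);
    console.log(`no cookies: ${bare} ms, with cookies: ${cookied} ms`);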

------
danfitch
Something interesting to do is view the source for google.com and msn.com:
compressed really well and built for speed. Then view the source at yahoo.com.

Links to look at for performance: YSlow
<http://developer.yahoo.com/yslow/> and Page Speed
<http://code.google.com/speed/page-speed/docs/rules_intro.html>

~~~
WALoeIII
This is a total micro-optimization at best. Nearly every client is going to
support compressed content, and the difference between minified-then-compressed
and unminified-then-compressed is tiny.

You are right, though, that the difference probably does matter at the scale
of Google, MSN, and Yahoo, but I wouldn't focus on it for my site.
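
That's easy to check against your own assets; a minimal sketch with Node's
built-in zlib (file names are placeholders):

    // Compare raw vs gzipped sizes for the minified and unminified
    // versions of the same file.
    import { readFileSync } from "fs";
    import { gzipSync } from "zlib";

    for (const file of ["app.js", "app.min.js"]) {
      const raw = readFileSync(file);
      console.log(`${file}: ${raw.length} B raw, ${gzipSync(raw).length} B gzipped`);
    }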

~~~
chrisbolt
15% of clients don't do compression, usually because of "internet security"
software altering the Accept-Encoding header.

From the talk you linked to in another comment:
<http://en.oreilly.com/velocity2010/public/schedule/detail/11792>
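
Which is why servers negotiate rather than assume; a minimal sketch of that
fallback with Node's http module (port and body are placeholders):

    // Gzip only when the client advertises support. Clients whose
    // "security" software strips Accept-Encoding silently fall back
    // to the larger uncompressed body.
    import { createServer } from "http";
    import { gzipSync } from "zlib";

    const body = "<html><!-- placeholder page --></html>";

    createServer((req, res) => {
      const accept = String(req.headers["accept-encoding"] ?? "");
      if (accept.includes("gzip")) {
        res.writeHead(200, { "Content-Encoding": "gzip" });
        res.end(gzipSync(body));
      } else {
        res.writeHead(200);
        res.end(body);
      }
    }).listen(8080);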

------
kevinpet
I have been aware of domains like yimg.com (yahoo static content) forever, but
it never occurred to me why you'd bother doing this until I saw this article.
Thanks.

------
js2
<http://developer.yahoo.com/performance/rules.html>

