
Revisiting the “Cookieless Domain” Recommendation - luu
http://www.jonathanklein.net/2014/02/revisiting-cookieless-domain.html
======
paulsutter
Separate domains for CSS and image files are primarily intended to overcome
certain (old?) browsers' limitation on the number of concurrent connections to
a single domain [1][2]. Cookieless domains can only help you if you have
supermassive cookies. Keep those to a special subdomain, if you really need to
have them. But you probably don't.

There is no single rule of thumb for performance. Generally, inline everything
that's small enough to inline. If it's big enough that caching it as a separate
file will help you on subsequent page loads, then put it in a separate file.
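
As a rough illustration of that rule of thumb (file names are made up):

    <!-- A few hundred bytes of critical CSS: cheaper to inline than to fetch -->
    <style>body { margin: 0; font: 16px/1.4 sans-serif; }</style>

    <!-- A large stylesheet reused across pages: keep it external so it caches -->
    <link rel="stylesheet" href="/css/site.css">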

Test and measure. If you can't measure the difference, it doesn't matter. If
you don't measure the difference, the rule you are following probably isn't
helping you.

[1] http://www.stevesouders.com/blog/2013/09/05/domain-sharding-revisited/

[2] http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

~~~
gry
> Test and measure. If you can't measure the difference, it doesn't matter. If
> you don't measure the difference, the rule you are following probably isn't
> helping you.

/Yes/.

~~~
sgustard
That's a sad state of affairs for our industry, though. It's like every
structural engineer being told to measure which is stronger, iron or steel.
How about some basic shared knowledge we can all trust and build on?

~~~
ChuckMcM
There may yet come a day when all steel is sold by weight, manufactured in
unnamed factories to unknown chemical compositions, such that the only way to
know whether a piece of steel will carry the weight being asked of it is to
test a sample of it in a fixture.

Unfortunately, that day is already here on the Internet, where browsers embody
unknown design decisions made for unknown reasons, across networks that were
optimized for unknown requirements.

------
gavinpc
Is the "flash of unstyled content" really so "dreaded"?

I've noticed it regularly on the publish side of the Google Play Store.

The chain reaction this sets off in my head is about 100 FOUCs long. And it
goes something like this.

Hey, I just saw a FOUC on the Google Play Store admin!

They must not be optimizing CSS delivery for above-the-fold content... [0]

Or maybe they just don't lavish the same resources on admin sites as they do
on front-end...

I mean, did anyone else see that? And what if they did? What would they think?
They wouldn't think anything, because only a web developer would even
_recognize_ a FOUC...

And even if someone else saw it and recognized it for what it was... so what?

If I tried to explain to someone what happened in that moment, that Google,
yes, Google had failed to expend all the necessary brain power on ensuring
that their markup was not rendered a fraction of a second before their
stylesheets were parsed...

And that people actually _care_ about that, actually want to _protect_ users
from that horrid sight...

I would be seen for what I am, which is a madman.

[0] https://developers.google.com/speed/docs/insights/OptimizeCSSDelivery

~~~
lucaspiller
I use a lot of high-latency connections (my country has rather slow ping times
to major internet centres, around 200ms), and sometimes connections just get
"stuck" while loading the CSS.

This results in seeing the title of the page with a loading spinner and a
blank white page. If browsers actually rendered the page (is CSS really that
important on a news site?) I would at least be able to see the content.

~~~
leviathan
I have the same problem with my internet. So many times I have given up
waiting, hit Cmd+U to view the page source, and read the info I want in the
raw HTML.

------
leeoniya
I'm personally fairly irked by the massive shift towards "CDN all the
things!" I have NoScript blocking third-party assets, and more sites break
for me every day.

Rather than CDNs, there should be an SHA or MD5 hash sent with every asset,
like an ETag, so that things need not live on a specific domain to be pulled
from cache.

EDIT: those downvoting, care to state your case?

~~~
bbcbasic
The downvotes are probably because you are complaining sites are broken when
you install something that intentionally breaks them.

Your CDN alternative is not clear (to me at least).

~~~
mc808
I think the CDN alternative would be:

Site 1 sends <script src="/foo/jquery.js" hash="SHA3:12345...">. Your browser
hasn't cached this file, so it downloads jquery.js, verifies the hash, and
caches the contents. Site 2 sends <script src="/bar/jquery-2.5.js"
hash="SHA3:12345...">. The browser finds that the hash matches the cached
jquery.js and loads that instead of downloading the script again.

The scripts could still be served from CDNs, but it wouldn't have to be the
same one to get a cache hit. Popular libraries like jQuery would get so many
hits that a CDN might not even be worth the effort. Actually, the concept is
so simple that it's surprising it hasn't already been implemented, unless
there is a security issue that I'm not seeing.
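
A rough sketch of the lookup a browser might do under that scheme - note that
`hashCache`, `download`, and `sha3` are all hypothetical helpers here, not real
browser APIs:

    // Hypothetical: a cache keyed by content hash instead of by URL
    function fetchScript(src, hash) {
      var cached = hashCache.get(hash);   // look up by hash, not by URL
      if (cached) return cached;          // hit: any site referencing this hash warmed the cache
      var body = download(src);           // miss: fetch from whatever URL this site gave us
      if (sha3(body) !== hash) {
        throw new Error('integrity check failed');
      }
      hashCache.set(hash, body);          // store under the hash for any future site
      return body;
    }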

~~~
laurent123456
I don't know if it's a big issue, but if you know the SHA3 of files that are
specific to other websites, you could use that to learn where the user has
been. For example, you add a file that you know is specific to Facebook:
<script src="/js/somefakefile.js" hash="SHA3:FACEBOOKHASH...">. If the user's
browser doesn't download /js/somefakefile.js, you know they have visited
Facebook at some point.

~~~
cbr
You can already do this with a timing attack. Use js to add a script tag with
[http://facebook.com/js/somefakefile.js](http://facebook.com/js/somefakefile.js)
to the page, and time how long it takes to load. If it's in cache it will be
much faster.
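
A rough sketch of that probe (the URL is just the example from the parent
comment, and the threshold is a guess):

    // Time how long a third-party resource takes to "load"; a near-instant
    // completion suggests it came from the browser cache, i.e. a prior visit.
    var start = Date.now();
    var s = document.createElement('script');
    s.src = 'http://facebook.com/js/somefakefile.js';
    s.onload = s.onerror = function () {
      var elapsed = Date.now() - start;
      console.log(elapsed < 30 ? 'probably cached' : 'probably not cached');
    };
    document.head.appendChild(s);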

------
gingerlime
Perhaps a silly idea / question, but could browsers support some kind of
optimization meta tag that tells the browser to fetch a resource without
sending cookies? Something like `<img src="..." data-no-cookies=true>`, or
even a directive that applies to all static resources unless specified
otherwise, e.g. `<meta no-cookies-for="jpg;css;js">`?

~~~
bodyfour
In HTTP/2 (SPDY) the only headers that need to be sent are the ones that have
changed since the last request on the connection. So once that is more common,
you'll actually be better off just attaching the cookies to all requests,
since that means they only get sent once per TCP connection.

~~~
tracker1
I've been suggesting a move towards HTTPS only (for security) with SPDY/HTTP2
support. For my own usage, 2/3 of clients support SPDY, so serving more
resources from the initial domain is a bigger win, and with PUSH of CSS/JS
resources on the initial connection it gets better still.

If ES6 (including modules) were supported in the browser, it would be a pretty
awesome addition on top of SPDY. I think we're realistically 5-6 years away
from any broadly available site being able to rely on it, but it's cool -
similar to where Web Sockets were a few years back.
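
For context, a minimal sketch of what native modules would look like (the file
and export names are invented); each import becomes its own request, which is
exactly where SPDY/HTTP2 multiplexing and push would help:

    <!-- Hypothetical at the time: ES6 modules loaded natively by the browser -->
    <script type="module">
      // every imported file is a separate request; SPDY/HTTP2 can multiplex or push them
      import { render } from '/js/app.js';
      render(document.body);
    </script>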

Given that a lot of interactive data is now served by dedicated API services,
and images are offloaded to a CDN, it's far easier to deliver the CSS and JS
with the markup from the same deployment(s).

------
ernestipark
Great post by Jonathan. For the case of a big 'single page' JS app where most
of the rendering happens from JavaScript fetched on the page, I'd guess this
approach won't help that much, since you have to fetch the JS from the
CDN/cookieless domain anyway. Still, definitely worth an experiment to know
for sure.

~~~
morgante
> For the case of a big 'single page' JS app where most of the rendering
> happens from JavaScript fetched on the page, I'd guess this approach won't
> help that much, since you have to fetch the JS from the CDN/cookieless
> domain anyway.

If you're taking that approach, you probably don't care much about performance
anyways (I've _never_ seen a pure JS SPA which rendered fast).

~~~
colanderman
FastMail begs to differ: http://blog.fastmail.com/2014/12/15/dec-15-putting-the-fast-in-fastmail-loading-your-mailbox-quickly/

Loads & renders in 0.5-1.0s.

(Though I must say FastMail is rather unique in this regard.)

~~~
kuschku
Reminds me of the time, 2 or 3 years ago, when Google announced that every
page that took more than 300ms to load and render would be downranked in
search results.

Seems like we’re going backwards...

------
seriocomic
I've always shied away from a separate "cookie-less domain" in favor of
cookie-less sub-domains (think costs and SSL). This does require you to think
about www vs. no-www: cookie-less sub-domains require you to limit the cookies
to the www sub-domain, because cookies set on the bare domain are inherited by
*.domain. I wonder if this is still relevant in the HTTP2/SPDY world?
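
As a sketch of the cookie scoping involved (hostnames are illustrative, and
the # lines are just annotations, not part of the headers):

    # Set from www.example.com with an explicit Domain attribute:
    # sent to example.com and every subdomain, including static.example.com.
    Set-Cookie: session=abc123; Domain=example.com; Path=/

    # Set from www.example.com with no Domain attribute (host-only):
    # sent back to www.example.com only, so static.example.com stays cookie-free.
    Set-Cookie: session=abc123; Path=/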

------
joevandyk
If you terminate SSL at the CDN and you don't own the network or the CDN,
won't that leave your data exposed in transit between the CDN and the app
servers?

I'm using CloudFront and AWS, and I'm reluctant to let CloudFront be the root
CDN because of this. Anyone got any insight?

~~~
rb2k_
Is there a reason your CDN couldn't talk to your backend via SSL too?

~~~
youngtaff
If your visitors are using HTTPS to talk to your CDN, then you almost certainly
should be using HTTPS from the CDN to the origin.

------
gojomo
I don't trust this analysis. There's no clear mechanism for why forcing the
CSS to the same domain would speed things up. Also, the comparison doesn't
truly isolate the same-domain/different-domain decision as the cause of any
slowdown. Perhaps the test server is simply less loaded, less laggy, or
network-closer to his measurement browser... so it's the move of the CSS to
that server, not the unification of source domains, that causes the measured
speedup. Or many other things.

~~~
youngtaff
Loading CSS from the same domain speeds things up for a few reasons:

1. No DNS lookup for the second host.
2. Many browsers speculatively open a second TCP connection to the original host, anticipating that another request will be made, so the TCP negotiation overhead for the second request moves forward.
3. CSS is on the critical path for rendering, so getting it sooner improves rendering time.

------
vkjv
I think a much more interesting concept is prefetching files: annotate a link
with some of the critical resources the destination page needs, so the browser
can start downloading them before the user even follows the link.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefetching_FAQ
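
A minimal example of the markup that FAQ describes (paths are placeholders):

    <!-- Hints the browser to fetch likely-needed resources during idle time -->
    <link rel="prefetch" href="/css/article.css">
    <link rel="prefetch" href="/images/next-page-hero.jpg">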

------
imaginenore
What do you guys think of serving an absolute-minimum page with everything
inlined (even images) that just shows a simple progress bar and then loads the
rest of the CSS/JS/whatever?

I've done it once for a large front-end app, and it worked pretty well - the
user gets an almost-instant page and sees that the stuff is loading.
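
Roughly the pattern being described, sketched with invented paths: a tiny
self-contained shell that paints immediately, then pulls in the real assets.

    <!DOCTYPE html>
    <html>
    <head>
      <!-- Everything needed for first paint is inlined: zero extra requests -->
      <style>#loading { font: 14px sans-serif; text-align: center; margin-top: 40vh; }</style>
    </head>
    <body>
      <div id="loading">Loading&hellip;</div>
      <script>
        // After first paint, fetch the heavy CSS/JS asynchronously (paths are made up)
        var css = document.createElement('link');
        css.rel = 'stylesheet';
        css.href = '/css/app.css';
        document.head.appendChild(css);

        var js = document.createElement('script');
        js.src = '/js/app.js';
        document.body.appendChild(js);
      </script>
    </body>
    </html>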

~~~
kansface
I would greatly prefer you make websites that don't require progress bars.

~~~
dspillett
Browser progress bars are hardly reliable though.

IE's can get to ~80% before a single byte is received.

------
elchief
One issue is that you should not use any compression (HTTP or TLS) on a
request/response that carries sensitive info, such as session IDs or CSRF
tokens (see the CRIME and BREACH attacks).

It's easy to turn off compression on your www domain and turn it on on your
CDN domain.

So now you're not compressing your CSS, which will slow the response down -
by how much, I can't say. You could still use CSS minification.
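
One way that split is commonly expressed, sketched here as nginx configuration
(server names are placeholders; treat it as an illustration of the idea, not a
vetted security setup):

    # Main domain: responses carry session IDs / CSRF tokens,
    # so leave HTTP compression off to avoid compression-oracle leaks.
    server {
        server_name www.example.com;
        gzip off;
        # listen, root, etc. omitted
    }

    # Cookieless static domain: nothing secret in responses, so compress freely.
    server {
        server_name static.example.com;
        gzip on;
        gzip_types text/css application/javascript;
        # listen, root, etc. omitted
    }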

~~~
bluesmoon
You're confusing content compression with header compression. It's fine to
pre-gzip your CSS before serving it over HTTP or TLS.

~~~
elchief
HTTP compression is simply not safe on your main web domain:
http://security.stackexchange.com/questions/20406/is-http-compression-safe

What I was trying to say is that if you're security-conscious and running a
CDN anyway, it might not be worth the risk to allow (selective) HTTP
compression on your main web domain. It would be safer to disable it
completely.

------
brianpgordon
> Here’s the rub: when you put CSS (a static resource) on a cookieless domain,
> you incur an additional DNS lookup and TCP connection before you start
> downloading it. Even worse, if your site is served over HTTPS you spend
> another 1-2 round trips on TLS negotiation

Unless you're using SPDY, using a different domain doesn't add any more TLS
overhead than using the same domain, right? I didn't think that browsers reuse
connections to the same server.

~~~
TazeTSchnitzel
> Unless you're using SPDY, using a different domain doesn't add any more TLS
> overhead than using the same domain, right? I didn't think that browsers
> reuse connections to the same server.

This isn't new to SPDY: HTTP/1.1 has keep-alive.

~~~
brianpgordon
You're right, I was confusing keep-alive with pipelining, which browsers
typically don't support.

