
The best practices of HTTP1 are harmful in a HTTP2 world - MozMorris
https://mattwilcox.net/web-development/http2-for-front-end-web-developers
======
molf
This article makes too many assumptions for my taste. Some other
considerations that spring to mind:

\- A sprite tends to have fewer bytes than separate images, because of image
format overhead and better compression when combining similar things.

\- The same may be true for zipped CSS/JS.

\- What about CSS/JS/image parsing overhead?

\- Even though HTTP2's request overhead is substantially lower than HTTP1's,
making fewer HTTP requests still shouldn't make your site slower.

So the best practices will be outdated. Harmful? Not so sure.

~~~
joshstrange
Agreed, as someone who has done a significant amount of work to sprite our
images and concat our js/css files, this title, and most of the content, scared
me. The only thing I haven't pushed through is different domains for static
assets (though I have them cached for a year with a query string as a cache-
buster for when we deploy new js/css). None of this, in an HTTP2 context, seems
like it would be extremely, or at all for that matter, harmful. Furthermore,
with our aggressive caching the request only needs to be made a single time
(assuming the user hasn't cleared their cache).

I welcome HTTP2 but I'm not too sure on the timeline for rollout and whether it
will end up being an "IE6"-type thorn in my side. Even if we are split 50-50
between HTTP1 and HTTP2, it sounds like the best approach is to keep doing what
you are doing... Either way I don't think spriting/concatenating/minifying is
going anywhere anytime soon.

~~~
treve
I think if there's a significant enough uptake on HTTP/2, we may get to a
point where we start feeling that it's not worth optimizing for HTTP/1.1
anymore. Obviously HTTP/1.1 will still just work, but the HTTP/2 approach
_can_ potentially work much more elegantly, when all the tooling is in place,
and I definitely believe it will be preferable.

I agree that the existing approaches are still very much valid, but I also
look forward to not having to concatenate javascript files anymore. We've
developed a lot of tooling to work around issues that stem from that, and it
doesn't easily address the problem of needing a varying set of javascript
files on a per-page basis.

------
callum85
Several comments are misinterpreting the article, I think, as saying that your
site will be slower _than before_ when you switch to HTTP2 if you're still
using the old techniques. In fact, nearly all sites will just magically speed
up thanks to basic improvements like header compression.

But, by continuing to use the old hacks, your site won't be as fast _as it
could be_. It's an opportunity cost. Some of those opportunities being:

\- increased cache granularity (avoids invalidating a whole sprite or
concatenated bundle when just a single part changes)

\- parallel downloading of files that were previously bundled into one file

\- fewer DNS lookups, now that you're not sharding

\- less energy/memory usage in the client because you're not
decoding/remembering whole sprites

And the subtlest, biggest win:

\- simplifying your build process.

------
nailer
Nearly all HTTP 1.1 servers use gzip. Does that mean that minification for the
purposes of reducing payload size is also unnecessary there? (serious
question)

Edit: answering my own question:
[http://stackoverflow.com/a/807161](http://stackoverflow.com/a/807161)
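A quick way to see why the answer is "minification still helps": compress a
snippet both ways. This is only a rough stdlib sketch with a made-up
JavaScript function, not a benchmark, but it shows that gzip-style compression
and minification are complementary:

```python
import zlib

# A small hypothetical JavaScript snippet, long-form and "minified" by hand.
verbose = b"""
function addNumbers(firstNumber, secondNumber) {
    // Add the two numbers together and return the result.
    var result = firstNumber + secondNumber;
    return result;
}
"""
minified = b"function a(b,c){return b+c}"

# DEFLATE (what gzip uses) shrinks both, but the minified source still ends
# up smaller: compression cannot remove identifiers, comments, or whitespace
# the way a minifier can, it can only encode them more cheaply.
for label, src in [("verbose", verbose), ("minified", minified)]:
    print(label, len(src), "->", len(zlib.compress(src, 9)))
```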

~~~
tootie
Every popular HTTP server _supports_ gzip compression, but you still have to
turn it on, and a surprising number of sites don't bother.

~~~
josteink
Given how gzip is the single most effective way of reducing BW consumption and
site-loading speeds, I think we can conclude that those who don't bother don't
really care about performance either way.

And in those cases HTTP 1.x vs HTTP 2.x is a moot discussion anyway.

~~~
tootie
Even if site administrators don't care, users do.

------
geofft
Is there a good way to make a website that's available over both HTTP/1.1 and
HTTP/2 and does the right thing in both cases? I feel like it'll be many years
before I can design a website that's available over HTTP/2 only.

For instance, is there an asset pipeline that will concatenate and minify all
my JS for HTTP/1.1 clients, but minify my JS separately for HTTP/2 ones, and
build versions of my HTML page that references the two different assets
depending on which HTTP protocol is in use?

~~~
jomohke
I don't think the complexity would be worth it in most cases.

The article is misleading on this, but those HTTP/1.1 best practices aren't
slower when served over HTTP2. HTTP2 will still be faster than HTTP/1.1.

\- Spriting and concatenation will not be worse under HTTP/2, just (mostly)
unnecessary.

\- Splitting content across multiple domains will be 'harmful', in that you're
enduring multiple TCP handshakes instead of one. But this is no worse than
HTTP/1.1.

\- Minification is unchanged. It will still decrease the download size.
Although I sometimes find that the difference _after compression_ is trivial
on many modern sites, so is often not worth the decrease in
readability/debuggability.

~~~
geofft
Sure, but as long as I'm supporting a good chunk of HTTP/1.1 users, then I
don't get the benefits of the HTTP/2 architecture, right?

I could buy that what I should do is wait a few years, until the majority of
my users are HTTP/2 instead of HTTP/1.1, and then optimize everything for
HTTP/2 and still work (slowly) on HTTP/1.1. But at the moment it's not clear
why I should care: my options seem to be really fast on HTTP/2 and really slow
on HTTP/1.1, or kinda fast on HTTP/2 and kinda fast on HTTP/1.1.

~~~
jomohke
You'll still see performance benefits from HTTP2 while supporting HTTP/1.1
clients.

HTTP2 requests have (sometimes significantly[1]) less overhead, and most sites
following good HTTP/1.1 practices still make _many_ requests.

Open the network panel in your browser and view a few large sites. Despite
minimizing requests with sprites, concatenation etc, most still make dozens of
requests, with some large sites pushing over a hundred.

(for example, I just loaded a page for a single tweet on Twitter and it
involved twenty requests, with a size over 2MB.)

I think your list of options is incorrect:

\- You can optimize for HTTP/1.1 and it will be fast over both protocols.

\- Or don't optimize, and it will be fast over HTTP2 and slow over HTTP/1.1.
The main benefit is saving development time & complexity. This will not be a
worthwhile tradeoff until HTTP2 is more widespread among your users.

[1] The reduction in latency can be massive, for instance, as the HTTP2 server
can push resources immediately without waiting for the browser to request
them.
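For what server push looks like in practice: nginx's ngx_http_v2_module, for
example, later gained an http2_push directive. A minimal sketch, with
hypothetical hostname and paths:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;   # hypothetical

    location = /index.html {
        # Start sending the stylesheet before the browser
        # has even parsed the HTML that references it.
        http2_push /css/site.css;
    }
}
```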

~~~
geofft
Sure, there's still an advantage for the user to upgrade to HTTP/2. But it
sounds like I shouldn't be particularly optimizing for HTTP/2 quite yet?
That's what I'm interested in: I'm not usually a web developer so I don't pay
close attention, but I'd like to know how to design websites that aren't bad
in general.

It sounds like I'm currently best off designing for HTTP/1.1 (with all the
usual hacks) and knowing that HTTP/2 will perhaps make things better, instead
of designing in any way for HTTP/2, which will make things worse for HTTP/1.1.
That seems to be the opposite of what this article is saying.

Should I be caring about designing for HTTP/2 yet, or should I just ignore it
for a few years?

------
lispm
> It can leave the connection open for re-use for very extended periods of
> time, so there's no need for that costly handshake that HTTP1 requires for
> every request.

HTTP1 supports pipelining requests.

> HTTP2 also uses compression, unlike HTTP1, and so the size of the request is
> significantly smaller - and thus faster.

'significantly'? How much is that?

> HTTP2 multiplexes; it can send and receive multiple things at the same time
> over one connection.

If that one connection stalls, multiple things won't be transferred.

Plus it introduces new protocol overhead.

~~~
currysausage
_> HTTP2 also uses compression, unlike HTTP1, and so the size of the request
is significantly smaller - and thus faster._

Also, an HTTP/1.1 server, when configured properly, will use compression as
well.

~~~
jomohke
It does support compression of the message body, but not of the headers.

They designed a new compression algorithm called HPACK specifically to
compress headers in HTTP2:
[https://http2.github.io/http2-spec/compression.html](https://http2.github.io/http2-spec/compression.html)

In small HTTP/1.1 requests the headers can be much larger than the content,
which is part of the motivation for combining files into one request.

~~~
Illniyar
Considering the best practice for HTTP/1.1 is to create a few very large files
- why would compressing headers really make a difference?

~~~
rakoo
HTTP/2.0 does not only compress headers, it also keeps a context of headers
that were already sent. So if you want to include a specific header with the
same value in multiple requests (like, say, a cookie), it won't actually be
re-sent _on the wire_. The other side will know that the value hasn't changed.
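The effect can be sketched with nothing but the standard library. This is only
a loose analogy (zlib standing in for HPACK's dynamic table, with made-up
header values), but it shows why a repeated header block costs almost nothing
on the wire the second time:

```python
import zlib

# Typical request headers, hypothetical values; note the bulky cookie.
headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Cookie: session=a3f1c9d2e8b7a6f5d4c3b2a1\r\n\r\n"
)

# A single stateful compressor is a rough stand-in for HPACK's dynamic
# table: it remembers what it has already seen on this connection.
comp = zlib.compressobj(9)
first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

# The second, identical header block costs only a handful of bytes.
print(len(headers), len(first), len(second))
```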

------
jrochkind1
> The long and short of it is; when you build a front-end to a website, and
> you know it's going to be served over HTTP2...

Well, that's the rub, isn't it?

When are we going to possibly be in a state where we _know our website is
going to be served over HTTP2_?

Not for a while, probably? It's not just waiting until all the browsers
support HTTP2; it's waiting until the browsers that _don't_ support HTTP2 are
a small minority.

------
CognitiveLens
This info is in some replies further down the page, but just to highlight:
your optimized HTTP1 markup will not be slower when you move it to HTTP2, but
when you're on HTTP2, you might get even more speed benefit by structuring
things differently.

A reluctance to change your _markup_ should not be a reason to resist a change
to HTTP2, if and when it is actually available.

~~~
matt_kantor
I think I'm missing something here. By "markup" do you mean the content of
text/html response bodies? By "optimized HTTP1 markup" do you mean minified
HTML?

~~~
thomasfoster96
I think what is meant is that a lot of HTML is written assuming concatenated
assets, image sprites, CDNs, etc., but a lot of this is redundant now. HTTP/2
might mean a rewrite of your HTML is required, not just changing some things
on your server.

~~~
matt_kantor
Even if you decide to de-concatenate some CSS and JavaScript files and serve
them from the site's main domain, the changes to your HTML will be minuscule
(some extra <link>/<script> elements and changes to some src attributes). The
bulk of the changes would occur in the linked assets themselves.

I guess if you're embedding data URIs in your HTML to cut down on requests and
you want to switch them to normal HTTP URLs then there could be nontrivial
markup changes, but I hope it's not common for web developers to do such
things by hand.

I think the word "markup" just threw me off. If you replace it with something
more general that also covers CSS, JavaScript, images, etc then I
wholeheartedly agree with CognitiveLens's comment. The point is that you
shouldn't be afraid to throw away obsolete performance hacks if it increases
developer productivity and/or leads to a better experience for your userbase.

------
pmontra
I expect to have to support HTTP1 for a long time on my sites but it would be
nice to start using HTTP2 with browsers that can handle it. I can see two
problems:

1) How can they coexist on the same server? I googled a little but maybe with
the wrong keywords. I found this but it's pretty shallow on details:
[http://nginx.com/blog/how-nginx-plans-to-support-http2/](http://nginx.com/blog/how-nginx-plans-to-support-http2/)

2) I still want to serve HTTP1 optimized content to HTTP1 clients. I hope web
servers are going to let us serve different content based on protocol version.
Maybe the application server needs to know about it too.

Anybody here with first hand experience?

~~~
teraflop
> how can they coexist on the same server?

At the protocol level, this is covered by section 3 of the HTTP/2 spec:
[http://http2.github.io/http2-spec/index.html#starting](http://http2.github.io/http2-spec/index.html#starting)

For [https://](https://) URIs, the client and server agree on which version to
use as part of the TLS negotiation. For [http://](http://) URIs, major
browsers won't be using HTTP/2 at all, but if they did, the spec defines a
mechanism similar to websockets: the client makes an HTTP/1.1 request with a
special Upgrade header, and the server responds by changing protocols in a
coordinated way.
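In nginx terms, both protocols can share one server block, since ALPN settles
the version per connection. A minimal sketch (the paths and the
`$http2`-based routing are purely illustrative) that also gives a hook for
pmontra's second question:

```nginx
server {
    listen 443 ssl http2;    # HTTP/1.1 clients still negotiate 1.1 via ALPN
    server_name example.com; # hypothetical

    # $http2 is an empty string on HTTP/1.1 connections, so it can
    # route clients to protocol-specific builds of your assets.
    location /assets/ {
        if ($http2) {
            rewrite ^/assets/(.*)$ /assets-h2/$1 last;
        }
        # HTTP/1.1 clients fall through to the concatenated bundles here.
    }
}
```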

~~~
pmontra
Thanks

------
omgitstom
I'm having a hard time understanding the argument.

Let's say I have a site that uses best practices for HTTP/1.1 and loads
within 2 seconds without caching. It sounds like the article is saying that
switching to HTTP/2 will make my site slower.

What I've read so far in other places is that the site should by default (no
modifications) be faster; it just won't be optimized until I remove all my
hacks for HTTP/1.1.

~~~
mordocai
I am far from an expert, but from what I understand an HTTP/2 site with HTTP
1.1 'hacks' for performance will:

Run slower than the same site with the HTTP 1.1 'hacks' removed

Run faster than the same site serving via HTTP 1.1.

In most cases.

------
PythonicAlpha
Serving from a cookie-less domain, still good?

I understand the points made about sharding and concatenating (and also the
limits of the arguments), but the short article just mentions cookie-less
domains and then just seems to forget about them. So is this practice still
valid? Or is there some feature in HTTP2 that also obsoletes it?

~~~
MichaelGG
Another comment says that repeated headers aren't re-sent on the same connection.
Thus that'd eliminate sending cookies except the first time, eh? So using
multiple domains might hurt, since you need extra DNS lookups and TCP
connections.

~~~
michaelmior
This is true although if you're not doing domain sharding and just have your
static assets on a single domain, I don't think this is a concern. Certainly
the performance benefit of having your assets served by a CDN on a different
domain will outweigh the cost of the extra connection. (Assuming your CDN
supports HTTP2.)

------
noselasd
[http://packetpushers.net/show-224-http2-its-the-biggest-netw...](http://packetpushers.net/show-224-http2-its-the-biggest-network-thing-happening-on-the-internet-today-repost/)
gives some info on HTTP2 for people who are rather new to it. It's not a whole
lot of in-depth technical info, but well worth listening to.

------
cromwellian
Concatenating CSS and JS together is still a win if you have a globally
optimized build that dead strips unused code and performs optimizations based
on global knowledge.

------
EugeneOZ
And don't forget to set up HTTP2 on your server before removing all these
hacks.

------
bebopsbraunbaer
Slower compared to what? As far as I can tell the sites wouldn't get slower
than before, they just would not use the full potential of HTTP2, which makes
them slower compared to the optimum? Hmmm

