Hacker News
The best practices of HTTP1 are harmful in a HTTP2 world (mattwilcox.net)
123 points by MozMorris 1014 days ago | 60 comments



This article makes too many assumptions for my taste. Some other considerations that spring to mind:

- A sprite tends to have fewer bytes than separate images, because of image format overhead and better compression when combining similar things.

- The same may be true for zipped CSS/JS.

- What about CSS/JS/image parsing overhead?

- Even though HTTP2's per-request overhead is substantially lower than HTTP1's, making fewer HTTP requests still shouldn't make your site slower.

So the best practices will be outdated. Harmful? Not so sure.


The other thing missing is how this interacts with CDNs. Concatenating my whole site's CSS into one file isn't just to decrease round trips; it's because there's only one URL to cache (for both the CDN and the client browser) for every page.


That's actually harmful for cache performance. Ideally you want many small, granular caches that only expire when the specific content changes to get the highest cache hit rates: if you concatenate everything together, making changes to a single file requires redownloading the entire bundle.

(It's still faster under HTTP/1.1 to concatenate because of the overhead of making multiple HTTP requests and the parallelization limit, but one of the perf gains of HTTP2 is that since extra requests are cheap caches can be targeted and granular. Changes to a single script won't necessitate redownloading a giant concatenated bundle — you only need to download the single script that got updated.)
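To make that concrete: the usual way to get granular, long-lived caches is content-hashed (fingerprinted) filenames, so a change to one file only invalidates that file's URL. A minimal sketch in Python (illustrative only, not any particular build tool):

```python
import hashlib

def fingerprint(name, content):
    """Derive a cache-busting filename from the file's content.

    When the content changes, the name changes, so caches holding the
    old version simply miss; unchanged files keep their long-lived
    cache entries.
    """
    digest = hashlib.sha256(content).hexdigest()[:12]
    stem, dot, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{name}.{digest}"

old = fingerprint("app.js", b"console.log('v1');")
new = fingerprint("app.js", b"console.log('v2');")
same = fingerprint("util.js", b"export const x = 1;")
# Editing app.js changes only app.js's URL; util.js stays cached.
assert old != new
assert same == fingerprint("util.js", b"export const x = 1;")
```

Under HTTP/1.1 you'd hash the whole concatenated bundle, so every edit invalidates everything; under HTTP2 each small file carries its own fingerprint and expires independently.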


This is 100% right. One additional point about the importance of granularity to effective cache management: if you bundle rarely accessed resources together with frequently accessed resources in the same cache entry, those rarely accessed resources are taking up space that could be used for other data. You may actually be causing cache misses and additional network traffic!


That's only true if you're updating your assets faster than your cache TTL. I suppose if you're a continuous deployment shop that deploys to prod 25 times a day, that's a concern, but not if you deploy once a month.


Cache granularity absolutely matters regardless of your cache TTL, because as I mentioned in my other reply in this thread, by bundling rarely accessed and frequently accessed resources together you are pushing other frequently accessed resources out of the cache.


It's true that images may compress more efficiently in some cases when combined into a sprite sheet, but in my experience the scenario described by the article is very likely: your sprite sheets will end up containing images that the current page doesn't need. That has significant costs in terms of bandwidth usage, memory usage on the client side, and CPU / energy usage on the client side (for decoding unneeded image data).

Further, if any image on the sprite sheet is currently visible, the entire sprite sheet must remain in memory, when otherwise the browser could free the memory of all the non-visible images. And it may sometimes be necessary to use a much more expensive drawing path when drawing sprites to ensure that pixels from one image don't bleed into another image.

These negative effects will be felt most severely on resource-constrained mobile devices, where it matters most.

One should always measure when making decisions about performance, but in an HTTP2 world my recommendation would be to avoid sprites in most cases.


Sprite sheets contain images the browser might need. Essentially it's a preloading technique, so you won't be slowed down waiting for something to load, even if it takes half a second.

Also, most if not all browsers use the GPU to render web pages. And spriting actually comes from the gamedev/GPU world [1], where many textures are baked/combined into one big texture because it's efficient from a performance/memory-layout point of view.

A final thought: what about server IO becoming a bottleneck when it needs to read hundreds of small files from disk for each request?

[1] http://www.blackpawn.com/texts/lightmaps/


HTTP2 supports server push, which should serve your needs for preloading just fine. (And there are other approaches as well; that's just one example.)

Browsers generally use the GPU for compositing web pages, but the CPU still generally does most of the rendering, and that frequently includes at least some of the image rendering. That's not actually relevant, though. The problem is bleeding; see here [1] for an example of someone encountering it in a gamedev context.

So how do you solve bleeding? If you read the answer to that Stack Overflow question, you'll see that it involves correctly setting up the data in the texture to avoid the issue. The problem is that browsers cannot assume that you've done that. The workaround depends on which graphics backend (OpenGL, D2D, etc.) is in use, but it can sometimes involve making a temporary copy of the region of the texture you're going to draw, which is obviously expensive.

As for server IO being slowed down by reading hundreds of small files from disk, I'd expect a properly configured server to be serving those files from memory, either explicitly or implicitly through the OS's filesystem cache.

[1] http://stackoverflow.com/questions/7894802/opengl-texture-at...


Agreed. As someone who has done a significant amount of work to sprite our images and concatenate our JS/CSS files, this title, and most of the content, scared me. The only thing I haven't pushed through is different domains for static assets (though I have them cached for a year, with a query string as a cache-buster for when we deploy new JS/CSS). None of this, in an HTTP2 context, seems like it would be extremely, or at all for that matter, harmful. Furthermore, with our aggressive caching the request only needs to be made a single time (assuming the user hasn't cleared their cache).

I welcome HTTP2, but I'm not too sure on the timeline for rollout and whether it will end up being an "IE6"-type thorn in my side. Even if we are split 50-50 between HTTP1 and HTTP2, it sounds like the best approach is to keep doing what you are doing... Either way, I don't think spriting/concatenating/minifying is going anywhere anytime soon.


I think if there's a significant enough uptake on HTTP/2, we may get to a point where we start feeling that it's not worth optimizing for HTTP/1.1 anymore. Obviously HTTP/1.1 will still just work, but the HTTP/2 approach _can_ potentially work much more elegantly, when all the tooling is in place, and I definitely believe it will be preferable.

I agree that the existing approaches are very much valid though, but I also look forward to not having to concatenate javascript files anymore. We've developed a lot of tooling to work around issues that stem from that, and it doesn't address the problem of needing a varying set of javascript files on a per-page basis easily.


I've said this before and I'm still not certain, but given that all browsers on desktops now auto update, and that browsers on phones and tablets have a short shelf life because those devices are far more disposable, I can't see how we're going to end up in an IE6 scenario.


A few reasons why concatenating CSS and JS might be harmful under HTTP2. Firstly, it means that the code must be downloaded serially. Given that the per-request overhead is minimal for HTTP2, it's more efficient to download the files separately in parallel. Second, in most cases you don't actually depend on all the CSS/JS for the user to start interacting with the page so keeping things separate can mean that the perceived response time is shorter. Finally, separate resources can result in less cache invalidation when assets are changed which could make a big difference for repeat visitors.


> better compression when combining similar things.

Or, as I like to put it: "Birds of a feather compress better together."


Shouldn't a single request be multiplex-able, too? Isn't BitTorrent just one example of such a single-logical-request, multiple-connection protocol?


Several comments are misinterpreting the article, I think, as saying that your site will be slower than before when you switch to HTTP2 if you're still using the old techniques. In fact, nearly all sites will just magically speed up thanks to basic improvements like header compression.

But, by continuing to use the old hacks, your site won't be as fast as it could be. It's an opportunity cost. Some of those opportunities being:

- increased cache granularity (avoids invalidating a whole sprite or concatenated bundle when just a single part changes)

- parallel downloading of files that were previously bundled into one file

- fewer DNS lookups, now that you're not sharding

- less energy/memory usage in the client because you're not decoding/remembering whole sprites

And the subtlest, biggest win:

- simplifying your build process.


Nearly all HTTP 1.1 servers use gzip. Does that mean that minification for the purposes of reducing payload size is also unnecessary there? (serious question)

Edit: answering my own question: http://stackoverflow.com/a/807161
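The linked answer's conclusion is easy to check empirically: gzip removes a lot of the redundancy, but minification still saves bytes on top of it. A quick sketch (the "minifier" here is just a crude regex stand-in, not a real tool):

```python
import gzip
import re

# A toy JS snippet with the kind of redundancy minifiers remove.
source = """
// Compute the total price of a shopping cart.
function computeTotalPrice(shoppingCartItems) {
    var totalPrice = 0;
    for (var i = 0; i < shoppingCartItems.length; i++) {
        totalPrice += shoppingCartItems[i].price;
    }
    return totalPrice;
}
""" * 20  # repeat to simulate a larger file

# Crude stand-in for a minifier: drop comments, collapse whitespace.
minified = re.sub(r"//[^\n]*", "", source)
minified = re.sub(r"\s+", " ", minified)

raw, raw_min = len(source.encode()), len(minified.encode())
gz, gz_min = len(gzip.compress(source.encode())), len(gzip.compress(minified.encode()))
print(raw, gz, raw_min, gz_min)
# gzip alone shrinks the file dramatically; gzipping the minified
# version is typically smaller still -- the savings stack.
```

Real minifiers also shorten identifiers and rewrite code, so the gap in practice is usually larger than this toy suggests.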


Every popular HTTP server supports gzip compression, but you still have to turn it on, and a surprising number of sites don't bother.
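In nginx, for example, turning it on is only a few directives (an illustrative snippet; exact values are a tuning choice and defaults vary by distribution):

```nginx
# Enable gzip for responses; nginx only compresses text/html by default,
# so list the other text types explicitly.
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 256;   # skip tiny responses where gzip doesn't pay off
gzip_comp_level 5;     # moderate CPU/size trade-off
```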


IIS, for instance, makes you install "dynamic" compression separately from static compression, and it's not on by default. They also have some stupid frequency measurement, so you can end up with slow first loads until IIS decides it's time to compress files. And changing it requires modifying some global config deep under system32.

Supposedly they separate all these little features out (no telnet client or tftp client!) for security. But it's really a cover-your-ass style that results in worse security for the many users who just install everything trying to make stuff work. Dynamic compression being separate is a great example of that.


Given how gzip is the single most effective way of reducing bandwidth consumption and improving site-loading speeds, I think we can conclude that those who don't bother don't really care about performance either way.

And in those cases HTTP 1.x vs HTTP 2.x is a moot discussion anyway.


Even if site administrators don't care, users do.


Is there a good way to make a website that's available over both HTTP/1.1 and HTTP/2 and does the right thing in both cases? I feel like it'll be many years before I can design a website that's available over HTTP/2 only.

For instance, is there an asset pipeline that will concatenate and minify all my JS for HTTP/1.1 clients, but minify my JS separately for HTTP/2 ones, and build versions of my HTML page that references the two different assets depending on which HTTP protocol is in use?


I don't think the complexity would be worth it in most cases.

The article is misleading on this, but those HTTP/1.1 best practices aren't slower when served over HTTP2. HTTP2 will still be faster than HTTP/1.1.

- Spriting and concatenation will not be worse under HTTP/2, just (mostly) unnecessary.

- Splitting content across multiple domains will be 'harmful', in that you're enduring multiple TCP handshakes instead of one. But this is no worse than HTTP/1.1.

- Minification is unchanged. It will still decrease the download size. Although I sometimes find that the difference after compression is trivial on many modern sites, so is often not worth the decrease in readability/debuggability.


Sure, but as long as I'm supporting a good chunk of HTTP/1.1 users, then I don't get the benefits of the HTTP/2 architecture, right?

I could buy that what I should do is wait a few years, until the majority of my users are HTTP/2 instead of HTTP/1.1, and then optimize everything for HTTP/2 and still work (slowly) on HTTP/1.1. But at the moment it's not clear why I should care: my options seem to be really fast on HTTP/2 and really slow on HTTP/1.1, or kinda fast on HTTP/2 and kinda fast on HTTP/1.1.


You'll still see performance benefits of HTTP2 while supporting HTTP/1.1 content.

HTTP2 requests have (sometimes significantly[1]) less overhead, and most sites following good HTTP/1.1 practices still make many requests.

Open the network panel in your browser and view a few large sites. Despite minimizing requests with sprites, concatenation etc, most still make dozens of requests, with some large sites pushing over a hundred.

(for example, I just loaded a page for a single tweet on Twitter and it involved twenty requests, with a size over 2MB.)

I think your list of options is incorrect:

- You can optimize for HTTP/1.1 and it will be fast over both protocols.

- Or don't optimize, and it will be fast over HTTP2 and slow over HTTP/1.1. The main benefit is saving development time & complexity. This will not be a worthy tradeoff until HTTP2 is more widespread among your users.

[1] The reduction in latency can be massive, for instance, as the HTTP2 server can push resources immediately without waiting for the browser to request them.


Sure, there's still an advantage for the user to upgrade to HTTP/2. But it sounds like I shouldn't be particularly optimizing for HTTP/2 quite yet? That's what I'm interested in: I'm not usually a web developer so I don't pay close attention, but I'd like to know how to design websites that aren't bad in general.

It sounds like I'm currently best off designing for HTTP/1.1 (with all the usual hacks) and knowing that HTTP/2 will perhaps make things better, instead of designing in any way for HTTP/2, which will make things worse for HTTP/1.1. That seems to be not what this article is saying.

Should I be caring about designing for HTTP/2 yet, or should I just ignore it for a few years?


With ASP.NET, this is easy enough for JS and CSS bundling since that can be done as part of the runtime handler. (You keep the files on disk all normal, then specify which ones belong to which bundle names. ASP.NET does the rest for you.) So a simple if/else should be enough (I'm actually doing this for debugging on a site, just not based off of HTTP/2.)

Caching could be an issue, but if you're fronting with a reverse proxy, you could append another querystring parameter for HTTP2 and then do caching at the proxy level.
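The if/else itself is tiny; the sketch below shows the idea in Python (hypothetical helper names; in practice the protocol would come from your server/framework, e.g. a reverse proxy forwarding it in a header):

```python
# Serve different asset lists depending on the negotiated protocol:
# one concatenated bundle for HTTP/1.1 clients, granular files for HTTP/2.

BUNDLED = ["/assets/site.min.js"]                  # HTTP/1.1: one request
GRANULAR = ["/assets/nav.js", "/assets/cart.js",
            "/assets/checkout.js"]                 # HTTP/2: granular caching

def script_urls(protocol):
    """Return the <script> URLs to emit for this request's protocol."""
    return GRANULAR if protocol.startswith("HTTP/2") else BUNDLED

assert script_urls("HTTP/2") == GRANULAR
assert script_urls("HTTP/1.1") == BUNDLED
```

Both variants would be minified either way; only the grouping differs, which keeps the template logic to a single branch.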


> It can leave the connection open for re-use for very extended periods of time, so there's no need for that costly handshake that HTTP1 requires for every request.

HTTP1 supports pipelining requests.

> HTTP2 also uses compression, unlike HTTP1, and so the size of the request is significantly smaller - and thus faster.

'significantly'? How much is that?

> HTTP2 multiplexes; it can send and receive multiple things at the same time over one connection.

If that one connection stalls, multiple things won't be transferred.

Plus it introduces new protocol overhead.


HTTP 1.1 supports pipelining requests but in reality it's disabled everywhere.


> HTTP2 also uses compression, unlike HTTP1, and so the size of the request is significantly smaller - and thus faster.

Also, an HTTP/1.1 server, when configured properly, will use compression as well.


It does support compression of the message body, but not of the headers.

They designed a new compression algorithm called HPACK specifically to compress headers in HTTP2: https://http2.github.io/http2-spec/compression.html

In small HTTP/1.1 requests the headers can be much larger than the content, which is part of the motivation for combining files into one request.


Considering the best practice for HTTP/1.1 is to create a few very large files - why would compressing headers really make a difference?


HTTP/2.0 does not only compress headers; it also keeps a context of headers that were already sent. So if you include a specific header with the same value in multiple requests (like, say, a cookie), it won't actually be re-sent on the wire. The other side will know that the value hasn't changed.
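A toy illustration of that indexing idea (this is NOT the real HPACK wire format, which also has a fixed static table and Huffman coding; it just shows why repeated headers cost almost nothing after the first request):

```python
class ToyHeaderEncoder:
    """Simplified HPACK-style dynamic table: once a (name, value) pair
    has been sent on a connection, later requests refer to it by a
    small index instead of repeating the full bytes."""

    def __init__(self):
        self.table = {}  # (name, value) -> index

    def encode(self, headers):
        out = []
        for name, value in headers:
            key = (name, value)
            if key in self.table:
                out.append(f"idx:{self.table[key]}")   # a few bytes
            else:
                self.table[key] = len(self.table) + 1
                out.append(f"lit:{name}:{value}")      # full literal
        return ";".join(out)

enc = ToyHeaderEncoder()
headers = [(":method", "GET"), ("cookie", "session=" + "x" * 400)]
first = enc.encode(headers)    # sends the 400-byte cookie literally
second = enc.encode(headers)   # the same cookie is now just an index
assert len(second) < len(first) // 10
```

This is why large cookies, which HTTP/1.1 re-sends in full on every request, stop mattering much on a long-lived HTTP2 connection.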


"Best practice" can take a lot of work and often doesn't happen in real life. HTTP/2 gets you this advantage for free.


If the headers are big enough, then you may need multiple packets to transmit the request.

Also, some TCP stacks (or an extension option?) will allow data in the first packet, so you don't have to wait for the handshake. Perhaps this doesn't work if the data doesn't fit in a single packet?


Aside from the other responses to this comment, this goes towards making HTTP2 requests cheap, which enables a lot of other optimizations that were antipatterns under HTTP/1.1.


The body, but not the headers, right?

Does http2 also compress headers?


It does


> HTTP1 supports pipelining requests.

Head-of-line blocking makes HTTP/1.1 pipelining perform poorly. Multiplexing is a more performant solution than pipelining.

Here's some worthwhile reading on the subject: http://http2.github.io/faq/#why-is-http2-multiplexed

> 'significantly'? How much is that?

Depends. In HTTP/1.1 request headers weren't compressed, and you generally have a ~1500 byte limit for a request to fit in a single packet. If you crossed that threshold, and if compression brings you back under (it certainly might), you could see 2x or better perf gains on time-to-first-byte depending on how many packets your initial request was being broken into.

> If that one connection stalls, multiple things won't be transferred.

While true, it's still often easier to optimize a single saturated connection than multiple ones for a variety of reasons (slow start, congestion, etc). More reading on the subject: http://http2.github.io/faq/#why-just-one-tcp-connection

Personally I'm really excited about HTTP2 being deployed. It makes the web fast by default — no need to concatenate or domain shard once it's widely deployed — and adds extra opportunities for performance (e.g. server push) that we haven't seen yet.


RE: The last one

QUIC is a solution for that - It's Google's answer to TCP, and dealing with multiplexed connections. I recommend taking a look.


> The long and short of it is; when you build a front-end to a website, and you know it's going to be served over HTTP2...

Well, that's the rub, isn't it?

When are we going to possibly be in a state where we know our website is going to be served over HTTP2?

Not for a while, probably? It's not just waiting until all the browsers support HTTP2; it's waiting until the browsers that _don't_ support HTTP2 are a small minority.


This info is in some replies further down the page, but just to highlight: your optimized HTTP1 markup will not be slower when you move it to HTTP2, but when you're on HTTP2, you might get even more speed benefit by structuring things differently.

A reluctance to change your markup should not be a reason to resist a change to HTTP2, if and when it is actually available.


I think I'm missing something here. By "markup" do you mean the content of text/html response bodies? By "optimized HTTP1 markup" do you mean minified HTML?


I think what is meant is that a lot of HTML is written assuming concatenated assets, image sprites, CDNs, etc., but a lot of this is redundant now. HTTP/2 might mean a rewrite of your HTML is required, not just changing some things on your server.


Even if you decide to de-concatenate some CSS and JavaScript files and serve them from the site's main domain, the changes to your HTML will be minuscule (some extra <link>/<script> elements and changes to some src attributes). The bulk of the changes would occur in the linked assets themselves.

I guess if you're embedding data URIs in your HTML to cut down on requests and you want to switch them to normal HTTP URLs then there could be nontrivial markup changes, but I hope it's not common for web developers to do such things by hand.

I think the word "markup" just threw me off. If you replace it with something more general that also covers CSS, JavaScript, images, etc then I wholeheartedly agree with CognitiveLens's comment. The point is that you shouldn't be afraid to throw away obsolete performance hacks if it increases developer productivity and/or leads to a better experience for your userbase.


I expect to have to support HTTP/1 for a long time on my sites, but it would be nice to start using HTTP2 with browsers that can handle it. I can see two problems:

1) How can they coexist on the same server? I googled a little, but maybe with the wrong keywords. I found this, but it's pretty shallow on details: http://nginx.com/blog/how-nginx-plans-to-support-http2/

2) I still want to serve HTTP1 optimized content to HTTP1 clients. I hope web servers are going to let us serve different content based on protocol version. Maybe the application server needs to know about it too.

Anybody here with first hand experience?


> how can they coexist on the same server?

At the protocol level, this is covered by section 3 of the HTTP/2 spec: http://http2.github.io/http2-spec/index.html#starting

For https:// URIs, the client and server agree on which version to use as part of the TLS negotiation. For http:// URIs, major browsers won't be using HTTP/2 at all, but if they did, the spec defines a mechanism similar to websockets: the client makes an HTTP/1.1 request with a special Upgrade header, and the server responds by changing protocols in a coordinated way.
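For reference, the cleartext ("h2c") upgrade exchange defined in the spec looks roughly like this (the HTTP2-Settings value, a base64url-encoded SETTINGS payload, is elided here):

```
GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

[HTTP/2 frames follow on the same connection]
```

If the server doesn't support HTTP/2, it simply ignores the Upgrade header and answers the request as normal HTTP/1.1, which is what makes coexistence safe.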


Thanks


I'm having a hard time understanding the argument.

Let's say I have a site that uses best practices for HTTP/1.1 and loads within 2 seconds without caching. It sounds like the article is saying that switching to HTTP/2 will make my site slower.

What I've read so far in other places is that the site should by default (no modifications) be faster; it just won't be optimized until I remove all my hacks for HTTP/1.1.


I am far from an expert, but from what I understand an HTTP/2 site with HTTP 1.1 'hacks' for performance will, in most cases:

- Run slower than the same site with the HTTP 1.1 'hacks' removed.

- Run faster than the same site serving via HTTP 1.1.


Right, the practices are 'harmful' in an HTTP/2 world, meaning, in that world, these practices harm you rather than help you. You are correct in assuming that your site and its resources would still load faster in HTTP/2.


As a follow-up question, has anyone thought about / worked through supporting both HTTP/1.1 and HTTP/2 simultaneously?


Serving from a cookie-less domain, still good?

I understand the points made about sharding and concatenating (and also the limits of the arguments), but the short article just mentions cookie-less domains and then seems to forget about them. So is this practice still valid? Or is there some feature in HTTP2 that also obsoletes it?


Another comment says that repeated headers aren't re-sent on the same connection. That would eliminate sending cookies except the first time, eh? So using multiple domains might hurt, since you need extra DNS lookups and TCP connections.


This is true although if you're not doing domain sharding and just have your static assets on a single domain, I don't think this is a concern. Certainly the performance benefit of having your assets served by a CDN on a different domain will outweigh the cost of the extra connection. (Assuming your CDN supports HTTP2.)


Thanks, yes, when this is the case, that would eliminate the cookie-less domain reason.


http://packetpushers.net/show-224-http2-its-the-biggest-netw... gives some info on HTTP2 for people who are rather new to it. It's not a whole lot of in-depth technical info, but it's well worth listening to.


Concatenating CSS and JS together is still a win if you have a globally optimized build that dead strips unused code and performs optimizations based on global knowledge.


And don't forget to set up HTTP2 on your server before removing all these hacks.


Slower compared to what? As far as I can tell, the sites wouldn't get slower than before; they just would not use the full potential of HTTP2, which makes them slower compared to the optimum? Hmm.



