> HTTP/2 actively discourages the use of compression for secure websites. HTTP compression (gzip, deflate, ...) has been known to compromise the SSL/TLS security in the "breach" and "CRIME" attacks.
This is a patently wrong statement in an otherwise good article.
There are no known security vulnerabilities with HTTP/1.x's style of compression. That's because HTTP/1.x only supports compressing response bodies. During SPDY development, the same compression algorithms used for compressing HTTP responses (gzip/deflate) were applied to compress request and response headers. This is what led to the CRIME vulnerability. The solution was to still use compression, but with a different compression scheme, HPACK, which (glossing over a ton of technical details) allows compression while avoiding CRIME because "separate compression dictionaries are used for each source of data." HTTP/2 uses HPACK.
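To make the "separate dictionaries" point concrete, here's a minimal sketch using the third-party Python `hpack` package (my assumption; any HPACK implementation would do): each direction of an HTTP/2 connection gets its own header-compression context, so request headers never share a dictionary with response headers.

```python
# pip install hpack  (pure-Python HPACK implementation)
from hpack import Encoder, Decoder

# One encoder/decoder pair per direction: these contexts never mix.
req_enc, req_dec = Encoder(), Decoder()   # client -> server headers
res_enc, res_dec = Encoder(), Decoder()   # server -> client headers

wire_request = req_enc.encode([
    (":method", "GET"),
    (":path", "/account"),
    ("cookie", "session=abc123"),   # hypothetical header values
])
wire_response = res_enc.encode([
    (":status", "200"),
    ("content-type", "text/html"),
])

print(req_dec.decode(wire_request))    # decodes back to the request header list
print(res_dec.decode(wire_response))   # decodes back to the response header list
```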
Use of TLS compression is not recommended, but disabling it has always been a performance best practice anyway, since TLS compression is not context-aware.
For the love of god, keep using compression with your websites, whether HTTP/1.x, TLS + HTTP/1.x, or HTTP/2
> There are no known security vulnerabilities with HTTP/1.x's style of compression. That's because HTTP/1.x only supports compressing response bodies
You are completely wrong. The BREACH attack showed that you can be vulnerable to a compression oracle by merely compressing response bodies: http://breachattack.com/
> For the love of god, keep using compression with your websites, whether HTTP/1.x, TLS + HTTP/1.x, or HTTP/2
You can continue compressing files such as CSS that don't reflect any attacker-controlled content, but compression of dynamic HTML pages is likely to be insecure.
Theoretically, you can, as long as any secrets within the response body are represented as a "literal" (that is, uncompressed) within the compressed stream.
In practice, there isn't a way I know of to mark the relevant parts of the response body so that the compression step, which is usually done as a separate post-processing pass, knows to leave those parts uncompressed. And it would be very fragile: forgetting to mark a secret as "do not compress this" would still appear to work perfectly fine, but would silently reintroduce the vulnerability.
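For what it's worth, here's roughly what that could look like at the zlib level (an illustration only, not something typical gzip middleware exposes): flush around the secret so it sits in its own deflate block and can't share LZ77 back-references with surrounding attacker-influenced content, close in spirit to emitting it as a literal.

```python
import zlib

def compress_with_isolated_secret(before: bytes, secret: bytes, after: bytes) -> bytes:
    # Z_FULL_FLUSH resets the deflate dictionary, so the secret can neither
    # reference the attacker-influenced text before it nor be referenced by
    # the text after it -- the cross-matching BREACH relies on.
    c = zlib.compressobj()
    out = c.compress(before)
    out += c.flush(zlib.Z_FULL_FLUSH)
    out += c.compress(secret)
    out += c.flush(zlib.Z_FULL_FLUSH)
    out += c.compress(after)
    out += c.flush()  # Z_FINISH
    return out

# The result is still one valid zlib stream:
blob = compress_with_isolated_secret(b"<p>search: gadget</p>", b"csrf=8f2a9c41", b"<footer/>")
assert zlib.decompress(blob) == b"<p>search: gadget</p>csrf=8f2a9c41<footer/>"
```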
Simple enough in HTTP/2: lift the relevant parts out of the response body by turning them into separate resources, and refer to them in the original object by their URL. When the client then requests those dependent objects, deliver them uncompressed. (And server-hint/push the dependent objects if you can, of course.)
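A sketch of that approach using Python's standard http.server (the paths, token value, and page are my own invention; a real setup would do this in whatever framework or proxy serves the page): the main page stays gzipped, while the secret-bearing resource is served without compression.

```python
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = (b"<html><body><p>You searched for: gadget</p>"
        b"<script src='/csrf-token.js'></script></body></html>")
CSRF_TOKEN_JS = b"window.csrfToken = 'tok-8f2a9c41d7e3';"  # hypothetical secret

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/csrf-token.js":
            # The secret lives in its own resource, delivered uncompressed, so
            # its size can't leak how well it matches attacker-controlled input.
            self._send(CSRF_TOKEN_JS, "application/javascript", encoding=None)
        else:
            # The page that reflects user input is compressed as usual.
            self._send(gzip.compress(PAGE), "text/html", encoding="gzip")

    def _send(self, body, ctype, encoding):
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        if encoding:
            self.send_header("Content-Encoding", encoding)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```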
Your app is vulnerable if it includes content from the user (e.g. a GET query parameter or something from a POST request body) in the response, and includes secret info (e.g. an anti-CSRF token) in that same response.
Wow, that's scary. I imagine quite a lot of people are unaware of this.
I personally use SSL on my personal site just because I like the idea that readers can be sure the content is what I've sent, rather than because the information is private / sensitive. So I'm not personally concerned because this doesn't allow anyone to MITM the connection, just read it.
Not really. The theory behind the attack is that if the user-specified content is equal to the secret content, it will compress more effectively and have a smaller content length. The other content on the page doesn't really matter.
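You can see the length difference with nothing more than zlib (a toy illustration; the page template and token are made up):

```python
import zlib

SECRET = "csrf_token=8f2a9c41d7e3"  # made-up secret embedded in every response

def compressed_length(reflected: str) -> int:
    # A dynamic page that echoes attacker-controlled input next to the secret.
    page = (f"<p>You searched for: {reflected}</p>"
            f"<input type='hidden' name='csrf' value='{SECRET}'>")
    return len(zlib.compress(page.encode()))

# The matching guess duplicates the secret, so deflate back-references it and
# the body comes out shorter -- that size difference is the oracle.
print("matching guess:    ", compressed_length("csrf_token=8f2a9c41d7e3"))
print("non-matching guess:", compressed_length("csrf_token=a1b2c3d4e5f6"))
```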
Adding data of random length will make the attack more difficult, but won't defeat it entirely. This is called length hiding.
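Something like this (my own sketch; the pad size is arbitrary):

```python
import os, secrets, zlib

def compress_with_length_hiding(body: bytes) -> bytes:
    # Append a random-length blob of random bytes (hidden here in an HTML
    # comment) before compressing. The attacker now has to average over many
    # requests to see through the noise, but the underlying oracle remains.
    pad = os.urandom(secrets.randbelow(64)).hex().encode()
    return zlib.compress(body + b"<!-- " + pad + b" -->")
```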
The attack depends on the secret matching a user-provided string closely enough for the compression algorithm to notice the redundancy?
A counter-measure, then, would be to scramble the user input in the page with a random key and include that key in the page so JavaScript can de-scramble it. The parts of the page that are not user-controlled would still compress efficiently.
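A minimal sketch of that idea on the server side (the attribute names are my own; the client-side JavaScript that XORs the two values back together is left out):

```python
import base64, os

def mask_reflected_input(user_input: str) -> str:
    # XOR the reflected input with a fresh random key so its raw bytes never
    # appear in the HTML and can't LZ-match any secret elsewhere on the page.
    data = user_input.encode()
    key = os.urandom(len(data))
    masked = bytes(b ^ k for b, k in zip(data, key))
    return '<span data-masked="{}" data-key="{}"></span>'.format(
        base64.b64encode(masked).decode(), base64.b64encode(key).decode())

print(mask_reflected_input("csrf_token=8f2a9c41d7e3"))  # different output every run
```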
> Protocol Relative URLS are now considered an anti-pattern
Sorry to sidetrack, but what's wrong with protocol-relative URLs? The only info I've found is a quote from Paul Irish relating it vaguely to the China/Github DDOS incident...
I find protocol-relative URLs very helpful for running without HTTPS in my local environment.
The "problem" with protocol-relative URLs, is that it's possible to include HTTP content if the parent was HTTP.
Since HTTPs is preferred for both security and privacy, we should give as little options as possible to use the insecure HTTP protocol and force HTTPs everywhere.
As you mention though, in dev-environments it's a convenient hack to use plain old HTTP. However, in production, preferring HTTPs would be considered the way to go.
I would suggest only asking the second time people land on your site. The first time, they're probably not going to do it, because they just came in via some article link and are thinking they probably won't see your site again.
For sure. Well, I unblocked the site while reading, so I guess the scale tipped over to appreciation.
Although, I'm skeptical about the Facebook/share buttons. It'd be great if they didn't load content from the sharing sites themselves automatically (as they can track who is reading what by looking at the requests and referrer).
One thought I had while reading through this is that HTTP/2 really isn't just a hypertext transfer protocol anymore; it's going to be used for everything.
> One thought I had while reading through this is that HTTP/2 really isn't just a hypertext transfer protocol anymore; it's going to be used for everything.
It was designed specifically for web browsers and never really optimized for other use cases, but yes, this will have a huge effect on REST services as well. Server push is going to dramatically change architectures. You can essentially implement HATEOAS with a single round trip.