For instance, his explanation of Content Security Policy headers is much more detailed than in the OP's link.
While the author's example of `max-age=3600` means there's only an hour of potential problems, enabling Strict-Transport-Security has the potential to prevent people from accessing your site if for whatever reason you are no longer able to serve HTTPS traffic.
Considering another common setting is to enable HSTS for a year, it's worth enabling only deliberately and with some thought.
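For reference, the commonly copied one-year policy looks something like this (31536000 seconds; includeSubDomains extends the commitment to every subdomain, so think twice before adding it):

    Strict-Transport-Security: max-age=31536000; includeSubDomains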
That might not be something that a company's management team wants to hear, but indicating to your users that falling back to insecure HTTP is just something that happens sometimes and they should continue using your site is one of the worst things you can possibly do in terms of security.
Well, just last week the HTTPS certificate expired in the middle of the day. I had about a half-day's worth of work typed up into the browser's text field, and when I clicked "submit", all of my work vanished and Firefox only showed a page stating that the certificate was invalid and that nothing could be done about it. I clicked the back button, same thing. Forward button, same thing. A half-day's worth of work vanished into thin air.
Is this my fault for letting the certificate expire? Absolutely. Should I have used letsencrypt so I didn't have to worry about it? Sure. Should I be using a notes system that doesn't throw away my work when there's a problem saving it? Definitely. I don't deny that there's lots that I could have done to prevent this from being a problem and lots that I need to fix in the future.
But it does point out that if you use HSTS, you have to be _really_ sure that _all_ your ducks are in a row or it _will_ come back to bite you eventually.
Maybe you don't care about protecting whatever data you were entering into your wiki, but in most (if not all) cases of sending data to companies you interact with, you do not want your user-entered data being sent in the clear to the server, or even worse, being sent to the server of a malicious attacker performing a MITM attack. What you want is for your browser to stop sending the data entirely when it encounters a suspicious situation (such as an HTTPS->HTTP downgrade or an expired cert), which is exactly what happened.
Again, "reduced security" is not a valid failure state. It's like having a button on your front door that says "Lost your key? Just press this button and the door will unlock." At that point, why even have a door lock anyway?
Since the expired cert can't be distinguished from an attack, my guess is that the text contents aren't lost when that transaction fails due to the expired cert (as then bad guys could throw your data away, which isn't what we want), so I think you could just have paused work, got yourself a new valid certificate, and then carried on.
Now, of course, it may be that your web app breaks if you do that: the prior session you were typing into becomes invalid when you restart, new certificates can't be installed without restarting, that sort of thing. But that would be specific to your setup.
that's not a valid argument against HSTS! the browser behaviour with regard to your data is outrageous, and shouldn't be tolerated. and i'm saying this as a longtime firefox user. the browser just sucks, big time.
"luckily", as a vim junkie, i can't stand the textarea at all, and do anything that requires more effort than, say, this comment, in vim, then copy/paste over when i'm done. still, we should have gotten $VISUAL embedding fifteen years ago: what's happened, Mozilla? lining up your Pockets the whole time?
I thought the general way was to automatically save any progress in localstorage/etc ready to be retrieved if needed once the problem is fixed?
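Roughly something like this, as a sketch (the textarea id and the storage key are made up for illustration):

    // Persist the draft so a failed submit (expired cert, network error, etc.) doesn't lose it.
    const field = document.querySelector<HTMLTextAreaElement>('#notes')!; // hypothetical textarea
    const DRAFT_KEY = 'notes-draft'; // arbitrary localStorage key

    // Restore any draft left over from a previous session or failed submit.
    field.value = localStorage.getItem(DRAFT_KEY) ?? field.value;

    // Save the draft as the user types (debouncing omitted for brevity).
    field.addEventListener('input', () => localStorage.setItem(DRAFT_KEY, field.value));

    // Only clear the draft once the server has confirmed the save.
    function clearDraft(): void {
      localStorage.removeItem(DRAFT_KEY);
    }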
For unreliable webforms, Ctrl-A, Ctrl-C is useful.
In the example you gave, wouldn't you have lost all your work anyway without HSTS? I don't think browsers supply an easy way to retry POST to the corresponding http: URL whether HSTS is set up or not.
With HSTS, that button goes away in browsers.
HSTS worked perfectly; your poor maintenance of certificates and your site lost you half a day's worth of work.
Some people need to touch the stove and feel the pain... if you blame someone else for you touching the stove, that is just willful ignorance.
> "Reduced security" is not a valid fallback option.
Agreed! But if my HTTPS is broken, I might well want to replace my site with an HTTP page explaining that we'll be back soon. If that is impossible until the max-age expires, that can lead to an awkward explanation to the higher-ups.
1) You're not going to be able to do that for anyone who has bookmarked the site, or loads it from their history / address bar, with the https already included. Under what circumstances, other than someone hand-typing a URL, do you expect anyone to reach your site by HTTP? (And note that any such user can potentially get compromised, such as by ISPs.)
2) Search engines will link to your site with the https pages they found when crawling. And if you stay down long enough for search engines to notice, you have a bigger problem and a much more awkward explanation to give.
3) Many kinds of failures will prevent anyone from reaching the whole domain, or the IP address the domain currently points to, or similar. Have an off-site status page, social media accounts, and similar.
Sorry, but sometimes security is absolute.
Like, how does it happen, ever?
And what happens to your users' credentials if you do?
There are plenty of scenarios in which this happens online:
* Legacy systems (e.g. Aminet)
* Software distribution (e.g. apt mirrors)
* Anything involving FTP where a HTTP mirror would be useful (e.g. overcoming FW restrictions)
* Anything where permissionless access is a requirement (HTTPS is a permissioned system)
Once you go HTTPS you’re all in, regardless of whether or not you’ve set HSTS headers. Let’s say your HTTPS certificate fails and you can’t get it replaced. So what, you’re going to temporarily move back to HTTP for a few days? Not going to happen! Everyone has already bookmarked/linked/shared/crawled your HTTPS URLs. There is no automated way to downgrade people to HTTP, so only the geeks who would even think to try removing the “s” will be able to visit. And most geeks won’t even do that, because we’ve probably never encountered a situation where that has ever helped.
In that case, old visitors were rejected due to the policy.
I wish I had set a lower duration.
Also because their links and bookmarks would have all failed.
It won't work because the HSTS setting the visitor got months ago told it not to.
In the webmaster tools, you want to get Google to remove all references to the non-https versions. Ensure https is up on all URLs, then use their tools to re-index everything and remove all references to http://
Are you saying you can't set up https on some of your URLs?
There's no issue with switching FROM HTTP to HTTPS: that's easy, just redirect them. The issue is if you have to switch back.
HSTS can't be enabled on plain HTTP so it's not possible to create the problematic scenario if you never had SSL enabled to begin with. The problem is switching from SSL to non-SSL, not the other way around.
Are you saying that you have applications that require HTTP port 80 only?
Also: HSTS applies to all ports once applied, not just 80/443. That is another important thing to consider before turning it on.
I would like to add that a lot of web apps break if they aren't served over HTTPS regardless, due to the Secure flag being set on cookies. For example, if we run ours over HTTP (even for development), it will successfully set the cookie (+Secure +HttpOnly) but cannot read the cookie back, and you get stuck on the login page indefinitely.
So we just set ours to a year, and consider HTTPS to be a mission-critical tier component. If it goes down, the site is simply "down."
HSTS is kind of the "secret sauce" that gives developers coverage to mandate Secure cookies only. Before then we'd get caught in "what if" bikeshedding.
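For anyone who hasn't hit this: the breakage comes from setting the session cookie with flags along these lines (an Express-style sketch with made-up names), because the browser will only send a Secure cookie back over HTTPS:

    import express from 'express';

    const app = express();

    app.post('/login', (_req, res) => {
      // "session" is a made-up cookie name; the flags are the point.
      res.cookie('session', 'opaque-token', {
        secure: true,   // only ever sent back over HTTPS
        httpOnly: true, // not readable from client-side JS
      });
      res.redirect('/dashboard');
    });

    app.listen(3000);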
Whilst there are cases where you might fail to serve HTTPS traffic temporarily (e.g. if your cert expires and you don't handle it), almost all HTTPS problems are quick fixes, and are probably your #1 priority regardless of HSTS. If your HTTPS setup is broken and your application has any real security concerns at all, then it's arguably better to be inaccessible rather than quietly allow insecure traffic in the meantime, exposing all your users' traffic. I don't know many good reasons you'd suddenly want to go from HTTPS back to only supporting plain HTTP either. I just can't see any realistic scenarios where HSTS causes you extra problems.
Isn't that how it should work? Would you rather use Gmail over HTTP if its HTTPS stopped working? Besides, just supporting HTTP fallback means you're much more vulnerable to downgrade attacks -- it's the first thing attackers will attempt to use.
I imagine them looking at a business continuity plan and being aghast - why are we spending money to manage the risk from a wildfire in California overwhelming our site there, yet we haven't spent ten times as much on a zombie werewolf defence grid or to protect against winged bears?
HSTS defends against a real problem that actually happens, like those Californian wildfires, whereas "whatever reason you are no longer able to serve HTTPS traffic" is a fantasy like the winged bears that you don't need to concern yourself with.
Also, for "Set-Cookie", the relatively new "SameSite" directive would be a good addition for most sites.
Oh, and for CSP, check Google's evaluator out.
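For example, something along these lines (Lax vs Strict depends on how the app is used cross-site):

    Set-Cookie: session=opaque-token; Secure; HttpOnly; SameSite=Lax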
He also missed Expect-Staple and Expect-CT. In addition, most of the security headers have the option to specify a URI where failures are reported, which is very important in production environments.
Furthermore, especially with Symantec out of the picture, there is no broad consumer market for certificates from the Web PKI which don't have SCTs. The audience of people who know they want a certificate is hugely tilted towards people with very limited grasp of what's going on, almost all of whom definitely need embedded SCTs or they're in for a bad surprise. So it doesn't even make sense to have a checkbox for "I don't want SCTs" because 99% of people who click it were just clicking boxes without understanding them and will subsequently complain that the certificate doesn't "work" because it didn't have any SCTs baked into it.
There are CAs with no logging, either for industrial applications which aren't built around a web browser (and so don't check SCTs) and are due to be retired before it'd make sense to upgrade them (most are gone in 2019 or 2020), or for specialist customers like Google whose servers are set up to go get their SCTs at the last moment, to be stapled later. Neither is a product with a consumer audience. Which means neither is a plausible source of certificates for your hypothetical security adversary.
As a result, in reality Expect-CT doesn't end up defending you against anything that's actually likely to happen, making it probably a waste of a few bytes.
One good reason to set both options, as I mention in the post, is that scanners who rate site security posture may penalize site owners who don't set both - no harm in doing it that I know of.
For a "complete" guide (maybe "comprehensive starter guide"?) I'd at least add a note in the x-frame-options section that it's been superseded by CSP and only needed if you must support IE (or I guess please a tool), and if you have interesting frame requirements (i.e. more than one allowed ancestor but not all) you're going to have to use a hack to support that with the old header.
Another interesting callout is that most of the CSP directives can be specified by a meta tag in the markup. Not only is this handy for quick serverless testing, but it can become necessary if you end up routing through something (like some CDNs) that has a limit on overall header size... CSP headers can get pretty big if you don't just bail out with a wildcard.
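For example, a policy delivered in the markup looks like this (note that frame-ancestors, sandbox and report-uri are ignored when delivered via meta):

    <meta http-equiv="Content-Security-Policy" content="default-src 'self'; img-src 'self' data:">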
Definitely agree CSP can have its own post. It's complicated and still evolving with new spec versions. I recently learned about chrome's feature-policy header proposal, which to me is like more granular script-src policies, so I wouldn't be surprised if some future CSP version just absorbs it...
I added some text to the x-frame-options section to note the CSP rules - it's a great addition.
I've reminded myself that v3 still hasn't fully stabilized into an official recommendation despite being in final-draft since October (it's basically closed for new things) so for now awareness of 2 and 3 is probably going to continue to be important for anyone responsible for producing a moderately complex string (guess who that is on my teams ;)). Though even at just level 2 there are a few things I could say about differences in behavior just between Chrome and Firefox... Testing is crucial!
* [HTTP Signatures](https://tools.ietf.org/id/draft-cavage-http-signatures-01.ht...)
* [draft-cavage-http-signatures-10 - Signing HTTP Messages](https://tools.ietf.org/html/draft-cavage-http-signatures-10)
Feature-Policy: accelerometer 'none'; autoplay 'none'; camera 'none'; fullscreen 'none'
The deny option seems to work just fine. My default browser (Firefox) doesn't complain. MDN doesn't indicate any browsers have dropped support. Plus, dropping support would be an unmitigated and unnecessary unforced security error, by making old sites insecure. Do you have a link to an example of a browser ignoring the header?
It was an easy option to force with load balancers or any intermediate server. Frames should always be blocked on the open internet.
The content security policy can't be adjusted easily. It screws with applications and frameworks that use it for any of the twenty other options it covers.
While XFO is simpler to overwrite on a global basis, it’s imprecise and doesn’t permit “allow certain sites to frame, deny all others” and is likely to become fully unsupported whenever any CSP policy is defined, given its deprecated status. Taking the XFO way out will only help you short-term at best.
> Access-Control-Allow-Origin: http://www.one.site.com
> Access-Control-Allow-Origin: http://www.two.site.com
And in the examples, setting both. Because in my experience you cannot set multiple. Lots of people instead set it to *, which is both bad and restricts use of other request options (such as withCredentials). It looks like the current working solution is to use regexes to return the right domain, but I'm currently having trouble getting that to work, so if there's some better solution that works for people I'd love to hear it.
Envoy has a plugin to do this (envoy.cors), allowing you to configure allowed origins the way people want (["*.example.com", "foo.com"]) and then emitting the right headers when a request comes in. It also emits statistics on how many requests were allowed or denied, so you can monitor that your rules are working correctly. If you are using something else, I recommend just having your web application do the logic and supply the right headers. (You should also be prepared to handle an OPTIONS request for CORS preflight.)
It is actually kind of weird that it's in here, because the other things they talk about add more security, but if you don't need CORS and you decide to just add it to your configuration for no reason, you actually now have less security. Especially if you return * for the allowed origin.
The recommended way to do multiple sites seems to be to have the server read the request header, check it against a whitelist, then dynamically respond with it, which seems terrible.
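Terrible or not, that whitelist-and-echo approach is only a few lines; here's a rough Express-style sketch (origins taken from the example above, everything else made up), which also handles the preflight:

    import express from 'express';

    const app = express();

    // Origins allowed to make (credentialed) cross-origin requests.
    const allowedOrigins = new Set([
      'http://www.one.site.com',
      'http://www.two.site.com',
    ]);

    app.use((req, res, next) => {
      const origin = req.get('Origin');
      if (origin && allowedOrigins.has(origin)) {
        // Echo back the single matching origin instead of "*".
        res.setHeader('Access-Control-Allow-Origin', origin);
        res.setHeader('Access-Control-Allow-Credentials', 'true');
        // Tell caches that the response differs per Origin.
        res.setHeader('Vary', 'Origin');
      }
      if (req.method === 'OPTIONS') {
        // CORS preflight: advertise what's allowed and stop here.
        res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
        res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
        res.sendStatus(204);
        return;
      }
      next();
    });

    app.listen(3000);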
Thanks for catching this - I updated the post to reflect this and make it more clear.
Multiple message-header fields with the same field-name MAY be present in a message if and only if the entire field-value for that header field is defined as a comma-separated list
Which applies to HTTP headers such as Cache-Control:, and probably goes back to the email RFCs allowing multiple To: headers.
It's just that Access-Control-Allow-Origin isn't defined to accept a comma list, just like Content-Security-Policy doesn't (which is another header breaking things if it appears more than once)
If anyone's interested, I wrote a guide a while ago on adding these headers via Cloudflare Workers, which can be helpful if you're hosting a static site on S3, GitHub Pages, etc. where you can't add these headers directly:
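The gist of it is a Worker that fetches the origin response and bolts the headers on; a rough sketch (service-worker-style Workers syntax, header values just echoing the ones discussed in this thread):

    addEventListener('fetch', (event: FetchEvent) => {
      event.respondWith(addSecurityHeaders(event.request));
    });

    async function addSecurityHeaders(request: Request): Promise<Response> {
      // Fetch the original response from the origin (S3, GitHub Pages, etc.).
      const upstream = await fetch(request);

      // Copy the response so its headers become mutable, then add the headers.
      const response = new Response(upstream.body, upstream);
      response.headers.set('X-Frame-Options', 'DENY');
      response.headers.set('X-Content-Type-Options', 'nosniff');
      response.headers.set('X-XSS-Protection', '1; mode=block');
      response.headers.set('Strict-Transport-Security', 'max-age=3600; includeSubDomains');
      return response;
    }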
## General Security Headers
add_header X-XSS-Protection "1; mode=block";
add_header X-Frame-Options deny;
add_header X-Content-Type-Options nosniff;
add_header Strict-Transport-Security "max-age=3600; includeSubDomains";
I will include it in my newsletter next Monday if you don't mind.
IMHO, the best value for X-XSS-Protection is either 0 (disabling it completely, like Facebook does) or not sending the header at all and just letting the client browser use its default. Why?
First, XSS 'protection' is about to not be implemented by most browsers. Google has decided to deprecate Chrome's XSS Auditor and stop supporting XSS 'protection'. Microsoft has already removed its XSS filter from Edge. Mozilla has never bothered to support it in Firefox.
So most leading net companies already think it doesn't work. Safari of course supports the much stronger CSP. So it's only possibly useful on IE - if you don't support IE, might as well save the bytes.
Second, XSS 'protection' protects less than one might think. In all implementing browsers, it has always been implemented as part of the HTML parser, making it useless against DOM-based attacks (and strictly inferior to CSP).
Worse, the XSS 'protection' can be used to create security flaws. IE's default is to detect XSS and try to filter it out; this has been known to be buggy to the point of creating XSS on safe pages, which is why the typical recommendation has been the block behaviour. But blocking has itself been exploited in the past, and has side-channel leaks that even Google considers too difficult to catch, to the point of preferring to remove XSS 'protection' altogether. Blocking also has an obvious social exploitation which can create attacks or make attacks more serious.
In short, the best idea is to get rid of browsers' XSS 'protection' ASAP in favour of CSP, preferably by having all browsers deprecate it. This is happening anyway, so might as well save the bytes. But if you do provide the header, I suggest disabling XSS 'protection' altogether.
 e.g. https://github.com/WebKit/webkit/blob/d70365e65de64b8f6eaf1f...
 CVE-2014-6328, CVE-2015-6164, CVE-2016-3212..
 Assume that an attacker has enough access to normally allow XSS. If he does not, the filter is useless. If he does, the attacker can by definition trigger the filter. So trigger the filter, make a webpage be blocked, and call the affected user posing as "support". From there the exploitation is obvious, and can be much worse than mere XSS. Now, remember that all those XSS filters in all likelihood have false positives that may not be blocked by other defences, because they're not attacks. So it's quite possible the filter introduces a social attack that wouldn't be possible otherwise!
Hattip: https://frederik-braun.com/xssauditor-bad.html which gave me even more reasons to think browsers' XSS 'protection' is awful. I didn't know about  before reading his entry.
The author recommends either changing the default behaviour to block or disabling the filter altogether. I believe experience has shown this protection method cannot be fixed.
Ultimately, safe code is code that can be reasoned about, but there never was even any specification for this 'feature'. By comparison, CSP has a strict specification. It covers more attacks, and has a better failure mode than either of the XSS protection's behaviours, filtering or blocking the entire page load.