
HTTP Security Headers – A Complete Guide - BCharlie
https://nullsweep.com/http-security-headers-a-complete-guide/
======
deftnerd
This is a good overview of the basic headers, but I suggest spending
some time on Scott Helme's blog. He runs securityheaders.io, a free service
that scans your site and assigns it a letter grade based on what headers and
configurations you've applied.

For instance, his explanation of Content Security Policy headers is much more
detailed than in the OP's link.

https://scotthelme.co.uk/content-security-policy-an-introduction/

~~~
el_duderino
securityheaders.io is now securityheaders.com

https://scotthelme.co.uk/security-headers-is-changing-domain-and-branding/

------
spectre256
It's definitely worth repeating the warning that, while very useful, Strict-
Transport-Security should be deployed with special care!

While the author's example of `max-age=3600` means there's only an hour of
potential problems, enabling Strict-Transport-Security has the potential to
prevent people from accessing your site if for whatever reason you are no
longer able to serve HTTPS traffic.

Considering that another common setting is to enable HSTS for a year, it's worth
enabling only deliberately and with some thought.
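
For example, you might start with the author's one-hour value and only later move to a
typical one-year policy once you're confident HTTPS will stay available (values are
illustrative, not a recommendation for every site):

    Strict-Transport-Security: max-age=3600
    Strict-Transport-Security: max-age=31536000; includeSubDomains

The includeSubDomains flag widens the blast radius to every subdomain, so it deserves the
same caution.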

~~~
txcwpalpha
Unless your site is nothing but a dumb billboard serving nothing but static
assets (and maybe even then...), the inability to serve HTTPS traffic should
be considered a breaking issue and you shouldn't be serving _anything_ until
your HTTPS is restored. "Reduced security" is not a valid fallback option.

That might not be something that a company's management team wants to hear,
but indicating to your users that falling back to insecure HTTP is just
something that happens sometimes and they should continue using your site is
one of the _worst things you can possibly do_ in terms of security.

~~~
bityard
Here's a real example of how HSTS can break a site: My personal, non-public
wiki is secured by HTTPS with a certificate valid for 5 years. I thought it
would be neat to enable HSTS for it because what could go wrong?

Well, just last week the HTTPS certificate expired in the middle of the day. I
had about a half day's worth of work typed up into the browser's text field
and when I clicked "submit", all of my work vanished and Firefox only showed a
page stating that the certificate was invalid and that nothing could be done
about it. I clicked the back button, same thing. Forward button, same thing. A
half day's worth of work vanished into thin air.

Is this my fault for letting the certificate expire? Absolutely. Should I have
used Let's Encrypt so I didn't have to worry about it? Sure. Should I be using a
notes system that doesn't throw away my work when there's a problem saving it?
Definitely. I don't deny that there's lots that I could have done to prevent
this from being a problem and lots that I need to fix in the future.

But it does point out that if you use HSTS, you have to be _really_ sure that
_all_ your ducks are in a row or it _will_ come back to bite you eventually.

~~~
tialaramex
Without HSTS how do you think your scenario plays out differently? Your
expired cert still isn't good, and I assure you Firefox isn't going to say
"Oh, there's an insecure HTTP site we could try, would you like me to send the
HTTP POST there instead?". So I think this only works out "fine" in the
scenario where lack of HSTS means you just never use any security at all.
Which is a fairly different proposition.

Since the expired cert can't be distinguished from an attack, my guess is that
the text contents aren't lost when that transaction fails due to the expired
cert (as then bad guys could throw your data away, which isn't what we want), so
I think you could just have paused work, got yourself a new valid certificate,
and then carried on.

Now, of course, it may be that your web app breaks if you do that, the prior
session you were typing into becomes invalid when you restart, and new
certificates can't be installed without restarting, that sort of thing, but
that would be specific to your setup.

~~~
jazzdev
Wouldn't the browser allow you to inspect the cert and choose to continue the
connection? Then you can decide for yourself if you trust the cert.

------
undecidabot
Nice list. You might want to consider setting a "Referrer-Policy"[1] for sites
with URLs that you'd prefer not to leak.

Also, for "Set-Cookie", the relatively new "SameSite"[2] directive would be a
good addition for most sites.
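
For example (the policy choice, cookie name, and value here are placeholders, not a
one-size-fits-all setting):

    Referrer-Policy: strict-origin-when-cross-origin
    Set-Cookie: sessionid=<value>; Secure; HttpOnly; SameSite=Lax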

Oh, and for CSP, check Google's evaluator out[3].

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy

[2] https://www.owasp.org/index.php/SameSite

[3] https://csp-evaluator.withgoogle.com

~~~
will4274
Referrer-Policy is nice, but browsers should just default to strict-origin-
when-cross-origin and end the mess.

------
Avamander
Instead of X-Frame-Options one should use CSP's frame-ancestors directive; it has
wider support among modern browsers. But CSP deserves more than one paragraph
in general.

He also missed Expect-Staple and Expect-CT. In addition, most of the security
headers have the option to specify a URI where failures are reported, which is
very important in production environments.
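
For example, a policy that only does the framing restriction plus failure reporting
might look like this (the report endpoint is a placeholder):

    Content-Security-Policy: frame-ancestors 'none'; report-uri https://example.com/csp-reports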

~~~
tialaramex
Expect-CT is pretty marginal. In principle a browser could implement
Certificate Transparency but then only bother to enforce it if Expect-CT is
present; in practice the policy ends up being that they'll enforce CT system-wide
after some date. Setting Expect-CT doesn't have any effect on a browser that
can't understand SCTs anyway, so that leaves basically no audience.

Furthermore, especially with Symantec out of the picture, there is no broad
consumer market for certificates from the Web PKI which don't have SCTs. The
audience of people who know they want a certificate is hugely tilted towards
people with very limited grasp of what's going on, almost all of whom
definitely need embedded SCTs or they're in for a bad surprise. So it doesn't
even make sense to have a checkbox for "I don't want SCTs" because 99% of
people who click it were just clicking boxes without understanding them and
will subsequently complain that the certificate doesn't "work" because it
didn't have any SCTs baked into it.

There are CAs that issue certificates without logging, either for industrial
applications which aren't built out of a web browser (and so don't check SCTs)
and are due to be retired before it'd make sense to upgrade them (most are gone
in 2019 or 2020), or for specialist customers like Google whose servers are set
up to go get SCTs at the last moment, to be stapled later. Neither is a product
with a consumer audience, which means neither is a plausible source of
certificates for your hypothetical security adversary.

As a result, in reality Expect-CT doesn't end up defending you against
anything that's actually likely to happen, making it probably a waste of a few
bytes.

~~~
Avamander
Unfortunately yes, Expect-CT could use more enforcement and support but I
think spending those few bytes is worth in the sense of indicating people want
to see CT enforced more.

------
Grollicus
Should mention for Access-Control-Allow-Origin that not setting the header is
the safe default and that setting it weakens site security.

~~~
BCharlie
Great point! I added a sentence to say that the default is all that's needed.

------
mitchtbaum
* Signing HTTP Messages (draft-cavage-http-signatures-09): https://tools.ietf.org/id/draft-cavage-http-signatures-09.html

* HTTP Signatures (draft-cavage-http-signatures-01): https://tools.ietf.org/id/draft-cavage-http-signatures-01.html

* Signing HTTP Messages (draft-cavage-http-signatures-10): https://tools.ietf.org/html/draft-cavage-http-signatures-10

* RFC 4686: https://www.rfc-editor.org/rfc/rfc4686.txt

* RFC 3335: https://www.rfc-editor.org/rfc/rfc3335.txt

------
the_common_man
X-Frame-Options is obsolete. Most browsers complain loudly on the console or
ignore the header. Use CSP instead.

~~~
floatingatoll
For those wondering, CSP ‘frame-ancestors’ if I remember correctly.

~~~
user5994461
It's a shame browsers are breaking the X-Frame-Options.

It was an easy option to force with load balancers or any intermediate server.
Frames should always be blocked on the open internet.

The Content-Security-Policy header can't be forced as easily: overriding it
screws with applications and frameworks that use it for any of the twenty other
options it covers.

~~~
floatingatoll
Why? It’s been deprecated for years and years. You don’t have to set any of
the other 20 CSP options to set CSP:frame-ancestors. There’s no reason to
avoid it except taking a completionist approach to CSP headers (“we have to
set all possible CSP attributes for maximum security in a single go on our
first try”) which I strongly discourage.

~~~
user5994461
You can't just do a "set header Content-Security-Policy frame-ancestors none"
on all traffic. This is gonna break anything using CSP for any of the 20
settings it provides.

~~~
floatingatoll
Correct. You would be expected to merge it into any CSP headers used by your
app, either using (in your Apache scenario) If/Else and Header modify or by
modifying your application where appropriate.
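
A minimal sketch of that merge with Apache 2.4's mod_headers (the directives and
the 'none' value are illustrative, not a drop-in config):

    # Append frame-ancestors to a CSP the application already sends
    Header edit Content-Security-Policy "^(.*)$" "$1; frame-ancestors 'none'"
    # Fall back to a standalone policy when the app sends no CSP (2.4.7+)
    Header setifempty Content-Security-Policy "frame-ancestors 'none'"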

While XFO is simpler to overwrite on a global basis, it's imprecise, doesn't
permit "allow certain sites to frame, deny all others", and is likely to become
fully unsupported whenever _any_ CSP policy is defined, given its deprecated
status. Taking the XFO way out will only help you short-term at best.

------
dalf
There is the Feature-Policy header too: it allows and denies the use of browser
features in its own frame. I've seen this header on a bank website.

Example:

    Feature-Policy: accelerometer 'none'; autoplay 'none'; camera 'none'; fullscreen 'none'

Documentation: https://developer.mozilla.org/en-US/docs/Web/HTTP/Feature_Policy

------
joecot
I'm a little confused by the examples for Access-Control-Allow-Origin:

> Access-Control-Allow-Origin: http://www.one.site.com

> Access-Control-Allow-Origin: http://www.two.site.com

And the article's examples set both. But in my experience you cannot set
multiple values [1]. Lots of people instead set it to *, which is both bad and
restricts use of other request options (such as withCredentials). It looks
like the current working solution is to use regexes to return the right domain
[2], but I'm currently having trouble getting that to work, so if there's some
better solution that works for people I'd love to hear it.

1. https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSMultipleAllowOriginNotAllowed

2. https://stackoverflow.com/questions/1653308/access-control-allow-origin-multiple-origin-domains

~~~
BCharlie
You are right on this - I thought you could set multiple sites by setting
multiple headers, but it doesn't work that way, which I should have known
because headers don't work that way in general...

The recommended way to do multiple sites seems to be to have the server read
the request header, check it against a whitelist, then dynamically respond
with it, which seems terrible.
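
For what it's worth, a minimal nginx sketch of that whitelist-and-echo approach
(reusing the example hosts from above; the map goes in the http block):

    # Map the incoming Origin header to itself if whitelisted, else to ""
    map $http_origin $cors_origin {
        default                    "";
        "http://www.one.site.com"  $http_origin;
        "http://www.two.site.com"  $http_origin;
    }

    # nginx skips add_header entirely when the value is an empty string
    add_header Access-Control-Allow-Origin $cors_origin;
    add_header Vary Origin;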

Thanks for catching this - I updated the post to reflect this and make it more
clear.

~~~
unilynx
Actually, headers _do_ often work that way. HTTP says:

Multiple message-header fields with the same field-name MAY be present in a
message if and only if the entire field-value for that header field is defined
as a comma-separated list

Which applies to HTTP headers such as Cache-Control:, and probably goes back
to the email RFCs allowing multiple To: headers.

It's just that Access-Control-Allow-Origin isn't defined to accept a
comma-separated list, just like Content-Security-Policy isn't (which is another
header that breaks things if it appears more than once).
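
For example, because Cache-Control is defined as a comma-separated list, sending

    Cache-Control: no-cache
    Cache-Control: no-store

is equivalent to sending

    Cache-Control: no-cache, no-store

whereas two Access-Control-Allow-Origin headers have no such defined merge.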

------
jakejarvis
Great overview.

If anyone's interested, I wrote a guide a while ago on adding these headers
via Cloudflare Workers, which can be helpful if you're hosting a static site
on S3, GitHub Pages, etc. where you can't add these headers directly:

https://jarv.is/notes/security-headers-cloudflare-workers/

------
hcheung
The nginx header directives are not in correct syntax because of the extra ":",
and directives with multiple values should be wrapped in quotes (such as
"1; mode=block"). Here are the correct settings:

        ## General Security Headers
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Frame-Options deny;
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security "max-age=3600; includeSubDomains";

------
cujanovic
https://www.netsparker.com/whitepaper-http-security-headers/

~~~
kureikain
This is such a great and complete guide. Lots of headers with examples and
explanations. I have been looking for something like this.

I will include it in my newsletter[0] next Monday if you don't mind.

---

[0]: [https://betterdev.link](https://betterdev.link)

------
wheresvic1
For Node.js servers running on express, check out helmet[1] which adds a lot
of these headers for you :)

[1]
[https://www.npmjs.com/package/helmet](https://www.npmjs.com/package/helmet)

------
yyyk
The X-XSS-Protection header recommendation is a zombie recommendation which is
at best outdated and at worst harmful. Its origins lie in old IE bugs, but it
introduces worse issues.

IMHO, the best value for X-XSS-Protection is either 0 (disabling it completely
like Facebook does) or not providing the value at all and just letting the
client browser use its default. Why?

First, XSS 'protection' is about to stop being implemented by most browsers.
Google has decided to deprecate Chrome's XSS Auditor[0] and stop supporting
XSS 'protection'. Microsoft has already removed its XSS filter from Edge[1].
Mozilla never bothered to support it in Firefox.

So most leading net companies already think it doesn't work. Safari of course
supports the much stronger CSP. So it's only possibly useful on IE - if you
don't support IE, might as well save the bytes.

Second, XSS 'protection' protects less than one might think. In all
implementing browsers, it has always been implemented as part of the HTML
parser, making it useless against DOM-based attacks (and strictly inferior to
CSP)[2].

Worse, the XSS 'protection' can be used to _create_ security flaws. IE's
default is to detect XSS and try to filter it out; this has been known to be
buggy to the point of creating XSS on safe pages[3], which is why the typical
recommendation has been the block behaviour. But blocking has itself been
exploited in the past[4], and has side-channel leaks that even Google
considers too difficult to catch[0], to the point of preferring to remove XSS
'protection' altogether. Blocking also has an obvious social exploitation which
can create attacks or make attacks more serious.[5]

In short, the best idea is to get rid of browsers' XSS 'protection' ASAP in
favour of CSP, preferably by having all browsers deprecate it. This is
happening anyway, so you might as well save the bytes. But if you do provide the
header, I suggest disabling XSS 'protection' altogether.
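
That is, if you send the header at all, send:

    X-XSS-Protection: 0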

[0] https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/TuYw-EZhO9g/blGViehIAwAJ

[1] https://developer.microsoft.com/en-us/microsoft-edge/platform/changelog/desktop/17723/

[2] e.g. https://github.com/WebKit/webkit/blob/d70365e65de64b8f6eaf1ff5bf1a901765e47923/Source/WebCore/html/parser/XSSAuditor.cpp

[3] CVE-2014-6328, CVE-2015-6164, CVE-2016-3212..

[4] https://portswigger.net/blog/abusing-chromes-xss-auditor-to-steal-tokens

[5] Assume that an attacker has enough access to normally allow XSS. If he
does not, the filter is useless. If he does, the attacker can by definition
trigger the filter. So trigger the filter, make a webpage be blocked, and call
the affected user posing as "support". From there the exploitation is obvious,
and can be much worse than mere XSS. Now, remember that all those XSS filters in
all likelihood have false positives that may not be blocked by other defences
because they're not attacks. So it's quite possible the filter introduces a
social attack that wouldn't be possible otherwise!

Hat tip: https://frederik-braun.com/xssauditor-bad.html which gave me even more
reasons to think browsers' XSS 'protection' is awful. I didn't know about [2]
before reading his entry.

~~~
yyyk
For [3] (exploiting IE's XSS filter default behaviour to create XSS) see also
https://www.slideshare.net/codeblue_jp/xss-attacks-exploiting-xss-filter-by-masato-kinugawa-code-blue-2015

The author recommends either changing the default behaviour to block or
disabling the filter altogether. I believe experience has shown this
protection method cannot be fixed.

Ultimately, safe code is code that can be reasoned about, but there never was
even any specification for this 'feature'. By comparison, CSP has a strict
specification, it covers more attacks, and it has a better failure mode than the
XSS protection's choice between filtering and blocking the entire page load.

