
HTTP headers for the responsible developer - kiyanwang
https://www.twilio.com/blog/a-http-headers-for-the-responsible-developer
======
LeonM
I have been trying to explain the importance of HTTP headers to some
younger/junior devs I am working with. I have noticed that headers are often
considered to be 'too technical' or even 'old tech'.

I'm going to recommend them to read this, but I do think I need to explain a
couple of things that the article is not clear about:

\- Be sure that you understand the concept of HSTS! Simply copy/pasting the
example from this article will completely break subdomains that are not HTTPS
enabled, and preloading will break them permanently. I wish the authors had
made that clearer. Don't use includeSubDomains and preload unless you know
what you are doing. Scott Helme also wrote a great article about this [0].
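
To make the risk concrete, an illustration (the values are examples, not a
recommendation): start conservative, and only add the dangerous flags once you
have verified that every subdomain serves HTTPS:

```
Strict-Transport-Security: max-age=300

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```

The first is a cautious initial deployment; the second pins every subdomain to
HTTPS for a year, and with preload the pin gets baked into browsers and is
effectively irreversible.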

\- CSP can be really hard to set up. For instance: if you include Google
Analytics, you need to set a script-src _and_ an img-src. The article does a
good job of explaining that you should use CSP monitoring (I recommend
Sentry), but it doesn't explain how deceptive the reports can be. You'll get
tons of CSP exceptions caused by browser plugins that attempt to inject CSS or
JS. You must learn to distinguish which errors you can fix, and which are out
of your control.
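
For illustration, a policy along these lines (hostnames based on the classic
analytics.js setup, so check Google's current docs; note the header must be
sent as a single line and is only wrapped here for readability; the report-uri
endpoint is a placeholder):

```
Content-Security-Policy: default-src 'self';
    script-src 'self' https://www.google-analytics.com;
    img-src 'self' https://www.google-analytics.com;
    report-uri https://example.report-uri.com/r/d/csp/enforce
```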

\- Modern popular frontend frameworks will be broken by CSP, as they rely
heavily on injecting CSS (CSS-in-JS, known from libraries like JSS or 'styled
components'). As these techniques are often adopted by less experienced devs,
you'll see many 'solutions' on StackOverflow and GitHub telling you to set
unsafe-inline in your CSP. This is bad advice, as it will basically disable
CSP! I have attempted to raise awareness in the past, but I always got the
'you're holding it wrong' reply (even on HN). The real solution is for your
build system to separate the CSS from the JS at build time. Not many popular
build systems (such as create-react-app) support this.
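
To see why unsafe-inline is so bad, compare the two policies below (the nonce
value is a placeholder and must be a fresh random value on every response):

```
Content-Security-Policy: style-src 'unsafe-inline'

Content-Security-Policy: style-src 'self' 'nonce-R4nd0mV4lu3'
```

The first allows _any_ inline style, injected or not, which defeats the point
of CSP; the second only allows style tags carrying the matching nonce, which
is one header-level alternative if you can't extract the CSS at build time.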

\- Cache control can be really hard too. If you don't have time to fiddle with
these settings, I recommend using a host like Netlify, they seem to do a
proper job at caching in my experience.
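
If you do want to set them by hand, the common pattern is roughly this (the
asset naming assumes fingerprinted builds):

```
Cache-Control: public, max-age=31536000, immutable

Cache-Control: no-cache
```

The first is for fingerprinted static assets (e.g. app.3f2a1c.js) that can
safely be cached for a year; the second is for HTML documents, forcing
revalidation so deploys show up immediately (no-cache means 'revalidate before
use', not 'don't store').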

[0] [https://scotthelme.co.uk/tag/hsts-preload/](https://scotthelme.co.uk/tag/hsts-preload/)

edit: typos

~~~
amelius
> Be sure that you understand the concept of HSTS!

Instead of using HSTS, you can also simply redirect any HTTP request to HTTPS.
That way, you are certain that HTTPS is used, even if a browser does not
understand HSTS.
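
A minimal sketch of that redirect in nginx (server name is a placeholder; note
that the very first request still travels over plain HTTP before the redirect
happens):

```nginx
# Catch all plain-HTTP traffic and redirect it to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}
```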

~~~
hywel
This will leave your users vulnerable to man-in-the-middle attacks. If I
control the router between their computer and the Internet, I can serve back
an HTTP page which doesn't redirect, and trick them into entering their
password (for example).

HSTS is designed to prevent this.

~~~
amelius
How can HSTS prevent a man in the middle attack if the server has not even
been contacted yet?

~~~
heinrich5991
It can only do that if you add it to the preload lists of browsers (which is
mentioned in the article).

But even if it is not, it's still helpful for people connecting to your site
again.

~~~
tialaramex
And because the preload list is hierarchical whole swathes of the Web can be
covered with a single entry. .dev is the biggest example, but they can protect
all the stack exchanges, all the default blogspot blogs, that sort of thing.

------
llamataboot
To me this just seems like more duct tape over the fact that everything on the
web is doing things it wasn't really designed to do. We never could have
imagined what we would be able to do with HTML/CSS/JS in a browser
environment. We also never could have imagined how the pressure of business
demands would essentially drive more and more duct tape solutions until the
whole web was built on rickety scaffolding all sort of lashed together and
swaying.

There are backend bandaids and frontend bandaids but with the sheer amount of
stack knowledge and framework knowledge required to do anything as a webdev
these days, there's no way to stay on top of it all and we are just kinda
winging some combination of best practices and getting shit done.

I don't know if things like PWAs and WASM are going to allow us to move
towards a change yet, and would love input from someone with an opinion.

~~~
jimmaswell
> the sheer amount of stack knowledge and framework knowledge required to do
> anything as a webdev these days

I think this is greatly exaggerated. You can get by just fine making your own
sites knowing some basic html/css/maybe js, maybe some php too if you want
backend stuff. Optionally some frameworks if you want, which should usually be
easy enough to just follow some examples and get the functionality you want
pretty fast.

If you're put on an existing web project, you probably only have to learn the
bits immediately surrounding the things you do, picking it up as you go along.
I still don't know Angular, React, Vue, or much else in the way of JS
frameworks other than jQuery after being in web dev professionally for years,
as it simply hasn't been needed.

~~~
llamataboot
Yeah, sorry if I wasn't clear, but that's kind of my point. You can make a
site by knowing some subset of stuff, but the rest you use as a black box -
whether that's importing a bunch of third-party libraries or building a web
app without understanding how cross-site attacks work. So at some point you
either have to learn these little gotchas all over the place, or they go
unfixed.

Like how many sites still don't have mandatory HTTPS even though it is free
and easy?

~~~
jimmaswell
That's no different from programming in any other environment then - good
libraries are generally meant to work like black boxes and security issues etc
happen everywhere.

------
gambler
Gotta love all the ritual incantations one has to perform to "keep your
website safe" these days. Worst of all, people are clearly bragging about
possessing this arcane knowledge, instead of constantly complaining about how
stupid the whole thing is to begin with. "Responsible developer"? Hah.

The web needs a real security model relevant to what browsers are doing today,
not these piecemeal hacks duct-taped to a hypertext delivery protocol.

~~~
admyral
As someone who builds websites for money, I couldn't agree more. I rarely get
to bill for making incremental changes; I get to bill for implementing
features. Spending money to implement and log a properly restrictive Content-
Security-Policy doesn't seem like a wise use of my clients' limited budget.

~~~
dwheeler
It may be a wise use if a security break-in would be a problem for your
client.

I am a big fan of restrictive CSP, but it's often hard to get there from an
existing site. It's often better to do it in stages, e.g., when you work on
page Q, you make _that_ page have a restrictive CSP. Later, when you work on
page R, that can grow one (or at least have fewer CSP issues). If having
someone break into your site would be a serious problem, then you should speed
up what it takes to get there.

------
apple4ever
Great article.

Somebody else mentioned Scott Helme, but didn't link to three of his amazing
sites:

[https://securityheaders.com](https://securityheaders.com) which checks
important headers

[https://report-uri.com/](https://report-uri.com/) which allows sending CSP
reports to catch errors. It also has a CSP builder (among a bunch of other
tools) which is hugely helpful:
[https://report-uri.com/home/generate](https://report-uri.com/home/generate)

[https://scotthelme.co.uk/](https://scotthelme.co.uk/) is his blog with a ton
of info. It also has a cheat sheet for CSP:
[https://scotthelme.co.uk/csp-cheat-sheet/](https://scotthelme.co.uk/csp-cheat-sheet/)

(I might be a fan of the guy ha)

------
kijin
These are great features, but I wish there were better ways to communicate the
security policies for my website than having to send lengthy headers with
every page.

CSP in particular tends to get rather long-winded. As the article says, it can
contain up to 24 policies, many of which contain their own lists! It's bound
to get even more complicated as web apps integrate with an ever greater number
of external services. Feature-Policy also looks like it could easily balloon
to 1KB or more if you wanted to control all the features. No matter how much
compression you add, at some point this is going to affect the load time.
Additional TCP round trips aren't cheap, especially for HTML resources that
usually aren't cached at the edge.

Wouldn't it be convenient if I could store a structured representation (JSON,
YAML, whatever) at a predefined location under /.well-known/ and use ordinary
Cache-Control headers to make browsers cache the rules?
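
Something like this hypothetical file, say - to be clear, no such standard
exists today, and both the location and the schema here are purely a sketch of
the idea:

```json
{
  "content-security-policy": {
    "default-src": ["'self'"],
    "script-src": ["'self'", "https://www.google-analytics.com"],
    "img-src": ["'self'", "https://www.google-analytics.com"]
  },
  "strict-transport-security": {
    "max-age": 31536000
  }
}
```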

~~~
LeonM
> Feature-Policy also looks like it could easily balloon to 1KB or more

Twitter sends over 6kb of CSP headers on every single request. This is what
happens if you run loads of different advertisement and tracking vendors.

~~~
perlgeek
> Twitter sends over 6kb of CSP headers on every single request.

Now I understand why HTTP/2 uses compression for HTTP headers.

------
asaph
I read the whole article just to find out what the X-Shenanigans header shown
in the picture at the top of the article is. There was no further mention of
it.

Looks like it's an inside joke from Twilio[0].

[0] [https://github.com/kwhinnery/todomvc-plusplus/issues/7](https://github.com/kwhinnery/todomvc-plusplus/issues/7)

------
spiderfarmer
Aren't advertising networks blocking the adoption of CSP headers? Seems like
it's quite a job to maintain the exceptions needed for Doubleclick for
example.

~~~
LeonM
It is always quite a job to maintain CSP. And it's really easy to break
something with CSP.

This is why loads of devs eventually throw in the towel and disable CSP or use
unsafe-inline. It's basically like trying to solve a hard CSS problem: at some
point you give up and add !important statements everywhere.

It's also really, really hard to explain to customers that it takes time to
set up, and every time they install a new tracking/ad/video/whatever plugin on
their CMS, you'll have to spend time on adjusting the CSP accordingly.

That said, I do encourage developers to use CSP. It's a really powerful tool
to secure your site and protect your visitors from fraud/phishing.

~~~
anyzen
It is also at least some level of defense against malicious npm packages (it
doesn't eliminate the threat completely, but at least less sophisticated
attacks will be thwarted).

CSP headers are a _very_ useful tool and I encourage everyone to use them.
They are a PITA to set up though. Fortunately at least Firefox clearly
communicates in console log when a CSP rule is hit, and how to relax it (if it
was by mistake).

Note that CSP can be set as META tags too. There's a gotcha though: if they
are set in both places (HTTP headers and HTML META tags), an intersection of
the rules is used.
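
For anyone who hasn't seen the META variant, it looks like this (note that per
the spec, the frame-ancestors, report-uri and sandbox directives are ignored
in META tags and only work as real HTTP headers):

```html
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; img-src 'self' https:">
```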

------
Tsubasachan
Developers connect people. Developers help people. Developers enable people.

Look I understand that people in general feel the need to pretend that their
work is very important and good but come on. You are not working for Warchild
in a Lebanese refugee camp.

------
nurettin

      Developers connect people. 
      Developers help people. 
      Developers enable people.
    

I don't remember agreeing to these conditions. Is this some sort of
psychological manoeuvre to get people to use SSL encryption on the web ?

~~~
twic
Whilst, if you ever upset one badly enough, you will find that sysadmins
disable people.

------
cuillevel3
When dealing with user uploads, we still need content disposition headers to
force browsers to treat certain formats as attachments, rather than showing
them inline, right?

------
throwaway77384
I kind of understand why CSP isn't more widespread.

I tried adopting CSPs on all my sites to full Mozilla Observatory[0] standard.
One is a Go based Heroku instance, where I used unrolled/secure[1], though
there are a few different packages achieving this. The others are static
Netlify deploys using Netlify CMS. For those, you have to include a headers
file (in my case I am instructing Hugo to build the site with a _headers file
included, which Netlify parses).

Some observations:

\- It's a huge pain in the ass / trial and error process

\- The formatting for CSP rules was evidently made to be as insufferable as
imaginable. All on the same line, with commas and semicolons being the only
separators, no line-breaks, tabs or anything allowed. Seriously, wtf

\- When you think you've got it working, some other thing breaks in a weird,
silent way

\- Debugging CSPs in Firefox is nearly impossible (as for certain in-line
scripts, you will need to get SHA values to tell the CSP to let them through.
Chrome provides the SHA in the console. Firefox bizarrely doesn't.)

\- Trying to integrate google recaptcha with CSP is hilariously complicated

\- You should try to host all fonts yourself, lest you need to enable google
or fontawesome exceptions for font, CSS, script and svg, because apparently
that's what you need just to get an FB icon on your page to work

\- Forget about React, or anything using inline-script or styles. Netlify CMS
and the Netlify identity widget all require inline styles and scripts. Even
generating SHA values for all of those, I could not get this stuff to work. In
the end I gave up and disabled the CSP again

And this is for static sites using really simple tooling. I have yet to find a
viable way to make this work.

edit (some additional notes):

\- Tools like this one[2] did not generate SHA values that were accepted by
the CSP. I have tried a few different tools, checked all white spaces over and
over. I just couldn't get it to work. Only Chrome returned the proper SHA
value.

\- I tried fixing a hover state loading in improperly (it flickered on first
hover). This wasn't related to the CSP, but because I had to try lots of
different things, like load in an SVG sprite, or png sprite, try pre-loading,
use some JS, etc. etc. I had to keep changing the CSP to work with this, too.
So applying a CSP should probably only be done at the end of a project. At the
same time, if anything breaks from one day to the next, your debugging will
now most likely include the CSP as well.
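
For what it's worth, the hash Chrome prints can be reproduced outside the
browser; the catch is that the digest must cover the exact text between
<script> and </script>, every space and newline included, which may be why
generic hash tools kept disagreeing. A minimal Python sketch:

```python
import base64
import hashlib

def csp_script_hash(inline_source: str) -> str:
    """Build the CSP source expression for one inline script.

    The digest covers the script's exact text, whitespace included.
    """
    digest = hashlib.sha256(inline_source.encode("utf-8")).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

# The result goes into the policy, e.g.
#   Content-Security-Policy: script-src 'sha256-...'
print(csp_script_hash("console.log('hi');"))
```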

\---

[0][https://observatory.mozilla.org/](https://observatory.mozilla.org/)

[1][https://github.com/unrolled/secure](https://github.com/unrolled/secure)

[2][https://passwordsgenerator.net/sha256-hash-generator/](https://passwordsgenerator.net/sha256-hash-generator/)

~~~
thrower123
If something is so hard to use that nobody can be arsed to do it properly,
it's generally a signal that the tool needs to be redesigned.

~~~
dwheeler
CSP could be better, but it's perfectly useful as it is. The problem is that
too many people did things badly (using inline JavaScript), mixing up code and
data. It's time-consuming to fix problems like that, but possible. Enabling
CSP is easy... it's fixing your system so it works with it that takes time.

The CII Best Practices Badge uses restrictive CSP. You can tell that here:
[https://securityheaders.com/?q=bestpractices.coreinfrastruct...](https://securityheaders.com/?q=bestpractices.coreinfrastructure.org&followRedirects=on)

------
davidcuddeback
You can check your site's usage of most of these headers with
[https://securityheaders.com](https://securityheaders.com). HSTS and more is
checked by
[https://www.ssllabs.com/ssltest/](https://www.ssllabs.com/ssltest/).
Definitely make sure you understand what the headers do before changing them.
Don't just copy/paste what you see here.

------
hwj
This is how you can get the headers:

    curl -I -X GET \
        https://www.twilio.com/blog/a-http-headers-for-the-responsible-developer

~~~
pstuart
"-X GET" is superfluous, as that is the default method.

~~~
nybble41
Unless you use "-I", as in the example, in which case the default method is
HEAD.

~~~
pstuart
D'oh! I'll let the record stand as lesson on rash comments.

------
awcode
"Browser support for CSP is good these days, but unfortunately, not many sites
are using it....I think we can do better to make the web a safer place"

Interestingly enough, the blog this was posted on falls into the 94% not
making the effort to use CSP!

------
qubyte
CSP interacts in a surprising way (at least it was to me) with service
workers.

[https://qubyte.codes/blog/content-security-policy-and-servic...](https://qubyte.codes/blog/content-security-policy-and-service-workers)

------
bullen
I'm curious why/how XSS is a problem. Can someone describe a practical example
of how this has been successfully abused? To me XSS allows a page to be
distributed over many servers and that's more of a feature than a threat!

~~~
gmiller123456
There are actually two different types of XSS: there's a _persistent_ version,
which loads any time any user loads a certain page, and there's a _reflected_
version, which only shows up when a user clicks a maliciously crafted link.
The persistent version is the most dangerous, as it doesn't rely on the user
being incredibly stupid. The reflected version is by far the most common, but
since it requires the user to click on a malicious link, it isn't usually the
easiest to exploit.

But either way, they both allow an attacker to display information on a
website, when the content didn't originate from that site.

An example of how this could be really bad, would be a script that deletes all
of the content from the document body and replaces it with a login screen.
Rather than actually logging you in, it submits the username and password you
entered to a site the attacker has control over.

Another, less obvious method, would be a script that captures your session
cookie and submits that to another site the attacker has control over. If you
were logged in to the site, the attacker could use the session cookie to
authenticate to the site as you without logging in.
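
Worth adding that the cookie-theft variant has a direct header-level
mitigation: marking the session cookie HttpOnly keeps it out of reach of
document.cookie, so even a successful script injection can't exfiltrate it
(cookie name and value here are illustrative):

```
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
```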

~~~
bullen
Ok, in that case I don't see how headers solve the problem better than:

1) Don't click on bad URLs (this should be taught in kindergarten by now).

2) Replace all input <> with &lt;&gt; etc.

I'm convinced scripts should not be able to read cookies for other domains?

Surely I'm missing something?
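
On point 2: that replacement is output encoding, and most languages ship it in
the standard library - in Python, for instance (the usual advice is to escape
at output time rather than on input, since the same data may land in different
contexts):

```python
from html import escape

user_input = '<script>alert("pwned")</script>'

# escape() neutralizes &, <, > and (by default) quote characters
safe = escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;
```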

~~~
gmiller123456
The Content-Security-Policy header prevents any new JavaScript from getting
executed. Any inline scripts have to have a matching nonce, or SHA hash in the
CSP header. With XSS, an attacker can insert content into the web page, but
they can't modify the headers, so this effectively stops all XSS without some
additional vulnerability being exploited.
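
A sketch of what that looks like in practice (the nonce must be a fresh random
value generated per response, never a constant):

```html
<!-- Header sent with the page:
     Content-Security-Policy: script-src 'nonce-R4nd0mV4lu3' -->

<script nonce="R4nd0mV4lu3">
  /* runs: the nonce matches the header */
</script>
<script>
  /* blocked: no nonce and no matching hash - this is exactly
     what stops injected scripts from executing */
</script>
```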

------
Kiro
> Did you ever wonder why you can’t use local environments like my-site.dev
> via HTTP with your browser anymore? This internal record is the reason –
> .dev domains are automatically included in this list since it became a real
> top-level domain in February 2019.

.dev hasn't been working locally for me for more than a year.

~~~
chrisweekly
I use "foo.local" as a drop-in replacement, haven't had problems...

~~~
voltagex_
Any devices using mDNS might disagree.

~~~
chrisweekly
Sure; though I only said _I_ haven't had problems using ".local". In any case,
I think your peer commenter who replied about the use of '.test' wins, given
the RFC.

------
mro_name
thx, my future headers cheat sheet.

------
venins
thanks :)

