* Either an unreleased Safari version or the most recent one will send preflight requests even when the request meets the spec (e.g. if the Accept-Language header is set to something it doesn't like).
* If you use the ReadableStream API with fetch in the browser, a preflight will be sent.
* If any event listeners are attached to the XMLHttpRequest.upload object, it will cause a preflight.
* Cross-origin @font-face URLs, images drawn to a canvas via drawImage, and some WebGL operations also obey CORS.
* The crossorigin attribute is required for cross-origin linked images or CSS; otherwise the response will be opaque and JS won't have access to anything about it.
* If you mess up your CORS setup, you get opaque responses, and opaque responses are "viral": they can cause entire canvas elements to become "tainted" and extremely restricted (see the sketch below).
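To make that last point concrete, here's a minimal sketch (hypothetical image URL) of why the crossorigin attribute matters for canvas: without a CORS-approved load, drawImage taints the canvas, and reads like toDataURL() throw a SecurityError.

```
// Minimal sketch, hypothetical URL: load an image with CORS so the
// canvas it is drawn to stays readable instead of becoming tainted.
const img = new Image();
img.crossOrigin = 'anonymous';                    // request a CORS-enabled fetch
img.src = 'https://images.example.org/photo.png'; // hypothetical cross-origin image

img.onload = () => {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  canvas.getContext('2d').drawImage(img, 0, 0);
  // Works only because the image was loaded with CORS approval; without
  // crossOrigin (or without the server sending CORS headers), this call
  // would throw a SecurityError on the now-tainted canvas.
  console.log(canvas.toDataURL());
};
```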
I feel like I've cut my teeth on this API more than most, and I still feel like I'm only scratching the surface!
People moan about C, yet I find the web stack far more painful to write for, because you don't even have control over the compiler following standards strictly (where stuff has even been standardised).
I really do wish we worked together to create a new standard for building and deploying documents and applications over the internet, because HTML (and all its supporting technologies) is an experiment that has gone bad. I'd prefer something that doesn't allow each browser to interpret the specifications differently, and absolutely something that isn't controlled by Google (they would obviously need input, but the last thing we need is another AMP).
Of course it will never happen, but one can dream / rant nonetheless.
There have been other examples in history where programming languages used to differ - sometimes even significantly - depending on which compiler or platform you were targeting, and where a standards body later stepped in to create a basic subset of said language that should be universal across all dialects (please note they cannot enforce this). In those instances it has led to code becoming greatly more portable.
To some extent, this is now happening with the web as well; however, my secondary point, beyond the complaint about differing outputs between browsers, is that I believe HTML et al is a lousy way to design applications in the first place. That is definitely not a problem created by warring proprietary vendors or slow revisions of standards, but rather an artefact of technology evolving past its original purpose while still having to retain backwards compatibility. Maybe the time has come for a second language for the web: keep HTML et al for legacy applications, blogs and other things that follow some of the original visions of the web, but have a new language for web applications and anything that requires a stronger security model.
It's amazing if you look at it from that angle. You can't have security by default, because the old standard is insecure by default (by today's standards) and that's what legacy applications depend on, yet there has been a lot of progress that you can opt into.
My favorite example is cookies. While everything else is origin-based, cookies are still based on a model that is very close to DNS (everything under PSL+1 is the same thing). This opens you up to a large number of attacks, especially in a man-in-the-middle situation (even with HTTPS). You can secure your own application by using secure cookie prefixes, but everything else is doomed.
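For anyone unfamiliar with those prefixes, a minimal sketch (Node's built-in http module, hypothetical cookie value): browsers refuse to store a __Host- cookie unless it is Secure, scoped to Path=/, and carries no Domain attribute, which is what blocks the subdomain/MITM planting attacks described above.

```
const http = require('http');

http.createServer((req, res) => {
  // __Host- prefix: the browser rejects this cookie unless it is
  // Secure, has Path=/, and has no Domain attribute, so a sibling
  // subdomain or a MITM can't plant a lookalike session cookie.
  res.setHeader(
    'Set-Cookie',
    '__Host-session=abc123; Secure; HttpOnly; Path=/; SameSite=Lax'
  );
  res.end('ok');
}).listen(8080);
```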
Maybe it isn't great in all ways, but at least it's something that works and is flexible enough for all sorts of things.
I will grant you that things have gotten better in recent years, and I do agree that there is no such thing as a perfect solution, but given we're in an era where desktop software is being phased out in favour of web applications (and even some desktop software is now written using web technologies, such as Electron), it really feels like we're going wholesale into a stack of technologies that could be significantly improved if we redesigned it from the ground up, taking into account:
* what we have learned from the last ~25 years of the web,
* the change in how the internet (in a broader sense) has been consumed over the last 10 years, and
* all the lessons we've learned from decades of desktop software development.
Plus, removing all the legacy cruft your sibling commenter highlighted has to be seen as a bonus too.
I'm not suggesting we get rid of HTML entirely, but maybe have a new programming language for the modern web - like how we have different programming languages for other areas of computing when we have different problems to solve. E.g. Bash, Perl, Go, and C can all be used to write CLI tools, but you'd use them to target different problems.
But as I said, this is just soapbox ranting. I couldn't see it changing without one of the powerhouses developing it largely in isolation and then we run the risk of walled gardens which - in my opinion at least - is worse than the current status quo.
I.e. what kind of dev work do you do that makes you have to deal with this more than the average developer?
Maybe I'm just full of myself though!
And in no way did I mean to imply that you were full of yourself :)
I was just curious, as I've only done some front-end dev where dealing with CORS was a minor part of it, and I'm always interested in HNers with niche jobs or uncommon experience.
* Most devs are at least full stack
* Most apps nowadays interact directly or indirectly with a browser
* Attributing this problem to frontend devs, without understanding why CORS exists and the issues in handling it, leaves you unaware of security issues in your backend services.
FYI: I am a backend developer.
That said, CORS is the only third-party thing that I actually like. It's secure by default. That the header the client sends (Origin) and the headers the server accepts with (Access-Control-Allow-*) are different things is amazing. I know all about the performance issues, but I'm OK with it in the right context.
It's kinda funny how against the herd I am here. I think it's shitty that the preflight sometimes doesn't happen. It's so weird to me that we carved out exceptions for this; an otherwise secure-by-default system should have come with a guaranteed preflight.
It's common to have app.example.org point to a CDN and api.example.org point to an API.
And the CORS implementation is terrible. The server has to transmit validation rules for the browser to enforce (with vendor-specific caching differences), rather than just enforcing access itself.
The reason it's implemented this way is because of the organic evolution of web security.
That's a typical misunderstanding of the purpose of CORS. Regardless of whether your website sets CORS headers, an attacker with a modified or custom browser can ignore them. That's not what CORS protects against - CORS protects a non-modified browser, used by Joe Random User and installed via a normal factory/distribution path, from being tricked into doing something against the site policy and thereby exposing the user.
My beef with CORS is that I have to encode the validation logic into its HTTP headers.
But maybe what I want doesn't fit in its headers, e.g. I want to allow requests from *.example.org. So then I am writing server code, and if I am writing server code anyway, why have the extra step of encoding validation logic in CORS headers in the first place?
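For what it's worth, the usual workaround is exactly that server code: since Access-Control-Allow-Origin can't express a wildcard subdomain, you validate the Origin yourself and reflect it back. A minimal sketch with Node's built-in http module (hypothetical endpoint and allow-list):

```
const http = require('http');

// Hypothetical policy: allow example.org and any of its subdomains.
const ALLOWED = /^https:\/\/([a-z0-9-]+\.)*example\.org$/;

http.createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && ALLOWED.test(origin)) {
    res.setHeader('Access-Control-Allow-Origin', origin); // echo back, not '*'
    res.setHeader('Vary', 'Origin'); // keep caches from mixing origins up
  }
  if (req.method === 'OPTIONS') { // answer the preflight
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    res.writeHead(204);
    return res.end();
  }
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end('{"ok":true}');
}).listen(8080);
```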
Is it really that much more expensive to check the Origin header than to check the Authorization and Cookie headers?
The problem is that existing servers don’t generally check Origin headers, so browsers needed some other mechanism to understand which requests were safe.
I disagree. The current model where the server has to opt-in to cross origin access by explicitly sending an Access-Control-Allow-Origin header is secure-by-default, whereas a model relying on the server to enforce access would be insecure-unless-the-server-properly-checks-the-origin-header, leading to all sorts of vulnerabilities.
If CORS aimed to only protect users, there would be no need for a preflight at all. The only reason preflights exist is to protect services from receiving requests they might really not expect (e.g. malformed data) and doing bad things as a result.
In particular, the idea is to protect non-publicly-routable services. Publicly-routable ones, where you can just issue an attack request with cURL or the like, have to be hardened against malformed requests to start with.
But there are tons of non-publicly-routable things (think printers and the like behind firewalls) that could be attacked via browsers that are running behind the firewall loading web pages from outside the firewall. And CORS aims to mitigate or prevent some of those attacks.
Could you elaborate on that? I can't picture the alternative you're suggesting.
Even still, a lot of people just put ‘Access-Control-Allow-Origin: *’ on everything as soon as they run into an issue, so that rule has to ban credentialed requests altogether (the spec forbids combining the * wildcard with Access-Control-Allow-Credentials: true).
Then the server responds with 200 or 403.
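If I'm reading the suggestion right, a minimal sketch of that alternative (Node, hypothetical origin): the server itself inspects Origin and refuses, instead of shipping allow-rules for the browser to enforce.

```
const http = require('http');

// The server enforces the policy directly: a request from an unexpected
// browser origin gets a 403 instead of a CORS allow-list response.
http.createServer((req, res) => {
  const origin = req.headers.origin;
  if (origin && origin !== 'https://app.example.org') { // hypothetical origin
    res.writeHead(403);
    return res.end('forbidden origin');
  }
  res.writeHead(200);
  res.end('ok');
}).listen(8080);
```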
Huh? They already implement authorization. (Transport security is similar but different.)
And I've seen plenty of `Access-Control-Allow-Origin: *` because people get frustrated with CORS, e.g. when they can't allow access for *.example.org.
The reply is always "yes, you can CSRF yourself, because it's not supposed to protect against that; it's supposed to protect you from other people". In exactly the same way, CORS is there to protect you from other people. You can always hack your own user-agent to disregard CORS, but the only person you can harm that way is yourself.
That's not true. You can set `Access-Control-Allow-Origin: *` and validate all requests in your server. The extra rules are for the vast majority of servers that never inspect Origin headers.
But you can still use IE, Edge, Chrome, and Safari and trust that they implement CORS and most other basic security features correctly.
That means extra delay, extra db connections and calls, etc.
How do you deal with them?
DB connections are pooled and cached using AWS Lambda.
BTW - our app is a plugin, so the origin is always the platform provider... then our app makes CORS calls to AWS.
In particular, setting the "Access-Control-Allow-Credentials" header to true means that a client which sent a request with a cookie is allowed to read the result; but whether the request is sent with a cookie or not, and will be treated as such by the server, is entirely up to the client.
So although malicious.com cannot read the details of bank.com using AJAX, it can definitely send a POST request to trigger the transfer from the user's account to a malicious account using the user's cookie (blindly so).
This is the reason proper CSRF protection must be implemented by the server, independently of whether CORS is enabled or not.
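As a minimal sketch of that server-side protection (Node, hypothetical session object): a synchronizer token, issued with the page and echoed back in a request header, which malicious.com can neither read nor forge even though it can make the browser send the cookie.

```
const crypto = require('crypto');

// Issue a random per-session token; embed it in the page or an API response.
function issueCsrfToken(session) {
  session.csrfToken = crypto.randomBytes(32).toString('hex');
  return session.csrfToken;
}

// Reject state-changing requests whose header token doesn't match the
// session's token; a blind cross-site POST can't supply it.
function checkCsrf(req, session) {
  const sent = req.headers['x-csrf-token'];
  return Boolean(sent) && sent === session.csrfToken;
}
```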
If you're writing an article on CORS today I also think you should mention recent CORS developments such as Cross-Origin Read Blocking (CORB) and features on the horizon such as Cross-Origin-Resource-Policy, Cross-Origin-Window-Policy, etc. that in light of Spectre, Meltdown etc. are meant to help plug speculative execution holes.
Too ironic not to mention.
If I have access to the Fetch API, I can tell the browser to send the user's cookie cross-origin and I can validate the request based on the origin. This allows for interesting authentication scenarios without the need for explicit client-user consent pages.
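A minimal sketch of that (hypothetical endpoint): the browser only exposes the response if the server answers with an exact Access-Control-Allow-Origin (not *) plus Access-Control-Allow-Credentials: true.

```
// credentials: 'include' sends the user's cookie cross-origin; the
// server can then validate the Origin header before honouring it.
fetch('https://api.example.org/whoami', { credentials: 'include' })
  .then((res) => res.json())
  .then((user) => console.log(user));
```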
Are you talking about the HTTP Referer header? That's easily spoofable and can't be relied on server-side. The same-origin policy and all the CORS security are implemented in the browser itself, not in HTTP.
If you need to be certain that a request originated from your own page and not another domain you need to use a CSRF token.
Also, it's easily bypassed.
In my opinion, it would have been much better to improve browsers so they don't include cookies in third-party requests automatically (only when they are explicitly specified via JS, for example). That would have solved the issue equally well, without introducing a bulky server-side security feature to remote-control browsers.
If the browsers separated the session by origin (as blauditore wrote), the whole problem space would look very different.
Sadly, even though this is an obvious concept and trivial to implement, it took over 20 years from the web's debut to get it into most browsers. The cost to society of having thousands of copies of the same commonly used files (like jQuery) hosted locally on countless servers, rather than a centrally hosted version already cached from previously visited sites, is staggering to contemplate. I'd really like to know who was behind the holdup on deploying SRI.
If we had something like SRI from the start, we could have linked to resources by their hash instead of their URL (more like how IPFS works). There's a name for this concept that eludes me, and also a great video explaining its potential but also difficulties when it comes to security and HTTPS. The short of it is that if we had routers that accepted hashes as well as URLs, then we could ask for a list of all data matching a given hash and download that file (or its pieces) from the closest cache(s). So instead of linking to jQuery at https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.mi... we could just ask the router for the file with SHA2 hash 160a426ff2894252cd7cebbdd6d6b7da8fcd319c65b70468f10b6690c45d02ef and it would return its contents regardless of where it came from, including the browser's own cache if it already had that file (I just used https://hash.online-convert.com/sha256-generator but there would be a better standard for this).
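For reference, that digest is what SRI's integrity attribute carries today, just base64-encoded rather than hex (the value below is simply the comment's hex digest re-encoded). A sketch, spelling out the CDN URL the comment truncates, as the DOM equivalent of adding integrity/crossorigin to a script tag:

```
// Equivalent to: <script src="…" integrity="sha256-…" crossorigin="anonymous">
// The integrity value is the same SHA-256 digest as above, base64-encoded.
const s = document.createElement('script');
s.src = 'https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js';
s.integrity = 'sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=';
s.crossOrigin = 'anonymous'; // SRI on cross-origin scripts requires CORS
document.head.appendChild(s);
```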
Anyway, hope this helps and sorry for any confusion.
Google Analytics or reCAPTCHA, for example, aren't versioned. Deploying SRI there is just going to break your site when they update the script.
You could just as easily frame CORS as "antibiotics for the people who dared to leave their house".
(There's also a no-true-scotsman fallacy going on in your argument)
It's a "subsidy" because we all have to spend time on it, even though lots of people don't receive any benefit from it. It's not intolerable, the Web is based on community standards and responds to community needs, and I'm fine with that. But I'm always on the losing end of this one so I'm going to say so.
JWT/tokens + Local/session storage + adding fetch headers seems like the best way as long as you don't run untrusted JS.
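A minimal sketch of that pattern (hypothetical /api/me endpoint): the token only travels when your code attaches it, unlike a cookie.

```
// The token lives in localStorage and is attached explicitly, so the
// browser never sends it automatically the way it would a cookie.
const token = localStorage.getItem('jwt');

fetch('/api/me', {
  headers: { Authorization: `Bearer ${token}` },
})
  .then((res) => res.json())
  .then((me) => console.log(me));
```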
What is your reason for preferring JWT + localStorage for authentication and session handling? I'm genuinely curious, as httpOnly cookies strike me as better in every meaningful way.
If the initial SSR needs some initial client-state to complete its work before sending the HTML payload, it can see the cookie, but not localStorage.
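A minimal sketch of that SSR point (Node, hypothetical "theme" cookie): the Cookie header arrives with the very first request, while localStorage simply doesn't exist server-side.

```
const http = require('http');

http.createServer((req, res) => {
  // Parse the Cookie header; this is the only client-state available
  // before the first byte of HTML is sent (localStorage is not).
  const cookies = Object.fromEntries(
    (req.headers.cookie || '')
      .split('; ')
      .filter(Boolean)
      .map((pair) => {
        const i = pair.indexOf('=');
        return [pair.slice(0, i), pair.slice(i + 1)];
      })
  );
  const theme = cookies.theme || 'light'; // hypothetical initial client-state
  res.setHeader('Content-Type', 'text/html');
  res.end(`<body data-theme="${theme}">...</body>`);
}).listen(8080);
```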
It was a PITA to find the reason why Firefox wouldn't use the Google fonts.
My only contribution to the discussion is that if you get a CORS error where you wouldn't expect it, the problem might not be a CORS issue. I spent the better part of a weekend trying to debug why a request to a Google API wasn't working and why I was seeing a CORS error (same thing worked fine on another system). Turns out, it wasn't the same thing, my url had a typo...
Also, this would not be secure by default, because you would have to change the default behavior of the server to block cross-origin requests.
This is something I could never get my head around with CORS - what's the point of whitelisting origins if getting around the whitelist is nothing more than an inconvenience?
To prevent someone abusing your API otherwise, use an authentication method.
It acts as a `fetch` implementation that allows you to declare cross-origin policies in advance, then channel the requests through an iframe which enforces those policies.
Obviously in any nontrivial web app it would fail because of authentication issues, but if a server doesn't do ANY sort of security checking, that should work, no? Does that mean that the onus is on the server developer of mybank.com? And if so, what would stop the malicious request from working on any server developed before the existence of CORS?
If HTTP, that’s done via setting some information in the request headers, be it a cookie, or basic auth, or token auth, or similar.
So far it's only gotten in my way as a developer. But it's there to protect users, not me. So at the end of the day, I'm glad it's there as a way to somewhat prevent people from tricking my users into hitting my api with malicious requests.
* Not all cross-origin requests need to be preflighted
* To use credentials, the server needs to explicitly allow credentials to be sent from the client (sketch below)
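A minimal sketch of that second point (Node, hypothetical origin): with credentials involved, browsers reject the * wildcard, so the origin must be spelled out exactly.

```
const http = require('http');

http.createServer((req, res) => {
  // Explicit opt-in: both headers are required before the browser will
  // expose a credentialed cross-origin response ('*' is not accepted here).
  res.setHeader('Access-Control-Allow-Origin', 'https://app.example.org');
  res.setHeader('Access-Control-Allow-Credentials', 'true');
  res.end('ok');
}).listen(8080);
```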
Of course I don't, nobody does.
Content-Type must be `application/x-www-form-urlencoded`, `multipart/form-data`, or `text/plain` for the request to be allowed without a preflight.
Edit: I can't reproduce this on Chrome 69.0.3497.100 (Official Build) (64-bit). Setting the Content-Type to anything other than the above with a POST request will cause an OPTIONS preflight, even when using your example.
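For anyone following along, a minimal sketch of the difference (hypothetical endpoint):

```
// "Simple" content type: the POST goes out with no preflight.
fetch('https://api.example.org/log', {
  method: 'POST',
  headers: { 'Content-Type': 'text/plain' },
  body: 'hello',
});

// Non-simple content type: the browser sends an OPTIONS preflight first.
fetch('https://api.example.org/log', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ msg: 'hello' }),
});
```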
- the document.domain JS hack
- a reverse proxy
- HTTP headers
What else? Referrer Policies await.
- Just use my API...
- I tried, please enable CORS.
- What's CORS?
I find it frustrating that this seems to be the default for most servers. I think it should be opt-in and not opt-out.