Do You Really Know CORS? (performantcode.com)
603 points by grzegorz_mirek 13 days ago | 122 comments





This is an awesome overview! But don't take it as all-encompassing; it doesn't go into some of the more esoteric edge cases with CORS, like:

* Either an unreleased Safari version or the most recent one will send preflight requests even if the request meets the spec (like if Accept-Language is set to something it doesn't like).

* If you use the ReadableStream API with fetch in the browser, a preflight will be sent.

* If there are any event listeners attached to the XMLHttpRequest.upload object, it will cause a preflight (see the sketch after this list)

* cross-domain @font-face URLs, images drawn to a canvas using drawImage, and some WebGL things will also obey CORS

* the crossorigin attribute is required for cross-origin linked images or CSS, or the response will be opaque and JS won't have access to anything about it.

* if you mess up CORS stuff, you get opaque responses, and opaque responses are "viral", so they can cause entire canvas elements to become "tainted" and extremely restricted.
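
A rough sketch of two of those cases, as I understand them (hostnames here are made up):

    // 1. Attaching any listener to xhr.upload forces a preflight,
    //    even for an otherwise "simple" POST.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://api.example.com/upload');
    xhr.upload.addEventListener('progress', function (e) {
      console.log('uploaded ' + e.loaded + ' of ' + e.total + ' bytes');
    });
    xhr.send(new Blob(['hello']));

    // 2. A cross-origin image drawn to a canvas: with the crossorigin
    //    attribute (and a matching Access-Control-Allow-Origin header on
    //    the response) the canvas stays readable; without them the canvas
    //    is tainted and toDataURL()/getImageData() throw a SecurityError.
    var img = new Image();
    img.crossOrigin = 'anonymous';
    img.src = 'https://cdn.example.com/photo.png';
    img.onload = function () {
      var canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      canvas.getContext('2d').drawImage(img, 0, 0);
      console.log(canvas.toDataURL()); // ok because the image is CORS-enabled
    };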

I feel like I've cut my teeth on this API more than most, and I still feel like I'm only scratching the surface!


The entire web stack is such a broken mess of inconsistencies and thousands of hidden traps that can render the entire thing insecure.

People moan about C, yet I find the web stack far more painful to write for, because you don't even have control over whether the "compiler" follows the standards strictly (where stuff has even been standardised).

I really do wish we worked together to create a new standard for building and deploying documents and applications over the internet, because this HTML (and all its supporting technologies) is an experiment that has gone bad. I'd preferably want something that doesn't allow each browser to interpret the specifications differently, and absolutely something that isn't controlled by Google (they would obviously need input, but the last thing we need is another AMP).

Of course it will never happen, but one can dream / rant nonetheless.


The web is in the state it's in because it's a no-man's-land between warring proprietary vendors. Any one of Apple, Microsoft, or Google (even secondary players like Amazon, Oracle, or Valve) would much prefer a world in which they had the dominant platform and could get a 30% cut and an arbitrary veto over all software written for that platform.

The problem you describe does exist but I respectfully disagree that's the reason why writing web applications has become as cumbersome as it has. I think the issue there is more down to standards authorities being glacially slow to recognise the change in demands. This has obviously allowed a situation where browser vendors such as Microsoft and Google have felt they needed to run amok just to offer many of the features developers were asking for (and to an extent, consumers too since end users were lured in by prettier and more interactive sites).

There have been other examples in history where programming languages used to differ - sometimes even significantly - depending on which compiler / platform you were targeting, and where a standards body later stepped in to create a basic subset of said language that should be universal across all dialects (please note they cannot enforce this). In those instances that has led to code becoming greatly more portable.

To some extent, this is now happening with the web as well; however my secondary point to the complaint about differing outputs between browsers is that I believe HTML et al is a lousy way to design applications from the outset. That definitely is not a problem created by warring proprietary vendors or slow revisions of standards, but rather just an artefact of technology evolving past its original purpose yet still having to retain backwards compatibility. Maybe the time has come that we need a second language for the web, so we have HTML et al for legacy applications, blogs and other stuff that is following some of the original visions of the web, but have a new language for web applications and anything that requires a stronger security model.


The problem is that the original concept of web security doesn't apply to the current time anymore, but it's very hard to change anything because you have to be backwards compatible with literally the whole internet.

It's amazing if you look at it from that point of view. You can't have security by default because the old standard is insecure by default (by today's standards) and that's what legacy applications depend on, yet there has been a lot of progress that you can opt into.

My favorite example is cookies. While everything else is origin-based, those are still based on a model that is very close to DNS (everything under the same PSL+1 is treated as the same thing). This opens you up to a large number of attacks, especially in a man-in-the-middle situation (even with HTTPS). You can secure your own application by using secure cookie prefixes, but everything else is doomed.
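
For anyone unfamiliar, a prefixed cookie looks like this (value is illustrative); the browser only accepts a __Host- cookie if it is set over HTTPS with Secure, no Domain attribute and Path=/, so a subdomain or a man in the middle can't plant or overwrite it:

    Set-Cookie: __Host-session=abc123; Secure; Path=/; HttpOnly; SameSite=Lax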


I don't think there's an ultimate solution to anything. At a high level anything seems like it can be done easily, but the issue always lies in the details.

Maybe it isn't great in all ways, but at least it's something that works and is flexible enough for all sorts of things.


It's great for document markup. But the web has moved beyond that, and writing interactive applications has always been a series of kludges to work around the fact that the web is ostensibly just a network of documents (apologies for the massive oversimplification here - please bear with me on this....)

I will grant you that things have gotten better in recent years, and I do agree that there isn't such a thing as a perfect solution, but given we're in an era where desktop software is being phased out in favour of web applications (and even some desktop software is now being written using web technologies - such as Electron) it really feels like we're going wholesale into using a stack of technologies that could be significantly improved if we redesigned it from the ground up taking into account:

* what we have learned from the last ~25 years of the web,

* the change in how the internet (in a broader sense) is consumed over the last 10 years

* all the lessons we've learned from the decades of desktop software development.

Plus removing all the legacy cruft which your sibling commenter highlighted has to be seen as a bonus too.

I'm not suggesting we get rid of HTML entirely, but maybe have a new programming language for the modern web - like how we have different programming languages for other areas of computing when we have different problems to solve. e.g. Bash, Perl, Go and C can all be used to write CLI tools, but you'd use them to target different problems.

I guess to an extent developers are trying to do this already with some of the massive tooling you get that compiles down to CSS, JavaScript and so forth. Plus the experiments we see in WebAssembly are another example of developers trying to break free from the constraints of scripting stuff inside a document. But I'd rather see a secondary development platform that is native to the web and is more application-aware and security-conscious than our current situation of having to run vast and complex frameworks that still, ultimately, compile down to the same inconsistent and insecure platform that we're currently stuck with.

But as I said, this is just soapbox ranting. I couldn't see it changing without one of the powerhouses developing it largely in isolation and then we run the risk of walled gardens which - in my opinion at least - is worse than the current status quo.


Chrome's sendBeacon content-type CORS bug is another fun one:

https://bugs.chromium.org/p/chromium/issues/detail?id=490015


Genuine question: why do you feel you've cut your teeth more than most?

I.e. what kind of dev work do you do that makes you have to deal with this more than the average developer?


The main reason is that I've had to re-implement it on the server side two times now by sheer "luck", and I feel like I've just hit more of the edge cases than most because of the areas I ended up working in (which just happened to be a handful of canvas-based apps where a couple of them needed to call out to unknown 3rd parties, and I had to be very careful about using opaque responses as they would ruin the canvas).

Maybe I'm just full of myself though!


Ah. That's interesting.

And in no way did I mean to imply that you were full of yourself :)

I was just curious as I've only done some front-end dev and dealing with CORS was a minor part of it and I'm always interested in HNers with niche jobs or uncommon experience.


The kind of work where you need to run javascript loaded from third party sites and make it do stuff to the user (advertisements, chat).

Not OP, but my guess... Most devs aren't frontend devs and I would categorize this as mostly a frontend developer problem.

Good points. Also, the strange case of access-control-allow-origin containing a list of origins. Or rather, the lack of support for that part of the standard.

I'm ideologically against third party on the web because it is a privacy nightmare. But I'm in the system that I'm in, and I don't take on fights that aren't possible to win, so barring my becoming a billionaire I've kinda just accepted that third party is here for at least a little while and I'm not going to refuse to use ads and analytics. Except on my personal website, that gets to stay cool.

That said, CORS is the only thing about third party that I actually like. It's secure by default. That the ensure header and accept header are different things is amazing. I know all about the performance issues[0], but I'm ok with it in the right context.

[0] It's kinda funny how against the herd I am here. I think it is shitty that the preflight sometimes doesn't happen. It's so weird to me that we carved out exceptions for this; it seems like an otherwise secure-by-default system should have come with a guaranteed preflight.


CORS is not necessarily about third parties.

It's common to have app.example.org point to a CDN and api.example.org point to an API.

And CORS implementation is terrible. The server has to transmit validation rules for the browser to enforce (with vendor specific caching differences), rather than just enforcing access itself.

The reason it's implemented this way is because of the organic evolution of web security.


> And CORS implementation is terrible. The server has to transmit validation rules for the browser to enforce (with vendor specific caching differences), rather than just enforcing access itself.

The only concern of CORS is JavaScript running in the browser. CORS is not about server-side security but about what JavaScript can or cannot access. It is there to protect the browser's user and make script execution more secure.


> And CORS implementation is terrible. The server has to transmit validation rules for the browser to enforce (with vendor specific caching differences), rather than just enforcing access itself.

That's a typical misunderstanding of the purpose of CORS. Regardless of whether your website sets CORS headers or not, an attacker with a modified or custom browser can ignore them. That's not what CORS protects from - CORS protects against a non-modified browser, used by Joe Random User and installed via a normal factory/distribution path, being tricked into doing something against the site policy, thereby exposing the user.


I don't misunderstand.

My beef with CORS is that I have to encode the validation logic into its HTTP headers.

But maybe what I want doesn't fit in its headers, e.g. I want to allow requests from *.example.org. So then I am writing server code, and if I am writing server code anyway, why have the extra step of encoding validation logic in CORS headers in the first place?
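
For what it's worth, the usual workaround is to write that bit of server code anyway and echo the Origin back when it matches your pattern. A minimal sketch in plain Node (hostname, port and regex are made up):

    var http = require('http');

    // Allow example.org and its subdomains, since ACAO can't express "*.example.org".
    var ALLOWED = /^https:\/\/([a-z0-9-]+\.)*example\.org$/;

    http.createServer(function (req, res) {
      var origin = req.headers.origin;
      if (origin && ALLOWED.test(origin)) {
        res.setHeader('Access-Control-Allow-Origin', origin); // echo it, don't use '*'
        res.setHeader('Vary', 'Origin');                      // keep caches honest
      }
      if (req.method === 'OPTIONS') { // answer preflights
        res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
        res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
        res.writeHead(204);
        return res.end();
      }
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end('{"ok": true}');
    }).listen(8080);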


You are still thinking about a server checking whether a browser is trusted to run some code against the server, when CORS is about the browser deciding if it should trust some of the server's response.

Surely you can implement your security rules this way. But for most people CORS is a cheaper way to enable cross-origin requests. And clueless people are stuck with the same-origin policy, which is secure by default.

> CORS is a cheaper way to enable cross-origin requests

Is it really that much more expensive to check the Origin header than to check the Authorization and Cookie headers?


If your server can handle Origin checks that way, then leave your CORS headers wide open and problem solved.

The problem is that existing servers don't generally check Origin headers, so browsers needed some other mechanism to understand which requests were safe.


> And CORS implementation is terrible. The server has to transmit validation rules for the browser to enforce (with vendor specific caching differences), rather than just enforcing access itself.

I disagree. The current model where the server has to opt-in to cross origin access by explicitly sending an Access-Control-Allow-Origin header is secure-by-default, whereas a model relying on the server to enforce access would be insecure-unless-the-server-properly-checks-the-origin-header, leading to all sorts of vulnerabilities.


It’s secure in a “trust the client” manner. Any server code not still doing its own authentication is in for a rude awakening.

If you don’t trust the client, there is nothing you can do. An insecure browser could spoof the origin header too.

I think you’re missing my point. While you can assume well behaved clients will reduce unwanted traffic, a malicious client will spoof everything it can. Thus, there is definitely something you can do: you should never trust the client and the server should authenticate every request (as if CORS didn’t exist) instead of assuming all requests from clients are valid.

It's already insecure-unless-the-server-checks-the-cookie-header.

Something servers already do.

CORS protects users, not services, so it's entirely reasonable that the evaluation of policy occurs in the user agent.

Actually, CORS protects both.

If CORS aimed to only protect users, there would be no need for a preflight at all. The only reason preflights exist is to protect services from receiving requests they might really not expect (e.g. malformed data) and doing bad things as a result.

In particular, the idea is to protect non-publicly-routable services. Publicly-routable ones, where you can just issue an attack request with cURL or the like, have to be hardened against malformed requests to start with.

But there are tons of non-publicly-routable things (think printers and the like behind firewalls) that could be attacked via browsers that are running behind the firewall loading web pages from outside the firewall. And CORS aims to mitigate or prevent some of those attacks.


> rather than just enforcing access itself.

Could you elaborate on that? I can't picture the alternative you're suggesting.


Same here... from what I can tell, only the browser has reliable information about what's happening. How could the server know code from another domain is making the requests? And if anything, the web is more secure - desktop native applications can usually access each other's data and do malicious requests to remote servers using stolen credentials just fine.

Via the Origin header, just like the preflight requests use. The server is ultimately the one telling the browser what the rules are already. The reason a preflight is needed is so that there is effectively a default deny policy, since both historically and even today you can’t assume all APIs that accept authentication tokens from the browser take into account any kind of cross origin access.

Even still, a lot of people just put ‘Access-Control-Allow-Origin: *’ on everything as soon as they run into an issue, so that rule has to ban credentialed requests altogether.


You are talking about the Origin header set by the browser that you don't trust?

The whole point of CORS is that you trust the browser to do the preflight requests and obey them, otherwise it doesn’t do anything. If you have access to the credentials (which the browser does) and the ability to send whatever HTTP requests you want, you can bypass CORS entirely.

The alternative is for the user agent to send the Origin header on all requests.

Then the server responds with 200 or 403.


The people with the security problem are the users of browsers, and the browser vendors have a solution to solve that problem built into the browser. If they didn’t enforce it at the browser level, the owners of the servers would have little incentive to enforce CORS and most just wouldn’t. See https adoption.

Bingo. I still kind of wish there was an ability to make the browser invoke some kind of request that servers would reject by default (e.g. with a non-standard HTTP method) that could combine the preflight check and the request while still making servers that don’t anticipate CORS blanch and not take any dangerous actions. But the browser security model is complex and messy enough as it is, so I doubt it’s worth it.

> the owners of the servers would have little incentive to enforce CORS and most just wouldn’t

Huh? They already implement authorization. (Transport security is similar but different.)

And I've seen plenty of Access-Control-Allow-Origin: * because people get frustrated with CORS, e.g. they can't allow access for *.example.org.


The origin could still be falsified client-side.

No, it can't be falsified by JS in the browser. CORS is only relevant for JS in the browser; it doesn't impact curl at all, for example.

This is kind of like the reports I see every so often to Django's security address, where someone demonstrates that they can CSRF their own session.

The reply is always "yes, you can CSRF yourself, because it's not supposed to protect against that; it's supposed to protect you from other people". In exactly the same way, CORS is there to protect you from other people. You can always hack your own user-agent to disregard CORS, but the only person you can harm that way is yourself.


Then it means that a service is vulnerable by default unless it implements the list of allowed origins. With CORS if there are no headers then it means no authorization.

Using HMAC or API keys.

> The server has to transmit validation rules for the browser to enforce (with vendor specific caching differences), rather than just enforcing access itself.

That's not true. You can set Access-Control-Allow-Origin: * and validate all requests in your server. The extra rules are for the vast majority of servers that never inspect Origin headers.


Exactly. This is a big concern for me. The server is really telling the browser what to do. But the browser can ignore it. It is quite a hacky and inelegant model. Very broken.

CORS is about browser security. A browser can have terrible security in lots of ways (allowing JavaScript to access HTTP-only cookies, etc). In this case you have a bad browser and you should not use it.

But you can still use IE, Edge, Chrome and Safari and trust that they implement CORS and most other basic security features correctly.


Our microservices stack is pretty dependent upon clients making cross-origin requests. I don't necessarily consider these "3rd party".

Case by case basis, but it's not uncommon to have clients hit a single domain and route the traffic based on the url. Exposing a microservice architecture to clients has pros and cons, of which this is a con.

That would mean each call made by the client would require a preflight OPTIONS call.

That means extra delay, extra db connections and calls, etc.

How do you deal with them?


Pre-flight overhead, yes (maybe 30ms?)

DB connections are pooled and cached using AWS Lambda.

btw- Our app is a plugin, so origin is always the platform provider... then our app makes CORS calls to AWS.


One idea that the article doesn't convey well, in my opinion, is that the Same-Origin Policy only prevents the browser from reading the response from an HTTP server on a third-party host, but it doesn't prevent the request from being issued in the first place. The CORS headers are merely a way for the server to indicate to the browser whether it is allowed to read the response or not, but they don't protect the server from anything.

In particular, setting the "Access-Control-Allow-Credentials" header to true means that a client which sent a request with a cookie is allowed to read the result, but whether the request is sent with a cookie or not, and will be treated as such by the server, is entirely up to the client.

So although malicious.com cannot read the details of bank.com using AJAX, it can definitely send a POST request to trigger the transfer from the user's account to a malicious account using the user's cookie (blindly so).

This is the reason proper CSRF protection must be implemented by the server, independently of whether CORS is enabled or not.
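
A minimal illustration of that last point (hostnames are made up): a page on evil.example can make the browser fire a credentialed POST at bank.example even though it can never read the response, which is exactly what a CSRF token (or SameSite cookies) is there to stop.

    <!-- served from https://evil.example/ -->
    <form action="https://bank.example/transfer" method="POST" id="f">
      <input type="hidden" name="to" value="attacker">
      <input type="hidden" name="amount" value="1000">
    </form>
    <script>document.getElementById('f').submit();</script>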


This is not entirely true. The preflight's role is exactly to prevent a POST request from being sent to the server. There is no preflight only in particular cases.

This is entirely false. POST requests with headers set automatically by the user agent aren't preflighted. There is a preflight only in particular cases.

I recently implemented a feature that depends on CORS and I don't see anything in this article that adds any value over Mozilla's thorough CORS docs.[1]

If you're writing an article on CORS today I also think you should mention recent CORS developments such as Cross-Origin Read Blocking (CORB)[2] and features on the horizon such as Cross-Origin-Resource-Policy, Cross-Origin-Window-Policy, etc. that in light of Spectre, Meltdown etc. are meant to help plug speculative execution holes.[3]

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

[2]: https://www.chromium.org/Home/chromium-security/corb-for-dev...

[3]: https://www.arturjanc.com/cross-origin-infoleaks.pdf


> performantcode.com took too long to respond. ERR_CONNECTION_TIMED_OUT

Too ironic not to mention.


I love CORS. The level of security I provide with my API is only possible because of it. Without CORS, I don't believe my API design would be practically possible due to the security risks.

If I have access to the Fetch API, I can tell the browser to send the user's cookie cross-origin and I can validate the request based on the origin. This allows for interesting authentication scenarios without the need for explicit client-user consent pages.
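
Something like this, roughly (endpoint made up); the server also has to answer with Access-Control-Allow-Credentials: true and a specific Access-Control-Allow-Origin rather than *:

    fetch('https://api.example.com/session', {
      method: 'POST',
      credentials: 'include', // send the user's cookie cross-origin
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ action: 'refresh' })
    }).then(function (res) {
      return res.json();
    }).then(function (data) {
      console.log(data);
    });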


>and I can validate the request based on the origin

Are you talking about the HTTP referer? That's easily spoofable and can't be relied on server-side. The same-origin policy and all the CORS security is implemented in the browser itself, not in HTTP.

If you need to be certain that a request originated from your own page and not another domain you need to use a CSRF token.


I'm talking about the Origin header, which is present on XHR requests and cannot be assigned by code. A CSRF token is not necessary if you require Origin because the origin is not sent with standard HTML forms.

It can be inside of insecure browsers; you're not any more secure after CORS than you were before.

Users with insecure browsers are subjecting themselves to security vulnerabilities, not me. In this case, the service just wouldn’t work for them. Not overly worried about those users because they represent a diminishingly small portion of our user base.

I never really understood why we have CORS. I mean, the problem with CSRF is that some random page can trick your browser into adding its authentication token to a request which does not originate from the authenticated page. So why then do we need the server to tell the browser that it should not send requests from other origins?

In my opinion, it would have been much better to improve the browsers to not include cookies in 3rd party requests automatically (only when they are explicitly specified via JS, for example). That would have solved the issue equally well, without introducing some bulky server-side security feature to remote-control browsers.


CORS is really for the opposite problem. Browsers do block requests from other origins by default (mostly). CORS is used to let the server decide which origins are allowed to request data and how it can be requested. If the client was allowed to decide via javascript, then attacker.com could make a request via javascript to facebook.com telling the browser to send cookies and return the user's data. This is actually what the client JS has to do anyway with CORS (using credentials: true), but the server side needs to be able to allow/deny it.

But why not just completely separate origins with regards to sessions, or at least let the user give permission to use that Facebook session here? That way, many use cases would already be covered without any danger. If a travel website is CORS-reading weather data from another origin, pre-existing sessions probably don't matter at all.

Well, yes, in fact, I was complaining about the Same-Origin Policy, and CORS is just the consequence of the way the SOP works. Nevertheless, this doesn't really change the situation.

If the browsers separated the session by origin (as blauditore wrote), the whole problem space would look very different.


Ya I generally think CORS is a waste of time. It would have been better to provide a hash of the file we're linking to and trust that rather than where it came from. Which is precisely what Subresource Integrity (SRI) does:

https://en.wikipedia.org/wiki/Subresource_Integrity

Sadly even though this is an obvious concept and trivial to implement, it took them over 20 years since the web came out to get it in most browsers. The cost to society of having thousands of copies of the same commonly used files (like jQuery) hosted locally on countless servers rather than having a centrally hosted version already cached from previously visited sites is staggering to contemplate. I'd really like to know who was behind the holdup on deploying SRI.
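
For reference, SRI usage looks roughly like this (the integrity value below is a placeholder, not the real digest for that file):

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"
            integrity="sha384-REPLACE_WITH_REAL_DIGEST"
            crossorigin="anonymous"></script>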


CORS is about a lot more than just static assets. SRI does not replace CORS.

CORS is for data retrieval, not subresource inclusion. In fact, you don't need CORS at all to include a script in your page; that has never been the case.

After sleeping on it, I realized that my comment was a bit too critical and also missing some context. I didn't mean to be as negative about CORS as I came across; I was more disappointed that something like SRI hasn't been part of the web from the start. Some background:

https://en.wikipedia.org/wiki/Content-addressable_memory

https://en.wikipedia.org/wiki/Distributed_hash_table

https://en.wikipedia.org/wiki/Merkle_tree

If we had something like SRI from the start, we could have linked to resources by their hash instead of their URL (more like how IPFS works). There's a name for this concept that eludes me, and also a great video explaining its potential but also difficulties when it comes to security and HTTPS. The short of it is that if we had routers that accepted hashes as well as URLs, then we could ask for a list of all data matching a given hash and download that file (or its pieces) from the closest cache(s). So instead of linking to jQuery at https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.mi... we could just ask the router for the file with SHA2 hash 160a426ff2894252cd7cebbdd6d6b7da8fcd319c65b70468f10b6690c45d02ef and it would return its contents regardless of where it came from, including the browser's own cache if it already had that file (I just used https://hash.online-convert.com/sha256-generator but there would be a better standard for this).

Anyway, hope this helps and sorry for any confusion.


The major hold up on deploying SRI is that a lot of third parties aren't supplying immutable content.

Google Analytics or recaptcha for example aren't versioned. Deploying SRI is just going to break your site when they update the script.


If they hashed and cached the resource files, then they could be found locally most of the time.

CORS is a technical subsidy granted to (sloppy) users of cookie authentication. I’ve never worked on a project where it was anything other than an annoying hoop to jump through.

When did "subsidy" become the go-to narrative to argue against any sort of deference to real-world usage?

You could just as easily frame CORS as "antibiotics for the people who dared to leave their house".

(There's also a no-true-scotsman fallacy going on in your argument)


More like "mandatory medical testing at the airport because some people don't vaccinate" from my perspective.

It's a "subsidy" because we all have to spend time on it, even though lots of people don't receive any benefit from it. It's not intolerable; the Web is based on community standards and responds to community needs, and I'm fine with that. But I'm always on the losing end of this one so I'm going to say so.


I'd also argue it encourages worst practices

I'm curious how many newer JS+API applications still use browser cookies as a means of authenticating API requests, and how prevalent cookie usage still is for these types of applications?

JWT/tokens + Local/session storage + adding fetch headers seems like the best way as long as you don't run untrusted JS.
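
i.e. roughly this pattern (storage key and endpoint made up); note the Authorization header makes the request non-simple, so it gets preflighted:

    var token = localStorage.getItem('auth_token');

    fetch('https://api.example.com/me', {
      headers: { 'Authorization': 'Bearer ' + token }
    }).then(function (res) {
      if (res.status === 401) {
        // token missing or expired: redirect to login or refresh it
      }
      return res.json();
    });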


"as long as you don't run untrusted JS" is a surprisingly tough hurdle to hit, even for very experienced developers.

What is your reason for preferring JWT + localStorage for authentication and session handling? I'm genuinely curious, as httpOnly cookies strike me as better in every meaningful way.


LocalStorage is so much more preferable than cookies, I agree, however as SSR ("server-side-rendering") of heavy client-side JS apps becomes more prevalent, suddenly cookies are back in business.

If the initial SSR needs some initial client-state to complete its work before sending the HTML payload, it can see the cookie, but not localStorage.



You the MVP!

Had to deal with a firewall that filtered all unknown/"new" HTTP headers. This included CORS.

A PITA to find the reason why Firefox wouldn't use the Google fonts.


That kind of crapware is why I'm increasingly glad that the http specs are moving towards being completely illegible to middleware boxes.

How would HTTP be any harder for middleware than the app layer? If both linked to libhttp.c, couldn't they each get the full parsing/reading/writing - whether proxy or server?

Being able to parse HTTP doesn't get you anywhere when you can't actually get at the contents, because they're encrypted (as with all major HTTP/2 implementations).


This is a really informative article. We recently stumbled across this issue and all the other pages I could google did not explain it as clearly as this one.

My experience was the same as yours, resources explaining things assumed technical knowledge far beyond my level or were not very clear. This was a very well put-together article.

My only contribution to the discussion is that if you get a CORS error where you wouldn't expect it, the problem might not be a CORS issue. I spent the better part of a weekend trying to debug why a request to a Google API wasn't working and why I was seeing a CORS error (same thing worked fine on another system). Turns out, it wasn't the same thing, my url had a typo...



Why not just include the Origin on all cross-origin requests? Then the server could deny/allow it without the need for preflight.

I would be concerned about the privacy implications of it. Imagine if the browser sent the origin to widely used CDNs, or to Google Fonts, and people didn't actually block Google domains in their browsers.

Also, this would not be secure by default, because you would have to change the default behavior of the server to block cross origin requests.


Browsers already send the referer header on all those CDN requests, provided that the CDN is loaded over https. If anything, the origin contains less information.

How do you prevent people proxying your API via a node service?

This is something I could never get my head around with CORS - what's the point of whitelisting origins if getting around the whitelist is nothing more than an inconvenience?


CORS is mostly used to prevent attacks from a browser script on a non-whitelisted website (CSRF etc.).

To prevent someone abusing your API otherwise, use an authentication method.


The user is still protected in that case.

If you create a proxy for foo.com, your javascript can't get the browser to send the user's cookies for foo.com to your proxy.


It's not free to run a proxy like that.

I built out https://github.com/krakenjs/fetch-robot to avoid some of the esoteric issues around CORS endpoints -- and to avoid the performance hit of that preflight request.

It acts as a `fetch` implementation that allows you to declare cross-origin policies in advance, then channel the requests through an iframe which enforces those policies.


I don't know CORS that well, but like any dev worth their salt I know how to get around it:

- iframe
- domain js hack
- reverse proxy
- http header

What else? Referrer Policies await.


I agree; the only thing I have ever found with CORS is that it makes things difficult for people who don't consider it when planning out how their servers should run. It goes like this:

- Just use my API...

- I tried, please enable CORS.

- What's CORS?

I find it frustrating that this seems to be the default for most servers. I think it should be opt-in and not opt-out.


In order to make it opt-in you’d need to disable cookies by default (at least for auth) or else you get massive pwnage by default.

Here's one question that's always bugged me - what's stopping a malicious user from sending an HTTP request from any API client like Postman, or even curl from the command line? Something like a POST with: {transferTo: myAccountId, amount: 1000000000}?

Obviously in any nontrivial web app it would fail because of authentication issues, but if a server doesn't do ANY sort of security checking, that should work, no? Does that mean that the onus is on the server developer of mybank.com? And if so, what would stop the malicious request from working on any server developed before the existence of CORS?


The server is supposed to check authentication/authorization through some method.

If HTTP, that’s done via setting some information in the request headers, be it a cookie, or basic auth, or token auth, or similar.

CORS is enforced by the browser, to not allow certain requests to be made (in case you are accidentally executing malicious JavaScript code). The server tells the browser via the CORS headers which requests are OK to make.
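
Roughly what that exchange looks like on the wire for a preflighted request (hostnames and path are just examples):

    OPTIONS /v1/orders HTTP/1.1
    Host: api.example.org
    Origin: https://app.example.org
    Access-Control-Request-Method: PUT
    Access-Control-Request-Headers: content-type, authorization

    HTTP/1.1 204 No Content
    Access-Control-Allow-Origin: https://app.example.org
    Access-Control-Allow-Methods: PUT
    Access-Control-Allow-Headers: Content-Type, Authorization
    Access-Control-Max-Age: 600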


My only experience with CORS has been when trying to access api.foo.com from a web page on foo.com, and then getting denied. Then I messed with the settings on api.foo.com trying to get it to allow access from foo.com, and then I gave up and just configured the load balancer on foo.com to proxy requests to foo.com/api to api.foo.com.

So far it's only gotten in my way as a developer. But it's there to protect users, not me. So at the end of the day, I'm glad it's there as a way to somewhat prevent people from tricking my users into hitting my api with malicious requests.


You have it backwards. This type of request was not possible at all before CORS, CORS is what allows you to make it possible.

very nice article. I thought I understood CORS but learned some new things:

* not all cross origin requests need to be preflighted

* to use credentials, server needs to explicitly allow credentials to be sent from client


No... I don't, despite my best efforts to be diligent about learning it. :( It's a workaround for a bunch of compromises we made, where the cure is almost not worth the pain.

> Do You Really Know CORS?

Of course I don't, nobody does.


I know it well enough that changing to a custom mime-type like "text/x-myapp-foo" is a solution that gets around CORS and pre-flight as well in the latest version of Chrome.

Are you sure? That's a pretty insane security issue if it's true!

Content-Type has to be `application/x-www-form-urlencoded`, `multipart/form-data`, or `text/plain` for the request to be allowed without a preflight.

Edit: I can't reproduce this on Chrome 69.0.3497.100 (Official Build) (64-bit). Setting the Content-Type to anything other than the above with a POST request will cause an OPTIONS preflight, even when using your example.
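
If anyone wants to check this themselves, the difference is easy to see in the network tab (URL is made up):

    // Simple request: one of the three form-style content types, no preflight.
    fetch('https://api.example.com/log', {
      method: 'POST',
      headers: { 'Content-Type': 'text/plain' },
      body: 'hello'
    });

    // Any other value (application/json, text/x-myapp-foo, ...) makes the
    // request non-simple, so the browser sends an OPTIONS preflight first.
    fetch('https://api.example.com/log', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: '{"msg":"hello"}'
    });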


I'm using 69.0.3497.81. Make sure your server accepts the header... after changing I stopped seeing OPTIONS requests in my server logs, not that it measurably improved the speed of the SPA. It was several months ago I made the change and FF behavior is definitely different (more relaxed?) than Chrome when it comes to CORS.

There's caching for successful preflights; maybe it's involved here?

I know that Chrome limits the amount of time that the CORS preflight validation response can be cached (to a maximum of 5 minutes; even if the server sets the expiration header to a longer time, Chrome still only caches it for up to 5 minutes). Does Firefox perhaps not limit this? It could be that you are seeing a difference in behavior due to the max time that each browser is willing to cache the CORS preflight check...?

Can you elaborate on this / link to a more detailed spec?

Please explain why browsers can't request and use any URL as a regular curl command does (I'm explicitly talking about requests without sending browser cookies):

    curl http://192.168.1.1/
Every site you visit would have unauthenticated read access to internal servers on your LAN, such as your router's home page.

Or your printer's admin interface. Or your NAS.

You think you know CORS, and then your users visit your site with the Edge browser...

I tend to use JSONP to get around the CORS restriction from time to time.

But JSONP has to be supported by your external API.

The link is not working for me. Is there an alternative link?


I am very upset that no one said "Of CORS I do!"


