CORS does not protect endpoints against malicious clients, since you can always just make the same request outside of a browser. And it doesn't protect any site from making or receiving cross-site requests, since CORS can always be disabled on the server side.
CORS protects against the scenario where a malicious site tricks an unmodified browser into making a cross-site request to a legitimate site. If the user has an authentication cookie for the legitimate site, the cookie will be sent along with the request. So the malicious site can perform transactions on the legitimate site on behalf of the user, despite not having direct access to the authentication cookie.
CORS is further complicated because certain forms of cross-site requests have always been allowed by browsers, and therefore must remain enabled by default for backwards compatibility. GET requests to separate sites are allowed, since this has always been possible, e.g. to embed images from other domains. POST requests are allowed with the caveat that you can't inspect the result, because you have always been able to initiate a POST to a different site from an HTML form.
But it is a good point that CORS enables more forms of cross-site requests than what is allowed under the same-origin policy. The requests are enabled under certain confusing restrictions, which are easier to understand once you know the scenarios they are designed to protect against.
I don't quite understand what you mean by PUT and DELETE not being allowed before SOP. What exact operations were not possible before SOP and are possible now (and pose a security risk)?
The only question is whether requests are restricted (i.e. in terms of sending resources like cookies).
If there's some kind of dangerous misunderstanding that stems from this, I can understand emphasizing it, but otherwise it just feels pedantic.
> Hey, the results from the security audit are in and it’s mostly fine, except the pen testers said that we don’t need CORS for our /foo endpoint. Can you disable it entirely please?
If you understand what CORS is, you will interpret that as “Our security is too lax, let’s tighten it up by removing the exceptions to the SOP that CORS grants” and correctly increase security.
If you think CORS is what you described, you will misinterpret that as “We don’t need security for this endpoint, open it up to the world with Access-Control-Allow-Origin: *” and create a massive security vulnerability.
If you think of CORS as something that restricts things in the name of security, your understanding is 100% backwards from what it actually is, and this can be disastrous for security.
This is simply false. You are somehow wrongly assuming that only same-origin requests exist or are needed. This scenario never existed in the real world beyond the scope of small personal projects.
So it is not CORS that protects, since it restricts nothing, but SOP (potentially relaxed by CORS).
No they are not making that assumption.
From having to deal with CORS over the years, to me it's just a tool to protect one thing: the intellectual property of whatever site we are loading resources from.
For example, images arrive “tainted” and you can’t get their pixels on a canvas no matter how hard you try. You can’t generate a “screenshot” you can print, and so forth.
If this was just about protecting the relying party website, that website would have a way to get around this “protection” to get something done.
Yes, CORS is just a relaxation of SOP, which makes it impossible to get sensitive data that would have otherwise been revealed to the relying party. Again: SOP is protecting the user’s data loaded from the ORIGIN. Its main job isn’t “protecting the receiving site”; that’s a pretty disingenuous characterization.
You're getting that wrong. It doesn't protect the website that initiates the requests. It protects the user from the website initiating the requests.
SOP is to stop website A from doing shady things to the user using their session from website B.
It does protect you in the sense that protecting your users is also protecting you, but it requires voluntary cooperation from the user, so it doesn't do anything to protect you from malicious users.
In the "hotlinking" example you're giving, CORS is only protection from the absolute laziest of attackers, since you can get around it with about 5 minutes of setting up nginx to proxy from your domain to the resources you want to re-use. You can also get the same protection without CORS by blocking requests based on the Referer header.
(Besides, I'm not sure where it falls inside the new Firefox extension rules, but this is the kind of thing to implement on an extension, or in the worst case, on a plugin.)
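The Referer-based blocking mentioned above can be sketched as a small function. This is illustrative only: the allowed-host list is made up, and real deployments usually do this check in nginx or Apache config rather than application code.

```javascript
// Sketch of Referer-based hotlink protection (illustrative assumptions:
// the host list and the policy for missing Referer headers).
function allowHotlink(refererHeader, allowedHosts) {
  // Many browsers and privacy tools omit Referer entirely; blocking empty
  // referers also breaks legitimate direct visits, so most setups allow them.
  if (!refererHeader) return true;
  try {
    const host = new URL(refererHeader).hostname;
    return allowedHosts.includes(host);
  } catch {
    return false; // malformed Referer: deny
  }
}
```

Note the same caveat applies as with CORS: the Referer header is trivially spoofed outside a browser, so this only deters lazy hotlinkers, not real attackers.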
Can you please give an example of this?
Imagine a user is logged into facebook, and visits legitimate website X, which has been hacked to inject malicious script Y.
Malicious script Y makes a bunch of API calls to Facebook; since the user is logged in, the Facebook cookies are sent along with the requests by the browser. Malicious script Y could delete your posts, make you join pro-Nazi pages, or exfiltrate your network.
With CORS in place, the requests would be denied, because API requests can be set up to reject any requests that are not from a specific list of domains.
This is a bad example, because Facebook doesn't rely just on cookies, and in fact has an SDK for making API requests from third-party sites. The principle remains, however: if that were not the case, then CORS would be one possible remedy to the problem.
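The "reject any requests that are not from a specific list of domains" idea above amounts to the server deciding which CORS headers to emit per request. A minimal sketch, with a made-up allow-list (not any real API's configuration):

```javascript
// Sketch of a server-side origin allow-list. The origins are hypothetical.
const ALLOWED_ORIGINS = ['https://app.example.com', 'https://admin.example.com'];

function corsHeadersFor(requestOrigin) {
  if (!ALLOWED_ORIGINS.includes(requestOrigin)) {
    return null; // emit no CORS headers: the browser blocks the page from reading the response
  }
  return {
    // Echo the specific origin rather than '*'; '*' is forbidden anyway
    // when credentials (cookies) are involved.
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Credentials': 'true',
    'Vary': 'Origin', // responses differ per origin, so caches must key on it
  };
}
```

One subtlety: for simple requests, "blocked" only means the page can't read the response; the server may still have processed the request unless it also rejects unknown origins outright.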
> This browser implementation can be bypassed at any time. First, it is up to the browser itself: if CORS is not integrated, or not integrated cleanly, then it will not work.
> An attacker can access the API key via the source code of the web app and use it, for example, via a cURL request to directly access the API resources of the backend. With cURL, no CORS takes effect, so the attacker has direct access with the full rights of the user.
All of this is completely wrong.
CORS does not protect anything from anyone. The same-origin policy stops code from one site reading resources from another site. CORS selectively removes that protection – it decreases security.
If CORS is not integrated, the same-origin policy is in full effect and code from one site cannot read resources from your site. It starts off as secure, and stays secure.
If you use cURL, then the same-origin policy doesn’t apply, and you can of course read any resource.
The same-origin policy is not a generic access barrier, and shouldn’t be used as such. It stops one site from abusing the user’s authenticated state with other sites, and that’s it.
This comment comes off as either disingenuous or needlessly contrarian, and in the process tries to make points that fall somewhere between completely wrong and myopic.
Your personal assertion that CORS somehow decreases security is based on the patently false assertion that at any point in time requests only came from the same origin. Not only was this never the case, it ignores the fact that with the popularity of REST, SPAs, microservices, and API-as-a-service, the norm has since shifted to having browsers make requests to things other than the same origin.
So, without CORS, you have an all-or-nothing security model, where the needle would always swing to the "nothing" side. With CORS, that needle can point to "all that matters", which is pretty close to the optimal same-site solution. What CORS does is allow developers to abstract away the definition of "same origin" to mean "the origins that I explicitly allow".
You simply cannot claim that allowing only requests to the origins you explicitly allow is an erosion of a security model. That makes no sense. It made no sense when implicitly that list was only comprised of your own domain, it makes no sense now when that list includes reputable APIs that you pay to use.
> So, without CORS, you have an all-or-nothing security model, where the needle would always swing to the "nothing" side. With CORS, that needle can point to "all that matters"
Yes, and a policy that permits nothing is more secure than a policy that permits some things. Although “nothing” is not accurate, some requests are still permitted in either case.
> You simply cannot claim that allowing only requests to the origins you explicitly allow is an erosion of a security model.
I didn't claim that. “Allowing only requests to the origins you explicitly allow” implies that you are enforcing a restriction, when in fact the opposite is happening – you are permitting more by removing restrictions. Tightly scoping what you open up with CORS is still removing restrictions, not adding them.
You're reading that backwards. "nothing" is "no security".
Bluntly, your response is the one coming off as needlessly argumentative, and the parent comment made plenty of sense to me. I don't think it's disingenuous to clearly explain the difference between the Same-Origin Policy and CORS, and to emphasize that CORS relaxes the Same-Origin Policy.
The grandparent comment argues about semantics and provides information that is technically correct, but not in a useful sense. To say that "CORS decreases security" because it opens up cross origin communication is perhaps not disingenuous (who can say?) but certainly misleading. All browser users would be worse off without CORS.
I think we are arriving at the word 'security' with different starting points and lenses, and a different chunking of what constitutes CORS. So I too am guilty of making a semantic argument.
You and fivea are conflating functionality with security.
Yes, it may be functionally desirable for an app to allow cross-origin requests, as enabled by CORS. But, you're granting permissions, and that is opening things up, possibly making them less secure.
I think great-grandparent's point highlighting this is worth making, given that there is so much confusion around CORS.
Letting people have access to things that they are supposed to have access to is part of security. You don't get perfect security just by denying everyone access to everything.
fivea wasn't conflating functionality and security. They were saying that a world without CORS would be a world where those rights are always granted.
Or a world where those rights are never granted.
Which of those you conclude is a matter of semantics, and this interpretability contributes to the confusion around what CORS actually does. It's that confusion which another commenter was addressing, and that my comments sought to support/clarify. You may be unaware that some people believe that CORS is intended to "lock things down" vs. "open them up". They don't understand that it's opt-in, or what the default behavior is without it.
But, it's a matter of fact that usage of CORS presents risks vs the default policy, and those risks must be considered. That's really the point vs the idea that CORS has no utility.
It's worth noting too that, strictly speaking, there are frequently workarounds to the default policy that don't require CORS, some of which are arguably more secure by way of being less prone to configuration errors.
Okay, sure, there's a semantics question and also some people are just confused.
So here is where I took issue: Yes, many confused people are conflating functionality and security. But while fivea/pshc's words might accidentally encourage that confusion, they were not confused, and were not conflating the two.
Probably more semantics. I don't claim to know what's in fivea/pshc's heads or whether they themselves are confused. I was speaking to my observation that their comments merged the two issues.
You may prefer phrasing like, "their words might accidentally encourage confusion" versus "conflation". I'd say conflation is the mechanism there, but OK.
Or you might prefer I specifically clarify "their comments conflated the two" vs "they conflated the two".
So their comments did not conflate, but might accidentally cause future conflation, so it's reasonable to reply to make the difference extra clear, but I don't think it's reasonable to accuse them of conflation.
It happens. Thanks for clarifying.
> All of this is completely wrong.
That is not only needlessly argumentative but also wrong itself. The third point they include as "completely wrong" is not wrong, and they end up just rephrasing it later in their correction.
> With cURL, no CORS takes effect, so the attacker has direct access with the full rights of the user.
The attacker has direct access with the full rights of the user because this is not a situation where one origin is making a request for a resource from another origin, so there’s nothing that says this shouldn’t happen. It’s got absolutely nothing to do with CORS at all.
I believe it’s important developers learn the nuances, otherwise we will likely see more faulty implementations.
It's great that CORS is a thing in browsers, and you can use it if you need. But each person who uses it is allowing more origins, and so allows more requests.
Personally I always avoid CORS and just serve everything I need from the same origin, but that's just because of the kinds of things I've worked on.
No, not really. Your simplistic example fails to acknowledge that your same-site "lock with no keys" model rendered the same-origin policy completely unusable in projects that fall beyond the scope of a personal hobby project, which left the whole world with no alternative but to turn it off. CORS recognizes the absurdity of implicitly assuming that the same origin is the only possible and conceivable safe origin, or even that a site has a single origin.
Therefore, your comparison would actually be between an old lock which was no longer usable, and thus forced everyone to go around with no locks, and a lock designed around one of the web's most basic requirements, which made the old lock concept applicable again.
If you offer a website with an API then CORS is an enhancement in that you're helping protect users from getting owned or hijacked. Alternatively if you can't stomach the risk, you could take down your API, which is about as useful as solving a math equation by multiplying both sides by zero.
It is very confusing and I’m not entirely sure how it ended up like that.
From my own time observing the process of how these things get drafted up, it's because the creators of these mechanisms work in a committee and in a circle in which everyone is highly familiar with their specific terminology. There is no thought given to accessibility of general understanding for 'the masses' and that eventually manifests itself in this way. I'm not saying they should or shouldn't be giving thought to naming, just pointing out what I observe.
Really CORS cannot be used to lock anything down. The behavior of a server not implementing CORS has the same end result as a server trying to be as restrictive as possible. Both would not send any CORS headers in responses and would simply ignore pre-flight requests.
Let's say you have a legitimate website which uses form POSTs or GETs to make cross-site requests to some service on a different domain, authenticated through a cookie. This is vulnerable to cross-site request forgery. Changing this to use fetch with CORS can protect against this kind of attack, since the service can now ensure the request is initiated from a legitimate origin.
It is just important to understand what CORS protects against and what it doesn't. You can always forge a request to look exactly like a CORS-compliant request from a legitimate origin, but you still wouldn't have the necessary authentication cookie. If the attacker has somehow gotten access to the user's cookie, then there is no need for request forgery in the first place; the attacker can just directly log into the service. The same goes if the user's browser is compromised. So CORS protects against the specific scenario where a malicious site visited by a legitimate user forges a request to a legitimate site, misusing the authentication cookie associated with the legitimate domain. In this scenario, the Origin header will indicate the origin of the malicious site, and the server will be able to reject the request.
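The Origin-check defense described above can be sketched as a small predicate the server runs before state-changing requests. The trusted-origin list is hypothetical, and treating a missing Origin header as same-origin is a deliberately lenient assumption, noted in the comments:

```javascript
// Sketch of rejecting cross-site state changes by Origin header.
// TRUSTED_ORIGINS is illustrative; real apps would configure this.
const TRUSTED_ORIGINS = ['https://www.example.com'];

function shouldReject(method, originHeader) {
  if (method === 'GET' || method === 'HEAD') return false; // reads only
  // Browsers set Origin on cross-site fetch/XHR, and modern browsers set it
  // on cross-site form POSTs too. Treating a missing Origin as same-origin
  // is lenient; stricter setups reject it.
  if (originHeader === undefined) return false;
  return !TRUSTED_ORIGINS.includes(originHeader);
}
```

The key point from the comment above still holds: this only helps because browsers refuse to let page scripts forge the Origin header; it does nothing against curl or other non-browser clients, which is fine, because those clients don't carry the victim's cookie.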
Otherwise a malicious site could just forge the Origin header in the preflight request.
Yep, that's what Cross-Origin Resource Policy (CORP) is for.
It is designed to seal the last loophole in the same-origin policy. (img and script tags don't require CORS at all.)
But there is a big gotcha: old browsers that don't know about it simply don't care. So it isn't strictly useful for now.
My favorite is... imagine I routinely port-forward an app's debug port to my workstation for debugging. It has a useful /email@example.com endpoint. You can then call that over the Internet by serving me a web page that contains <img src="http://localhost:1234/email-debug-info?address=example@examp...">. CORS doesn't care. My browser doesn't care. It will just silently leak information. (Also great for other things on your network. Log into your router at 192.168.1.1 recently? Someone can make a web page with a form that submits to http://192.168.1.1/enable-port-forwarding or whatever, and they can forward whatever ports they want.)
What's hilarious to me is that CORS seems burdensome in the opposite direction too. It breaks people's applications somehow! While writing this comment, I did a search to see if I could remember the standards-track proposal for fixing the two "bugs" I mention above... but all the search results are people asking how to disable CORS rules because their app is broken. Sigh! The web is a mess.
It soon will: https://web.dev/cors-rfc1918-feedback/
I don't really understand the first one or why you think SOP should protect you in that case. Could you restate that problem?
The second one is a CSRF issue, as noted by another user.
The third thing you describe is not people who disable CORS but people who actually use CORS to the max (usually things like Access-Control-Allow-Origin: *), because they don't understand what it does. It's a huge problem...
This is just a CSRF issue, which is well understood and easily fixed with a CSRF/authenticity token.
Also, GET requests should not have side effects like sending an email. A reasonable default seen in some frameworks is that GET requests have no side effects and don't require CSRF tokens, while all other verbs do.
Request headers and body Content-Type are also a factor for POST; anything which couldn't be set through an HTML form is forbidden.
The most common issue is that the request is only simple if its Content-Type is `application/x-www-form-urlencoded`, `multipart/form-data`, or `text/plain`; JSON POST requests usually run afoul of that. The second big issue is setting bespoke headers, as only Accept, Accept-Language, Content-Language, and Content-Type (restricted to the above list) are simple. A common sticking point is headers like Authorization.
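The simple-request rules above can be summed up in a rough classifier. This is a sketch of the rules as described here, not a complete Fetch-spec check (it ignores, e.g., ReadableStream bodies and header value validation):

```javascript
// Rough check for whether a request would trigger a CORS preflight,
// based on method, header names, and Content-Type.
const SIMPLE_METHODS = ['GET', 'HEAD', 'POST'];
const SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language', 'content-type'];
const SIMPLE_CONTENT_TYPES = [
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
];

function needsPreflight(method, headers) {
  if (!SIMPLE_METHODS.includes(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    if (!SIMPLE_HEADERS.includes(lower)) return true; // bespoke header
    if (lower === 'content-type' &&
        !SIMPLE_CONTENT_TYPES.includes(value.split(';')[0].trim())) {
      return true; // e.g. application/json
    }
  }
  return false;
}
```

So a JSON POST or any request carrying an Authorization header comes out as preflighted, while a plain form POST does not, matching the backwards-compatibility story above.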
To my weak-minded brain, the sticking point always comes down to: CORS/SOP is a policy enforced by the browser, correct? The client. The endpoint tells the client what origins should and should not be allowed to make the request. It is up to the client (browser) to decide whether to enforce this policy or not. Is that at least somewhat correct?
The point is that in a client-server architecture, you can never trust the client to play by your rules, and anything you put into the client becomes public knowledge.
The big mistake in a previous article which this one criticizes is this: it put an API key hardcoded into the client, which let everyone who looks at the client code do whatever the REST API lets you do. At that point, rules like "you need an account, you have to log in, you can only change your own stuff" can simply be ignored by accessing the REST API directly instead of running the client code.
The correct way to do it is to require an authentication token sent along with requests, which the server checks. The difference between the insecure API key and the authentication token is that the client gets the token only after the user logs in, it is unique to that user, possibly valid for a limited time, and the server will only allow viewing or modifying the things which the user for whom that token was generated is allowed to view or modify.
It's not this specific topic, it's any topic and a general mindset.
Thinking about possible attack vectors on your own application is a mindset, and to be able to do that properly you need to deeply understand the technology you are using.
There's no exhaustive list of security issues to avoid, just as there is no exhaustive list of every function you'll ever need to write. Security means preventing the wrong kind of people from doing the wrong kind of things. What that means is entirely dependent on your application.
You cannot make a boat sink-proof if you don't understand why a boat floats. Sure someone could make a list of things to watch out for and rules to abide by, but that's probably not gonna cut it in the long run.
you should never hard-code an API key in this manner.
Getting CORS right involves a couple days of forgetting what I think I know, closely reading how it works (again), writing up a custom middleware for whatever web framework I’m using that time because the OOTB middlewares always have subtle bugs (usually with the Origin header), and then essentially forgetting about it for the next few years.
CORS is well defined here: it ALLOWS a browser to make certain types of requests to endpoints outside the domain the browsed website comes from:
Because the browser follows the same-origin policy for a number of APIs (like fetch):
Also if you are a developer, please check:
To add an extra layer of protection against cross-site scripting. I think one of the recent data leaks could have been avoided if the website had implemented that very basic header, which all modern browsers support.
I found comparing CORS with CSP helpful in understanding both concepts: https://stackoverflow.com/a/50726191/1472186 (I wrote the answer)
Also, like others have pointed out, CORS or CSP depends on the client (browser) to enforce it. So it doesn't protect against attackers with customized clients.
Maybe the author is being careful not to offer an abstract use case, as that might ironically be misinterpreted and be counterproductive? I feel security blogging is just one of those subjects where it’s more useful and prudent to express what shouldn’t be done than what should. Maybe it’s okay it doesn’t include the solution; that’s left up to the reader to infer.
Even storing a session token in localStorage is not a problem if you’re protected against XSS (if you’re not, nothing else will protect you anyway).
It's the same reason OAuth uses expiring tokens. If you think that doesn't help and you're surely smarter than the whole security community who has been developing these standards for decades, please write a specification yourself and let us review it to see how great that is.
Let's just say, certain people were caught with their pants down on that one. Notably, Amazon. If you go to watch the bonus content for The Expanse on Prime Video, you will notice that it does not work in Chrome but works fine in Firefox (as of a few weeks ago, at least).
If your site depends on SameSite None, you need to explicitly set it now.
The second has no value because either you are using techniques that are not vulnerable to XSS or you are not. The “SameSite” cookie attribute is one of them.
Once fully understood, it is a useless mechanism of security theater.
An attacker with curl will not have the user's cookie and an attacker with a malicious website will not have the right origin.
CORS is supposed to secure the user’s data. You are NOT supposed to send global server-side data (like secret keys to third party services) through CORS.
Consider that any user data shown “publicly” to all other authenticated users (eg user icon via Facebook’s API) can be used to deanonymize that user, because someone can just create a fake account, exfiltrate the images, and do a reverse image search.
But the author is right, CORS is just one part of the equation. Together with SRI, they can definitely make secure cross-origin interfaces.
The actually insecure alternative back in the day was JSONP. Read my stackoverflow answer from OVER 10 YEARS AGO: https://stackoverflow.com/a/5447005
Presuming that only a logged-in user sees it is the problem. Everybody and their dog sees it, and can impersonate the person whose key it was.
> using JWT in a typical SPA <-> API scenario.
Is this typical? It's a pretty horrible setup.
Cookies have a lot of great features that 'store a JWT in LocalStorage' just doesn't have.
I'm still interested in the original question: if you use localstorage for auth tokens and you have proper CSRF protection, what does allowing all CORS actually make you vulnerable to?
I don't know what the actual reason was, but I would guess the design decision was made so the multiple domains use case would be infinitely scalable. Otherwise you would run into header length limitations. Imagine trying to fit 10,000 domains into the header! Eventually you'd need something like this implementation anyways.
My guess is that it's related to subdomains not guaranteeing to be from the same origin.
The result is that people run into errors, Google them, and copy-paste the top SO result, which advises allowing *.
Doesn't seem ideal
(Also, I'm pretty sure an api key is a form of authentication.)
What does it mean to share an API key on the client side? Is it the same as passing header-based authorization for the client? Are API keys something like JWTs?
The CORS thing is funny. I like seeing the scans people do on your servers, probing for things like .env files or some PHP script.