Developers don't understand CORS (fosterelli.co)
930 points by chrisfosterelli 7 days ago | 355 comments





The blog post author doesn't understand CORS either! Their advice on how to fix the problem is wrong:

> So what would a secure implementation of this feature look like? The webserver listening in on "localhost:19421" should implement a REST API and set a "Access-Control-Allow-Origin" header with the value "https://zoom.us". This will ensure that only Javascript running on the zoom.us domain can talk to the localhost webserver.

"Access-Control-Allow-Origin" doesn't block the request from going through, it just prevents wrong-origin Javascript from accessing the response.

The original vulnerability is actually just a slight variant of cross-site request forgery (CSRF) -- "wrong site request forgery" :-)

For the localhost server to detect wrong-site requests, there are two simple options:

1. Use a CSRF token. This requires a shared secret between the localhost server and the Zoom website.

2. Check the "Origin" header. This requires that the request be an HTTP POST, which is what these requests should be anyway.

(Note: If you're determined to use some CORS machinery, you could build something more complicated based on pre-flight requests, but there's no point.)
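For option 2, a minimal sketch of what the Origin check could look like on the localhost server (plain Node; the whitelist and handler shape are hypothetical, since the real helper's code isn't public):

```javascript
// Hypothetical whitelist for the localhost helper. Treat a missing
// Origin header as untrusted rather than assuming same-origin.
const TRUSTED_ORIGINS = new Set(['https://zoom.us', 'https://www.zoom.us']);

function isTrustedOrigin(originHeader) {
  return TRUSTED_ORIGINS.has(originHeader);
}

// Sketch of use inside a Node http request handler:
// if (req.method !== 'POST' || !isTrustedOrigin(req.headers.origin)) {
//   res.writeHead(403);
//   return res.end();
// }
```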


The advice isn't wrong, but you did misunderstand it. The CORS header will definitively block the request to an AJAX REST API from going through, because it will be a POST request with an `application/json` Content-Type, which will trigger a preflight request.

You're assuming the API will remain identical, just with new headers. I didn't advise this; what they have now is not a semantically RESTful API.

What they have now is a mess made to fit within their image-hack's constraints; there's no sensible reason to keep the same GET pattern without that.


No, I wasn't assuming the API will remain identical -- I saw the "should implement a REST API" part of your recommendation.

Here's what's wrong with your recommendation:

1. An AJAX REST API request doesn't necessarily mean "application/json". It could also be "application/x-www-form-urlencoded", which won't necessarily trigger a pre-flight request.

2. Your advice goes into detail on an aspect that is not the crux of this particular vulnerability ("Access-Control-Allow-Origin") but not on the aspects that are:

2a. How to handle the pre-flight request, which is the thing that would actually block wrong-site requests.

2b. If you're relying on a Content-Type of "application/json" to trigger a pre-flight request, then it should be made clear that this is now a security check -- it's common for server code to ignore the Content-Type header as long as the body is valid.


Ah I see -- so it sounds like you agree the advice is not wrong, but it's not at the level of detail you would like. It is only a 500-character example, unfortunately; the details of the server API are not really in the scope of my post, but I appreciate you elaborating here.

The advice is wrong. Someone could follow the advice in the article and still be vulnerable. The article should mention the actual vulnerability: CSRF, because there's a state-changing HTTP handler with no CSRF protection. CORS doesn't automatically protect against CSRF, and people shouldn't be given that impression.

I've been seeing more and more developers not being aware of CSRF. Maybe the reason is that frameworks got very good built-in support for CSRF protection, which made CSRF vulnerabilities less common, which led OWASP to move it from #8 on the list in 2013 to not being explicitly on the top 10 at all in 2017 (marked as "Merged or retired, but not forgotten": https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29...). But it's still a very valid vulnerability that I think every developer should understand.

> Merged or retired, but not forgotten

Oh the irony.


> Someone could follow the advice in the article and still be vulnerable.

This applies to any advice that is not an actual reference implementation in code. There's only a state-changing GET handler because they chose to work around CORS and you can't POST for images. Implemented properly this CORS approach does protect against CSRF without the need for an explicit token. I've written more about this before, with concrete examples in Node: https://fosterelli.co/dangerous-use-of-express-body-parser


>There's only a state-changing GET handler because they chose to work around CORS and you can't POST for images.

I'm not sure of that. There's tons of stuff in existence that has state-changing GET handlers without the goal of working around CORS; check out the HN upvote button. It's conceivable to me that if Zoom managed to send a CORS response header, they might have used an XHR with GET (considering GET is the default for XHR). But a discussion of GET vs POST seems largely irrelevant, because POST doesn't solve the problem.

>Implemented properly this CORS approach does protect against CSRF without the need for an explicit token.

The problem is that your definition of "properly" is "must reject requests with a Content-Type other than application/json". That's a non-obvious definition of properly that's not stated in the CORS article. That body parser article is better (although it does seem mistaken about XHRs being unable to make cross-origin requests). The CORS article would be better if it mentioned the Content-Type requirement and linked to the body parser article.

Although implementing CSRF protection via Content-Type is somewhat fragile. Someone might want to support a new Content-Type in the future and not realize that the Content-Type restriction is security-critical. Implementing it via an Origin check (or an X-Not-Simple-CORS: true header) would be clearer.
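The custom-header variant mentioned here can be sketched like this (header name taken from the comment above; any non-safelisted request header forces a CORS preflight, so a standards-compliant browser won't deliver the request cross-origin unless the server approved it in the OPTIONS response):

```javascript
// Require a marker header so the request can never be a CORS
// "simple request". Node lower-cases incoming header names.
function hasPreflightMarker(headers) {
  return headers['x-not-simple-cors'] === 'true';
}
```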


State-changing GET handlers are against the HTTP concept and specs and are trouble waiting to happen. Each and every one of them was designed and implemented by incompetent and/or inexperienced developers.

I've occasionally used state-changing GET for admin-only functionality on small websites. I'd accept "reckless" but bristle somewhat at "incompetent and/or inexperienced".

;-)


It's not just the level of detail. For example, what if your advice included one fewer detail:

> The webserver listening in on "localhost:19421" should implement a REST API. This will ensure that only Javascript running on the zoom.us domain can talk to the localhost webserver.

Do you see how that is wrong?


You shouldn’t assume that a post titled “Developers don’t understand CORS” will only be read by people that understand all the intricacies of CORS. :) For everyone else, the need for a POST request [edit2: as well as a non-form Content-Type] may not be immediately apparent.

(Edit: It’s true that a proper RESTful API shouldn’t be using GET for operations with side effects anyway, but that’s different from knowing that avoiding it is mandatory for security.)


Some time ago, I wrote a (quite popular) article about the basics of how CORS works - perhaps someone may find it useful when reading the author's article:

http://performantcode.com/web/do-you-really-know-cors


Haha true! It's tricky to find a balance between a short post that nails home a point and a complete guide that hits all the nuances. I definitely intended the former here, but I think this is good evidence there is room for the latter.

I think the tragedy of web security is that it's become too complicated to provide simple guidance to developers who are feature-focused. They don't want to become experts in security-header machinery and the crazy amount of domain knowledge needed, but there aren't many other options.

Just look at all the security options a dev has to contend with:

   HSTS
   CORS (with its myriad of headers: ACAO, ACAH, ACAC, etc.)
   CSRF
   XSS
   CSP
   SRI
.. That doesn't even count the other fun stuff like SQLi, XXE, LFI, RFI, SSRF, and on and on. It's become really obvious to me that if the framework or language they develop with doesn't provide it enabled by default AND the most commonly searched/referenced docs don't explicitly tell you to disable it, it's likely not going to happen.

Honestly, I think the real issue is being able to see exactly what the machines are doing when they talk to each other. And I don't mean diagrams either.

I just went through this when setting up an nginx reverse proxy to a gunicorn web server. Once I was able to see all the X- headers and how wsgi was setting up its environment against that, it all became very clear to me what was happening and why each piece was necessary.

I think the same would apply to being able to see exactly what happens with preflight requests. PS: non-interactive diagrams don't fill that gap.


Chrome Developer Tools shows the preflight request in the network tab, as it does every other request. It's even called out as being part of the same communication, if you group by communication instead of sorting by timestamp. What's not clear is why it's necessary sometimes and why it's not, where you have to dig into a mix of history and security policy, and do a bit of threat modelling.

> The CORS header will definitively block the request to an AJAX REST API from going through, because it will be a POST request with an `application/json` Content-Type, which will trigger a preflight request.

Then you should edit your article to say that, or at least put a caveat saying it's more complicated than you imply in your article and to go look up the details elsewhere.

If I wanted to design an API that was prone to security issues, I would design something like CORS. While I understand why some requests require pre-flights and others don't, looking at it without considering web history: it is absolutely insane that changing what appears to be a non-security-related parameter (Content-Type) drastically changes how the request is executed. Making it worse is that it's super easy to ignore the Content-Type header on the server side and just assume that the body will be what's expected. Making it even worse is that if a request is not pre-flighted, it will be sent to the server and executed, and only the reading of the response will be blocked by a non-matching Access-Control-Allow-Origin header, which is not intuitive at all.
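The preflight rule being complained about can be sketched as a rough classifier. This is a simplification of the Fetch spec's "simple request" definition (it ignores the request-header safelist fine print):

```javascript
// A cross-origin request avoids a preflight only if both the method
// and the declared Content-Type are CORS-safelisted.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
]);

function needsPreflight(method, contentType) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  if (contentType === undefined) return false; // e.g. a bare GET
  // Drop parameters like "; charset=utf-8" before comparing.
  const essence = contentType.split(';')[0].trim().toLowerCase();
  return !SIMPLE_CONTENT_TYPES.has(essence);
}
```

So a POST with `application/json` gets preflighted, while the same POST with a form content type goes straight through, which is exactly the trap described above.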


Worth mentioning that according to OWASP's latest recommendation, #1 is considered a first line of defense and #2 more of a defense-in-depth technique (i.e. in addition to #1, not as a replacement): https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...

SameSite=Strict cookies are also an honorable mention as an up-and-coming technique once all browsers support it.
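As a sketch, a session cookie marked this way (the attributes are standard Set-Cookie syntax; "sid" is a placeholder cookie name). With SameSite=Strict, a compliant browser omits the cookie from cross-site requests entirely, so a forged request arrives unauthenticated:

```javascript
// Build a Set-Cookie value for a session cookie that is never sent
// on cross-site requests (SameSite=Strict), never readable from JS
// (HttpOnly), and only sent over TLS (Secure).
function sessionCookie(name, value) {
  return `${name}=${value}; Secure; HttpOnly; SameSite=Strict; Path=/`;
}
```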


> "Access-Control-Allow-Origin" doesn't block the request from going through, it just prevents wrong-origin Javascript from accessing the response.

On a GET request (like the one generated by an IMG tag) and all the other cases that do not require a pre-flight request ( https://www.w3.org/TR/cors/#preflight-request ) this is correct.

Are we clear on the threat model and what Zoom is trying to do? As far as I know, Zoom wants to make "joining a meeting" as easy as clicking a link. I do not think Zoom wants people to be logged in to a Zoom account in order to join.

And what Zoom should try to avoid is to have random people joining meetings and website forcing actions on the laptop of the people.

A simple fix is to use a GUID or any other long, unguessable string for the meeting id. And, of course, do not expose any other potentially dangerous endpoints besides the one that starts the meeting.

In general it should not be about either CORS or CSRF. There are better ways to start a video call communicating with a plugin than exposing a port on localhost and using <img src="localhost:12345/a-command?aParameter=12345" /> to communicate with it.


The threat model seems to have 2 requirements:

1. Only if the top level url bar says zoom.us is it allowed to cause any noticeable change to the zoom native app. This means some arbitrary website cannot cause any noticeable change to the zoom native app without redirecting you to zoom.us . This also means that a sandboxed iframe, such as an embedded web ad, has no possible way to cause a noticeable change to the zoom native app, because it is not able to do top level navigation to zoom.us .

2. When the zoom native app is triggered, the user must click yes on a confirmation button before joining the meeting.

I don't see people advocating that you need to be logged in to zoom.us .

Achieving requirement 1 without requirement 2 is still bad, because some website could redirect to zoom.us and force you into a meeting.

Achieving requirement 2 without requirement 1 is not too bad, but is annoying, because some sandboxed ad in a website could cause the zoom native app to pop up, annoying you, and could maybe almost be a DOS.


I’m not sure you understand the threat model either. Even if the meeting ID is a GUID, you’re still leaving an avenue for people to have their visitors join arbitrary meetings.

Well, I guess I need to explicitly say that having the Zoom client ask something like:

"www.visited-site.com" wants to start a meeting with you. Do you want to join?

is (uncommon?) common sense that the Zoom app should follow.


They seem to have engineered this just to avoid such a prompt, because Safari brings up one when navigating to a protocol that launches an external app.

> "Access-Control-Allow-Origin" doesn't block the request from going through, it just prevents wrong-origin Javascript from accessing the response.

If the server only reacts to POST, then Access-Control-Allow-Origin may be enough, because the browser will first do a preflight OPTIONS, and if Access-Control-Allow-Origin is not set, the POST will not be made by the browser.


There are some exceptional cases where a POST is allowed without a preflight check: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...

You are right, and reading this, the cases are not really exceptional but rather common; so yes, it's better not to rely on CORS for protection in this case.

Not really, you can just check that Content-Type is application/json and refuse to do anything if it's not. That effectively shuts down application/x-www-form-urlencoded or multipart/form-data CSRFs. In fact, parsing a POST payload as JSON without checking Content-Type first is frowned upon, and some frameworks refuse to do that by default.
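A minimal sketch of that check (framework-agnostic; the function name is illustrative):

```javascript
// Refuse to touch a body unless the client declared application/json.
// A cross-origin page cannot send that Content-Type without passing a
// preflight, so form-encoded CSRF payloads are turned away here.
function isJsonRequest(contentTypeHeader) {
  if (!contentTypeHeader) return false;
  // Strip parameters like "; charset=utf-8" before comparing.
  return contentTypeHeader.split(';')[0].trim().toLowerCase() === 'application/json';
}
```

In an Express app, for example, this could sit in a middleware that returns 415 before any body parsing happens.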

(Throws up hands) But this is ridiculous!

The security of the API can’t rely on strict validation of the content-type, how is that code supposed to make any sense to the 3rd generation of developers who are tasked to maintain it 5 years from now?

CORS is clearly not designed as a means of access control to ensure the POST request will never be made. Therefore, it is fairly catastrophic that an article prognosticating about how nobody bloody well understands CORS has also gotten it terribly wrong.

What this tells me is that there is probably big money in bug bounties for finding sites which use CORS for access control, and almost certainly are not checking the content-type.


application/x-www-form-urlencoded and multipart/form-data POSTs predate CORS. They have to be whitelisted, or large swaths of the existing web would be broken.

Yes, but that doesn't mean we should build a website that has its CSRF security depend on the Content-Type header. There are other mechanisms to gain this security that are less confusing.

Sure, nothing wrong with an additional (trivial) check, though.

These things keep coming up... but how large a swath is it really?

Then again, only with TLS 1.3 do we get rid of RC4!! Except when downgrading to 1.2, 1.1, 1.0, ssl3 (is that even around?)


You should not implement SSLv3

If you are willing to do SSLv3 the POODLE attack downgrades you and steals one byte of encrypted data per 256 iterations.

If you demand SCSV to defend against this downgrade, every implementation that speaks SCSV also offers a better protocol version than SSLv3 so you won't end up talking SSLv3 anyway, thus you should just not implement SSLv3.

You also shouldn't implement RC4 in 2019. Refuse to connect to peers that only offer RC4 instead.


That was my point

TLS version can usually be upgraded transparently by, say, web hosts. To retrofit CORS you actually need to inspect legacy code and sometimes make modifications (in addition to modifying say nginx config), which is a fair bit harder.

Is this true if the request is a POST? Shouldn't a POST lead to a preflight request, and then the browser refusing to send the real request?

POST requests are not necessarily preflighted. If they used non-standard headers or content-type it would be.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...


The designers of CORS kept the previous security for certain types of POST requests because they are basically equivalent to how forms work, which operate at a much worse security standard and are a source of a great number of CSRF problems. The simplest way to remember it is that most things which break the legacy behavior of POST trigger the preflight. Speaking of form problems, last I checked, HN has a login CSRF.

It will trigger preflight if POST and not a form content type. (E.g. a POST application/json)

Is all of that worthless if the calls to localhost aren't encrypted?

All of what worthless? The advice that cakoose gives seems quite valid to me without any encryption to localhost.

What do you mean encrypt calls to localhost? Encrypt with https? That sounds hard since you can't get a real CA to issue you such a cert. I don't see much point in encrypting with https because people off your machine can't sniff or intercept your connection to localhost. Is your goal to protect yourself from other programs or users on your same machine? That might be a valid concern on a machine with multiple users, but definitely makes the problem harder.

Or do you want some custom application-level encryption? I can't think of a good way to implement that either, but maybe it's possible.

But since I don't know what attacks you're trying to protect against, I can't tell whether there's any benefit at all to encryption.


I certainly don't understand CORS. I've read about it several times and I don't remember what I read. It's like CORS has a teflon coating that prevents it from sticking in my mind. I know what CORS is for and why you need it, but I have no idea how, where, and when to use it.

I have some basic understanding of it. I'll try to simplify what I know if anyone else is feeling confused about CORS.

- You visit evil.com.

- Evil tries to make an HTTP request to bank.com/transfer.php

- But before the request goes through the Browser says, hey, wait a minute I first need to make sure if evil.com can access bank.com

- Browser asks bank.com if evil.com (also known as the origin) is allowed, by sending an "OPTIONS" request first (this is before making the actual request)

- Bank.com replies in a header saying "sure, everyone is allowed" (e.g. my bank), technically by setting an HTTP "Access-Control-Allow-Origin: * " header (or just evil.com instead of * )

- Browser is happy, it makes the requests and you get the data

Of course there is more to this: the server can tell which HTTP methods evil.com can use, like "POST", "GET", "PUT", etc., using "Access-Control-Allow-Methods", and whether the browser should send cookies with the request ("Access-Control-Allow-Credentials"), etc. But that is pretty much the gist of it as I understand it.


No.

- You visit evil.com

- Evil tries to make an HTTP request to bank.com/transfer.php

- The browser happily performs the request, authenticated with your cookies, and the bank, having a CSRF vulnerability, happily sends your money to the attacker.

- Since 'evil.com' and 'bank.com' are different origins, Browser refuses to provide the response to evil.com, but the attacker doesn't care, he got the money.

CORS allows you to relax these restrictions, not tighten them.

Now, bank doesn't like these attacks. So they make the legitimate application send an additional custom header, "X-Totally-Secure: true". Despite being a really bad idea, if (big if) the browser follows the standards, this prevents the attack:

- evil.com tries to make the HTTP request as before

- Browser lets it through, as before

- Bank rejects the request because it's missing the magic header

So the attacker adds the header to the request:

- evil.com tries to make a non-standard HTTP request to bank.com/transfer.php, with the header attached

- BECAUSE IT'S A NON-STANDARD REQUEST, browser asks bank.com (as you described, OPTIONS)

- Bank.com replies "wtf do you want I don't know what OPTIONS is"

- Browser refuses to make the request

Unfortunately, the bank forgot that they have a marketing department, that runs ournewbankapp.com, and shows your current balance in the fake screenshot of the app to show how awesome it is. And your bosses' bosses' boss has yelled at the IT department that rolled out the security measure to make it work again. They make ournewbankapp.com send the magic header (including access-control-allow-credentials), but now the OPTIONS request fails. So they teach the web server to respond with "everyone is allowed" (with "access-control-allow-origin: *" as you described) because they're lazy and dumb.

But because browser vendors know that developers are lazy and dumb, the browser completely ignores this: If access-control-allow-credentials is set, the allowed origin must be listed explicitly. The developers give in, and explicitly add ournewbankapp.com to the header, and now it works, but the attack doesn't work.

(part 2 follows)


Ways this story could have ended badly for the developers:

- instead of hardcoding the allowed value, they set it to always echo the value of the Origin header - browsers are powerless against that much stupidity, and can't distinguish it from a correctly configured server that is allowing the request. evil.com sends the request, bank says "evil.com" is allowed, browser shrugs and sends the request.

- instead of using CORS, they could have tried to build their own custom solution that allows ournewbankapp.com. The attacker would have to analyze how it works, and would then most likely find a way to exploit it to perform the attack, while legitimate users with privacy extensions would make the support hotline despair due to countless people complaining "I can't send money".

- instead of adding the custom header, they could have decided to check if "origin" is present, and if so, assume it's a cross-site request and check the origin against a whitelist. This still relies on standard-compliant browsers but isn't the worst idea AS LONG AS YOU REQUIRE AN ORIGIN VALUE, AND TREAT LACK OF AN ORIGIN HEADER AS A FATAL ERROR, BEFORE TAKING ANY ACTION (e.g. the money transfer) BASED ON THE REQUEST. I think you can force same-origin requests to include the origin header with some option on fetch(). But if you treat a missing (or "null") header as ok, the attacker will likely find a way to make some browsers send no header, and steal money again. Sending "null" (or some other default value, been a while since I played with it) is easy, I think with some iframe trickery.

- they could have used plain HTTP, exposing them to DNS rebinding attacks even from remote attackers who cannot sniff their user's traffic.

- they could have an XSS vulnerability somewhere on the main site or the whitelisted marketing site, allowing the attacker to proxy his malicious requests through that site (and thus make them use a whitelisted origin). Maybe some long-forgotten kludge designed to proxy requests to third parties that don't support CORS...


Variant of #1 that I've seen not infrequently:

They set it to check a regex for `bank.com` to also allow `subdomain.bank.com`, but inadvertently also allow `bank.com.evil.com`, `evilbank.com`, or similar.
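A quick demonstration of that pitfall (hostnames illustrative):

```javascript
// Unanchored substring match: intended to allow bank.com and its
// subdomains, but matches attacker-controlled lookalikes too.
const sloppy = /bank\.com/;

// Anchored and escaped: the origin must be exactly https://bank.com
// or end in ".bank.com". Still only a sketch; an exact-match whitelist
// of full origins is simpler and harder to get wrong.
const strict = /^https:\/\/([a-z0-9-]+\.)*bank\.com$/;
```

With the sloppy version, `https://bank.com.evil.com` and `https://evilbank.com` both pass; the anchored version rejects them while still allowing subdomains.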


How would DNS rebinding work? I thought DNS rebinding only works if there's some sort of authority that's ambient across hostnames (such as whitelisted user IP ranges, or connectivity to a network), which doesn't seem to apply here. If you do have authority that's ambient across hostnames, my thought of how to protect yourself is to check the Host header.

XSS is essentially completely separate from CORS and CSRF. XSS is a bad vulnerability, worse than CSRF, and you can't expect any type of CSRF prevention to protect against XSS.


How would one handle the situation of having to respond to (authenticated) requests from a web app running on both localhost and a domain-based website?

I've been in this situation before with web-based cross-platform apps using Cordova. As far as I remember, I ended up echoing the origin domain if it matched a set of allowed domains (basically localhost and example.com), since regular expressions or lists are not supported in allowed-origin headers. For session tracking, I used tokens rather than cookies to avoid CSRF issues. Would there have been any better solution for this? I think not.
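The pattern described above can be sketched like this (the origins are placeholders; the echo is needed because Access-Control-Allow-Origin itself only accepts a single value or `*`, not a list or pattern):

```javascript
// Echo the request's Origin back only if it exactly matches a fixed
// whitelist; otherwise send no CORS headers at all.
const ALLOWED_ORIGINS = new Set([
  'http://localhost:8080',
  'https://example.com',
]);

function corsHeadersFor(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) return {};
  return {
    'Access-Control-Allow-Origin': origin,
    // Tell caches the response varies by requesting origin.
    'Vary': 'Origin',
  };
}
```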


This is the first comprehensible, and more importantly memorable, description of CORS I have read. Hope it sticks this time.

I know you know this, but to clarify steps 2/3:

- Evil tries to make an HTTP request to bank.com/transfer.php

If this is a regular HTTP POST (i.e. submitting a form, and the browser window changes location), the browser will allow it.

If this is an xhr POST, the browser, following the same-origin policy, allows the request, but prevents accessing the response (unless allowed with CORS).

[EDITED with tgsovlerkhgsel's help!]


Even a JavaScript initiated POST request will go through. Blocking it would not make a lot of sense, because the attacker could just use the FORM (possibly in an iframe to keep it invisible to the victim).

It is possible that XHR, or common XHR libraries, default to adding some header that makes it a non-standard request, but a fetch() call works.

In Firefox, open a debug console and run

    fetch('https://otherorigin.example.com', {method: 'POST', body: 'blah'})
You will see two things. In the console:

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://otherorigin.example.com/. (Reason: CORS request did not succeed).
In the network tab, a HTTP request.

Replace with a URL of a server you control, or run `nc -lnvp 9999` and replace the URL with http://127.0.0.1:9999, to see that the request is indeed being made.

As the author said... few people understand CORS.


Bad naming strikes again? The message "Request Blocked" means something unambiguous to me. And it doesn't mean "I made the request, but won't let you see the response."

But for XHR, only POST requests that meet a lot of constraints (defined here: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...) will work, right?

Nothing with a Cookie header for example...


That page says that the Cookie header cannot be "manually set". Headers can still be automatically sent, and the browser can automatically send cookies from the cookie jar. So for example this will not send a request with the manually set cookie:

    r = new XMLHttpRequest();
    r.withCredentials = true;
    r.addEventListener('load', function(e) {console.log('loaded: ', this, e);});
    r.addEventListener('error', function(e) {console.log('error: ', this, e);});
    r.open('GET', 'https://www.google.com');
    r.setRequestHeader('Cookie', 'somecookie=somecookievalue')
    r.send();
But this will send a request with the automatically set cookies just fine:

    r = new XMLHttpRequest();
    r.withCredentials = true;
    r.addEventListener('load', function(e) {console.log('loaded: ', this, e);});
    r.addEventListener('error', function(e) {console.log('error: ', this, e);});
    r.open('GET', 'https://www.google.com');
    r.send();
Assuming the user is already logged in to bank.com , the user's cookies will be automatically sent on the request, and the transfer will go through assuming there is no CSRF protection.

TIL. Thank you for teaching me something - I had no idea the request still went through and it was just the response being blocked.

Thank you, I stand corrected! Editing :-)

How hard is it to trigger a regular HTTP POST instead of an xhr POST?

Not hard. Just make an html form with the values you want, then with javascript call

    document.getElementById("myForm").submit();

Another option is to use a secure token (https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...). This is a PITA, but frameworks like ASP.NET have it baked in, so it's not really much of a hassle in practice.

It certainly won't allow me to make xhr POST requests without sending an OPTIONS request first. Are you sure about that?


Amazing. You learn something new everyday. Though it does make sense why blocking it won't serve any purpose as form submissions are going thru anyway.

Also glad I didn't mention jsonp. God knows what the devs at ournewbankingapp would do with that one ;)


Yes, you can send some POST requests without a preflight OPTIONS request: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...

Actually they can do better than just a custom header: an XSRF token, which is fairly standard nowadays for requests that are authenticated via session cookies (Django, for example, strongly enforces this by default).

Indeed, this is the canonical way of avoiding CSRF (= XSRF) attacks. I intentionally only explained the header method because I didn't want to confuse readers with another concept, and because it's a really nice example to explain CORS.

https://github.com/OWASP/CheatSheetSeries/blob/master/cheats... is probably one of the best documents discussing this, including the drawbacks of the header method that I used (and warned against) in my example:

https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...


Wait. If the resource at bank.com is protected by a CORS filter, the request which is issued at step three will have an origin header, and the CORS implementation is free to issue a 401 at that point. At least as far as I have understood. That would be tightening the restrictions. Am I missing something?

There used to be ways to get rid of the origin header, not sure if this is still the case. You can easily set it to "null", e.g. with some iframe trickery.

As others have mentioned - thanks for this! I thought I understood CORS, but one thing I apparently wasn't clear on was the conditions on which a pre-flight request was made. Thanks for clarifying!

Thanks for this; it helped me.

Very naively, I don't understand why evil.com origin was allowed to make any request to other domains using any cookies/session/identity of the browser/user in the first place.

Why was this the accepted standard? I was blown away when I first learned that a request made to xyz.com while in a browser tab showing abc.com would actually complete the request using my identity on xyz.com.


Because you used to be able to host your web site on www.compsci.tech.youruniversity.edu, and put a HTML <form> element in there, that would send the result to myawesomeguestbook.com.

Thus, cross-origin POST requests were allowed, and changing it was impossible without breaking the web. Likewise you can include images or iframes of other origins, allowing GET requests. However, you cannot read the responses! The JavaScript APIs just make it more convenient to make these requests, but they don't change what you can do - you still can't read the response. Not allowing it in the JS API would just make the attack more annoying (fall back to the legacy methods) and limit legit use cases.

It's important that you are not allowed to make arbitrary requests: As soon as you start using advanced features like custom headers (in other words, things you couldn't do with img tags, redirects, forms, and iframes), you run into preflight (where the browser asks the server whether the request is allowed before performing it, via OPTIONS, i.e. in a way that legacy servers will not understand and safely reject).


My reading, feel free to correct if I'm missing something.

This is a compatibility feature that can never change, and shouldn't change, because we can't break the web. But my reading is that CORS is mostly a band-aid on what was a bad security policy to begin with.

The problem isn't that a browser might let a web page on a separate domain send and read any arbitrary request to another domain without permission -- with the exception of DDOS and bot attacks against servers, it's difficult to think of what risks this would actually pose to users themselves. And any native program on your computer can already send an arbitrary request to an arbitrary domain and read the data -- so there are at least a few arguments that it might even be beneficial for websites to be able to do the same.

The security problem is that requests are made with your current cookies regardless of what domain you're on. Ideally, browsers from the start would have isolated every domain into their own container, so that if evil.com made a GET/POST request to mybank.com, it wouldn't be authenticated as me, regardless of how the server was configured.

We made it so that cookies could only be read by certain domains, but we didn't think until very recently to make it so that cookies could only be sent by certain domains. And at this point, too much infrastructure relies on this behavior for us to ever come up with a better model, so we're stuck with hacks like CSRF tokens which are dependent on websites being coded well.


>The problem isn't that a browser might let a web page on a separate domain send and read any arbitrary request to another domain without permission

That's not fully correct. Even without cookies it's bad if a.com can read a response from b.com. Why? Because b.com might be hosted on a private network, and contain private information. We don't want a.com to steal that information. Even if cookies aren't sent, there can be other types of ambient authority, and connectivity to a private network is one of those types of ambient authority. Another similar type is if a website is protected using a whitelist of user IPs. Side note: any website that uses this type of ambient authority that isn't associated with a hostname must also check the Host header to protect against DNS rebinding.
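The Host check mentioned in that side note is tiny; a sketch (hostnames here are invented):

```python
# Sketch: a server relying on network position for authority should reject
# requests whose Host header isn't one of its own names. Otherwise a
# DNS-rebinding attacker can point their own domain's DNS at the private
# IP and have the victim's browser treat it as a same-origin site.

EXPECTED_HOSTS = {"intranet.corp.example", "10.0.0.5"}  # hypothetical names

def host_ok(host_header):
    host = host_header.split(":")[0].strip().lower()  # drop any :port suffix
    return host in EXPECTED_HOSTS
```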

Of course if the web was built from the start such that a.com can read b.com just without cookies, then such websites that rely on non-host-associated ambient authority (hopefully) wouldn't be built and you would be right.

Just because a native program can do something doesn't mean a website should be able to. A native program can delete all my files without asking me. A website can't.


The alternatives are grim.

As an extreme example, if I were to stream video to your browser from my website, evil.com, I would have to either host or proxy those streams through my server. Instead, I can just point your browser to a video from youtube.com or something.

If youtube.com then decides you need to log-in (or be logged-in) to verify your age (as they do), then that is between your browser and youtube.com to negotiate -- evil.com never sees your credentials.


A tangent, but YouTube actually does not require login to verify age on embedded videos.

You're welcome. Yes, you still can make GET requests from evil.com to bank.com/transfer.php by setting an image src or script src, or via form submission, though you won't be able to read the response.

This is probably the reason that any url that makes changes to user data like logout.php, transfer.php, etc. must be a POST request (with a csrf token to protect against requests created via form submissions)

Also it's still possible to bypass CORS using JSONP requests or form submission, but for that transfer.php must support JSONP too (by setting the callback method)


Wow that makes more sense too. Basically, don't implement GET in any remotely-sensitive API that actually causes changes in the remote system/service?

EDIT: Why isn't there a header that web servers can give out to browsers basically saying "Don't use the cookie/session I'm giving you ever, unless you're literally on this site"? I could see this being very useful for banks or other origins where they expect no requests except from direct users.


"Why isn't there a header that web servers can give out to browsers basically saying "Don't use the cookie/session I'm giving you ever, unless you're literally on this site"?"

There is, it's relatively new though:

https://blog.mozilla.org/security/2018/04/24/same-site-cooki...
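Concretely, opting in is just an extra attribute on the cookie; something like (values invented):

```http
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Strict
```

With `SameSite=Strict` the browser won't attach the cookie to any request initiated from another site, which is exactly the "only use this when you're literally on my site" behavior you're asking about.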


Yes also see my comment about csrf token too. If the request changes user data it must be a POST request with csrf protection.

You can't make POST requests with ajax as CORS will protect you, but evil.com can create a fake form submission (with bank.com/transfer.php as the form's action) which can be a POST request. So in this case you match the csrf token which only you know and which is unique for the form/session.
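A minimal sketch of that token matching, server-side (function names invented; any real framework ships an equivalent):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session token; embed it in the form as a hidden field."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def check_csrf_token(session, submitted):
    """Reject cross-site form posts: evil.com can submit the form, but it
    can't read the page containing the token, so it can't supply it."""
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```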

It's a rabbithole :)


It seems you can make a POST request from javascript, as long as it doesn't have non-standard headers or content-type.

Try it yourself: https://news.ycombinator.com/item?id=20406633


By default, under the same origin policy, a browser won't allow requests cross origin.

But there are valid situations where you want a request from 1 domain to be made to other domains. This is where CORS comes in.

CORS is a mechanism to loosen security, not increase it. It allows a server to say, these are the domains (outside my own domain) who can make requests. CORS headers should be set carefully so that you are only allowing the domains that should be allowed through.


> CORS is a mechanism to loosen security, not increase it.

Or we could call it CORB instead (Cross origin request blocking), and then we see it's a mechanism to tighten security. Since fundamentally, what we have is an agreement among major web browser vendors that blocks cross origin requests unless the web server authors have used CORS.

I mean, how many people have encountered a problem with CORS? Almost no-one; most of those who think they have actually hit a CORB problem and solved it by enabling a shitty CORS policy that opened the doors. (At best, they're now fixing security holes in software written by devs who encountered a CORB problem and fudged it. But all CORS problems follow a CORB problem.)

If we called it by its true name, maybe it would help people understand what's happening. Names are important. If developers understand CORB, they will potentially understand CORS. But no-one can understand CORS till they've understood CORB.


I think you have a bad acronym collision with Cross-Origin Read Blocking.

https://fetch.spec.whatwg.org/#corb


> By default, under the same origin policy, a browser won't allow requests cross origin.

Save a rather short-but-impactful list of exceptions.

> CORS is a mechanism to loosen security, not increase it.

Would that everyone shared your understanding.

Add in these two insights to those we are enlightening:

* CORS is enforced by the browser, so no, your curl command working doesn't say your service is fine

* That error message in the browser about 'no-cors'? It is 99% likely that no-cors is NOT what you want, so the error message is just misleading and unhelpful

...and you'll have covered my CORS wishlist :)


> By default, under the same origin policy, a browser won't allow requests cross origin.

Cross origin requests are allowed (as long as they're simple). Reading the response is what's blocked.


This context was incredibly useful; thank you.

Am I missing something or is this transaction actually a browser being the deciding factor for whether or not the request gets sent?

If that’s true, couldn’t a nefarious browser decide when to push a request and completely ignore the OPTIONS header?


Yes, and that is a very important point you noticed.

Both CORS and CSP are for guarding against malicious code on the web, being executed by non-malicious standards-following browsers used by non-malicious users. (the user is in fact who is being attacked).

I think this is frequently confused, and it leads to a mistake in the other direction: people thinking CORS or CSP can guard against malicious user-agents operated by malicious users. It is not for that!


Yeah that's correct, CORS is a browser feature. If you had a nefarious browser installed it could indeed defeat CORS, but at that point you already have a nefarious browser and CORS is the least of your concerns.

CORS prevents a malicious site from exfiltrating/accessing data using the access you have, for example an internal site that is on your computer's network but the malicious website's servers can't directly access.


Of course, anyone can use netcat to send any kind of request.

However, this attack is most useful if you can get the victim's browser to send the request for you, because that way, you can get it to include the victim's authentication cookies.

If you as the attacker send the request yourself, you don't have the cookies.

If you make the victim's browser send it, you either can't make them use a nefarious browser, or you already won since you have code execution on the victim's machine.


Sure, but then again a nefarious browser could just record and broadcast all your interaction (including password etc) with bank.com directly.

No web standard can protect you from a nefarious browser, since the browser could just decide to not follow the standard.


You're correct, the browser is the safety net when it comes to CORS.

Safari used to let you turn off CORS checks in the developer tools menu.

This Teflon you're talking about, it's a thing. Not just with CORS, not just with you.

For example, I've gotten decent at regular expressions a few times but have to keep relearning it. Adding insult to injury, I might relearn something from Stackoverflow, and realize afterwards I posted the answer!

Not for trivial stuff. But a string of punctuation with capture groups or whatever, forget about it [1].

Any support for this informal draft definition below?

------------------

Teflon tech (draft definition v0.1 [2])

Knowledge that could be learned by most with reasonable effort, but often must be relearned or is never fully grokked.

This can be due to infrequent opportunities to recall the concepts in a particular role, or insufficient need to apply (thereby practicing) the relevant techniques. It may also occur when a tech is used frequently but the foundational theory is hidden by the use of higher level abstractions.

[1] https://stackoverflow.com/questions/1381097/how-do-i-get-the...

[2] content provided under MIT license.


I call it my intellectual immune system, but it might be more like a runaway process killer or an OOM-killer. It hates ugliness, and it hates complexity. I kind of appreciate its attempts to clean Javascript out of my brain. Even though it makes it frustrating to do front end tasks, I appreciate the good intentions.

In the 2000s I learned Perl no less than three times. I was really impressed by what people could do with small scripts and one-liners, and I wanted that power for myself. I got really good three times, and each time it took me less time to forget it than it took me to learn it. I did not agree with its decision in that case, but I lost the argument.


As an online discussion grows longer, the probability of an unrelated analogy involving computer science and software engineering approaches 1.

I shall graciously accept this being called adtac's law out of sheer humility :P


More like: people rely on shared knowledge when communicating. This is Hacker News, after all.

I think the proposed definition is too sympathetic to the Teflon. Sometimes a whole approach is misconceived, yielding billowing complexity; it may be too hard to see through the fog to be sure of this in particular cases, but surely it happens. This can apply to the "foundational theory" too.

Probably some combination of things that break principle of least surprise along with not needing to think about that thing very frequently. A good IDE and good coding standards can smooth things over.

I'm ok with regex too, but if you asked me to do something with look-behind I'd have to experiment a little to figure out how it works again.


Javascript is like this with me. I have no idea why.

Probably my having a super low opinion of its whole ecosystem doesn't help. ;)


I think it's 100% because you don't like Javascript. You resent having to learn it. I feel the exact same way about Git, I've never been able to stomach it, I'm bewildered as to why the entire programming community have decided it's the one VCS to rule them all, and I resent having to grok its UI.

The Javascript ecosystem is huge. It's also the wild west, completely non-curated, open to anyone, and you have to treat it accordingly. It's not really one entity.


Git is amazing, beautiful piece of software.

But... sigh I feel about Vim exactly how you feel about Git. It's the one editor everyone has decided, despite its arcane and bizarre UX, that you aren't a real programmer if you don't know.

Refuse to learn it. Absolutely refuse.

I think everyone has one of those things.


I have at least 2. Git and vim. It seems fitting then, that vim should be the default editor for git's commit messages.

Hahahaha. Hot damn. Oh that would really get my goat.

I only just read this comment. It is golden. Now we should just bookmark this comment and use the term teflon tech as often as possible to make it stick (euhm...)

Yeah I keep regex syntax pinned up at my desk because it always gets cache-flushed beyond the basics even if I used it yesterday.

Anki

I read extensive parts of the CORS specification and it is indeed teflon (and just as toxic, too). The spec is full of exceptions and leaves many choices to the browser vendors.

To truly understand CORS, you have to fundamentally understand 'the origin' as a security and execution context. Sharing means that content normally intended for another origin is shared with the initial origin.

Perhaps that is blatantly obvious and you are more confused by the myriad of headers, exceptions and lifetime. In that case, I propose to read more of the spec.


> In that case, I propose to read more of the spec.

Ingest more teflon when in doubt about its toxicity.


But Teflon is non-toxic and biocompatible. You can eat it, your body is unable to process it or break it down and it will pass through your digestive system. You can coat surgical instruments and medical implants with Teflon.

(Above 300°C Teflon will generate toxic fumes, but then again, so will wood.)


Studies funded by companies selling Teflon products say 300 degrees. Others conclude it happens at temperatures as low as 200 which you can easily reach while cooking.

Anyways I'm almost always using plain old steel pans and pots. There are very few things that stick so badly I resort to the one Teflon pan I have.


Watch out: some types of stainless steel can leach enough nickel to approach the safety line.

Which safety line? :)

> Above 300°C Teflon will generate toxic fumes, but then again, so will wood

Unless you are a parrot. I've heard it is disproportionately toxic to birds. (or is it just parrots?)


I have read $somewhere that it accumulates in testicles.

And why would I coat surgical instruments?

Edit: Just one article how healthy Teflon is:

https://fortune.com/longform/teflon-pollution-north-carolina...


That is an article about byproducts of Teflon manufacture.

Wood tastes better

With regards to the toxicity, I was referring to the Intercept [1] article which appeared last year. It's a long read and details the manufacturing process. Nevertheless, where there is smoke...

[1] https://theintercept.com/2018/07/31/3m-pfas-minnesota-pfoa-p...


Love the teflon analogy; can totally relate. At this point our team has literally anthropomorphized CORS into some sort of malevolent entity actively trying to ruin our lives.

But stick with it. It does slowly sink in.

Something that especially confused me at first was that all of the CORS blocking happens in the browser. So you never run into CORS using curl, etc.


CORS stands for Cross-origin resource sharing.

It allows you to make arbitrary HTTP requests across origins, even those that would normally be blocked by the same-origin policy, as long as the server collaborates and allows it, easily and in a secure way.

If you control the server (or can get someone to adjust it, or it already supports it), you can use it to make requests from one website to another, directly from the client.

For example, you can have JS on a site served on www.example.com make calls directly to api.someservice.net, without any kludges.

Any time you think of some contrived proxy solution, JSONP, or other kludges to bypass the same-origin policy, you want CORS.

For non-anonymous APIs, authentication can still be a challenge.


Well... CORS controls whether the browser lets the other page read the response, not issue the request, preflight notwithstanding.

Yes, because issuing the request was already allowed despite the same-origin policy. For example, you can do GET requests with iframes, images or redirects, and POST requests with HTTP forms.

The important part is that CORS can be used to relax any existing same-origin restrictions on HTTP requests.

1. If the same-origin policy isn't a problem for you, no need to change anything, obviously.

2. If you want to _tighten_ the same-origin policy, CORS isn't going to help you (as far as I know).

3. You can get rid of any request-related same-origin policy restriction with CORS, easily and in a way that is easy to secure. There is no need to build around it.

4. If you try to use something custom to achieve that instead, you're likely making a dumb mistake, because you're not only creating unnecessary work, but also most likely either breaking many more users than there are users that don't support CORS, or introducing a gaping security hole, or both. Usually both.


The only thing I know about CORS is when every web project I do starts generating funky errors and the fix is some CORS related copy-pasta I have to add.

Very little objective thought involved.


I came to this thread hoping to find that I am not alone. Thanks for validating that for me.

I was also surprised to read: "the browser explicitly ignores any CORS policy for servers running on localhost." on the Zoom vulnerability writeup.

Among all the W3 specs, the CORS one is definitely not the largest spec. See https://www.w3.org/TR/cors/

Ultimately the Zoom vulnerability reminded us that there is always a trade off between usability and security. Most of the developers default to usability and simplicity.

The average developer looks for the fastest way to achieve "what I want to do". And they do not even question installing stuff with "curl https://any.domain.com/any-script | sudo bash".

And that is the root of all evils.


I think my personal brain-fog when it comes to web security is that it isn't just protecting the server from malicious clients, or the client from malicious data, or the user from a malicious client, but all of the above and then some. The attack vectors point every-which-way and it can get confusing whom you're trying to guard against whom in a given case.

I love that we're not alone in this. CORS is an antimeme, confirmed: http://www.scp-wiki.net/antimemetics-division-hub

As a motivation to use it: to prevent other sites from using your resources, costing you extra bandwidth.

On the contrary: CORS is what enables other sites to use your resources (in a way allowed by your policy). It was created because, by default, browsers don't let you do that.

I think what GP commented is a very common misconception. CORS is not the source of the developer trouble (that would be cross-origin restrictions), it is a way around them. But it is perhaps understandable where the confusion comes from... Browsers always mention CORS headers as part of error messages, after all.

+1 - This is a large part of the trouble. It leads to a lot of stuff being defined as the inverse of some other behavior - behavior that the developer does not have control or understanding of.

But only one domain at a time. The lack of a way to specify multiple domains in a header makes it absurdly painful to actually implement when you want to make your content available to a specific set of domains.

I’m personally at the point where I’d rather handle unnecessary request authentication than trying to do anything with CORS.


I'm curious, why was it painful? We just had a middleware do an Origin check and copy the header if it matches.

90+ microservices with 5+ environments. Mostly a matter of scaling a solution across a rather large surface area and across a number of languages and teams.

Fair enough; seems like it could be done using a gateway like Kong, but if you're not already using one, it wouldn't be worth adding it just for that.

Back in the day, we used to use some Apache redirect magic to redirect to, say, an image of our choosing when the Referer header was wrong. I had a relatively polite 'hey, you can see this image here:' message. Other people redirected to less friendly things.

You can still do this. The Referer header is sent by default on requests, and you can make your server interpret it to do anything you want.
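A sketch of such a check (domain names invented); being lenient when the header is absent avoids breaking direct visits and privacy-conscious browsers:

```python
from urllib.parse import urlparse

# Hypothetical hosts allowed to embed our images.
ALLOWED_REFERRER_HOSTS = {"myblog.example", "www.myblog.example"}

def hotlink_allowed(referer_header):
    if not referer_header:
        return True  # no Referer: direct visit, or stripped by the browser
    host = urlparse(referer_header).hostname or ""
    return host.lower() in ALLOWED_REFERRER_HOSTS
```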

This is true unless the referring URL is secured (HTTPS), and the destination URL is not. In that case a conformant user-agent will leave Referer out.

https://tools.ietf.org/html/rfc7231#section-5.5.2


True, though you should run HTTPS on your site. Which means you'll get the Referer unless the other site or the user's browser has been configured to suppress it.

> Original security article: briefly touches on CORS

> Follow up article: that thing about CORS is not right

> Hacker news top level comments: that article's not quite right

> First child comments: tlc is also not exactly right

I feel like I understand CORS even less now. Article title is right, I don't understand CORS.


I think the issue with understanding CORS is that you first need to understand same-origin policy and exactly when it applies. CORS is simply a method for bypassing same-origin policy. You also need to understand how a CSRF attack works.

Once you grasp both of these things, then you have the base for how and why CORS exists. Until then it's mostly an annoyance.


Totally agree. Something about CORS and the resources out there makes newcomers think that it's something the client has to do differently. I thought this when first doing cross-origin stuff, and thought it was just me until my dad (programmer for 30 years) got stuck on it the exact same way.

Also, the author makes a good point that `Access-Control-Allow-Origin: *` is pretty dangerous. I hadn't really worried about it in the past, because "I don't care who calls my api, they aren't authenticated". But if malicious client code got a hold of a user's session (using XSS or what have you), I'd be open to them steering my user's session and doing horrible things. Definitely going to review my current projects with this in mind.

Edit: of course, if I have an XSS vulnerability, they'd be able to do that from my domain anyway, so CORS doesn't 100% fix the problem. XSS is bad.


> But if malicious client code got a hold of a user's session (using XSS or what have you), I'd be open to them steering my user's session

I think the point of this stuff is that, if the user's session is stored in a cookie, and malicious javascript can send HTTP requests to your site (that will use whatever cookie is already present) and read the responses -- they can steer your session without needing to get a hold of a user's session using XSS or anything else.

They already HAVE a hold of the user's session, because they can control the browser, which does.

The point of cross-origin restrictions is to prevent JS loaded from one site from steering the user's session on a different site. (Maybe among other sorts of attacks).

CORS lets you disable those protections. `Access-Control-Allow-Origin: *` disables them entirely.

I could have some of this wrong, I find this stuff confusing too.

But I believe they do not need to "get a hold of a user's session (using XSS or what have you)" in order to steer a user's session under `Access-Control-Allow-Origin: *`


I guess that's what I meant; XSS would give them the user's session implicitly on my domain, regardless of CORS. CORS prevents them from using that from another domain, which is valuable, but moot if you already have a breach.

OK, I think I was confused about what "XSS" means or how you meant it or I was thinking about it differently.

The important point though: You don't need any pre-existing vulnerability on your site in order for "Access-Control-Allow-Origin: *" to create a vulnerability.

This stuff sure is confusing to talk/think about though.


> Something about CORS and the resources out there makes newcomers think that it's something the client has to do differently

I think that "something" is the fact that the client does the blocking. Generally the servers are perfectly willing to provide the data. Heck, if you sniff the packets, you could even look at the data blocked by CORS (assuming you're working around TLS). It's especially confusing at first that tools like curl work no problem, while browser block everything.


Yes, that's definitely a big part of it. Test it all with curl or your language's http client, then put it in the browser and it breaks? Must be the client's fault! Which it sorta is. But only because it's a client that doesn't belong to you — you're controlling it on the user's behalf. It doesn't help that when you're developing on your own machine, you're the user and the developer, and you of course trust yourself.

I for sure haven't mastered CORS.

What would you use instead of that? Suppose you're doing something like client-side Blazor that calls to a WebAPI project. How can you improve security by restricting the origin of access requests when your client is open to the public?


Access to an API should be managed by a proper authentication or token validation scheme. However, protecting your users' authenticated API sessions, which are presumably initialized by the aforementioned authentication scheme, is what CORS enables.

CORS, when implemented correctly, ensures that a session is not hijacked by a malicious website's JavaScript in order to call your API in the context of the session (effectively masquerading as the user who authenticated with your API). This scenario assumes that there is an authentication session cookie, tied to your API domain, that the browser would pass along with any request to your API domain (of course there are SameSite cookie and third party cookie blockers that can mitigate these situations as well, but perhaps "trusted" cross domain requests are desired in this use case)

With CORS allowing traffic from anywhere on the web, you can't reliably trust that the authenticated sessions to your API are not being used in phishing / side channel attacks: I discover you are authenticated on foo.example.com and I send you a link to my website, evil.com. Evil.com includes JavaScript to request data from an API on foo.example.com. Your browser executes the JavaScript and makes the request and gets a response payload, and since I'm a jerk I then post that same payload to my own endpoint on evil.com to capture the data.

Of course this all assumes that you WANT your API to be accessible cross origin. If you provide an API as a product this is common, since it allows other developers to build web apps against your API. If that is not a use case, then same origin policy (and no CORS headers from the API server) is sufficient to prevent malicious domains from doing bad stuff with your users' authenticated sessions.


I'm not familiar with Blazor or WebAPI, but if you're not expecting requests from client-side javascript specifically, you can just use `Access-Control-Allow-Origin: null`, since non-browser clients don't respect CORS anyway.

If by "open to the public" you mean "open to requests from any client domain", CORS isn't going to help. In that case I'd probably have api clients pre-register a whitelist of domains that they're planning to make client-side requests from, so you can check the domain against the whitelist and dynamically build your allow-origin header.


Can you point me to some documentation for dynamically allow-origin header? I’m working on open sourcing our frontend and so if users have their own frontend on their own domain, how do we allow those calls to our backend with CORS? If we turn CORS off, is this a security issue? From the frontend, we send a JWT with the header and check this for protected routes on the backend.

CORS is really pretty simple, it's getting the threat model that is tricky. Some docs on Allow-Origin here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ac... and a more complete walkthrough here: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

Dynamic allow-origin sounds magical, but is really straightforward. You just look at the `Origin` header of a request (e.g., in express `req.headers['origin']` — Node lowercases incoming header names), compare it against your database of whitelisted origins, and if it's in there, return it as the value of `Access-Control-Allow-Origin`.
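The reflection trick is only a few lines; a sketch (the whitelist and framework glue are made up):

```python
# Hypothetical whitelist of origins allowed to read API responses.
WHITELISTED_ORIGINS = {"https://app.example.com", "https://partner.example.net"}

def cors_headers(request_headers):
    """Reflect the Origin back only if whitelisted; otherwise send no CORS
    headers at all, and the browser keeps the response unreadable."""
    origin = request_headers.get("Origin")
    if origin in WHITELISTED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            # Tell caches not to serve one origin's reflected header to another.
            "Vary": "Origin",
        }
    return {}
```

The `Vary: Origin` line matters if anything caches your responses, since the header now differs per requester.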

If you don't have any relationship with the folks using your frontend, I'd just "turn it off", that is, use "Access-Control-Allow-Origin: *". It's a security issue only insofar as you don't trust the third party that owns the web frontend to handle their users' data securely, either by introducing their own security vulnerabilities, or by hijacking users' sessions themselves. The big question I think is whether the third party's users are your users too, in which case you're responsible to vet the third party to protect your users. If you're just a backend for whatever-the-heck, just make sure you have a good terms of service for the api so you're not assuming responsibility for other people's mistakes/malice.


It's not fair to call Dynamic Allow-Origin simple. This is super tricky and nonstandard usage that is only necessary to work around the fact that browsers do not support multiple values in the Allow-Origin header even though the spec allows it.

That said, yes, when you want to allow multiple origins, reflecting the Origin request header in the Allow-Origin response header is the only solution that works. (Note however, that sometimes the Origin header is not present, an additional difficulty.)


Realistically, Dynamic Allow-Origin is the only way when you want to add more than a few allowed Origins. You wouldn't want to send back a 10kb header detailing all the clients of your service, would you?

Call it a bug in the spec if you want, but regardless the spec provides no guidance about whether reflecting the Origin is a good workaround.

You shouldn't reflect the origin unless it matches your whitelist. If you wanted to allow all you would just use *. If it's not allowed you should return invalid headers instead. That's why it's dynamic

The single-value constraint seems like a feature rather than a bug; if you included your full list of whitelisted domains every time, not only would your HTTP header size be unnecessarily heavy, but you'd be leaking private details about who else is using your service. This isn't an inherent problem, but it could give an attacker some ideas of who to target.

Maybe, but the CORS spec says otherwise.

https://www.w3.org/TR/cors/#access-control-allow-origin-resp...

See the note.


Interesting that the spec disagrees with the implementation. Maybe the multi-origin leakage was filed as a bug somewhere and fixed post-spec?

Yes, so the users will be our users. The users won't have their own users per se, but will be able to customize the app. So if I understand correctly, I can turn off CORS, have each user that wants their own backend URL enter this URL in our system, and we whitelist this URL and check for it when a request is made?

Yes, though you wouldn't turn it off, you would put their domain in the header the response sends back to the client.

Is your frontend a static webapp that communicates directly from the browser to your backend, or another server that talks to your backend?

Yes, the frontend is a Next.js app and the backend is a GraphQL server.

Depends what you are trying to do. If you are offering an API for third parties to use, Access-Control-Allow-Origin: * is pretty appropriate. It basically communicates "I don't care where you call me from". Same with content on a CDN (fonts, css, etc.). Of course serving executable content like that is a bad idea; but, even so, people seem to trust certain domains with things like jquery.

With images, fonts, or other static content the risk is arguably fairly limited, except for the fact that browsers tend to have all sorts of nasty bugs with malicious content overflowing buffers, thus leading to arbitrary code execution.

But of course the elephant in the room is that doing things with headers adds a lot of friction for developers. Now their simple application has a devops component. E.g. modifying nginx to add that header or trying to make AWS CloudFront forward CORS headers is just a royal pain in the ass. It's also very fragile because these things break easily and are rarely covered by integration tests because these things tend to be specific to infrastructure and environment.


| Also, the author makes a good point that `Access-Control-Allow-Origin: *` is pretty dangerous.

But what is the alternative when you have a script that is going to be deployed on multiple sites that you do not control? Which origin do you specify? This is the scenario which always trips me up and results in kludgy workarounds.


I think the generally accepted solution to this is to set the allowed origin dynamically (IIRC nginx can do this) by looking at the request host header on the options request. If the origin is in some allowed list then you return that origin in `Access-Control-Allow-Origin`

I _think_ that is the appropriate use for `Access-Control-Allow-Origin: *`.

It would be up to you to ensure that only the URLs for such scripts (not your entire site) have `Access-Control-Allow-Origin: *`, and to make sure that there is nothing malicious JS can do with `Access-Control-Allow-Origin: *` at those particular URLs.

Which is confusing to figure out, it's true, because the whole thing is confusing, indeed.


Could you give a realistic example where it wouldn't lead you to have a bigger problem on your hands? I think that'd be extremely helpful.

The company I work for provides embeddable JavaScript "web widgets". Our customers are companies, and their customers are consumers (we are business to business to consumer company). We host somewhat personal data specific to those consumers, and as such, the data should only be accessible by those consumers.

So, we provide the data API that these web widgets communicate with, and the web widgets themselves can be embedded on our customers' own websites (some-company.com). However, the customer can decide exactly which hostnames can embed the widget (through a control panel we provide), and these hostnames ultimately become the `Access-Control-Allow-Origin` value we provide with every API response. Perhaps they want it on foo.some-company.com or bar.some-company.com -- it's really up to our customer where they want these widgets to be embedded.

By doing this, the customer knows that no other website can host these widgets, thus exposing their consumers' data to a phishing attack as I outline here: https://news.ycombinator.com/item?id=20405169


The reason the developers didn't use CORS is because active mixed content isn't allowed. You simply cannot call 'http://localhost' from an https domain. For a medical SaaS application which needed to communicate via USB, I created a self-signed certificate during installation and added it to the trust store to be able to call active content on localhost. Previously, we used NPAPI (which, when used correctly, was more secure than the current plugin architecture), but that got deprecated.

Ask yourself why many developers need this path. The reason is that creating plugins for various web-browsers is a very expensive ordeal, with Firefox and Chrome having a completely different ecosystem. Moreover, in my opinion the plugin ecosystem is less secure than having good engineers take the localhost path (with challenge-response token authentication and origin hostname validation, obviously).
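As a sketch of that "good engineers" localhost path (all names, headers, and mechanics here are my own assumptions, not any vendor's actual protocol): provision a shared secret at install time, then require both a matching Origin and a valid HMAC on every request:

```python
import hashlib
import hmac
import secrets

# Hypothetical: the installer provisions a shared secret between the web app
# and the localhost helper; every browser request must carry the expected
# Origin plus a valid HMAC over the request body.
SHARED_SECRET = secrets.token_bytes(32)          # generated at install time
TRUSTED_ORIGIN = "https://medical-saas.example"  # hypothetical companion site

def sign(body: bytes) -> str:
    """Signature the trusted web app attaches to each request."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify_request(headers: dict, body: bytes) -> bool:
    """Reject wrong-origin or unsigned requests to the localhost server."""
    if headers.get("Origin") != TRUSTED_ORIGIN:  # origin hostname validation
        return False
    # challenge/token check, constant-time comparison
    return hmac.compare_digest(headers.get("X-Signature", ""), sign(body))
```

A real implementation would also bind the signature to a server-issued nonce to prevent replay, but the sketch shows the two checks named above.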


This is discussed in the post text, but happy to elaborate in more detail here! Here are the patch links from Firefox[0] and from Chrome[1], which specify that active mixed content policies do not apply to localhost, because the w3c specification was updated to specifically allow this behaviour[2]. You might have to use 127.0.0.1 directly. So yes, it is allowed.

If for some reason that doesn't work for your app, the post also mentions two secure alternatives: the native client can install a self-signed cert, or you can use a browser extension with the native messaging API. You touched on this too!

My point here is not the specific example solution -- there's lots of alternatives and yes it depends on the situation and browser support requirements. But NONE of them are a reason to allow requests from every origin.

[0]: https://bugzilla.mozilla.org/show_bug.cgi?id=903966

[1]: https://chromium.googlesource.com/chromium/src.git/+/130ee68...

[2]: https://github.com/w3c/webappsec-mixed-content/commit/349501...


I see it is fixed now. I thought up this work-around back in 2014, when NPAPI was phased out. Spotify and another large player came up with similar solutions.

Totally. I do get the constraints and I imagine Zoom had similar thinking here. The image approach, although hacky, can even be secure as-is if they need it for backward-compatibility but it needs to check the headers and not honour requests with the wrong value.

I kind of like the ingenuity of the approach. Never occurred to me to use non-active content.

It seems that the Chromium patches are from 2016, so presumably this would still break older Chrome users?

2016 corresponds to Chrome versions in the 50s, and this[1] claims that versions <67 have single-digit usage share.

[1] https://www.w3schools.com/browsers/browsers_chrome.asp


I think that Chrome allows calling http://localhost from https domain. Other browsers should fix that instead of every application installing their custom certificates into trust store.

Installing trust store certificates for these purposes is asking for trouble. Sure, if the private key is unique per installation it theoretically should be fine, but in reality it can be hard to gather enough entropy to be satisfiably unpredictable, and it's very easy to get this wrong. It's worth noting that there isn't a solid way to limit your CA certificate in the trust-store to the domains you intend to use it with https://security.stackexchange.com/questions/31376/can-i-res...

> in reality it can be hard to gather enough entropy to be satisfiably unpredictable

I think this is a myth, especially on desktop/laptop PCs - a few hundred bytes of entropy at system boot initialize the kernel CSPRNG, which can then generate countless gigabytes of cryptographic-quality randomness on demand.


Can't you just install a specific self-signed certificate for a single domain instead of a CA certificate?

Yes, you can and you should do that.

Sorry, I might initially have caused this mess, thinking browsers would fix it sooner than later.

Well, you need your software to work now, hard to blame you :)

I thought about this problem and there are two workarounds. The first workaround is to get an agreement with a major CA who would allow you to issue valid certificates for users. So it's like: the user installs your software, generates a private key, and you generate a signed certificate for that key on your server. I think that Plex does that, but it's probably an extremely hard and fragile scheme. The second workaround would be to proxy traffic from your localhost server to your remote server. The remote server would present a valid certificate for something like local.yourcompany.com and would decrypt traffic and translate it back to your localhost server. Same with the response. So you're doing encryption with the remote server and never leak your private key. I'm not sure if a CA would be happy with that implementation, but technically I believe it's not a key compromise.


The first workaround also crossed my mind, but it had a couple of drawbacks. First, it required contractual work with a CA and they can easily say: it's not our problem, it is the browser. The amount of time required to set this up would be around a year, maybe more. Also, like you said, it is fragile.

The second workaround I didn't think about. Do you mean we'd change the resolver to resolve 'local.yourcompany.com' to 127.0.0.1 on the local machine? That would work, but would introduce quite some extra latency and add some fragility.


Wouldn't part of this be solved by using a real domain that resolves to 127.0.0.1 instead of localhost? Then you can get a real cert and embed it in the client to make local requests work properly?

> You simply cannot call 'http://localhost' from a https domain.

That’s simply wrong. You have outdated information. Unless you don’t consider Firefox, Chrome, IE, and Edge to matter. https://chromium.googlesource.com/chromium/src.git/+/130ee68...

https://developer.microsoft.com/en-us/microsoft-edge/platfor...

Safari still has an open issue I think. But that's no surprise, since Apple has demonstrated their suckitude at software security and development in general.


This is not true. We tested this in April 2019. Chrome works with http://localhost. Firefox does not => https://bugzilla.mozilla.org/show_bug.cgi?id=1488740

Oddly, the issue was updated today after months of inactivity. Maybe they became aware of their part of the responsibility in what happened to Zoom?


I find in most discussions of CORS, developers don't understand the _threat model_ it is meant to be guarding against.

I'm not sure I do, I get confused, although I think I have in the past.

I think better docs are needed, I haven't found really good educational materials that begin with the big picture (threat model, what are we actually trying to do here, what things are we trying to guard against, what things are out of scope for CORS to try to guard against), which to me is the necessary foundation to get your head around it: Where you need it, and what you will want to make it do.


I've met many developers who thought that CORS can guard access to API endpoints, completely missing the fact that one can just use a client that doesn't follow CORS.

E.g. "let's restrict access-control to our API endpoints so it only responds to requests from our website". This is a valid use case, but it's meant to protect a web browser user. An evil website won't be able to request their profile data from our API, and CORS makes it possible to relax this protection to allow our other web property on a different domain access to this information. If someone wants to scrape the API, they can just use cURL and not care about CORS at all.
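To make that concrete, here's a self-contained sketch (the origin and data are made up): a local server sets a restrictive Allow-Origin header, and a plain HTTP client reads the response anyway, because only browsers enforce CORS:

```python
import http.server
import threading
import urllib.request

class ApiHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # This header is advice to *browsers* only; nothing enforces it here.
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", "https://our-site.example")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"profile": "not actually protected"}')

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A non-browser client (cURL, a script, a scraper) never evaluates CORS:
url = f"http://127.0.0.1:{server.server_address[1]}/profile"
body = urllib.request.urlopen(url).read()
print(body.decode())  # the data comes back regardless of the Allow-Origin value
server.shutdown()
```

Running this against a browser tab on a different origin would fail the CORS check; the script above succeeds because the check only ever happens inside the browser.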


Recently, I was working on an issue where our new frontend client was not receiving custom response headers from the server API. It turns out I had to expose the headers first using 'Access-Control-Expose-Headers' [0]. Neither I nor the frontend dev had heard of that response header before, and we thought we knew something about CORS.

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ac...
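For illustration, a simplified model of the rule that bit us (the real safelist is defined by the Fetch spec; this sketch approximates it, and the header names are examples):

```python
# Response headers that cross-origin JS can always read (approximation of the
# CORS-safelisted response-header names).
SAFELISTED = {"cache-control", "content-language", "content-type",
              "expires", "last-modified", "pragma"}

def js_readable_headers(response_headers):
    """Sketch of which response headers cross-origin JS may read."""
    exposed = {h.strip().lower() for h in
               response_headers.get("Access-Control-Expose-Headers", "").split(",")
               if h.strip()}
    return {name for name in response_headers
            if name.lower() in SAFELISTED | exposed}

# Without the Expose-Headers entry, a custom header like X-Request-Id is
# present on the wire but invisible to the frontend code.
```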


Best CORS reference is MDN [1].

Here's a piece of unsolicited advice: avoid CORS if you can. For user facing web apps, put your APIs behind a proxy. For a dev environment in which you want to mix local assets with production assets, use a proxy. In general, if you can't solve a cross origin problem with a proxy server, then you should really stop and reconsider what you're trying to do.

CORS is full of unpleasant subtleties, as many of the comments below illustrate. Different browsers implement CORS differently. Want to cache that preflight request? You'll use the header `Access-Control-Max-Age` for that. Except Chrome doesn't respect that header; the cache TTL is actually hardcoded to 10 minutes [2]. Except, according to this bug [3], Chrome will respect your cache headers starting in July.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS

[2] https://stackoverflow.com/a/23549398

[3] https://bugs.chromium.org/p/chromium/issues/detail?id=131368


Yet another explanation of CORS.

The basic security model in web browsers is the SOP: the Same-Origin Policy. To simplify, there are a number of actions that can be done within an origin (a domain, to simplify) that are prevented across origins (across domains, to simplify). CORS is a way to bridge two different origins, i.e. to allow some specific actions between two different origins.

One example: by default, XHR requests can only send a specific set of HTTP headers to a different origin. If b.com wants to accept the header X-Special-Header from a.com, it has to whitelist it - Access-Control-Allow-Headers: X-Special-Header, Access-Control-Allow-Origin: a.com. On a.com, before the web browser makes an XHR request to b.com that includes the header X-Special-Header, it has to check whether b.com authorizes it or not. It sends an OPTIONS request to b.com and checks for the HTTP response headers Access-Control-Allow-Origin and Access-Control-Allow-Headers.
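That OPTIONS handshake can be sketched as a small decision function (heavily simplified - the real preflight rules also cover methods, credentials, and more; the a.com/b.com values mirror the example above):

```python
# Simplified sketch of the browser-side check on the preflight (OPTIONS)
# response. Real browsers apply many more rules than this.
def preflight_allows(response_headers, origin, extra_header):
    """Would the browser let `origin` send `extra_header` cross-origin?"""
    allowed_origin = response_headers.get("Access-Control-Allow-Origin", "")
    allowed_headers = {h.strip().lower() for h in
                       response_headers.get("Access-Control-Allow-Headers", "").split(",")
                       if h.strip()}
    return (allowed_origin in (origin, "*")
            and extra_header.lower() in allowed_headers)

# b.com's hypothetical preflight response, whitelisting a.com and the header:
b_com_response = {
    "Access-Control-Allow-Origin": "https://a.com",
    "Access-Control-Allow-Headers": "X-Special-Header",
}
```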

Since CORS is a way to poke holes in the Same-Origin Policy security model, you should be careful with it. For example, you may allow any site to read your content, i.e. trigger a CSRF request on behalf of the user logged in to bank.com and, with Javascript, be able to read the page content, including the account number, because bank.com set CORS to authorize any remote site to do anything, basically putting bank.com in the same origin as any other domain.


The author suggests that using CORS to allow https://zoom.us with http://localhost:19421 is possible. This is factually incorrect. Mixed content policies prevent the http: origin from communicating with the https: origin.

"For very intentional reasons, the browser explicitly ignores any CORS policy for servers running on localhost."

That last sentence is incorrect – Chrome does respect CORS headers for localhost webservers.

It's only partially incorrect. In most cases, CORS will not work with localhost, at least without a self-signed certificate trusted by the browser.


This is discussed in the post text, but happy to elaborate in more detail here! Here are the patch links from Firefox[0] and from Chrome[1], which specify that active mixed content policies do not apply to localhost, because the w3c specification was updated to specifically allow this behaviour[2]. You might have to use 127.0.0.1 directly. So yes, it is possible and not factually incorrect.

If for some reason that doesn't work for your app, the post also mentions two secure alternatives: the native client can install a self-signed cert, or you can use a browser extension with the native messaging API.

[0]: https://bugzilla.mozilla.org/show_bug.cgi?id=903966

[1]: https://chromium.googlesource.com/chromium/src.git/+/130ee68...

[2]: https://github.com/w3c/webappsec-mixed-content/commit/349501...


I'm fairly certain that these patches didn't land until recently, at which point the design decisions were probably already made at Zoom. (I don't know much about Zoom, but that seems like a reasonable assumption.) Additionally, these changes aren't likely to apply to Firefox ESR or IE for a while.

In Zoom's case, I highly doubt CORS on its own was a viable solution. Maybe in 2026, a decade after the patch, sure, but in the current climate, it's not reasonable to expect that all users will be using browsers that have adopted these changes in 2019.


These are from years ago, and as mentioned there are two other alternatives discussed. There's no excuse for not checking the origin at all. They could have even used the image hack and then checked the origin.

We tested this on Firefox in April 2019.

It is still blocked.


> If for some reason that doesn't work for your app, the post also mention two secure alternatives: the native client can install a self-signed cert, or you can use a browser extension with the native messaging API.

(Also you might want to verify you were using 127.0.0.1 and that you had the headers correct)


Yes I know.

Installing a self-signed certificate is easy on Edge and Chrome. But did you ever try to do it for Firefox? Firefox has its own certificate store implementation and only native code to manipulate it. So it's easier to provide a valid certificate instead, even if it means that the private key is exposed.

Concerning the extensions, this was not practical until the recent unification under web extensions. I never tried.


Could they sign and distribute a cert for localhost.zoom.us and point the DNS at 127.0.0.1?

They could, but if they distribute the private key, which they would, the private key would be considered compromised, and the CA would be required to revoke the certificate within something like 24h of being notified with appropriate evidence (e.g. a copy of the key or a special message signed with the key).

In the case of LetsEncrypt certificates, there is even an API for this revocation.

However, given how ineffective revocation is, it unfortunately could still be a viable strategy.

The easier approach is, of course, using the fact that browsers now consider http://127.0.0.1 (and/or http://localhost) a secure origin to avoid this issue.


No, because you'd have to distribute the private key for the local webserver to be able to sign the connection challenge.

But that's just a reason why it would be a bad idea, not a reason that they couldn't do it or that it wouldn't work.

I would think that they could distribute the cert (and the key) and have it work. [Edit] Unless browsers detect that it's a local IP address behind the domain name and still consider it a special case of origin.


Plex solved this problem in pretty much the way you describe.

https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


it's not the nicest solution, but I don't see the problem with a public certificate and public private key (yeah not the most elegant wording) that is literally issued to `localhost` or `127.0.0.1` (not localhost.zoom.us because that still goes through DNS once and could be hijacked)

Rarely does an application actually need to enable CORS. If all of your web calls are from the same domain, YOU DON'T NEED CORS. (Chatbots/socket.io)

You only need CORS if you need the browser to act as a middleman to pass information back, e.g. a credit card payment IFRAME.

If you screw up CORS implementation it just means that anyone can read any information set by your website.

https://www.moesif.com/blog/technical/cors/Authoritative-Gui...


Not so rarely. It's pretty common to serve the JS frontend code on one domain and the APIs on a different domain.

At this point I don't want to understand CORS.

Put together a little script package this week, threw in some Vue and a Bootstrap and put it on our fileserver.

Now the Font Awesome icons won't load due to CORS errors.

I don't want to spin up a server for 500 lines of HTML/JS. I don't want to download the fonts, fiddle around in Bootstrap etc.

CORS is broken or the web itself has become broken if we need integrity checks and cross origin protection for using fonts.


Base64 should allow you to do it. It's logical to limit remote font loading, even more so than JS. There used to be quite a few RCE bugs involving fonts on Windows and Adobe.

Use a CDN!

So my scripts will randomly go down with the rest of the internet? Great idea!

The number one issue with CORS I've found is that merely defining it undoes a lot of defaults, and most of the top search results on how to do x in CORS don't explain what those defaults are. So you get devs breaking things trying to whitelist something not normally allowed, then deciding to cast a wide net to fix those things.

The CORS standard needs to be radically redefined from the ground up with developer ease of use as part of the consideration.


Thesis: Developers don't understand CORS. So I read it. All along, the author does not attempt to explain CORS to me. I then Google CORS, and think I do understand it already, but this guy is telling me I don't, yet completely fails at explaining why I don't. Maybe I don't understand CORS, but the author didn't help anyone with this piece. This received 508 upvotes... how?

Many beginner tutorials on web development breeze by "how to disable cors" as the most natural obvious thing to do, usually when wanting to call your own API on a subdomain. I think because it sometimes "gets in the way" and how to properly work with it requires slightly more planning than just disabling, developers fall into bad habits.

This is my high level overview of CORS.

1. The website JS is served from a domain say "website.co" to the browser when you visit it.

2. If this JS tries to make an XHR to a domain that is NOT "website.co" (not the origin of the JS), the browser first sends a preflight request (OPTIONS) asking for "guidance" from this second domain.

3. The Web Server on second domain responds with "a request" to block/allow XHR calls from JS served from certain domains.

4. The browser chooses (by default) to not make the GET/POST call if the JS domain(website.co/*) is not in "Access-Control-Allow-Origin" header.

There are other nuances but that is it really. Things to note

1. The browser enforces CORS. Not the web server. You can disable this enforcement with a flag in both Chrome and FireFox.

2. Since only browsers enforce CORS, other tools (cURL, Postman) will successfully make GET/POST requests regardless of the CORS config on the webserver.

3. If you could intercept (using a proxy) and change the headers in the response to a preflight request, you could bypass CORS in browsers.


Mostly correct, but the browser may or may not send an OPTIONS request depending on the request type, headers, and more.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...


From the title I expected a simple explanation how CORS works...my fault..

To be that guy... forget CORS. All localhost servers have a pinned public key. The zoom.us site has the private key. It passes along a signed request. The local server then validates the signature! How is that so hard?

Oh wait, I just described CSRF!! Zoom, here is your egg. Now promptly smash it on your face; thank you.


Wouldn't that allow the token to be replayed? Attack scenario:

1. Attacker installs zoom.

2. Attacker starts to join meeting foo.

3. zoom.us creates a signed request saying "join meeting foo" and gives it to Attacker.

4. Attacker takes that signed request and sends it from attacker.com to localhost inside Victim's browser.

5. Victim's zoom native app gets the request, validates the signature, and joins the meeting.

I think it can be modified to be safe if there's a key exchange between zoom.us and the native app, and zoom.us signs the key exchange with its private key. But this seems hugely overkill compared to a simple Origin check, or even compared to a traditional XSRF token (via a cookie on localhost).


One of the side effects of CORS is that you can't display most off-site images through WebGL. Displaying an off-site image through <img> is allowed. Otherwise ads would break. But you can't bring the same image into the WebGL system.

I hit this trying to display map tiles. The map tiles are on a server that doesn't send any CORS headers. A simple 2D display of the map works fine. But using a 3D library that allows rapid movement, perspective views, and flyover hits a CORS block. I can force through that with a browser add-on, but that's just for testing.

Incidentally, CORS-plugin for Firefox has a major security hole. You give it a URL to allow, but, randomly, it switches invisibly to "all URLs" and opens a security hole.


The statement is false. CORS - cross-origin resource sharing - only adds permission. CORS could allow you to display off-site images through WebGL, but it cannot prevent it.

CORB - cross origin request blocking - is what prevents you from doing it.

This sounds pedantic, but it seems that until people understand that browsers implement CORB unless a server implements CORS (or in whitelisted/legacy circumstances), they cannot understand their problems or the solutions. This is largely the fault of browser vendors, who seem to act as if everyone understands CORB and their only task is to educate you about CORS.


Maybe we don't understand CORS.

Here's a fact-based starting point to learn about CORS: https://en.wikipedia.org/wiki/Cross-origin_resource_sharing


That is not about CORS, it is about "DevOps" implemented as: let's just let developers do what they do on their local machines, so we don't spend money on Ops people and security people, because our developers are smart enough - we pay them 400k a year.

Setting up some local shizzle on a production server that is callable from the frontend... Just stop. You can have microservices that listen on a local interface, but please call them from behind an API gateway. If you have some different domains you need to use, then yeah, make subdomains? I don't know, just hire some Ops people and make them work together with the Devs.


I'm in this semi-special place in web dev where almost everything I make is for a captive audience on local networks. So it feels like there's entire segments of web dev that I've only learned to subvert. CORS is one of them.

All I really learned is that you can't make calls to services on another domain unless that service allows it via CORS.

E.g. every robot I want to websocket to has to be cool with the fact that the web application originally loaded off of (and in a way is owned by) the central server.


That is because CORS is a kludge to work around the fact that we decided to use session cookies "because developers understand it easily" instead of just re-doing everything.

See https://w3c.github.io/webappsec-cors-for-developers/#cors

> Enter Cross-Origin Resource Sharing. The challenge when designing CORS was how to enable web applications to request and receive more cross-origin permissions, without exposing existing applications to new attacks. By the time CORS arrived on the scene there were already billions of HTTP resources in use by a wide array of applications beyond the traditional browser. Many of those resources relied implicitly on the original permission table. For example, they might expose a privileged endpoint on an intranet, but check for a special HTTP request header that only a non-browser or same-origin client could set, to protect themselves from CSRF-type attacks. The design of CORS couldn’t suddenly make all those existing applications vulnerable; it needed to evolve the web platform in a backwards-compatible way.

If we had started with the assumption that you can make requests and receive responses from any site to any other site, then we would not have the confusing mess that is CORS.

> The story behind * Why does the * mode of CORS behave so differently than the credentialed mode? Given the already long history of web vulnerabilities attributable to abuse of ambient authority, there were two schools of thought about how to expand the web platform with cross-origin requests. One camp proposed extending the already-familiar cookie model to authenticate to cross-origin resources. The other camp felt that ambient authority was a mistake in the architecture of the web, and advocated for cross-origin requests free of any ambient credentials or origin-based authentication. (Such requests, they also argued, could be exempt from the Same Origin Policy entirely, as they would be equivalent to loading from an unprivileged proxy.) The XDomainRequest object in Microsoft Internet Explorer 8 and 9 retained some of this style - it had no support for credentials, only anonymous requests, although it also used the same Access-Control-[...] headers.

> The familiarity and compatibility of the cookie-based credentials model of CORS eventually won more favor with developers than re-designing their applications in a capability-security style. (https://www.w3.org/TR/capability-urls/) As a compromise, the anonymous request mode was retained as a “subset” of CORS. Unfortunately, the subtle differences in architectural style that persisted and the choice of * to represent such requests has been a consistent source of confusion.


I was hoping someone would mention the warnings that capability people had raised about the CORS proposal. Every time capability security principles have been compromised it turns out badly.

I was curious to see if there was any record of this and I found some interesting discussions from '09

https://lists.w3.org/Archives/Public/public-webapps/2009AprJ...

Very interesting to read this now.


Thanks for digging this up! The whole thread was quite long and involved a lot of head banging against my desk when I read it.

The whole pro-CORS argument essentially boiled down to "it's easier to understand when you've already internalized access lists". Capability folks then replying, "sure, but it doesn't help you understand the proper authorization contexts, and so doesn't solve the confused deputies, and in fact, hides some of them to bite you later". Basically, the same old arguments. Rinse, repeat.


Wrong conclusion. The kludge is making cross origin requests in the first place.

Maybe, but people want to write applications that can easily interact with each other in a secure way. Trying to do that when your browser implicitly authorizes you no matter what the origin of a request is makes that extraordinarily difficult to do safely. Adding in the backwards compatibility problem makes it even worse because any solution is going to have to work with the existing methods.

What's the alternative? The webserver requests the third party resources on your behalf and repackages them into its own response?

You should be able to make REST requests anywhere, without caring about origin.

Cross origin requests are legitimate in some use cases. I outline one here: https://news.ycombinator.com/item?id=20405275

TL;DR Hosting cross domain web widgets or customer engagement experiences like chat windows, etc


It's not just a lack of understanding but a lack of support in server tooling. I was setting up CloudFront on AWS and kept running into CORS issues. I did all the steps to set the necessary headers, but it still didn't work. I ended up just doing some layer 7 routing to route requests to /api towards the other server that was on the other domain to avoid the CORS issues, because now there is no cross domain request. It seems more secure that way anyway.

But were the headers reaching the browser or not? CORS is something strictly for the browsers to parse, so if it's reaching the browser, either the policy is wrong (and you can usually see that in the browser console, when it blocks the cross-domain request) or it's not a CORS problem.

It wasn’t reaching the browser.

Seems like you have to whitelist the headers: https://docs.aws.amazon.com/AmazonCloudFront/latest/Develope...

I did, but it still didn't work. It was extra complicated because it was coming out of API gateway calling Lambda.

But the point is, it shouldn't be that complex, and I shouldn't have to dive deep into the docs to make it work. That is as much an impediment to CORS adoption as the lack of understanding is.


How does not allowing CORS make any sense if any other application running on my system can bypass it except the browser? The technique to bypass it is to have a proxy, either local, or remote. Wouldn't it make more sense to simply allow the browser to make any request the app wants?

Are you saying that because a native app on my system can do something, a website should be able to do it too? A native app on my system can delete all my files without asking me. I don't want a website to be able to do that.

Even Cordova (which basically runs a web view on mobile) has a plugin to bypass CORS (https://hackernoon.com/a-practical-solution-for-cors-cross-o...). Are we entirely sure that the benefits of disallowing CORS by default outweigh the annoyances? Is there any study on this?

The point of CORB (i.e. cross-origin request blocking, the browser's default behavior that CORS relaxes) is effectively to stop authenticated requests from going from a browser holding multiple sets of credentials to a protected server on the basis of a request produced by a third party.

In other words, it's to stop the developer of fakegoogle.com using your web browser to access google.com as if they are you.

Therefore, if you've written an app which will only ever contain cookies and authentications you've permitted, and only ever access servers that you've specified, then yeah, sure, CORB is irrelevant and you can safely ignore it. You and the malicious coder can both write code that runs on Android or iOS safe in the knowledge that there's an absolute sandbox between them - nothing the malicious coder ever does will ever leak your user's secrets to your server.

Likewise, if you're a malicious coder and you convince the user to give you their legit google.com secrets, you can safely send them wherever you want.

If you're writing a web browser equivalent app, that will run arbitrary code from untrusted third parties and store and release private or secret information, CORB makes sense and you should pay attention to the CORS headers.

CORB is solving a problem inherent in web browsers - they run untrusted code and code can cause your secrets to go to your server. It is annoying to you as a web app writer in the same way that locked doors are annoying to plumbers. It would be so much easier for the plumber if they could just come around to my house whenever it's convenient. And it doesn't solve any problem the plumber has (if some random steals all your gold, how is that the plumbers' problem?). Inside your house you don't lock every door. But you do lock the doors that separate trusted and untrusted people, and no amount of difficulty to plumbers will change that.
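A server can also enforce the same idea itself by refusing to act on authenticated, state-changing requests from origins it doesn't trust, since browsers attach an Origin header on cross-origin (and most non-GET) requests. A rough sketch, with a hypothetical trusted-origin set:

```python
# Sketch of a server-side Origin check, the defense discussed above: refuse
# to act on a credentialed request that a third-party page triggered.
# The trusted-origins set is illustrative.

def should_accept(origin_header, trusted_origins):
    """Accept a state-changing request only from a trusted Origin.
    Conservative default: no Origin header means no authenticated action."""
    if origin_header is None:
        return False
    return origin_header in trusted_origins
```

This is the server-side mirror of what the browser does for you; fakegoogle.com can still make your browser send the request, but the server declines to honor it.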


That's certainly true, you are making the leap of faith that your customers are running standard compliant browsers, which, all considered, is true. All the techniques you mention require you to break either the customers or some element along the network chain to the server, and once you do, well, CORS is the least of your problems.

Yes. CORS is not so much a protection of your site against malicious user-agents;

It is a protection of your site's users, using good non-malicious browsers, against malicious other JS on the web.

I think this is one of the most basic misunderstood things about CORS.

Once you understand that, you can start actually trying to understand the threat model... which is still pretty confusing, to me anyway. But until you are there, you haven't even started.


If a native app can request any resource, why shouldn't a web app be able to do that too? Does it matter whether that app runs in my browser, Electron (which can easily bypass CORS policies), or whatever other runtime? If it does, then CORS restrictions should be put in place at the OS level; if it doesn't, they should be removed altogether. There are a ton of other ways to make sure you don't "give your stuff" to unauthorized parties: authentication, CSRF tokens, hash validation.

Seriously, if it gets to the point of having to write "{Class of professionals} don't understand {Obscure restriction that doesn't make much sense nowadays}" it's likely not a security issue anymore, it's a UX issue (with the developers being the users). At the very least prompt the user to (dis)allow CORS when a request is being made, similarly to how the user is warned when running an unsigned executable on macOS.


A web browser is different than a native app, because simply clicking a link can execute code from an untrusted party.

First of all, when you are using a web browser, if it wasn't for cross-origin request restrictions, code executing on one site (or web app) would be able to _use credentials stored in cookies_ by another site altogether, because all these web apps exist in the same browser context. Native apps are all separate: code running in Native App A can't say "make a request to Facebook using the credentials the user already logged in with in the Facebook app." But Web App A could do exactly that with the credentials the user logged into on facebook.com -- if it wasn't for cross-origin request restrictions, which CORS then lets some sites carefully opt out of.

Secondly, when you download an app, you are trusting the developers of that app.

You navigate the web, you are trusting every single site you visit (to also not have their own code injection or other vulnerabilities to which web pages/apps are particularly vulnerable to), and most users have no idea what sites they are visiting, they are just clicking links.

What we really need is a clearer explanation of the CORS threat model, with examples. I think we'd all be clearer about what it's for and why we need it if we understood it better.

It is definitely a developer UX issue, but it is sadly one that is baked into the web for legacy reasons.
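The cookie-riding scenario described above is also exactly what CSRF tokens defend against when sites need a second layer beyond the browser's restrictions. A minimal sketch of the synchronizer-token pattern; the session dict and function names are hypothetical, standing in for whatever session store a framework provides:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Store a random per-session token; same-origin pages embed it in
    forms or request headers."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted):
    """Only a page that could read the token (i.e. a same-origin page)
    can echo it back; a forged cross-site request can't."""
    expected = session.get("csrf_token")
    return (expected is not None and submitted is not None
            and hmac.compare_digest(expected, submitted))
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking the token through timing differences.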


Browsers are ubiquitous, and they run third party code. That would be a deadly combination for anyone hoping to launch a DDOS attack if CORS protections didn't exist.

The same could be said about operating systems. In fact, if you just replace "Browsers" with "Operating systems", your sentence still holds true.

Not really. Third party code that you haven't installed does not run in an operating system at the click of a button. It does in the browser.

By the logic you're outlining there's no reason to sandbox a browser at all, since it's no different to an operating system. Experience suggests that would not be wise.


The default cross-origin blocking behavior is useful for many things, such as preventing data exfiltration via XSS vulnerabilities. CORS is a way to relax this behavior.

CORS is a minefield. It doesn't help that the RFC is incredibly dense. I may be totally missing something, but I've repeatedly wondered whether CORS still has its place in the age of SPAs that exclusively talk to a REST API via XHR + JavaScript + some sort of auth token.

Consider this argument: CORS basically ensures that browser and servers cooperate to protect end users from a "smart" user agent that happily throws around cookies and/or auth tokens. Remove cookies and basic auth headers, what's the point of CORS?

Happy to hear thoughts from folks more knowledgeable in web security than me. If not, let's please get together to propose an RFC for the CORS-GTFO header (a server header indicating that a browser does not have to do wasteful preflights etc.)
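For what it's worth, the spec already has a knob for the wasteful-preflight complaint: `Access-Control-Max-Age` lets the browser cache a preflight result and skip repeats. A sketch of answering the OPTIONS preflight; the function shape and status codes are illustrative, not any framework's API:

```python
# Sketch of CORS preflight (OPTIONS) handling. A long Access-Control-Max-Age
# lets the browser cache the verdict instead of re-preflighting every call.

def handle_preflight(origin, requested_method, allowed_origin, allowed_methods):
    """Return (status, headers) for a preflight request."""
    if origin != allowed_origin or requested_method not in allowed_methods:
        return 403, {}
    return 204, {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(sorted(allowed_methods)),
        "Access-Control-Max-Age": "86400",  # cache the preflight for a day
        "Vary": "Origin",
    }
```

Browsers cap how long they honor `Access-Control-Max-Age`, so this reduces preflights rather than eliminating them.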


> Remove cookies and basic auth headers, what's the point of CORS?

The ability to communicate with a domain other than the one your app is running on. For instance, if my site on www.example.com wants to send a POST to www.example2.com, I need CORS. example2.com needs CORS to specify that only example.com is allowed to send a POST to it, not anyoldaddress.com. example2.com could look at the Origin header and refuse connections, but it would be vulnerable to DDOS attacks.

Now, if your SPA is entirely self-contained and self-hosted then no, you don't need CORS. But there are plenty of situations where that isn't the case.


I don't think the act of sending a simple POST is blocked by CORS; receiving the POSTed response data is, though. (A preflighted POST is different: if the preflight fails, the POST itself never goes out.)

Yup, I have a signup form on our website, and that form makes an AJAX call to check whether the email is in use. Without CORS I would have to use JSONP or some other workaround.
