Developers don't understand CORS (fosterelli.co)
931 points by chrisfosterelli 38 days ago | 355 comments



Security practices that are hard to use correctly are often misused or not used at all, which can be worse than if they weren't there at all (because of the false sense of security). On this axis, and on the axis of adoption, CORS is a huge failure (despite the fact that, when used correctly, it does increase security).

The crypto security space has the same issues: hard-to-implement standards and impossible-to-configure libraries make misuse, or overly permissive settings chosen out of frustration, really likely, and that is dangerous.

I don't know if there is/was a reasonable alternative (there probably isn't really), but I guess at this point the only thing to do is to write better documentation/guides on how to use CORS.


An annoying thing with CORS is that one cannot allow all paths for an origin. I.e. there will be a pre-flight request for every new path a REST API call hits, which makes preflight response caching useless.

Apparently this was to work around some bug in Microsoft's IIS server. But it has also greatly increased the performance hit from CORS under some common usage scenarios, which is sad.

http://lists.w3.org/Archives/Public/public-appformats/2008Ma...


I sort of get it, and yet even when I'm sure I've configured it correctly, I'll hit roadblocks in Safari, for example, when it works everywhere else as expected. It is such a frustrating experience.


Yeah, I'm dealing with a CORS issue with Safari right now. On an XHR with a custom header over HTTPS, Safari fails with a "protocol error" rather than running the pre-flight request. The same query without the custom header works well. Every other browser (IE, Edge, Chrome, Firefox) handles it well and does the OPTIONS call to ensure I allow this header. If anybody has a solution for that, I'm all ears :)


I'm dealing with frustrating CORS issues right now. We have a CDN intercepting outgoing requests for a requested resource, and nobody here can seem to figure out how to get it to keep the allowed headers for OPTIONS method requests.


Sounds like something that's going to have to be configured on your CDN. I'm assuming you already checked with them/their docs?


My point of view as a SQL developer reading an article about something front-end out of curiosity:

Yet another article about some <acronym>, ranting about usage of and misconceptions about <acronym>, without even caring to explain in the first 20 paragraphs what the hell <acronym> means or is supposed to achieve in the first place.

You are free to re-use and adapt this sentence for each new <acronym> like feature trending at any point in time.

PS: My bad, it was actually described in the last sentence of the citation inserted in the third paragraph.


Not a web dev. Can anyone tell me what CORS is ultimately used for? Not the literal technical details, but at a higher level: what does it enable sites to do? And is that for them or their visitors?


Also not a web dev, but if I'm remembering it correctly (someone please correct me if I'm wrong), the way I understood it was:

Whenever your browser sends any request to any site, it sends the cookies(/other auth data) associated with that site along with it. In other words, cookies are fundamentally just associated with the receiver, not the sender. So the "solution" is for the receiver to block requests from the wrong sender, since otherwise any site could send authenticated requests to any random site. [Edit in response to comment below: I should've mentioned more here, but I understand what happened was browsers introduced the Same-Origin Policy to prevent this from happening, and introduced CORS as a dynamic bypass mechanism for that, which, unless you implement OPTIONS, can get you these half-baked insecure situations where requests still get sent and executed, but the client JS doesn't get to see the responses.]

To me this whole mess is stupid because the premise shouldn't be true in the first place (why the heck should a cross-origin request send auth data? cookies etc. should be "contained" to whatever domains the original site restricted them to), but that's apparently The Way Things Are, and so here we are: browsers do something completely unexpected and insecure, and we blame devs for getting caught off-guard and not protecting against a security hole browsers introduce.

(Yes, I have Opinions on this. Please tell me exactly where I'm wrong, because I suspect I might be, but I have yet to figure it out.)


I'm pretty sure you're incorrect - requests to sites different from the one you are currently on will not include auth data by default because of the same-origin policy. The auth data would only be included if the server that the request is being sent to responds with an Access-Control-Allow-Origin header whose value matches the origin the request was made from.


I think we're agreeing? Like using CORS headers is a mechanism used to bypass SOP when needed, which was implemented because HTTP handles cookies in a stupid way and now nobody wants to change how that's done, right? Like neither CORS nor SOP should've been necessary in the first place had cookies explicitly included the allowed senders instead of just the receivers.

(P.S. the way you phrased the request handling would violate causality, so I'm assuming you were referring to the OPTIONS check beforehand...)


> I'm pretty sure you're incorrect - requests to sites different from the one you are currently on will not include auth data by default because of the same-origin policy

Not necessarily true. You can have Javascript running on evil.com that submits a form to banking.com and it will send your session cookies for banking.com along with the request. It's just that evil.com can't read the response content.


I thought the browser at evil.com would first send a preflight request without the cookies, but then not send the full request when it doesn't get the correct value for Access-Control-Allow-Origin in the response from banking.com.

I'll admit I may be one of the developers that doesn't understand CORS...


CORS doesn't apply to navigations by default, so anaphor is correct: evil.com can just submit a form to banking.com and it will send the banking.com cookies. This is the whole field of CSRF (cross-site request forgery) mitigation.

CORS was introduced as a way to allow requests that browsers used to not allow at all: things like cross-site XHR in the first instance. Then it was expanded so that requests that are not normally subject to CORS checks (image loads, script loads, stylesheet loads) could opt-in to being subject to them, for various reasons. The default for those loads is still "no CORS". And there still isn't a way to do a navigation subject to a CORS check, even with opt-in.
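
To make those defaults concrete, here is a small sketch with hypothetical URLs: a cross-site fetch is subject to CORS out of the box, while an image load is not unless it opts in.

    // Hypothetical URLs: fetch is subject to CORS by default, so the response is
    // only readable if the server sends an Access-Control-Allow-Origin header.
    fetch('https://api.example.com/data')
      .then(r => r.json())
      .catch(err => console.error('blocked by CORS:', err));

    // An <img> load is not subject to CORS unless it opts in:
    const img = new Image();
    img.crossOrigin = 'anonymous';   // opt this load in to CORS checks
    img.src = 'https://cdn.example.com/pic.png';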

Disclaimer: I work on Gecko and I've reviewed/implemented parts of the CORS spec.


Is my take on this also generally correct? That it wouldn't have been a problem had cookies and such been designed to take into account the origin properly (and hence why it's unintuitive and catches people off-guard)?


If cookies were scoped to the (source, target) pair, then that would remove one of the main motivations for CORS, yes: evil.com would not be able to get any information from bank.com by making your browser make the request that they could not get by doing the request server-side.

There's a second problem CORS kinda tries to solve, which is the ambient authority problem: services that run behind firewalls and assume that if someone can reach them, that someone should have access. If someone runs a browser behind the firewall and opens a page on the unsafe side of the firewall, that page can then issue network requests from the browser and thus end up accessing things on the "safe" side of the firewall.

This is a large part of why CORS has the whole preflight complication and the rules around when preflights happen: the idea is that in this situation just making the request, without even receiving a response, is potentially damaging. There are carve-outs for requests that could be generated without things like XHR that are subject to CORS (e.g. by doing a form submission or <img> load or whatnot); if your ambient-authority-using server responds in interesting ways to those, CORS is not going to help you...

The _right_ fix for this stuff, of course, is for services to stop using ambient authority and/or for browsers to block requests from public sites to private IPs. Unfortunately, in practice detecting "private IPs" reliably is not trivial, because fundamentally it depends on the routing and firewall topology, which the browser doesn't really know about.


Thank you for the reply! I didn't realize this actually gets to another question I've had in another context, which is: why can't a browser at least assume 192.168.0.0/16 etc. are private networks and block requests from nominally-public origins to those addresses? (Or do they do that already?) This should be possible without needing to detect anything at all, right?


Pretty sure some tools like NoScript already do that, but to protect against DNS rebinding (which can subvert the SOP), not as a protection against cross-origin requests coming from a public site to an internal one.

https://en.wikipedia.org/wiki/DNS_rebinding

> The NoScript extension for Firefox includes ABE, a firewall-like feature inside the browser which in its default configuration prevents attacks on the local network by preventing external webpages from accessing local IP addresses.


Yeah but why shouldn't the browser do this itself to provide the cross-origin protection?


I think just because they're afraid of breaking things? I'm honestly not sure what the rationale was for not implementing this. They do DNS pinning which helps mitigate DNS rebinding attacks, but I don't think they do anything specifically to restrict access to internally routable IPs.

They do block certain ports that are known to be problematic (25, 6667, 5222, etc)


It's so stupid. Makes me want to just fork the browsers and add blatantly obvious protections like this if I can find the time...


If you do that, I would recommend looking at writing an extension first to see if there's any way to do it without a fork (maybe using the same technique that things like ublock origin have)

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...


Yeah you can almost certainly do it with an extension, but half the point of a fork would be to get the message across that they need to get their act together. (I almost certainly won't get around to it though, so this is just daydreaming.)


There turn out to be a bunch of complications when one tries to do that, if the goal is to not break existing legitimate uses.

I haven't been following this closely, but for Firefox https://bugzilla.mozilla.org/show_bug.cgi?id=354493 is the relevant bug, with some (failed) attempts to do that.


Interesting. It seems the "legitimate" uses are corporate? Seems like providing a group policy or config option to disable protection against this would be the sensible way to go, rather than increasing the attack surface of home users just because some corporate users do weird things. Although honestly major software vendors are already happy breaking so many things in the name of security that this one could just be another one on top...


I think you could implement that with CORS + CSRF tokens (blocking all types of Cross-Origin requests) but the default behaviour is to allow Cross-Origin writes (e.g. with form submissions).

See https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...

This is why we have CSRF tokens. So that you can effectively block Cross-Origin writes.
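
A minimal sketch of that server-side check (Node's built-in http module; the token handling is hypothetical):

    // Only accept writes that echo back a token the server previously embedded in
    // its own pages; a cross-site form on evil.com has no way to know it.
    const http = require('http');
    const EXPECTED_TOKEN = 'per-session-random-value';

    http.createServer((req, res) => {
      if (req.method === 'POST' && req.headers['x-csrf-token'] !== EXPECTED_TOKEN) {
        res.statusCode = 403;
        res.end('CSRF check failed');
        return;
      }
      res.end('ok');
    }).listen(3000);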


I see what you mean, evil.com could still make a request that includes cookies, which is why we need CSRF tokens. But from my understanding, it wouldn't be able to do that in a XMLHttpRequest hidden on the page. It would have to be a request from something like submitting a form which would navigate the user off the page. Is that correct? Of course it doesn't make much difference from a security perspective.


You can work around that by submitting it within an invisible iframe element, e.g. https://stackoverflow.com/a/17953761/903589

But yeah, you can't just make arbitrary requests like this with XHR


I think pixelperfect is talking about preflights.


That makes more sense then. Yes, it will make sure not to send authentication data in that case.


If you use fetch or XHR, yes.

If you just create an HTML form with the bank as action, fill in the inputs and submit via JS, no, because that's _navigating away_. This is also why GET requests that perform actions are dangerous, you can just embed them as an image to provoke a request. Which is exactly how zoom built their API. (they also then measured the size of the image to determine the response code, which is.... inventive)
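
A sketch of that pattern (hypothetical target URL): building and submitting a form from JS is a navigation, so no CORS check ever happens and the target site's cookies ride along.

    // Hypothetical target: submitting a form via JS is a navigation, so CORS never
    // applies and the browser attaches bank.example's cookies to the request.
    const form = document.createElement('form');
    form.method = 'POST';
    form.action = 'https://bank.example/transfer';
    const amount = document.createElement('input');
    amount.name = 'amount';
    amount.value = '1000';
    form.appendChild(amount);
    document.body.appendChild(form);
    form.submit();

This is exactly what CSRF tokens and SameSite cookies are meant to defuse.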


So, it depends on the request type: GET vs POST and others.

Before the client makes the request for data, it does a preflight request via OPTIONS to determine what is allowable (is this domain allowed, is this type of call allowed).

If it is allowed, it can send and request cookies on the requested domain under the current standard.

However, this is not supported in older browsers, so there the request will just work.

In newer browsers, without a passing preflight check, the server still receives and replies to the request, but the reply is 'ignored' by the browser and an error is thrown.


I'm aware of OPTIONS but it still seems like the same exact stupid browser security hole (edit: or perhaps I should say HTTP protocol flaw?) being half-patched on the server side. Like, I'm saying that -- independent of the HTTP method -- there should be no communication of privileged information in the first place by default. If a website really wants other arbitrary websites to send e.g. a cookie along, then there should be a way to mark that cookie as such at the time that it is originally set, not having it checked post-facto. It sounds like the only reason this is done is backward-compatibility?


Definitely not disagreeing.

The server still sees the request, so the data can be exfiltrated.

In terms of backwards compatibility, it is actually the opposite. Newer browsers will block stuff that worked in older versions.


You can mark a cookie as samesite:

https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...

(as you mentioned, backwards compatibility requires that this is opt-in when the cookie is set, not opt-out)
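
For example, a minimal sketch of setting such a cookie (Node's built-in http module):

    // Mark the session cookie SameSite so browsers stop attaching it to
    // cross-site requests (possible values: Strict, Lax, or None).
    const http = require('http');

    http.createServer((req, res) => {
      res.setHeader('Set-Cookie', 'session=abc123; Secure; HttpOnly; SameSite=Lax');
      res.end('ok');
    }).listen(8080);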


Oh wow, 2017. Finally...


It prevents something like the following JavaScript on a random website (or injected by an ad network) from working:

  window.onload = () => {
    for (let x = 1; x < 10000; x++) {
      fetch('https://facebook.com/newpost?to=world&message=' +
            encodeURIComponent("My mother's face resembles an unwashed buttocks"),
            { credentials: 'include' });
    }
  };


CORS can be used to define exceptions to the same-origin-policy.

By default, the same-origin-policy doesn't allow all requests to other origins (domains/websites/ports). But if you want to allow another website to send requests to your site you can use CORS to do so.

So for example, if you have a contact form on one page (example.com) and the API for processing the submitted forms on another domain (processing.com; different origin), the receiving server tells the browsers via CORS that submissions from example.com are allowed.
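
A rough sketch of what the processing.com side might look like (Node's http module; the endpoint details are hypothetical):

    // processing.com tells browsers that pages on example.com may read
    // responses from this API.
    const http = require('http');

    http.createServer((req, res) => {
      res.setHeader('Access-Control-Allow-Origin', 'https://example.com');
      if (req.method === 'OPTIONS') {            // answer the preflight, if one is sent
        res.setHeader('Access-Control-Allow-Methods', 'POST');
        res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
        res.end();
        return;
      }
      res.end('{"status":"received"}');
    }).listen(3000);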


Simplified, CORS allows servers to tell browsers which requests to allow or disallow between different domains. CORS is what blocks malicious-site.xyz from making a request to your-bank.com APIs with your credentials.


CORS is what allows your-bank.com to tell the browser that it can allow such requests (presumably because it is safe to do so).


So, javascript requests made from one site to another would be allowed/prevented from returning the information to the browser session.

It can still be abused to exfiltrate data.


In a nutshell, because of CORS, one domain cannot communicate with another domain. The domain being communicated with will automatically reject requests, unless the server is set up to specifically allow the request to go through by setting CORS headers to whitelist specific domains or all domains (a bad idea).


So, that is not my understanding, but I could definitely be wrong.

So, the one domain will attempt to communicate, the other domain will receive the request and return a response.

If the client doesn’t ‘like’ the response, it will error.


This only holds for idempotent requests (so in practice GET). Non-idempotent requests are "preflighted" and don't happen at all when the policy does not allow them.

So in fact CORS without any server-side configuration allows one to do arbitrary GET requests with cookies and authentication through XHR, but you could always just dynamically create an IMG tag (as in the Zoom kludge) or a SCRIPT tag (whose response will be executed in the page context, which is a huge security risk and is also how JSON-P works).

In other words: if your application does something security-sensitive (i.e. writes to anything other than a log file) as part of a GET request, apart from producing the response, then your application is broken, and neither the same-origin policy nor CORS has anything to do with that.
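
The JSON-P pattern in a nutshell (hypothetical endpoint): the "response" is a script that calls back into the page, so CORS never enters the picture.

    // The response is a script that calls a function in the page, so it is never
    // subject to CORS, and it runs with full access to the page's context.
    function handleFeed(data) {
      console.log('got', data);
    }
    const s = document.createElement('script');
    s.src = 'https://feeds.example.com/items?callback=handleFeed';
    document.head.appendChild(s);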


Not quite, POST requests with standard headers and one of the following content types are not preflighted:

application/x-www-form-urlencoded

multipart/form-data

text/plain

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...
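
Concretely (hypothetical endpoint), this is the difference in practice:

    // A POST with a "simple" content type goes out without a preflight...
    fetch('https://api.example.com/submit', {
      method: 'POST',
      headers: { 'Content-Type': 'text/plain' },
      body: 'hello'
    });

    // ...while a JSON body (or any custom header) triggers an OPTIONS preflight first.
    fetch('https://api.example.com/submit', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ hello: true })
    });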


tl;dr: CORS is access control for cross-website requests. It's what stops HN from making requests to Facebook via your browser and, by extension, with your cookies and currently logged-in account.


Just tried, and I get CORS errors when trying to access localhost in Firefox, also on images via XMLHttpRequest ...

Edit: It works with image elements!

    img = document.createElement("img")
    img.src = "http://localhost:1337/service"
Btw, CORS is a PITA when doing pure web apps, i.e. apps that don't require a server. I recently made an RSS reader web app, but had to use a CORS proxy.

I wonder if there is any way I can use an image element to bypass CORS and read RSS xml that way !?


>> I wonder if there is any way I can use an image element to bypass CORS and read RSS xml that way !?

The article in question says they used the image dimensions to encode the error codes returned by the server when making a request.

I can imagine you could simply encode the payload in the dimensions of the image.


I was thinking about using img.src to replace XMLHttpRequest and fetch, so that I can fetch sites that I do not own, e.g. RSS feeds.


So how would you receive the data, then?


A little shortcut for this is:

    new Image(1,1).src = 'http://localhost:1337/service'


> The webserver listening on localhost:19421 should [...] set a[n] Access-Control-Allow-Origin header with the value https://zoom.us. This will ensure that only Javascript running on the zoom.us domain can talk to the localhost webserver.

If you create a single-purpose webserver, you can really handle this better than asking the client to play nice via an HTTP header.
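
A sketch of that approach (Node's http module, using the port mentioned in the article): the server rejects requests itself instead of trusting the browser to enforce a header.

    // A single-purpose localhost server that drops requests whose Origin header
    // isn't the one it expects.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.headers.origin !== 'https://zoom.us') {
        res.statusCode = 403;
        res.end('Forbidden');
        return;
      }
      res.setHeader('Access-Control-Allow-Origin', 'https://zoom.us');
      res.end('ok');
    }).listen(19421);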


From what I see, the author does not understand CORS either! He blames people but can't even explain CORS in his own words.


Zoom does not need CORS headers to fix the vulnerability; rather, it should make the web server running on localhost drop requests with the wrong Origin header.


Hmmm.... never thought about this but now thinking about it... does anyone need CORS? Can server-enforced access controls on "Origin" substitute for CORS in all cases?


If you need access to the user's data on another service, without having their credentials on the other service, CORS is the correct choice.

> Servers can also notify clients whether "credentials" (including Cookies and HTTP Authentication data) should be sent with requests.

https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS


The server's responsibility in CORS is also about making sure the browser accepts the cross-origin request (in case it's a valid origin). If it doesn't send the correct headers back, the browser will drop the request.

Yes, the server can drop the request or return an error, but that's just half of it. If it wants the browser to accept the request, it has to explicitly say so by using the CORS headers.


Ah, right. But why not have browsers allow cross-origin requests, but let an individual server just deny it based on origin header?

I guess because we don't trust servers to do that, we need "don't allow by default, let the server opt-in" instead of "allow it by default, let the server opt-out", because too many servers would not opt out.


This article is a great illustration to me as well of how hard it is to do security right even if you're a professional, much less a developer who's mostly responsible for implementing features not building secure systems. The internet is rife with self-proclaimed experts, all offering directly contradictory advice as loudly as possible on best practices for security.


Most developers I interact with don't even understand something as fundamental as how a TCP connection works (not blaming), let alone something as abstract as CORS.

What are you going to do? Some things are just complicated, and it's hard to find good people who actually bother to understand these systems; it's what separates the best from the rest.


Bypassing it all (be careful) with no-cors?

Relatively recent Chrome debug messages about a no-cors "opaque" option suggest the client has the ability to bypass the server rules without using any kind of proxy.

Is this true? Maybe I didn't notice it got added as an option when fetch was added in addition to the old style XHR requests?


I think the advice in the article and various comments here won't work, for the simple reason that Safari does not allow AJAX requests from non-localhost origins to localhost. I suspect this is the real reason they used an image request.


If an HTTP API is GET-request-only and there is no authentication for the API (think a public weather info API), are there any risks with having "Access-Control-Allow-Origin: *" set?

I've tried to get my head around CORS a few times without much luck.


ACAO:* is appropriate for a publicly accessible, unauthenticated request.


The Firefox extension "CORS Everywhere" has let me test my web app on my computer without having to mess around with headers. This was while using PowerShell Polaris to create a website.


Would it have been so hard to spell out what CORS means? So people who don't do web development don't have to wade through the acronym soup?


Developers understand CORS; it's just that, like any security measure, it competes with the infinite list of features devs are pressured to implement.


I don't really understand how CORS adds much security-wise; it relies on the web browser behaving and respecting your policy...


It protects against attacks where the bad actor is trying to get your browser to do something you don't want. It's OK for browser security measures to rely on web browsers to implement them.


Wouldn't this be stopped by address translation by routers for normal users? Don't know much about the app itself...


One thing I never understood was why server1.mycompany.com (origin) cannot make a GET request to server2.mycompany.com?


For cases where shared hosting issues subdomains for users I reckon.


One of the most enlightening threads I've come across in a while. Thanks for the solid explanations.


The browser rejects it, sure. But what if I write my own custom non-standards-compliant browser?


Were we supposed to understand it? Voodoo programming is the solution to CORS.


CORS and preflighting, ugh, just adds more developer friction.

That's just my opinion on it.


> The webserver listening in on localhost:19421 should implement a REST API

Is this server started by Zoom? In order to implement an endpoint, they'd have to ship a server along with the client. And this server would always be running? Is this not overkill for such a small feature (being able to click a link)?


They already ship a server along with the client. It's already always running.

The author is explaining how they could have done it more securely. I think everyone here including the author would agree it's a ridiculous solution for this "feature".


Yes, it is overkill. And insecure. All that to avoid a confirmation box.


> A security guy isn't 100% correct

ALL DEVELOPERS DON'T UNDERSTAND


dude, I've been a developer for over 30 years and I haven't ever heard of CORS. Let alone understand it.


Well, CORS only became a W3C recommendation in 2014. Before that, CORS was mainly in draft mode and loosely supported by browser vendors. I do find it surprising that you've never dealt with CORS in the last five years if you've ever deployed any production-grade API or web service.


C -


Very arrogant post that doesn't explain CORS or offer any useful links for understanding CORS.


CORS is useless, unnecessary, insecure, and essentially only serves to annoy developers. Par for the course for all web tech from the last 20 years.

Another turd in the tower of shit that is JavaScript.


If attacker.com makes a request trying to read bank.com/my-bank-account-number, should attacker.com be able to do that and read the response? The same-origin policy blocks the response from being read.

Now that we've established that by default a.com cannot read a response from b.com, CORS allows b.com to relax this restriction so that a.com can read from b.com. This allows one website to communicate back and forth with the server of a different website, making certain APIs easier. I don't consider that useless.


I absolutely hate the current same-origin-policy (SOP) we have and therefore CORS. In the end, CORS is a way to work around the problems the same origin policy creates. Yes, I know there are good reasons why we have it, but in my opinion, it is the wrong solution to that problem.

I mean, the biggest problem the SOP solves is that some website could trick the browser into sending an authenticated request to your site/API. And while the SOP just kills interactions between different origins completely, I wonder why they didn't just go with not allowing the browser to include any state it got from an origin earlier when the request comes from a different origin. That way it would be possible to do requests between different origins, but without the problem of hijacked authentication.

Instead, we got this same origin policy which completely isolates different origins and makes browsers a lot less powerful than other HTTP/S clients and drives developers mad.

Edit: Feedback appreciated.


Not sure why you're getting downvoted. It's a valid point.

I think it's because people intuitively think of access control as an answer to the question "Who are you?", which means your authentication credential needs to be sent with every request to a given site.

The alternative solution is to use "capabilities" which are a way of accessing a given resource by the very fact that you possess a reference to it. E.g. the google drive feature where you can say "anyone with the link can {view,edit,comment,etc}".

The downside is obvious though: it would require everyone to adapt to this model, and rewrite all of their apps to use it instead of the session cookie model. Not gonna happen without a massive effort (see ipv6 rollout for an example of the effort required for something like this).


Thanks for the response. Yes indeed, there would be the cost of change. But if we want the web to be truly decentralized, it doesn't make much sense to disallow any cross-origin interaction by default.

After all, cross-origin requests are a normal thing on the web. The problem is that browsers make credentials available to websites that shouldn't have control over them. It is like removing all doors from a house because otherwise, the stupid neighbor would give the keys (you gave him for watering your plants) to anybody that would ask.


Seriously wondering! Why is Zoom worth $2B, and why are companies paying for per-minute plans to host meetings, when you can now host your own videoconferencing software easily on your own website or app or intranet using WebRTC: branded, with your own experience and widgets, and it can all be open source? All you need to pay for are dumb TURN servers, e.g. from Twilio. Plus it would be far more secure.

For example we built https://yang2020.app/meeting

You can have stuff like that yourself, for free, no Zoom required. It’s open source

SPECIFICALLY what does Zoom provide? People can install wordpress easily, and 30% of the Web has. Why not for videoconferencing?


This reads so much like the infamous dropbox comment.


How so? It's super easy to just download and put into any website. Look at the demo above.

It took us about 3 months of work to get all the quirks out, but anyone can do it. A developer can grab our library but if you don't know how to code, it's just a widget you get off the Internet. And it works on YOUR WEBSITE. This isn't like Dropbox because there is no desktop app.


So you're wondering why people use something that just works over something that took you (assumedly someone technical) 3 months to get all the kinks out?

How long do you think it would take a non technical team or company to get it up and running?


No. You’re just mistakenly comparing apples and trees on which apples grow.

It took us 3 months to turn WebRTC into something that “just works” on any website. The RESULT OF THAT is now available for anyone to use.

People can have it up and running in 5 minutes. On their own website.

Users have nothing to download. It just works, including on mobile web browsers.

Companies can also put it into their mobile apps.

So what is the downside again?



Can I host my own, or can I build my own? Is that platform ready to be installed and reused by other organizations?


How many people can you have in a single meeting?


It's up to WebRTC and the TURN server. It can be any number in theory. The layouts support 100 people, but each participant would have 100 peer-to-peer connections.

Perhaps one advantage of Zoom is that it rebalances things onto servers it owns, e.g. via websockets? Is there documentation on this?


Recently, I came across the so-called signaling server, which seems to be another component used with Nextcloud Talk. For Nextcloud, the default signaling server can handle only about 4 participants. So there seems to be a bit more to it than just WebRTC and TURN.


What exactly makes this limit?


I have no idea (maybe PHP). I just saw that they show a short info text in the backend and display a warning when you are having a conversation with more than 4 participants. Couldn't find any documentation on what the underlying problem is.

> An external signaling server should optionally be used for larger installations. Leave empty to use the internal signaling server.

> Please note that calls with more than 4 participants without external signaling server, participants can experience connectivity issues and cause high load on participating devices.


Probably it's just the sheer amount of connections you have to make. But it's possible to have even 10 people. Especially if you go through a central hub.



