
The blog post author doesn't understand CORS either! Their advice on how to fix the problem is wrong:

> So what would a secure implementation of this feature look like? The webserver listening in on "localhost:19421" should implement a REST API and set a "Access-Control-Allow-Origin" header with the value "https://zoom.us". This will ensure that only Javascript running on the zoom.us domain can talk to the localhost webserver.

"Access-Control-Allow-Origin" doesn't block the request from going through, it just prevents wrong-origin Javascript from accessing the response.

The original vulnerability is actually just a slight variant of cross-site request forgery (CSRF) -- "wrong site request forgery" :-)

For the localhost server to detect wrong-site requests, there are two simple options:

1. Use a CSRF token. This requires a shared secret between the localhost server and the Zoom website.

2. Check the "Origin" header. This requires that the request be an HTTP POST, which is what these requests should be anyway.

(Note: If you're determined to use some CORS machinery, you could build something more complicated based on pre-flight requests, but there's no point.)
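A minimal sketch of option 2 might look like the following. The allowed-origin value and the function name are illustrative, not Zoom's actual code:

```python
# Sketch of option 2: reject state-changing requests whose Origin header
# isn't the expected site. "https://zoom.us" is illustrative here.
ALLOWED_ORIGIN = "https://zoom.us"

def is_allowed(method: str, headers: dict) -> bool:
    """Accept only POSTs whose Origin header matches the trusted site.
    Browsers attach the Origin header to cross-origin POSTs, and page
    JavaScript cannot forge it."""
    if method != "POST":
        return False
    return headers.get("Origin") == ALLOWED_ORIGIN
```

The key property is that the check happens before any state change, so a wrong-site request is refused outright rather than merely having its response hidden.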

The advice isn't wrong, but you did misunderstand it. The CORS header will definitively block the request to an AJAX REST API from going through, because it will be a POST request with an `application/json` Content-Type, which will trigger a preflight request.

You're assuming the API will remain identical, just with new headers. I didn't advise this; what they have now is not a semantically RESTful API.

What they have now is a mess made to fit within their image-hack's constraints; there's no sensible reason to keep the same GET pattern without that.

No, I wasn't assuming the API will remain identical -- I saw the "should implement a REST API" part of your recommendation.

Here's what's wrong with your recommendation:

1. An AJAX REST API request doesn't necessarily mean "application/json". It could also be "application/x-www-form-urlencoded", which won't necessarily trigger a pre-flight request.

2. Your advice goes into detail on an aspect that is not the crux of this particular vulnerability ("Access-Control-Allow-Origin") but not on the aspects that are:

2a. How to handle the pre-flight request, which is the thing that would actually block wrong-site requests.

2b. If you're relying on a Content-Type of "application/json" to trigger a pre-flight request, then it should be made clear that this is now a security check -- it's common for server code to ignore the Content-Type header as long as the body is valid.

Ah I see -- so it sounds like you agree the advice is not wrong, but it's not at the level of detail you would like. Unfortunately it's only a 500-character example; the details of the server API are not really in the scope of my post, but I appreciate you elaborating here.

The advice is wrong. Someone could follow the advice in the article and still be vulnerable. The article should mention the actual vulnerability: CSRF, because there's a state-changing HTTP handler with no CSRF protection. CORS doesn't automatically protect against CSRF, and people shouldn't be given that impression.

I've been seeing more and more developers who aren't aware of CSRF. Maybe the reason is that frameworks got very good built-in support for CSRF protection, which made CSRF vulnerabilities less common, which led OWASP to move it from #8 on the list in 2013 to not being explicitly on the top 10 at all in 2017 (marked as "Merged or retired, but not forgotten"): https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29... But it's still a very valid vulnerability that I think every developer should understand.

> Merged or retired, but not forgotten

Oh the irony.

> Someone could follow the advice in the article and still be vulnerable.

This applies to any advice that is not an actual reference implementation in code. There's only a state-changing GET handler because they chose to work around CORS and you can't POST for images. Implemented properly this CORS approach does protect against CSRF without the need for an explicit token. I've written more about this before, with concrete examples in Node for example: https://fosterelli.co/dangerous-use-of-express-body-parser
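The Content-Type-based protection described in that linked article can be sketched roughly like this (the function name is made up for illustration; this is not the article's code):

```python
def accepts_request(content_type: str) -> bool:
    """Refuse any body that isn't application/json. A cross-origin
    request with this Content-Type is not a CORS "simple request",
    so the browser sends a preflight OPTIONS first; without a matching
    Access-Control-Allow-Origin, the real POST is never sent."""
    # Content-Type may carry parameters, e.g. "application/json; charset=utf-8"
    return content_type.split(";")[0].strip().lower() == "application/json"
```

Note that this only works if the check is strict: silently parsing JSON out of a form-encoded body would reopen the hole.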

>There's only a state-changing GET handler because they chose to work around CORS and you can't POST for images.

I'm not sure of that. There's tons of stuff in existence that has state changing GET handlers that don't have the goal of working around CORS, check out the HN upvote button. It's conceivable to me that if Zoom managed to send a CORS response header, they might have used an XHR with GET (considering GET is the default for XHR). But a discussion of GET vs POST seems largely irrelevant, because POST doesn't solve the problem.

>Implemented properly this CORS approach does protect against CSRF without the need for an explicit token.

The problem is that your definition of "properly" is "must reject requests with a Content-Type other than application/json". That's a non-obvious definition of properly that's not stated in the CORS article. That body parser article is better (although it does seem mistaken about XHRs being unable to make cross-origin requests). The CORS article would be better if it mentioned the Content-Type requirement and linked to the body parser article.

Although implementing CSRF protection via Content-Type is somewhat fragile. Someone might want to support a new Content-Type in the future and not realize that the Content-Type restriction is security-critical. Implementing it via an Origin check (or an X-Not-Simple-CORS: true header) would be clearer.
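The custom-header variant mentioned above can be sketched as follows (the header name is made up, as in the comment; the point is only that any non-safelisted request header forces a preflight, and a plain form post cannot set one):

```python
def passes_csrf_check(headers: dict) -> bool:
    """Require a custom request header. Forms and CORS "simple requests"
    cannot set arbitrary headers, so its presence implies either a
    same-origin request or a cross-origin one that passed a preflight."""
    return headers.get("X-Not-Simple-CORS") == "true"
```

Unlike the Content-Type trick, this check's security purpose is self-documenting, so a future maintainer is less likely to relax it by accident.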

State-changing GET handlers are against the HTTP concept and specs and are trouble waiting to happen. Each and every one of them was designed and implemented by incompetent and/or inexperienced developers.

I've occasionally used state-changing GET for admin-only functionality on small websites. I'd accept "reckless" but bristle somewhat at "incompetent and/or inexperienced".


It's not just the level of detail. For example, what if your advice included one fewer detail:

> The webserver listening in on "localhost:19421" should implement a REST API. This will ensure that only Javascript running on the zoom.us domain can talk to the localhost webserver.

Do you see how that is wrong?

You shouldn’t assume that a post titled “Developers don’t understand CORS” will only be read by people that understand all the intricacies of CORS. :) For everyone else, the need for a POST request [edit2: as well as a non-form Content-Type] may not be immediately apparent.

(Edit: It’s true that a proper RESTful API shouldn’t be using GET for operations with side effects anyway, but that’s different from knowing that avoiding it is mandatory for security.)

Some time ago, I wrote a (quite popular) article about the basics of how CORS works - perhaps someone may find it useful when reading the author's article:


Haha true! It's tricky to find a balance between a short post that nails home a point and a complete guide that hits all the nuances. I definitely intended the former here, but I think this is good evidence there is room for the latter.

I think the tragedy of web security is that it's become too complicated to provide simple guidance to developers who are feature-focused. They don't want to become experts in security-header machinery and the crazy amount of domain knowledge needed, but there aren't many other options.

Just look at all the security options a dev has to contend with:

   CORS (with its myriad of headers.. ACAO, ACAH, ACAC, etc.)
.. That doesn't even count the other fun stuff like SQLi, XXE, LFI, RFI, SSRF, and on and on. It's become real obvious to me that if the framework or language they develop with doesn't provide it enabled by default AND the most commonly searchable/referenced docs don't explicitly tell you to enable it - it's likely not going to happen.

Honestly, I think the real issue is being able to see exactly what the machines are doing when they talk to each other. And I don't mean diagrams either.

I just went through this when setting up an nginx reverse proxy to a gunicorn web server. Once I was able to see all the X- headers and how wsgi was setting up its environment against that, it all became very clear to me what was happening and why each piece was necessary.

I think the same would apply to being able to see exactly what happens with preflight requests. PS: non-interactive diagrams don't fill that gap.

Chrome Developer Tools shows the preflight request in the network tab, as it does every other request. It's even called out as being part of the same communication, if you group by communication instead of sorting by timestamp. What's not clear is why it's necessary sometimes and why it's not, where you have to dig into a mix of history and security policy, and do a bit of threat modelling.

> The CORS header will definitively block the request to an AJAX REST API from going through, because it will be a POST request with an `application/json` Content-Type, which will trigger a preflight request.

Then you should edit your article to say that, or at least put a caveat saying it's more complicated than you imply in your article and to go look up the details elsewhere.

If I wanted to design an API that was prone to security issues, I would design something like CORS. While I understand why some requests require pre-flights and others don't, looking at it without considering web history: it is absolutely insane that changing what appears to be a non-security-related parameter (Content-Type) drastically changes how the request is executed. Making it worse is that it's super easy to ignore the Content-Type header on the server side and just assume that the body will be what's expected. Making it even worse is that if a request is not pre-flighted, it will be sent to the server to be executed, and only the reading of the response will be blocked by a non-matching Access-Control-Allow-Origin header - which is not intuitive at all.
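For concreteness, the "simple request" rule being complained about can be sketched as a small classifier (simplified: only the method and Content-Type parts of the spec are modeled here; custom headers also force a preflight):

```python
# Content types that browsers have always been able to send via forms,
# and which therefore skip the CORS preflight.
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, content_type: str) -> bool:
    """Simplified model of the CORS "simple request" rules: GET/HEAD/POST
    with one of the three form-era content types skip the preflight and
    are sent to the server regardless of its CORS headers."""
    if method not in ("GET", "HEAD", "POST"):
        return True
    base = content_type.split(";")[0].strip().lower()
    return base not in SIMPLE_CONTENT_TYPES
```

This is exactly why a seemingly cosmetic switch from application/x-www-form-urlencoded to application/json changes the security behavior of the request.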

Worth mentioning that according to OWASP's latest recommendation, #1 is considered a first line of defense and #2 is considered more of a defense-in-depth technique (i.e. in addition to #1, not as a replacement): https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...

SameSite=Strict cookies also deserve an honorable mention as an up-and-coming technique once all browsers support it.

> "Access-Control-Allow-Origin" doesn't block the request from going through, it just prevents wrong-origin Javascript from accessing the response.

On a GET request (like the one generated by a IMG tag) and all the other cases that do not require a pre-flight request ( https://www.w3.org/TR/cors/#preflight-request ) this is correct.

Are we clear on the threat model and what Zoom is trying to do? As far as I know, Zoom wants to make "joining a meeting" as easy as clicking a link. I do not think Zoom wants people to be logged in to a zoom account in order to join.

And what Zoom should try to avoid is having random people join meetings, and websites forcing actions on people's laptops.

A simple fix is to use a GUID or any other long not-guessable string for the meeting id. And, of course, do not expose any other potentially dangerous endpoints beside the one that starts the meeting.
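Generating such an unguessable id is straightforward; a sketch using Python's standard library (function name is illustrative):

```python
import secrets

def new_meeting_id() -> str:
    """Return a 128-bit random, URL-safe token. Unlike a short numeric
    meeting id, this is infeasible to enumerate by brute force."""
    return secrets.token_urlsafe(16)  # 16 bytes = 128 bits of entropy
```

The important part is using a cryptographically secure source (`secrets`, not `random`) and enough entropy that scanning the id space is hopeless.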

In general it should not be about either CORS or CSRF. There are better ways to start a video call communicating with a plugin than exposing a port on localhost and using <img src="localhost:12345/a-command?aParameter=12345" /> to communicate with it.

The threat model seems to have 2 requirements:

1. Only if the top level url bar says zoom.us is it allowed to cause any noticeable change to the zoom native app. This means some arbitrary website cannot cause any noticeable change to the zoom native app without redirecting you to zoom.us . This also means that a sandboxed iframe, such as an embedded web ad, has no possible way to cause a noticeable change to the zoom native app, because it is not able to do top level navigation to zoom.us .

2. When the zoom native app is triggered, the user must click yes on a confirmation button before joining the meeting.

I don't see people advocating that you need to be logged in to zoom.us .

Achieving requirement 1 without requirement 2 is still bad, because some website could redirect to zoom.us and force you into a meeting.

Achieving requirement 2 without requirement 1 is not too bad, but is annoying, because some sandboxed ad in a website could cause the zoom native app to pop up, annoying you, and could maybe almost be a DoS.

I’m not sure you understand the threat model either. Even if the meeting ID is a GUID, you’re still leaving an avenue for people to have their visitors join arbitrary meetings.

Well, I guess I need to explicitly say that having the Zoom client asking something like:

"www.visited-site.com" wants to start a meeting with you. Do you want to join?

is (uncommon?) common sense that the Zoom app should follow.

They seem to have engineered this just to avoid such a prompt, because Safari brings up one when navigating to a protocol that launches an external app.

> "Access-Control-Allow-Origin" doesn't block the request from going through, it just prevents wrong-origin Javascript from accessing the response.

If the server only reacts to POST, then Access-Control-Allow-Origin may be enough, because the browser will first do a preflight OPTIONS, and if Access-Control-Allow-Origin is not set, the POST will not be made by the browser.

There are some exceptional cases where a POST is allowed without a preflight check: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...
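A sketch of what answering that preflight could look like on the localhost server (the trusted origin and function name are illustrative):

```python
TRUSTED_ORIGIN = "https://zoom.us"  # illustrative value

def preflight_response_headers(origin: str) -> dict:
    """Build response headers for a preflight OPTIONS request. Only the
    trusted origin gets Access-Control-Allow-Origin echoed back; for any
    other origin the browser sees no matching header and never sends
    the actual POST."""
    if origin != TRUSTED_ORIGIN:
        return {}  # no CORS headers: browser blocks the real request
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "POST",
        "Access-Control-Allow-Headers": "Content-Type",
    }
```

This only protects requests that actually trigger a preflight, which is why the "simple request" carve-outs in the MDN link matter.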

You are right, and reading this the cases are not really exceptional but rather usual; so yes, it's better to not rely on CORS for protection in this case.

Not really, you can just check that Content-Type is application/json and refuse to do anything if it’s not. That effectively shuts down application/x-www-form-urlencoded or multipart/form-data CSRFs. In fact, parsing a POST payload as JSON without checking Content-Type first is frowned upon, and some frameworks refuse to do that by default.

(Throws up hands) But this is ridiculous!

The security of the API can’t rely on strict validation of the Content-Type; how is that code supposed to make any sense to the third generation of developers tasked with maintaining it 5 years from now?

CORS is clearly not designed as a means of access control to ensure the POST request will never be made. Therefore, it is fairly catastrophic that an article prognosticating about how nobody bloody well understands CORS has also gotten it terribly wrong.

What this tells me is that there is probably big money in bug bounties for finding sites which use CORS for access control, and almost certainly are not checking the content-type.

application/x-www-form-urlencoded and multipart/form-data POSTs predate CORS. They have to be whitelisted, or large swaths of the existing web would be broken.

Yes, but that doesn't mean we should build a website that has its CSRF security depend on the Content-Type header. There are other mechanisms to gain this security that are less confusing.

Sure, nothing wrong with an additional (trivial) check, though.

These things keep coming up... but how large a swath is it really?

Then again, only with TLS 1.3 do we get rid of RC4!! Except when downgrading to 1.2, 1.1, 1.0, ssl3 (is that even around?)

You should not implement SSLv3

If you are willing to do SSLv3 the POODLE attack downgrades you and steals one byte of encrypted data per 256 iterations.

If you demand SCSV to defend against this downgrade, every implementation that speaks SCSV also offers a better protocol version than SSLv3 so you won't end up talking SSLv3 anyway, thus you should just not implement SSLv3.

You also shouldn't implement RC4 in 2019. Refuse to connect to peers that only offer RC4 instead.

That was my point

TLS version can usually be upgraded transparently by, say, web hosts. To retrofit CORS you actually need to inspect legacy code and sometimes make modifications (in addition to modifying say nginx config), which is a fair bit harder.

Is this true if the request is a POST? Shouldn't a POST lead to a preflight request, and then the browser refusing to send the real request?

POST requests are not necessarily preflighted. If they use non-standard headers or a non-form content type, they will be.


The designers of CORS kept the previous security model for certain types of POST requests because they are basically equivalent to how forms work, which operate at a much worse security standard and are a source of a great number of CSRF problems. The simplest way to remember it is that most things which break the legacy behavior of POST trigger the preflight. Speaking of form problems, last I checked HN has a login CSRF.

It will trigger a preflight if it's a POST without a form content type (e.g. a POST with application/json).

Is all of that worthless if the calls to localhost aren't encrypted?

All of what worthless? The advice that cakoose gives seems quite valid to me without any encryption to localhost.

What do you mean encrypt calls to localhost? Encrypt with https? That sounds hard since you can't get a real CA to issue you such a cert. I don't see much point in encrypting with https because people off your machine can't sniff or intercept your connection to localhost. Is your goal to protect yourself from other programs or users on your same machine? That might be a valid concern on a machine with multiple users, but definitely makes the problem harder.

Or do you want some custom application-level encryption? I can't think of a good way to implement that either, but maybe it's possible.

But since I don't know what attacks you're trying to protect against, I can't tell whether there's any benefit at all to encryption.
