
Also not a web dev, but if I'm remembering it correctly (someone please correct me if I'm wrong), the way I understood it was:

Whenever your browser sends any request to any site, it sends the cookies(/other auth data) associated with that site along with it. In other words, cookies are fundamentally just associated with the receiver, not the sender. So the "solution" is for the receiver to block requests from the wrong sender, since otherwise any site could send authenticated requests to any random site. [Edit in response to comment below: I should've mentioned more here, but I understand what happened was browsers introduced the Same-Origin Policy to prevent this from happening, and introduced CORS as a dynamic bypass mechanism for that, which, unless you implement OPTIONS, can get you these half-baked insecure situations where requests still get sent and executed, but the client JS doesn't get to see the responses.]
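If I understand it right, one concrete form of "the receiver blocks the wrong sender" is the server just looking at the Origin header itself. A rough sketch, assuming a plain Node HTTP server and made-up names:

    import * as http from 'node:http';

    // Hypothetical receiver: the browser attaches our cookies to *any* request
    // aimed at us, regardless of which page triggered it, so we have to look at
    // who sent it (the Origin header) and reject senders we don't trust.
    const ALLOWED_ORIGINS = new Set(['https://app.bank.example']);

    http.createServer((req, res) => {
      const origin = req.headers.origin;        // e.g. "https://evil.example"
      if (typeof origin === 'string' && !ALLOWED_ORIGINS.has(origin)) {
        res.writeHead(403);
        res.end('cross-site request rejected');
        return;
      }
      // ...normal, cookie-authenticated handling would go here...
      res.writeHead(200);
      res.end('ok');
    }).listen(8080);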

To me this whole mess is stupid because the premise shouldn't be true in the first place (why the heck should a cross-origin request send auth data? cookies etc. should be "contained" to whatever domains the original site restricted them to), but that's apparently The Way Things Are, and so here we are: browsers do something completely unexpected and insecure, and we blame devs for getting caught off-guard and not protecting against a security hole browsers introduce.

(Yes, I have Opinions on this. Please tell me exactly where I'm wrong, because I suspect I might be, but I have yet to figure it out.)


I'm pretty sure you're incorrect - requests to sites different from the one you are currently on will not include auth data by default because of the same-origin policy. The auth data would only be included if the server that the request is being sent to responds with an Access-Control-Allow-Origin header whose value matches the origin the request was made from.

I think we're agreeing? Like using CORS headers is a mechanism used to bypass SOP when needed, which was implemented because HTTP handles cookies in a stupid way and now nobody wants to change how that's done, right? Like neither CORS nor SOP should've been necessary in the first place had cookies explicitly included the allowed senders instead of just the receivers.

(P.S. the way you phrased the request handling would violate causality, so I'm assuming you were referring to the OPTIONS check beforehand...)


> I'm pretty sure you're incorrect - requests to sites different from the one you are currently on will not include auth data by default because of the same-origin policy

Not necessarily true. You can have Javascript running on evil.com that submits a form to banking.com and it will send your session cookies for banking.com along with the request. It's just that evil.com can't read the response content.
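For concreteness, the classic shape of that attack, with made-up URLs and field names:

    // Script running on evil.com: build and submit a form aimed at banking.com.
    // The browser attaches the user's banking.com cookies because the *target*
    // of the request is banking.com; evil.com never gets to read the response.
    const form = document.createElement('form');
    form.method = 'POST';
    form.action = 'https://banking.com/transfer';   // hypothetical endpoint

    const amount = document.createElement('input');
    amount.type = 'hidden';
    amount.name = 'amount';
    amount.value = '1000';
    form.appendChild(amount);

    document.body.appendChild(form);
    form.submit();   // no confirmation, no CORS check -- this is a navigation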


I thought the browser at evil.com would first send a preflight request without the cookies, but not send the full request when it doesn't get the correct value from Access-Control-Allow-Origin in the response from banking.com.

I'll admit I may be one of the developers that doesn't understand CORS...


CORS doesn't apply to navigations by default, so anaphor is correct: evil.com can just submit a form to banking.com and it will send the banking.com cookies. This is the whole field of CSRF (cross-site request forgery) mitigation.

CORS was introduced as a way to allow requests that browsers used to not allow at all: things like cross-site XHR in the first instance. Then it was expanded so that requests that are not normally subject to CORS checks (image loads, script loads, stylesheet loads) could opt-in to being subject to them, for various reasons. The default for those loads is still "no CORS". And there still isn't a way to do a navigation subject to a CORS check, even with opt-in.
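A small sketch of those defaults, with made-up URLs:

    // An <img> load is "no CORS" by default: no CORS check is applied.
    const img = new Image();
    img.src = 'https://other.example/chart.png';

    // The element can opt in via the crossorigin attribute; the load is then
    // subject to a CORS check and fails unless the server allows this origin.
    const corsImg = new Image();
    corsImg.crossOrigin = 'anonymous';
    corsImg.src = 'https://other.example/chart.png';

    // fetch/XHR-style requests are subject to CORS from the start.
    fetch('https://other.example/api/data')
      .then(r => r.json())
      .catch(() => { /* blocked unless other.example allows this origin */ });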

Disclaimer: I work on Gecko and I've reviewed/implemented parts of the CORS spec.


Is my take on this also generally correct? That it wouldn't have been a problem had cookies and such been designed to take into account the origin properly (and hence why it's unintuitive and catches people off-guard)?

If cookies were scoped to the (source, target) pair, then that would remove one of the main motivations for CORS, yes: evil.com would not be able to get any information from bank.com by making your browser make the request that they could not get by doing the request server-side.

There's a second problem CORS kinda tries to solve, which is the ambient authority problem: services that run behind firewalls and assume that if someone can reach them the someone should have access. If someone runs a browser behind the firewall and opens a page on unsafe side of the firewall, that page can then issue network requests from the browser and thus end up access things on the "safe" side of the firewall. This is a large part of why CORS has the whole preflight complication and the rules around when preflights happen: the idea is that in this situation just making the request, not even receiving a response, is potentially damaging. There are the carve-outs for requests that could be generated without things like XHR that are subject to CORS (e.g. by doing a form submission or <img> load or whatnot); if your ambient-authority-using server responds in interesting ways to those, CORS is not going to help you... The _right_ fix for this stuff, of course, is for services to stop using ambient authority and/or for browsers to block requests from public sites to private IPs. Unfortunately in practice detecting "private IPs" reliably is not trivial, because fundamentally it depends on the routing and firewall topology, which the browser doesn't really know about.


Thank you for the reply! I didn't realize this actually gets to another question I've had in another context, which is: why can't a browser at least assume 192.168.0.0/16 etc. are private networks and block requests coming from nominally-public origins to those ranges? (Or do they do that already?) This should be possible without needing to detect anything at all, right?
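Something like this is the check I have in mind (a rough sketch, not anything a browser actually exposes):

    // Is this dotted-quad IPv4 address in a private / loopback / link-local
    // range? (Ignores IPv6 and, more importantly, hostnames that merely
    // *resolve* to such addresses.)
    function isPrivateIPv4(ip: string): boolean {
      const parts = ip.split('.').map(Number);
      if (parts.length !== 4 || parts.some(n => !Number.isInteger(n) || n < 0 || n > 255)) {
        return false;
      }
      const [a, b] = parts;
      return (
        a === 10 ||                          // 10.0.0.0/8
        (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
        (a === 192 && b === 168) ||          // 192.168.0.0/16
        a === 127 ||                         // 127.0.0.0/8 (loopback)
        (a === 169 && b === 254)             // 169.254.0.0/16 (link-local)
      );
    }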

Pretty sure some tools like NoScript already do that, but to protect against DNS rebinding (which can subvert the SOP), not as a protection against cross-origin requests coming from a public site to an internal one.

https://en.wikipedia.org/wiki/DNS_rebinding

> The NoScript extension for Firefox includes ABE, a firewall-like feature inside the browser which in its default configuration prevents attacks on the local network by preventing external webpages from accessing local IP addresses.


Yeah but why shouldn't the browser do this itself to provide the cross-origin protection?

I think just because they're afraid of breaking things? I'm honestly not sure what the rationale was for not implementing this. They do DNS pinning which helps mitigate DNS rebinding attacks, but I don't think they do anything specifically to restrict access to internally routable IPs.

They do block certain ports that are known to be problematic (25, 6667, 5222, etc)


It's so stupid. Makes me want to just fork the browsers and add blatantly obvious protections like this if I can find the time...

If you do that, I would recommend looking at writing an extension first to see if there's any way to do it without a fork (maybe using the same technique that things like uBlock Origin use)

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
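Something like this, very roughly -- a WebExtension background-script sketch that assumes the webRequest, webRequestBlocking and host permissions in the manifest, and doesn't handle hostnames that merely resolve to internal IPs:

    // Cancel requests whose target is a literal private IPv4 address when the
    // requesting page itself is not on a private address. Ignores IPv6 and all
    // the corporate edge cases mentioned elsewhere in the thread.
    const PRIVATE_RE = /^(10\.|127\.|192\.168\.|169\.254\.|172\.(1[6-9]|2\d|3[01])\.)/;

    function isPrivateHost(url: string): boolean {
      try {
        return PRIVATE_RE.test(new URL(url).hostname);
      } catch {
        return false;
      }
    }

    browser.webRequest.onBeforeRequest.addListener(
      (details) => {
        const fromPublic = details.originUrl !== undefined && !isPrivateHost(details.originUrl);
        if (fromPublic && isPrivateHost(details.url)) {
          return { cancel: true };   // block public -> private
        }
        return {};
      },
      { urls: ['<all_urls>'] },
      ['blocking']
    );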


Yeah you can almost certainly do it with an extension, but half the point of a fork would be to get the message across that they need to get their act together. (I almost certainly won't get around to it though, so this is just daydreaming.)

There turn out to be a bunch of complications when one tries to do that, if the goal is to not break existing legitimate uses.

I haven't been following this closely, but for Firefox https://bugzilla.mozilla.org/show_bug.cgi?id=354493 is the relevant bug, with some (failed) attempts to do that.


Interesting. It seems the "legitimate" uses are corporate? Seems like providing a group policy or config option to disable protection against this would be the sensible way to go, rather than increasing the attack surface of home users just because some corporate users do weird things. Although honestly major software vendors are already happy breaking so many things in the name of security that this one could just be another one on top...

I think you could implement that with CORS + CSRF tokens (blocking all types of Cross-Origin requests) but the default behaviour is to allow Cross-Origin writes (e.g. with form submissions).

See https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...

This is why we have CSRF tokens. So that you can effectively block Cross-Origin writes.
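Roughly, the token dance looks like this (a hand-wavy sketch, not any particular framework's API):

    import { randomBytes } from 'node:crypto';

    // 1. When rendering the form, generate a token, stash it in the user's
    //    server-side session, and embed it in the page as
    //      <input type="hidden" name="csrf_token" value="...">
    const csrfToken = randomBytes(32).toString('hex');

    // 2. On the POST, only accept the write if the submitted token matches the
    //    one in the session. evil.com can make the browser *send* the cookies,
    //    but it can't read our page to learn the token, so it can't produce a
    //    matching value. (A real implementation would also use a constant-time
    //    comparison here.)
    function isValidCsrfToken(submitted: string, storedInSession: string): boolean {
      return submitted.length > 0 && submitted === storedInSession;
    }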


I see what you mean: evil.com could still make a request that includes cookies, which is why we need CSRF tokens. But from my understanding, it wouldn't be able to do that in an XMLHttpRequest hidden on the page. It would have to be a request from something like submitting a form, which would navigate the user off the page. Is that correct? Of course it doesn't make much difference from a security perspective.

You can work around that by submitting it within an invisible iframe element, e.g. https://stackoverflow.com/a/17953761/903589

But yeah, you can't just make arbitrary requests like this with XHR
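Something along the lines of the linked answer:

    // Submit the cross-site form into a hidden iframe so the visible page
    // never navigates away; the response lands in the iframe, where this
    // script still can't read it.
    const frame = document.createElement('iframe');
    frame.name = 'hidden-target';
    frame.style.display = 'none';
    document.body.appendChild(frame);

    const form = document.createElement('form');
    form.method = 'POST';
    form.action = 'https://banking.com/transfer';   // hypothetical endpoint
    form.target = 'hidden-target';                  // submit into the iframe
    document.body.appendChild(form);
    form.submit();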


I think pixelperfect is talking about preflights.

That makes more sense, then. Yes, it will make sure not to send authentication data in that case.

If you use fetch or XHR, yes.

If you just create an HTML form with the bank as the action, fill in the inputs and submit via JS, no, because that's _navigating away_. This is also why GET requests that perform actions are dangerous: you can just embed them as an image to provoke a request. Which is exactly how Zoom built their API. (They also then measured the size of the image to determine the response code, which is... inventive.)
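i.e. something in this spirit (the endpoint and the size-probing behaviour are made up for illustration):

    // Fire an action-performing GET by loading it as an image; an <img> load
    // is a no-CORS request, so it goes out with no preflight.
    const probe = new Image();
    probe.onload = () => {
      // If the server answers different outcomes with different-sized images,
      // the page can infer the result from the decoded dimensions.
      console.log('result:', probe.naturalWidth, 'x', probe.naturalHeight);
    };
    probe.src = 'http://localhost:19421/launch?action=join';   // made-up local endpoint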


So, it depends on the request type: GET vs. POST and others.

For requests that aren't "simple", before the client sends the actual request it does a preflight request via OPTIONS to determine what is allowable (is this origin allowed, is this type of call allowed).

If it is allowed, it can then send the actual request, with cookies for the requested domain, under the current standard.

However, older browsers don't support this, so there the request will just work.

In newer browsers, for requests that are sent without a preflight check, the server still receives and replies to the request, but the reply is 'ignored' by the browser, which throws an error instead.
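Concretely, whether that OPTIONS round trip happens depends on the request; with made-up URLs:

    // "Simple" request (GET/POST with ordinary headers): no preflight. The
    // request is sent, the server replies, and only then does the browser
    // decide whether this page is allowed to read the response.
    fetch('https://api.other.example/data');

    // Non-simple request (custom header, or methods like PUT/DELETE): the
    // browser first sends
    //   OPTIONS /data
    //   Origin: https://this-page.example
    //   Access-Control-Request-Method: PUT
    //   Access-Control-Request-Headers: x-custom
    // and only sends the real request if the response allows all of that.
    fetch('https://api.other.example/data', {
      method: 'PUT',
      headers: { 'X-Custom': '1' },
    });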


I'm aware of OPTIONS but it still seems like the same exact stupid browser security hole (edit: or perhaps I should say HTTP protocol flaw?) being half-patched on the server side. Like, I'm saying that -- independent of the HTTP method -- there should be no communication of privileged information in the first place by default. If a website really wants other arbitrary websites to send e.g. a cookie along, then there should be a way to mark that cookie as such at the time it is originally set, rather than having it checked after the fact. It sounds like the only reason this is done is backward compatibility?

Definitely not disagreeing.

The server still sees the request, so the data can be exfiltrated.

In terms of backwards compatibility, it is actually the opposite. Newer browsers will block stuff that worked in older versions.


You can mark a cookie as SameSite:

https://github.com/OWASP/CheatSheetSeries/blob/master/cheats...

(as you mentioned, backwards compatibility requires that this is opt-in when the cookie is set, not opt-out)
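e.g. when setting the cookie (plain Node, values made up):

    import * as http from 'node:http';

    http.createServer((req, res) => {
      // SameSite=Lax: the cookie is not attached to cross-site subrequests
      // (images, iframes, fetch/XHR) or cross-site POSTs; SameSite=Strict
      // additionally drops it on top-level cross-site navigations.
      res.setHeader(
        'Set-Cookie',
        'session=abc123; SameSite=Lax; Secure; HttpOnly; Path=/'
      );
      res.end('ok');
    }).listen(8080);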


Oh wow, 2017. Finally...


