
I never really understood why we have CORS. I mean, the problem with CSRF is that some random page can trick your browser into adding its authentication token to a request that does not originate from the authenticated page. So why then do we need the server to tell the browser that it should not send requests from other origins?

In my opinion, it would have been much better to improve browsers to not include cookies in third-party requests automatically (only when they are explicitly specified via JS, for example). That would have solved the issue equally well, without introducing a bulky server-side security feature to remote-control browsers.
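For what it's worth, browsers did later move in roughly this direction with the SameSite cookie attribute. The proposed default can be sketched as a pure function (the function name and arguments are illustrative, not a real browser API):

```javascript
// Sketch of the proposed policy: attach cookies only for same-origin
// requests, or when the page explicitly opts in (e.g. via a JS flag).
function shouldAttachCookies(pageOrigin, targetOrigin, explicitOptIn = false) {
  return pageOrigin === targetOrigin || explicitOptIn;
}

console.log(shouldAttachCookies('https://attacker.com', 'https://facebook.com')); // false
console.log(shouldAttachCookies('https://facebook.com', 'https://facebook.com')); // true
```

Under this policy, attacker.com's forged request to facebook.com would simply go out without the session cookie, which is the CSRF fix the commenter is after.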




CORS is really for the opposite problem. Browsers do block cross-origin reads by default (mostly). CORS is used to let the server decide which origins are allowed to request data and how it can be requested. If the client were allowed to decide via JavaScript, then attacker.com could make a request via JavaScript to facebook.com telling the browser to send cookies and return the user's data. This is actually what the client JS has to do anyway with CORS (using credentials: 'include' on fetch, or withCredentials on XHR), but the server side needs to be able to allow/deny it.
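To make the direction of control concrete, here is a minimal framework-free sketch of the server side, assuming a hypothetical allowlist named `allowedOrigins`:

```javascript
// The *server* decides which origins may read its responses. The browser
// sends the page's Origin header with the request; if the response lacks a
// matching Access-Control-Allow-Origin, the browser withholds the body
// from the page's JS.
const allowedOrigins = new Set(['https://app.example.com']);

function corsHeadersFor(requestOrigin) {
  if (!allowedOrigins.has(requestOrigin)) {
    return {}; // no CORS headers: the browser blocks the cross-origin read
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Credentials': 'true', // cookies may be used
  };
}

console.log(corsHeadersFor('https://attacker.com')); // {}
console.log(corsHeadersFor('https://app.example.com'));
```

Note the asymmetry: attacker.com's page can *send* the request, but without these headers it never gets to read the response.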


But why not just completely separate origins with regards to sessions, or at least let the user give permission to use that Facebook session here? That way, many use cases would already be covered without any danger. If a travel website is CORS-reading weather data from another origin, pre-existing sessions probably don't matter at all.


Well, yes, in fact, I was complaining about the Same-Origin Policy, and CORS is just the consequence of the way the SOP works. Nevertheless, this doesn't really change the situation.

If the browsers separated the session by origin (as blauditore wrote), the whole problem space would look very different.


Ya, I generally think CORS is a waste of time. It would have been better to provide a hash of the file we're linking to and trust that, rather than where it came from, which is precisely what Subresource Integrity (SRI) does:

https://en.wikipedia.org/wiki/Subresource_Integrity

Sadly, even though this is an obvious concept and trivial to implement, it took over 20 years after the web came out to get it into most browsers. The cost to society of having thousands of copies of the same commonly used files (like jQuery) hosted locally on countless servers, rather than a centrally hosted version already cached from previously visited sites, is staggering to contemplate. I'd really like to know who was behind the holdup on deploying SRI.


CORS is about a lot more than just static assets. SRI does not replace CORS.


CORS is for data retrieval, not subresource inclusion. In fact, you don't need CORS at all to include a script in your page; it has never been required for that.


After sleeping on it, I realized that my comment was a bit too critical and also missing some context. I didn't mean to be as negative about CORS as I came across; I was more disappointed that something like SRI hasn't been part of the web from the start. Some background:

https://en.wikipedia.org/wiki/Content-addressable_memory

https://en.wikipedia.org/wiki/Distributed_hash_table

https://en.wikipedia.org/wiki/Merkle_tree

If we had something like SRI from the start, we could have linked to resources by their hash instead of their URL (more like how IPFS works). There's a name for this concept that eludes me, and also a great video explaining its potential but also its difficulties when it comes to security and HTTPS.

The short of it is that if we had routers that accepted hashes as well as URLs, then we could ask for a list of all data matching a given hash and download that file (or its pieces) from the closest cache(s). So instead of linking to jQuery at https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.mi... we could just ask the router for the file with SHA2 hash 160a426ff2894252cd7cebbdd6d6b7da8fcd319c65b70468f10b6690c45d02ef and it would return its contents regardless of where it came from, including the browser's own cache if it already had that file (I just used https://hash.online-convert.com/sha256-generator but there would be a better standard for this).

Anyway, hope this helps and sorry for any confusion.


The major holdup on deploying SRI is that a lot of third parties aren't supplying immutable content.

Google Analytics or reCAPTCHA, for example, aren't versioned. Deploying SRI will just break your site when they update the script.


If they hashed and cached the resource files, then they could be found locally most of the time.



