Because you used to be able to host your web site on www.compsci.tech.youruniversity.edu and put an HTML <form> element on it that would submit its contents to myawesomeguestbook.com.

Thus, cross-origin POST requests were allowed, and changing that was impossible without breaking the web. Likewise you can include images or iframes from other origins, which allows GET requests. However, you cannot read the responses! The JavaScript APIs just make these requests more convenient to send; they don't change what you can do - you still can't read the response. Disallowing them in the JS API would just make attacks slightly more annoying (attackers would fall back to the legacy methods) while limiting legitimate use cases.
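
For instance, this legacy pattern always worked (the domains are from the example above; the /sign endpoint is made up):

    <!-- hosted on www.compsci.tech.youruniversity.edu -->
    <form action="https://myawesomeguestbook.com/sign" method="POST">
      <input name="name">
      <input name="message">
      <button>Sign</button>
    </form>

The browser happily delivers that POST, cookies and all; the submitting page just never gets to see the response.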

Importantly, you are still not allowed to make fully arbitrary requests: as soon as you use advanced features like custom headers (in other words, things you couldn't already do with img tags, redirects, forms, and iframes), you run into preflight, where the browser first asks the server via an OPTIONS request whether the real request is allowed. Legacy servers will not understand that OPTIONS request and will safely reject it.
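
A sketch with fetch (api.example.com, the path, and the header name are placeholders):

    // No preflight: nothing here that a plain <form> couldn't already send.
    fetch('https://api.example.com/guestbook', {
      method: 'POST',
      body: new URLSearchParams({ name: 'me' }),
    });

    // Preflight: the custom header makes the request "non-simple", so the
    // browser first sends OPTIONS /guestbook with
    // Access-Control-Request-Method / Access-Control-Request-Headers, and
    // only performs the POST if the server opts in via
    // Access-Control-Allow-* response headers.
    fetch('https://api.example.com/guestbook', {
      method: 'POST',
      headers: { 'X-Api-Key': 'secret' },
      body: JSON.stringify({ name: 'me' }),
    });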




My reading; feel free to correct me if I'm missing something.

This is a compatibility feature that can never change, and shouldn't change, because we can't break the web. But my reading is that CORS is mostly a band-aid on what was a bad security policy to begin with.

The problem isn't that a browser might let a web page on one domain send an arbitrary request to another domain and read the response without permission -- with the exception of DDoS and bot attacks against servers, it's difficult to think of what risks this would actually pose to users themselves. And any native program on your computer can already send an arbitrary request to an arbitrary domain and read the data -- so there are at least a few arguments that it might even be beneficial for websites to be able to do the same.

The security problem is that requests are made with your current cookies regardless of what domain you're on. Ideally, browsers from the start would have isolated every domain into its own container, so that if evil.com made a GET/POST request to mybank.com, it wouldn't be authenticated as me, regardless of how the server was configured.
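
Concretely, the classic CSRF attack looks like this (the /transfer endpoint and its field names are invented for illustration):

    <!-- hosted on evil.com -->
    <form action="https://mybank.com/transfer" method="POST">
      <input type="hidden" name="to" value="attacker">
      <input type="hidden" name="amount" value="1000">
    </form>
    <script>document.forms[0].submit()</script>

The browser attaches mybank.com's session cookie to that POST, so without an extra check (a CSRF token, Origin header validation) the server can't tell it apart from a legitimate transfer.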

We made it so that cookies could only be read by certain domains, but until very recently we didn't think to make it so that cookies could only be sent by certain domains. And at this point, too much infrastructure relies on this behavior for us to ever come up with a better model, so we're stuck with hacks like CSRF tokens, which depend on websites being coded well.
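
(The "very recently" mechanism is the SameSite cookie attribute, which is exactly that opt-in, e.g.:

    Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax

With SameSite=Lax or Strict, the browser stops attaching the cookie to cross-site POSTs like the one above.)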


>The problem isn't that a browser might let a web page on a separate domain send and read any arbitrary request to another domain without permission

That's not fully correct. Even without cookies it's bad if a.com can read a response from b.com. Why? Because b.com might be hosted on a private network and contain private information. We don't want a.com to steal that information. Even if cookies aren't sent, there can be other kinds of ambient authority, and connectivity to a private network is one of them. Another is a website protected by a whitelist of user IPs. Side note: any website relying on ambient authority that isn't associated with a hostname must also check the Host header to protect against DNS rebinding, as sketched below.
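
A minimal sketch of that Host check, assuming a Node/Express server on a private network (the allowed hostname is a placeholder):

    const express = require('express');
    const app = express();

    // In a DNS rebinding attack, attacker.com re-points its DNS record at
    // this server's private IP after the victim loads the page, so the
    // browser's same-origin check passes. The Host header still says
    // attacker.com, which is why rejecting unexpected hosts closes the hole.
    const ALLOWED_HOSTS = new Set(['intranet.corp.example:3000']);

    app.use((req, res, next) => {
      if (!ALLOWED_HOSTS.has(req.headers.host)) {
        return res.status(403).send('Bad Host header');
      }
      next();
    });

    app.get('/', (req, res) => res.send('private data'));
    app.listen(3000);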

Of course, if the web had been built from the start such that a.com could read b.com, just without cookies, then websites that rely on non-host-associated ambient authority (hopefully) wouldn't have been built, and you would be right.

Just because a native program can do something doesn't mean a website should be able to. A native program can delete all my files without asking me. A website can't.



