It's common to have app.example.org point to a CDN and api.example.org point to an API.
And the CORS implementation is terrible. The server has to transmit validation rules for the browser to enforce (with vendor-specific caching differences), rather than just enforcing access itself.
The reason it's implemented this way is because of the organic evolution of web security.
That's a typical misunderstanding of the purpose of CORS. Regardless of whether your website sets CORS headers, an attacker with a modified or custom browser can ignore them. That's not what CORS protects against: CORS protects a stock browser, used by Joe Random User and installed via the normal factory/distribution path, from being tricked into doing something against the site's policy and thereby exposing the user.
My beef with CORS is that I have to encode the validation logic into its HTTP headers.
But maybe what I want doesn't fit in its headers, e.g. I want to allow requests from *.example.org. So then I am writing server code, and if I am writing server code anyway, why have the extra step of encoding validation logic in CORS headers in the first place?
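A minimal sketch of what that server code might look like (the suffix policy and function name are hypothetical): since `*.example.org` can't be expressed in the `Access-Control-Allow-Origin` header itself, the server matches the incoming `Origin` and reflects it back when it passes.

```python
from urllib.parse import urlparse

ALLOWED_SUFFIX = ".example.org"  # hypothetical policy: any subdomain of example.org

def cors_headers(origin: str) -> dict:
    """Reflect the Origin back only if its host is example.org or a subdomain.

    The wildcard can't go in the header, so the server does the match
    and echoes the specific origin that was approved.
    """
    host = urlparse(origin).hostname or ""
    if host == "example.org" or host.endswith(ALLOWED_SUFFIX):
        return {
            "Access-Control-Allow-Origin": origin,  # reflect the approved origin
            "Vary": "Origin",  # keep caches from serving one origin's answer to another
        }
    return {}  # no CORS headers: the browser blocks the cross-origin read

print(cors_headers("https://app.example.org"))
print(cors_headers("https://evil.com"))
```

Note the `Vary: Origin` line: once you reflect origins instead of sending a fixed value, shared caches need it to avoid handing one origin's approval to another.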
Is it really that much more expensive to check the Origin header than to check the Authorization and Cookie headers?
The problem is that existing servers don't generally check the Origin header, so browsers needed some other mechanism to decide which requests were safe.
I disagree. The current model where the server has to opt-in to cross origin access by explicitly sending an Access-Control-Allow-Origin header is secure-by-default, whereas a model relying on the server to enforce access would be insecure-unless-the-server-properly-checks-the-origin-header, leading to all sorts of vulnerabilities.
If CORS aimed to only protect users, there would be no need for a preflight at all. The only reason preflights exist is to protect services from receiving requests they might really not expect (e.g. malformed data) and doing bad things as a result.
In particular, the idea is to protect non-publicly-routable services. Publicly-routable ones, where you can just issue an attack request with cURL or the like, have to be hardened against malformed requests to start with.
But there are tons of non-publicly-routable things (think printers and the like behind firewalls) that could be attacked via browsers that are running behind the firewall loading web pages from outside the firewall. And CORS aims to mitigate or prevent some of those attacks.
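The mechanism above can be sketched as a preflight handler (names and the trusted set are hypothetical): before a "non-simple" request (e.g. a PUT, or a POST with a JSON content type), the browser sends an OPTIONS request carrying `Origin` and `Access-Control-Request-Method`, and without an approval here it never sends the real request, which is what shields a never-hardened device behind the firewall.

```python
TRUSTED = {"https://app.example.org"}  # hypothetical allow-list

def handle_preflight(origin: str, requested_method: str):
    """Answer a CORS preflight (OPTIONS). Returns (status, headers).

    If this returns 403, the browser drops the actual request entirely,
    so e.g. an intranet printer never sees the attacker's PUT.
    """
    if origin in TRUSTED and requested_method in {"GET", "POST", "PUT"}:
        return 204, {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, PUT",
            "Access-Control-Max-Age": "600",  # let the browser cache the approval
        }
    return 403, {}
```

`Access-Control-Max-Age` is where the vendor-specific caching differences mentioned upthread bite: browsers cap it at different values.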
Could you elaborate on that? I can't picture the alternative you're suggesting.
Even so, a lot of people just put ‘Access-Control-Allow-Origin: *’ on everything as soon as they run into an issue, so the `*` wildcard has to ban credentialed requests altogether.
Then the server responds with 200 or 403.
Huh? They already implement authorization. (Transport security is similar but different.)
And I've seen plenty of Access-Control-Allow-Origin: * because people get frustrated with CORS, e.g. they can't allow access for *.example.org.
The reply is always "yes, you can CSRF yourself, because it's not supposed to protect against that; it's supposed to protect you from other people". In exactly the same way, CORS is there to protect you from other people. You can always hack your own user-agent to disregard CORS, but the only person you can harm that way is yourself.
That's not true. You can set Access-Control-Allow-Origin: * and validate every request in your server. The extra rules exist for the vast majority of servers that never inspect the Origin header.
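A sketch of that combination (names hypothetical): send `*` so the browser never blocks anything, but reject untrusted Origins server-side before running the handler. Requests with no Origin at all (curl, server-to-server) pass through, consistent with the point upthread that publicly routable services must be hardened anyway.

```python
TRUSTED_ORIGINS = {"https://app.example.org"}  # hypothetical allow-list

def respond(origin: str, handler):
    """Serve a request with ACAO: * while enforcing access server-side.

    The header only tells the browser it may read the response; the actual
    access decision is this Origin check, not the header.
    """
    if origin and origin not in TRUSTED_ORIGINS:
        return 403, {"Access-Control-Allow-Origin": "*"}, b""
    status, body = handler()
    return status, {"Access-Control-Allow-Origin": "*"}, body
```

One caveat from upthread still applies: `*` forbids credentialed requests, so this pattern only works for APIs that authenticate some other way (e.g. a bearer token in a header).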
But you can still use IE, Edge, Chrome, and Safari and trust that they implement CORS and most other basic security features correctly.