
The Performance Cost of CORS Requests on Single-Page Applications - ankuranand
https://medium.com/@ankur_anand/the-terrible-performance-cost-of-cors-api-on-the-single-page-application-spa-6fcf71e50147
======
tootie
My number one solution to any and all CORS issues is to never ever set up an
api.* domain. Always do /api on the same domain. If the requests are for third
parties, put a proxy on your own hostname if at all possible.
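
Something along these lines (a minimal sketch, assuming a Node/Express front
server with http-proxy-middleware; the upstream URL is made up):

```typescript
// Serve the SPA and the API from one origin: the browser only ever sees
// /api/*, so there is no cross-origin request and no CORS preflight at all.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

app.use(
  "/api",
  createProxyMiddleware({
    target: "https://internal-api.example.com", // hypothetical upstream
    changeOrigin: true,                         // rewrite the Host header
    pathRewrite: { "^/api": "" },               // /api/users -> /users upstream
  })
);

app.use(express.static("dist")); // the SPA itself
app.listen(8080);
```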

~~~
JoachimSchipper
Putting a proxy on your own hostname solves the performance problem, but you
need something that parses and re-sends requests, not just a general HTTP
proxy - otherwise the third party can, e.g., serve malicious JavaScript to
your users within your origin.
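
For example (a rough sketch of the parse-and-resend idea; the endpoint and
vendor URL are hypothetical, and this assumes Node 18+ for the global fetch):

```typescript
// Instead of blindly piping bytes through, parse the third-party response
// and re-serialize it, so the vendor can never serve HTML/JS in our origin.
import express from "express";

const app = express();

app.get("/api/widgets", async (_req, res) => {
  const upstream = await fetch("https://vendor.example.com/widgets");
  const data = await upstream.json(); // throws unless the vendor sent valid JSON
  res.json(data); // re-serialized under our own headers and content type
});

app.listen(3000);
```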

~~~
tootie
I'm assuming this is a trusted endpoint. Either something you own or a vendor
with a contract. If you're calling random endpoints from your pages, then
that's another kettle of fish.

~~~
JoachimSchipper
Yes, sorry for writing such a short message - I figured you were probably
aware of these considerations, but I was pretty sure that not all your readers
were. ;-)

------
lucideer
Making 1 request instead of 2 is definitely ideal, so following the
recommendations here of using x/api instead of api.x is good advice, if it's
low-hanging fruit. However, otherwise, is it just me or doesn't this seem like
a long-tail optimisation?

> _If you have a slow network, this could be a huge setback, especially when
> an OPTIONS request takes 2 seconds to respond, and another 2 for the data._

Firstly, if your OPTIONS is taking 2 seconds, CORS is not your bottleneck.
Even with a half-second ping due to lack of content distribution, plus DNS
resolution (which shouldn't be an issue after the first request), an OPTIONS
request is negligible in size; the bulk of that 2 seconds isn't going to be
network latency. That's a serious server-side performance issue.

Secondly, and similarly: if your OPTIONS is taking as long as your GET, CORS
isn't your bottleneck.

Yes, considering CORS latency is worthwhile if having your API on the same
domain is an easy change, but the title overstates the performance cost.

~~~
always_good
Once again, the quote is talking about a slow network, not the server taking
seconds to process an OPTIONS request.

Yes, two seconds and more can absolutely be network latency. If you don't
think this is possible, or a regular occurrence for many people, you have been
living in a bubble.

~~~
lucideer
> _two seconds and more can absolutely be network latency_

Not at all contesting this.

> _a regular occurrence for many people_

Maybe I am living in a bubble, but this seems unlikely unless your definition
of "many" is fairly liberal.

On a 2G connection you might see up to 1000ms of latency, and if you're in a
remote part of the world, geography could add an extra 500ms or more. Add DNS
lookup, the TLS handshake, and the data transfer, and you could potentially
reach 3+ seconds in total. But, apart from aspects of that being one-off (as I
mentioned above), the whole lot is an example of an extreme. Your average user
really shouldn't be seeing that.

~~~
pvorb
> this seems unlikely unless your definition of "many" is fairly liberal

In a world with more than 7 billion human beings and a large percentage of
them being users of the internet, it doesn’t matter if we are talking about 1%
or even only 0.01% of users. Any percentage will include many people.

It might only be many _actual_ users if you’re Google or Facebook, though.

~~~
lucideer
The title of the article is "The _Terrible_ Performance Cost..." (HN mods have
wisely edited it since my initial comment), which I'd say implies more impact
than the reality.

------
3pt14159
CORS has a performance cost, and that is shitty. But CORS has many benefits.

Fine-grained control over the OPTIONS response, in certain contexts, is great.
Being able to immediately ship 3rd-party functionality and know that it will
work is great. A unified API for your JS, your 3rd-party JS, and all the API
consumers out there is also great. For me it's a security and speed-of-
development thing, not a performance-while-travelling thing. This means it's
great for full-featured apps used by businesses in Canada, the US, Japan, and
most of Europe.

If a connection is seconds slow and you have a project that needs to be a
complex SPA, the right way of handling slow connections is to fall back to a
simple, server-side, bare-bones pure HTML/CSS website that sets aggressive
caches with crazy-secure HTTP headers and uses encrypted cookies to handle the
session via a separate web server that talks to the main backend API on the
session's behalf. If you use Ember, this is trivial with Fastboot[0] if you've
followed best practices, and not much work if you haven't.

Also, how you architect your API and frontend matters greatly. The reason I
like following JSON API[1] is that EmberData-like data-model mappers /
persistence libraries mean that I can supply multitudes more information about
the world in one response if I want to. Sure, I have multiplexed HTTP/2 set
up, but if A needs B, and B needs C1 through C20, and they each need their own
D, it's nice to be able to just send that all down and skip all the requests;
and of course it's always possible to design the API to be optionally told to
fetch only a subset, and everything works out of the box because these
frontend libraries can rely on the standard.
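
For instance, a compound document in the JSON API sense looks roughly like
this (the resource types are hypothetical; `included` is what lets one
response replace a cascade of follow-up requests):

```typescript
// One response carries the primary resource plus its whole dependency graph.
const response = {
  data: {
    type: "articles",
    id: "1",
    attributes: { title: "CORS and you" },
    relationships: {
      author: { data: { type: "people", id: "9" } },
      comments: { data: [{ type: "comments", id: "5" }] },
    },
  },
  included: [
    { type: "people", id: "9", attributes: { name: "Dan" } },
    { type: "comments", id: "5", attributes: { body: "First!" } },
  ],
};
```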

The only thing I wish came with CORS was a small DSL, so I could do things
like whitelist a pattern. But I get it; someone would probably screw up at
some point and for what? It's kinda nice that a technology defaulted to
_secure_ for once.
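
You can of course hand-roll the pattern matching server-side; a rough sketch
(Express, with a hypothetical domain):

```typescript
// Whitelist any subdomain of example.com by regex, then echo the matched
// origin back; Vary: Origin keeps shared caches from mixing responses.
import express from "express";

const allowedOrigins = [/^https:\/\/([a-z0-9-]+\.)?example\.com$/];

const app = express();
app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (origin && allowedOrigins.some((re) => re.test(origin))) {
    res.setHeader("Access-Control-Allow-Origin", origin);
    res.setHeader("Vary", "Origin");
  }
  next();
});
app.listen(3000);
```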

[0] [https://www.ember-fastboot.com/docs/user-guide](https://www.ember-fastboot.com/docs/user-guide)

[1] I use v1.1 because I like to stay ahead wherever I can.

[http://jsonapi.org/format/upcoming/](http://jsonapi.org/format/upcoming/)

------
vvoyer
At Algolia, we use simple requests
([https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests))
in our JSON API, so that no OPTIONS requests are made in the most important
scenarios (GET, POST).

This is a very neat trick, because you can definitely ask an API for JSON via
the Accept header
([https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept))
while staying compliant with simple requests, by using query strings and/or
multipart/form-data for route options.
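
Concretely, the difference looks like this (hypothetical URLs; Accept is a
CORS-safelisted request header, while application/json as a Content-Type is
not):

```typescript
// Stays a "simple request": GET with only safelisted headers, so the
// browser sends it directly, with no OPTIONS round trip.
fetch("https://api.example.com/search?q=shoes", {
  headers: { Accept: "application/json" },
});

// Triggers a preflight: Content-Type: application/json is outside the three
// simple values (text/plain, multipart/form-data,
// application/x-www-form-urlencoded), so the browser sends OPTIONS first.
fetch("https://api.example.com/search", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ q: "shoes" }),
});
```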

~~~
RobertRoberts
If you work at Algolia, can you address (pass along) the issue that you can't
use your search boxes with browser search keywords? (ie, right-click > "Add
keyword for this search"/"Manage Search Engines")

At this point, when I see the Algolia search box on a site, I know I won't get
a full search experience; it will be slightly crippled, with a custom
interface and a forced auto-complete that only supports a JS interface.

An example of this is vuejs.org vs developer.mozilla.org. With the Mozilla dev
site, I have 'mdn' as my search keyword (in both Firefox and Chrome) and I can
type "mdn grid-template-columns" and get results. With vuejs.org and Algolia,
I have to go through multiple clicks and a visual hunt-and-peck to find what I
am looking for. It's a productivity killer.

~~~
redox_
You should submit a GitHub issue here:
[https://github.com/algolia/hn-search/issues](https://github.com/algolia/hn-search/issues)

I'm wondering whether the
[http://hn.algolia.com/opensearch.xml](http://hn.algolia.com/opensearch.xml)
configuration could help here; it might need a bit of tweaking.

~~~
RobertRoberts
I save my criticism for systems I truly want to get better. And I actually
regret posting my original comment here and hope the sites I use frequently
just drop Algolia entirely.

(Algolia charges thousands of dollars for some of their services, and I find
it the worst search system currently used on popular sites.)

------
tveita
The preflight cache can be quite effective if you're using a single-endpoint
RPC protocol like GraphQL.
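
A sketch of the server side that makes the cache work (Express; the origin
and route are made up, and note that browsers clamp the max age):

```typescript
import express from "express";

const app = express();

// Answer the preflight once and let the browser cache the verdict, so
// subsequent POSTs to /graphql skip the OPTIONS round trip entirely.
app.options("/graphql", (_req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
  res.setHeader("Access-Control-Max-Age", "86400"); // browsers clamp this (Chrome: 2h)
  res.sendStatus(204);
});

app.listen(3000);
```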

~~~
infogulch
I noticed this the first time I opened the terminal on my app. One setting
later and it's not a problem anymore.

------
rwmj
More web app madness. The main lesson I took from Michal Zalewski's excellent
book "The Tangled Web" was that I hope I never have to write a web
application. There are far too many traps.

~~~
toddmorey
I think we are seeing big changes in the platform that actually make it a very
exciting time to be a web developer.

\- Much better tooling & testing

\- Excellent editors like VSCode

\- Javascript ES6-7 (and the renewed energy of the language)

\- GPU accelerated CSS transitions and effects

\- Web assembly

\- Modern component frameworks & state management

\- So many APIs to cover things you used to have to do yourself: payment,
transactional email, authentication

\- More expressive APIs like GraphQL

\- No real need to maintain infrastructure: publish to CDN services; consume
APIs.

Compared to development on [favorite-platform-here] it may not be as seamless,
but that's an unfair comparison. You have to compare it to other forms of
cross-platform development, and there the web really shines. It certainly has
its flaws and gaps, but I love what's currently happening in the space.

~~~
superkuh
Websites are, at least supposedly, sandboxed, so they are not as much of a
risk as running native binaries. But this is getting worse and worse as
browsers expose more and more of their host operating system's functionality.
The benefits of using a website instead of a native app are quickly
disappearing, while the drawbacks have only been somewhat mitigated. We're
getting to the point where browsers are worthy of the decades-old criticism
Emacs has received: they have eventually become an OS with many fine features
- simply lacking a good web browser.

~~~
tehlike
The main advantage of a browser is auto-updates. It enabled a number of SaaS
apps that would otherwise be very costly to replicate and maintain.

------
jonny_eh
So no preflight is needed if Content-Type is not set? Any downside to having
the API server just assume JSON?

------
jarym
Yes, this aspect of CORS is not nice.

I've used
[https://github.com/jpillora/xdomain](https://github.com/jpillora/xdomain) to
get around this and thought it was a thoroughly excellent workaround. There
are other libraries that do this too (and according to the GH page, Google use
the same approach for their APIs).

~~~
Drdrdrq
Thanks for posting the link, this is a neat idea!

------
bluepnume
I built a little library to help solve this problem:
[https://github.com/krakenjs/fetch-robot](https://github.com/krakenjs/fetch-robot)

All requests are channelled through an iframe on the domain you want to make
requests to; then the iframe publishes a policy for which requests are allowed
to go through.
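
The general shape of the trick, independent of fetch-robot's actual API, is
roughly this (a sketch; the origins and paths are made up):

```typescript
// Runs inside the iframe, which is hosted on the API's own origin. The parent
// page posts a request description; the iframe does a same-origin fetch
// (no CORS, no preflight) and posts the result back.
window.addEventListener("message", async (event) => {
  if (event.origin !== "https://app.example.com") return; // enforce the policy
  const { id, path } = event.data as { id: number; path: string };
  if (!path.startsWith("/public/")) return; // only whitelisted routes allowed
  const body = await (await fetch(path)).text();
  (event.source as Window).postMessage({ id, body }, event.origin);
});
```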

------
jondubois
I think that web apps will increasingly start using WebSockets for their APIs
instead of HTTP. HTTP was originally designed for serving static files and
it's very good at that; it's also convenient for file uploads. For exchanging
data, however, the WebSocket protocol is slightly better in my opinion; maybe
just better enough that developers might very slowly switch to it over time.
There are several things which make WebSockets appealing (see the sketch
after the list):

\- No CORS issues.

\- Data can be pushed to clients in real-time (this goes well with reactive
UIs).

\- No need to send headers with every call/response.

\- The server is aware of the client's presence (and can detect the exact
moment when they go offline).
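
From the browser, all of this is pleasantly small; a sketch (the endpoint and
message shapes are hypothetical):

```typescript
// One HTTP handshake up front; after that, every message is a bare frame
// carrying just your payload - no headers, and the server can push any time.
const ws = new WebSocket("wss://api.example.com/socket");

ws.addEventListener("open", () => {
  ws.send(JSON.stringify({ type: "subscribe", channel: "orders" }));
});

ws.addEventListener("message", (event) => {
  const update = JSON.parse(event.data);
  console.log("server push:", update);
});
```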

WebSockets add a few extra challenges related to recovering from lost
connections and rate-limiting but a lot of libraries and frameworks have
already solved these problems.

The main reason why HTTP might stick around for a very long time in the future
is because HTTP REST APIs are extremely widespread and well understood.

~~~
callumjones
The problem with WS is that it is significantly more complex than HTTP when it
comes to caching.

With WS you can't leverage a CDN-hosted solution; you would instead need some
complicated CDN that is aware of state.

I like HTTP because it just works, especially behind buggy corporate and
mobile network firewalls.

~~~
pas
You only cache GET responses, no? For those you don't need CORS anyway, and
anything that's user specific should remain strictly between you and the user,
right? (And for that WSS is perfect.)

~~~
al2o3cr
> For those you don't need CORS anyway

CORS is needed for GET if the request needs to send headers like
`Authorization`, AFAIK

~~~
lostcolony
Yes. There are a few safelisted headers (and, relatedly, content types) that
do not trigger a preflight; any GET that uses something outside of them (such
as 'Authorization') gets preflighted.

------
superasn
Now add in JWT and CSRF tokens and the payload size per request adds up even
more. It's just turtles all the way down.

~~~
pas
Why do you need JWT and CSRF at the same time? If you have the JWT you can use
that as the CSRF token.

~~~
superasn
I don't think that you can. The CSRF token must be either in the headers or in
the meta tags. Storing it inside the JWT cookie would defeat the purpose
(unless you are not using cookies at all, in which case you don't need one).

~~~
bosyr
A JWT doesn't have to be in a cookie. It may very well be a bearer token in
the Authorization header. I would assume that is the most common use of a JWT,
but that is just an assumption.

------
nour_js
I must be missing something! To avoid the cross-domain issue, how about
hitting the API endpoint from a back-end function, for example using cURL, and
simply returning the JSON response to the browser?

------
spacenick88
I wonder what the impact of HTTP/2 is on this. Presumably connection
multiplexing and keeping connections open for longer would mitigate a lot of
the latency?

