Can someone knowledgeable please explain where these workers are useful?
Serverless components within a main infra make sense - it's an easier way to deploy.
But these 'edge' functions ... what is the advantage of saving a few ms on a transaction?
I understand that we may want standard content pushed out to the edge, but in what situation is it really worth all the added complexity and risk of pushing functions out to the edge, to save a few ms?
We’re using it to customize Cloudflare’s default caching policies so that we can cache more content at the edge. For example, we can segment the cache based on the geolocation or device type of the client. We can also normalize the URLs before doing the cache lookup, by stripping query params which we know aren’t going to affect the content in the response.
This can save hundreds of ms from the response time of the initial HTTP request, which means all of the other page resources will load more quickly too.
It does add some additional complexity, but for large sites and hosting platforms, this can have very significant cost savings. It’s usually way cheaper to serve bytes from Cloudflare’s cache than to serve them from your origin.
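To make the normalization step concrete, here's a minimal sketch of stripping known-irrelevant query params before the cache lookup. The specific parameter names (utm_* tracking params, fbclid) are my assumptions for illustration, not a list from the post:

```typescript
// Query params assumed (hypothetically) not to affect the response body.
const IGNORED_PARAMS = ["utm_source", "utm_medium", "utm_campaign", "fbclid"];

function normalizeCacheKey(rawUrl: string): string {
  const url = new URL(rawUrl);
  for (const param of IGNORED_PARAMS) {
    url.searchParams.delete(param); // drop params that don't change the content
  }
  url.searchParams.sort(); // stable ordering so equivalent URLs hit the same cache entry
  return url.toString();
}
```

In a Worker you'd use the normalized URL as the cache key, so `?utm_source=newsletter` and `?utm_source=twitter` variants of the same page all resolve to one cached response.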
I wonder this too, given ultimately, 99% of non-static sites will need to reach out to a central database. So to render a dynamic page, your worker then has to go to the DB, no?
Curious what is the use case. You can cache stuff, as detailed in the post, but assuming you have huge variance in page contents per user, I can't see too much use. I must be missing something.
A lot of applications are read-heavy on their database. When this is the case, you can send reads to a read-only replica closer to the edge. There are other options, but depending on your application's architecture this might be a relatively easy way to reduce latency on a large portion of your traffic.
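One way that read/write split could look in an edge function (a sketch only; the region names and hostnames are made up for illustration):

```typescript
// Hypothetical map from edge region to the nearest read-only replica.
const READ_REPLICAS: Record<string, string> = {
  "us-east": "db-replica-use1.example.internal",
  "eu-west": "db-replica-euw1.example.internal",
  "ap-south": "db-replica-aps1.example.internal",
};

const PRIMARY = "db-primary.example.internal"; // writes always go to the primary

function dbHostFor(region: string, isWrite: boolean): string {
  if (isWrite) return PRIMARY;
  // Reads go to a nearby replica; fall back to the primary if none exists.
  return READ_REPLICAS[region] ?? PRIMARY;
}
```

The design tradeoff is the usual one with replicas: reads may be slightly stale, so this only works for the read-heavy, staleness-tolerant portion of traffic.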
> in what situation is it really worth all the added complexity of risk of pushing out functions to the edge
If you are talking about a developer's point of view, then there is no additional complexity: all the complexity is handled by the underlying platform.
> what is the advantage of saving a few ms on a transaction?
One example: if a transaction consists of a few separate sequential requests, the milliseconds add up and can affect user experience.
Also, an app might need to issue lots of requests on page load, and given that browsers limit parallel requests (traditionally around 6 per domain over HTTP/1.1), the advantage can be noticeable.
Having said that, I tend to agree that many use cases are not sensitive to a few ms of advantage.
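The "milliseconds add up" point is just arithmetic, but worth making explicit with a toy calculation (the latency numbers here are hypothetical):

```typescript
// Total time for a chain of sequential round trips: savings multiply
// by the number of trips, since each one must finish before the next starts.
function totalLatencyMs(roundTrips: number, msPerTrip: number): number {
  return roundTrips * msPerTrip;
}

// e.g. 5 sequential round trips at 80ms to a distant origin vs 20ms at the edge:
const fromOrigin = totalLatencyMs(5, 80); // 400ms
const fromEdge = totalLatencyMs(5, 20);   // 100ms
```

So "a few ms per request" can become hundreds of ms end-to-end when requests are serialized.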
It's often not actually more complex, it's simpler. With Cloudflare Workers, for example, you don't think about regions, availability zones, provisioning resources, or cold starts. You just write code, and it can scale from one request per second to thousands without any thought or work on your part, partially because of how it's designed and partially because it's scaled across so many locations and machines.
Except you need to do it in a totally new paradigm (serverless) where you can't `require` npm packages and you can't query your database except over the `fetch()` API.
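To illustrate the `fetch()`-only constraint: without TCP sockets, the database has to sit behind an HTTP endpoint, and the Worker builds an HTTP request instead of a driver call. This is a hypothetical sketch; the proxy URL and payload shape are made up:

```typescript
// Build an HTTP request for a hypothetical query proxy in front of the DB.
// On a fetch()-only platform, this replaces a normal database driver.
function buildQueryRequest(sql: string, params: unknown[]) {
  return {
    url: "https://db-proxy.example.com/query", // hypothetical HTTP proxy endpoint
    init: {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ sql, params }),
    },
  };
}

// Usage inside a Worker would look roughly like:
//   const req = buildQueryRequest("SELECT * FROM users WHERE id = $1", [42]);
//   const res = await fetch(req.url, req.init);
```

That extra HTTP hop is part of what the parent comment is getting at: the platform is simpler, but the data-access patterns change.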
Serverless platforms are not all the same.
Cloudflare uses V8 and you can't require npm packages, right.
Vercel and many other implementations use Node.js, and you can require npm packages.
If it's a real issue and you have to issue lots of subrequests, then you don't really get an advantage from all of Cloudflare's micro-optimisations.
In that situation I would suggest looking at other serverless providers, or maybe a traditional approach works better.