BTW, if anyone at Cloudflare is interested in doing the work of making GitLab Serverless https://about.gitlab.com/product/serverless/ compatible with this, we're all ears.
We're definitely interested in these integrations, but our engineering team is still pretty small and doesn't have the bandwidth... yet.
On that note (for others reading): We're hiring for the Workers team in Austin, SF, and London. https://www.cloudflare.com/careers/
Sort of shocking that Amazon or Microsoft haven't acquired them yet, given where the markets seem to be going.
The CF blog is a marketing tool, and a lot of their claims are inflated or riding a hype train.
> Roughly five percent to 10 percent of web traffic now goes through CloudFlare
https://www.wired.com/story/cloudflare-spectrum-iot-protecti... (April 2018)
> 10 trillion requests per month, which is nearly 10 percent of all Internet requests for more than 2.5 billion people worldwide
They’re also doing a lot with Rust, which I know you love, incidentally.
If anyone from Cloudflare is reading this you should consider blacklisting undesirable subdomains.
CDN extensions: People who have always wanted to be able to cache POST requests, or aggregate responses but haven't been able to do it with the limitations imposed by traditional CDNs. Workers moves what was a configuration language into a full Turing-complete programming environment which can make subrequests, interact with the Cache API directly, etc.
Pushing out to the edge: There is a long list of services which are best done close to users but don't require a huge amount of state. For example, you might want to validate access tokens before requests hit your infrastructure, or do many of the other things commonly done in an API gateway. You also might want to move performance critical components of your API which don't require much state closer to users, things like aggregating and caching GraphQL or prerendering your React app. We have a Workers KV product which gives you just enough state to handle many of these cases in beta now.
Serverless apps: This is what workers.dev is all about. The idea: if you _can_ deploy code to a network almost as large as the Internet itself, is it possible that's a better model than deploying to a single region or availability zone? Without cold starts you get instant scaling to almost any volume of traffic, which is a big advantage, to the point where this type of serverless seems faster and cheaper even if you ignore the network. As you mention, it will require better distributed storage tools, but we're optimistic those will become available in the near future.
You can read more using the tag on our blog (https://blog.cloudflare.com/tag/workers/) or community forum (https://community.cloudflare.com/c/developers/workers). Thanks for caring; a lot of people work on Workers every day, and seeing people get invested in what we're doing is everything.
Overall it's worked pretty well; it's also useful for things like setting up fully 'virtual' domains on top of an object store: set a CNAME for 'foo.bar.com' to 'example.com', then write a worker that intercepts every `foo.bar.com` request and routes it appropriately. This has worked surprisingly nicely. For example, I host one of my recent projects under https://riscv.ls0f.pw/ but this subdomain doesn't really "exist" beyond being a CNAME for example.com; in the background, requests are just routed directly to a B2 bucket subpath.
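The routing step of that setup can be sketched as a small hostname check that rewrites matching requests onto a bucket subpath. The bucket URL and path below are hypothetical stand-ins, not the actual backing store:

```javascript
// Sketch: map a "virtual" hostname onto a subpath of a backing bucket.
// The worker would fetch() the rewritten URL and return that response.
function rewriteToBucket(requestUrl) {
  const url = new URL(requestUrl);
  if (url.hostname === 'riscv.ls0f.pw') {
    // Serve the whole subdomain out of one (hypothetical) bucket subpath.
    return `https://f002.backblazeb2.com/file/my-bucket/riscv${url.pathname}`;
  }
  return requestUrl; // anything else passes through untouched
}
```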
At the moment I'm working on automating my worker deployments using GitHub Actions and seeing how writing them in OCaml/BuckleScript goes (a much more tedious process), but overall it's pretty neat and useful for writing quick routing glue and whatnot.
If anyone from the CloudFlare team is reading this, probably the most important feature I'm looking forward to is resource bindings for cryptographic keys, to be used with the WebCrypto API, so plaintext keys are not visible directly in the worker source code. Is there any ETA on when this might be GA? In my case it's the most important feature currently missing, and it's really crucial for a lot of practical use cases.
Given the specificity of your request, I'm guessing you're aware that this feature already unofficially exists, with unlisted-but-googlable documentation. :) As long as you always deploy workers via the API -- never the UI -- then it's safe to use the feature now. If you accidentally edit your script in the UI, the secret bindings might be dropped, since the UI doesn't understand them yet.
> I could simply re-upload every time but I'm not sure if this invalidates any 'hot' worker instances that are already running.
Good news: If you upload exactly the same script, your live workers won't be restarted. This is a side effect of the fact that we use a content-addressable store for scripts. (Similarly, if you go to cloudflareworkers.com and enter the exact same script twice, you'll notice the resulting URL is the same.)
Are you actually observing non-negligible cold start times, though? Usually the cold start time is so low that you can't really tell when it happens.
> Are you actually observing non-negligible cold start times, though? Usually the cold start time is so low that you can't really tell when it happens.
Honestly, no, it was just something I realized when writing my workflows -- that I don't know from the API how to tell if a worker is "up to date or not" other than downloading and comparing. All of my tests have been exceedingly simple (for now) so it was more just a cautious pre-optimization to make sure I didn't tank something later on a script unrelated to the one I was working on (my workflows upload all scripts across domains during CI, iff they changed).
Oh! Here's one more feature request, though: atomic route/filter updates for scripts would be really nice. Right now I kind of iterate through them and match/delete/update them one by one, because overlapping routes cause errors. This means doing an insert-or-update is a bit complicated -- you need to grab the list of existing routes up front and match IDs with the corresponding route paths before you can determine what to do (if it already exists, update; if it no longer exists, delete; otherwise, insert; etc.)
But it would be a huge simplification if I could say "Atomically replace the current routes with this set of routes all at once". I'm not sure how easy this is on your end, though. Now that I think of it, I guess I can just immediately wipe and insert a whole new set, though, since it's not like the current solution is atomic anyway!
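The bookkeeping described above can at least be factored into a pure diff step that decides which API calls to make. The data shapes here are hypothetical illustrations, not the actual Cloudflare routes API:

```javascript
// Given the routes currently registered (id + pattern + script) and the
// desired set, compute which API calls to make: create, update, or remove.
function diffRoutes(existing /* [{id, pattern, script}] */, desired /* [{pattern, script}] */) {
  const byPattern = new Map(existing.map(r => [r.pattern, r]));
  const wanted = new Set(desired.map(r => r.pattern));
  const plan = { create: [], update: [], remove: [] };
  for (const r of desired) {
    const cur = byPattern.get(r.pattern);
    if (!cur) plan.create.push(r);
    else if (cur.script !== r.script) plan.update.push({ id: cur.id, ...r });
  }
  for (const r of existing) {
    if (!wanted.has(r.pattern)) plan.remove.push(r.id);
  }
  return plan;
}
```

The deploy script then just executes the plan, deletes first to avoid overlapping-route errors.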
Note: try using an A record instead and set it to 192.0.2.1 (or any IP in one of the test networks) per RFC 5737: https://tools.ietf.org/html/rfc5737
Maybe using ENV like Heroku does would work
So it's not injecting environment variables, it's inserting lexical bindings into the script and then running it. But the end results are more or less the same.
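To illustrate the difference, a loader that injects bindings as lexical names might conceptually look like this. This is a toy sketch of the idea only, not Cloudflare's actual mechanism:

```javascript
// Conceptual sketch: bindings become lexical names visible to the script,
// rather than entries in a process environment.
function instantiateScript(source, bindings) {
  const names = Object.keys(bindings);
  const values = names.map(n => bindings[n]);
  // The script body can reference MY_TOKEN directly, as if it were a global.
  const fn = new Function(...names, source + '; return handler;');
  return fn(...values);
}

const handler = instantiateScript(
  'function handler() { return MY_TOKEN.length; }',
  { MY_TOKEN: 'secret-value' }
);
console.log(handler()); // → 12
```

From the script's point of view the effect is the same as an env var, which is why the two approaches feel interchangeable.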
Also, the worker includes a mini-router I wrote, making it relatively easy to redirect subdomains or subpaths of a domain to different buckets or subdirectories of a bucket, etc. It's really only like 30 lines of code, tops, but it makes it relatively easy to route multiple projects in a single bucket onto shorter, more stable names.
So it's not really anything special at all. It's just convenient and now I don't have to run anything else to serve things "safely" from my object storage. Also, if you use Wasabi, you get an S3 API endpoint, so you don't have to run proxies or write adapter code for your upload-path, which is a huge boon. I actually saved money on my upload-path by being able to get rid of my minio proxy and supporting code by using Workers in this way.
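A mini-router in that spirit is essentially a prefix table mapping short, stable names onto bucket subdirectories. All hostnames and bucket URLs below are hypothetical:

```javascript
// Prefix table: "hostname + path prefix" -> bucket subdirectory.
const routes = [
  ['blog.example.com/', 'https://s3.wasabisys.com/my-bucket/blog/'],
  ['example.com/assets/', 'https://s3.wasabisys.com/my-bucket/assets/'],
];

// Return the rewritten bucket URL for a request, or null to fall
// through to the origin.
function route(requestUrl) {
  const url = new URL(requestUrl);
  const key = url.hostname + url.pathname;
  for (const [prefix, target] of routes) {
    if (key.startsWith(prefix)) return target + key.slice(prefix.length);
  }
  return null;
}
```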
I think my favorite use case at the moment is using them as a fault-tolerant, highly available proxy. So you could have mydomain.com/blog served by a microservice that is separate from your primary monolithic app.
I 'reserved' a domain, but haven't received an email yet.
Basically it lets you run JS code on every request, much like Lambda. Differences:
1. It's JS, but not Node.js - you write against the service worker API, and they run it in vanilla V8. You'll want to bundle your code with webpack or something similar.
2. Because it's V8 and not Node, they can run many customers in the same process (as safely as multiple sites can coexist in Chrome tabs), and therefore it's cheaper than Lambda.
3. It runs on Cloudflare's edge nodes, i.e. close to the user, instead of in some AWS region on the other side of the world, so latency can be lower. Can be, because for many (most?) use cases you'll still need to call some API somewhere else, eating up most of the latency wins you might make. They do also have a distributed KV store that might help you address this, however.
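For a sense of the programming model, a minimal Worker script looks roughly like this (guarded so the sketch also runs outside the Workers runtime, where a global `addEventListener` may not exist):

```javascript
// Minimal shape of a Worker: the service worker API, no Node built-ins.
async function handleRequest(request) {
  const url = new URL(request.url);
  return new Response(`hello from the edge: ${url.pathname}`, {
    headers: { 'content-type': 'text/plain' },
  });
}

// In the Workers runtime, the fetch event is the entry point.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request));
  });
}
```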
Chrome tabs usually run in different processes. It sounds like Cloudflare is using JS isolates within the same process to run many customers.
This sounds scary to me. For security, the Chrome team assumes that any code running in the same process has access to the whole address space of that process due to all of the recent speculative execution issues (Spectre/Meltdown).
I've seen comments on HN saying this isn't a problem for Cloudflare because they do special things with the JS timing APIs and a few others to mitigate it.
It will be interesting to see if they're able to make this secure. Millisecond cold-start times would be amazing for development.
Well, maybe I'm wrong :-) my relationship with CloudFlare ends at me being a customer, there is a significant chance that my impression of how Workers (or its security setup) works is wrong. If you want to be sure, ask CF.
"The community has assumed for decades that programming language security enforced with static and dynamic checks could guarantee confidentiality between computations in the same address space. Our work has discovered there are numerous vulnerabilities in today’s languages that when run on today’s CPUs allow construction of the universal read gadget, which completely destroys language-enforced confidentiality"
CloudFlare believes this doesn't apply to them and that they have created defences which allow them to run multiple customers' code in the same address space without leaking memory. I think the burden of proof is on CloudFlare, and I've yet to see them actually publish any information about this.
Although, as you say, whether that ends up reducing latency in practice depends very much on your use case.
> Functions triggered by origin request and response events as well as functions triggered by viewer request and response events can make network calls to resources on the internet, and to AWS services such as Amazon S3 buckets, DynamoDB tables, or Amazon EC2 instances.
Seems like an interesting service for someone, to do something...
 - https://blog.cloudflare.com/building-with-workers-kv/
This is not 'luck'. You pay the $11,500 fee, and with a domain like this you would most likely get it. Many domains are available at the highest fee (previous post). The only way you would need 'luck' is if more than one person wanted it and outbid you, which is unlikely with this particular domain.
For example as of right now (way past launch) all of the following are still open at the highest rate:
bitcoins.dev (bitcoin.dev is taken already)
Honestly knowing a bit about this they could have waited and gone in at the lower tiers.
Also, the singular worker.dev is open, and as a best practice they probably should have bought that (edit: as well).
Yes, and thank you.