I'm curious how much further they might take the platform. For example, is a distributed relational data store going to happen at some point? Or is that not a space they want to go into?
Cloudflare co-founder Michelle Zatlyn likes to say, "We're just getting started."
It's a total protocol layering violation, but it makes images appear on screen much faster.
Any plans to support WebSockets terminated in the isolate (not just proxied)?
Any plans for local development? Some open-source, locally runnable or low-volume self-hostable stack that mirrors the global stack would, I think, really help dev workflow and adoption.
edit: just poking around their blog, I saw this. https://blog.cloudflare.com/making-magic-reimagining-develop...
It used to be super slow to upload the worker to Google Cloud, but it's much faster now (that's what happens behind the scenes when you do wrangler dev).
But yeah, this completely eliminates any question about cold starts or warm starts.
I'm asking because sometimes it's necessary to do some expensive work in order to respond to a request, and repeating that work on every invocation is wasteful. For example, on AWS Lambda, one of our functions launches a Chromium instance, which can take a few seconds, but because the function can stay alive after a request is done, the same Chromium instance is immediately ready on the next invocation. Other use cases involve e.g. connecting to a database (which I see CF is tinkering with over at ).
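To make that concrete, here's a minimal sketch of the warm-container pattern on Lambda; it assumes puppeteer-core and a Chromium binary are packaged with the function, and the /opt path and handler shape are illustrative, not our actual setup:

    // Sketch of the warm-container pattern on AWS Lambda.
    // Assumes puppeteer-core and a Chromium binary are bundled with the function.
    const puppeteer = require('puppeteer-core');

    let browser; // module scope survives between invocations of a warm container

    exports.handler = async (event) => {
      if (!browser) {
        // Cold start: pay the multi-second launch cost once.
        browser = await puppeteer.launch({ executablePath: '/opt/chromium' });
      }
      // Warm invocations reuse the already-running Chromium.
      const page = await browser.newPage();
      await page.goto(event.url);
      const title = await page.title();
      await page.close();
      return { statusCode: 200, body: title };
    };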
This is covered in more detail in my talk: https://www.infoq.com/presentations/cloudflare-v8/
EDIT: It looks like Julia has some Wasm support. I haven't tried it though. https://github.com/Keno/julia-wasm
CF Workers possibly wouldn't work for us because we run on AWS and would have to pay for bandwidth to and from CF (but I don't know those figures; we're very early in our research).
What is people's actual experience with OpenFaaS vs. Lambda vs. other self-hosted options vs. things like CF Workers?
I am able to work within Lambda@Edge's constraints (us-east-1 only, minimal code size, no layers, low max execution time) reasonably well, but it would be nice for them to start removing some of those limitations, as well as speeding things up (although I have no real complaints about speed).
The tradeoff between "using an external FaaS system" and "just embedding v8" is pretty important. If your user-customizable code can be reduced to basically one HTTP call, the options you mentioned will probably work fine.
When we did this, though, we found that such coarse-grained customer interactions were really irritating, and we were better off just embedding our own runtime and building a really nice JS-based API for people to use.
So if I were you, I might actually look at whether embedding Deno (https://deno.land/manual/embedding_deno) and building your own runtime API is valuable.
We went: Lambda hooks -> v8 runtime -> Firecracker VMs for our particular use case.
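To make the "build your own runtime API" idea concrete, here's a rough illustration using Node's built-in vm module rather than Deno (vm is not a security boundary on its own; the respond helper and the timeout value are made up for the example):

    // Rough illustration of embedding user code and exposing a small,
    // host-defined API to it. Node's vm module is NOT a security boundary
    // by itself; a real deployment needs isolation around it.
    const vm = require('vm');

    function runCustomerCode(source, request) {
      // The sandbox object is the entire API surface the customer code sees.
      const sandbox = {
        request,                        // host-provided input
        respond: (body) => ({ body }),  // hypothetical host API
      };
      vm.createContext(sandbox);
      return vm.runInContext(source, sandbox, { timeout: 50 }); // cap CPU time
    }

    // Customer-supplied snippet gets fine-grained access to our API:
    const result = runCustomerCode(
      'respond("Hello from " + request.path)',
      { path: '/demo' }
    );
    console.log(result); // { body: 'Hello from /demo' }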
I haven't connected a worker to a datastore (SQL or KV), so data and start times might be orthogonal to each other, but I was curious what people's experiences are.
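From the docs, the KV side appears to look roughly like this (a minimal sketch, assuming a namespace bound to the script under the hypothetical name MY_KV):

    // Worker reading from Workers KV; MY_KV is a hypothetical binding name.
    addEventListener('fetch', (event) => {
      event.respondWith(handle(event.request));
    });

    async function handle(request) {
      const key = new URL(request.url).pathname.slice(1);
      // A KV read is its own network hop, so its latency is independent
      // of how fast the isolate itself started.
      const value = await MY_KV.get(key);
      return new Response(value || 'not found', { status: value ? 200 : 404 });
    }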
> For now, this is only available for Workers that are deployed to a “root” hostname like “example.com” and not specific paths like “example.com/path/to/something.” We plan to introduce more optimizations in the future that can preload specific paths.
1. First run: time_starttransfer: 0.507267s
2. Second run: time_starttransfer: 0.035244s
So it looks like at least some parts of the cold start take much longer and aren't eliminated.
We probably should have included a discussion of this in the blog post... sorry about that.
How did they eliminate those? Are they keeping all functions warm 24/7?
It’s impractical to keep everyone’s functions warm in memory all the time. Instead, serverless providers only warm up a function after the first request is received. Then, after a period of inactivity, the function becomes cold again and the cycle continues.
For Workers, this has never been much of a problem. In contrast to containers that can spend full seconds spinning up a new containerized process for each function, the isolate technology behind Workers allows it to warm up a function in under 5 milliseconds.
So, we've always had very low cold start times, and now we've made them disappear "inside" the TLS handshake.
Or deep dive here: https://www.infoq.com/presentations/cloudflare-v8/
(Disclosure: That's me giving that talk.)
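For context, the unit being warmed is tiny; a complete, deployable Worker can be as small as:

    // A complete Worker: one fetch-event handler, loaded into one isolate.
    addEventListener('fetch', (event) => {
      event.respondWith(new Response('Hello from the edge!'));
    });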
I don't know enough about Cloudflare Workers, but function cold start latencies are not fully in control of the platform. You can hide the latency of scheduling and loading, but the function might load a ton of dependencies, and it might have its own custom init code that the platform generally has little control over. So you can hide an SSL handshake's worth of latency, and that's a nice little win, but whether that makes the user-visible cold start latency zero depends on what it was in the first place.
But I'm guessing cloudflare workers are used for relatively simple stuff, so they don't often have dependencies or complex init code.
(And sorry, I don't mean to sound too negative, this stuff is pretty cool!)
Can I use npm with Workers?
Workers has no explicit support for npm, but you can use any build tool or package manager you need to create your Worker script. Just upload to us the final, built script with all dependencies included.
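One common way to produce that single built script (assuming webpack here, though any bundler works, and the paths are illustrative):

    // webpack.config.js — bundles the Worker and its npm dependencies
    // into one self-contained script for upload.
    module.exports = {
      entry: './src/index.js',   // illustrative entry point
      target: 'webworker',       // Workers run in a Service-Worker-like environment
      mode: 'production',
      output: { filename: 'worker.js' },
    };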
Anyway, what I'm getting at is: the platform has to load a bunch of code, and sometimes that will take longer than the latency you can hide behind an SSL handshake; plus the code might run something on init.
But yes, Workers seem to be oriented towards relatively small bits of code, so maybe the vast majority of load times are a few ms.
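To illustrate the init-code point: top-level code runs when the isolate loads the script, before any request is served, so heavy setup lands in the cold-start path (buildLookupTable is a made-up stand-in for real init work):

    // Module-scope code runs once, when the isolate loads the script,
    // so its cost is paid on cold start rather than per request.
    const lookupTable = buildLookupTable(); // hypothetical expensive init

    addEventListener('fetch', (event) => {
      // Warm requests only pay for the handler body itself.
      event.respondWith(new Response(String(lookupTable.size)));
    });

    function buildLookupTable() {
      const table = new Map();
      for (let i = 0; i < 1e6; i++) table.set(i, i * i); // stand-in for real work
      return table;
    }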
The cover may be better than the original song, but this isn't "All Along the Watchtower" or "Feeling Good" as far as I can see.