
OP meant 30/50 ms in the context of "Workers is wayy cheaper, wayy faster". You can have Unbound Workers that do whatever you want, but the cheaper Bundled Workers need to stay under 50ms of CPU time: https://developers.cloudflare.com/workers/platform/limits/#w...


I was specifically trying to clarify the second half of "fit your workloads in 50ms (CPU time) or 30ms (IO time)". The only place IO time is relevant for Workers is in billing for Unbound Workers, not in whether your workload fits. The only time-based workload limits for Workers are 50ms of CPU time (Bundled), 30s of CPU time (Unbound), or 15 min (via Unbound Cron Triggers); a sketch after this comment illustrates the CPU-time vs. IO-time distinction.

I thought our Unbound Workers were supposed to be cheaper as well, but I need to double-check that.

Bundled and Unbound Workers are equally fast.
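
To make the CPU-time vs. IO-time distinction concrete, here is a minimal sketch of a module-syntax Worker (a hypothetical handler, not taken from the thread): time spent awaiting a subrequest is IO/wall-clock time and doesn't count against the CPU limits, while the hashing loop is what the limits actually meter.

    // Hypothetical Worker illustrating CPU time vs. IO (wall-clock) time.
    export default {
      async fetch(request: Request): Promise<Response> {
        // Waiting on a subrequest is IO time: the Worker is suspended and
        // burns essentially no CPU, so this does not count toward the
        // 50ms (Bundled) / 30s (Unbound) CPU limits.
        const upstream = await fetch("https://example.com/api");
        const body = await upstream.text();

        // This loop, by contrast, is pure CPU time: the thing the
        // CPU limits actually meter.
        let checksum = 0;
        for (let i = 0; i < body.length; i++) {
          checksum = (checksum * 31 + body.charCodeAt(i)) >>> 0;
        }

        return new Response("checksum=" + checksum);
      },
    };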


From what I knew, you couldn't have a Bundled Worker wait on IO or sleep for more than 30s. Maybe that isn't true anymore?

For most workloads, I'd reckon that Unbound Workers are about the same cost as Bundled. In fact, Unbound will be ~2x cheaper than Bundled if your average workload completes within 50ms IO or 10ms CPU.


It's not true now, AFAIK [1]. I'm not sure it was ever true, as it's kind of core to how Workers works, but I've only been here for just over a year and can't find anything to suggest this was ever the case.

As for cost, a Bundled Worker definitely has a price advantage if you have a CPU-light but IO-heavy workload. If my math is right, Unbound is cheaper up to roughly 220ms of wall-clock time (I used 100M requests as an example; a rough sketch follows this comment). At about 220ms to send the response fully, Unbound costs the same as Bundled, and it only gets more expensive the longer the response takes. Note that this isn't the RTT to your origin; it's the total request time, so it matters if you're doing lots of round-trips to origins, proxying WebSocket messages back and forth over a long period, proxying a large response body from somewhere else, etc. This gets more complicated because of an important optimization we put in that makes Unbound much cheaper if you're just proxying a response without modifying it: billing stops once you return the Response, so the Unbound Worker now has to be meaningfully involved in generating the Response body for billing to continue until the response finishes sending to your client [2].

[1] https://stackoverflow.com/questions/68720436/what-is-cpu-tim... [2] https://blog.cloudflare.com/workers-optimization-reduces-you...
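
To sanity-check those numbers, here's a back-of-the-envelope sketch. The dollar figures are assumptions based on the list prices at the time (they aren't stated in this thread): Bundled at $0.50 per million requests, Unbound at $0.15 per million requests plus $12.50 per million GB-seconds of duration, billed at 128 MB.

    // Rough pricing sketch. The prices below are assumed list prices
    // at the time of writing, not figures taken from the thread.
    const BUNDLED_PER_M_REQ = 0.50;      // $ per 1M requests
    const UNBOUND_PER_M_REQ = 0.15;      // $ per 1M requests
    const UNBOUND_PER_M_GB_S = 12.50;    // $ per 1M GB-seconds of duration
    const MEMORY_GB = 128 / 1024;        // Unbound bills duration at 128 MB

    // Cost of 1M Unbound requests at a given average wall-clock duration.
    function unboundCostPerMillion(wallClockSeconds: number): number {
      return UNBOUND_PER_M_REQ +
        UNBOUND_PER_M_GB_S * MEMORY_GB * wallClockSeconds;
    }

    // Wall-clock duration at which Unbound costs the same as Bundled.
    const crossoverSeconds =
      (BUNDLED_PER_M_REQ - UNBOUND_PER_M_REQ) / (UNBOUND_PER_M_GB_S * MEMORY_GB);

    console.log(crossoverSeconds);            // ~0.224 s, the "roughly 220ms" above
    console.log(unboundCostPerMillion(0.05)); // ~$0.23 vs $0.50 per 1M at 50ms

The same arithmetic also reproduces the "~2x cheaper within 50ms" figure mentioned earlier in the thread.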


There's no time limit on requests, as long as the client is still connected. However, two things you might be thinking of:

* If you use waitUntil() to schedule async work that completes after the HTTP response has been sent, this work is limited to 30 seconds (a sketch follows this comment).

* In general, if a request runs longer than 30 seconds, the chance of random cancellation increases a lot. For example, when we upgrade the Workers Runtime to a new version, we will give in-flight requests 30 seconds to finish before the process exits, which will cancel all remaining requests. (Of course, any application that relies on long-running connections needs to handle random disconnects regardless, due to the general unreliability of networks.)
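
As a minimal sketch of the waitUntil() case (a hypothetical handler, not from the thread): the response is returned immediately, and the background work scheduled via ctx.waitUntil() is the part subject to the 30-second post-response limit.

    // Hypothetical Worker: the response is sent right away, and the promise
    // passed to ctx.waitUntil() keeps running after the response. That
    // post-response work is what the 30-second limit applies to.
    export default {
      async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
        // Assumed example of background work: shipping a log entry elsewhere.
        const logUpload = fetch("https://logs.example.com/ingest", {
          method: "POST",
          body: JSON.stringify({ url: request.url, at: Date.now() }),
        });

        // Keep the Worker alive for this promise after the response returns.
        ctx.waitUntil(logUpload);

        return new Response("ok");
      },
    };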


Thanks kentonv and vlovich123. I stand corrected.

I've been using Workers since 2019 and haven't quite kept up with Cloudflare's pace of innovation since then. It has been dizzying. Looking forward to handling TCP and WebRTC workloads (announced last year) with Workers next.


Double-checked, and Unbound Workers are indeed priced more economically than Lambda@Edge. I had misread their pricing as 8x cheaper than it actually is, which is why I was confused.

So TLDR Workers is faster, can run longer, and costs less than Lambda@Edge.



