Cloudflare Workers Announces Broad Language Support (cloudflare.com)
104 points by jgrahamc 10 days ago | 45 comments





Okay, that Python example is pretty cool - plain virtualenv + pip install, that's a long way from where Lambda is. SAM comes close, but this looks nicer and less painful to work with. Although it's probably kinda misleading, since I imagine most of Python is out of reach because of the transpilation. Does `pip install <any popular dependency>` actually work?
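
For reference, the Python template in the announcement is Transcrypt-based, and the worker itself is roughly this shape (a sketch from memory, so treat the details as approximate; `__new__` is Transcrypt's way of calling a JavaScript constructor, and `Response`/`addEventListener` come from the Workers runtime):

```python
# Sketch of a Transcrypt-flavored Python Worker (details approximate).
def handle_request(request):
    # Response is the JS Response constructor from the Workers runtime;
    # __new__(...) is how Transcrypt expresses `new Response(...)`.
    return __new__(Response('Hello from Python, transpiled to JS!', {
        'headers': {'content-type': 'text/plain'}
    }))

# addEventListener is the service-worker-style entry point.
addEventListener('fetch', lambda event: event.respondWith(handle_request(event.request)))
```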

> The main requirement for compiling to JavaScript on Workers is the ability to produce a single js file that fits in our bundle size limit of 1MB.

Compared to Lambda, that's a pretty restrictive size limit. Theirs is 50 MB, or 250 MB if you upload the package via S3. Is there any plan to change that to be closer to Lambda, or will anything with larger dependencies always be out of scope?


I come from an embedded perspective, but 50MB+ to me says either "assets are bundled in" or "more complexity than a full operating system + userspace". I can think of a few products that might hit the latter, but not many. Can you help me understand if there's some massive hidden complexity here that I'm simply unaware of, or if this is a pretty rare limit to approach?

It's really about dependencies. An example would be something like data analysis in Python - you want to read some data, do some fairly complex numeric computation on it, and write the result out somewhere else. Under some circumstances the Lambda trade-offs might be pretty good here - you might get cold starts of a few seconds, but that's still faster than spinning up a new server to run your one-time job, and you're not paying to keep servers up all the time.

Some of the libraries you might want to use for that numeric analysis - things like NumPy and Pandas - are pretty large. There's not really a culture of small libraries in Python the way there is with a lot of JS libraries, probably because the trade-offs make less sense for the way Python is traditionally deployed. And there's no concept of tree shaking where dependencies get shrunk to only what you're using either - although maybe you could theoretically make that work on the JS side after transpiling the Python.

But basically, if you pull in just about any common Python data analysis library you're going to be in that 1-50 MB range, and if you have quite a few you can easily be over 50 MB. That is a workload that sometimes works well as a Lambda, and I'm curious if it's one Cloudflare are thinking about for Workers too.
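
To make the workload concrete, the kind of one-off job described above is something like this (an illustrative sketch; the bucket and key names are made up, and pandas plus its transitive dependencies alone blow well past 1 MB):

```python
# Illustrative sketch of a one-off analysis job on Lambda.
# Bucket and key names are hypothetical.
import io
import boto3
import pandas as pd

def handler(event, context):
    s3 = boto3.client('s3')

    # Read some data from object storage.
    obj = s3.get_object(Bucket='my-data-bucket', Key='input/events.csv')
    df = pd.read_csv(io.BytesIO(obj['Body'].read()))

    # Do some non-trivial numeric work, e.g. a grouped aggregate.
    summary = df.groupby('user_id')['value'].agg(['mean', 'sum', 'count'])

    # Write the result back out somewhere else.
    out = io.StringIO()
    summary.to_csv(out)
    s3.put_object(Bucket='my-data-bucket', Key='output/summary.csv',
                  Body=out.getvalue().encode('utf-8'))
    return {'rows': len(summary)}
```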


I hadn't considered the transpiling case. That seems like a pretty reasonable and common scenario, thanks.

Things like scikit-learn can add up fast.

Yeah, the way I read it, it doesn't say that it can bundle dependencies at all.

At the same time, the runtime limit has been upped from 50ms to 15 minutes, with much faster cold starts, according to https://www.theregister.com/2020/07/27/cloudflare_serverless.... That might get them more customers.

It’s always been confusing because that 50ms is actual CPU time, not wall-clock time.

Whilst it’s great to be billed on actual usage, it’s hard to tell how expensive a function is going to be.
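
A sketch of why the two diverge, in the Python-to-JS flavor discussed elsewhere in the thread (the runtime globals `fetch`, `Response`, `addEventListener` and Transcrypt's `__new__` are assumed):

```python
async def handle(request):
    # Waiting on the network can take hundreds of ms of wall-clock time,
    # but essentially none of it counts as CPU time against the 50ms budget.
    upstream = await fetch('https://example.com/slow-endpoint')
    body = await upstream.text()

    # Only the actual work done here (transforming the body) burns CPU time.
    # body is a JS string after transpilation, hence the JS method name.
    return __new__(Response(body.toUpperCase(), {
        'headers': {'content-type': 'text/plain'}
    }))

addEventListener('fetch', lambda event: event.respondWith(handle(event.request)))
```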


I appreciate their support for customization, but what I as a customer really want is pre-built, tested, supported, maintained features.

For example, CloudFront has features that let me create redirects so that different content goes to different origins and prefixes, but Cloudflare does not have this option. You can write a .js edge worker to do it, but I'd rather not trust the integrity of my site to some crappy JavaScript that I threw together, and I'd rather not have to maintain my custom code indefinitely. (Actually, Cloudflare's support technicians will offer to write the custom code, but it's still not something I would like to have sitting out there outside of a VCS and SDLC, with bit rot, tech debt and other nasties lurking in the future.)
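
For a sense of scale, the throwaway edge logic in question is only a handful of lines; a sketch (in the Python-to-JS flavor covered elsewhere in this thread, with made-up paths and targets) might look like this:

```python
# Sketch of prefix-based redirects at the edge; paths/targets are hypothetical.
REDIRECTS = [
    ('/old-docs', 'https://docs.example.com'),
    ('/blog', 'https://blog.example.com'),
]

async def handle(request):
    url = __new__(URL(request.url))           # URL is the JS URL constructor
    for prefix, target in REDIRECTS:
        if url.pathname.startsWith(prefix):   # JS string method after transpilation
            # Redirect, preserving the rest of the path.
            return Response.redirect(target + url.pathname.slice(prefix.length), 301)
    return await fetch(request)               # everything else passes through to the origin

addEventListener('fetch', lambda event: event.respondWith(handle(event.request)))
```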


We'll be releasing this functionality this year—some very soon, depending on what exactly you're trying to do.

You'll be able to self-serve these in the Firewall Rules engine, which is being generalized to handle "Rules" broadly speaking (including Page Rules such as redirects, rewrites, etc.).

If you drop me an email (pat at cloudflare dot com) I'll get you connected with the right PM who can put you on the beta list.


This is good news. I've only ever used workers once, to do exactly this, and I found the documentation to be a little lacking.

Scala and Kotlin compile to WebAssembly too, right? I'd think that would be better.

Can you access a running Cloudflare Worker from the internet? It would be fun to spin up ephemeral fibridge[0] instances to provide byte-range access to large local files for backend services.

[0]: https://github.com/anderspitman/fibridge-proxy-rs


Access is only via the service workers API. Basically it's exactly what your browser (Chrome, anyway) will run as a service worker, but running a little farther away. There are some extra APIs available as well, for key value storage and caching.
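
For example, with a KV namespace bound to the worker, those extra APIs look roughly like this (a sketch in the Python-to-JS flavor from the announcement; `MY_KV` is a hypothetical binding name, and `caches`, `Response`, `addEventListener` come from the Workers runtime):

```python
async def handle(event):
    request = event.request

    # Cache API: serve from the edge cache if we already have a copy.
    cached = await caches.default.match(request)
    if cached:
        return cached

    # Workers KV: eventually-consistent key/value storage at the edge.
    value = await MY_KV.get('greeting')
    response = __new__(Response(value or 'no greeting set', {
        'headers': {'cache-control': 'max-age=60'}
    }))

    # Populate the cache without delaying the response.
    event.waitUntil(caches.default.put(request, response.clone()))
    return response

addEventListener('fetch', lambda event: event.respondWith(handle(event)))
```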

Would this be a good target for ClojureScript? It is not in the examples but it might be possible to add a template.

Yeah, I'd expect Clojure to work. It's on my list of languages to investigate, but if you want to take a crack at it, submit a PR; see details at https://github.com/cloudflare/template-registry/blob/master/... Feel free to ping me @ koeninger on GitHub if you need help on the PR.

I'm currently looking into porting some Lambda functions over, but I still haven't seen anything that would indicate I can compile inside a Cloudflare VM and then push custom binaries up to Workers to run.

Curious, what kind of workloads do you find valuable to run out on the edge of a CDN?

In the case of CloudFront, I have only ever used it to hack up some auth controls for something real quick and dirty.



The speed advantage of edge compute shrinks when most of the compute for your application has to happen back in a data center anyway.

The fabled IoT and self-driving car examples, as well as services like Stadia, could never be run in a Worker, which is why, for Workers at least, speed isn't the primary advantage.


I don't really see the data locality problem being solved either. Sure, data travels the shortest path to Cloudflare, and you could use Cloudflare KV for storage. But in reality, who uses a KV store for everything?

One would still have to run databases with regional sharding in the critical locations (I'm thinking CockroachDB, even though I've never run it myself). And if you're already running a database, you could be running code there too, and just proxy requests through to yourself in the closest location (maybe this is what all the rage is about with Workers: proxying differently depending on origin region), but I could see this being done without Turing-complete language support.

EDIT: I do see how Cloudflare KV could be used as the caching layer and help there, but that's all I see. I would love to hear how someone else can/has made use of this.
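
A sketch of the region-based proxying idea above (again in the Python-to-JS flavor from the announcement; `request.cf.country` is the geolocation hint Workers expose, and the regional hostnames are made up):

```python
EU_COUNTRIES = ['DE', 'FR', 'NL', 'GB', 'SE']   # illustrative, not exhaustive

def pick_origin(request):
    # request.cf carries Cloudflare's geolocation metadata for the request.
    country = request.cf and request.cf.country
    if country in EU_COUNTRIES:
        return 'eu.db-region.example.com'
    return 'us.db-region.example.com'

async def handle(request):
    url = __new__(URL(request.url))
    url.hostname = pick_origin(request)   # proxy to the closest regional deployment
    return await fetch(__new__(Request(url.toString(), request)))

addEventListener('fetch', lambda event: event.respondWith(handle(event.request)))
```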


Same here, with my use cases being mostly auth and control over cache objects.

I've found Fastly's Varnish recipes interesting though: https://developer.fastly.com/solutions/recipes


The headline is misleading in my opinion. You can't just write workers in Python now. Instead, they are providing a template for how you could call an existing Python-to-JavaScript transpiler to convert your Python code into JavaScript. They still only run JavaScript.

So instead of writing your worker in JS, you get to write it in Python syntax, except all the functions you call are still JS functions, some of the syntax is non-standard, and most of the Python standard library is missing.
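
Concretely, inside a transpiled worker you end up writing things like this (illustrative; exactly how much of the stdlib Transcrypt emulates varies by version):

```python
# Python syntax, JS semantics: JSON, Date and console here are the JS
# runtime's globals, not Python modules. Treat this as a sketch.
def build_payload(data):
    body = JSON.stringify(data)               # where you'd reach for json.dumps
    stamp = Date.now()                        # where you'd reach for time.time
    console.log('built payload at', stamp)    # where you'd reach for logging/print
    return body
```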

I don't think many people are going to be terribly excited by that. You just get more headaches with very little upside. It's more like a neat demo than something you would want to seriously use.


Agreed that it's far from perfect, and there's a lot of work to be done to make sure all the common native code libraries are well-supported.

But what I find really interesting is that we are finding it a lot easier to transpile other "managed" languages to JavaScript than to target WebAssembly. Building Python to Wasm hasn't worked well so far because the code footprint is quite large: it brings along a whole language runtime, including interpreter, garbage collector, API glue, etc. Whereas if we transpile to JS, we get V8's built-in GC, and it's much easier to call into existing high-level JS APIs. The result is a nice, low code footprint, which can be distributed to thousands of edge nodes at low cost and can start up very quickly.

Both the transpiling and Wasm approaches come with challenges when you want to support native-code extensions: the corresponding native library itself needs to be ported to Wasm. (Once it has been, it can be called by Wasm code or JS code, so whether the rest of the app is Wasm or transpiled is not critical.)

There are plans to extend Wasm to support built-in GC. Maybe then Wasm will make sense for managed languages. But, right now we're seeing Wasm is best-suited to C/C++/Rust, while transpiling is a better way to support most other languages.

(I'm the tech lead for Workers, though I wasn't directly involved in this specific project.)


I totally understand that CPython is too heavy to build for WebAssembly and the limitations of Transcrypt are very reasonable for what the Transcrypt project is aiming to do. It's neat that it works at all as a backdoor way to run some Python-looking code on CF and it's a cool OSS project.

But I also don't think that a light transpiler is ever going to get anywhere close to a "real" Python developer experience. Python has an enormous standard library and Transcrypt implements almost none of it. The standard library is core to what makes Python Python. If I can't use any of the built-in functions that I know and love, I don't think it's really fair to claim I'm using Python except in a very superficial way.

I think the disappointment is that I was excited to have an alternative to Google Cloud Functions and AWS Lambda (which both more or less support Python "for real"). This feels more like 20% support in my opinion.


Agree - I do think they have something novel with Workers, but with the upsides (small overhead, SW-like API, WASM) come downsides (poor native multi-language support, the JS API driving other languages).

But this announcement is all “man behind the curtain”: you’re not really writing Python, but instead JavaScript-as-Python, without any of the good parts of Python or modern JavaScript. Instead you’re likely to end up with a Franken-program that is neither.

As a way to appeal to some users, sure: this would have been a solid set of guides/tutorials on the edge of what they support. But as a major announcement, it feels like Cloudflare is trying to trick users into seeing Workers as something it’s not.


Also, it has to fit in < 1 MB, and you won't be able to import anything that uses C extensions. Considering how bloated Python projects are (no dead code elimination whatsoever), I don't expect this to be very usable in practice.

I wonder why they don't target WASM in addition to JS? In that case, anything that compiles to WASM would be fair game (I still wouldn't recommend Python there, but it would at least be less kludgy for other languages).


Cloudflare Workers does support Wasm! I've written Workers in C before and compiled them with Emscripten.

Check this one out: I was able to compile the Lua interpreter to Wasm in order to run Lua code on the fly with a Worker. It's absolutely a stupid toy, but it's still pretty fast, and gives you a decent idea of just how powerful Workers can be.

https://github.com/veggiedefender/lhp


[flagged]


I wouldn't be surprised if, when the CTO posts something on HN, a chat/email goes out letting everyone in the company know to visit the article on Hacker News, with the unwritten but implied suggestion to upvote it.

That would be a bad idea as HN has a voting ring detector. If you look at my post history you'll see which Cloudflare blogs got upvoted and which went nowhere.


> Of course, this comment will get downvoted, which just proves the point.

No, it really doesn't "prove the point." Your entire comment is violating HN guidelines[1] left and right, so of course it is getting downvoted (and flagged), but not because you've uncovered some grand conspiracy.

The cloudflare posts that get upvoted are just actually interesting to the HN target audience, myself included.

[1]: https://news.ycombinator.com/newsguidelines.html


> The cloudflare posts that get upvoted are just actually interesting to the HN target audience, myself included.

And it would have been fine if it were you who submitted them, not the CTO, who uses HN primarily for promotion, which is something the guidelines explicitly forbid [1]. But the mods obviously allow corporations to violate the guidelines, and it's getting pretty annoying.

[1] Please don't use HN primarily for promotion. It's ok to submit your own stuff occasionally, but the primary use of the site should be for curiosity.


As far as "vote rings" are concerned, the CTO (and others) would emphatically remind people not to upvote to ensure we didn't end up getting flagged.

I thought Cloudflare switched to hCaptcha?

hCaptcha's accessibility is poor. You have to make an account, log in, and set a cookie if you're unable to work with their image captcha.

HCaptcha founder here.

We think this is by far the most accessible solution: for example, audio captchas discriminate against all the people with auditory processing difficulties, as well as some with visual impairments.


There doesn't seem to be anywhere on the widget that actually tells people to go to your website and register.

I must be the only dev on the planet who doesn't think Cloudflare is that great.

They are the last of the major CDNs to offer this sort of functionality — code you can inject at different points along the request.

Yes, they did a huge service to the internet by giving everyone free SSL, but their control panel is maddening and it's very hard to understand what each switch/knob is actually doing behind all of the marketing speak.

I think the free DNS is great too, but I see all this Workers stuff as a catch-up "we do this too" feature and not a new thing. They aren't doing it better than anyone else.


> They are the last of the major CDNs to offer this sort of functionality

Eh? Which other CDNs let normal (non-big-enterprise) users deploy Turing-complete code to the edge before Workers was launched in 2018?

I can only think of Lambda@Edge, which itself was quite new at the time. But maybe I'm forgetting some. Which ones are you thinking of?


Fastly?

VCL isn't Turing-complete, and their new thing (still in closed beta, I think?) was definitely after Workers.

No wireless. Less space than a Nomad. Lame.

> their control panel is maddening and it’s very hard to understand what each switch/knob actually is doing behind all of the marketing speak

Really? I find their control panel to be quite straightforward and intuitive. Certainly more so than AWS, GCP or Azure (shudder).


> They are the last of the major CDNs to offer this sort of functionality — code you can inject at different points along the request.

That is just false. There are many others, like CloudFront's Lambda@Edge.


Isn't that what the GP said? Cloudflare are the last.

More stuff on Cloudflare? If they are down, even more of the Internet goes with it?


