
You can now run WebAssembly on Cloudflare Workers - kentonv
https://blog.cloudflare.com/webassembly-on-cloudflare-workers/
======
zackbloom
Each person can take their own meaning away from this, but for me the most
impressive part will always be @kentonv's hand-rolled libc replacement:
[https://github.com/cloudflare/cloudflare-workers-wasm-demo/b...](https://github.com/cloudflare/cloudflare-workers-wasm-demo/blob/master/bootstrap.h)

~~~
citilife
Everyone who's in CS at UIUC has to write their own libc replacement:

[http://cs241.cs.illinois.edu/malloc.html](http://cs241.cs.illinois.edu/malloc.html)

There is (or at least was) a scoreboard you can compare against. By the end
of the MP, there are usually a handful beating libc.

The thing is, libc is battle tested, so I'd always trust it over a hand-
rolled solution. It's always interesting to see improvements, though (usually
at a cost).

~~~
weberc2
> libc is battle tested

I'm confused; isn't libc an interface and not an implementation?

~~~
tejasmanohar
OP probably means glibc

------
elsigh
There is such a steady stream of excellent work and innovation coming from
Cloudflare these days - it's pretty amazing.

------
writepub
Documentation is sparse on why Emscripten-compiled wasm modules are
incompatible. Please provide more information.

~~~
kentonv
It's a very minor incompatibility:

Traditionally, all WebAssembly modules are essentially eval()ed. You need
JavaScript to download the module into an ArrayBuffer or the like, then pass
that to the WebAssembly API to compile it.

However, in Cloudflare Workers, we didn't want you to have to fetch
WebAssembly remotely at startup. Instead, you upload your WASM module to the
Cloudflare configuration UI/API together with your JavaScript code. At
startup, the WASM is compiled and the resulting `WebAssembly.Module` appears
as a global variable in your script, which you can then instantiate.
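
For illustration, a minimal sketch of the two delivery models. The wasm bytes below are a hypothetical hand-assembled module exporting `add(a, b)`, and `MY_WASM` is an assumed binding name, not anything Cloudflare ships; in a real Worker the global's name is whatever you configure at upload time.

```javascript
// Hypothetical minimal wasm module exporting add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, // code section:
  0x6a, 0x0b,                                           //   local.get 0/1, i32.add
]);

// Traditional model: JavaScript obtains the bytes (e.g. via fetch)
// and compiles them itself.
const compiledModule = new WebAssembly.Module(wasmBytes);

// Workers model: the platform pre-compiles the module and exposes it
// as a global. MY_WASM here stands in for that platform-provided global.
const MY_WASM = compiledModule;
const instance = new WebAssembly.Instance(MY_WASM, {});
console.log(instance.exports.add(2, 3)); // 5
```

Either way, instantiation itself is the same; the difference is only in who compiles the bytes into the `WebAssembly.Module`.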

Emscripten normally automatically generates JavaScript for you to load your
WASM. But Emscripten's generated script doesn't understand this delivery model
where the module shows up as a global variable. It should be trivial to add
support, but it would be awkward for us to try to submit a patch upstream
without the functionality being public yet.

~~~
buu700
Actually, Emscripten has a built-in flag (SINGLE_FILE) to embed the wasm code
as base64 (a flag which, incidentally, I'm the author of).

~~~
hackcasual
May I recommend also supporting hex encoding the WASM? I think you'll find
that while your file size is larger, it's actually significantly smaller once
gzipped.

Edit: Just tested this on [https://public.tableau.com/vizql/v_public-release1809140800/...](https://public.tableau.com/vizql/v_public-release1809140800/javascripts/runtimeweb.wasm)

    
    
      runtimeweb.wasm         2,484,043
      runtimewebwasm.b64      3,355,640
      runtimewebwasm.hex      4,968,086
      runtimewebwasm.b64.gz     974,065
      runtimewebwasm.hex.gz     701,052
      runtimewebwasm.b64.br     718,918
      runtimewebwasm.hex.br     466,221
    

So with gzip -9, the hex encoding is 72% of the base64 size; with brotli
(defaults), it's 65%.

~~~
buu700
Hmm, thanks for the tip, and thanks for testing that out so I don't have to!
It hadn't occurred to me that that would be the case, but it makes some sense
after reading through
[https://stackoverflow.com/q/38124361/459881](https://stackoverflow.com/q/38124361/459881).
I'll go ahead and open an issue with emscripten about this.

~~~
hackcasual
Yep, it's one of those counterintuitive things. Another benefit of using
hex is that the decoder is a lot simpler and easier for the JIT to vectorize.
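
A minimal hex decoder along those lines (a sketch, not Emscripten's actual loader) is just a fixed-window loop with no state carried between iterations, unlike base64, where 4 characters fold into 3 bytes with bit shifting across boundaries:

```javascript
// Decode a hex string into bytes: each output byte comes from an
// independent 2-character window, which keeps the loop branch-light.
function hexDecode(hex) {
  const out = new Uint8Array(hex.length / 2);
  for (let i = 0; i < out.length; i++) {
    out[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  }
  return out;
}

console.log(hexDecode("0aff48")); // Uint8Array [ 10, 255, 72 ]
```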

------
IanCal
Really interesting development, but I do wonder how well this fits with a
5-50ms CPU time limit.

Are there plans to change this? Or allow charging on higher use or something?

~~~
kentonv
Indeed, introducing WebAssembly paradoxically creates more demand to increase
the CPU time limits -- because with WebAssembly, it's actually reasonable to
imagine doing signal processing in Workers, whereas in pure JavaScript it made
less sense. We'll probably need to go back and re-evaluate the CPU time limits
in the near future. In the meantime, feel free to contact us if you have a
specific use case in mind that doesn't seem to be fitting the limits, and
we'll figure something out.

------
devwastaken
The web was supposed to be open, but if the technologies are only well
implemented privately, it's not open at all. Props to Cloudflare, certainly,
but this is a further nail in the coffin of open software. Wasm was supposed
to be usable outside the browser, and to date there are no well-made libraries
that deliver on it. It's all experimentation in the open. Technology that I
cannot use well in my own applications is effectively proprietary.

~~~
ryanworl
You can use WASM from Node. There is even an open source module called
“isolated-vm” which implements a similar security scheme to the one employed
by Cloudflare. You could install that on whatever you want and run essentially
the same thing (minus the rest of Cloudflare, obviously).

~~~
devwastaken
[https://blog.cloudflare.com/introducing-cloudflare-workers/](https://blog.cloudflare.com/introducing-cloudflare-workers/)
The original Workers release explains why they don't use Node and instead
build with V8 directly.

Node is not a cure-all. It's for prototyping with wasm. You need significantly
more integration with the execution environment to have DoS protection. How
Cloudflare does this I don't know. They'd have to remove features like
multithreading and continually patch around V8, AFAIK. Even SpiderMonkey's API
is unusable in a server context.

~~~
ryanworl
I know what Cloudflare Workers are. Did you look at isolated-vm? It uses the
V8 isolate API exactly how Cloudflare describes using it, except it is
exposed as a Node module, so you can write the “parent” process in JavaScript
instead of C++. It exposes memory limits and wall and CPU time limits. From
what I can tell, it is sponsored by the Fly.io CDN for exactly the same use
case as Cloudflare Workers.

~~~
devwastaken
I only found mention of V8 isolates when searching for them specifically,
coming up only in [https://blog.cloudflare.com/serverless-performance-compariso...](https://blog.cloudflare.com/serverless-performance-comparison-workers-lambda/)
and a few comments on HN. I don't see any description of their use other than
a summary of 'we use V8 isolates', which begs a /r/restofthefuckingowl. Even
an interview with KV doesn't reveal much:
[https://softwareengineeringdaily.com/wp-content/uploads/2018...](https://softwareengineeringdaily.com/wp-content/uploads/2018/02/SED513-CloudFlare-Workers.pdf)
An isolate also doesn't do anything for sandboxing, from what I can tell.
Cloudflare quotes 'That said, we have added additional layers of our own
sandboxing on top of V8.' I do not see any actual technical information on
how they're using V8.

V8, and by extension isolated-vm, by itself does not cover things like fetch
if we're talking JS:
[https://github.com/laverdet/isolated-vm/issues/63](https://github.com/laverdet/isolated-vm/issues/63)
This is an immediate killer, as JS browser features are part of why a JS
engine is wanted.

Having Node handle these things is a time bomb of debugging, where instead of
debugging V8 you're debugging Node. There are Node version problems with
stability, as mentioned in the isolated-vm README.

DoS goes beyond just CPU time and memory. Wasm is being treated as an
environment of its own with some lower-level accessibility. Unless the API
allows feature whitelisting (I can't find any docs relating to that), I don't
see anything stopping thread creation. I'd be interested to know how
Cloudflare prevents these features. isolated-vm doesn't appear to interact
with the V8 API with options relating to this.

Fly.io has their Fly engine, meant for local development so you can deploy
onto their servers:
[https://github.com/superfly/fly](https://github.com/superfly/fly)

------
mwcampbell
What's the limit on code size? I imagine we'll have to be careful about
bringing in large existing libraries.

~~~
rita3ko
The limit on the code (the Worker + WASM) after compression is 1MB. Please
reach out if you run into limitations with it (rita at cloudflare).

~~~
justinclift
Heh Heh Heh

The minimum size of wasm files generated by Go (so far) looks to be 2MB. That
should improve over time.

Go wasm is currently hard-coded to target browser environments too. Another
thing which should improve down the track.

~~~
kentonv
I believe the 2MB figure is before compression. After, it's 500k?

~~~
justinclift
Good point. I've not tried compressing them personally, but people have indeed
mentioned the 500k-after-compression thing before.

