I wish they added some way of exporting wasm funcs as well so that they could be called from the host. The TinyGo wasm target supports both exports and imports. The other thing that's worse compared to TinyGo is that the generated binary size seems to be about 10x larger. From my brief look at the transpiled .wat printout, a lot of the included funcs aren't called from anywhere...
Exports are something we'd like to work on but it turns out it's pretty complicated to reconcile the Go runtime with the WebAssembly environment, especially when there's only a single thread. We'll get to exports as soon as we can but it may require Wasm threads to be stable first.
Just curious, what state is it (thread support) in now? I saw wasmtime already supports some pthread-style threads, so I assumed the proposal had already been accepted, but it's super hard to actually figure out what state anything is in with wasm…
Don't know about the server side, but I've been using threads in the browser for ~2 months and haven't hit any bug specific to them yet. I use them with both Rust (wasm-bindgen with async/await) and C (Emscripten with pthread support). HTTPS plus certain headers (cross-origin isolation) is required for `SharedArrayBuffer`.
I still build a single-threaded binary for Firefox, and fallback to it if `SharedArrayBuffer` is `undefined` or if the `WebAssembly.Memory` constructor fails (some iOS devices might throw when `shared` is `true` due to a bug).
Yes, it's been in phase 3 for what, 2-3 years at this point (judging by when it landed in browsers)? No ETA and no next steps. The parent post says they are waiting until "it's stable", so I'm assuming phase 4. It meets the browser-support and "at least one toolchain" criteria[0] (Zig), and seemingly all the other conditions too, except maybe "CG consensus", whatever that means, so for all I know it could take anywhere between tomorrow and a few years from now...
FWIW we simulate exports by instead having `main` call an imported function that blocks until it's ready to return with the needed data.
So instead of:
`host -> call foo on guest -> return to host`
`host -> call guest main -> call foo on host -> host returns when ready -> guest calls foo when done`
FWIW the scheduler (and therefore goroutines) doesn't work in Go if you're not running under `main`, so any time you call a custom export and then try to use a goroutine, you'll get panics.
10x size is about the blowup we see as well. It's also likely to be slower (some of the TinyGo authors said ~20% slowdown compared to TinyGo), probably because TinyGo has a simpler/smaller runtime and LLVM is better at optimizing.
The only time I tried wasm in Go, the wasm compiled by the native Go was so slow that the equivalent Javascript was faster. Tinygo produced decent performance however.
There's no CGO involved when compiling to Wasm. The sometimes slow performance is due to the hoops the compiled code has to jump through to support the Go runtime and goroutine preemption on a single thread.
I understand there's no CGo specifically, but I'm wondering if the Go runtime when running under WASM still has to manage switching out the goroutine stacks for "WASM stacks" when it's calling out through the WASM VM.
Edit: from this comment it sounds like it is, as you say, just the general overhead of managing goroutine stacks. I wonder if TinyGo is more performant.
I feel like we need better WASM performance in Go before we get WASI. In my experience, Go wasm performance is pretty bad, usually significantly worse than vanilla JS.
Rust (or really anything LLVM-backed) is still probably the best WASM language in terms of performance and support, but .NET (don't forget to turn on AOT) is starting to get really good too, except that the .NET compiler barfs out a bazillion files for the browser to fetch, versus a single self-contained .js or .wasm file, which sucks if you are trying to build a self-contained library like OpenCV.js.
This is a good callout, although we probably won't be able to significantly improve the performance of Go compiled to WASM until WebAssembly evolves and introduces support for threads or stack switching, so we can build goroutines on top of those. Right now, the main reason Go compiled to WASM isn't in the ballpark of native performance is the stack-switching emulation we have to do to make the cooperative scheduling of goroutines work. We'll need WASM runtimes to offer more advanced primitives that we can rely on to implement those features of the Go language and produce much higher-quality code.
Would the emulation penalty still occur if goroutines aren't used at all? I have many small domain-specific libraries that I am planning to port to wasm. These libraries only allocate dynamic memory and defer anonymous functions.
This can be true if you're interacting with browser APIs such as the DOM frequently, because crossing that boundary has an overhead.
But I've seen several projects where (non-GC) WASM has improved performance significantly for specific tasks. You won't get native performance obviously.
The overhead of a runtime can easily make WASM code run slower than native JS functions. This mainly applies to GC languages like Go, which have to ship one.
>But I've seen several projects where (non-GC) WASM has improved performance significantly for specific tasks. You won't get native performance obviously.
You absolutely can if you're writing the raw WASM or compiling from C.
In my experience WASM performance is not that much better than JS in most cases, not so much because WASM is lacking, but because most JS runtimes' performance is really, really good for a dynamic language.
When I measured C compiled with clang -O3 against JS, the only noteworthy speedups were on math-heavy tasks, and even there only for integer-heavy math (floating-point math was better than JS, but not by much). In a few cases the performance was worse. Notably, recursive algorithms were _much_ better in WASM though, even ones that can't have tail-call optimisation (I guess function invocation has a lot of overhead in JS compared to C).
I think people over-value WASM speed. Performance-wise, I imagine the biggest gains compared to JS come from not using a garbage-collected language. GC overhead, especially JS GC (compared to Go's GC), can be painful in very large applications, especially where timings matter, like 3D rendering. But GC can be optimised for in JS by avoiding allocations in the critical paths of the app.
It consolidates Go compiled to WASI as an alternative to containers.
Linux containers don't "run anywhere" as docker.io claims. You need a specific architecture and specific kernel features, which is not obvious from afar.
There's also other benefits. Example: the team I work on compiled Kyverno, a CNCF K8s policy engine written in Go, to a WASI target. We are building Kubewarden, a CNCF policy engine where policies are Wasm binaries shipped in OCI registries. We strive to build "a Universal Policy Engine".
Now, we have an experimental Kubewarden policy `kyverno-dsl-policy` that allows you to reuse Kyverno DSL with us.
We also provide waPC as a target, which is more performant and secure, with normal SDKs for Go, Rust, C#, Swift, TypeScript... in addition to supporting Rego, again compiled to Wasm.
IMHO you only get real sandboxing with waPC, as WASI's POSIX-like interface gives the guest a surface to attack the host.
The next step for the official Go compiler is to export function symbols, which would enable waPC.
Just gave a talk about it on Monday at containerdays.io, but the video isn't on YouTube yet!
In a nutshell, with Kubewarden we strive to build the universal policy engine by:
- Provide all personas (policy consumer, policy developer, policy distributor, engine admin, engine developer/integrator, etc.) with current and future industry-standard workflows: not just a subset of personas, and no more knowledge than each persona needs. It's a bold statement, but to be universal it should indeed cater to everyone.
- This is achieved with policies as code, in the form of Wasm modules: Wasm policies allow us to support the Rego DSL (OPA/Gatekeeper), YAML, SDKs for Wasm-compiled languages, and now an experimental Kyverno DSL policy by compiling Kyverno to Wasm with WASI. Great for using your language and tools of preference.
- Wasm modules have first-class support in OCI registries, just like container images: use the same tools you already know as an artifact distributor: SBOMs, signing and verifying with cosign, airgap, slsa.dev, etc.
- Policies can be evaluated out-of-cluster: great for CI/CD, dev loop, integration tests, etc.
- Modular architecture informed by using Wasm policies: OCI registry, policy-server, k8s controller, out-of-cluster cli (kwctl), etc. This also helps in adopting future industry-standard workflows.
- Usual features of a policy engine (mutating, context-aware, recurring scanning of resources already in the cluster, etc.), plus ample room for new features thanks to the architecture. E.g.: running the policy-server directly in the k8s apiserver (a colleague already presented that at KubeCon), evaluating policies out-of-cluster like OPA just by running the policy-server standalone, more DSLs compiled to Wasm, more languages, etc.
- Vendor neutral, CNCF project, open source, developed in the open.
Technically you don't need a full kernel; edge workers will likely have their own custom trimmed-down kernels for running WASM binaries, cutting out a lot of the OS overhead. As long as they implement a limited set of POSIX syscalls they can run WASM binaries. In fact, you might even want to limit the WASI syscalls you implement for certain targets.
It also needs root access, and 3.10 is just the lowest supported kernel. I'm not a Docker expert, but I'd bet they only support a subset of features there.
What I do know is that Docker images are specific to the host architecture, supporting either one architecture or a blessed list. Wasm binaries, on the other hand, aren't. Wasm can theoretically also run in bare-metal embedded scenarios without an OS entirely.
I believe the primary benefit is the sandboxing, but I imagine there are secondary benefits such as that the host platform can be any CPU architecture or OS.
Can someone hit me with the value proposition of all this WASI stuff and WASM and ELI5? (I get the browser use-case)
My understanding is as follows:
WASM - a portable, platform-independent virtual machine for executing a "web assembly"
WASI - an extension to the virtual machine that adds APIs for interacting with the system and breaks all the WASM sandboxing (presumably NOT platform-independent?)
Is the point of this addition to Go that I can now target "WASM implementations that have WASI" with Go source code compiled to WASM?
Why would someone want to do that? Just for edge functions in cloud workers?
Think “JVM, but better this time”. Better isolation, and more language-agnostic. So, Kubernetes with WASM application servers as an alternative to container runtimes. All the old Java ideas, but hopefully much better. It being originally made for the web has the advantage of it being built with security in mind from the get-go. The JVM was always unsuited for running untrusted code, among other failings.
You should go watch any of the numerous blackhat presentations on wasm or just talk to some of the security researchers out there. You can do attacks that most people haven't been able to do for 20+ years.
> You can do attacks that most people haven't been able to do for 20+ years.
This is a bad and roundabout way of saying that vulnerabilities in WebAssembly modules may still corrupt their linear memory. Which is absolutely true, but those attacks still matter today (not everyone turns ASLR on) and similar defences also apply. In the future, multiple memories [1] should make it much easier to guard against the remaining issues. WebAssembly is a lucrative target only because it is so widespread and relatively young, not because it has horrible security (you don't know what actually horrible security looks like).
There are also other things of great value, for example providing a way to write plugins for cloud-based SaaS solutions: you compile your binary, hand it over to the SaaS service, and the service itself runs your binary when some event happens. Basically plugins for cloud-based stuff, much more powerful and simpler than webhooks.
Another example is writing custom database functions, for example adding a complex math function to Postgres so you can do `SELECT myFunc(COLUMN_A, COLUMN_B) FROM TABLE`.
Another example is sandboxing plugins for desktop applications (including mods for video games); plugins are a huge security issue when they're native code running on your machine.
In these plugin examples, the entity running the plugin code can limit the syscalls available to your WASI-compatible WASM binary, so that, for example, your video game mod can't read/write files outside a specific folder in the file system.
It enables a future world where CPU architecture, OS, and runtime don't matter. Code from any language can run on any hardware and interop with any other code. Cloud hosts can buy whatever hardware is most economical and your code will work there. Just like Docker eliminated a lot of tedious dependency management for deployment, this eliminates another chain of dependencies: the CPU, the language runtime, and the operating system.
Badass. The `GOOS=js` build needed so many workarounds that it was barely worth it to port existing code, and wasm_exec.js always felt like a terrible hack. I'll be updating all of my stuff with this and pulling out the shims.