So far, in my experience, WebAssembly has been great when I want to import a *little* bit of native code into my browser. I've yet to see it work well as a full-on, container-based solution. I could certainly see a future where that's true, but it'd need the following:
- A way way better tooling story. Emscripten is an incredibly fiddly tool that requires a lot of flags and config. wasm-pack is decent for Rust but seems to be focused on the browser.
- Better interoperability. WASM works great if you write everything in Rust, or everything in C or everything in Zig. Not so much with multiple languages.
- Memory64 and WASI need to land. We need more than 4 gigs and we need proper syscalls.
- Near native needs to be, well, near native. I find it weird that people keep on saying WebAssembly is fast while in the same text decrying "a low-double-digits percentage hit" in performance. I'm fairly certain WebAssembly's "near native" is using near to mean "same order of magnitude", not "within 10%".
I'm also just confused as to what WebAssembly will look like for something that's more than a lambda conceptually. Like if we're running a classic back-end server with a database, are we going to run the database in wasm? Are we running it in a different wasm process? How do you share memory? Is wasm going to reinvent the operating system?
I like WebAssembly as an idea but I'm very unclear on how it'll get to the "replace Docker" level.
I think this is a real pain point of WebAssembly, but it's getting better. You can now compile C/C++ code to WASM using clang directly, since WASM became a first-class citizen in clang v8 (or thereabouts). Of course, clang doesn't provide all the runtime goodies that emscripten does, but if you limit WASM to the parts of your code that require performance, all you need to write is a bit of JavaScript glue code. For prototyping purposes I've bundled an npm package that brings clang into your frontend project without having to install it separately [0].
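For illustration, compiling a small freestanding C function to WASM with plain clang looks roughly like this (a minimal sketch; the file name, exported function, and exact flags are placeholders and may need adjusting for your clang/wasm-ld version):

    /* add.c -- a tiny freestanding function compiled straight to WASM,
       no emscripten involved. Build roughly like this:
         clang --target=wasm32 -O2 -nostdlib \
               -Wl,--no-entry -Wl,--export-all \
               -o add.wasm add.c
       The resulting add.wasm can then be loaded from the JS side with
       WebAssembly.instantiateStreaming(fetch("add.wasm")). */
    int add(int a, int b) {
        return a + b;
    }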
The Chrome debugger now supports WASM [1], but you still need to install an extension. I've started working on a way to convert the DWARF symbols in the WASM binary into source maps on the fly [2][3]. It's still rough around the edges and needs some work, but it can definitely be done. This would allow native WASM debugging in any browser that supports source maps.
You also get all the WASM binary tools you'd expect for any other kind of executable format, via WABT [4].
AFAIK, the examples you give all target a basic C ABI [0], or can be made to target the same ABI. In Rust, that means targeting wasm32-unknown-emscripten.
The Rust team is also working on a "WASM ABI" [1], which would be useful for taking advantage of features like multi-value returns, and other compilers could choose to target it too. More likely, the C ABI on WASM will be updated to account for the missing features, and that will become the standard for interoperability in the WASM ecosystem.
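To make the "basic C ABI" point concrete: a cross-language WASM interface today is essentially limited to what a C declaration can express, i.e. scalar numbers and offsets into linear memory. A hypothetical sketch (the names and signature are made up for illustration):

    /* plugin.h -- hypothetical boundary between a host and a WASM module.
       Everything crosses the boundary as i32/i64/f32/f64 or as a pointer,
       which on WASM is just an offset into the module's linear memory.
       Richer types (strings, structs, enums) have to be flattened into
       this form by hand on both sides. */
    #include <stdint.h>
    #include <stddef.h>

    /* The kind of signature every WASM-targeting language can emit:
       C as-is, Rust via extern "C", Zig via export fn. */
    int32_t process(const uint8_t *data, size_t len);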
You'll see similar performance variations of the same C code across different compilers, across different CPU architectures, on the same CPU when the OS decides to switch between performance and efficiency cores, or even with slightly different compiler optimization options. Real-world performance of native code already varies for many reasons, and WASM is roughly in that same ballpark.
FWIW, in my home-computer emulators written in vanilla C (i.e. no language extensions like SIMD intrinsics), the performance difference on an M1 Mac between the native version and the WASM version running in Chrome (e.g. https://floooh.github.io/tiny8bit/c64.html) is indeed within 10%.
This is mostly straightforward integer bit-twiddling code; other types of code may behave differently (expect up to 2x slower as a worst case).
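If you want to eyeball that kind of comparison yourself, the shape of the test is just a tight integer loop timed once as a native build and once as a WASM build (a rough sketch with an arbitrary workload, not the actual emulator code):

    /* bench.c -- time a simple integer bit-twiddling loop.
       Compile natively (e.g. clang -O2 bench.c) and again for a WASM
       runtime, then compare the reported times. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        uint32_t x = 0x12345678u;
        clock_t t0 = clock();
        for (uint32_t i = 0; i < 100000000u; i++) {
            /* arbitrary integer work: rotate, multiply, xor */
            x = (x << 7) | (x >> 25);
            x ^= i * 2654435761u;
        }
        clock_t t1 = clock();
        /* print x so the compiler can't optimize the loop away */
        printf("x=%u, %.3f s\n", x, (double)(t1 - t0) / CLOCKS_PER_SEC);
        return 0;
    }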