The article brings up an interesting point:
Webassembly really could enable seamless cross-language integration in the future.
Writing a project in Rust, but really want to use that popular face detector written in Python?
And maybe the niche language tokenizer written in PHP?
And sprinkle ffmpeg on top, without the hassle of target-compatible compilation and worrying about use after free vulnerabilities?
No problem: use one of the many WASM runtimes popping up and combine all those libraries by using their pre-compiled WASM packages, distributed on a package repo like WAPM, with auto-generated bindings that provide a decent API from your host language.
Sounds too good to be true?
Probably, and there are plenty of issues to solve.
But it sounds like a future where we could prevent a lot of the re-writes and duplication across the open source world by just tapping into the ecosystems of other languages.
I agree with the latter two due to runtime uniqueness, but I'm not so sure it holds for Rust. Since C FFI has become the de facto WASM standard (with higher-level interop coming, as the article suggests), some low-overhead languages like Rust can reasonably be used elsewhere via WASM (though in most cases you'd use them as native libs if you had the option).
Essentially it comes down to two things: How much of the WASM conformance is language-specific at compile time (e.g. for Go, stack resumption is manually built causing bloated WASM), and how much overhead does WASM interop cause at runtime (e.g. can it use the string interface type instead of converting to what it considers a string in local memory and what are the tradeoffs of each approach). If your language has a low WASM impedance mismatch at compile time and runtime, like Rust, it is a decent candidate for a WASM target. If your runtime has a low WASM impedance mismatch (e.g. the browser, native, and to some extent the JVM), it is a decent candidate to run that other decent-candidate WASM code without a huge overhead.
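A hedged sketch of the low-impedance case described above: Rust exposing a C-ABI function that takes a raw (pointer, length) pair, which is essentially how strings cross today's WASM boundary before interface types. The function name and calling pattern here are illustrative, not a real wasm ABI.

```rust
// Sketch of the kind of C-ABI surface a low-impedance language like Rust
// can expose to a wasm host today: raw pointer + length instead of a
// language-level String. (Names are illustrative, not a real wasm ABI.)

#[no_mangle]
pub extern "C" fn count_ascii_vowels(ptr: *const u8, len: usize) -> usize {
    // The caller promises ptr/len describe a valid region of linear memory.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    bytes.iter().filter(|&&b| b"aeiouAEIOU".contains(&b)).count()
}

fn main() {
    // A native caller can exercise the same ABI shape directly:
    let s = "hello wasm";
    let n = count_ascii_vowels(s.as_ptr(), s.len());
    println!("{}", n); // prints 3
}
```

The point is that nothing here needs a fat runtime: the function is already expressed in terms wasm understands, which is exactly the "low impedance mismatch" property.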
I've run Rust on the JVM [0], C in the browser, etc. with decent performance because of those characteristics.
[0] https://github.com/cretz/asmble/tree/master/examples/rust-re...
On topic though, I'll be curious to see how bad it really is. The demo video showed code that ended up looking and behaving like normal Rust code. Maybe you'll end up with some oddities like Rust enums not being supported, but w/e.
At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?
But you're right, maybe this will go the way of the JVM and no one will care. Or maybe not. Does it matter?
edit: And I should add, maybe I keep forgetting where my well supported JVM Python <-> Rust or Go <-> Rust bridge is. Maybe someone can remind me, because I've not seen it.
Far more likely is that WASM influences the evolution of various languages, resulting in homogenization of semantics. C++ is already going in this direction--preferring some proposals over others because of easier interoperability with WASM's constraints. Other languages with irreconcilably distinct semantics, such as Go's goroutines, are likely doomed to be second-class citizens as compared to Rust or C++. Languages like Python and PHP are likely to see either major refactoring of their engines, or else see the rise of alternative implementations that are more performant in WASM environments and offer more seamless data interchange.
I think even if WASM gets a limited goto sufficient to resolve the most problematic performance issues, languages like Go that use more sophisticated control flow constructs (e.g. stackful coroutines) will always be at a disadvantage relative to C or Rust when it comes to WASM, since the absence of various other low-level constructs (e.g. the ability to directly handle page faults) will incur higher relative penalties. The same thing could be said about Rust's new async/await--WASM will compound the costs of the additional indirect branching in async-compiled functions (not to mention the greater number of function invocations, which is at least 2x even if a function never defers).
That said, I don't think anybody uses Go for the insane performance, so as long as the performance impact isn't too severe (and limited goto support will help tremendously) it shouldn't be much of an issue. OTOH, these sorts of tradeoffs are why "native" code will always have a significant place. Especially as the wall we've hit in terms of single-threaded performance begins to be felt more widely.
You just need to manually insert stack growth checks before all function calls in anything that can be called from Go to support goroutines, because those can't live on a normal stack and still be small enough to support the desired Go semantics. Async has similar issues, where the semantics are incompatible, which means that many things go horribly wrong in cross-language calls.
Or you use a lowest common denominator and essentially treat the thing like an rpc, with a thick layer to paper over semantic mismatches.
And that's before we get to things like C++ or D templates, Rust or Lisp macros, and similar. Those are lazy syntactic transformations on source code, and are meaningless across languages. Not to mention that, if unused, they produce no data to import. But, especially in modern C++, they're incredibly important.
That stuff only works if you limit yourself to exclusively run in a wasm runtime. If you want to compile to any other platform you still need C glue code.
For example, the WasmExplorer already lets you compile C++ -> wasm -> x86 assembly: https://mbebenita.github.io/WasmExplorer/?state=%7B%22option...
The CLR already supports a larger variety of programming languages from dynamically typed languages like Ruby and Python, to C#, to F# for functional style programming.
It already exists with a rich ecosystem of libraries. The tools to generate CIL opcodes are pretty good and the developer tools with Intellisense are pretty good as well.
It has good packaging support with assemblies. And you can already compile .NET code to WASM using Blazor.
And with .NET already running on Linux, Windows, Mac, iOS, Android, and IoT devices, I don't really see targeting WASM adding anything that doesn't already exist.
I think the statement that the CLR ties you to C# is not justified. Can you elaborate why you think this is the case?
Given the large amount of code in JS and the number of JS developers, not being able to support a major general purpose language rules out WASM as a cross platform target in my book.
It is. F# developers are stuck with the language's pace of progress and can't (or don't want to) add big, important features like type classes or higher-kinded types until C# adds them first.
Java tied a whole ecosystem and was a pain to connect to anything outside, Parrot is a perennially incomplete half-arsed experiment, and C# has been historically tied to the MS ecosystem (and is also a pain to link to from outside).
The promise here is that the new system would have cross-platform, cross-vendor support (e.g. MS, Apple, Google, Mozilla), and that it won't be a pain to link to from, say, C or Rust or Python.
>WASM won't be able to "solve" these issue any better than the JVM, Parrot, or CLR did.
Doesn't need to. It's enough that it solves the same issue linking to a C library with bindings does, but without re-compilation, with a sandbox, and an easier interface.
I'd like to use e.g. something like Pillow or a numeric lib, etc, in WASM from languages outside Python...
I do believe WASM is different, in a good way, but not just because it works in a browser.
Not really. They worked with a plugin bolted onto the browser, not really in the browser.
And that makes a big difference. Neither Java applets, nor Silverlight, nor even Flash offered a smooth experience integrated within a web page or application.
I have seen Flash applets crash more times than I can count. JVM applet launch was cumbersome at best, bringing the beauty (irony) of Java-style GUIs into my browser. Silverlight remained a joke, unusable outside of Windows.
WebAssembly achieved something previously unthinkable: it made all the web browser manufacturers agree on a standard for a virtual machine and implement it.
If you think that WASM apps are not going to crash... :)
Or, for that matter, the myriad bytecode formats devised since UNCOL, like IBM's and UNISYS's language environments on their mainframes.
Can't speak to mainframe VMs, since I ain't familiar with them. It's reasonable to assume they were proprietary, though (that's how things typically were back in those days), so WASM has a leg up there.
The WASM that everyone dreams about will be the same size, if it is supposed to match in features.
Plenty of bytecode formats have been AOT/JIT compiled since the early '70s.
We wouldn't even be discussing WASM if Mozilla hadn't been unwilling to implement PNaCl. Apparently being open source, with initial support for C, C++ and OCaml, wasn't enough.
The WASM I dream about (can't speak for anyone else) won't be a whole lot bigger than it is now, and I don't think anyone expects it to match the bigger VMs in features (the sheer number of features in the bigger VMs is part of the problem!).
It's already pretty great, IMO, and the additions I'd like to see - namely, support for multiple memories, and the handful of extra instructions necessary to support tail call optimization [0.1] and garbage collection [0.2] - ain't expected to be that huge of an addition in terms of instruction set bloat. I'm hopeful that they won't be especially complex to implement, either.
As a long-time JVM user, I am curious about WASM and how it compares to the JVM and CLR.
In fact it had two versions of it: Managed C++, released in version 1.0, replaced by C++/CLI in version 2.0.
JVM has support for C-style linear memory model on its unsafe interfaces (the non-public ones), and is being improved via Graal (which understands LLVM bitcode) and Projects Panama/Valhalla.
That alone should make WASM quite different from the JVM, no?
It's the evolution of Maxine VM, which is now increasingly being integrated into OpenJDK as part of Project Metropolis.
As for the CLR, it supports VB.NET, C++ and C# since day one, including a Common Language Specification and Common Type System.
Then along the way it gained COBOL, Eiffel, F#, Ruby, Python, Nemerle, Clojure, ...
IBM and UNISYS mainframes language environments support a mix of COBOL, C, C++, Fortran, RPG, NEWP, also with a common type system for interoperability.
While UNISYS documents are a bit hard to come by, IBM ones are available as RedBooks.
Graal does not have a standard binary interchange format for compiled C/Rust/etc.-style programs; it requires per-language support for them and works in terms of program source.
The CLR's C++ support is nothing like WebAssembly's: you either FFI to native code, bypassing the VM, or you compile a limited subset of C++ to unverified bytecode - and the latter doesn't support newer language standards and is being phased out.
WebAssembly solves both of these problems: existing compilers for any unmodified language can target its standard binary interchange format the same way they would target a hardware architecture, and run within its sandbox. WebAssembly doesn't need to add support for them, or know anything about their source form.
WASM in its current form is still quite limited, forcing language runtimes to bring along features that other battle-proven solutions support out of the box.
Hardly a synonym for unmodified.
Meanwhile "must distribute a runtime with the app" does not affect the language at all, and indeed applies to these other solutions as well.
It does - LLVM bitcode.
Google tried using it for this purpose with PNaCl and it wasn't great. Apple makes it work by strictly controlling the target hardware they use it for, and with massive investment in the toolchain.
Wasm today is a toy hardly anyone uses. It can't run many real world C programs which means they'll need to be "ported" to the subset of C wasm supports. Well, if you're willing to work with that constraint you can constrain bitcode in the same way.
By the way, Graal's LLVM support can execute multiple bitcode versions, with seamless and fast language interop, and it virtualises execution so that e.g. a subset of x86 assembly can be cross-compiled on the fly to other architectures. That's needed because so much real C doesn't target an abstract machine but real CPUs.
I endorse this point of view, from hard-won experience.
(Also: memory model. Always memory model.)
> Writing a project in Rust, but really want to use that popular face detector written in Python?
So, how exactly do you magically enforce affine type semantics in the data you pass to wasm-python, and transparently forward calls to the bound __call__ method of python objects?
The calls are easy, even today. Semantics are hard.
In Rust, calling foreign functions is considered unsafe, precisely for that reason: once you leave the boundaries of a single language, the compiler can no longer enforce these invariants for you.
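A minimal Rust sketch of that point: the `extern` block is a promise the compiler cannot check, so every call through it is `unsafe` (libc's `strlen` is used here just as a familiar foreign function).

```rust
// Why FFI is `unsafe` in Rust: the compiler only sees this declaration,
// not the foreign code behind it.
use std::os::raw::c_char;

extern "C" {
    // Declared, not verified. If this signature is wrong, or the callee
    // stashes the pointer, the Rust compiler has no way to know.
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let msg = b"wasm\0";
    // Every invariant (valid pointer, NUL terminator, nobody frees it)
    // is now our responsibility, not the borrow checker's.
    let n = unsafe { strlen(msg.as_ptr() as *const c_char) };
    assert_eq!(n, 4);
}
```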
I'm not sure how satisfactory that is, especially when it is fairly idiomatic for many calls in garbage-collected languages to both keep and return a reference (e.g., the builder pattern), which may lead to Rust calling a destructor where the other language doesn't expect it.
These interface types can describe, e.g., how to map a Python string or list to a generic representation (pointer + length), and Rust can then take this generic form and map it into a valid Rust string safely - with the intermediate guarantees verified by the WASM runtime.
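A hedged sketch of what that lifting step could look like on the Rust side. `lift_string` is an invented helper, not a real interface-types API: it takes the generic (pointer, length) form and only produces a Rust `String` after validating it.

```rust
// Hypothetical host-side glue: lift a guest's (pointer, length) pair into
// a valid Rust String, rejecting anything malformed instead of trusting it.
fn lift_string(ptr: *const u8, len: usize) -> Option<String> {
    if ptr.is_null() {
        return None;
    }
    // Caller must guarantee ptr/len describe readable memory; a real
    // runtime would also bounds-check against the guest's linear memory.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    // Invalid UTF-8 is refused rather than becoming a broken Rust string.
    String::from_utf8(bytes.to_vec()).ok()
}

fn main() {
    let guest = "héllo".as_bytes();
    assert_eq!(lift_string(guest.as_ptr(), guest.len()).as_deref(), Some("héllo"));

    let bad = [0xffu8, 0xfe];
    assert_eq!(lift_string(bad.as_ptr(), bad.len()), None);
}
```

The validation at the boundary is the whole trick: past this point, ordinary safe Rust invariants hold again.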
At Wasmer (disclaimer, I'm the founder!) we've been creating integrations with a lot of different languages (C, C++, Rust, Python, Ruby, PHP, C# and R), and we agree that this is a pain point and an important problem to solve. We're excited that Mozilla is also pushing this forward.
If you want to start using WebAssembly anywhere: https://wasmer.io/
Keep up the good work! Let's bring WebAssembly everywhere!
As a library implementer, I'm a bit lost about how to expose functions from a wasm module to the outside world.
wasmer's interfaces have the advantage of being very simple to write. But they are not very expressive.
WebIDL is horrible, albeit more expressive.
At the same time, I feel like this is not enough.
My experience with libsodium.js, one of the first libraries exposing wasm (and previously asm.js) to other environments, has been that IDLs simply exposing function interfaces are not good enough.
Also, we need ways to preallocate buffers, check their size, etc. in order to make things appear more idiomatic.
In libsodium.js, the description language for the WebAssembly <-> JS glue is JSON-based. It describes the input and output types, but also their constraints, and how to interpret the output. That was necessary, and if more languages are targeted, this is even more necessary.
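As a sketch of the idea (this is an invented descriptor in the spirit of that approach, not the actual libsodium.js schema), such a JSON description might look like:

```json
{
  "name": "crypto_example_hash",
  "inputs": [
    { "name": "message", "type": "unsized_buf" },
    { "name": "key", "type": "buf", "size": 32 }
  ],
  "outputs": [
    { "name": "hash", "type": "buf", "size": 64 }
  ],
  "constraints": [
    { "input": "key", "must_be": "exact_size" }
  ]
}
```

The point is that sizes and constraints live next to the types, so generated bindings can preallocate and check buffers instead of leaving that to every caller.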
I feel like WebIDL is both complex, and insufficient.
Another thing that I don't necessarily get is why such a description language was designed specifically for WebAssembly.
Instead, we could have (yet another) way to describe APIs and how to encode that description. That description can then be included in ELF libraries, in Java objects, in WebAssembly modules, whatever. You know, like, what debuggers already use.
It is easy to understand, very informative, and the hand-drawn infographics make it pleasant to read.
Of course, cross-language interfaces will always have tradeoffs. But we see Interface Types extending the space where the tradeoffs are worthwhile, especially in combination with wasm's sandboxing.
I've only done web development for a year now and I've seen some cool WebAssembly demos but mostly those are demos of various frameworks (or no framework) reusing random WebAssembly components.
It's cool, but I'm missing from a large web application stance how WebAssembly is supposed to work, specifically when it comes to stuff like you see in some frameworks with state management across components and etc. Is there an example of that?
Amendment: Perhaps it's debatable what "substantive benefit" would be, but writing the whole app in it is a huge benefit if you really want to do that. That is to say, for years people wanted to write Web UI stuff in various languages, this also allows for that.
So while the concrete performance boost may not be meaningful in writing an entire web app in C, Rust, Go, Python or w/e - to the developer happiness the boost may be huge for those devs that want it.
My next app I'm planning on using a Rust WASM DOM framework, and I know there are some in Go as well. This is primarily for developer UX.
One example among many ramping up
As for designers, thank you for finding a much more diplomatic phrasing than the one I was about to write.
And thanks to many, including HN readership, Chrome and Safari are the only browsers that many businesses care about nowadays.
I should just have linked something done in Unity instead.
For example, Figma said it reduced page load times by a factor of 3, though they were using WASM to replace existing asm.js code.
This seems like it's introducing buffer overflow vulnerabilities, if the code can be tricked into using the wrong numbers. Sure, only into WebAssembly memory, but if everything's implemented in WebAssembly, won't there often be sensitive information in there?
Doing some more research, it seems like this may be a common problem: https://stackoverflow.com/questions/41353389/how-can-i-retur...
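The failure mode is easy to model outside a browser. In the sketch below, a `Vec<u8>` stands in for one wasm linear memory (the layout is invented for the demo): the bounds check the sandbox enforces applies to the memory as a whole, not to the logical buffer, so a wrong length leaks neighbouring data.

```rust
// Toy model of wasm linear memory: everything lives in one flat byte array,
// so code that trusts a caller-supplied length can be walked past its own
// buffer into a neighbouring "secret" region. The sandbox only prevents
// reads outside the memory, not outside the logical buffer.
fn main() {
    let mut memory = vec![0u8; 256]; // one "linear memory"
    memory[0..5].copy_from_slice(b"hello"); // a module's public buffer
    memory[64..70].copy_from_slice(b"secret"); // unrelated sensitive data

    // A correct read: 5 bytes starting at offset 0.
    assert_eq!(&memory[0..5], b"hello");

    // An attacker-influenced length: still "in bounds" as far as the
    // sandbox is concerned, but it exposes the neighbouring secret.
    let leak = &memory[0..70];
    assert!(leak.windows(6).any(|w| w == b"secret"));
}
```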
So any vulnerability in any piece of wasm in your webpage effectively compromises everything. One compromised module can also likely reach out into the page and then use other modules' public interfaces to compromise them too. There is no sandboxing in place to prevent this sort of attack; the sandbox merely protects the browser from attack by content.
Most (all?) WASM runtimes currently enforce this isolation, though, for security reasons. If any do allow memories to be shared between modules, I'd imagine it'd be very explicit and opt-in.
How well will this interact with the upcoming (hopefully before the heat death of the universe) rich GC types extension?
We're working on APIs too, in WASI, but there's a lot of work to do to build everything you'd need to build Electron-like apps.
Untrusted executable code, no matter how sandboxed, virtualised, and isolated it is, will never be a good thing. It wasn't 20 years ago, and it never will be, regardless of what "tech sauce" it is served with.
I advocate for proactive removal of WASM lobbyists from standard setting bodies, and countering their promotion of WASM.
I'll explain why in the most neutral tone I can.
Do you understand that the biggest peddlers of an inscrutable executable format on the web are people who are not fine with the open nature of the Internet, and who cannot run their business when all code delivered through the browser is open to at least a minimal form of inspection?
In other words, the people who are mad because they can't run a business on the web whose model depends on their code being closed - they WANT THIS big time. Their business thrives on non-interoperability and the brokenness of software at large. Their success comes at the detriment of the Internet ecosystem as a whole. The more they break the Internet for the rest of its users, the more money they can make, and this is why there can be no reconciliation between them and us, the wider and saner part of the software developer community.
And we, the engineering cadres, the most influential class of people in the new tech driven world, have all the influence needed to deny WASM adoption.
For example, please try to read this code: https://apis.google.com/js/client.js
You would need a "decompiler" to make anything out of that.
>People are excited about running WebAssembly outside the browser.
This article isn't about the internet, though. Read it with WebAssembly outside the web in mind; it's an interesting article.