WebAssembly Interface Types: Interoperate with All the Things (hacks.mozilla.org)
324 points by skellertor on Aug 21, 2019 | hide | past | favorite | 114 comments

I'm very happy to see the WebIDL proposal replaced with something generalized.

The article brings up an interesting point:

WebAssembly really could enable seamless cross-language integration in the future.

Writing a project in Rust, but really want to use that popular face detector written in Python?

And maybe the niche language tokenizer written in PHP?

And sprinkle ffmpeg on top, without the hassle of compiling for each target or worrying about use-after-free vulnerabilities?

No problem: use one of the many WASM runtimes popping up (like [2]) and combine all those libraries by using their pre-compiled WASM packages distributed on a package repo like WAPM [1], with auto-generated bindings that provide a decent API from your host language.

Sounds too good to be true? Probably, and there are plenty of issues to solve.

But it sounds like a future where we could prevent a lot of the re-writes and duplication across the open source world by just tapping into the ecosystems of other languages.

[1] https://wapm.io/ [2] https://github.com/CraneStation/wasmtime

JVM (Java), Parrot (Perl 6 VM) and CLR (C#) shipped easy, multi-language environments over a decade ago. For some definitions of "easy" and "multi-language".

Different programming languages exist for many reasons, only one of which is syntax. Many of these reasons relate to data structures and runtime control flow. WASM won't be able to "solve" these issues any better than the JVM, Parrot, or CLR did. (Spoiler: they didn't, and WASM won't.) At best you'll get a Rust'ish WASM, Python'ish WASM, PHP'ish WASM, etc. And all of those will feel like a cousin to JavaScript, just as anything on the JVM feels similar to Java, on Parrot feels similar to Perl 6, and on the CLR feels similar to C#. WASM's choice of data structures, control flow, and data representations will be strongly influenced by the requirements of the major JavaScript engines (in Firefox, Chrome, and maybe WebKit). This is already the case when it comes to control flow semantics.

> At best you'll get a Rust'ish WASM, Python'ish WASM, PHP'ish WASM, etc

I agree with the latter two due to runtime uniqueness, but I'm not so sure it holds for Rust. Since C FFI has become the de facto WASM standard (with higher-level interop coming, as the article suggests), some low-overhead langs like Rust can be reasonably used elsewhere via WASM (though in most cases, you'd use them as native libs if you had the option).

Essentially it comes down to two things: How much of the WASM conformance is language-specific at compile time (e.g. for Go, stack resumption is manually built causing bloated WASM), and how much overhead does WASM interop cause at runtime (e.g. can it use the string interface type instead of converting to what it considers a string in local memory and what are the tradeoffs of each approach). If your language has a low WASM impedance mismatch at compile time and runtime, like Rust, it is a decent candidate for a WASM target. If your runtime has a low WASM impedance mismatch (e.g. the browser, native, and to some extent the JVM), it is a decent candidate to run that other decent-candidate WASM code without a huge overhead.
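As a rough Rust illustration of that C-FFI calling convention (compiled natively here just to show the shape; `greet` is a hypothetical export, not a real API): strings cross the boundary as a (pointer, length) pair into linear memory rather than as a real string type.

```rust
// Sketch of the "C FFI as de facto WASM interop" pattern: a wasm export
// today typically takes a (pointer, length) pair into linear memory.
// `greet` is a made-up export, shown natively just to illustrate the shape.

#[no_mangle]
pub extern "C" fn greet(ptr: *const u8, len: usize) -> usize {
    // The callee reconstructs a &str from raw "linear memory" coordinates.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    let name = std::str::from_utf8(bytes).expect("invalid UTF-8");
    "Hello, ".len() + name.len() // e.g. the length of the built greeting
}

fn main() {
    let name = "WASM";
    // The caller passes pointer + length, exactly as a JS host does after
    // copying a string into the module's memory.
    assert_eq!(greet(name.as_ptr(), name.len()), 11);
}
```

The string interface type described in the article would let generated glue perform this lift/lower step automatically instead of every binding doing it by hand.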

I've run Rust on the JVM [0], C in the browser, etc with decent performance because of those characteristics.

0 - https://github.com/cretz/asmble/tree/master/examples/rust-re...

Ah, there it is, the JVM discussion in any WASM thread. I kid.

On topic though, I'll be curious to see how bad it really is. The demo video showed code that ended up looking and behaving like normal Rust code. Maybe you'll end up with some oddities like Rust enums not being supported, but w/e.

At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?

But you're right, maybe this will go the way of the JVM and no one will care. Or maybe not. Does it matter?

edit: And I should add, maybe I keep forgetting where my well supported JVM Python <-> Rust or Go <-> Rust bridge is. Maybe someone can remind me, because I've not seen it.

I don't doubt that the ubiquity of WASM environments will result in more software targeting the environment. My contention is with the notion that we'll all be seamlessly mixing different languages.

Far more likely is that WASM influences the evolution of various languages, resulting in homogenization of semantics. C++ is already going in this direction--preferring some proposals over others because of easier interoperability with WASM's constraints. Other languages with irreconcilably distinct semantics, such as Go's goroutines, are likely doomed to be second-class citizens as compared to Rust or C++. Languages like Python and PHP are likely to see either major refactoring of their engines, or else see the rise of alternative implementations that are more performant in WASM environments and offer more seamless data interchange.

Goroutines only require a slightly fat runtime. Haskell uses basically the same mechanism, with C interoperability, and all it takes is starting (and optionally, stopping) the runtime.

It's not having a runtime that's the problem, it's limitations in WASM's control flow constructs: https://github.com/WebAssembly/design/issues/796#issuecommen...

And WASM has this limitation as a consequence of limitations in the V8 and SpiderMonkey engines. See http://troubles.md/posts/why-do-we-need-the-relooper-algorit... and https://news.ycombinator.com/item?id=19997091 And those engines have those limitations because they're a reasonable design tradeoff in the context of executing JavaScript.

I think even if WASM gets a limited goto sufficient to resolve the most problematic performance issues, languages like Go that use more sophisticated control flow constructs (e.g. stackful coroutines) will always be at a disadvantage relative to C or Rust when it comes to WASM as absence of other various low-level constructs (e.g. ability to directly handle page faults) will incur higher relative penalties. The same thing could be said about Rust's new async/await--WASM will compound the costs of the additional indirect branching in async-compiled functions (not to mention the greater number of function invocations, which is at least 2x even if a function never defers).

That said, I don't think anybody uses Go for the insane performance, so as long as the performance impact isn't too severe (and limited goto support will help tremendously) it shouldn't be much of an issue. OTOH, these sorts of tradeoffs are why "native" code will always have a significant place. Especially as the wall we've hit in terms of single-threaded performance begins to be felt more widely.

It's a balance. You probably won't see things like Rust's `Vec` exposed as a wasm API by itself, because at that granularity, language-specific API details are really important, and the actual code you could reuse is relatively small. But at larger scopes, the advantages of mixing languages and introducing sandboxing become more interesting in the balance.

> At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?

You just need to manually insert stack growth checks before all function calls in anything that can be called from Go to support goroutines, because those can't live on a normal stack and still be sufficiently small to support desired Go semantics. Async has similar issues, where the semantics are incompatible, which means that many things go horribly wrong in cross-language calls.

Or you use a lowest common denominator and essentially treat the thing like an RPC, with a thick layer to paper over semantic mismatches.
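A minimal Rust sketch of that lowest-common-denominator style, assuming both sides agree only on a flat byte payload and a glue layer (de)serializes on every call; all names here are illustrative.

```rust
use std::convert::TryInto;

// The glue layer flattens typed arguments into a byte payload.
fn encode(n: i32, s: &str) -> Vec<u8> {
    let mut buf = n.to_le_bytes().to_vec();
    buf.extend_from_slice(s.as_bytes());
    buf
}

// Stand-in for the foreign module: it never sees our types, only bytes,
// and replies with bytes that our side must decode again.
fn foreign_call(payload: &[u8]) -> Vec<u8> {
    let n = i32::from_le_bytes(payload[..4].try_into().unwrap());
    let s = std::str::from_utf8(&payload[4..]).unwrap();
    format!("{}:{}", n, s).into_bytes()
}

fn main() {
    let reply = foreign_call(&encode(3, "go"));
    assert_eq!(String::from_utf8(reply).unwrap(), "3:go");
}
```

The copy-and-reserialize cost on every boundary crossing is exactly the overhead the interface-types proposal is trying to reduce.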

And that's before we get to things like C++ or D templates, Rust or Lisp macros, and similar. Those are lazy syntactic transformations on source code, and are meaningless across languages. Not to mention that, if unused, they produce no data to import. But, especially in modern C++, they're incredibly important.

I've often wondered for client-side apps if a simple thread-per-goroutine model would work for Go. Sure, you'd lose some of the easy scalability. But I think that's mostly useful for server software and client apps don't usually have thousands of concurrent tasks anyway. You could also lower the overhead some for calling C libraries (which client software does more often).

I think Rust macros and C++ templates are not lazy wrt execution.

They are lazy wrt compilation. So, unless you use the specialization in C++ or Rust, there's no code to call.

It simply doesn't exist for interop code to call.

> At the end of the day, I don't need to make weird and often gross C bindings in my Rust or Go code. Is that not a huge win?

That stuff only works if you limit yourself to exclusively run in a wasm runtime. If you want to compile to any other platform you still need C glue code.

If you're distributing an executable (not a library), you can just AOT compile the generated wasm to machine code.

For example, the WasmExplorer already lets you compile C++ -> wasm -> x86 assembly: https://mbebenita.github.io/WasmExplorer/?state=%7B%22option...

I doubt you get the same performance as direct compilation. LLVM IR carries lots of annotations useful to optimizing passes, e.g. aliasing information.

The twist is, you can produce wasm from LLVM, including running the full mid-level optimizer first, which is the part of LLVM where those annotations and aliasing information are most valuable.

If this turns out to be useful, then what's stopping us from adding similar annotations in a custom section for WebAssembly?

I'm a full time React developer but I would much rather see the CLR than WASM as a universal platform target.

The CLR already supports a larger variety of programming languages from dynamically typed languages like Ruby and Python, to C#, to F# for functional style programming.

It already exists with a rich ecosystem of libraries. The tools to generate CIL opcodes are pretty good and the developer tools with Intellisense are pretty good as well.

It has good packaging support with assemblies. And you can already compile .NET code to WASM using Blazor.

And with .NET already running on Linux, Win, Mac, iOS, Android, and IOT devices, I don't really see targeting WASM adding anything that doesn't already exist.

Yes but like the parent poster said, all of the CLR features are really to serve one language: C#. All other languages primarily targeting the CLR must be understood in the context of “how would you express this thing in C# and how will a C# client interact with your assembly”.

Thus, WASM will forever be tied to JavaScript and all other languages will need to deal with questions of “how would I express this in JavaScript”.

C# is the largest language for sure but it was designed for multiple languages from the start. There are things you can do in CIL and other languages that you can't do in C# even. Functional languages and pattern matching can be expressed quite well in .NET which aren't in C# (at least not yet fully).

I think the statement that the CLR ties you to C# is not justified. Can you elaborate why you think this is the case?

Also, WASM is not tied to JavaScript. You can't even compile JavaScript to WASM because it doesn't support the necessary primitives. In fact other languages that are completely different than JavaScript compile to WASM much easier.

WASM is tied to existing JavaScript engines and their constraints. For example, WASM doesn't support more efficient control flow because of the design of V8 (and possibly SpiderMonkey):



Tied to the engine is orthogonal to tied to the language. You can compile C, C++, and Rust to WebAssembly. But you still can't compile JS to WebAssembly.

Given the large amount of code in JS and the number of JS developers, not being able to support a major general purpose language rules out WASM as a cross platform target in my book.

You can compile a subset. See AssemblyScript. Actually more like a subset of TypeScript.

> statement that the CLR ties you to C# is not justified

It is. F# developers are stuck following C#'s progress and can't (or don't want to) add big, important features like type classes or higher-kinded types until C# adds them first.


>JVM (Java), Parrot (Perl 6 VM) and CLR (C#) shipped easy, multi-language environments over a decade ago. For some definitions of "easy" and "multi-language".

Java tied you to a whole ecosystem and was a pain to connect to anything outside it, Parrot is a perennially incomplete half-arsed experiment, and C# has been historically tied to the MS ecosystem (and is also a pain to link to from outside).

The promise here is that the new system would have cross-platform, cross-vendor support (e.g. MS, Apple, Google, Mozilla), and that it won't be a pain to link to from, say, C or Rust or Python.

>WASM won't be able to "solve" these issues any better than the JVM, Parrot, or CLR did.

Doesn't need to. It's enough that it solves the same issue linking to a C library with bindings does, but without re-compilation, with a sandbox, and an easier interface.

I'd like to use e.g. something like Pillow or a numeric lib, etc, in WASM from languages outside Python...

I mostly agree with you, but it's important to point out that the big difference between WASM and the runtimes you mentioned is that WASM works in the browser, so you get out-of-the-box support for the world's biggest app platform. That's something that no other cross-platform runtime has achieved in any meaningful way, as far as I know.

Java used to work in the browser a quarter-century ago, albeit not very well. Even .NET sort of worked in the browser for both the people who enabled Silverlight.

I do believe WASM is different, in a good way, but not just because it works in a browser.

> Java used to work in the browser a quarter-century ago, albeit not very well. Even .NET sort of worked in the browser for both the people who enabled Silverlight.

Not really. They worked with a plugin bolted onto the browser, not really in the browser.

And that makes a big difference. Neither Java applets, nor Silverlight, nor even Flash offered a smooth, integrated experience within a web page or application.

I have seen Flash applets crash more times than I can count. Java applet launch was cumbersome at best, bringing the beauty (irony) of Java-style GUIs into my browser. Silverlight remained a joke, unusable outside of Windows.

WebAssembly achieved something previously unthinkable: it got all the major browser vendors to agree on a standard for a virtual machine and to implement it.

> I have seen Flash applets crash more times than I can count.

If you think that WASM apps are not going to crash... :)

This is probably a good subject for a separate thread, but I do wonder why JVM-in-browser failed so hard. I know there were "security issues" but surely those weren't any worse than the JS security issues we have today?

Steve Klabnik wrote a blog post around this last year. It's more about challenging the "isn't WASM just another Java?" take, but I think it's still relevant to your question.


That immediately clarifies the difference. Thank you.

It didn't fail that hard. It was pretty widely used, but the JRE was a large and not very fast runtime back then. It got better over time but the browser makers got pretty determined to kill off any competing platforms to HTML under the guise of security.

Could it be that JVM and CLR were technically correct, but policy wrong?

It is more politics than anything else, with WASM being the new cool kid on the block.

You keep saying this, but WASM has some very specific technical differences from these previous VMs that make a huge difference.

Please, what wonderful feature does WASM have that neither JVM nor CLR are capable of?

Or for that matter, the myriad of bytecode formats devised since UNCOL, like the IBM and UNISYS's language environments on their mainframes.

WASM is much smaller and simpler than either. This makes it easier to implement, both within browsers (as evidenced by most mainstream browsers already supporting it natively, and the rest via a polyfill) and beyond (e.g. the myriad of WASM runtimes out there - both interpreted and JIT/AOT compiled - written in all sorts of languages and usable outside a web browser). Same goes for most other bytecode VMs of a similar style/vintage (Dis, Parrot, BEAM, Dalvik, etc.).

Can't speak to mainframe VMs, since I ain't familiar with them. It's reasonable to assume they were proprietary, though (that's how things typically were back in those days), so WASM has a leg up there.

WASM MVP is smaller and simpler.

The WASM that everyone dreams about will be the same size, if it is supposed to match in features.

Plenty of bytecode formats since have been AOT/JIT since the early 70's.

We wouldn't even be discussing WASM if Mozilla had been willing to implement PNaCl. Apparently being open source, with initial support for C, C++ and OCaml, wasn't enough.

> The WASM that everyone dreams about will be the same size, if it is supposed to match in features.

The WASM I dream about (can't speak for anyone else) won't be a whole lot bigger than it is now [0], and I don't think anyone expects it to match the bigger VMs in features (the sheer number of features in the bigger VMs is part of the problem!).


[0]: It's already pretty great, IMO, and the additions I'd like to see - namely, support for multiple memories, and the handful of extra instructions necessary to support tail call optimization [0.1] and garbage collection [0.2] - ain't expected to be that huge of an addition in terms of instruction set bloat. I'm hopeful that they won't be especially complex to implement, either.

[0.1]: https://github.com/WebAssembly/tail-call/blob/master/proposa...

[0.2]: https://github.com/WebAssembly/gc/blob/master/proposals/gc/O...

Would you please share the "very specific technical differences"?

As a long-time JVM user, I am curious about WASM and how it compares to the JVM and CLR.

The simple C-style linear memory model is the most important one IMHO, since this preserves the explicit control over the entire memory layout of an application which languages like C, C++ or Rust allow. Optimizations for efficient use of CPU caches carry over into WASM. AFAIK both CLR and JVM use a much higher level model which is built around fine-grained allocations managed by a GC.
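A tiny Rust sketch of that linear-memory model: the whole address space is one flat byte array, and "pointers" are just integer offsets into it, which is roughly what a wasm module's memory looks like to its host. The helper names are made up for illustration.

```rust
use std::convert::TryInto;

// "Store" an i32 at a byte offset in a flat linear memory,
// the way a wasm i32.store instruction would.
fn store_i32(memory: &mut [u8], offset: usize, val: i32) {
    memory[offset..offset + 4].copy_from_slice(&val.to_le_bytes());
}

// "Load" it back, like i32.load: no object headers, no GC metadata,
// just bytes at an address the program fully controls.
fn load_i32(memory: &[u8], offset: usize) -> i32 {
    i32::from_le_bytes(memory[offset..offset + 4].try_into().unwrap())
}

fn main() {
    let mut memory = vec![0u8; 64]; // a tiny linear memory
    store_i32(&mut memory, 8, 1234);
    assert_eq!(load_i32(&memory, 8), 1234);
}
```

Because layout is entirely explicit, a compiler targeting wasm can keep the same struct packing and cache-friendly layouts it would emit for native code.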

CLR supports C++ since version 1.0.

In fact it had two versions of it, Managed C++ released in version 1.0, replaced by C++/CLI with version 2.0.

JVM has support for C-style linear memory model on its unsafe interfaces (the non-public ones), and is being improved via Graal (which understands LLVM bitcode) and Projects Panama/Valhalla.

Ah alright, thanks for the clarification. Pre-WASM emscripten actually shows that it's possible to compile C to any language or virtual machine that has simple linear arrays to use as C's heap, it's just not as efficient.

As far as I know, WASM was thought of as a compilation target for languages like C and Rust, which are quite different from Java and C#.

That alone should make WASM quite different from the JVM, no?


The evolution of Maxine VM, which is now increasingly being integrated into OpenJDK as part of Project Metropolis.

As for the CLR, it supports VB.NET, C++ and C# since day one, including a Common Language Specification and Common Type System.

Then alongside its history it got COBOL, Eiffel, F#, Ruby, Python, Nemerle, Clojure, ...

IBM and UNISYS mainframes language environments support a mix of COBOL, C, C++, Fortran, RPG, NEWP, also with a common type system for interoperability.

While UNISYS documents are a bit hard to come by, IBM ones are available as RedBooks.

Again, you keep bringing these up, but they are not the same thing as WASM.

Graal does not have a standard binary interchange format for compiled C/Rust/etc.-style programs; it requires per-language support for them and works in terms of program source.

The CLR's C++ support is nothing like WebAssembly C++: you either FFI to native code, bypassing the VM, or you compile a limited subset of C++ to unverified bytecode, and the latter doesn't support newer language standards and is being phased out.

WebAssembly solves both of these problems: existing compilers for any unmodified language can target its standard binary interchange format the same way they would target a hardware architecture, and run within its sandbox. WebAssembly doesn't need to add support for them, or know anything about their source form.

Besides chrisseaton's answer, not only is C++/CLI already compatible with C++17, it will be part of .NET Core Windows variant, because it was yet another reason why we wouldn't migrate away from .NET Framework.

WASM in its current form is still quite limited, forcing language runtimes to bring along features that other battle-proven solutions support out of the box.

Hardly a synonym for unmodified.

C++/CLI is already full of incompatibilities with C++17 (and C++14, and C++11), and there are plans to cut off C++/CLI support with newer standards. Existing support is mostly by accident.

Meanwhile "must distribute a runtime with the app" does not affect the language at all, and indeed applies to these other solutions as well.

> Graal does not have a standard binary interchange format for compiled C/Rust/etc.-style programs

It does - LLVM bitcode.

LLVM bitcode is far worse than WASM bytecode as a standard binary interchange format. Its designers intentionally keep it non-portable and unstable.

Google tried using it for this purpose with PNaCl and it wasn't great. Apple makes it work by strictly controlling the target hardware they use it for, and with massive investment in the toolchain.

That's more a criticism of C rather than LLVM. Bitcode isn't as portable or stable as JVM bytecode because real world C programs use processor specific intrinsics, make assumptions about endianness and word sizes, embed assembly etc etc. LLVM is designed to support all real C programs, which means it can't be entirely portable.

Wasm today is a toy hardly anyone uses. It can't run many real world C programs which means they'll need to be "ported" to the subset of C wasm supports. Well, if you're willing to work with that constraint you can constrain bitcode in the same way.

By the way, Graal's LLVM support can execute multiple bitcode versions, with seamless and fast language interop, and it virtualizes execution so that e.g. a subset of x86 assembly can be cross-compiled on the fly to other architectures. It's needed because so much real C doesn't target an abstract machine but real CPUs.

Actually LLVM Bitcode can be made portable, that is what Apple does in their store apps for watchOS.

I didn't claim it was a good binary format - I just claimed GraalVM supports it, which it does.

But it is not an application delivery format which is the whole point here.

I ship LLVM to deliver applications just fine.

Sure it is, that is what watchOS uses.

> Different programming languages exist for many reasons, only one of which is syntax. Many of these reasons relate to data structures and runtime control flow. WASM won't be able to "solve" these issues any better than the JVM, Parrot, or CLR did. (Spoiler: they didn't, and WASM won't.)

I endorse this point of view, from hard-won experience.

(Also: memory model. Always memory model.)

> WebAssembly really could enable seamless cross-language integration in the future.

> Writing a project in Rust, but really want to use that popular face detector written in Python?

So, how exactly do you magically enforce affine type semantics in the data you pass to wasm-python, and transparently forward calls to the bound __call__ method of python objects?

The calls are easy, even today. Semantics are hard.

> So, how exactly do you magically enforce affine type semantics in the data you pass to wasm-python

In Rust, calling foreign functions is considered unsafe, precisely for that reason: once you leave the boundaries of a single language, the compiler can no longer enforce these invariants for you.

So, the solution is to make more or less every foreign call unsafe. That's pretty close to the state of the art without wasm.

I'm not sure how satisfactory that is, especially when it is fairly idiomatic for many calls in garbage collected languages to both keep and return a reference (eg, builder pattern), which may lead to rust calling a destructor where the other language doesn't expect it.

If I understand the proposal correctly, it should actually be possible to auto-generate safe glue code.

These interface types can describe, eg, how to map a Python String or list to a generic representation (pointer + length), and Rust can then take this generic form and map it into a valid Rust string safely - with the intermediate guarantees verified by the WASM runtime.
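A rough Rust sketch of what such generated glue might do when lifting a `string` interface type: take the generic (pointer, length) form and produce a valid Rust String, with bounds and UTF-8 validity checked rather than trusted. `lift_string` is a hypothetical helper, not part of any real binding generator.

```rust
// Hypothetical glue for a `string` interface type: lift a (ptr, len)
// pair out of a module's linear memory into an owned Rust String,
// verifying the guarantees instead of trusting the other module.
fn lift_string(memory: &[u8], ptr: usize, len: usize) -> Result<String, &'static str> {
    let bytes = memory.get(ptr..ptr + len).ok_or("out of bounds")?; // bounds check
    String::from_utf8(bytes.to_vec()).map_err(|_| "invalid UTF-8")  // validity check
}

fn main() {
    let memory = b"....hello".to_vec(); // pretend linear memory
    assert_eq!(lift_string(&memory, 4, 5).unwrap(), "hello");
    assert!(lift_string(&memory, 8, 100).is_err()); // bad length is rejected
}
```

In the proposal, checks like these sit in the adapter layer, so the Rust side only ever sees an already-valid `String`.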

You mean like I would use IronPython, with the codecs coded in C++/CLI, ML analysis in F# and the UI in XAML/C# all on top of the CLR?

Besides the technology questions, there are also non-technical considerations too. Who do you want governing the package repo you host all your code in?

Super happy to see WebAssembly Interface Types in development!

At Wasmer (disclaimer, I'm the founder!) we've been creating integrations with a lot of different languages (C, C++, Rust, Python, Ruby, PHP, C# and R) and we agree that this is a pain point and an important problem to solve. We're excited that Mozilla is also pushing this forward.

If you want to start using WebAssembly anywhere: https://wasmer.io/

Keep up the good work! Let's bring WebAssembly everywhere!

Is this going to replace wasmer's WebAssembly interfaces?

As a library implementer, I'm a bit lost about how to expose functions from a wasm module to the outside world.

wasmer's interfaces have the advantage of being very simple to write. But they are not very expressive.

WebIDL is horrible, albeit more expressive.

At the same time, I feel like this is not enough.

My experience with libsodium.js, one of the first libraries exposing wasm (and previously asm.js) to other environments, has been that IDLs simply exposing function interfaces are not good enough.

For example, C functions returning 0 or -1 to indicate success or error would not be idiomatic at all if exposed that way in Scala, JavaScript or Python. We want these to raise an exception instead.
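As a sketch of that concern, here is how hand-written (or generated) Rust glue might wrap a C-style "0 on success, -1 on error" function into an idiomatic result type; `c_style_decrypt` is a stand-in, not a real libsodium call.

```rust
// Stand-in for a C-style API: 0 means success, -1 means failure.
extern "C" fn c_style_decrypt(ok: bool) -> i32 {
    if ok { 0 } else { -1 }
}

// The idiomatic wrapper the glue would need to emit for Rust; for
// Python or JS the equivalent glue would raise an exception instead.
fn decrypt(ok: bool) -> Result<(), String> {
    match c_style_decrypt(ok) {
        0 => Ok(()),
        _ => Err("decryption failed".to_string()),
    }
}

fn main() {
    assert!(decrypt(true).is_ok());
    assert!(decrypt(false).is_err());
}
```

A bare function-signature IDL can't express this mapping; the binding description has to say how to interpret the return value, which is the point being made here.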

Also, we need ways to preallocate buffers, check their size, etc. in order to make things appear more idiomatic.

In libsodium.js, the description language for the WebAssembly <-> JS glue is JSON-based. It describes the input and output types, but also their constraints, and how to interpret the output. That was necessary, and if more languages are targeted, this is even more necessary.

I feel like WebIDL is both complex, and insufficient.

Another thing that I don't necessarily get is why such a description language was designed specifically for WebAssembly.

Instead, we could have (yet another) way to describe APIs and how to encode that description. That description can then be included in ELF libraries, in Java objects, in WebAssembly modules, whatever. You know, like, what debuggers already use.

We are exploring using Wasmer as a scripting/modding platform in the Amethyst game engine: https://github.com/amethyst/amethyst/pull/1892

Side comment: a Thumbs-up for Lin Clark and her usual Mozilla blog posts including this one.

It is easy to understand and very informative, and the hand-drawn infographics make it pleasant to read.

Would also like to add that the video in the article (https://www.youtube.com/watch?v=Qn_4F3foB3Q) is really well made and to the point.

Note that Wasmtime is the engine starring in the blog post and demos :-).

Of course, cross-language interfaces will always have tradeoffs. But we see Interface Types extending the space where the tradeoffs are worthwhile, especially in combination with wasm's sandboxing.

I'm gonna go out on a limb here and expose my n00bishness, and honestly a lot of that article has my head spinning as I google a lot of terms ;)

I've only done web development for a year now and I've seen some cool WebAssembly demos but mostly those are demos of various frameworks (or no framework) reusing random WebAssembly components.

It's cool, but I'm missing from a large web application stance how WebAssembly is supposed to work, specifically when it comes to stuff like you see in some frameworks with state management across components and etc. Is there an example of that?

WebAssembly is generally an optimization, not an app alternative. Think of it the way you would the C bindings Node.js has. If you've got a compute-heavy code path, writing it in C (or Rust etc.) and shipping it as WebAssembly could be a dramatic perf improvement, but there's not really a substantive benefit to writing your entire app with it.

> but there's not really a substantive benefit to writing your entire app with it.

Amendment: Perhaps it's debatable what "substantive benefit" would be, but writing the whole app in it is a huge benefit if you really want to do that. That is to say, for years people wanted to write Web UI stuff in various languages, this also allows for that.

So while the concrete performance boost may not be meaningful in writing an entire web app in C, Rust, Go, Python or w/e - to the developer happiness the boost may be huge for those devs that want it.

Well, you still can't do that—WASM doesn't have DOM bindings, you'd still have to ship out to JS. I believe that's a goal, so yes eventually that could be a benefit, but not at the moment.

Fwiw, you can - just perhaps not with pure WASM, and with a perf cost.

My next app I'm planning on using a Rust WASM DOM framework, and I know there are some in Go as well. This is primarily for developer UX.

DOM bindings are kind of irrelevant when one has WebGL.

One example among many ramping up


Rendering UIs in WebGL is fairly user-hostile. No accessibility, no extensions, no custom styles, ...

Yes it keeps coming up, yet Flash will have its revenge.

For a smallish subset of use cases, maybe. But replacing the DOM with a canvas-based UI for general use would be reinventing many of the problems of Flash.

Many designers and game devs see it otherwise.

Gamedev is a separate topic. As for designers, they are on the opposite side from users in the tug-of-war of control over content rendering. I don't believe they should get 100% of their way.

Gamedev falls squarely into my "smallish subset of use cases". It does legitimately need non-DOM UI, but it's not a huge part of the Web (yet).

As for designers, thank you for finding a much more diplomatic phrasing than the one I was about to write.

WebGL bindings are on the same footing as DOM bindings- they're not an alternative in this sense, because insofar as we have one we already have the other.

No, because accessing WebGL contexts directly from WebAssembly, especially WebGPU ones, is something that keeps coming up in some Chrome presentations.

And thanks to many, including HN readership, Chrome and Safari are the only browsers that many businesses care about nowadays.

What I'm saying is, the feature/mechanism that lets WebAssembly call WebGL or WebGPU is exactly the same one that lets it call DOM APIs. The same implementation work supports both.

I looked into their WebAssembly demo, but it doesn't seem to use WebGL, just a regular DOM (with an ungodly number of elements).

That was just an example, maybe not the best one.

I should just have linked something done in Unity instead.

Uno uses DOM bindings, not WebGL.

Another benefit is initial page load times, if your page has a lot of JS it can take a pretty significant amount of time to parse it.

E.g., Figma said it reduced page load times by a factor of 3, though they were using WASM to replace existing asm.js code.


Another benefit I suppose is that you can choose from many more languages. Whether that is substantive or not depends on the person, I guess.

That's a use case, but there are many others, as the article mentions, like lightweight isolation for server side apps or writing your frontend app in a different language (not pragmatic today, but could be in the future).

Maybe this article can help with a real-world use case: https://tech.ebayinc.com/engineering/webassembly-at-ebay-a-r...

Thank you.

> Document.createElement() takes a string. But when I call it, I’m going to pass you two integers. Use these to create a DOMString from data in my linear memory. Use the first integer as the starting address of the string and the second as the length.

This seems like it's introducing buffer overflow vulnerabilities, if the code can be tricked into using the wrong numbers. Sure, just into WebAssembly memory, but if everything's implemented in WebAssembly, won't there often be sensitive information in there?

Doing some more research, it seems like this may be a common problem: https://stackoverflow.com/questions/41353389/how-can-i-retur...
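To make the concern concrete, here is a hedged sketch in plain JS: a `Uint8Array` stands in for wasm linear memory, and the `readString` helper is hypothetical glue code (not a real API) of the kind the article describes. Nothing ties the `(ptr, len)` pair to the string the caller intended, so a wrong length reads past the intended bytes.

```javascript
// A 32-byte buffer standing in for a wasm module's linear memory.
const memory = new Uint8Array(32);
new TextEncoder().encodeInto("div", memory.subarray(0, 3));          // intended string
new TextEncoder().encodeInto("secret-token", memory.subarray(3, 15)); // adjacent data

// Hypothetical glue: turn a (ptr, len) pair into a DOMString.
function readString(ptr, len) {
  // The only bound is the memory's own size: any (ptr, len) inside
  // the buffer is readable, intended or not.
  return new TextDecoder().decode(memory.subarray(ptr, ptr + len));
}

readString(0, 3);  // "div" -- the intended tag name
readString(0, 15); // "divsecret-token" -- a bad length leaks adjacent data
```

So the blast radius is confined to the module's own memory, but anything sensitive stored there is fair game for a miscomputed length.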

Doesn't each wasm module get its own isolated memory? If so, then you could only shoot your own sensitive foot.

Yes, but a "module" can be an application and a full set of its dependencies. It's unlikely that its dependencies would have isolated heaps, as there would be no way for them to communicate (they can't touch each others' pointers, etc) other than through intermediary JS.

So any vulnerability in any piece of wasm in your webpage effectively compromises everything. One compromised module can also likely reach out into the page and then use other modules' public interfaces to compromise them too. There is no sandboxing in place to prevent this sort of attack; the sandbox merely protects the browser from attack by content.

There's no guarantee that the module's memory is strictly isolated; I don't recall what the specification says, but on a technical level there's nothing stopping a WASM implementation from exposing the same memory range to multiple WASM modules (and in fact, I can imagine this to be a very common use case for multiple memories, e.g. defining STDOUT and STDIN as memories shared with other modules, with one module reading and the other writing).

Most (all?) WASM runtimes currently enforce this isolation, though, for security reasons. If any do allow memories to be shared between modules, I'd imagine it'd be very explicit and opt-in.
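As a sketch of why sharing is mechanically possible: a `WebAssembly.Memory` object is just a growable buffer that an embedder may hand to any number of instances via imports, so isolation is a runtime policy choice rather than a property of the object itself. The two views below stand in for two instances importing the same memory (no real modules are instantiated).

```javascript
// One 64 KiB page of wasm memory, importable by any instance the
// embedder chooses to hand it to.
const mem = new WebAssembly.Memory({ initial: 1 });

// Two views standing in for two modules importing the same memory.
const moduleA = new Uint8Array(mem.buffer);
const moduleB = new Uint8Array(mem.buffer);

moduleA[0] = 42;         // "module A" writes...
console.log(moduleB[0]); // ...and "module B" sees 42 through its own view
```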

That is already enough to trigger CVEs in WASM modules.

So this is what they've been up to!

How well will this interact with the upcoming (hopefully before the heat death of the universe) rich GC types extension?

Interface types will make it easy for two modules to interface with each other, regardless of whether both sides use GC, just one side does, or neither side does. And even within GC types, different languages have different ways of representing strings, and Interface Types can allow them to talk to each other.

Good effort. But the bit about using Protobuf or Cap'n Proto only in processes that don't share memory is wrong. Sure, that may not be how they're traditionally used. But that doesn't prevent me from defining opaque C FFIs that pass/return opaque bytes that are serialized messages and having them be deserialized on the other end.
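A hedged sketch of that pattern in JS terms: JSON stands in for Protobuf/Cap'n Proto, and `callAcrossBoundary` is a hypothetical stand-in for the opaque FFI. The boundary only ever sees bytes, so no shared memory is needed, just a copy of the buffer.

```javascript
// Hypothetical FFI boundary: accepts opaque bytes, returns opaque bytes.
// JSON is a stand-in for a real serialization format like Protobuf.
function callAcrossBoundary(bytes) {
  // The "other side" deserializes from its own copy of the bytes.
  const msg = JSON.parse(new TextDecoder().decode(bytes));
  return new TextEncoder().encode(JSON.stringify({ echoed: msg.name }));
}

const request = new TextEncoder().encode(JSON.stringify({ name: "wasm" }));
const reply = JSON.parse(new TextDecoder().decode(callAcrossBoundary(request)));
// reply.echoed === "wasm"
```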

Glad to see this being worked on. The difference between a heap vs. runtime managed object is a huge perf gap for tightly interoperating JS and WASM. At my company, we built a specific object representation that would allow zero copy, through array buffer sharing, views of C++ data from Javascript. Sounds very similar to this.
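As a rough sketch of the zero-copy idea (plain JS; the `ArrayBuffer` stands in for the C++-owned storage described above, and the names are illustrative, not the company's actual API): both sides construct typed-array views over the same buffer, so a write on one side is visible to the other without any serialization or copy.

```javascript
// "C++-owned" storage; in the real setup this would be the wasm
// module's linear memory exposed as an ArrayBuffer.
const backing = new ArrayBuffer(64);

const producer = new Float64Array(backing);   // written by the C++ side
producer[0] = 3.14;

const view = new Float64Array(backing, 0, 1); // zero-copy JS view
console.log(view[0]); // 3.14 -- same bytes, no copy
```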

I got the impression from the article that it will always copy data (e.g. strings) between wasm modules. Did I misread it?

I hope someone with their hand in these specs experienced the dumpster fire that was SOAP interoperability and is taking pains to be sure we don't have the same class of problems in wasm.

Oh, this is wonderful. Getting closer to that multilang WASM world.

WOW that was too much to read. Can someone TLDR when we'll be able to get rid of Electron using this for me?

Much of what makes Electron Electron are the APIs it provides. The blog post here is a way to describe and work with APIs, not actual new APIs.

We're working on APIs too, in WASI, but there's a lot of work to do to build everything you'd need to build Electron-like apps.

If anything you'd use wasm in electron.

I'm a strong opponent of WASM because I think that WASM will break the internet in the very same way, and for the very same reason, that ActiveX and Java applets broke it over 20 years ago.

Untrusted executable code, no matter how sandboxed, virtualized, and isolated it is, will never be a good thing. It wasn't 20 years ago, and it never will be, regardless of what "tech sauce" it is served with.

I advocate for proactive removal of WASM lobbyists from standard setting bodies, and countering their promotion of WASM.

JavaScript is also "untrusted executable code". How is WASM any worse than JS?

Do you also advocate for removing ECMAScript lobbyists from standards bodies, and countering their promotion of JavaScript?

ActiveX and Java applets were proprietary when they got introduced. WASM is an open standard that it supported by all browser vendors and already integrated without any third-party plugins. There is a spec for it that you can contribute to.

It doesn't change a thing. Both ActiveX and Java applets were quite well documented and free for everybody to use, but they definitely broke the Internet for everybody, without any exaggeration.

I'll explain why in the most neutral tone I can.

Do you understand that the biggest peddlers of an inscrutable executable format on the web are people who are not fine with the open nature of the Internet, and who cannot make their business work when all code delivered through the browser is open to at least a minimal form of inspection?

In other words, the people who are mad because they can't run a business on the web, because their business model depends on their code being closed, WANT THIS big time. Their business thrives on non-interoperability and the brokenness of software at large. Their success comes at the detriment of the Internet ecosystem as a whole. The more they break the Internet for the rest of its users, the more money they can make, and this is why there can be no reconciliation between them and us, the wider and more sane part of the software developer community.

And we, the engineering cadres, the most influential class of people in the new tech-driven world, have all the influence needed to deny WASM adoption.

Modern minified javascript is just as inscrutable as anything else.

For example, please try to read this code: https://apis.google.com/js/client.js

You would need a "decompiler" to make anything out of that.

The very first sentence in the article is

>People are excited about running WebAssembly outside the browser.

This article isn't about the internet. Read through it with WebAssembly outside the web in mind. It's an interesting article.
