Calls between JavaScript and WebAssembly are finally fast (hacks.mozilla.org)
907 points by lainon on Oct 8, 2018 | hide | past | favorite | 143 comments

Very exciting! And really amazing work. It appears that WASM, at least in Firefox, is now going to be on par with JS in terms of performance when calling other JS facilities. Those comparison times are quite impressive.

I think the thing I found most exciting is at the end: “WebAssembly is getting more flexible types very soon. Experimental support for the current proposal is already landed in Firefox Nightly.” Implication being that WASM will have native access to the DOM.

I think at that point WASM could be used to fully work with the DOM without a performance penalty, right?

Almost; that will set the foundation for fully using the DOM APIs. The host bindings proposal mentioned at the end of the article should close the gap.

Minor clarification:

The DOM builtins should be helped by the Reference Types Proposal for WebAssembly. [1]

`The host bindings proposal` is going to help speed up new and getter/setter calls. [2]

1: `Currently the only built-ins that we support this for are mostly limited to the math built-ins. That’s because WebAssembly currently only has support for integers and floats as value types.

That works well for the math functions because they work with numbers, but it doesn’t work out so well for other things like the DOM built-ins. So currently when you want to call one of those functions, you have to go through JavaScript. That’s what wasm-bindgen does for you.

But WebAssembly is getting more flexible types very soon. Experimental support for the current proposal is already landed in Firefox Nightly behind the pref javascript.options.wasm_gc. Once these types are in place, you will be able to call these other built-ins directly from WebAssembly without having to go through JS.`

-- https://github.com/WebAssembly/reference-types

2: `there are still a couple of built-ins where you will need to go through JavaScript. For example, if those built-ins are called as if they were using new or if they’re using a getter or setter. These remaining built-ins will be addressed with the host-bindings proposal.`

-- https://github.com/WebAssembly/host-bindings

(BTW I could be mistaken, just quoting the article)

What is `wasm_gc` all about?

The reason the switch is called that...

"A new proposal has been made to the WebAssembly specification committee a few months ago: to add reference types to the type system. Reference types are a new way to represent a reference to any host values. In a Web environment, this means being capable of playing with JavaScript values within WebAssembly. This is a huge difference with the existing type system, which only contains primitive types: integers represented on 32 or 64 bits, IEEE754 floating-point numbers represented on 32 or 64 bits. This is also a first step for implementing garbage collection (GC) integration within WebAssembly: since these reference values have been allocated on the GC heap in JavaScript, they need to be traced during wasm execution."


maybe the most important improvement will be direct memory access to the canvas bitmaps for super fast game rendering.

Wouldn't "super fast game rendering" be done by the GPU via WebGL?

For 2D games it's much more convenient to push pixels directly than to deal with vertex buffers, index buffers, shaders... Unfortunately graphics cards are not optimized for that scenario.

Convenient, but "super fast"?

And though it's a matter of opinion, I disagree. The GPU is easier for me to use for things like screen-space shading, alpha blending, sprite transformations and distortions, and it all performs much faster than the equivalent CPU render code.

A lot of game logic also runs on the CPU

canvas bitmaps are only useful for rendering though.

Assuming that also implies access to typed arrays in general, then applications that involve number crunching become a lot more feasible in JS too.

Huh? We've had typed arrays in JS for years. I think they originally went in to support WebGL, but you can use them for other stuff too.


Direct access to typed arrays from WASM without having to copy them over whole first, or use the memory buffer of the WASM module. You can't really write a "true" generic in-place sort in WASM right now, so to speak, because there is always a copying back and forth step.
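For anyone wondering what that copy step looks like today, here's a rough sketch, runnable in Node (the offset-0 layout and the helper name are made up for illustration):

```javascript
// Sketch of the copy-in / copy-out dance the comment describes. The
// "sort inside the module" step is simulated with a JS sort on a view
// of the module's linear memory.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

function sortViaWasm(jsArray) {
  // 1. Copy the JS typed array into the module's linear memory.
  const inWasm = new Float64Array(memory.buffer, 0, jsArray.length);
  inWasm.set(jsArray);

  // 2. The wasm module would sort in place here; simulated.
  inWasm.sort();

  // 3. Copy the result back out into a fresh JS-side array.
  return Float64Array.from(inWasm);
}

console.log(sortViaWasm(new Float64Array([3, 1, 2]))); // a copy each way
```

Two full copies per call, which is exactly the overhead that direct typed-array access would remove.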

Another interesting flag I found was javascript.options.wasm_cranelift, which enables a WASM code generator written in Rust: https://github.com/CraneStation/cranelift

Good find! At the moment it is very very experimental and you might experience crashes and slowdowns if you're using it in place of the regular flags (that is why it is enabled only on Nightly builds and will not be enabled on Beta/Release for a while). We'll make sure to talk about it when the right time has come.

Exposing Web APIs (e.g. DOM) to WASM and projecting them into native languages seems like it overlaps a lot with defining actual native API projections that would be shared between browser implementations.

It doesn't read like this is an explicit goal of the project, but are we going to get this by accident? Being able to use the same X code (where X is your favourite statically-compilable language) to generate both a WASM version that runs on the web and a statically-compiled version that links to the Y browser source (where Y can be whichever browser works for you because they all support the same native API) would be awesome.

Kudos to the author of this post - it takes a lot of skill to explain complex concepts like this, let alone in plain enough English that someone not skilled in the art could follow along.

WebAssembly is one of those technologies where we haven't seen the true extent of its capabilities yet. This is an exciting time to be working in browsers.

All of Lin's stuff is amazing. https://hacks.mozilla.org/category/code-cartoons/

Don’t get me wrong, this is a great article. But I found the analogy with the people and pieces of paper a bit hard to read. (The analogy with the people made me think the author was talking about some sort of prototype delegation chain, https://giamir.com/alternative-patterns-in-JS-OLOO-style, rather than nested function calls.) Who is clicking on this but doesn’t know what a stack frame is?

> Who is clicking on this but doesn’t know what a stack frame is?

JS developers that don't have experience in lower level languages or some CS concepts. In other words, probably a good portion of the expected audience given it's about JS and possible improvements, and is likely to trickle into the news sources to cater to those people.

Notably, with the advent of things like Unity that compile to WebAssembly, people could be programming frequent communications between WebAssembly and JS, without ever having learned a low-level language involving pointers (which would be the situation in which one would learn about stack frames).

It's funny, I know about stacks, call stacks, and call stack frames. And yet the analogy to a literal stack of papers with information written on each one still gave me something new to chew on. (For some reason, I never actually thought to visualize like... a physical stack of data before.)

I made a visual programming and debugging environment for PostScript, a stack based language, that let you perform "direct stack manipulation" by dragging object up and down and on and off of a "spike" stack that impaled the little tabs sticking out of objects.


As someone who spent a few years having to debug PostScript with pstack and GhostScript running in a fixed-size command-prompt window in the 2010-ish timeframe... this looks futuristic even today.

Was the idea with the direct stack manipulation dragging you describe to provide visual analogues to PostScript commands like roll/pop/etc?

That's right -- you could rearrange their vertical order to manipulate their place on the stack, or you could lift them off of the stack and let them float by themselves, just to have a visual reference you can click on, drag and drop into other objects, etc.

It was very dangerous since you were editing the live data structures of the window system! The most complicated program I used it to debug was itself.

It was integrated with NeWS's multithreaded debugger, so when some process hit a bug or a breakpoint, you could "enter" the process and see what was on its stack, peek and poke at the objects, rearrange the stack and edit the objects, then send it on its way.

Since PostScript is homoiconic like Lisp, code and objects are made of arrays and dictionaries, which you could visually browse, edit, and execute. Very much inspired by Smalltalk, but with a twist of Lisp and FORTH!

> Who is clicking on this but doesn’t know what a stack frame is?

The primary target audience is Web developers interested in new technologies. I'd wager most of them don't know what a stack frame is and would appreciate the explanation.

> Who is clicking on this but doesn’t know what a stack frame is?

Probably a lot of JavaScript developers who started in web land and are only now starting to be exposed to the underlying architecture.

The majority of developers who work in scripted or higher level languages like C# or Java. For the most part, you don't have to think about them (even in C#/Java interop with C/C++, it's usually a non-issue).

IMO developers who work in scripted or higher level languages and never inspected a call stack have a serious problem (unless they write perfect code that works on the first attempt 100% of the time and never have to debug). I mean, printf debugging is okay, but at some point, preferably early on, you’re expected to graduate out of that (not throwing it out completely, but rather, learn a better way for more complex situations). Plus the fact that most scripting languages throw stack traces all the time.

Right, but the fact that this actually corresponds to memory layout, and the consequences of that, aren't obvious just from a stack trace.

I'm not casting judgement in either direction.. merely stating that it's a given.

>Who is clicking on this but doesn’t know what a stack frame is?

99% of all the JS developers I've ever met?

Feeling insecure?

He's not wrong IME.

I hire C# developers, and most candidates, even with a couple of years of experience, have no idea how the stack works.

A _lot_ of developers who have graduated to the next layer, who can focus even more on building stuff without worrying about implementation details.

Yeah, I stopped reading when I got to the cartoons. It's interesting to me that it's well received generally though -- I can appreciate that it might be helpful.

It was well written, but throughout the entire article I was hoping I'd learn what this thing was and why it was represented this way[1].

[1] On the right: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/fil...

That came from one of Lin's first posts about WebAssembly: https://hacks.mozilla.org/2017/02/a-crash-course-in-just-in-...

The "thing" on the right is a reference to the aliens in the movie Arrival where the main character (a linguistics professor) communicates with tentacled smoke beings by holding up white board messages.

Thanks, the reference went right over my head. Was the movie any good?

very good

Yep. Lin is amazing.

This definitely deserves praise.

You can see the difference in the WebAssembly Benchmark: http://iswebassemblyfastyet.com

On my machine I got,

Firefox WASM: 1809ms

Chrome WASM: 2855ms

Edge WASM: 12872ms

Firefox JS: 5413ms

Chrome JS: 4779ms

Edge JS: 8512ms

Chrome linux (v69.0.3497.81): 4916

Firefox linux (v62.0.3): 1904

Firefox v62.0.3 doesn't have this feature enabled?

From article: "This means that in the latest version of Firefox * Beta *, calls between JS and WebAssembly are faster than non-inlined JS to JS function calls."

"The score is the total benchmark time. The lower the score, the better."

iPad Pro 9,7 (2016), iOS 12: 19542.

Chrome linux 11835

Firefox linux 2132

Do wasm -> builtin calls, e.g. DOM methods still call the JS code if JS code has been monkey-patched?

E.g. if userscript replaces the built-in window.fetch() API to modify page behavior will wasm also be intercepted?

At the moment, wasm can only call functions that were passed as values of the import object which is passed to one of the instantiation functions (e.g. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...), so which function gets called is up to the embedding JS code.
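You can see this with a tiny hand-assembled module (the bytes below encode a module that imports `env.f` and exports `g`, whose body is just `call f`). Whether a later monkey-patch is observed depends entirely on what the embedding JS put in the import object — here the glue re-reads a variable, so the patch is seen:

```javascript
// Minimal hand-assembled wasm module: imports env.f () -> i32 and
// exports g, which just calls f. Runs synchronously in Node.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7f,       // type 0: () -> i32
  0x02, 0x09, 0x01, 0x03, 0x65, 0x6e, 0x76,       // import section:
  0x01, 0x66, 0x00, 0x00,                         //   "env" "f", func type 0
  0x03, 0x02, 0x01, 0x00,                         // func section: g has type 0
  0x07, 0x05, 0x01, 0x01, 0x67, 0x00, 0x01,       // export "g" = func index 1
  0x0a, 0x06, 0x01, 0x04, 0x00, 0x10, 0x00, 0x0b, // body: call 0; end
]);

let fetchImpl = () => 1;  // stand-in for a built-in like window.fetch
const imports = { env: { f: () => fetchImpl() } };
const { exports } =
  new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);

console.log(exports.g()); // 1
fetchImpl = () => 2;      // "monkey-patch" after instantiation
console.log(exports.g()); // 2 -- seen, because the import closure re-reads it
```

Had the import object held the built-in function value directly, instantiation would have captured it and a later patch would not be seen — it's entirely in the embedder's hands.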

Lin is awesome, but reading this nice article I have one noob question.

> We took the code that C++ was running — the entry stub — and made it directly callable from JIT code. When the engine goes from JavaScript to WebAssembly, the entry stub un-boxes the values and places them in the right place. With this, we got rid of the C++ trampolining.

So they replaced unboxing C++ trampoline with entry stub. But isn't that stub technically written in C++ too?

Based on a quick look through https://bugzilla.mozilla.org/show_bug.cgi?id=1319203 it looks like the entry stub is generated by the JIT itself. So no, the entry stub is not written in C++. It gets output by the JIT as part of the generated jitcode.

Specifically, see the GenerateJitEntry bits in https://bug1319203.bmoattachments.org/attachment.cgi?id=8949... -- the masm.whatever() calls are calls into an architecture-dependent assembler that will output the relevant machine instructions.

There is still an out-of-line codepath that does call into a C++-implemented entry stub in some cases when the argument conversions involved are too complicated to do them directly in the JIT-generated code.

Thanks for this, I was also quite confused as to why the "jumping through C++" stages were so expensive. Thinking about what you've said, it makes a lot more sense that you're moving between two worlds and marshalling arguments back and forth in a variety of expensive ways. It's a shame Lin didn't explain that a little more in an otherwise excellent article, but I guess she was focussing on the JS/JIT/WASM story.

It's not clear to me why calling into C++ is expensive to begin with. My best guess is it has to do with switching to a native stack (switching "folders"). So making the stub directly callable from the JIT means not having to change activations. Also presumably it uses the parameter passing conventions and semantics of all other JIT functions (whereas with a C++ function it presumably has to marshal the parameters to the function).

Function calls from WASM to JS don't seem to have improved so much.

I don't know what I want more, DOM calls from WASM, or C++ modules. A web without javascript would be welcome.

> Function calls from WASM to JS don't seem to have improved so much.

Yeah, Wasm => JS calls used to be reasonably fast because we had optimized that before. The work described here made that path much nicer, though (a more unified stack layout for JIT/Wasm frames) and also a bit faster still.

JS => Wasm calls being slow was one of the big performance cliffs in SpiderMonkey and I'm really glad that's fixed now.

Timing for calls from WASM to JS went from 750ms to 450ms for 100 million calls, so from 7.5 nanoseconds per call to 4.5 nanoseconds per call.

A 3 GHz CPU does 3 ticks per nanosecond, so such a bridging call takes roughly 23 or 14 CPU ticks, which is quite good IMHO (I just hope I didn't blunder the math, but function call overhead from WASM to JS isn't much of an issue; it was already fast after the previous round of optimizations).

> A web without javascript would be welcome.

Why? What is it that wasm or c++ provides you in making a DOM call that modern JS does not?

Well, kind of looking forward to getting back to using languages like Java, Kotlin, C#, Swift, ObjectiveC and other languages people have been using to make UIs for quite some time in a browser. It will take some time for proper frameworks to emerge. But things are moving fast. Driving the DOM via react and other js frameworks from these languages is kind of possible. But it's probably just a stepping stone for getting something a bit more appropriate to these languages.

I have high hopes for Kotlin here. With Kotlin Native they are targeting native iOS and native Android. Being able to drive browsers from the same code-base via WASM seems like it will be a useful feature. Also, they seem to be really serious about tool support with Kotlin; providing dom/webgl and other bindings, and having decent end to end support with IDEs and other tools.

I imagine Microsoft might have similar plans for C#. And it wouldn't surprise me if Apple were working on some Swift WASM support too.

Wasm promises a better compiler target than JS for alternative languages. It always seems odd to me that JS is so heavily defended when pretty much everything that makes up a modern JS stack are other compile-to-js languages: JSX, Elm, TypeScript, Clojurescript, Coffeescript -- hell even ES6 is for all practical purposes a compile to JS language.

Changing the game into "browsers run WASM" and JS is a built-in JIT compiler for it promises to make everyone's lives much easier.

If you're working directly with the DOM or other JavaScript APIs it's still going to be a garbage-collected, object-oriented data model.

The JavaScript object model is likely to remain fundamental for cross-language within-browser interop, just like C APIs are fundamental for foreign function calls in other environments. (Or Java APIs in a JVM.)

I’d imagine that at its core, all the DOM access will be just another WASM call, and WASM has no concept of objects AFAIK? It would be up to each language to represent those bindings however it likes. I don’t think the functions would even have names per se; similar to minified JS, you know some magic reference number but a human-readable name is not relevant

all assumptions here
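A sketch of what such numeric bindings could look like from the JS glue side (everything here — names, the stand-in "DOM" objects — is made up for illustration; real tooling like wasm-bindgen is more involved):

```javascript
// Hypothetical glue: JS keeps the real objects in a table; the wasm
// side only ever sees integer handles, since wasm has no object types.
const heap = [];                             // handle -> JS object
const alloc = (obj) => heap.push(obj) - 1;   // hand out the next index

// Imports a wasm module would receive -- only numbers cross the boundary.
const bindings = {
  dom_get_body: () => alloc({ tag: "body", text: "" }), // fake DOM node
  dom_set_text: (handle, value) => { heap[handle].text = value; },
};

// A wasm function body would effectively do: dom_set_text(dom_get_body(), v)
const h = bindings.dom_get_body();  // wasm gets back a plain i32 handle
bindings.dom_set_text(h, 123);      // 123 stands in for a string pointer
console.log(h, heap[h].text);       // 0 123
```

The reference-types proposal is essentially about replacing these hand-rolled integer handles with opaque host references the engine can track directly.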

If you're willing to custom-design a new API by hand, you can do it however you like. But if you want to, say, autogenerate bindings from a TypeScript definition file (the de facto standard for describing JavaScript APIs), you're probably going to end up using or adapting their function names, type names, parameter names, and so on, or you'll confuse the developers.

It can be minified if both the web assembly and foreign code pass through the same minifier, but source code still has to be readable.


The ability to use existing non-JS code

The ability to write loops involving non-trivial computations without spamming a hundred million heap objects


> The ability to write loops involving non-trivial computations without spamming a hundred million heap objects

How does being able to call the DOM from webasm give you that? That's what webasm is for in the first place.

I think the idea is that they can't use webasm (exclusively) to do that without also being able to call the DOM from it.

wasm makes using a statically-typed language possible in web applications. For those that have embraced JS, that may not be compelling but for those of us that prefer strong, static typing it is a positive development.

I'm using typescript, so I already have a statically-typed language for my web apps. What is compelling with WASM is that it lets me provide a web-browser version of my C++ libraries, for stuff which needs decent execution speed and cannot afford a GC.

Emscripten let you do that too, compiling to asm.js. WebAssembly certainly is an improvement, but it's not like you couldn't do this at all before.

And heck, even without that, there are languages that transpile directly to JavaScript.

Look at the comment I was responding to: I wasn’t claiming there weren’t other ways to do this. I was responding to the question as to why you might use wasm over JS. I understand it was based on previous efforts like asm.js.

Also, for several languages, specifically those based on LLVM, the path to wasm is much clearer than compiling to JS.

The big thing isn't so much the performance of WASM by itself, but rather that this performance enables the use of data models which are different from Javascript.

Typescript, Coffeescript and a host of similar options have essentially the same data model. What makes other languages, like Java, Rust or Python actually useful is their way of handling data. Which is why you can't transcribe these languages to Javascript trivially.

Still, the JavaScript ecosystem is a lot better than that of the languages which compile to WASM as of right now. I guess that will change...

The 2nd coming of Flash?

Hopefully not.

Some devs are treating WASM as an entire runtime and outputting canvas/webGL code. I think that is mostly a mistake, and I'll be very disappointed if that becomes the de facto approach for new web devs.

However, the cool thing about WASM is that it's just the code part. So you can build an app that's still normal CSS/HTML, just like every web-app should be, and then you can replace all of your JS with WASM (with the current exception of a little bit of JS-to-DOM binding code which Mozilla has promised to eventually get rid of).

In the world of Flash/Java you're essentially working in a completely separate platform, it just happens to be launched from a web browser. So shortcuts don't work, accessibility is awful, users can't customize the page, extensions are broken, etc, etc...

With WASM, you don't have to be in that position. You can say, "I'm a web developer, I treat the web like a medium, I write responsive HTML/CSS. I just use C instead of Javascript."

> In the world of Flash/Java you're essentially working in a completely separate platform, it just happens to be launched from a web browser. So shortcuts don't work, accessibility is awful, users can't customize the page, extensions are broken, etc, etc...

True, but until Web development catches up with the state of Flash and other native RAD GUI designers, it is a very attractive proposition.

One thing I hate when doing web development is playing around with CSS and tag soup while trying to make it work the same way across all target platforms required by the customer; in native code that would be a couple of calls to the graphics engine and platform widgets, which also provide layout managers.

Stuff like Web Components and Houdini would make it better, but who knows when they will be widely available.

> while trying to make it work the same way across all target platforms

This goes back to what I was talking about with treating the web like a medium, not a distribution platform. Fundamentally, the web is about giving you a "controlled" way to give up control. The native paradigm is "I want control over what each individual pixel looks like", and the web paradigm is, "heck off, well-written apps don't do that."

I understand why it's attractive, and why it'll continue to be attractive. And I highly agree about Houdini being an exciting development (I don't personally think that Web Components are offering much, but whatever). I'm really looking forward to being able to polyfill CSS. But that difference you're talking about is probably not ever going to go away completely, because the web is optimizing to solve different problems than most native frameworks.

Not to say one approach is better than the other (Flash had some legitimate use cases), but I find that there is an actual cultural difference between how web-apps and native apps are designed and how they balance between user control and developer control. It's not just technology.

The sad story is that Flash was superior, but bred a whole generation of bad front-end devs who started as designers and never really learned to code or plan front-end architectures, creating good looking garbage in the process.

I worked with many of them and they are struggling in the post Flash world. This led to a removal of designers from the implementation and a rise of quality of applications, at least on the technical side.

Hopefully with design tools like Pagedraw and FramerX, we will get tools as powerful as Flash, with better integration behavior and more control over the code quality.

What I am really waiting for is WinForms being run and rendered in wasm/canvas!

This is one of my nightmares. But considering how much devs hate js it's inevitable. I almost see it. HTML replaced with some binary WASM bs framework.

Then someone will reimplement a dom and a script language in wasm. History is going in circles.

Not quite a circle. There'd be progress in that you could choose your "Javascript" and "DOM", which really opens up a lot of possibilities. No need to force eternal compatibility or put up with bad APIs; just pick a better version.

This! You will be able to implement your own browser in the browser, with your own brand new accessibility API, i18n and l10n APIs, and your own bugs, of course. We can compile Chrome or WebKit into Wasm and then use them everywhere, even in Firefox!

But you'd still need a browser that supports HTML to load the WASM and display it.

Realistically, that would be not much different than current sites using JS, or Flash when that was a thing, or even Java applets. None of those deprecated or replaced HTML, and WASM won't either. At worst, it will be just another thing you can block with a script blocker.

You may have said this in jest and are getting down-voted for it but this isn't an impossible scenario. I think of this article in relation to what WebAssembly means.


I truly believe it will happen in a couple of years, unless browser vendors change course regarding WebAssembly.

Ongoing POCs running Unity games, .NET, Go, Qt and many others already show it is the direction many are moving in.

I agree and think it's amazing. I'm looking forward to that day and when it comes. The other side is that it's also likely to open up a whole new avenue for web based security exploits.

> I'm looking forward to that day and when it comes.

That is the day where the open web is a little closer to dead. Possibly a lot closer. The move away from flash was a move towards empowering users. That did cost a little bit for developers, as they had worse tools to control the experience, but that also meant that it was more likely (eventually) that the experience they did develop would work in more cases. How many flash sites dealt with different size displays well, or dynamically resized and flowed correctly? How many worked with screen readers?

That's one of the benefits we've reaped by pushing the framework to the browser, and using open standards. Items that traditionally wouldn't have gotten much attention were also seen as important, and saw advances. I don't know how much some random React-like framework that just draws on the canvas would focus on screen reader support, but I suspect it wouldn't be high on their priority list. And even if it is on theirs, what about the 5 other main competitors that will be sharing market space with it?

I just hope that the sites that opt for these types of system are few and far between, and have specific operating needs that make it worthwhile. I think it's good that we can, but usually we shouldn't.

WebAssembly can't close the open web any more than it's already closed today. The vast majority of JS in the browser today is minified gibberish. WebAssembly doesn't take away HTML and CSS.

WebAssembly makes it much more feasible to implement your own layout and display engine on top of a canvas element.

Why more feasible?

Because an application like that is both computationally expensive and expensive to ship as a runtime. WebAssembly can provide benefits in both cases (WASM binaries can be close to an order of magnitude smaller than equivalent JS code, in select cases), and is quick to start.

As a simple example, it's not impossible that Adobe could literally take the flash VM code, compile to WASM with emscripten, and ship it along with an SWF file and a little setup code. This completely bypasses the need for a browser plugin for flash. Now, I suspect the flash VM might be large enough to make this noticeable, but it probably wouldn't be hard for Adobe to streamline it, and then re-purpose a lot of those tools used to build flash that everyone loves to rave about...

I believe there are financial incentives for this to happen, therefore I think it's only a matter of time until it does.

Using WebAssembly + WebGL/Canvas/whatever to basically replicate Flash is perfectly fine, for games. Using Flash for games was never a problem. Flash was a problem when it was used for non-game things. And I suppose it's possible someone could use Unity exporting to WebAssembly + WebGL to produce a crappy restaurant webpage (to name one common Flash offense), but it doesn't really seem all that likely.

Yeah, games are fine. So is some sort of demo (in the demoscene sense), or a video player interface, honestly.

But I saw far too many Flash websites in the past, because sometimes people really just want a powerpoint presentation for a website because they have a "vision", and usability is a foreign concept.

I think it's highly likely some service like Squarespace will deploy sites using something like this, because it both lets them more finely control some aspects and features, as well as make it harder for people to move. All marketed under "website plagiarism protection" or something similar. "Want to stop scraping? Just use this middleware package..."

WebAssembly doesn't really provide any more protection than just highly-obfuscated/minimized JavaScript.

That depends on how dynamic the site is. If your site is implemented on Canvas and doesn't even allow text selection, copying the site to make a competitor would be harder, as would scraping price/inventory lists, or any sort of scraping. That's both the selling point and the problem.

One issue regarding security is that until WebAssembly gets some kind of fat pointers or tagging, modules generated by unsafe languages can be exploited by corrupting internal state, thus changing the behaviour of exported entry points.

You can easily test it by creating a WebAssembly module in C with a function that overwrites an internal memory buffer, placed besides other data, which is used to parametrize behaviour of another function, for example a boolean stating that the user is authenticated.

WASM doesn't get raw access to host memory, so in practice this does not expose any new security risks. You can already pause Javascript, modify the internal state, and even overwrite existing functions while code is running. This is one of the reasons why you should never `eval` untrusted code.

This kind of thing is a risk for NodeJS, but that's only because NodeJS inexplicably allows host access by default. I believe Dahl is looking to address this in his next project.

> for example a boolean stating that the user is authenticated.

You should never do authentication clientside. Clients are untrustworthy.

Usually memory corruption isn't a JavaScript feature, unless there is a bug in the VM.

> You should never do authentication clientside. Clients are untrustworthy.

Nice way of picking up on my example, that was just an idea for a quick POC.

Well, sure. If there's a bug in the VM, then you have a problem. But your problem in that case is a lot bigger than this specific behavior. You don't need memory corruption to break clientside JS.

I wasn't trying to cherry-pick your example, but I see your example as indicative of the only types of bugs that are exposed by allowing unsafe memory access.

If you're writing a web app today, any unvalidated code you run on the client is a risk. WASM changes nothing about that. The only reason clientside memory access in a sandbox would ever be a security issue is if you were relying on a client not being able to access itself (ie, for stuff like authentication). And since you should never, ever trust the client anyway... I don't see why anyone should care about that class of issues. Don't run unvalidated code, and don't put any serverside logic with security implications on the client.

If you are going to run unvalidated code, you can put it in a WASM sandbox with its own dedicated chunk of memory. That's actually easier to do in WASM than it is in Javascript, since you don't need to worry about references across iframes anymore.
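Sketch of the JS side (the import names at the end are just an assumption about how the module was compiled):

```javascript
// Each sandboxed module gets its own WebAssembly.Memory instance;
// nothing is shared unless you explicitly pass it in as an import.
const memA = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const memB = new WebAssembly.Memory({ initial: 1 });

new Uint8Array(memA.buffer)[0] = 42;
console.log(new Uint8Array(memB.buffer)[0]); // still 0: fully independent

// At instantiation time you'd wire the dedicated memory in, e.g.:
// WebAssembly.instantiate(untrustedBytes, { env: { memory: memA } });
```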

If you can think of a valid POC that's not already a security risk using current technologies, I'm open to it, but I can't think of one.

"Some kind of fat pointers or tagging" is exactly the sort of thing WebAssembly was created to get away from.

You can use a language that provides them and targets WebAssembly, without taking away the thing that makes the platform uniquely useful.

Right, but how can you trust a WebAssembly library?

It might have been compiled from Ada, Rust or plain C, just as an example.

Telling people just to use a language they trust is not an option if they are using other people's libraries.

Thus bringing to the web the same security level as depending on regular native libraries, because a regular user is not going to check what a web page depends on.

If you're bringing in multiple separate WASM libraries, they'll run separately from each other. Don't give them access to any memory they shouldn't have access to.

Note that this is considerably better than Javascript, because without going out of your way to make an iframe, separate Javascript scripts aren't isolated from each other, so you don't know if one of them is modifying a prototype or hooking around other variables in the global scope.
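A trivial sketch of the kind of cross-script interference I mean (think of the two halves as separate script tags on the same page):

```javascript
// "Script one" silently patches a prototype method
// that everything else on the page relies on.
Array.prototype.includes = function () { return true; };

// "Script two" has no idea, and its whitelist check is now useless.
const allowed = ["alice", "bob"];
console.log(allowed.includes("mallory")); // true
```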

By default, separate WASM modules don't have access to the same memory.

The whole point is about corrupting a WASM module's INTERNAL memory via public exported functions, just because they happen to have typical out-of-bounds errors.

Linux Security Summit 2018 had a report that 68% of kernel exploits are caused by out-of-bounds errors.

User space doesn't write to kernel memory, but it can make syscalls, which happen to trigger such exploits.

WebAssembly is no different, regardless of how modules are sandboxed from each other.

If you have access to JS on the host system, why would you use such a roundabout, overcomplicated way to attack a module?

Why wouldn't you just use JS to remove the WASM module, replace it with a malicious module you wrote, and then overwrite the exported functions? Safe pointers won't protect you from a malicious host environment.

And like the original parent said, if you have code where you want safe pointers, use a language that requires them. Your module will be protected from any other WASM dependencies you bring in because they run in separate contexts and can't access each other. Don't give them access to critical memory. WASM makes this easy. You're just not protected from a compromised host environment -- but that's not new, that's the case for everything on the web.

WASM protects the page from your module, not the other way around. Imagine that I'm running Linux, and I put Windows in a VM inside of Linux. Is it a problem that Linux could compromise or mess with my Windows VM? Of course not, that's intended -- the point of a VM is not to protect the system running inside the VM, it's to protect the system running outside of it.

Maybe I don't understand what your threat model is. Are you worried about serverside code? I don't know that many people are really looking into running WASM on the serverside, and I don't see how the threat model is any different from writing a server in something like C++.

Not sharing memory is a pretty big restriction, though. For example, if you have variable-length data like a string, you can't pass it to a WASM module instance without either calling a function repeatedly for each data unit (slow) or sharing memory and passing a pointer (unsafe).
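The workaround today is the copy-in-and-pass-a-pointer dance, something like this (the pointer would really come from the module's exported allocator; 0 is just a stand-in):

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });
const bytes = new TextEncoder().encode("hello, wasm");

// Copy the UTF-8 bytes into the module's linear memory...
const ptr = 0; // stand-in for something like exports.malloc(bytes.length)
new Uint8Array(memory.buffer).set(bytes, ptr);

// ...then the module receives (ptr, length) as two plain integers,
// e.g. exports.takeString(ptr, bytes.length), and reads the bytes
// back out of its own memory. Round-tripping it here to show that:
const roundTrip = new TextDecoder().decode(
  new Uint8Array(memory.buffer, ptr, bytes.length));
console.log(roundTrip); // "hello, wasm"
```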

Someone can correct me if I'm wrong[0], but I believe the proposal for shared linear memory allows importing multiple chunks.

So assuming nothing changes and my understanding is correct, you'll just have to split your memory into two chunks; one that's safe to share and one that you keep private to your module.

And of course, unless you're writing WASM yourself by hand (which is possible, but I don't know how many people are doing it), your compiler should handle all of this for you -- so at the point where Rust or whatever says, "we want to allow you to split your code into multiple WASM modules instead of just bundling it all into one", it would have to come up with some kind of allocation strategy for this.

[0]: https://webassembly.org/docs/semantics/#linear-memory

That's a link to the spec, which documents the fact that every memory operator refers to a single "default" linear memory (which can be shared or private). I don't think any of the features on the roadmap (https://webassembly.org/docs/future-features/) include support for this kind of chunked memory. I agree that it would be very useful, though!

> Linear memories (default or otherwise) can either be imported or defined inside the module.

> In the MVP, linear memory cannot be shared between threads of execution. The addition of threads :unicorn: will allow this.

To me that sounds like, "define multiple linear memories, then import them after you instantiate the module." But I could be misinterpreting.

I don't think the thread proposal will change anything here. Like the existing memory instructions, the atomic instructions proposed for threads (https://github.com/WebAssembly/threads/blob/master/proposals...) don't take a linear memory parameter. That means they operate on the same default linear memory.

You are very right, dug into it a bit more and none of the memory functions (new atomic ones included) have parameters to specify a memidx. Thanks for looking into that, it's good to know.

> In the current version of WebAssembly, at most one memory may be defined or imported in a single module, and all constructs implicitly reference this memory 0. This restriction may be lifted in future versions.

But I don't get the impression that it's completely unplanned. Modules still take a vector of linear memories, not just one[0], which would be pointless if it wasn't planned to let you access the others at some point. Data segments also explicitly allow referencing a memidx[1] (even if right now 0 is the only one that's allowed).

Still, I've been under the impression that this was going in as part of threads, and I was all excited since threads were making such good progress. I'm a little disappointed to find out it's not.

It's also possible globals[2] might solve some of the same problems? But I haven't dug into them enough to know what their memory model looks like or how well they work across multiple modules, and in any case, sharing multiple globals is probably going to be less convenient than just sharing memory, so I'd still like to see the ability to import multiple chunks of memory.

[0]: https://webassembly.github.io/threads/syntax/modules.html#

[1]: https://webassembly.github.io/threads/syntax/modules.html#sy...

[2]: https://developer.mozilla.org/en-US/docs/WebAssembly/Using_t...

For what it's worth, I think I remember that .NET is still using reasonable web standards under the hood. It's a framework, but it's still using CSS/HTML for actual rendering.

If I'm remembering correctly, I wouldn't put that in the same category as Unity and Qt.

Unity is also .NET, and then there are UNO and UWP for WebAssembly as well.


Blazor[0] was what I was referring to, but you're right, I shouldn't treat .NET like it's only being used in that context.

[0]: https://blogs.msdn.microsoft.com/webdev/2018/03/22/get-start...

Minified javascript does as much to obfuscate the code as wasm does, if your concern is the black box model.

I can reformat minified JS, then pause at a function, see its arguments, understand its purpose, rename variables, add a comment, and so on.

Can I do all that in Wasm?

For practical purposes of reverse engineering, minified JS is far better than Wasm. Wasm is the new Flash.

Hopefully without the Swiss cheese security.

At the moment I don't think there is a frontend language/framework which compiles to WASM and is actually worth using, at least from a perspective of frontend productivity. For performance or porting legacy code, WASM is already useful.

I mean, yes, there are things like Blazor or similar things for Rust or Nim, but these aren't yet compelling to me.

What would make it compelling?

(I don’t think this is where wasm will be used as much but I’m still interested!)

Something like react or vuejs in Python, with similar performance and equally or more productive.

There is something like this for Rust, I think.

But Python for the Browser is not compelling unless we both have a faithful implementation of the language and data model and a productive and beautiful frontend framework. Call it a React for Python or a Django for the frontend.

Similar goals apply for other languages which have an edge on Javascript in some regard.

Okay yeah, I wasn't sure if you were saying "the frameworks that exist are bad" or "a framework should exist". Thanks.

(And yeah, there are several of these in Rust now, but they're extremely early days, and not really ready for anything serious at all.)

I doubt we'll see that. Python just isn't that different to JS (so what's the point), and is in any case much slower than JS in all current implementations (so performance parity is unlikely).

Probably you don't know Python that well, then...

Also we don't need performance parity. We just need enough performance for such a framework to not suck. Interpreting Python bytecode in Javascript already doesn't suck that much, compiling bytecode to WASM and then to native machine code would suck even less. And there are always ways to improve the performance of critical parts.

While searching for more information on a configuration value mentioned in the article (javascript.options.wasm_gc) I came across another article on the topic, which I also found interesting because it's a bit more technical:


I hope the Chrome team is looking at this too; with 65+% market share, it'll be really impactful when Chrome makes similar optimizations.

Lin is amazing. I do enjoy reading her posts and learn a lot from them.

Did anyone re-implement the basic React API surface, using Wasm as a backend for the Virtual DOM? To me, it sounds like this would be an interesting project.

Not totally the same, but Blazor[1] is a similar framework powered by dotnet on wasm. As a React dev, looking at a Blazor component feels very familiar. I'd love to see something implemented in Go or another language without all the extra baggage dotnet comes with currently.

[1] https://blazor.net

There's also yew[1], which cites ReactJS as a direct influence. It implements a virtualDOM and is quite fast[2]. One of my main motivations for learning Rust right now.



Is there F# support for Blazor? Not a fan of C# for UI work (especially the kind the browser forces onto you).

I’m a little surprised that making cross-language inlining trivial wasn’t (apparently) a design goal of the original WASM spec.

(Maybe it’s functionally easy but getting the heuristics right is hard? Curious what the primary obstacles are for Mozilla or others.)

The article pretty much described what the big obstacle was: boxing. If WASM had boxing, then it wouldn't be any faster than JavaScript and there'd be no point. Since JavaScript requires boxing, there's no way to avoid the impedance mismatch at the boundary between them, and a JIT that wants to support both has to be aware of both boxed and unboxed numbers.

The WASM spec already tries to avoid pretty much every other big problem that would've confounded attempts to cross-language inline. For example, WASM requires structured control flow (IOW, WASM doesn't have `goto`), avoids type punning in favor of a reinterpret primitive, uses function addresses that are orthogonal to memory addresses, and allows non-deterministic floating point bit representations, all to match up closer with the way existing JavaScript JITs already work. But they can't do anything about boxing without either breaking the web or making WebAssembly essentially a binary encoding of the JavaScript language.

It's pretty hard to do well. If you look at Graal, a lot of the cutting edge research work they've done is about how to do cross-language inlining and subsequent optimisations better. It's more of a function of the JIT compiler design than language or bytecode specs though.

Firefox loads and runs Unity3D WebGL apps MUCH faster than Chrome.

The point of UnityJS is to tightly and efficiently integrate Unity3D and JavaScript, so it does a lot of JavaScript <=> C# calls, and I'm looking forward to it getting even faster!


You can pass delegates to C# functions that are directly callable into JavaScript using some magic PInvoke attributes and the Unity Runtime.dynCall function.

Declare a delegate that describes the signature of your C# function you want to call from JavaScript:


    public delegate int AllocateTextureDelegate(int width, int height);
Then declare a static C# method with the MonoPInvokeCallback attribute to implement your C# function:


    [MonoPInvokeCallback(typeof(AllocateTextureDelegate))]
    public static int AllocateTexture(int width, int height) { ... }
Then pass those specially marked delegates to JavaScript and stash them in JS variables when you initialize (it doesn't work unless you use the magic MonoPInvokeCallback attribute):


    [DllImport("__Internal")]
    public static extern void _UnityJS_HandleAwake(AllocateTextureDelegate allocateTextureCallback, FreeTextureDelegate freeTextureCallback, LockTextureDelegate lockTextureCallback, UnlockTextureDelegate unlockTextureCallback);

    public override void HandleAwake()
    { [...]
        //Debug.Log("BridgeTransportWebGL: HandleAwake: this: " + this + " bridge: " + bridge);

In the awake function on the JavaScript side of your Unity WebGL extension (a .jslib file), wrap the C# delegate in a JavaScript thunk that calls into it via Runtime.dynCall:


    // Called by Unity when awakened.
    _UnityJS_HandleAwake: function _UnityJS_HandleAwake(allocateTextureCallback, freeTextureCallback, lockTextureCallback, unlockTextureCallback)
    { [...]
        function _UnityJS_AllocateTexture(width, height)
        {
            //console.log("UnityJS.jslib: _UnityJS_AllocateTexture: width: " + width + " height: " + height + " allocateTextureCallback: " + allocateTextureCallback);
            var result = Runtime.dynCall('iii', allocateTextureCallback, [width, height]);
            //console.log("UnityJS.jslib: _UnityJS_AllocateTexture: result: " + result);
            return result;
        }
        window.bridge._UnityJS_AllocateTexture = _UnityJS_AllocateTexture;
Then you can call the C# method from JavaScript:


    params.cache.backgroundSharedTextureID = id = 
        window.bridge._UnityJS_AllocateTexture(params.width, params.height);
This is zillions of times faster and more flexible than Unity's terrible SendMessage technique for sending messages from JS=>C#, whose only parameter is a single string, which inefficiently dispatches messages by looking up Unity objects by name, and which is asynchronous and can't return a result.

I use this technique to efficiently copy binary textures and arrays of numbers between JavaScript and C#. MUCH better than serializing it as JSON, or base 64 encoded PNG files in a data: url (yuck!).


    function DrawToCanvas(params, drawer, success, error)
    { [...]
        var id = params.pie.backgroundSharedTextureID;
        if (!id) {
            params.pie.backgroundSharedTextureID = id =
                window.bridge._UnityJS_AllocateTexture(params.width, params.height);
            //console.log("game.js: DrawToCanvas: WebGL: AllocateTexture: width: " + params.width + " height: " + params.height + " id: " + id);
        }
        var imageData =
            context.getImageData(0, 0, params.width, params.height);
        window.bridge._UnityJS_UpdateTexture(id, imageData);
        texture = {
            type: 'sharedtexture',
            id: id
        };
        success(texture, params);
This lets me draw 2D user interface stuff, pie charts, diagrams, data visualizations, etc, in JavaScript with canvas, d3, or whatever library I like, and then efficiently use those images in Unity3D as user interface overlays, 3D textures, etc. It works great, and it's smooth and interactive, mixing up 2D canvas graphics with 3D Unity stuff!

Unity is sorely lacking a decent 2D drawing library like canvas, not to mention fancy stuff built on top of it like d3.

I'm currently working on the plumbing to send binary arrays of floats from JavaScript to Unity, so I can pass them right into shaders!

Here's some discussion about the magic MonoPInvokeCallback attribute:


And about Unity.dyncall and the WebGL runtime:




« ... calls between JS and WebAssembly are faster than non-inlined JS to JS function calls. »

Good note taken.

Looking forward to testing this with Blazor. This was the biggest problem currently.

Editing note: the first sentence under "Optimizing JavaScript » WebAssembly calls" seems to be repeated twice with different phrasings.

Some people are just good at explaining stuff.

This was explained really well! I feel more knowledgeable now.

wasted opportunity to mention apple pie in the example function.

Fast? What makes webapps slow isn’t JavaScript, it’s downloading 2 fucking megs of total shit to show the user a bunch of text and form inputs.
