Hacker News
Pay attention to WebAssembly (sheth.io)
344 points by hsheth2 on Jan 31, 2022 | 244 comments

> Figma makes use of a low-level C++ library called Skia for 2D graphics rather than building their own graphics engine

This is not quite right. The bulk of Figma rendering is Figma-custom GPU code. It's true that Figma uses Skia, but only for some specific graphics algorithms in Skia, not as a general purpose rendering library.

Source: I work on this code.

What do you mean by GPU code? Do you mean some code that calls WebGL?

Yes, sorry! I should've written GL, not GPU, but it's too late to edit my comment to fix it. ...though there are some shaders involved.

FWIW I think GPU is fine, as WebGL is the only API browsers can use before WebGPU takes off.

I wonder why we don't get WebVulkan instead. Is it just because Apple Metal is insufficient to implement Vulkan on?

WebGPU is WebVulkan

Oh, good.

Interesting the default to blame Apple, but OK

We know that Apple does not expose a synchronization primitive needed for full Vulkan support. Apple could add such support to Metal any time they cared enough; nobody else can add it for them.

Thanks for the correction - I'll update my post shortly.

Thoughts on the role gaming has to play with WebAssembly?

My team and I are currently working on Unreal Engine WASM support, with WebGPU integration on the way. Personally, I believe native games on the web are going to disrupt Steam and the app stores and enable a whole new distribution channel for developers, especially indies. No 30% cut, and it works on any device with a browser.

Some advantages that Steam will still have:

1. Steam provides a system for user accounts and profiles

2. Steam handles payment securely for both buyers and sellers

3. Steam handles social networks/friend list/multiplayer

4. Steam allows you to have all your games in one place

5. Steam is a platform for reviews

6. Steam has its own internal economy/marketplace

What’s your point? All of that can be done on the web, and besides, it isn’t really targeting the Steam audience at the end of the day. Think the mobile audience, but on the web. You could do a Sims or PUBG in HTML5 today, and all that’s required is a hyperlink to share it with others. It's the same strategy that made Wordle go viral and get acquired today for over $1M.

> What’s your point?

He's addressing the problem with your hypothesis that "native games on the web will disrupt Steam" by pointing out that you're comparing apples (a technology platform) and oranges (a distribution channel). Steam doesn't exist because games don't use WASM.

There's all that stuff on other platforms. The indie games subreddit on Reddit would be neat if I could just open a post and play the game.

I can't imagine I'd use Steam much after that. (And I guess facebook would do the same.)

As for payments, I already pay mostly through Google pay or through paypal. Steam isn't adding a huge amount of value.

What other platforms offer all that stuff? Because I've been looking and nothing comes even close.

> What’s your point?

Why do you expect your Unreal-to-WASM product to succeed when Epic Games can simply build your entire product as a feature of the engine? You are building on their platform and not an open web stack, after all.

Despite having an open web, most people have elected to use platforms. At some point, people will trade revenue for simplicity.

I think OP is working on the Epic team. They already have workable emscripten support in UE4

I don't think so. The Discord in his HN profile points directly at his startup:


This appears to be a 3rd party attempt to bring UE4/5 to the web.

I'd love to hear OP's take on how they think they can outpace or complement Epic on their own internal implementation. At face value I don't see how this isn't a f.lux vs. Night Shift type situation, where the official feature destroyed the market for third parties.

They officially dropped support for their web export. Even if you use the older version of the engine that did support it, mileage varies heavily.

7. Handles refunds without hassle

8. Has a platform for modding (Workshop)

9. Has sales every few months

10. Has festivals that give new games exposure to a big audience

The list goes on... It's always better to have more competition, but anyone trying to disrupt Steam has several big challenges ahead, not just running games on the browser. I imagine Steam could start offering games in WASM as well.

I think it depends

Problems with *traditional* games via WASM/WebGPU in the browser

* Many games are 10-100GB+ in size. The browser provides no good way to store this data for your game, and since you still had to wait 15 minutes to multiple hours for the download, you gain no advantage. Further, browsers have a balance to keep between letting any site put gigabytes of data on your machine vs not. And on top of that, the browser provides no way to prevent losing the data. The user clicks "refresh" or something similar, and now they have to re-do that 10-100GB download.

* Browsers have to work around driver bugs, and it takes time to fix them and get the fixes through the release cycle. For a native game, if a new driver comes out that breaks something, you can try to work around it immediately. That's harder on browsers. Your game can try, but it's a moving target.

* Browsers change stuff that affects perf more often than native platforms do. Today "for (i = 0; i < N; i++)" is faster than "for of", tomorrow "for of" is faster (not a real example). My point is, in my experience, it's much easier to optimize for native since you're writing native code; in WASM you're not. Further, you're closer to the metal in native. Today video -> texture is fast, tomorrow it's slow, the next day audio is no longer allowed without a click, etc... I guess I don't have any stats on which change more, browser APIs or native. My gut, though, is that I've had to change browser content often to keep it running.

* Running games on any device is mostly fiction. Users range from 3090s to 7-year-old Intels to 7-year-old Androids, from touch screens to mouse and keyboard, and with other differing limits (no fullscreen on iOS, no pointer lock, ...). Depending on your game, that's half your market.

In other words, IMO, games you generally find on Steam are not a good fit for the browser.

On the other hand, you could design games that load fast, start up fast, possibly stream data if they need more, etc and you could possibly make some hit games. Maybe even some of the biggest hit games ever. Remember when Farmville was #1?

Still, my feeling is that Unreal Engine in particular is not a good match for making web-friendly games. Most game devs won't pay attention to what it would take to make a web-friendly game. Instead they'll just follow the patterns for native games, pick "Export to Web", and basically put out a very poor experience for the web.

> Many games are 10-100gig+ in size.

For web games it doesn't really matter how big the entire game is, only how much data it consumes per second of game play; the local storage is just another caching layer, not meant to hold the entire game but just the data that's most likely needed next. As long as the user's average bandwidth is higher than what the game needs to keep the 'disc cache' filled, it's fine.

Of course this means the entire asset-streaming, and probably the whole game needs to be designed around this 'number of bytes per second to be presented to the user' limitation, but that's not a new thing. In the past, games were designed around CD-drive bandwidth and seek times.
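As a toy illustration of that caching-layer idea (all names here are hypothetical; a real implementation would persist chunks via the browser's Cache API or OPFS), a byte-budgeted asset cache that evicts the least recently used chunks might look like:

```javascript
// Toy sketch of a byte-budgeted asset cache for a streaming web game.
// Only the eviction policy is shown; persistence is out of scope.
class AssetCache {
  constructor(byteBudget) {
    this.byteBudget = byteBudget;
    this.bytesUsed = 0;
    this.chunks = new Map(); // Map insertion order doubles as LRU order
  }

  put(key, bytes) {
    if (this.chunks.has(key)) this.evict(key);
    // Evict least recently used chunks until the new one fits.
    for (const oldKey of this.chunks.keys()) {
      if (this.bytesUsed + bytes.byteLength <= this.byteBudget) break;
      this.evict(oldKey);
    }
    this.chunks.set(key, bytes);
    this.bytesUsed += bytes.byteLength;
  }

  get(key) {
    const bytes = this.chunks.get(key);
    if (bytes !== undefined) {
      // Refresh LRU position: re-insert at the back of the Map.
      this.chunks.delete(key);
      this.chunks.set(key, bytes);
    }
    return bytes;
  }

  evict(key) {
    const bytes = this.chunks.get(key);
    if (bytes !== undefined) {
      this.bytesUsed -= bytes.byteLength;
      this.chunks.delete(key);
    }
  }
}
```

The game's asset streamer would `put` chunks as they arrive over the network and `get` them as the player approaches the content that needs them.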

As to your other points, I mostly agree: the browser is too unstable a platform, and the people building the web APIs (other than WASM and WebGL/WebGPU) usually don't care much about games.

But there's a huge space below what's called "AAA" which still can make absurd amounts of money (and provide absurd amounts of fun), and for which the tech limitations in browsers are okay-ish. Those games need to be designed from the ground up for running in the browser, porting existing modern games will mostly not work.

Additionally, WebGL 2.0 is stuck on a GL ES 3.0 subset, and when WebGPU 1.0 comes out later this year, it will be a subset of Vulkan 1.0, DX 12, and Metal 1.0, and it will stay that way until it gets adoption across all browsers, before they even think about moving forward.

It took about 10 years between WebGL 1.0 and broader WebGL 2.0 adoption.

And in any case they are hardly available on game consoles.

I don’t know. Nothing prevents indies from selling games directly to consumers right now — it’s not like it’s hard to integrate a payment processor into your website and an auto-updater into your software. WebAssembly only replaces the auto-updater part. The actual value that Steam adds is discovery and trust.

"WebAssembly only replaces the auto-updater part. "

No, it replaces the need to install something on your computer.

For trying out an unknown game, I'd rather have a sandboxed web application than have to install something.

And most people who think in terms of convenience will prefer the no-install solution, too.

Installing new software is a big hurdle for many non-technical people. I witnessed many children's tears, because their parents did not want some game to potentially mess up their computers.

(and some games actually do that)

Are you writing from theory, or from personal experience with game app stores? Because IMO, this is mostly a solved problem in practice with Steam, Epic Store, Windows Store etc. That’s partly why those platforms are so popular.

"Because IMO, this is mostly a solved problem in practice with Steam, Epic Store, Windows Store etc."

Erm. Yes, but wasn't your point that indie developers can market directly (without those platforms) with ease right now?

"WebAssembly only replaces the auto-updater part. The actual value that Steam adds is discovery and trust"

Using WebAssembly brings trust. Worried parents do not have to approve installing another unknown app.

When people purchase the game, will they have a digital copy on their hard drive? Or will they be beholden to the company staying in business and keeping the servers running?

That train left the station at least 15 years ago. Your 'digital copy' is useless if the DRM servers are down (GOG's DRM-free games are a notable exception).

It might be a notable exception, but I have almost a thousand of those exceptions :-P. Also, while Steam itself is needed for the initial download, some games can just be copied to another place and work fine, especially indie games that do not bother with DRM. A bigger issue is the Steamworks APIs, but if you only care about singleplayer games there are drop-in replacements like Goldberg's emulator[0], which is open source (LGPL).

(also there are more places to buy DRM-free games than only GOG - e.g. itch.io, Zoom Platform, GamersGate and Humble Store to name a few - though GOG has most of the games)

[0] https://gitlab.com/Mr_Goldberg/goldberg_emulator

From a technology perspective, I think you're totally right that Wasm and WebGPU support is super exciting for game engines like Unreal/Unity. The ease of distribution could be a game-changer.

That said, I'm not super familiar with the world of game development and distribution. While web-based games have great distribution, the "best" technology or product doesn't necessarily win. Steam has massive power as an incumbent in the space, it does provide a useful service of facilitating discoverability for indie games, and there are definitely some scenarios where web-native games don't make sense [1].

[1] https://news.ycombinator.com/item?id=30157817

Yeah, because instead of targeting Vulkan 1.3, they get to target a subset of Vulkan 1.0. Really exciting.

If you are happy to play PlayStation 2-like games in the browser, yes, because that is about as much 3D hardware capability as they are aware of.

Just interested because you're in this business: do you genuinely think 30% keeps being the number? Seems like that's not long for this world from my limited perspective.

Hard to disrupt Steam if they're taking 10-15%, and I assume Valve still makes a mint at that price point because Steam is ~all they make.

The problem I see here is more that this will just allow Microsoft to undermine one of the few competitors it hasn't yet managed to.

It's really more about "undermining" Apple's app store monopoly ;)

> The bulk of Figma rendering is Figma-custom GPU code. It's true that Figma uses Skia, but only for some specific graphics algorithms in Skia, not as a general purpose rendering library.

Can I ask why you went that route, instead of just using Skia?

I don't know the history, but I think Figma predates much public knowledge about Skia.

Further, I have the impression that Figma has pretty specific and complex graphics needs. For one example of the sort of thing I mean, we decode image data in JS and put it into GL textures without the Wasm side ever seeing the pixel data, so that the browser can (potentially) decode on a background thread and uncompressed pixel data doesn't contribute to Wasm heap memory usage. However, I don't know Skia well, so it's possible Skia has some mechanism to composite GL textures it's not responsible for too.
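Roughly, that path looks like the following sketch (hypothetical helper, not our actual code):

```javascript
// Sketch: decode an image entirely on the JS side and hand the GPU a
// texture, so the uncompressed pixels never touch the Wasm heap.
// `gl` is a WebGL2RenderingContext; the function name is hypothetical.
async function uploadImageAsTexture(gl, blob) {
  // createImageBitmap lets the browser (potentially) decode off the main thread.
  const bitmap = await createImageBitmap(blob);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Upload straight from the ImageBitmap; no copy through Wasm memory.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  bitmap.close(); // release the decoded pixels held by the browser
  return texture; // the Wasm side only ever sees a handle/id for this
}
```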

> it's possible Skia has some mechanism to composite GL textures it's not responsible for too.

I'm pretty sure this can be done in Skia in C++, so it should be possible with WASM. I think it's possible to create a Skia surface using an existing GL texture object.

One thing that I was curious about, if I may ask, is how Figma goes about font rendering?

Not sure if they use the same, but here's an article about text rendering from the CTO of Figma: https://medium.com/@evanwallace/easy-scalable-text-rendering...

Thanks, I actually remember seeing this one (even did a toy implementation which was quite fun). Wasn't aware it was by the CTO of Figma.

> “Near-Native Performance”: Wasm is often described as having “near-native performance”. What this actually means is that WebAssembly is almost always faster than JavaScript, especially for compute-intensive workloads, and averages between 1.45 and 1.55 times slower than native code, but results do vary by runtime.

Yeah, nah: https://github.com/zandaqo/iswasmfast

JS is about 10x faster than wasm in simple linear regression, and 30% faster in Levenshtein distance calculation.

At a glance, the bindings for wasm copy the data.


If the running code is short enough then that copy might easily make the wasm version much slower. That is indeed a known downside of wasm (calls to JS are somewhat slow, and copying of data even more so - wasm shines when you can avoid those things, which certainly limits where it makes sense!).

If it's not that, then a 10x difference suggests you are running into some kind of a VM bug or limitation.
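For reference, the copy in question is the hop through the module's linear memory, roughly this shape (a minimal sketch, not the actual binding code from that repo):

```javascript
// Minimal sketch of the marshalling a JS<->wasm binding performs:
// arguments are copied into the module's linear memory before the call,
// and results are copied back out afterwards.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

function marshalIn(values, byteOffset) {
  // JS -> wasm heap copy (the overhead under discussion).
  new Float64Array(memory.buffer, byteOffset, values.length).set(values);
}

function marshalOut(byteOffset, length) {
  // wasm heap -> JS copy on the way back.
  return Array.from(new Float64Array(memory.buffer, byteOffset, length));
}
```

For a short function like a distance calculation, these two copies can easily cost more than the computation itself.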

Yep, that's it. And the data marshalling overhead is even more pronounced with the native addon example (N-API addon). But that's the thing: it's not "WASM is always faster than JavaScript" as the author presents it. And on top of that, JS engines are no slouches either; you can optimize JS code to be pretty competitive in terms of performance, even for computationally heavy tasks.

WASM is _predictably_ performant, being less subject to the vicissitudes of JIT; it offers better startup performance and allows reuse of existing C++/Rust code, but it's not a cure-all performance solution.

It is pretty much always faster though. The same code completely written in JavaScript replaced by the same code written in Rust and compiled to WASM will nearly always be a speed up if you haven't screwed up and made the comparison apples to oranges by introducing copying or something like that. What's not always faster is mixing because of conversions. But this is just a reason to use more WASM :)

The author isn’t saying it’s a “cure all performance solution.” The quote you copied uses cautious language like “almost always” and “results do vary.”

The author also cites real world apps that have switched to WASM and seen big performance gains (Figma and 1Password), which is much more compelling than the benchmarks you shared.

You do realize that I didn't put up those benchmarks and wrote the article way back in 2017 just so that I can pick on a passage in this article? I was (and still am to a degree) excited about WASM, actually ported a significant chunk of my business logic from JS to C++ only to discover that the whole thing offered only ~20% performance boost, which didn't merit maintaining a whole separate toolchain. I'm not questioning the exciting results that Figma and such achieve, I'm saying that it's not "almost always" a case, and it's disingenuous to present it as such.

I’m just noticing that my comment went from 5 points to 0 after your reply. I’m sorry if it was upsetting. Of course I know you are not the author of the benchmarks repo.

I made that comment in good faith pointing out that the author was not saying WASM is a catch all performance booster.

But I could get on board with your claim here that “almost always” is disingenuous (which wasn’t what you were saying in the previous comment).

Yeah, that repository is measuring performance of a WASM/Node.js hybrid, and not straight WASM. Interoperability costs can certainly dominate in such cases, where straight WASM can bypass much of the cost. Such a hybrid is what you want to measure often, but certainly not always, and a large part of what this article is talking about is the potential of pure WASM unshackled by Node.js.

And how is the data going to reach the "unshackled" WASM in the browser, for example? I'm not aware of any way for the web page to interact with WASM other than through the JS land with the said overhead. The article explicitly mentions usage inside the web page and presents WASM as a performant alternative.

If you look at the success examples of 1Password or Skia they do a lot of compute based on little interaction (not no interaction). "Shackled" wasm occurs when doing things like running a string validation function as a user inputs keys into an input box, when you're constantly going between JS and WASM just to do a little work each time. It's not "I passed the data from JS and therefore it will always be slow now". In short it's not "replace any JS with WASM" it's "replace JS that isn't going to complete quickly with WASM" which is exactly the JS you should want to replace, not the stuff already running too fast to notice.

PSPDFKit is another example that added WASM to great success regarding rendering and searching. And uBlock Origin as well with its massive rule lists.

You’re still missing much of the point of the article: WASM isn’t for the browser only. WASI, for example, is a full environment you can run stuff in without any such interoperability cost.

But even inside something like Node.js or the browser, most of these sorts of benchmark attempts are of converting only small parts of a system, so that message passing or format shifting ends up a comparatively large fraction of the work being done. If you migrate as much as possible into WASM and treat JS as the foreign side rather than WASM, you will tend to find that there’s much less serialisation/deserialisation overhead in the performance-sensitive parts, because the data they needed was on the WASM side from the start, rather than having to be copied in from the JS side. (This is, of course, a simplifying generalisation.) A somewhat more conservative strategy is to still treat JS as the native side and WASM as foreign, but expand the WASM as far as necessary to reach a point where very little data interchange will be needed. What that is may vary enormously, and such a boundary doesn’t always exist.

> You’re still missing much the point of the article: WASM isn’t for the browser only. WASI, for example, is a full environment you can run stuff in without any such interoperability cost.

Of course, but we are not arguing about the point of the article, I'm contesting the exact claim that "WebAssembly is almost always faster than JavaScript". Admittedly, not because it's such a big mistake on the author's part, but because I'm a bit irate with the usual flow of discussions about WebAssembly here on HN that take this claim for granted and regurgitate it incessantly. This is not the case for the mentioned reasons: data copying overhead and JS itself being compiled by JIT resulting in pretty performant code if well-optimized. Also, the comparison to JS only makes sense when we are talking about running both in the JS engine, not against a dedicated WASM runtime, otherwise it would be tantamount to arguing that Java is not faster than JS.

By the way, if someone really wants to dig into the issue, I recommend an article from one of the V8 authors where he dissects one such WASM success story: https://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to...

I think part of the pushback you're getting here is that you're contesting "WebAssembly is almost always faster than JavaScript" (which IMO is correct and appropriately couched) with "WASM + JS can be slower than just JS" (which is also correct), but the way you're communicating it comes across as "WASM is just slower than JS" (that's obviously not literally what you're saying, but that's how it's coming across).

If we're going to be pedantic and nit-picky about claims, please use more precise counterclaims. It only takes a few extra words to clearly state you're talking about WASM + JS, which is an important distinction since many† people here are interested in WASM-only, without the JS (with the alternative in their mind being JS-only, without the WASM).

†In the spirit of pedantry, I mean many. Not all. Maybe not even a majority. But reasonably many.

> I'm a bit irate with the usual flow of discussions about WebAssembly here on HN that take this claim for granted and regurgitate it incessantly.

It is still more honest to regurgitate the claim that Wasm is faster than JS than to try and argue that "JS is about 10x faster than wasm in simple linear regression."

It's quite obvious and simple: by reducing 'context switches' between JS and WASM. E.g. don't call from JS into WASM for a simple computation, but instead batch thousands of computations into a single call. Also works the other way around. When calling from WASM into web APIs, think of those calls as expensive 'syscalls' which need to be minimized.
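To make the batching idea concrete, here is a tiny wasm module exporting a `sum(ptr, len)` function: JS writes a whole array into linear memory and crosses the JS/WASM boundary once, instead of once per element. (The module is hand-assembled from raw bytes purely so the sketch is self-contained; a real project would compile it from C or Rust.)

```javascript
// A minimal wasm module, assembled by hand, exporting linear memory and
// sum(ptr, len): loop over `len` i32 values starting at byte offset `ptr`.
const bytes = Uint8Array.from([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                         // function section
  0x05, 0x03, 0x01, 0x00, 0x01,                   // memory: min 1 page
  0x07, 0x0d, 0x02,                               // exports: "mem", "sum"
  0x03, 0x6d, 0x65, 0x6d, 0x02, 0x00,
  0x03, 0x73, 0x75, 0x6d, 0x00, 0x00,
  0x0a, 0x2e, 0x01, 0x2c,                         // code section, one body
  0x01, 0x02, 0x7f,                               // two i32 locals: acc, i
  0x02, 0x40, 0x03, 0x40,                         // block, loop
  0x20, 0x03, 0x20, 0x01, 0x4f, 0x0d, 0x01,       // if i >= len, break
  0x20, 0x02,                                     // acc
  0x20, 0x00, 0x20, 0x03, 0x41, 0x04, 0x6c, 0x6a, // ptr + i*4
  0x28, 0x02, 0x00,                               // i32.load
  0x6a, 0x21, 0x02,                               // acc += loaded value
  0x20, 0x03, 0x41, 0x01, 0x6a, 0x21, 0x03,       // i += 1
  0x0c, 0x00, 0x0b, 0x0b,                         // continue; end loop/block
  0x20, 0x02, 0x0b,                               // return acc
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const { mem, sum } = instance.exports;

// One copy in, one boundary crossing, instead of one call per element.
const data = Int32Array.from({ length: 1000 }, (_, i) => i + 1);
new Int32Array(mem.buffer).set(data);
const total = sum(0, data.length);
```

Calling a wasm export 1000 times to add 1000 numbers would pay the call overhead 1000 times; here it is paid once.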

Consider a browser where the DOM is implemented in Rust and exposed to Javascript via some form of FFI.

The same DOM API could be exposed directly to WASM via WASI-like calls, without going through Javascript.

This is already being addressed in thread but I’d like to share some “it depends” thoughts based on experience from recent experiments with WASM and N-API. As others here and elsewhere have said, it depends on:

- how compute-heavy your workload is

- how long it runs

- how much you need to pass data back and forth

That last point is hard to overstate. It’s such an overhead that “notoriously slow” (however outdated the notoriety) JS operations like structured clone run circles around highly optimized native-bridge/WASM data transport solutions.

It’s such an overhead that—for short bursty interop around values that can’t meaningfully benefit from being passed through—you have to write JS that might as well be native integer wrangling but has to be written in JS to reduce the cross-talk. That covers pretty much any workload that involves passing around JS functions as values, or preserving inheritance chains, or dynamic usage of functions generally.

I’m excited about WASM and other JS<->compiled interop, it has a bright future. But unless and until JS as a standard and JS VMs at runtime specifically provide ways to optimize crossing that boundary, the vast majority of already hard to optimize JS use cases are either better solved natively in general with minimal interop (where WASM et al will shine) or better solved by optimizing the JS implementation where the use case necessarily deals with JS functions.

I actually tried wasm a couple of months ago with Golang in the browser, and was very disappointed.

It is indeed way slower than JS (and yes, I tried to care about performance and understand what was going on, removed any data exchange with the browser, and even ran the tests without any dev tools open - just in case).

Not even mentioning the fact that to use it, you need to include and hardcode a dubious JS file that does not even integrate properly with npm or any build system. And that the standard wasm way to exchange variables and events is not supported (instead you are forced to use some automagic and slow DOM bindings).

Performance with alternative compilers (TinyGo) was a bit better, but the integration was still disappointing and unsatisfying.

Levenshtein distance with WASM is 40% faster in the benchmarks on the linked Medium article.

WASM (145,086 ops/sec)

JavaScript (102,775 ops/sec)


And that's another thing about JavaScript engines: they evolve and usually deliver better performance over time. The article was written in 2017 and its results are from the Node.js of the time, while the results in the repository are from last year.

Can confirm. I ran the benchmark in that repo, and rather than JS being 30% faster in Levenshtein distance, it was ~10% slower, but both had improved relative to the native score.

Is it a naive recursive implementation or the matrix formulation?

Only because this is a bad demo. Do the whole thing in WASM without marshalling and it'll smoke JS.

I did some tests on my side (Rust WASM, quantum computing numerics) and was surprised that the WASM code is only 1-2x slower than native, depending on the browser. Since this code is at least NumPy-fast (a lot of optimizations purely for quantum operations), I don't believe that JS can do it 10x faster.

Your Benchmarks May Vary.

Hey, I'm Syrus From Wasmer [1]!

I'm really happy to see more people bringing their attention to the Wasm ecosystem. There are tons of opportunities in this space. Regarding WAPM [2] and how its development has become a bit dormant, expect news about it soon. Can't wait to share what we have been working on!

[1] https://wasmer.io

[2] https://wapm.io

Hey Syrus, that's great to hear - I'm excited to see what WAPM has in store!

Please consider raising the priority of no_std wasmer support. This is the biggest barrier to seeing wasm in some really cool areas: kernel, embedded, etc.

Looking forward to it!

WebAssembly will not displace Server Side languages.

"Oh but it compiles once and then runs everywhere!!!!" ...ceased to be an argument over 15 years ago, when it became clear that the "backend" is synonymous with Linux, and almost every machine has the same architecture. Plus, the time required by the compilation step is not prohibitive in a CI/CD pipeline.

I understand that compilation is not an issue, but WDYM by the other stuff? Even Python is a bit of a PITA to package and run where interpreters already exist, and not all distros are the same. The advantage of a VM or VM-like container is exactly that you don't need to care about that stuff as much.

JS, by virtue of being properly sandboxed, seems to be in a position to prevent coupling outside expected interfaces, and hence reduces the scope for compatibility creep. Maybe?

> Even python is a bit of a PITA to package and run where interpreters already exist

That's mostly a result of everything and everyone having a different opinion on where Python modules should live in the filesystem, and of a lot of package-managing software defaulting to replacing Python installations between versions instead of shoving them into different packages and being done with it.

> The advantage of a VM or vm-like container is exactly that you don't need to care about that stuff as much.

The same advantage exists for compiled languages, provided that dependencies are either statically compiled, unambiguously listed, or shipped with the software as a package.

A similar effect is achieved in Python Projects by virtualenving everything, essentially stuffing all dependencies, including the correct version of the interpreter, in one location as a single package.

There are also differences between lib/interpreter versions (str differences, use of byte arrays) and stuff like Unicode/crypto support and the C library used (e.g. musl).

The "provided that" is exactly what I'm sceptical of - if the language is general use, not everyone will play ball unless the language/language-derivative is for the explicit purpose of maintaining this compatibility, and this is somewhat enforced. Python packaging would also be great if it shipped with an unambiguous and bulletproof/portable solution, but it didn't, and then the wheel was (partially) reinvented several times.

This is before we even consider bad-actors purposefully attacking (python packages are also susceptible to this I believe).

Reading the article, I felt like I was back in 1996 reading about the Java bytecode compiler and the JVM plugin for the browser. Java had such great promise for web page-hosted code, that would be portable, performant, and safe. Why would one think WebAssembly will succeed where Java failed? (Well, Java did not exactly fail, but its purpose and typical usage radically changed over the years.)

Java failed because

- it depended on an installation outside the browser, poorly versioned

- and a plugin too, which was extra noise for the user, and not nearly so pain-free as Flash's was

- it was not, under any circumstances, performant (this was before HotSpot)

- AWT was truly, thoroughly, god awful (this was before Swing)

Finally, it was caught between two realms. Clunkier than JS and more difficult to work with than Flash, it tried to do both and ended up doing neither.

So, why should WASM succeed where Java failed? Well, for pretty much every actual reason Java failed; there's really little basis to compare them. WASM is implemented in-browser (and the runtime is very small), it's got a ~1.5x slowdown compared to the 2.5x-3x of the big clunkers like Java and C#, and most importantly it knows its place: low-level computation. It does not try to do GUI, but leaves that up to the environment; it does not try to do a high-level object model, but leaves that up to the language being compiled for it. What reason do you see that Java failed that's also applicable to WASM?

In the 1990s I was building, among other things, web-based controls (in Java) for backend systems. People were completely blown away with what could be done. (The major competing technology was HTML forms and HTTP PUT with Perl CGI.) With the exact same language I was also able to build a high-performance peer-to-peer file sharing system for backend systems. (2+GB files to personal workstations across a large geographic area through ATM -- practically in real time compared to the NFS alternatives.) There was every reason to believe that Java's portability and just-in-time technologies would dominate the computing landscape. I think we can look to Sun Microsystems (and now Oracle) for reasons why Java was eclipsed[0] by other technologies. I have no idea what kind of longevity WASM will have, but I have seen so many technologies come and go over the decades, I know technical excellence is not the primary predictor of what stays or goes.

[0] Yes, there is a pun buried in there.

A big difference: as a user I could "feel" whenever a Java applet was loaded. Suddenly everything became slow, and only rarely were the GUIs acceptable. That diminished its reputation among users, aside from the security problems etc.

With WASM the user doesn't notice it. Without looking at dev tools you can't tell if some of the functionality is coming from wasm or JavaScript/typescript/..

In consequence users won't develop the "yikes, Java" reaction.

wasm got the chance to slowly creep into stacks.

I always feel these comparisons do not make much sense - it's like comparing some pre-industrial thing with its modern equivalent: browsers were simply absolutely not what they are now.

Like, two of your points were simply politics - JS just happened to get chosen - while performance got rapidly faster in the following years. If anything, Java was well ahead of its time and the surrounding environment was not yet ready.

>- it was not, under any circumstances, performant

It was performant enough for one of the most popular games of all time, Minecraft, which was originally a game in a Java applet.

As mentioned, that happened much later, so it is irrelevant.

But for the others: Java is insanely fast today, with pretty much state-of-the-art GC. In practice, a really significant chunk of all important server backends run on the JVM.

Minecraft was started in 2009. Applets were one of Java's flagship features, i.e. released well before the VM was in a performant enough state.

Incidentally, Minecraft Java has always had horrible performance, to the point where it was rewritten in C++.

There are two versions of Minecraft now.

The main reason Minecraft Java is still alive is the hackability and the resulting extensive mod ecosystem.

It was rewritten in C++, because Playstation, XBox, iOS, Android[0] don't do Java and Microsoft wanted a single codebase.

[0] - Android Java isn't really the same Java as in Minecraft.

Write once, run anywhere ;)

There are fundamental differences here.

Webassembly is a small stack machine based on an open standard.

It is built bottom up, with a very small core. Additional features like SIMD, garbage collection, POSIX-style system interfaces (WASI), module linking, ... are built as optional extensions based on real-world feedback.

There already are a multitude of different implementations. (the three browsers, the Wasm3 interpreter, wasmtime, Wasmer, ... )

Java, in contrast, was and is a huge runtime, not based on a standard, and has extremely complex semantics, many of them tied to a single language model.

Wasm is already seeing adoption across a wide range of domains.

I do believe the potential for WASM to succeed where Java failed is wide open.

With all due respect, do you know anything about Java?

What do you think it is? https://docs.oracle.com/javase/specs/jvms/se7/html/

And no, Java is absolutely not a huge runtime - it is a simple stack-based VM with garbage collection, made originally for goddamn TV set-top boxes, so it has a very small instruction set. It just happens to be so good, for multitudes of reasons, that the biggest implementation (and yes, Java has an absolutely insane number of implementations - your claim of wasm having multiple implementations is just ridiculous compared to the number of JVMs), OpenJDK, basically runs the backend of a great deal of all significant web applications.

Fun exercise: write a simple compiler with a JVM output. The JVM stack language is so easy, and it's the best way to explore it / something like it!

(Otherwise you're spot on and the GP clearly has no idea about Java)

I am (somewhat) familiar with the JVM.

Yes, it's also a stack based VM that is relatively small, at least superficially.

It's really not that simple though. Things like GC, exception handling, a whole class model with constructors and methods, ... All of that brings countless implicit requirements that are only passingly mentioned in the spec. Plus the whole host of complexity needed in practice for supporting the Java Platform.

Core Webassembly has nothing of that.

But to be fair: Webassembly is also growing more and more complex (SIMD, reference types, GC, interface types, module linking, tail calls, exceptions, ...)

Can the JVM be what Webassembly is? Or the CLR, for that matter? Technically: of course.

My point was really more about the ecosystem as a whole. JVMs were never really intended as a general-purpose, multi/cross-language ecosystem for generic computation. And that is noticeable everywhere: in the build tooling, in the libraries, in the development process and focus, in the lack of AOT until recently (in open source implementations), ...

Sure, that's changing now with GraalVM and a focus on cross language support. There are some extremely cool things happening there.

I'd still much rather bet on an ecosystem based on an open standard with lots of interest from different parties.

In fact, GraalVM already supports Webassembly, including WASI! So in certain sense Wasm is already more general.

> the lack of AOT until recently (in open source implementations)

The gcc project had a now discontinued AOT java compiler 2 decades ago.

Java is as open as it gets, with a standard that can be changed through a community process. I feel the web is much less open, with behemoths like Google, Apple and Microsoft having basically infinite veto power.

> In fact, GraalVM already supports Webassembly, including WASI! So in certain sense Wasm is already more general.

And teavm can run java byte code both in js as well as wasm :D

> Java is as open as it gets, with a standard, that can be changed through a community process.

Isn't the JCP also controlled by a handful of corporations? Seems pretty much the same to me.

> And teavm can run java byte code both in js as well as wasm

Hah! I didn't know TeaVM had a WASM target, that's actually great to know.

> The gcc project had a now discontinued AOT java compiler 2 decades ago.

Was that ever really production ready?

Yes, Red-Hat used to AOT compile Eclipse with it.

So, honest question: if we're compiling our code to run on servers, why are we compiling it to run on a bytecode interpreter rather than natively?

My single use for docker is to containerize / isolate, not to run across architectures.

I get the in-browser optimized / compiled code. That makes sense.

Wasm doesn't have to run in an interpreter. Many of the runtimes (like Wasmtime, Wasmer) compile the Webassembly to native machine code first. But that native code is still constrained by the Wasm security/sandboxing model, which includes memory isolation. (there is no direct memory sharing between the host and a Webassembly instance)

A few benefits why it is even useful in a server context:

You can distribute a single (small!) artifact and run it everywhere. On a developer machine, on Windows, on an x86 server, on an ARM server, ... Right now you have to build a separate Docker image for each architecture. And Docker is really a second-class citizen on Windows.

Docker images have a huge dependency: an entire operating system syscall API and an entire userspace (with libraries, binaries, a file system, ...) The surface for a Wasm module is much smaller.

Building a Docker image is a somewhat redundant exercise of picking a base image, figuring out the dependencies, keeping it up to date, ... None of that should be necessary.

Security is another benefit. The vulnerability exposure of a Wasm module is much lower than that of an entire OS + userspace sandbox.

Wasm is designed around isolation. In addition, the interface types proposal + WASI is pushing capability based security that works by passing around capabilities, which is a pretty great model.

I outlined some more benefits here a few days ago: https://news.ycombinator.com/item?id=30020121&p=2#30020964

One thing I am missing here. Docker, as much as it has the shortcomings you list, has an entire fleet of POSIX libraries, utils and tools at its disposal. This has meant 99% of applications could be lifted wholesale and deployed within a container. When I look at WASM/WASI there is no networking API, no storage API, no crypto, etc. I can see its value for offloading parts of a web application that require speedier execution, but I cannot see any way of running a full headless application, which is where containers are hugely utilised right now.

Yeah, progress on WASI has been painfully slow.

I think a big reason is that it was pushed by Mozilla, but they laid off all the related employees, and now no-one is really allocating resources to push it forward.

Plus it's still blocked by related proposal like interface types.

I know what WASI is; my point is that there is nowhere near parity with existing POSIX implementations. I have been following the Bytecode Alliance and WebAssembly communities for a while and they can't seem to get beyond the design phase. You only need to look at the networking API [0]: the thread has not been updated for two months, and prior to that folks were asking whether the work is even still happening.

[0] https://github.com/WebAssembly/WASI/issues/370

What's preventing the compilation of dependencies to WASM? Couldn't I (at extreme) run an alpine linux container distro inside WASM? At that point we're full circle, but it should be possible, right?

Thanks for the explanations.

What is that wasm project you're working on, if it has been publicly announced / released? It sounds very similar to Microsoft's Krustlet.dev and Suborbital's Atmo: https://github.com/suborbital/atmo

Very informative, thank you.

A little bit of an addendum: what will make or break WASM is language support.

If enough languages get official or high quality wasm targets, it has a bright future.

Sadly progress has stalled a little bit here, partially due to very slow progress on some features important for higher level languages.

But we'll see how things develop.

The parallels with the JVM are obvious on multiple levels.

In case of WASM I'm curious specifically about cross-language portability. We have learned from the JVM that while technically possible it may not be a popular choice. Java/Kotlin/Scala ecosystems diverged quickly.

So WASM could become a browser technology the way JVM became a strictly server-side thing. Or allow backend developers build frontend applications in a language they are used to.

On a related note, there's something funny about everyone stacking on top of each other. At some point Java will be probably compiled to WASM. You already can go in the other direction with https://www.graalvm.org/22.0/reference-manual/wasm/ . And GraalVM itself is designed to support a wide range of programming languages.

I'm paying attention, but we are still in the hype phase.

"near-native performance" yeah a few seconds per hour. With all the jit and garbage collection, it's way too unreliable for anything that needs real-time performance and more than a pittance of CPU power.

Also, there's a canyon of difference between proper TensorFlow and its browser lite variant. Like the wasm backend chokes on 256x256 px background segmentation while xnnpack easily handles 200fps on CPU only.

For everything that can be inefficient, emscripten was already good enough. For stuff that needs efficiency, Wasm is not reliable enough yet.

> With all the jit and garbage collection

Web assembly doesn't have a GC. It is currently in the works so WASM can support other languages, such as python, more natively. As for the JIT, WASM in most implementations has only 2 levels of compilation. [1]

> Also, there's a canyon of difference between proper TensorFlow and its browser lite variant. Like the wasm backend chokes on 256x256 px background segmentation while xnnpack easily handles 200fps on CPU only.

Native TensorFlow has GPU access, WASM does not. You are seeing the difference between all-CPU and CPU + GPU.

> For everything that can be inefficient, emscripten was already good enough.

WASM is the spiritual successor to emscripten's asm.js. In fact, emscripten emits WASM [2]. It's a little strange talking about it as if it were something different.

[1] https://v8.dev/docs/wasm-compilation-pipeline

[2] https://emscripten.org/docs/compiling/WebAssembly.html

I have low faith in the current WebAssembly GC proposal. It's been such a daunting task that they've started talking about "mini-mini MVPs". The GC proposal does not do what most people think it does, the current draft is basically impossible for most languages to target, and even contributors are finding it difficult to reach consensus [0]. It might be a while before it becomes useful.

[0] https://github.com/WebAssembly/gc/issues/254

This is the thing that concerns me most about the web in general. It feels like nothing is getting done because everyone is fighting with everyone else about everything.

I feel like WASM has just sort of rotted. It feels like so many advancements from threads to GC have been stuck in committee for years now.

I feel the same way about WebGPU. Just doesn't feel like anything is getting done when it comes to new compute standards.

Everyone fighting with everyone else is how every standard is made. Standards evolving slowly is generally a good thing, because it gives people time to understand each standard and learn something before the next standard is designed.

> have been stuck in committee for years now.

I've been noticing this for years too and I've concluded that a significant driver is the fact that certain major corporations (eg Apple, Google, MSFT) who sit on central web standards committees feel their app store and operating system businesses would be threatened if the full potential of Web Assembly gets in standards, implemented and widely deployed.

Perhaps the fact that everything always bogs down in a way that delays or precludes fully unleashing these disruptive capabilities may not be entirely accidental. To be clear, I'm not asserting there's an intentional, premeditated conspiracy afoot. It's always a possibility, of course, but it's also possible that the various stakeholder business units inside these corporations are themselves conflicted, causing internal battles leading to confusion, delay and mixed signals on standards bodies. Sadly, that can end up having very much the same net effect as intentional sabotage.

I came to the conclusion that 3D APIs on the browser are for ecommerce, PS2-like graphics and stuff like Shadertoy, especially when they have Khronos in the mix.

Check this month's WebGPU session; it is always the same: "we hope the community will get together", "it is past MVP 1.0", "no experience with native GPGPU debugging", "you need to put up with what we have, it will improve", ...

Or just make use of a middleware targeting native APIs and move on.

TIL that Wasm doesn't have GC. But I did observe periodic CPU spikes and app freezes with Wasm+WebGL. So now I wonder where they came from.

As for TensorFlow, no I deliberately compared their CPU-only Xnnpack backend against Wasm. So both didn't use the GPU. But obviously Sse3 and avx also help a lot.

Emscripten is a compiler while Wasm is an execution backend. I think it makes sense to treat them as separate because I wanted to highlight that Wasm didn't add anything for my use cases (yet).

Of course, I expect things to get better when the technology matures, but we're not there yet.

Edit: Actually, as soon as you use Wasm with the regular APIs to access the Webcam or use WebGL, they do GC again. That's why Wasm+WebGL has unreliable performance where C+GL is real-time capable.

Plus Wasm obviously can't use zero copy paths when accessing Webcam data on the GPU, whereas I can use DMA with C, UVC, and GL. Copying a 4K frame between buffer formats 2x60 times per second also adds up.

> TIL that Wasm doesn't have GC. But I did observe periodic CPU spikes and app freezes with Wasm+WebGL. So now I wonder where they came from.

Very likely you are observing those spikes because, AFAIK (and I could be out of date here), the WASM -> JS bridge isn't free to cross, and it was likely allocating stuff onto JavaScript's GC heap.

WASM is/was hyper isolated and sandboxed from javascript so any of the javascript APIs, such as webgl, have to jump through a moat of handshakes (similar to Java's JNI stuff).

> But obviously Sse3 and avx also help a lot.

Yeah, probably the case then. I'd have hoped that wasm would do a better job there, but then it'll depend on the browser/runtime.

WASM SIMD support only recently started shipping in browsers as well so depending on when the GP did the benchmark it probably wasn’t used at all which is obviously a huge performance problem.

on an M1 I think TF.js is within 2x of native XNNPACK on a single core. Since aarch64 simd width is also 128, it works out well.

I'm willing to believe that, but who would use a single core for an embarrassingly parallel problem such as image convolution? I'd expect it to use all 8 cores, but AFAIK, Wasm really only uses one. Probably that's the biggest performance factor then.

It seems strange for WASM to get a GC. Shouldn't it just expose a malloc and free call?

The point of giving it a GC is to let it partake in the host environment's GC instead of bundling its own. It's based around the assumption that the host can not only do a better job (eg browser JS VMs have plenty of experience with it) but also the GC can be shared in a larger context than an individual wasm program (eg all programs in the same browser tab can use the same GC).

It's also needed for giving host objects to the wasm program since the host objects would be managed by the host GC.

The thought is less about WASM having a GC and more about WASM being able to leverage the runtime's GC.

The assumption is that WASM will run in something like V8, where it already has a GC for JavaScript. So, rather than distributing a WASM executable with all the code needed to implement a GC, you'd have it rely on the built-in GC. As it stands, WASM is really only a good target for C/C++/Rust. Any other language with a more substantial runtime will end up blowing the byte budget.

The question about "Should WASM do this" is really all about what WASM should/could be. Is WASM going to be a universal language target? Should it support more than low level languages?

And yes, WASM exposes malloc and free.

> And yes, WASM exposes malloc and free.

No it doesn't. Unless you're referring to WASI, but WASI is not WASM, it's just a common API which modules can call when running on WASI-compliant WASM runtimes.

All WASM itself has is linear memory, and that is not exposed via malloc/free, but via lower level instructions to set bytes on a page and grow when necessary (but not "free"). See the available instructions in the spec[1].

[1] https://www.w3.org/TR/wasm-core-1/#memory-instructions%E2%91...
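The same grow-only model is visible from JS through the standard `WebAssembly.Memory` API; a small sketch (the page size is fixed at 64 KiB):

```javascript
// WebAssembly linear memory: allocate whole pages, grow, never shrink.
const mem = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
console.log(mem.buffer.byteLength); // 65536

// memory.grow adds pages and returns the *previous* size in pages.
const prevPages = mem.grow(2);
console.log(prevPages);             // 1
console.log(mem.buffer.byteLength); // 196608 (3 pages)

// There is no shrink/free counterpart: malloc and free, when present,
// are implemented by the compiled program (or its libc) on top of this.
```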

After the political wars against PNaCL and Adobe's CrossBridge.

The Open Policy Agent has been able to compile Rego policies into executable Wasm modules [0] for a couple of years, with some nice performance benefits [1].

[0] https://www.openpolicyagent.org/docs/latest/wasm/ [1] https://medium.com/open-policy-agent/opa-v0-15-1-rego-on-web...

5 years ago they told me I could write multi threaded web applications using C, C++ (or Rust).

Then they realized that web assembly could solve world peace and got to work on that.

Today I am still unable to create a div from a wasm compiled C application.

- yes, yes, I know interface types are cool and yes proposals take a while but please, I just want to write better web apps.

Given wasm will need to be imported as a script src and have a robust threading model before the dream of web applications competing with native applications for performance, I suspect I won't see it usable in my lifetime

There are many things to love about WebAssembly, but let's not forget its downsides. Like that its threading model relies on Web workers and SharedArrayBuffers and is a royal pain in the … to work with. Not to mention that you'll need to become COOP/COEP compliant to mitigate side-channel attacks.

Then there is the plain fact that it requires a compilation step, which is OK for large apps. But it makes me wonder if wasm really is the natural replacement for JavaScript in serverless cloud apps (aka “FaaS”).

So yes, WebAssembly has potential and is almost without an alternative for in-browser apps that do some heavy compute. But I am less convinced about server-side. Time and again, we’re told how great portability across architectures supposedly is. And time and again, the only cloud architecture that matters is x86-64. Maybe ARM or even RISC-V in some cases, but let’s not fool ourselves about these niche platforms.

WebAssembly itself is independent of the web or the browser. Therefore it is impossible for it to depend on Web Workers or SharedArrayBuffers. Rather it is the browser implementation of WASM (the embedder) that uses these technologies you seem to hate.

Just think about it, you even mentioned serverless cloud apps. Are these cloud apps going to run inside a browser and therefore use Web Workers for threading? The answer is obviously no.

WASM itself merely provides shared memory and instructions needed in multithreaded environments like i32.atomic.rmw.cmpxchg and i32.atomic.store, but it doesn't stipulate how threads are created.

I don't blame you but lots of people confuse WASM itself with the browsers' implementation of WASM (which, among other things, decides which functions are imported into WASM and how they work). But to me, the most exciting future of WASM is outside the browser. So we must clearly delineate features of WASM from the browser-specific bits.
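Those wasm atomics map almost one-to-one onto the JS `Atomics` API when the backing store is a shared `WebAssembly.Memory`; a minimal sketch (runs in Node, no workers needed to see the semantics):

```javascript
// Shared linear memory, as wasm threads would use it
// (shared memory requires an explicit maximum).
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
const i32 = new Int32Array(memory.buffer); // buffer is a SharedArrayBuffer

Atomics.store(i32, 0, 41); // analogue of wasm's i32.atomic.store

// Analogue of i32.atomic.rmw.cmpxchg: replace only if the current value
// equals the expected one; returns the old value either way.
const old = Atomics.compareExchange(i32, 0, 41, 42);
console.log(old, Atomics.load(i32, 0)); // 41 42

Atomics.compareExchange(i32, 0, 99, 0); // expected value mismatches: no change
console.log(Atomics.load(i32, 0)); // 42
```

How the threads themselves get spawned (Web workers, Node worker_threads, OS threads in Wasmer/Wasmtime) is left entirely to the embedder, which is exactly the point being made above.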

Oh, I have implemented wasm multi-threading both in-browser and on a node server. On the latter, it uses node's "Worker threads" API, which is very similar to in-browser Web workers. In either case, `SharedArrayBuffer` serves as the backing store for the (shared) `WebAssembly.Memory`. Wasmer is different and more like what WebAssembly should offer as part of the standard. Alas, that's not how wasm works in-browser.

But anyway, I think you confuse my point: the very fact that threading isn't fully covered by the core wasm programming model, but relies on a "hybrid" approach where the embedder needs to plug in those capabilities is a problem. Before wasm, I used Google's PNaCl quite heavily, which didn't suffer this problem and was fully self-contained.

> the very fact that threading isn't fully covered by the core wasm programming model

Does any assembly language have a concept of threading?

WASM is in many ways a virtual machine and not just an ASM dialect. Things like built-in garbage collection are in the plans.

> WebAssembly itself is independent of the web or the browser.

Sure it is, but let me, a backend developer, ask a question:

Why would I want to use WASM for anything running on my server, when I can just write it in Golang, and get actually native performance (not "near-native"), plus real GC, on top of an already thriving ecosystem of libraries and tools?

And lets replace Golang with Rust, C, C++, or Java, or even with a more dynamic language like Python or node if its not performance-critical...the question remains.

My point is, why would I use WASM for something other than what it was originally designed for (running things in the browser), when I already have a big collection of really powerful, established, stable, well supported backend Languages?

What does WASM bring to the backend-table that the current tools cannot provide, or provide badly?

If you are building a multi-tenant backend for e.g. a SaaS app and you want to provide better isolation between your tenants (rather than relying on hardening and security/pen testing of your own code), or want to allow user/3rd-party code to execute on your backend.

How exactly does WASM provide that better than, say, docker?

Business opportunities for new startups trying to sell it as a new idea.

Looking at history from polyglot bytecodes in computing since 1960, there is hardly anything exciting about WASM on the server besides business opportunities to jump into WASM gold rush.

> Just think about it, you even mentioned serverless cloud apps. Are these cloud apps going to run inside a browser and therefore use Web Workers for threading? The answer is obviously no.

Yes actually. CloudFlare and several other cloud providers are trying to push towards V8 based sandboxing on the edge instead of the node-in-KVM model used by firecracker because the former is lighter and easier to deploy.

That's more of a "sort of". It is V8, but no "webworkers as threads", and then things like the cache api, cron-like scheduling, k/v store and durable objects added on.

Like deploying MSIL DLLs and JVM Jars....

The only significant perf advantage WebAssembly has in browser vs JS is currently access to (Pentium II era) SIMD. This is a regrettable choice, because most languages targeting the browsers are now lacking access to SIMD.

Heavier computation is currently being done with WebGL shaders (eg Google Meet background swap, TensorFlow tfjs).

> Heavier computation is currently being done with WebGL shaders

Re: in-browser heavy-computation: Where can I learn more about these patterns to get started? Books / blogs? Thx.

About the BG swap feature there's at least this blog post: https://ai.googleblog.com/2020/10/background-features-in-goo...

Rereading the blog post, they do seem to use WebAssembly as well there and the abovementioned SIMD feature (through XNNPACK).

WebAssembly is full of potential. But so are Java bytecode, SPIR and various others. Why will WebAssembly take off where these haven't?

I think what makes thing succeed or not can be subtle (and therefore it's easy to disagree with). Ultimately, I think WASM will succeed because it's a uniting force among communities. Rust people are behind it, Microsoft and C#/.NET are behind it, people in the Go, Python, and other communities are behind it, and the browser vendors are behind it.

Java applets were things that were slow to load and didn't interface well with the rest of the web. People can talk about how you could technically connect things until they're blue in the face, but Java didn't provide a good experience with HTML. It was its own thing just like Adobe Flash.

I think that comparison is important: Java applets and Adobe Flash were these separate non-HTML entities that felt proprietary and didn't play nicely with others. Maybe something might be open-source, but Java applets felt like they were trying to usurp the web. JavaScript augments the web rather than usurping it - and WASM similarly fills that role.

I think that WASM will succeed where others haven't because it has been designed by the parties you need to get on board to make it succeed. Java wanted to create its own little world against the wishes of the vast majority of developers and users. Flash created its own little world against the wishes of most developers (though most users didn't mind). WASM feels like a neutral target that most of the community can get behind.

While WASM might not be perfect or anything, it does seem to unite a lot of different communities. It doesn't feel like Sun is coming along and imposing Java applets on everyone so that they can better sell Java to enterprises. It doesn't feel like Adobe trying to lock developers into expensive Flash tools. It doesn't feel like Microsoft saying, "Silverlight is our Flash! You should use that!" It's something that engineers from lots of different communities are supportive of and that works with the web rather than trying to usurp it.

Java applets downloaded over 56k and were run (not jit'ed) on pentium 2's. In other words if we took Java, replaced the standard library with an interface onto the browser, would we not end up with exactly the same thing? They're even both stack machines.

Happy to support an argument that removing Oracle's influence is worth reinventing an entire runtime for.

I don’t think that Flash and Java “created their own little world”, but rather that they were entities not “accepted” by browsers, so they remained in a “third-party” status, but that was not their decision to make.

The irony being that now all of them have WebAssembly based runtimes available, so one gets to run two runtimes instead of one.

Not owned by shit for brains Oracle.

Neither was Java at the time when browsers had support for Java applets. At a closer look, Java applets suffered from many of the same problems that plague WebAssembly.

Like the inability of interact with the DOM (or other browser APIs) without calling out to JavaScript.

Or the fact that you had to split up the code on your side into a JavaScript part (these days, perhaps transpiled from Typescript) and a part that was Java code then and is now Rust or C/C++ in case of WebAssembly. It goes without saying that these are very different developer experiences with virtually no overlap in toolchains and programming languages.

Oh and of course even WebAssembly suffers from browser fragmentation. Not only will performance be different, depending on whether you run your wasm module in Chrome, Safari or Firefox. These browsers do also not implement the same feature set (atop the basic wasm MVP). Like Safari doesn’t have SIMD. Other important additions (like tail calls, garbage collection and others) are also not yet universally supported.

For the sake of WebAssembly I do hope that they’ve learned their lessons from why Java applets (and Silverlight, Flash, PNaCl) all failed.

>Like the inability of interact with the DOM (or other browser APIs) without calling out to JavaScript.

We did it more than 20 years ago :) A Java applet could directly call the native DOM API of the owning browser window (Java applet -> Java DOM API -> its JNI implementation in our plugin -> Mozilla's native XPCOM API, including the DOM API), and a Java application could likewise embed the Mozilla Gecko browser engine and call the native DOM API of that embedded browser:


> Like the inability of interact with the DOM (or other browser APIs) without calling out to JavaScript.

They're working on this. It's a tough problem, because someone has to own the resources.

I don't think browser fragmentation is an issue, for the same reason that CPU instruction set fragmentation isn't an issue: you can always compile to the lowest common denominator, or use runtime checks to ensure you only run code paths that the target platform supports, just the same way we do it in native applications today.
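In practice the runtime check is usually `WebAssembly.validate`: feed the engine a few bytes that use the feature in question and see whether it accepts them. A sketch with the smallest valid module (feature-detection libraries such as wasm-feature-detect work this way, shipping one tiny precompiled snippet per proposal):

```javascript
// Smallest possible wasm module: just the magic "\0asm" and version 1.
const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const garbage = new Uint8Array([0, 1, 2, 3]);

// validate() returns true iff this engine can compile the bytes, so a
// snippet containing, say, a SIMD instruction doubles as a feature probe.
console.log(WebAssembly.validate(emptyModule)); // true
console.log(WebAssembly.validate(garbage));     // false
```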

Java also didn't have to face the fact that, now, Javascript runtimes are really good. There's not that much between Javascript and WASM ... and JS can access the DOM.

Just like JavaApplets could.

Sure, because being driven by the likes of Google is so much better.

This is the best response I've seen to this question

Fuck Oracle.

> WebAssembly is full of potential.

Eh... from what I can tell from a mostly casual glance at it (enough for a half-hearted half-complete attempt at writing my own VM for it), I'd say that it is almost a good idea. A lot of the base concepts show a lot of potential, but I feel like the implementation is full design-by-committee derp. Why are there only 32 and 64 bit types? Why are there parametric instructions at all? Why is there no instruction for shrinking or freeing a memory? Why does binary format layer 0 use LEB128 instead of a fixed-width number? Just a bunch of weird stuff like that.

But I'm hardly an expert on JIT or VM implementations, so I could be way off base here.
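For what it's worth, LEB128 is just the varint encoding familiar from protobuf and DWARF: 7 payload bits per byte, low bits first, with the high bit flagging continuation. The trade-off for not being fixed-width is that the very common small indices and section counts cost a single byte. A sketch of the unsigned encoder:

```javascript
// Unsigned LEB128: emit 7 bits at a time, least significant first;
// set bit 7 on every byte except the last.
function encodeULEB128(n) {
  const out = [];
  do {
    let byte = n & 0x7f;
    n >>>= 7;
    if (n !== 0) byte |= 0x80; // more bytes follow
    out.push(byte);
  } while (n !== 0);
  return out;
}

console.log(encodeULEB128(8));      // [ 8 ] - one byte for small counts
console.log(encodeULEB128(624485)); // [ 229, 142, 38 ] = 0xe5 0x8e 0x26
```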

Even if it's weird and designed by committee, at least we have agreement with all browsers. IMO, WebAssembly has got over the biggest hurdle: agreement in a format to feed an abstract processor to program it to do math. Nothing more, nothing less. In the web ecosystem, the most powerful thing right now is agreement. The design might be ugly or weird, but it works and it's done. Lets build on top of it.

Frankly, fuck the web. It's a warzone where Google throws its weight around to enhance the ability of advertisers to make our lives terrible. I was more interested in WASM as a universal VM for all platforms, but it doesn't fit nearly as well for that as it could because of some of the strange decisions this web-focused committee made.

At least Java was also designed to run on not-the-web.

Yes, I am as well. But anything built on top of wasm to run everywhere should probably at least operate well inside a browser environment. Almost as an incubation environment for something bigger. The browser platform is too big to ignore. We rise up from Chrome, Firefox, Safari etc!!! Here and now!!! We Rise!

* Already in all the browsers natively, without plugins or similar third-party optional components, and

* Already widely supported as a compilation target, and not just for special languages targeting the platform

To continue the Java bytecode prior comment, at one point, java was in all browsers of the time (i.e. Applets). However, it lost, and was seemingly removed with extreme prejudice, with no nod to backwards compatibility.

Java also enjoys multiple HLLs that target its bytecode (Scala, Kotlin, Groovy, Jython, etc) and there's nothing in the runtime capabilities that tie it to any kind of constrained platform (threading, IO, async)

Don't get me wrong, I'm really hoping that WASM succeeds, but I'm concerned that the same set of slam dunks we thought we had back in 1996 will again result in getting posterized by DOM/JS.

> To continue the Java bytecode prior comment, at one point, java was in all browsers of the time

No, it wasn't.

> (i.e. Applets).

Applets required (1) installing Java on the system, and (2) installing a plugin for Java in the browser. The capacity to run them was not built in natively to any major browser.

> However, it lost, and seemingly removed with extreme prejudice with no nods to backwards compatibility.

It progressively lost to (1) other plug-in based tech (Flash), and (2) expansion and optimization of the web platform to make plug-in based tech less needed while the security problems of the model became more visible, and finally (3) the rising importance of web browsers that didn't support plugins (which are now dominant even on desktop).

> Java also enjoys multiple HLLs that target its bytecode

But not C/C++ (or, now, Rust/Go), in which a lot of common code on which native-targeted code in other languages rely is written. WASM was specifically designed to be a target for C/C++, and those and languages like Rust and Go already target it.

The capacity to run them was not built in natively to any major browser.

It was at first. In the early days of Java, Netscape and Internet Explorer had their own embedded JVMs.

> But not C/C++ (or, now, Rust/Go), in which a lot of common code on which native-targeted code in other languages rely is written.

It does actually. Graal can run LLVM bytecode. Also, if you mean some specific C library used by everything, the Java ecosystem is absolutely huge and has the benefit of being almost completely written in Java with very little native code.

[re: what compiled to Java when applets lost vs what compiled to WASM today]

> > But not C/C++ (or, now, Rust/Go), in which a lot of common code on which native-targeted code in other languages rely is written

> It does actually.

It didn't when applets lost.

> Graal can run LLVM bytecode

GraalVM came around a long time after applets failed, so it isn't really relevant to assessing what capability applets had when they failed vs. what WASM has now.

SPIR is special purpose and can't be compared. JVM in the browser might be a similar comparison, but back then we didn't have the threats we have today. We now have closed hardware, closed operating system, and even closed browsers. The browsers are just extensions of these closed companies. We now more than ever need an open computing stack untethered from the underlying chips, devices, operating systems, etc.

It's already in all browsers.

Why not .NET? Surely Microsoft is better than Oracle

Surely being eaten by a shark is better than torn apart by a hyena.

Instead we picked getting mauled by a panther.

WASM has been at an inflection point for 5 years. Where are we at with garbage collection and DOM access?

> garbage collection

The draft WASM garbage collection proposal is partially implemented in Chromium, you can try it by enabling the enable-experimental-webassembly-features feature flag.

> DOM access

WebAssembly will never have direct access to the DOM, but at least with Rust the wasm-bindgen+web_sys crates make interacting with the DOM as simple as it is from JavaScript.

> WebAssembly will never have direct access to the DOM

I haven’t been paying much attention to WASM proposals for the last few years, but I thought that direct DOM access (avoiding JavaScript trampolining) was one of the driving goals of the reference type, interface type and garbage collection proposals. https://github.com/WebAssembly/proposals/issues/16 mentions “call Web APIs (passing primitives or DOM/GC/Web API objects) directly from WebAssembly without calling through JavaScript”, and WASM’s high level goals document <https://github.com/WebAssembly/design/blob/main/HighLevelGoa...> lists “access browser functionality through the same Web APIs that are accessible to JavaScript” (not “through JavaScript”, but “through the same APIs”). Am I misunderstanding things?

Assuming the GC proposal eventually reaches general availability, do you think the approach of the wasm-bindgen+web_sys crates you mention could serve as a model for garbage-collected languages wishing to serve as a replacement for JavaScript?

I know the original intention of WASM was to peacefully coexist with JavaScript, not replace it, but one can dream.

This is the question I want answered more than anything. If WASM can get these, there's no doubt that it will completely reshape the future of web development.

Yep, that was my thought too: WASM solves some problems well, but it isn't and can't be a replacement for general containers like Docker; fine for some specific workloads, but not a general solution. And I don't think everyone talking about WASM understands it's not just "faster JavaScript" but a totally different world where you have to, for example, BYO GC (or manage memory yourself if you like!).

GC is the job of the language, Rust doesn't need it

It does if it wants to interoperate with JS, unless you want to pin all memory when it's in the other runtime.

DOM access? why?

You lose the regular Linux process you are used to and the interaction you can have between processes: start new ones to offload a small task, pipe data, trace them with strace or debug them with gdb. I think this convenience also held back people from moving to Unikernels.

Security considerations aside, if you control the WASM runtime (for instance if WASM is used for a plugin/extension system) you can expose most of this functionality to the WASM code through your own runtime API.

Yet they are running containers on a top of type 1 hypervisor that they can only control over Web dashboards and cluster telemetry.

This article mentions inflection points, but doesn’t delve into clientside apps significantly (e.g. SPAs). How far away are we from transitioning from component based JavaScript frameworks like react, to something which compiles to wasm?

I doubt JavaScript will ever be fully replaced here. JavaScript is a pretty good language for these use cases, and it has a huge ecosystem.

But if you want to use a front-end framework that compiles to wasm then you can do that today. Rust, Go, C# and more all have libraries (only Rust really makes sense though, as the others involve shipping a huge runtime). Just don't expect it to be any faster than JS.

Agree with sibling comment - the JS ecosystem has a huge momentum behind it and probably isn't going away anytime soon.

On the web, Wasm has currently found the most success with compute-intensive applications, since the JS <-> Wasm bridge is still pretty expensive. There are already some Wasm-based frameworks like https://platform.uno/ that work on the web, but things like React/React-native and Flutter have a huge head start.

React is for connecting state to the DOM and managing the redraws. I don’t see how Wasm could ever replace React, if you want to deal with UI. It’s a different thing.

If we get to a point where interacting with browser DOM is comparably fast with Wasm, either through JS or with native APIs, then we could see Wasm replace React. That said, you're right that we probably won't see anything of that sort in the short term.

But how? React is a library to implement and structure your architecture in a certain way. Wasm is just a language. Wouldn’t you need a React-like framework in Wasm to get the same benefits?

Edit: I guess you mean you’d compile your React app to Wasm, no?

Yes you’re right - I somewhat conflated two different things.

More clearly: once Wasm interactions with the browser are fast, a new framework + Wasm could replace React + JS.

I am surprised there is no isolation scheme for binaries. WASM may look lite and fast compared to Docker, but it's still painfully slow compared to AVX or Neon enabled assembly.

We went from VMs to Containers. From Containers to WASM. Now, I guess, we just need to learn how to properly isolate simple binaries.

> [WASM is] still painfully slow compared to AVX or Neon enabled assembly.

There's an open proposal to add SIMD intrinsics to WASM.


The fixed width SIMD proposal was standardized a while ago and is available in quite a few runtimes.


Yeah, saw it, but never tried. Java, C# and other high-level languages have libraries for basic SIMD operations. It's a completely different world from manually invoking intrinsics in C or writing raw assembly. I am wondering where WASM will land.

Visually it's definitely not assembly. It actually looks more like Python than C. I guess my subjective assumptions were based entirely on looks. Will have to try it for something, and I'll definitely share afterwards :)

That's an issue from 2018 with no replies to it.

For current status ("phase 3" as that link says is maybe not obvious enough), there is a solid spec and multiple implementations, see the "Fixed-width SIMD" line here:


And here is an overview of this spec:


> Now, I guess, we just need to learn how to properly isolate simple binaries.

I think unikernels are appealing as a solution. Every program is "the operating system" running in something like Firecracker on a baremetal host.

There is a fixed-width SIMD extension though. It conforms to a common subset of instructions with similar semantics in SSE/AVX and the Neons.

> Wasm-native orchestrators will eventually build bridges that ease migration from or integration with Docker

That's a good point. And it's actually already happening (although not on the shared memory space yet as the post mentions).

Youki and crun (OCI/Docker runtimes) have already integrated Wasmer to enable running WebAssembly on their runtimes.



Another random reason WebAssembly is really cool is that it lets you run ML models way faster after page load than through the GPU (which can take a ton of time to load all the buffers into VRAM).

In contrast, webgpu Webassembly bindings have the potential to enable fast ML code execution everywhere. Both for browser and native use cases.

OpenGL ES 3.2 is already capable of that, but politics killed the support for compute shaders in WebGL.

Now we have to wait for WebGPU.

I'm afraid WASM is going to break the internet. There will be the real, now fully closed apps (silos, walled gardens, call them whatever you want) and the remnants of what the internet was, written in HTML, an observable, hackable text.

On the other hand... Java couldn't pull this off. Flash couldn't pull this off either. So we'll see.

Also, accessibility is going to take quite a hit if developers favor canvas UI libraries over (re)generating DOM elements.

>There’s good reason to believe that Wasm represents the future of containerization. Compared to Docker, it has 10-100x faster cold start times, has a smaller footprint, and uses a better-constrained capability-based security model. Making Wasm modules, as opposed to containers, the standard unit of compute and deployment would enable better scalability and security.

I don't see how these are "big" wins over containers. The services I run don't have to worry about cold start times. It takes much longer than starting a container for my services to be ready anyways. I also don't see how it has a smaller footprint. The application needs to be stored somewhere. While containers may only have coarse capability based security using namespaces I'm not sure having super fine grained capability based security would make it be worth switching over to wasm.

Cross-language interactions suck. We need WebAssembly components and good code generators for a critical mass of languages before people actually start to use Wasm across different languages.

Unpopular opinion: Users will eventually realize that the lowest common denominator between languages is ... a BYTE STREAM (e.g. JSON/CSV/HTML), or what I think of as shell / Unix / Web -style composition.

IDLs and code generators are useful in many limited domains (e.g. when you control both sides of the wire), but they bake in a lot of assumptions that people don't realize are language-specific.

e.g. Protobufs are very good for C++ <-> C++ communication, but Java and Python users seem to dislike them equally, and even Go users do too.

COM is probably better than what most people are proposing now -- it recognizes the problem is dynamic, rather than trying to create the leaky abstraction of "fake" static system. It's true that IDL is reinvented every 5 / 10 / 20 years. (Related recent story: https://news.ycombinator.com/item?id=30128048)

I expect WASM will get some kind of component system (if it doesn't already exist), but many apps will still need to fall back to something more general.


I'm writing about byte streams as a narrow waist of interoperability now, and this is the "lead in" review post: http://www.oilshell.org/blog/2021/12/review-arch.html

Even though disparate WebAssembly components can run in the same process (with memory potentially shared by the host), for something as wide as the "Web", the lowest common denominator is still the common text-based interchange formats we already have.

Related comment on WebAssembly: https://news.ycombinator.com/item?id=28581634

Programmers underestimate the degree to which languages and VMs are coupled, so I question whether WebAssembly is truly polyglot: GC requires a richer type system in the VM, and types create languages that are winners or losers. Losers are the language implementations that experience 2x-10x slowdowns.

> e.g. Protobufs are very good for C++ <-> C++ communication, but Java and Python users seem to dislike them equally, and even Go users do too.

Well, hilariously, when I looked at using it on a project, JSON outperformed protobuf in Python. JSON is implemented in C, Protobuf in Python, and the C decoder for a less efficient format won out.

(Now, technically, Protobuf has a C implementation, but at the time I was testing, it segfaulted reliably. Which is the problem with C…)

I also recall there were some severe problems with protobuf's ability to represent some types, but I don't remember what they are at this point. I thought it was something to do with sum types, but I'm looking at it now and it does have oneof, so IDK.

Yup, it is crazy how much optimization these "narrow waist" formats get, and it goes even further with simdjson and so forth [1].

The Python protobuf implementation has a somewhat checkered history (I used protobuf v1 and v2 for a long time, and reviewed v3 a tiny bit).

The type system issue is that protobufs to a large extent "replace" your language's types. It's essentially a language-independent type system. So that means you are limited to a lowest common denominator, and you have the issue of "winners" and "losers"... I would call Python somewhat of a "loser" in the protobuf world, i.e. it feels more second-class and is more of a compromise.

This doesn't mean that anybody did a bad job; it's just a fundamental issue with such IDLs. In contrast, JSON/XML/CSV are "data-first" and there are multiple ways of using them and parsing them. You can lazily parse all of them, DOM and SAX, for example, and you have push and pull parsers, etc. Protobufs have grown some of that but it wasn't the primary usage, and many people don't know about it.

[1] https://github.com/simdjson/simdjson
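To make the DOM-vs-streaming point above concrete, here is a small stdlib-only Python illustration. It uses XML because the standard library ships both parsing styles for it; the same "data-first formats admit multiple parsers" point applies to JSON:

```python
import io
import xml.etree.ElementTree as ET

doc = b"<items><item id='1'>a</item><item id='2'>b</item></items>"

# Tree style (DOM-like): parse the whole document up front, then navigate it.
root = ET.fromstring(doc)
ids = [el.get("id") for el in root.iter("item")]

# Streaming style (pull-like): consume parse events one at a time,
# without needing to keep the whole tree around.
texts = []
for event, el in ET.iterparse(io.BytesIO(doc), events=("end",)):
    if el.tag == "item":
        texts.append(el.text)

assert ids == ["1", "2"]
assert texts == ["a", "b"]
```

An IDL-generated binding typically gives you exactly one of these access patterns; a data-first format leaves the choice to each consumer.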

The Interface Types Proposal is Wasm's answer to what you're describing: https://github.com/WebAssembly/interface-types/blob/main/pro...

I always thought Java was the original protobuf library. Of the big three (PB, Avro, Thrift) only protobufs have a great modern RPC library included. I suspect more people start using PB because of gRPC. Unfortunately many people are still stuck with JSON over HTTP. I believe gRPC is more popular among systems where mobile clients dominate.

As a side note, if you expect to programmatically change your records (e.g. inject a new field on the server into a record received from a client) Avro is a much better choice. Avro also has a stronger schema migration story. But both are row-oriented formats with nearly identical IDLs anyway.

It should be possible to create a lower-level IDL that starts with byte streams and makes no assumptions about performance requirements and the transport layer.

Any language would be able to decode byte streams according to provided schema into intermediate formats (same semantics, different implementations) with a library, and then decode that significantly easier to understand data into application-specific formats.

Rust's Serde is almost there, but uses dynamic structure instead of using a schema that allows custom types.

I don't think it is unpopular at all. I started writing some emacs plugin in the past, then ported the whole thing to the browser, and since then I have a hard time thinking in any other terms as byte stream, even though I like to call it buffer.

Why not LLVM intermediate representation (IR)? Why do we need a whole new assembly language?

See "Why not just use LLVM bitcode as a binary format?" on the Wasm website FAQ:


Thank you. For a minute there I thought I asked a stupid question, but I should RTFM

I don't know

but you can generate LLVM IR and then use wasm-ld in order to generate WASM :P

LLVM IR is not an executable target, it is a compiler IR. It is not safe, machine-independent, or even specified.

I've heard of webassembly replacing containerization before. As an avid abuser of gitlab-runner: does this mean I'll eventually be able to run my pipelines using webassembly instead of docker? What might this look like?

Your pipeline processes would be compiled to wasm, and any syscalls they make would be serviced by the wasm runtime, which can apply whatever restrictions it wants to to contain what the process can do.

If your pipeline processes are not compiled to wasm, they could alternatively be run on an ELF loader + interpreter that is compiled to wasm and functions just like in the previous paragraph. Of course that'd likely be slow, just like running CI for a foreign arch in qemu is slow.
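To sketch what "serviced by the wasm runtime" means in practice: the guest has no ambient authority, so every effectful call goes through a host function that can enforce policy. A toy Python model of that deny-by-default shape (ToyRuntime, path_open, and the /sandbox path are all invented for illustration; real WASI is considerably more involved):

```python
class CapabilityError(PermissionError):
    pass

class ToyRuntime:
    """Toy model of a Wasm-style host (NOT a real runtime): guest code can
    only reach the outside world through host functions, and each host
    function checks capabilities granted at instantiation, denying by default."""

    def __init__(self, preopened_dirs=()):
        # WASI-like: only explicitly pre-opened directories are reachable.
        self.preopened = [d.rstrip("/") for d in preopened_dirs]

    def path_open(self, path: str) -> str:
        # Grant access only at or under a pre-opened directory.
        if not any(path == d or path.startswith(d + "/") for d in self.preopened):
            raise CapabilityError(f"no capability for {path}")
        return f"fd:{path}"  # stand-in for a real file handle

rt = ToyRuntime(preopened_dirs=["/sandbox"])
assert rt.path_open("/sandbox/data.txt") == "fd:/sandbox/data.txt"
try:
    rt.path_open("/etc/passwd")  # never granted, so the host refuses
except CapabilityError:
    pass
```

The contrast with containers: a container process starts with broad ambient authority that the kernel then restricts, whereas here the guest starts with nothing and the host hands out capabilities one by one.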

That does sound promising. It makes me wonder why, though? What does WASM do that containers don't?

edit: actually no, I answered my own question in my head: I can think of some applications for WASM that docker simply can't handle at the moment. ledger-cli, for one. I've wanted to include that in a flutter application for almost two years now. WASM seems like the perfect candidate to take care of this.

Great articles!

> the unification of Docker and Wasm containers will happen at the orchestration layer. In fact, it happens now. The integration with WasmEdge and K8s toolings has been done. Developers could use crun, CRI-O, containerd, KubeEdge, KIND, OpenYurt and K8s to start, manage, and orchestrate WasmEdge Apps. See more here: https://wasmedge.org/book/en/kubernetes.html

WebAssembly will run side by side with Docker using the same orchestration tools.

I’m waiting for a Docker container with a WASM runtime. Wait, maybe someone’s already done that…


PS - actually, originally I was being sarcastic, but there probably are some very good security use cases for it.

This is Liam, co-founder of wasmCloud, founder of Cosmonic, and co-chair of the CNCF Cloud Native Wasm Day.

I don't think this is actually as crazy as it sounds - when we designed the messaging around CNCF, container, and WebAssembly we deliberately went with a better together story.

Why? In a large enterprise you have huge systems that each represent a variety of concerns: security, compliance, governance, reputation, etc. These systems show up as stakeholders in software development, like gates in CI/CD.

Starting with something like wasmCloud inside a container today lets enterprises leverage their existing benefits while still achieving _many_ of the benefits of WebAssembly and wasmCloud. wasmCloud publishes Docker containers, Helm charts, and integrations with service meshes for that reason.

Start where your users are today and take them where you want to go.

Thanks for the comment, Liam. In honesty, I was only thinking about the simple security use cases, but your comment makes it obvious that this will become a huge marketplace very quickly. You are clearly much closer to the puck than I am !

Hey, it's a gigantic tent and there is room for all kinds of approaches. Wasm and wasmCloud both work great on their own and can go to a lot of places that k8s can't: non-Linux OSes, and places where we don't even think in terms of processes (i.e. the browser).

WebAssembly really gives us two degrees of portability: 1. CPU / OS / Execution portability; as standard way to package code. 2. A deny by default _security_ model that works the same across all of these different platforms.

You joke, but this is a thing: https://krustlet.dev/

I took OP's comment to mean having a wasm runtime in a container, not as a container; AFAICT, krustlet seems more like a replacement for Docker containers.

This is great; my hope is for wasm packaging and delivery to get more metadata and code-signing support. I say this because I fear potential security nightmares in the future. Imagine Log4j in wasm, except it is also in cars, IoT devices, corporate intranet sites, etc. It's easy to leave that problem to whoever maintains the app, but being able to revoke vulnerable or backdoored wasm apps/components would make everyone's lives easier, I think.

Just a stupid question: is it, or will it be, possible to use some language (Python, for instance) compiled to wasm to replace JavaScript for webpage scripting?

I'm not a full-time web developer, but I do sometimes build simple web apps with Python Flask for the backend and HTML/JS for the front end, sometimes with Socket.IO for communication.

For my little use case it would be amazing to have the same language on both sides. I'm certainly missing something here, but I guess it's allowed to dream.

Not a stupid question. This has already been done for numerous languages. Your search terms are "X in the Web browser" and "X compiled to wasm".

20 years ago,

> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.


I believe .NET was not open initially, and there were some patents involved. Add to that the general distrust of Micro$oft in the open source community (especially in 2001).

Sure, that was just one example, there are plenty of others to reach for since 1960's.

Chip8 for example.


Emscripten port on Adventure Game Studio was just merged. :)

Sick! I didn't know Emscripten used WebAssembly; I should have guessed by now they would have merged it or something.

Fredrik here, founder of Grafbase.

Grafbase is an edge-first data platform for developers. It will allow you to deploy globally fast backends using a Git workflow.

We're also betting big on server-side WASM.

We're launching the private beta in a few months. Sign up for early access if you're interested in building this with us: https://grafbase.com

I have it disabled along with WebGL, WebRTC, etc and refuse to use a browser that has WASM.

Is WASM still 32-bit based? (at least in browsers)

> Just as Docker could not replace virtual machines entirely

I keep seeing this. Docker in no way replaces virtual machines. Can someone explain?

Previously: "We want to deploy apps as isolated units we can have variable amounts of instances of, so we make VM images and deploy them to AWS/our VMWare cluster/...". Now: "..., so we make Docker containers and deploy them to AWS/our Kubernetes cluster/...". Hence, Docker replaced virtual machines in a way.

It replaced them in a role VMs were unsuited for to begin with (running isolated environments which don't need separately emulated hardware).

They weren't really "unsuited", just more than necessary. Hence why containers replaced them quite thoroughly. (And places like AWS have spent effort on making them low-overhead, e.g. the microVMs that run your Lambda functions.)

Prebaked VMs, sure. That never seemed like something that was used that much to deliver software.

Is escaping WASM container theoretically harder or easier?

I would like to see a time line of technologies making computers considerably slower.

> “near-native performance”. What this actually means is that WebAssembly is almost always faster than JavaScript

Nice try! That is not how I look at it.

Maybe with the explosion of IoT we will see it explode.
