This is not quite right. The bulk of Figma rendering is Figma-custom GPU code. It's true that Figma uses Skia, but only for some specific graphics algorithms in Skia, not as a general purpose rendering library.
Source: I work on this code.
My team and I are currently working on Unreal Engine WASM support, with WebGPU integration on the way. Personally, I believe native games on the web are going to disrupt Steam and the app stores and enable a whole new distribution channel for developers, especially indies. No 30% cut, works on any device with a browser.
1. Steam provides a system for user accounts and profiles
2. Steam handles payment securely for both buyers and sellers
3. Steam handles social networks/friend list/multiplayer
4. Steam allows you to have all your games in one place
5. Steam is a platform for reviews
6. Steam has its own internal economy/marketplace
He's addressing the problem with your hypothesis that "native games on the web will disrupt Steam" by pointing out that you're comparing apples (a technology platform) and oranges (a distribution channel). Steam doesn't exist because games don't use WASM.
I can't imagine I'd use Steam much after that. (And I guess facebook would do the same.)
As for payments, I already pay mostly through Google pay or through paypal. Steam isn't adding a huge amount of value.
Why do you expect your Unreal-to-WASM product to succeed when Epic Games can simply build your entire product as a feature of the engine? You are building on their platform and not an open web stack, after all.
Despite having an open web, most people have elected to use platforms. At some point, people will trade revenue for simplicity.
This appears to be a 3rd party attempt to bring UE4/5 to the web.
I'd love to hear OP's take on how they think they can outpace or complement Epic on their own internal implementation. At face value I don't see how this isn't a f.lux vs. Night Shift type situation, where the official feature destroyed the market for third parties.
8. Has a platform for modding (Workshop)
9. Has sales every few months
10. Has festivals that give new games exposure to a big audience
The list goes on... It's always better to have more competition, but anyone trying to disrupt Steam has several big challenges ahead, not just running games on the browser. I imagine Steam could start offering games in WASM as well.
Problems with *traditional* games via WASM/WebGPU in the browser
* Many games are 10-100GB+ in size. The browser provides no good way to store this data for your game, and if the user still has to wait 15 minutes to multiple hours for the download, you gain no advantage. Further, browsers have a balance to keep between letting any site put gigabytes of data on your machine and not doing so. And on top of that, the browser provides no way to prevent losing the data: the user clicks "refresh" or something similar, and now the user has to re-do that 10-100GB download.
* Browsers have to work around driver bugs, and it takes time to fix them and get the fixes through the release cycle. For a native game, if a new driver comes out that breaks something, you can try to work around it immediately. That's harder on browsers. Your game can try, but it's a moving target.
* Browsers change things that affect perf more often than native platforms do. Today `for (i = 0; i < N; i++)` is faster than `for...of`; tomorrow `for...of` is faster (not a real example). My point is, in my experience, it's much easier to optimize for native since you're writing native code; in WASM you're not, and native also puts you closer to the metal. Today video -> texture is fast, tomorrow it's slow, the next day audio is no longer allowed without a click, etc. I don't have stats on which changes more, browser APIs or native, but my gut says I've had to change browser content often just to keep it running.
* Running games on any device is mostly fiction. Users range from 3090s to 7-year-old Intels to 7-year-old Androids, from touch screens to mouse and keyboard, and with other differing limits (no fullscreen on iOS, no pointer lock, ...). Depending on your game, that's half your market.
In other words, IMO, games you generally find on Steam are not a good fit for the browser.
On the other hand, you could design games that load fast, start up fast, possibly stream data if they need more, etc and you could possibly make some hit games. Maybe even some of the biggest hit games ever. Remember when Farmville was #1?
Still, my feeling is that Unreal Engine in particular is not a good match for making web-friendly games. Most game devs won't pay attention to what it would take to make a web-friendly game. Instead they'll just follow the patterns for native games, pick "Export to Web", and basically put out a very poor experience on the web.
For web games it doesn't really matter how big the entire game is, only how much data it consumes per second of gameplay. The local storage is just another caching layer, not meant to hold the entire game but just the data that's most likely needed next. As long as the user's average bandwidth is higher than what the game needs to keep the 'disc cache' filled, it's fine.
Of course this means the entire asset-streaming, and probably the whole game needs to be designed around this 'number of bytes per second to be presented to the user' limitation, but that's not a new thing. In the past, games were designed around CD-drive bandwidth and seek times.
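The "bytes per second" budget idea above can be sketched in a few lines. This is an illustrative toy only: the class name, numbers, and API are made up for this comment and not taken from any real engine.

```javascript
// Toy sketch of a per-second asset-streaming budget. The local cache is
// treated as a window over the stream, not as storage for the whole game.
class StreamingBudget {
  constructor(bytesPerSecond) {
    this.bytesPerSecond = bytesPerSecond; // sustained bandwidth we assume the player has
    this.scheduled = 0;                   // bytes already queued in the current second
  }

  // Returns true if the asset fits in this second's bandwidth budget;
  // the caller would defer the fetch otherwise.
  tryScheduleFetch(assetBytes) {
    if (this.scheduled + assetBytes > this.bytesPerSecond) return false;
    this.scheduled += assetBytes;
    return true;
  }

  // Called once per second of game time to reset the window.
  tick() {
    this.scheduled = 0;
  }
}

// Example: a 2 MB/s budget fits two 800 KB textures in one second, but not a third.
const budget = new StreamingBudget(2 * 1024 * 1024);
const tex = 800 * 1024;
const results = [
  budget.tryScheduleFetch(tex),
  budget.tryScheduleFetch(tex),
  budget.tryScheduleFetch(tex),
];
```

A real system would also prioritize by "most likely needed next", as the comment says, but the hard constraint is the same: the per-second byte budget.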
As to your other points, I mostly agree: the browser is too unstable a platform, and the people building the web APIs (other than WASM and WebGL/WebGPU) usually don't care much about games.
But there's a huge space below what's called "AAA" which still can make absurd amounts of money (and provide absurd amounts of fun), and for which the tech limitations in browsers are okay-ish. Those games need to be designed from the ground up for running in the browser, porting existing modern games will mostly not work.
It took about 10 years, between 1.0 and 2.0, for WebGL to reach broader adoption.
And in any case they are hardly available on game consoles.
No, it replaces the need to install something on your computer.
For trying out an unknown game, I'd rather have a sandboxed web application than have to install something.
And most people who think more in terms of convenience will prefer the no-install solution, too.
Installing new software is a big hurdle for many non-technical people. I have witnessed many children's tears because their parents did not want some game to potentially mess up their computers.
(and some games actually do that)
Erm. Yes, but wasn't your point that indie developers can market directly (without those platforms) with ease right now?
"WebAssembly only replaces the auto-updater part. The actual value that Steam adds is discovery and trust"
Using WebAssembly brings in trust. Worried parents do not have to approve installing yet another unknown app.
(also there are more places to buy DRM-free games than only GOG - e.g. itch.io, Zoom Platform, GamersGate and Humble Store to name a few - though GOG has most of the games)
That said, I'm not super familiar with the world of game development and distribution. While web-based games have great distribution, the "best" technology or product doesn't necessarily win. Steam has massive power as an incumbent in the space, it does provide a useful service by facilitating discoverability for indie games, and there are definitely some scenarios where web-native games don't make sense.
Hard to disrupt Steam if they're taking 10-15%, and I assume Valve still makes a mint at that price point because Steam is ~all they make.
Can I ask why you went that route, instead of just using Skia?
Further, I have the impression that Figma has pretty specific and complex graphics needs. For one example of the sort of thing I mean, we decode image data in JS and put it into GL textures without the Wasm side ever seeing the pixel data, so that the browser can (potentially) decode on a background thread and uncompressed pixel data doesn't contribute to Wasm heap memory usage. However, I don't know Skia well, so it's possible Skia has some mechanism to composite GL textures it's not responsible for too.
I'm pretty sure this can be done in Skia from C++, so it should be possible with WASM. I think it's possible to create a Skia surface that wraps an existing GL texture.
Yeah, nah: https://github.com/zandaqo/iswasmfast
JS is about 10x faster than wasm in simple linear regression, and 30% faster in Levenshtein distance calculation.
If the running code is short enough then that copy might easily make the wasm version much slower. That is indeed a known downside of wasm (calls to JS are somewhat slow, and copying of data even more so - wasm shines when you can avoid those things, which certainly limits where it makes sense!).
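To make that copy overhead concrete, here is a minimal sketch. No real module is involved; a bare `WebAssembly.Memory` stands in for a module's linear memory, and the offset is arbitrary (a real module would export an allocator to pick it).

```javascript
// Data that starts on the JS side must be copied byte-for-byte into
// wasm linear memory before wasm code can touch it - and copied back
// out afterwards. For short-running kernels these two copies can
// dominate the total time.

const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const wasmHeap = new Uint8Array(memory.buffer);

// Data produced on the JS side...
const jsData = Uint8Array.from([1, 2, 3, 4]);

// ...copied in at some offset (16 is arbitrary for this sketch).
const ptr = 16;
wasmHeap.set(jsData, ptr);

// Reading the result back is another copy in the other direction.
const roundTripped = wasmHeap.slice(ptr, ptr + jsData.length);
```

This is why the advice in the thread is to keep long-lived data on one side of the boundary instead of shuttling it across per call.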
If it's not that, then a 10x difference suggests you are running into some kind of a VM bug or limitation.
WASM is _predictably_ performant, being less subject to the vicissitudes of the JIT; it offers better startup performance and allows reuse of existing C++/Rust code, but it's not a cure-all performance solution.
The author also cites real world apps that have switched to WASM and seen big performance gains (Figma and 1Password), which is much more compelling than the benchmarks you shared.
I made that comment in good faith pointing out that the author was not saying WASM is a catch all performance booster.
But I could get on board with your claim here that “almost always” is disingenuous (which wasn’t what you were saying in the previous comment).
PSPDFKit is another example that added WASM to great success regarding rendering and searching. And uBlock Origin as well with its massive rule lists.
But even inside something like Node.js or the browser, most of these sorts of benchmark attempts are of converting only small parts of a system, so that message passing or format shifting ends up a comparatively large fraction of the work being done. If you migrate as much as possible into WASM and treat JS as the foreign side rather than WASM, you will tend to find that there’s much less serialisation/deserialisation overhead in the performance-sensitive parts, because the data they needed was on the WASM side from the start, rather than having to be copied in from the JS side. (This is, of course, a simplifying generalisation.) A somewhat more conservative strategy is to still treat JS as the native side as WASM as foreign, but expand the WASM as far as necessary to reach a point where very little data interchange will be needed. What that is may vary enormously, and such a boundary doesn’t always exist.
By the way, if someone really wants to dig into the issue, I recommend an article from one of the V8 authors where he dissects one such WASM success story: https://mrale.ph/blog/2018/02/03/maybe-you-dont-need-rust-to...
If we're going to be pedantic and nit-picky about claims, please use more precise counterclaims. It only takes a few extra words to clearly state you're talking about WASM + JS, which is an important distinction since many† people here are interested in WASM-only, without the JS (with the alternative in their mind being JS-only, without the WASM).
†In the spirit of pedantry, I mean many. Not all. Maybe not even a majority. But reasonably many.
It is still more honest to regurgitate the claim that Wasm is faster than JS than to try and argue that "JS is about 10x faster than wasm in simple linear regression."
- how compute-heavy your workload is
- how long it runs
- how much you need to pass data back and forth
That last point is hard to overstate. It’s such an overhead that “notoriously slow” (however outdated the notoriety) JS operations like structured clone run circles around highly optimized native-bridge/WASM data transport solutions.
It’s such an overhead that—for short bursty interop around values that can’t meaningfully benefit from being passed through—you have to write JS that might as well be native integer wrangling but has to be written in JS to reduce the cross-talk. That covers pretty much any workload that involves passing around JS functions as values, or preserving inheritance chains, or dynamic usage of functions generally.
I’m excited about WASM and other JS<->compiled interop, it has a bright future. But unless and until JS as a standard and JS VMs at runtime specifically provide ways to optimize crossing that boundary, the vast majority of already hard to optimize JS use cases are either better solved natively in general with minimal interop (where WASM et al will shine) or better solved by optimizing the JS implementation where the use case necessarily deals with JS functions.
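For reference, the structured clone path mentioned above looks like this. It runs in modern browsers and in Node 17+, where `structuredClone` is a global; the object shape is just an example for this comment.

```javascript
// structuredClone handles nesting, typed arrays, Maps, and even cycles
// natively - the cases where hand-rolled JS<->WASM transport (or
// JSON.stringify) gets slow or simply fails.

const original = {
  name: "scene",
  ids: new Uint16Array([1, 2, 3]),
  meta: new Map([["lod", 2]]),
};
original.self = original; // a cycle; JSON.stringify would throw here

const copy = structuredClone(original);
copy.ids[0] = 99; // the clone is deep: mutating it leaves the original intact
```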
It is indeed way slower than JS (and yes, I tried to care about performance and understand what was going on, removed any data exchange with the browser, and even ran the tests without any dev tools opened - just in case).
Not even mentioning the fact that to use it, you need to include and hardcode a dubious JS file that does not even integrate properly with npm or any build system. And that the standard wasm way to exchange variables and events is not supported (instead you are forced to use some automagic and slow DOM bindings).
The performance with alternative compilers (tinygo) was a bit better on the performance side, but the integration was still disappointing and unsatisfying.
WASM (145,086 ops/sec)
Your Benchmarks May Vary.
I'm really happy to see more people bringing their attention to the Wasm ecosystem. There are tons of opportunities on this space.
Regarding WAPM and how its development had become a bit dormant, expect news about it soon. Can't wait to share what we have been working on!
"Oh but it compiles once and then runs everywhere!!!!" ceased to be an argument over 15 years ago, when it became clear that the "backend" is synonymous with Linux and almost every machine has the same architecture. Plus, the time required by the compilation step is not prohibitive in a CI/CD pipeline.
JS, by virtue of being properly sandboxed, seems to be in a position to prevent coupling outside expected interfaces, and hence reduces the scope for compatibility creep. Maybe? ...
That's mostly a result of everything and everyone having a different opinion where python modules should live in the filesystem, and the default behavior of a lot of package managing software to replace python installations between versions instead of shoving them into different packages and be done with it.
> The advantage of a VM or vm-like container is exactly that you don't need to care about that stuff as much.
The same advantage exists for compiled languages, provided that dependencies are either statically compiled, unambiguously listed, or shipped with the software as a package.
A similar effect is achieved in Python Projects by virtualenving everything, essentially stuffing all dependencies, including the correct version of the interpreter, in one location as a single package.
The "provided that" is exactly what I'm sceptical of - if the language is general use, not everyone will play ball unless the language/language-derivative is for the explicit purpose of maintaining this compatibility, and this is somewhat enforced. Python packaging would also be great if it shipped with an unambiguous and bulletproof/portable solution, but it didn't, and then the wheel was (partially) reinvented several times.
This is before we even consider bad-actors purposefully attacking (python packages are also susceptible to this I believe).
- it depended on an installation outside the browser, poorly versioned
- and a plugin too, which was extra noise for the user, and not nearly so pain-free as Flash's was
- it was not, under any circumstances, performant (this was before HotSpot)
- AWT was truly, thoroughly, god awful (this was before Swing)
Finally, it was caught between two realms. Clunkier than JS and more difficult to work with than Flash, it tried to do both and ended up doing neither.
So, why should WASM succeed where Java failed? Well, for pretty much every actual reason Java failed - there's really no comparison between them. WASM is implemented in-browser (and the runtime is very small), it's got a ~1.5x slowdown compared to the 2.5x-3x of the big clunkers like Java and C#, and most importantly it knows its place: low-level computation. It does not try to do GUI, but leaves that up to the environment; it does not try to do a high-level object model, but leaves that up to the language being compiled for it. What reason do you see that Java failed that's also applicable to WASM?
 Yes, there is a pun buried in there.
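To illustrate how small "low-level computation" can get: a complete wasm module fits in a few dozen bytes. This is the classic hand-assembled module exporting a single (i32, i32) -> i32 add function, runnable from any JS host; the byte layout follows the core spec, and nothing here depends on any framework.

```javascript
// A complete, valid WebAssembly module, written out byte by byte.
// No GUI, no object model, no GC - just one exported function.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
// exports.add(2, 3) is now a callable wasm function.
```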
As a consequence, users won't develop the "yikes, Java" reaction.
wasm got the chance to slowly creep into stacks.
Like, 2 of your points were simply politics, where JS just happened to get chosen, while performance got rapidly faster in the following years. If anything, Java was well ahead of its time and the surrounding environment was not yet ready.
It was performant enough for one of the most popular games of all time, Minecraft, which was originally a game in a Java applet.
But for the others, java is insanely fast today, with pretty much the state of the art GC it has. In practice, a really significant chunk of all important server backends run on the JVM.
There are two versions of Minecraft now.
The main reason Minecraft Java is still alive is the hackability and the resulting extensive mod ecosystem.
 - Android Java isn't really the same Java as in Minecraft.
Webassembly is a small stack machine based on an open standard.
It is built bottom-up, with a very small core. Additional features like SIMD, garbage collection, POSIX-style system interfaces (WASI), linking, ... are built as optional extensions based on real-world feedback.
There already are a multitude of different implementations. (the three browsers, the Wasm3 interpreter, wasmtime, Wasmer, ... )
Java, in contrast, was and is a huge runtime, not based on a standard, and has extremely complex semantics, many of them tied to a single language model.
Wasm is already seeing adoption across a wide range of domains.
I do believe the potential for WASM to succeed where Java failed is wide open.
What do you think it is? https://docs.oracle.com/javase/specs/jvms/se7/html/
And no, Java is absolutely not a huge runtime - it is a simple stack-based VM with garbage collection, made originally for goddamn TV set-top boxes, so it has an absolutely small instruction set. It just happens to be so good, for a multitude of reasons, that the biggest implementation, OpenJDK, basically runs the backend of a great deal of all significant web applications (and yes, Java has an absolutely insane number of implementations; your claim of wasm having multiple implementations is just ridiculous compared to the number of JVMs).
(Otherwise you're spot on and the GP clearly has no idea about Java)
Yes, it's also a stack based VM that is relatively small, at least superficially.
It's really not that simple though. Things like GC, exception handling, a whole class model with constructors and methods, ... All of that brings countless implicit requirements that are only passingly mentioned in the spec. Plus the whole host of complexity needed in practice for supporting the Java Platform.
Core Webassembly has nothing of that.
But to be fair: Webassembly is also growing more and more complex (SIMD, reference types, GC, interface types, module linking, tail calls, exceptions, ...)
Can the JVM be what Webassembly is? Or the CLR, for that matter? Technically: of course.
My point was really more about the ecosystem as a whole. JVMs were never really intended as a general-purpose, multi/cross-language ecosystem for generic computation, and that is noticeable everywhere: in the build tooling, in the libraries, in the development process and focus, in the lack of AOT until recently (in open-source implementations), ...
Sure, that's changing now with GraalVM and a focus on cross language support. There are some extremely cool things happening there.
I'd still much rather bet on an ecosystem based on an open standard with lots of interest from different parties.
In fact, GraalVM already supports Webassembly, including WASI! So in certain sense Wasm is already more general.
The GCC project had a now-discontinued AOT Java compiler (GCJ) two decades ago.
Java is as open as it gets, with a standard that can be changed through a community process. I feel the web is much less open, with behemoths like Google, Apple, and Microsoft having basically infinite veto powers.
> In fact, GraalVM already supports Webassembly, including WASI! So in certain sense Wasm is already more general.
And teavm can run java byte code both in js as well as wasm :D
Isn't the JCP also controlled by a handful of corporations? Seems pretty much the same to me.
> And teavm can run java byte code both in js as well as wasm
Hah! I didn't know TeaVM had a WASM target, that's actually great to know.
Was that ever really production ready?
My single use for docker is to containerize / isolate, not to run across architectures.
I get the in-browser optimized / compiled code. That makes sense.
A few benefits why it is even useful in a server context:
You can distribute a single (small!) artifact and run it everywhere: on a developer machine, on Windows, on an x86 server, on an ARM server, ... Right now you have to build a separate Docker image for each architecture. And Docker is really a second-class citizen on Windows.
Docker images have a huge dependency: an entire operating system syscall API and an entire userspace (with libraries, binaries, a file system, ...) The surface for a Wasm module is much smaller.
Building a Docker image is a somewhat redundant exercise of picking a base image, figuring out the dependencies, keeping it up to date, ... None of that should be necessary.
Security is another benefit. The vulnerability exposure of a Wasm module is much lower than that of an entire OS + userspace sandbox.
Wasm is designed around isolation. In addition, the interface types proposal + WASI is pushing capability based security that works by passing around capabilities, which is a pretty great model.
I outlined some more benefits here a few days ago: https://news.ycombinator.com/item?id=30020121&p=2#30020964
I think a big reason is that it was pushed by Mozilla, but they laid off all the related employees, and now no-one is really allocating resources to push it forward.
Plus it's still blocked by related proposal like interface types.
What is that wasm project you're working on, if it has been publicly announced / released? It sounds very similar to Microsoft's Krustlet.dev and Suborbital's Atmo: https://github.com/suborbital/atmo
If enough languages get official or high quality wasm targets, it has a bright future.
Sadly progress has stalled a little bit here, partially due to very slow progress on some features important for higher level languages.
But we'll see how things develop.
In case of WASM I'm curious specifically about cross-language portability. We have learned from the JVM that while technically possible it may not be a popular choice. Java/Kotlin/Scala ecosystems diverged quickly.
So WASM could become a browser technology the way JVM became a strictly server-side thing. Or allow backend developers build frontend applications in a language they are used to.
On a related note, there's something funny about everyone stacking on top of each other. At some point Java will be probably compiled to WASM. You already can go in the other direction with https://www.graalvm.org/22.0/reference-manual/wasm/ . And GraalVM itself is designed to support a wide range of programming languages.
"near-native performance" yeah a few seconds per hour. With all the jit and garbage collection, it's way too unreliable for anything that needs real-time performance and more than a pittance of CPU power.
Also, there's a canyon of difference between proper TensorFlow and its browser lite variant. Like the wasm backend chokes on 256x256 px background segmentation while xnnpack easily handles 200fps on CPU only.
For everything that can be inefficient, emscripten was already good enough. For stuff that needs efficiency, Wasm is not reliable enough yet.
WebAssembly doesn't have a GC. It is currently in the works so WASM can support other languages, such as Python, more natively. As for the JIT, WASM in most implementations has only 2 levels of compilation.
> Also, there's a canyon of difference between proper TensorFlow and its browser lite variant. Like the wasm backend chokes on 256x256 px background segmentation while xnnpack easily handles 200fps on CPU only.
Native TensorFlow has GPU access; WASM does not. You are seeing the difference between all-CPU and CPU + GPU.
> For everything that can be inefficient, emscripten was already good enough.
WASM is the spiritual successor to emscripten's asm.js. In fact, emscripten emits WASM. It's a little strange talking about it as if it were something different.
I feel like WASM has just sort of rotted. It feels like so many advancements from threads to GC have been stuck in committee for years now.
I feel the same way about WebGPU. Just doesn't feel like anything is getting done when it comes to new compute standards.
I've been noticing this for years too and I've concluded that a significant driver is the fact that certain major corporations (eg Apple, Google, MSFT) who sit on central web standards committees feel their app store and operating system businesses would be threatened if the full potential of Web Assembly gets in standards, implemented and widely deployed.
Perhaps everything always bogging down in a way that delays or precludes fully unleashing these disruptive capabilities may not be entirely accidental. To be clear, I'm not asserting there's an intentional, premeditated conspiracy afoot. It's always a possibility, of course, but it's also possible that the various stakeholder business units inside these corporations are themselves conflicted, causing internal battles leading to confusion, delay, and mixed signals on standards bodies. Sadly, that can end up having very much the same net effect as intentional sabotage.
Check this month WebGPU session, it is always the same "we hope the community will get together", "it is past MVP 1.0", "no experience with native GPGPU debugging", "you need to put up with what we have, it will improve",....
Or just make use of a middleware targeting native APIs and move on.
As for TensorFlow, no, I deliberately compared their CPU-only XNNPACK backend against Wasm, so neither used the GPU. But obviously SSE3 and AVX also help a lot.
Emscripten is a compiler while Wasm is an execution backend. I think it makes sense to treat them as separate because I wanted to highlight that Wasm didn't add anything for my use cases (yet).
Of course, I expect things to get better when the technology matures, but we're not there yet.
Edit: Actually, as soon as you use Wasm with the regular APIs to access the Webcam or use WebGL, they do GC again. That's why Wasm+WebGL has unreliable performance where C+GL is real-time capable.
Plus Wasm obviously can't use zero copy paths when accessing Webcam data on the GPU, whereas I can use DMA with C, UVC, and GL. Copying a 4K frame between buffer formats 2x60 times per second also adds up.
> But obviously SSE3 and AVX also help a lot.
Yeah, probably the case then. I'd have hoped that wasm would do a better job there, but then it'll depend on the browser/runtime.
It's also needed for giving host objects to the wasm program since the host objects would be managed by the host GC.
The question about "Should WASM do this" is really all about what WASM should/could be. Is WASM going to be a universal language target? Should it support more than low level languages?
And yes, WASM exposes malloc and free.
No it doesn't. Unless you're referring to WASI, but WASI is not WASM, it's just a common API which modules can call when running on WASI-compliant WASM runtimes.
All WASM itself has is linear memory, and that is not exposed via malloc/free, but via lower level instructions to set bytes on a page and grow when necessary (but not "free"). See the available instructions in the spec.
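The host-side view makes this concrete. The snippet below uses the JS API, whose `Memory.grow` mirrors the `memory.grow` instruction; the 64 KiB page size comes from the spec.

```javascript
// Wasm linear memory grows in whole 64 KiB pages and has no
// instruction to shrink or free.

const mem = new WebAssembly.Memory({ initial: 1, maximum: 4 });
const before = mem.buffer.byteLength; // 1 page = 65536 bytes

mem.grow(2);                          // the JS counterpart of memory.grow
const after = mem.buffer.byteLength;  // now 3 pages

// Any malloc/free a module appears to offer is implemented *inside*
// this flat buffer by a compiled allocator (e.g. the one Emscripten
// links in), not by wasm itself.
```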
Then they realized that web assembly could solve world peace and got to work on that.
Today I am still unable to create a div from a wasm compiled C application.
- yes, yes, I know interface types are cool and yes proposals take a while but please, I just want to write better web apps.
Given that wasm will need to be importable as a script src and have a robust threading model before web applications can compete with native applications on performance, I suspect I won't see it usable in my lifetime.
So yes, WebAssembly has potential and is almost without an alternative for in-browser apps that do some heavy compute. But I am less convinced about server-side. Time and again, we’re told how great portability across architectures supposedly is. And time and again, the only cloud architecture that matters is x86-64. Maybe ARM or even RISC-V in some cases, but let’s not fool ourselves about these niche platforms.
Just think about it, you even mentioned serverless cloud apps. Are these cloud apps going to run inside a browser and therefore use Web Workers for threading? The answer is obviously no.
WASM itself merely provides shared memory and instructions needed in multithreaded environments like i32.atomic.rmw.cmpxchg and i32.atomic.store, but it doesn't stipulate how threads are created.
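The JS equivalents of those instructions show the split: the atomics operate on shared memory, but nothing in them creates a thread. Thread creation is the embedder's job (Web Workers in browsers, worker_threads in Node). A minimal single-threaded sketch of the operations themselves:

```javascript
// Atomics.store and Atomics.compareExchange on a SharedArrayBuffer are
// the JS-side counterparts of wasm's i32.atomic.store and
// i32.atomic.rmw.cmpxchg on shared linear memory.

const shared = new SharedArrayBuffer(4);
const view = new Int32Array(shared);

Atomics.store(view, 0, 7);                            // like i32.atomic.store
const prev = Atomics.compareExchange(view, 0, 7, 42); // like i32.atomic.rmw.cmpxchg
// prev holds the old value (7); the cell now holds 42.
```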
I don't blame you but lots of people confuse WASM itself with the browsers' implementation of WASM (which, among other things, decides which functions are imported into WASM and how they work). But to me, the most exciting future of WASM is outside the browser. So we must clearly delineate features of WASM from the browser-specific bits.
But anyway, I think you confuse my point: the very fact that threading isn't fully covered by the core wasm programming model, but relies on a "hybrid" approach where the embedder needs to plug in those capabilities is a problem. Before wasm, I used Google's PNaCl quite heavily, which didn't suffer this problem and was fully self-contained.
Does any assembly language have a concept of threading?
Sure it is, but let me, a backend developer, ask a question:
Why would I want to use WASM for anything running on my server, when I can just write it in Golang, and get actually native performance (not "near-native"), plus real GC, on top of an already thriving ecosystem of libraries and tools?
And lets replace Golang with Rust, C, C++, or Java, or even with a more dynamic language like Python or node if its not performance-critical...the question remains.
My point is, why would I use WASM for something other than what it was originally designed for (running things in the browser), when I already have a big collection of really powerful, established, stable, well supported backend Languages?
What does WASM bring to the backend-table that the current tools cannot provide, or provide badly?
Yes actually. CloudFlare and several other cloud providers are trying to push towards V8 based sandboxing on the edge instead of the node-in-KVM model used by firecracker because the former is lighter and easier to deploy.
Heavier computation is currently being done with WebGL shaders (eg Google Meet background swap, TensorFlow tfjs).
Re: in-browser heavy-computation: Where can I learn more about these patterns to get started? Books / blogs? Thx.
Rereading the blog post, they do seem to use WebAssembly as well there and the abovementioned SIMD feature (through XNNPACK).
Java applets were things that were slow to load and didn't interface well with the rest of the web. People can talk about how you could technically connect things until they're blue in the face, but Java didn't provide a good experience with HTML. It was its own thing just like Adobe Flash.
I think that WASM will succeed where others haven't because it has been designed by the parties you need to get on board to make it succeed. Java wanted to create its own little world against the wishes of the vast majority of developers and users. Flash created its own little world against the wishes of most developers (though most users didn't mind). WASM feels like a neutral target that most of the community can get behind.
While WASM might not be perfect or anything, it does seem to unite a lot of different communities. It doesn't feel like Sun is coming along and imposing Java applets on everyone so that they can better sell Java to enterprises. It doesn't feel like Adobe trying to lock developers into expensive Flash tools. It doesn't feel like Microsoft saying, "Silverlight is our Flash! You should use that!" It's something that engineers from lots of different communities are supportive of and that works with the web rather than trying to usurp it.
Happy to support an argument that removing Oracle's influence is worth reinventing an entire runtime for.
Oh, and of course even WebAssembly suffers from browser fragmentation. Not only will performance differ depending on whether you run your wasm module in Chrome, Safari or Firefox; these browsers also don't implement the same feature set (atop the basic wasm MVP). Safari, for example, doesn't have SIMD. Other important additions (like tail calls, garbage collection and others) are also not yet universally supported.
For the sake of WebAssembly I do hope that they’ve learned their lessons from why Java applets (and Silverlight, Flash, PNaCl) all failed.
We did it more than 20 years ago :) A Java applet could directly call the native DOM API of the owning browser window (Java applet -> Java DOM API, implemented via JNI in our plugin -> native XPCOM API of Mozilla, including the DOM API), and likewise a Java application could embed the Mozilla Gecko browser engine and call the native DOM API of that embedded browser:
They're working on this. It's a tough problem, because someone has to own the resources.
I don't think browser fragmentation is an issue for the same reason that CPU instruction set fragmentation isn't an issue: you can always compile to the lowest common denominator or ensure you are only running code segments that the target platform supports using runtime checks, just the same way we do it in native applications today.
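This runtime-check approach is, as far as I know, what the wasm-feature-detect library does: hand WebAssembly.validate a few-byte module that uses the feature and branch on the result. A sketch of the mechanism, using a trivial always-valid module rather than real SIMD probe bytes:

```javascript
// WebAssembly.validate checks a byte buffer without instantiating it,
// so tiny "probe" modules can detect engine support for an extension.
// Here we just show the mechanism with an empty (always valid) module
// and an obviously invalid one.
const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]); // "\0asm" + version 1
const garbage = new Uint8Array([1, 2, 3, 4]);

console.log(WebAssembly.validate(emptyModule)); // prints true
console.log(WebAssembly.validate(garbage));     // prints false
```

A real probe would encode a module containing, say, a v128 instruction; validate returns false in engines that don't support the proposal, and the app then loads a scalar build instead.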
Eh... from what I can tell from a mostly casual glance at it (enough for a half-hearted half-complete attempt at writing my own VM for it), I'd say that it is almost a good idea. A lot of the base concepts show a lot of potential, but I feel like the implementation is full design-by-committee derp. Why are there only 32 and 64 bit types? Why are there parametric instructions at all? Why is there no instruction for shrinking or freeing a memory? Why does binary format layer 0 use LEB128 instead of a fixed-width number? Just a bunch of weird stuff like that.
But I'm hardly an expert on JIT or VM implementations, so I could be way off base here.
At least Java was also designed to run on not-the-web.
* Already widely supported as a compilation target, and not just for special languages targeting the platform
Java also enjoys multiple HLLs that target its bytecode (Scala, Kotlin, Groovy, Jython, etc) and there's nothing in the runtime capabilities that tie it to any kind of constrained platform (threading, IO, async)
Don't get me wrong, I'm really hoping that WASM succeeds, but I'm concerned that the same slam dunks we expected back in 1996 will end up getting posterized (again) by DOM/JS.
No, it wasn't.
> (i.e. Applets).
Applets required (1) installing Java on the system, and (2) installing a plugin for Java in the browser. The capacity to run them was not built in natively to any major browser.
> However, it lost, and seemingly removed with extreme prejudice with no nods to backwards compatibility.
It progressively lost to (1) other plug-in based tech (Flash), and (2) expansion and optimization of the web platform to make plug-in based tech less needed while the security problems of the model became more visible, and finally (3) the rising importance of web browsers that didn't support plugins (which are now dominant even on desktop).
> Java also enjoys multiple HLLs that target its bytecode
But not C/C++ (or, now, Rust/Go), in which a lot of common code on which native-targeted code in other languages rely is written. WASM was specifically designed with being a target for C/C++, and those and languages like Rust and Go already target it.
It was at first. In the early days of Java, Netscape and Internet Explorer had their own embedded JVMs.
It does actually. Graal can run LLVM bitcode. Also, if you mean some specific C library used by everything, the Java ecosystem is absolutely huge and has the benefit of being almost completely written in Java with very little native code.
> > But not C/C++ (or, now, Rust/Go), in which a lot of common code on which native-targeted code in other languages rely is written
> It does actually.
It didn't when applets lost.
> Graal can run LLVM bitcode
GraalVM came around a long time after applets failed, so it isn't really relevant to assessing what capability applets had when they failed vs. what WASM has now.
The draft WASM garbage collection proposal is partially implemented in Chromium, you can try it by enabling the enable-experimental-webassembly-features feature flag.
> DOM access
But if you want to use a front-end framework that compiles to wasm then you can do that today. Rust, Go, C# and more all have libraries (only Rust really makes sense though, as the others involve shipping a huge runtime). Just don't expect it to be any faster than JS.
On the web, Wasm has currently found the most success with compute-intensive applications, since the JS <-> Wasm bridge is still pretty expensive. There are already some Wasm-based frameworks like https://platform.uno/ that work on the web, but things like React/React-native and Flutter have a huge head start.
Edit: I guess you mean you’d compile your React app to Wasm, no?
More clearly: once Wasm interactions with the browser are fast, a new framework + Wasm could replace React + JS.
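The cost being discussed is per call across the JS <-> Wasm boundary, which a UI framework would pay on every event and update. For a concrete picture of such a call, here is a hand-assembled module exporting a single i32 add function (the bytes encode the module shown in the comment; runnable in Node):

```javascript
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,        // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,  // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                // one function, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,  // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// Every one of these calls crosses the JS <-> Wasm bridge.
console.log(instance.exports.add(2, 3)); // prints 5
```

The call itself is cheap for plain numbers; the expensive part today is marshalling strings, objects and DOM handles across that boundary, which is what the interface-types/component-model work is meant to address.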
We went from VMs to Containers. From Containers to WASM. Now, I guess, we just need to learn how to properly isolate simple binaries.
There's an open proposal to add SIMD intrinsics to WASM.
Visually it's definitely not Assembly. It actually looks more like Python than C. I guess my subjective assumptions are based entirely on looks. Will have to try it for something, will definitely share afterwards :)
And here is an overview of this spec:
I think unikernels are appealing as a solution. Every program is "the operating system" running in something like Firecracker on a baremetal host.
That's a good point. And it's actually already happening (although not on the shared memory space yet as the post mentions).
Youki and crun (OCI/Docker runtimes) have already integrated Wasmer to enable running WebAssembly on their runtimes.
Now we have to wait for WebGPU.
On the other hand... Java couldn't pull this off. Flash couldn't pull this off either. So we'll see.
I don't see how these are "big" wins over containers. The services I run don't have to worry about cold start times; it takes much longer than starting a container for my services to be ready anyway. I also don't see how it has a smaller footprint, since the application needs to be stored somewhere. And while containers may only have coarse capability-based security using namespaces, I'm not sure having super-fine-grained capability-based security would make it worth switching over to wasm.
Unpopular opinion: Users will eventually realize that the lowest common denominator between languages is ... a BYTE STREAM (e.g. JSON/CSV/HTML), or what I think of as shell / Unix / Web -style composition.
IDLs and code generators are useful in many limited domains (e.g. when you control both sides of the wire), but they bake in a lot of assumptions that people don't realize are language-specific.
e.g. Protobufs are very good for C++ <-> C++ communication, but Java and Python users seem to dislike them equally, and even Go users do too.
COM is probably better than what most people are proposing now -- it recognizes that the problem is dynamic, rather than trying to create the leaky abstraction of a "fake" static system. It's true that IDL is reinvented every 5 / 10 / 20 years. (Related recent story: https://news.ycombinator.com/item?id=30128048)
I expect WASM will get some kind of component system (if it doesn't already exist), but many apps will still need to fall back to something more general.
I'm writing about byte streams as a narrow waist of interoperability now, and this is the "lead in" review post: http://www.oilshell.org/blog/2021/12/review-arch.html
Even though disparate WebAssembly components can run in the same process (with memory potentially shared by the host), for something as wide as the "Web", the lowest common denominator is still the common text-based interchange formats we already have.
Related comment on WebAssembly: https://news.ycombinator.com/item?id=28581634
Programmers underestimate the degree to which languages and VMs are coupled, i.e. I question whether WebAssembly is truly polyglot: GC requires a richer type system in the VM, and types create languages that are winners or losers. Losers are the language implementations that experience 2x-10x slowdowns.
Well, hilariously, when I looked at using it on a project, JSON outperformed protobuf in Python. JSON is implemented in C, Protobuf in Python, and the C decoder for a less efficient format won out.
(Now, technically, Protobuf has a C implementation, but at the time I was testing, it segfaulted reliably. Which is the problem with C…)
I also recall there were some severe problems with protobuf's ability to represent some types, but I don't remember what they are at this point. I thought it was something to do with sum types, but I'm looking at it now and it does have oneof, so IDK.
The Python protobuf implementation has a somewhat checkered history (I used protobuf v1 and v2 for a long time, and reviewed v3 a tiny bit).
The type system issue is that protobufs to a large extent "replace" your language's types. It's essentially a language-independent type system. So that means you are limited to a lowest common denominator, and you have the issues of "winners" and "losers"... I would call Python somewhat of a "loser" in the protobuf world, i.e. it feels more second class and is more of a compromise.
This doesn't mean that anybody did a bad job; it's just a fundamental issue with such IDLs. In contrast, JSON/XML/CSV are "data-first" and there are multiple ways of using them and parsing them. You can lazily parse all of them, DOM and SAX, for example, and you have push and pull parsers, etc. Protobufs have grown some of that but it wasn't the primary usage, and many people don't know about it.
As a side note, if you expect to programmatically change your records (e.g. inject a new field on the server into a record received from a client) Avro is a much better choice. Avro also has a stronger schema migration story. But both are row-oriented formats with nearly identical IDLs anyway.
Any language would be able to decode byte streams according to a provided schema into intermediate formats (same semantics, different implementations) with a library, and then decode that significantly easier-to-understand data into application-specific formats.
Rust's Serde is almost there, but uses dynamic structure instead of using a schema that allows custom types.
but you can generate LLVM IR and then use wasm-ld in order to generate WASM :P
If your pipeline processes are not compiled to wasm, they could alternatively be run on an ELF loader + interpreter that is compiled to wasm and functions just like in the previous paragraph. Of course that'd likely be slow, just like running CI for a foreign arch in qemu is slow.
edit: actually no, I answered my own question in my head: I can think of some applications for WASM that docker simply can't handle at the moment. ledger-cli, for one. I've wanted to include that in a flutter application for almost two years now. WASM seems like the perfect candidate to take care of this.
> the unification of Docker and Wasm containers will happen at the orchestration layer.
In fact, it happens now. The integration of WasmEdge with K8s tooling has been done. Developers can use crun, CRI-O, containerd, KubeEdge, KIND, OpenYurt and K8s to start, manage, and orchestrate WasmEdge apps. See more here: https://wasmedge.org/book/en/kubernetes.html
WebAssembly will run side by side with Docker using the same orchestration tools.
PS - actually, originally I was being sarcastic, but there probably are some very good security use cases for it.
I don't think this is actually as crazy as it sounds - when we designed the messaging around CNCF, containers, and WebAssembly, we deliberately went with a better-together story.
In large enterprises you have huge systems that each represent a variety of concerns - security, compliance, governance, reputation, etc. These systems show up as stakeholders in software development, like gates in CI/CD.
Starting with something like wasmCloud inside a container today lets enterprises leverage their existing benefits while still achieving _many_ of the benefits of WebAssembly and wasmCloud. wasmCloud publishes Docker containers, Helm charts, and integrations with service meshes for that reason.
Start where your users are today and take them where you want to go.
WebAssembly really gives us two degrees of portability:
1. CPU / OS / Execution portability; a standard way to package code.
2. A deny by default _security_ model that works the same across all of these different platforms.
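Point 2 is visible even in the plain JS embedding: a module can reach only the capabilities the host explicitly passes in as imports, and everything else simply doesn't exist from the module's point of view. A hand-assembled illustration (the bytes encode the module in the comment; `env.inc` is a made-up import name for the example):

```javascript
// (module (import "env" "inc" (func (param i32) (result i32)))
//         (func (export "run") (param i32) (result i32)
//           local.get 0 call 0))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type: (i32) -> i32
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76,             // import module "env"
  0x03, 0x69, 0x6e, 0x63, 0x00, 0x00,                   //   field "inc", func, type 0
  0x03, 0x02, 0x01, 0x00,                               // one local function, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run" (func index 1)
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b, // body: local.get 0; call 0
]);

// The host grants exactly one capability; nothing else is reachable.
const instance = new WebAssembly.Instance(
  new WebAssembly.Module(bytes),
  { env: { inc: (x) => x + 1 } }
);
console.log(instance.exports.run(41)); // prints 42
```

Leave `inc` out of the import object and instantiation fails outright; there is no ambient filesystem, network, or clock unless the host chooses to wire one in, which is the deny-by-default model WASI builds on.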
I'm not a full-time web developer, but I do sometimes build simple web apps with Python Flask for the backend and HTML/JS for the front end, sometimes with socketio for communication.
For my little use case it would be amazing to have the same language on both sides. I'm certainly missing something here, but I guess it's allowed to dream.
> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.
Grafbase is an edge-first data platform for developers. It will let you deploy fast backends globally using a Git workflow.
We're also betting big on server-side WASM.
We're launching the private beta in a few months. Sign up for early access if you're interested in building this with us:
I keep seeing this. Docker in no way replaces virtual machines. Can someone explain?
Nice try! That is not how I look at it.