Hacker News
Why WebAssembly is innovative even outside the browser (tetrate.io)
102 points by abc_tkys 5 months ago | 119 comments

With the current state of things WASM is if anything more interesting outside of the browser than in it. Having to shim through JS to interact with browser APIs makes interacting with the outside world sufficiently slow and cumbersome that WASM in the browser usually only makes sense if you have some large computation which requires little-to-no I/O.

Outside of the browser you at least have a chance of interacting with the underlying APIs in a performant enough manner to enable a wide variety of applications.

Genuine question: What does it bring new to the table?

Solomon Hykes, author of Docker: If WASM+WASI existed in 2008, we wouldn't have needed to created Docker. That's how important it is. Webassembly on the server is the future of computing. A standardized system interface was the missing link. Let's hope WASI is up to the task! cf https://twitter.com/solomonstre/status/1111004913222324225?l...

This is strange. WASM can't handle half the things Docker needs to run. I'm genuinely confused as to how he can say the two are equivalent in any way, despite being the author of Docker.

VMs existed at the time. Java was one of them, probably the most prolific of them all, and certainly had a standardized system interface.

Further, WASI is a nightmare. Most projects I've personally seen disregard it completely.

You are putting the cart before the Solomon. He is saying they would have built on WASM+WASI in the way that Docker is built on cgroups.

Java is not a target for C++ code compiled via Clang/LLVM. Wasm is.

Expert advice done cheap.

But... you can't. They still would have had to build a custom VM that could forward full-on syscalls, so Docker (or something similar) would still have to be written.

This still doesn't make sense to me. Cgroups are a Linux kernel feature, WASM is a VM. They do not have the same (or similar) scope of features.

You no longer need syscalls when you can interpose the whole runtime, and you don't need cgroups for a wasm process either. Sandboxing becomes a purely user-mode construct; no OS magic needed.

So you're saying you'd run an entire Linux kernel within a WASM VM?

You lose hypervisor speedups and kernel feature parity, and introduce a whole new surface area for security exploits since you're throwing away the entirety of Linux's history. And what's more, it was still possible with the JVM.

No, you leave Linux behind (or below). You could run the entire OS in Wasm, but what does an OS mean when all the same affordances are now in userland?

Userland doesn't have any semblance of I/O. WASI doesn't support arbitrary syscalls, so running containers would not work in WASM. I'm not seeing your solution to this. Docker and would-be-WASM-Docker would have very, very different scopes and thus would not be comparable.

Both sides of the Wasm env are userland; you can expose any method you wish into the Wasm environment. You don't even need WASI, it's just a crutch for programs that expect libc.

Wasm is two things, CFI and capabilities, everything else is an implementation detail.

Your definition of a container is too limited in scope. You don't understand what I or Solomon Hykes are saying wrt Wasm and isolation. I tried to help.

Here let me try:

What's important to understand is, Docker exists because Oracle made Java too scary to touch, thanks to big ongoing lawsuits against Google and demands for royalties just for running a JVM server. Also, the JVM is fast, but slow enough that it's within striking distance of a full cgroup running a full Linux kernel.

So it made sense to just create Docker, and allow people to abandon things like jruby, and just run normal Ruby in a Docker container.

WASM is what people would have wanted when Docker was created. A bytecode format like JVM, but faster, and not encumbered by big bad Oracle's coffer-rattling.

So in other words, if WASM had existed when Docker was created, people would have just recompiled their apps for WASM bytecode, and would have also recompiled things like MongoDB and MariaDB for WASM. Things like network syscalls would be replaced by traps to the webassembly runtime.


Experts happen to know what is actually possible.

Graal has AOT; you cannot compile Java to native with LLVM. But you know this already. You cannot compile C++ to the JVM without heroics or interpreting a RISC machine (NestedVM).

Are you going to quip that Burroughs did all this already as well?

Of course you cannot compile Java to native with LLVM, the support isn't there, that doesn't mean there aren't other Java AOT compilers to choose from, like the now gone Excelsior JET, or PTC and Aicas still on the market.

Not Burroughs, but AS/400 TIMI (now IBM i), Microsoft MSIL, TenDRA compiler suite, if you want to talk about C++ aware bytecodes.

> Java is not a target for C++ code compiled via Clang/LLVM. Wasm is

Well, it is with GraalVM with proper cross-language inlining.

He should have gone to talk to Microsoft and learned about MSIL.

For me, the most interesting potential of WASM is it provides a platform-independent sandboxed way to run untrusted code written in multiple programming languages.

This means it's in theory possible to run the same code on an embedded hardware platform, a desktop app or in the browser.

And while I'm sure there's "serious" business uses for that capability :) I'm most interested in what it enables in terms of user customisation/modding of games.

Which was my main motivation for creating a Wasmtime WASM runtime add-on for the Godot game engine: https://gitlab.com/RancidBacon/godot-wasm-engine

And also designing the "WebAssembly Calling Card" specification as a way of demonstrating how the same code could produce graphical output that is then used in 2D or 3D environments: https://wacc.rancidbacon.com

This is exactly what I'm doing right now with SNES emulators, bsnes-plus specifically. The idea is to be able to extend certain games and give the wasm module the ability to read/write the emulated memory and send/receive messages over a network connection to another process, among other things.

Sounds cool!

Do you have anywhere that interested people can read more about the project/progress?

Hm not really but I do update my repo on github.com. It's all work in progress stuff being designed as I go.


> This means it's in theory possible to run the same code on an embedded hardware platform, a desktop app or in the browser.

Maybe want to talk to the Java guys, and ask them why this doesn't work out in practice? ;-)

Just having a common byte-code interpreter isn't enough for portability of applications.

Admittedly, "How is this different to Java [planned]?" isn't an entirely unreasonable question to ask in this context. :)

What's your theory on why it doesn't/didn't work out in practice in Java's case?

There is one key difference in our respective wording which might point to a difference of perspective:

I referred to "the same code" and you referred to "portability of applications".

I don't really foresee "portability of applications" so much as "portability of modules/functions" (or "portability of libraries/plugins", I guess).

In which case, you're still correct when you say a common byte-code interpreter isn't enough--there also needs to be common specification/API for specific domains. (Which is where WASI is aiming for in terms of general system-level APIs; interface types are aiming for lower levels; and, other higher level domain/application-specific solutions may be developed for other situations.)

> And while I'm sure there's "serious" business uses for that capability :)

There surely is, Java, .NET were there first.

If we leave the browser out, then there are plenty of examples since the early 1960s.

I think the main thing it brings is a portable compiler target that actually stands a chance at being widely adopted.

Think of WASM as a JVM alternative, but it has heavy sandboxing.

By default, WASM is pure computing. It can't interact with the outside world. You need to allow WASM to talk to the outside world explicitly.
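To make that explicit-allow model concrete, here's a hand-rolled sketch in Rust (this is not the actual wasmtime/wasmer embedding API; every name below is invented): a host that only dispatches calls it has explicitly granted.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the capability model: the guest can only reach
// host functionality that was explicitly registered before instantiation.
struct Host {
    imports: HashMap<&'static str, fn(i32) -> i32>,
}

impl Host {
    fn new() -> Self {
        Host { imports: HashMap::new() }
    }

    // Explicitly grant one capability to the guest.
    fn grant(&mut self, name: &'static str, f: fn(i32) -> i32) {
        self.imports.insert(name, f);
    }

    // Called on behalf of the guest: anything not granted simply
    // doesn't exist from the guest's point of view.
    fn call(&self, name: &str, arg: i32) -> Result<i32, String> {
        self.imports
            .get(name)
            .map(|f| f(arg))
            .ok_or_else(|| format!("capability not granted: {name}"))
    }
}
```

Until `grant` is called for a given name, the guest's attempt to call it simply fails; real runtimes behave similarly at instantiation time, refusing to resolve imports the embedder never provided.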

You have native performance. It's fast.

It's platform-independent, and multiple languages can compile to WASM.

The properties lead to some useful applications.

Serverless, FaaS. If you use WASM for serverless you get very fast execution and cold starts. Cloudflare Workers can do 200ms cold starts and 20ms per request. Fastly's Compute@Edge, being its own runtime rather than a Chrome browser, can do better and serve requests in 10ms. The low overhead and secure sandboxing mean you can bin-pack these at massive scale and call them with little overhead. It's much more efficient than Lambda etc.

Cloudflare and Fastly, as edge CDN providers, let you deploy fast apps that are globally distributed by default and piggyback off their existing caching functionality, with no need for API gateways or complex plumbing. Fastly's C@E is especially interesting. Unfortunately, Fastly have failed to capitalise on it product-wise. As a product, Cloudflare is way ahead; the best tech doesn't win. I'm not disparaging Cloudflare here: I personally use Cloudflare over Fastly because, at this point, Cloudflare are a more complete product/ecosystem with a superior pricing model.

For serverless, it's early days but WASM/WASI will probably disrupt the current FaaS providers once the rough edges are worked out. I wish Fastly had a better product/business execution team to realise C@E properly.

Outside of serverless/FaaS, it makes massive sense to deploy WASM modules instead of full OSes in a Docker image on Kubernetes. Wasmcloud (https://wasmcloud.dev/) looks to be one of the first here, running an actor system in Kubernetes where the actors are WASM apps.

The other place where it has big potential is to be the Electron for hardware devices. Here's a talk about building a WebAssembly runtime for iPlayer: https://www.youtube.com/watch?v=28paRXqI-Gk . They essentially built the bulk of the logic/app in WebAssembly, then created small shims for different devices to load it. WebAssembly allowed the hardware providers to offer a portable runtime that can execute 3rd-party software in a restricted environment.

WebAssembly in the browser is the least exciting part about WebAssembly. I'm incredibly excited to see how it is used for Microservices and serverless.

> By default, WASM is pure computing. It can't interact with the outside world. You need to allow WASM to talk to the outside world explicitly.

That only applies to MVP 1.0, the long term roadmap hardly makes it any different from the myriad of other bytecode distribution formats.

WASM is to Node what C/C++ extensions are to Python.

That doesn't particularly sound right to me. Node also supports C/C++ extensions, and AFAICT the use of Wasm in Node for performance is still relatively uncommon.

WASM has far more potential for serverless usage than it does in the browser.

I think Cloudflare's Workers are a good example of where things might be going with the tech, but there is room for a lot more. I would love to see something like coroutines adopted for WASM that allow for asynchronous WASM computation.

Coroutines are usually a language-level construct (syntactic sugar for a switch-case state machine). Machine code doesn't have "coroutine support" either, yet all languages with coroutine support run just fine on regular CPUs.

> Machine code doesn't have "coroutine support" either yet all languages with coroutine support run just fine on regular CPUs.

Yes and no.

Some ISAs do have real "co-instructions" that are executed concurrently with the "main" program; for example, Lisp Machines used this approach for executing bounds-checking concurrently, as well as other kinds of validation/assertions (and even had a limited form of hardware time-travel to jump back to before a co-processor exception occurred).

This approach died out due to the complexity of implementation.

Coroutines are present at the language and runtime level. Most languages with coroutines have runtimes baked in that handle them behind the scenes (think of JS), but some (like Rust) require you to provide your own. Async executors are quite large in terms of code size, so I think it would be useful for WASM to expose a coroutine interface for compilers to target. I think that not having to ship your asynchronous runtime would be very beneficial.

Usually syntactic sugar? Not true, unless you give up the ability to call a function and switch/resume the coroutine seamlessly inside it. You need the ability to manipulate the call stack. Regular CPUs have no problem with that, but wasm expressly disallows it. So any language with coroutines needs to avoid using the wasm stack.

I guess you mean fibers (which require call stack switching). That's one way to implement coroutines, but most languages with async/await implement coroutines differently by transforming async functions into a simple switch-case state machine (where each "yield block" is one case-branch) plus associated local state which doesn't live on the regular call stack. This code transformation is also the reason for "function coloring", which isn't needed for a fiber-based coroutine system.
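To make that transformation concrete, here's a hand-compiled sketch (all names invented) of a coroutine that yields twice, lowered into an enum-based state machine. The local that lives across the first yield (`acc`) is hoisted off the call stack and into the state enum, which is why no stack switching is needed.

```rust
// The result of one resumption: either a yielded value or the final return.
#[derive(Debug, PartialEq)]
enum Step {
    Yielded(i32),
    Done(i32),
}

// One enum variant per suspension point, carrying the live locals.
enum Counter {
    Start,
    AfterFirstYield { acc: i32 },
    Finished,
}

impl Counter {
    // Each call runs until the next yield point, like polling an async fn.
    fn resume(&mut self) -> Step {
        match *self {
            Counter::Start => {
                *self = Counter::AfterFirstYield { acc: 1 };
                Step::Yielded(1) // first `yield`
            }
            Counter::AfterFirstYield { acc } => {
                *self = Counter::Finished;
                Step::Yielded(acc + 1) // second `yield`
            }
            Counter::Finished => Step::Done(0), // final return
        }
    }
}
```

Driving it with repeated `resume` calls returns `Yielded(1)`, `Yielded(2)`, then `Done(0)`; the per-coroutine storage is just the enum, not a full stack, which is what makes this lowering wasm-friendly.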

Naturally it had to leave out CLR, TIMI, the z/OS Language Environment, BREW, TenDRA, and the Amsterdam Compiler Kit as examples of bytecode formats that support multiple languages, including C and C++.

The only innovation is being more marketable.

Successfully shipping the runtime in multiple web browsers works pretty well as social proof. People figure that if the sandboxing and portability is good enough for web browser vendors to trust it, it's probably good enough for them.

Java rode that train in the early days of the web with applets. But the user experience wasn't good and there were security holes, and Java support was gradually removed.

A lot of other languages tried this, but didn't get wide adoption.


"Everything Old is New Again: Binary Security of WebAssembly"


Other research papers are bound to appear as the security community finally starts looking into WebAssembly security claims.

Marketing is everything.

I skimmed through the PDF in less than 10 minutes, but from what I can tell, it points out that binaries compiled to WebAssembly are still vulnerable to exploitation. Well, duh. What WebAssembly protects is the host machine, not the binary running inside the VM. None of these are attacks on the VM itself.

Basically just nothing more than an OS process.

It is not duh when WebAssembly advocates use the security sales pitch.

The entire point is running completely untrusted code safely contained from the host, not protecting the code from itself (which was never the intention). It might bear some resemblance to OS processes in its sandbox capabilities, but it offers a lot more:

- OS independent

- CPU independent

- Easy target for compilers

- Easily embeddable

- Code cannot modify itself in memory (which makes the point below easier)

- Easy to prove safe from accessing the host (very limited instruction set; can only access a certain address range of memory). It is hard to make a mistake implementing the VM: it's not too complex, and you can make sure every memory load/store instruction emitted targets "(BASE_ADDRESS << 32) | x", with x being the pointer as seen from inside the wasm, and BASE_ADDRESS being the upper 32 bits of the 4 GiB-aligned range of virtual memory you have for the module.
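The load/store translation in the last bullet can be written out directly. This is a sketch of the scheme the comment describes, not any particular runtime's code:

```rust
// Sketch of the address computation described above: the guest's 32-bit
// pointer is combined with the module's 4 GiB-aligned base, so a guest
// access can never land outside the module's reserved window.
fn effective_addr(base_upper32: u32, guest_ptr: u32) -> u64 {
    // (BASE_ADDRESS << 32) | x, with x the pointer as seen inside the wasm.
    ((base_upper32 as u64) << 32) | guest_ptr as u64
}
```

Because the guest pointer only occupies the low 32 bits, even `u32::MAX` stays inside the module's 4 GiB window; no bounds check branch is needed on the hot path.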

The fact that it is widely implemented by browsers helps prove all these points.

I'm sure the next big fun project for some people is gonna be writing a microkernel that runs wasm binaries in ring 0. It's that safe (of course, Spectre might still add a bit of overhead to context switching).

All valuable points when one ignores the history of bytecode distribution formats since the early 1960s, capabilities, and container security measures on OS processes.

No need to modify the code when data can be corrupted and thus be misused to influence the decisions taken by the code.

Yes, it will probably be a fun project replicating the Burroughs architecture in 2021.

By the way, had Chrome had the same market share in 2010 that it enjoys today, everyone would be talking about how great PNaCl is.

Flash and Java had the market, but failed, so I wouldn't be too excited about PNaCl. Isn't the open-source, foundation-based standard the selling point of wasm?

They did not fail; politics killed them, the same politics that pushed for WebAssembly. But thanks to the WebAssembly pandora's box, they were reborn like a phoenix out of the ashes.




I don't understand why you are so bitter about this.

The fact that Wasm has a binary representation is orthogonal to the design. It has CFI and capabilities; everything else is an implementation detail. CFI allows it to be compiled down to native code safely and run inside the same address space as the containing process. Capabilities control what gets in or out. You would be well served to ignore the whole bytecode aspect and just accept the Wast.

Yes, WA has a defined security scope: it's right between the host and the WA program, and if you choose to sidestep it by eval-ing the output of a potentially malicious WA program, or simply handing it secrets, obviously you can get poor outcomes.

No, WA has not solved the halting problem and fixed all bugs in sandboxed programs.

Yes, it's important for people developing WA-based systems to acknowledge the above limitations.

You've been bandying about this paper for the last year as if it means something that it doesn't [1-12]. Every time someone actually reads it they come to the same conclusion that this paper says nothing about the WA-host memory barrier -- and "everyone" includes the authors themselves [13]. And every time you respond by moving the goal posts with "but but WA isn't magic security pixie dust that solves all security problems". If I sound annoyed it's because I've been pushing back on your FUD with nary a direct response that acknowledges these limitations to your narrative and I am annoyed. Also, it's been a year now, where are all these papers that break WA that you keep promising? Are you privy to WA-host memory barrier breaking results that have yet to be published (and aren't just repackaged CPU vulnerabilities)? I'm open to evaluating the claim of such a break being found in the future, but there's no evidence for it yet.

[1-12]: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[13]: https://news.ycombinator.com/item?id=24223979

I will keep pushing FUD as you call it, because the so called "security" improvements of WebAssembly aren't there.

Just marketing; it doesn't matter that the host isn't directly exploited when it can be affected by the behaviour of corrupted internal data structures inside the WASM module, just like an exploited OS process affects the surrounding environment with its actions.

A security sales pitch used to stagnate the state of RIA during the last 10 years.

As for the papers not being there, I guess it is more fun to try to exploit other stuff than WebAssembly.

Rest assured that you will be notified when they get announced.

An exploited WASM module can affect all the things it has access to, just like an exploited OS process can affect all the things it has access to. Agreed.

Where we disagree is when you refuse to acknowledge that these two isolation systems can have vastly different access. The key thing to observe is that WASM's access is configurable; yes that means it could be up to the same level of access as a process if you're an idiot and give it that, but it could also be much more locked down than a process ever could be and in fact that's the point of using WASM. A process can attack the entire kernel via all syscalls, the entire filesystem, anything accessible over the network, any devices attached to the computer, etc, etc.

If you write a game plugin architecture based on WASM and only give it access to affect entities in your game (and you're careful to treat its output as if it came from a potentially malicious remote server), there's nothing that the WASM plugin can do to break out, it can't attack arbitrary resources on your computer. Configured properly, at best you could compare WASM to a locked down SELinux process, but WASM is so much more portable that it's basically a new category of thing.
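A sketch of that plugin boundary in plain Rust (the trait stands in for the set of host functions exported to the Wasm module; all names are hypothetical):

```rust
// The narrow interface the host exposes: the plugin can touch game
// entities and nothing else -- no filesystem, no network, no syscalls.
trait GameWorld {
    fn entity_health(&self, id: u32) -> Option<i32>;
    fn damage_entity(&mut self, id: u32, amount: i32);
}

// The host's real state, never handed to the plugin directly.
struct World {
    health: Vec<i32>,
}

impl GameWorld for World {
    fn entity_health(&self, id: u32) -> Option<i32> {
        self.health.get(id as usize).copied()
    }
    fn damage_entity(&mut self, id: u32, amount: i32) {
        if let Some(h) = self.health.get_mut(id as usize) {
            *h -= amount;
        }
    }
}

// The plugin (in reality a Wasm module) only ever sees `&mut dyn GameWorld`.
fn plugin_tick(world: &mut dyn GameWorld) {
    if world.entity_health(0).unwrap_or(0) > 50 {
        world.damage_entity(0, 10);
    }
}
```

Nothing reachable from `plugin_tick` can open a file or a socket, because no such capability was ever passed in; the attack surface is exactly the trait.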

> As for the papers not being there, I guess it is more fun to try to exploit other stuff than WebAssembly.

Yeah I guess a target with an attack surface that includes every web browser compiled in the last 5 years and ubiquitous cloud infrastructure that terminates half the web's TLS sessions is just too boring and low profile. Sure, that's why nobody's found a vulnerability yet.

> Rest assured that you will be notified when they get announced.

Certainly, I can read the HN front page just as well as anyone. But I suppose that means you don't have anything better either.

If history tells us anything, it's that marketability is really important when it comes to getting minimum-viable-adoption for a technology.

Especially when most of the audience doesn't bother to go through the standard, or to learn about the history of bytecode formats for executable distribution.

The "grant capabilities" model of WASI is such a "how did they not think of this before?" thing. I need to play around with it some more, but Wasmtime etc. is really fun.

They did, there are several capability based research OSes.

Let's not forget KeyKOS which ran very early ATMs in the 80s!

In many ways, WebAssembly was a poor choice in naming. I propose that "WASM" be retroactively associated with the phrase "world-wide assembly".

(Serious proposal. Please consider adopting this in your own usage, even if informally; think of it as guerilla rebranding.)

Well it's fitting in that JavaScript is a pretty poor name too

WASM isn't "assembly" either though but bytecode ;)

If we can accept Javascript as bytecode we can accept this as assembly...

At the risk of stumbling into something terrible, who accepts Javascript as bytecode?

The Hermes JavaScript engine statically compiles JS to bytecode. https://github.com/facebook/hermes

IIRC V8 generates bytecode from the AST and compiles that bytecode

Yes but that is converting JavaScript to bytecode, not using JavaScript as bytecode.

That's an implementation detail.

Javascript as bytecode (or Javascript as assembly) is a paradigm that comes from compile-to-js languages like Typescript and asm.js[0], considering such "compiled" javascript as equivalent to a universal bytecode. Here's an old HN thread about it [1,2] and if you look around you can find it being asserted[3] as often as criticised[4,5].

Ironically, many of the arguments against the use of JS as bytecode just got transplanted into arguments against WASM, but one common argument against WASM is "why do we need this when we can already compile to javascript?"







Oh, but that's not really bytecode, is it? I'd say that's language transpilation. Which is a different topic, a contentious one for sure, but I don't think I'd qualify compiling down to another language as a form of bytecode; perhaps I'm splitting hairs though. My understanding is that bytecode is read during execution on a VM or some such, like Java or Python bytecode during execution. JS is still a high-level language which V8 then turns into its own type of bytecode before being able to execute it.

Yes, I would agree completely... but I've gotten into enough arguments about it on HN and been downvoted hard enough for disagreeing with the premise to know how seriously a lot of people take it.

But who does that?

Serious question.

I read it often, but besides minified JS, most to-JS compilers try to emit readable JS, and I didn't have the impression that the goal of bytecode was to be readable.


> The Hermes JavaScript engine statically compiles JS to bytecode. https://github.com/facebook/hermes

It sounds great in theory, but I am not convinced that it brings anything significantly new to the table yet. I am still looking for backend applications in wasm outside of envoy proxy.

Lua pretty much does everything (minus polyglot) that wasm claims to do, and has a good dev tooling system. It'd be a lot more enticing if I could write lambda functions/serverless computing right off the bat with wasm alone.

How could this "comparison" miss the elephants in the room?

Java is mentioned, but not the JVM, or .NET (core)?

Therefore this reads once again like an ad…

But that actually fits a pattern: most things one can currently read about WASM heavily over-promise this technology. It's nowhere close to the established VM runtimes so far!

Does WASM really need dishonest marketing posts so badly?

Who is actually throwing money at these marketing efforts? Who doesn't like the JVM anymore, and isn't using .NET either?

Doesn't the JVM weigh dozens of megabytes, with a tracing JIT, baked-in OO semantics, GCs, etc.? That hardly seems comparable with wasm, which currently works best for languages like C++ or Rust with pretty small runtimes. Just like people more commonly use Lua as an extension language rather than Java.

And where is the advantage to run C/C++/Rust as WASM?

It's much slower than native (you will need a JIT to make it reasonably fast again!), needs a considerable runtime with significant memory overhead compared to native, and a complicated build setup.

Someone will say now: Sandbox.

But sandboxing native code, in a native way (using directly the facilities that the WASM runtime would use) is much simpler, cheaper, and more performant.

And for high-level languages you need all the things that the JVM has built in anyway. There you also get instrumentation, monitoring, and debugging facilities for "free".

The only valid point of criticism I see here is the complaint about the "built-in" OO semantics of the JVM. But that's nothing that couldn't be augmented by some better-fitting mechanics for cases where translation to OO is very problematic. (Though actually I have a hard time thinking of an example of this problem. OO and FP are duals¹ ² of each other, and there's not much that's neither OO nor FP.)

¹ https://www.cs.uoregon.edu/Reports/DRP-201905-Sullivan.pdf ² https://www.researchgate.net/publication/332248372_Codata_in...

You do need a JIT, but a simpler one, like AOT-on-demand, since it's very static compared to other formats. It's not that far from shipping LLVM IR, except it's specified.

As for the memory overhead, where have you seen that? To my knowledge the most problematic part there is that wasm currently doesn't offer any way to return memory to the OS.

The point of sandboxing is that the space allocated to wasm is well delineated and there should be no way to access the rest of your memory from the inside. For stuff like plugins, that's very valuable compared to just providing a .so with absolutely 0 isolation, in addition to having to compile the .so/.dynlib for your specific OS and architecture beforehand.

It is an ad; most WebAssembly advocates never bother to study the history of bytecode formats since the early 1960s.

It sells better when "WebAssembly did it first".

How easy/practical is it to ship a WASM binary to run on all major platforms?

Numerous native runtimes for webassembly already exist[0], with the current popular choices apparently being Wasmer[1] and Wasmtime[2].

All one would need to do (AFAIK) is ship a client for all major platforms, as is done with Electron (and web browsers themselves, and everything else.)




You still need a browser if you want to do anything GUI related.

While a "host" application (for the WASM runtime used) is required to enable access to graphical output (or user input), it doesn't have to be a browser.

At the (almost) most basic level a chunk of memory can be used as a framebuffer--the host application would read the pixel data which the WASM bytecode wrote and then write it to the host display via OS-level routines.

There are some plans/experiments at making a framebuffer "device" available as part of WASI.
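A minimal sketch of that framebuffer handoff, with plain Rust functions standing in for the two sides of the Wasm boundary (dimensions and names invented):

```rust
// Tiny illustrative framebuffer: 4x4 pixels, 4 bytes (RGBA) each.
const W: usize = 4;
const H: usize = 4;

// Stands in for code inside the module: it writes RGBA pixels into a
// region of its own linear memory.
fn guest_draw(linear_memory: &mut [u8]) {
    for y in 0..H {
        for x in 0..W {
            let px = (y * W + x) * 4;
            linear_memory[px] = 0xFF;     // R: solid red
            linear_memory[px + 1] = 0x00; // G
            linear_memory[px + 2] = 0x00; // B
            linear_memory[px + 3] = 0xFF; // A: opaque
        }
    }
}

// Host side: read the framebuffer region out of the module's memory so it
// can be written to the real display via OS-level routines.
fn host_read_framebuffer(linear_memory: &[u8]) -> Vec<u8> {
    linear_memory[..W * H * 4].to_vec()
}
```

The host never hands the guest a display handle; it just agrees on a memory layout and copies bytes out after each frame, which is exactly why no browser is needed.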

I've written a couple of graphical WASM host applications that aren't browsers (and which don't use memory for pixel data transfer, just integer values returned from a function):

The "WebAssembly Calling Card (WACC) Viewer" is implemented via the Godot game engine and an addon that integrates the Wasmtime WASM runtime with the engine: https://wacc.rancidbacon.com

(Also implemented a WACC Viewer in Rust: https://gitlab.com/RancidBacon/rusty-wacc-viewer)

WACC specifies how to transform three integer values (returned from a function in a WASM module) into a coloured triangle in order to render it on screen.

Another "host application" I implemented was a libretro compatible plugin that loads a WASM module and then feeds the module with input from libretro & retrieves framebuffer pixel data (one pixel at a time :D ) via a WASM function call & writes it to the libretro framebuffer for display.

Can you link to some of your applications?

Why? Other languages running in similar environments, like Lua/Java, don't need a browser. WASM does not need to run within a browser, and there are a couple of runtimes that run it outside of one; in that case, why wouldn't something like SDL work?

The most important feature of WASM is that it provides a secure sandbox, and all APIs that talk to the underlying host platforms must be designed with security in mind (see all the considerations that went into WebGL vs vanilla OpenGL).

In a browser environment, sure because browsers need to sandbox webpages. But WASM runs on different environments too.

It's also important outside the browser, WASM could be the solution for running untrusted code without a walled-garden app distribution model, but only if the sandbox actually works.

As long as the internal memory doesn't get corrupted due to the lack of bounds checking on accesses within linear memory.

Corruption that can then be used to subvert the behaviour of functions being called.

That's a language-level problem, e.g. "just use Rust" if you're concerned about "sandbox-internal" memory corruption (and that this internal corruption can't be used to alter the behaviour of called host platform functions is why APIs must be designed with security in mind - WASM running in browsers has the same problem, that's why WebGL does additional validation compared to native GL implementations).

"Just use language X" only works when you're not relying on 3rd-party WASM libraries; also, I bet Emscripten sees much more use in the wild than Rust.

WebGL's additional validation is so good that the only way to work around exploits is to blacklist users' hardware, OSes and GPU drivers, thus turning those nice 60 FPS scenes into single-digit FPS via software emulation.

Security would be so much better if liability were already a thing across the industry, and not only in high-integrity systems.

As secure as an OS process.

There are proofs of concept using shaders for security attacks.

WASI could be extended by "media APIs" to communicate with a host platform window system, rendering- and audio-API. The only question is "where to stop?", because at the end of that road lies a complete browser runtime (I still think it makes sense to extend WASI with a small number of media APIs though).

Would it not be adequate to supply a minimal implementation of canvas and develop with a UI framework that draws to canvas?

Or perhaps something like React Native, with WASM standing in for the JS runtime?

Would it be faster though? E.g. as a replacement for Electron (please note I've never worked with Electron, so I'm using anecdata).

If I understand correctly, the consequence is that, Electron being a wrapper around a fully capable browser engine, Electron users are free to use WASM modules so long as they expose a JS interface to interact with Electron's API where necessary. But until WASM gets DOM bindings, I presume that Electron itself won't use WASM to implement internal behavior.

does wasm support more than one program counter (more than one virtual CPU?) or is it stuck in a "single thread" model?

if so, are there any plans to change that?
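As far as I know, the original wasm MVP is single-threaded, but the threads proposal (already shipped in the major engines) adds shared linear memory plus atomic instructions; actually spawning the threads is left to the host, e.g. Web Workers in browsers or worker_threads in Node. A small sketch of the shared-memory part, seen from the JS side:

```javascript
// The wasm threads proposal adds *shared* linear memory: the same memory
// object can be handed to several wasm instances running on different host
// threads (Web Workers in a browser, worker_threads in Node).
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });

// The backing store of a shared wasm memory is a SharedArrayBuffer,
// so plain JS on any thread can see the same bytes:
console.log(memory.buffer instanceof SharedArrayBuffer); // true

// Cross-thread coordination uses atomic operations on that buffer,
// mirroring wasm's own atomic instructions:
const view = new Int32Array(memory.buffer);
Atomics.store(view, 0, 42);
console.log(Atomics.load(view, 0)); // 42
```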

Rust has an interesting application for WASM where it can be used to pre-compile macros. Saves a bunch of compilation time by skipping the need to compile the macros up front.


shouldn't the JVM be considered polyglot?

Uhh, how is it polyglot if it only accepts one language (Java)?

Java, Kotlin, Clojure, Scala, Groovy

More here: https://en.wikipedia.org/wiki/List_of_JVM_languages

It accepts any language that compiles to JVM bytecode. Java is just one of them. Just like an x86 processor accepts any language that compiles to x86 instructions.

Doesn't that require a language to be engineered to output Java bytecode? For example, Rust can't be compiled to Java bytecode because it does things the JVM can't do. A real polyglot can't understand a language only by requiring it to speak its native tongue. Wasm supports languages that weren't created with wasm in mind, just by requiring a compiler with the right output, without constraining the language.

> just by requiring a compiler with the right output,

How is that different from a compiler that generates JVM bytecode?

Yes, the bytecode was designed specifically for Java's needs, but running languages that weren't designed for the JVM on the JVM is still less of a stretch than compiling C to asm.js, something which has worked quite successfully. Or targeting a different CPU architecture.

Of course you could run Rust on the JVM (more precisely on GraalVM)!

Have a look at Project Sulong.


GraalVM has a Wasm front end, there is almost no reason to use bitcode as the interface to Graal.


That makes no sense in case of Rust (or C/C++ for that matter).

You would compile

  LLVM-frontend -> LLVM-IR -> LLVM-WASM-backend -> GraalVM-WASM-frontend -> GraalVM-bytecode-backend
instead of

  LLVM-frontend -> LLVM-IR -> GraalVM-LLVM-IR-frontend -> GraalVM-bytecode-backend
The additional transformations won't be healthy for the performance of your program, or compile times…

Does the second compilation chain do Control Flow Integrity?



It looks like it might be possible, but I really like the security properties of Wasm. It looks like I am moving the goalposts, but I am not: that is the biggest reason to use Wasm; the second one is the capabilities model.

It would be a great undergrad research project to measure the difference in those two compilation chains.

I'm not sure whether you could hack the control flow when running bytecode on the JVM, but I strongly doubt it. (The JVM is "high-level", as pointed out previously, and doesn't execute ASM-like code, so there is none of the attack surface you have to care about at the ASM level.)

And capabilities are anyway something that belongs in the OS, and then programs need to be written accordingly. The whole point of the capability-security model is that you can't add it after the fact. That's why UNIX isn't, and never will be, a capability-secure OS.

But "sandboxing" some process running on a VM is completely independent of that!

WASM won't get you anything beyond a "simple sandbox" out of the box. Exactly the same as you have in the other major VM runtimes.

If you want capability-secure Rust, there is much more to it. You have to change a lot of code and use an alternative std lib¹. Of course you can't then use any code (or OS functionality) that isn't also capability-secure. Otherwise the model breaks.

To be capability-secure you have actually to rewrite the world…

¹ https://github.com/bytecodealliance/cap-std

> I'm not sure you could hack the control flow when running bytecode on the JVM

LLVM IR is (compiled from) C/C++; if you are embedding that into your Graal program w/o the same level of CFI that Wasm has, you can definitely crash your Graal program. We were literally talking about LLVM IR and you switched back to bytecode. JVM bytecode is safe, LLVM IR is not.

> WASM won't get you anything beyond a "simple sanbox" ootb.

Not true; the core tenet of Wasm is built around CFI.


Wasm has capabilities everywhere; it doesn't need an OS to provide this.

Graal is so much more than the JVM. The way it integrates LLVM IR absolutely matters wrt safety. If you include Wasm in native code, it doesn't reduce any of its security properties. If you include LLVM IR in a JVM program, it will reduce your security properties. While LLVM has CFI, it doesn't provide the same guarantees that Wasm's CFI does.

It accepts Java bytecode, but various programming languages (Java, Kotlin, Scala, Clojure to name a few) are built on top of that.

serious question: what would wide WASM adoption mean for js developers? is my obsolescence imminent?

While WebAssembly is better for some things than JS, for most code you run in a browser, it's no improvement and actually worse.

When you do need wasm, though, then it opens up entire new possibilities, things you really couldn't do before. I like to compare wasm to the video element: not every site should use it, and it won't replace JS or CSS or HTML, but its existence makes things like YouTube and Netflix possible.

The future of code on the web is JS + wasm.

The future is WebGL/canvas + WASM, Flash's revenge.

Never pin your career to just one language/framework/technology; otherwise you'll become obsolete at some point no matter which one it is.

That said: I don't think you need to worry about JS receding for at least another decade, maybe two.

WebAssembly in the browser currently can't manipulate the DOM directly. Neither can it make HTTP requests. You need to use a JS bridge to do that. That has some overhead. So I don't think webassembly will replace JavaScript anytime soon. It might be reserved for high-performance compute, or maybe canvas based rendering for the time being.
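To illustrate that bridge: the hand-assembled toy module below (just an `add` function, purely for illustration) shows that the only way in or out of a wasm instance is through the JS embedding's exports and imports. DOM manipulation and HTTP requests have to be proxied through exactly this kind of boundary, which is where the overhead lives.

```javascript
// Hand-assembled wasm module equivalent to:
//   (module
//     (func (export "add") (param i32 i32) (result i32)
//       local.get 0
//       local.get 1
//       i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // func body
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// JS is the only way in or out: every call crosses the JS<->wasm boundary,
// and that boundary (plus the copying/conversion it often requires) is the
// bridging overhead the parent comment mentions.
console.log(exports.add(2, 3)); // 5
```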

Upcoming versions of Google Docs will use canvas based rendering [0], and I presume their custom renderer might be compiled to WebAssembly. WASM and Canvas (potentially using WebGL) are definitely an interesting combination right now.

[0] https://news.ycombinator.com/item?id=27129858

I think the nail in the coffin will only come when the DOM is killed. If someone can write a rendering engine in WASM that renders to canvas (maybe even render apps with WebGL), then one can just create a new API for browser apps.

That’ll be the end of frontend as we know it, and everyone will have to learn this new API. The benefits are obvious: we’ll finally have a rendering engine for apps. I suppose document-based HTML pages can stick around, but the rest of us can move on.

You mean like for example MS Blazor WASM, or Qt for WebAssembly (QML in the browser)?

Even though I think something like that is long overdue (the misuse of HTML and the DOM as the most horrible GUI framework for application development ever invented should stop immediately), I somehow don't like the prospect of MS Blazor becoming a thing. I hope everybody knows what that would mean: "Windows" (APIs) in the browser…

QML, on the other hand, looks good (at least from the outside). But it's nowhere close to significant usage, so no hope for that, frankly.

And it will be exactly as infuriating as Flash was (except for the artist). Change my mind.

I surely won't change your mind; that is exactly what I am looking for.

WebAssembly made us regress 10 years: we lost the great GUI-based tooling for Web page creation, and 3D games that are actually hardware-accelerated without browsers resorting to blacklists.


Say polyglot again.

Please bring more toolchains that target WASM, and please add some options to interact with the DOM!
