
Can someone explain to me what the difference really is between WASM and older tech like Java Applets, ActiveX, Silverlight and Macromedia Flash, because they don’t really sound much different to me. Maybe I’m just old, but I thought we’d learnt our lesson on running untrusted third party compiled code in a web browser. In all of these cases it’s pitched as improving the customer experience but also conveniently pushes the computational cost from server to client.



Java and Flash failed to deliver their promise of an unbreakable sandbox where one could run anything without risking compromising the host. They tried, but their implementations were riddled with vulnerabilities, and eventually browsers made them unusable. The other technologies mentioned didn't even promise that, I think.

JavaScript did deliver its promise of an unbreakable sandbox, and nowadays the browser runs JavaScript downloaded from any domain without asking the user whether they trust it or not.

WASM builds on the JavaScript engine, delivering similar security guarantees.

So there's no fundamental difference between WASM and JVM bytecode. There's only a practical difference: WASM proved to be secure and JVM did not.

So now Google Chrome is secure enough for billions of people to safely run evil WASM without compromising their phones, and you can copy this engine from Google Chrome to a server and use this strong sandbox to run scripts from various users, who could share resources.

An alternative is to use virtualization. So you can either compile your code to a WASM blob and run it in the big WASM server, or compile your code to an amd64 binary, put it alongside a stripped-down Linux kernel, and run that thing in a VM. There's no clear winner here for now, I think; there are pros and cons to every approach.


>So there's no fundamental difference between WASM and JVM bytecode. There's only practical difference: WASM proved to be secure and JVM did not.

There's more to it than just the sandbox security model. JVM bytecode doesn't have pointers, which has significant performance ramifications for any language with native pointers. This limitation was one of the reasons why the JVM was never a serious compilation target for low-level languages like C/C++.

E.g. Adobe compiled their Photoshop C++ code to WASM, but not to the JVM to run in a Java JRE or as a Java web applet. Sure, one can twist a Java byte array into acting as a flat address space and then "emulate" pointers for C/C++, but this extra layer of indirection, which reduces performance, wasn't something software companies with C/C++ codebases were interested in. Even though the JVM was advertised as "WORA Write-Once-Run-Anywhere", commercial software companies never deployed their C/C++ apps to the JVM.

In contrast, the motivation for asm.js (predecessor to WASM) was to act as a reasonable and realistic compilation target for C/C++. (https://blog.mozilla.org/luke/2013/03/21/asm-js-in-firefox-n....)

So the WASM-vs-JVM story can't be simplified to "just security" or "just politics". There were actual different technical choices made in the WASM bytecode architecture to enable lower-level languages like C/C++. That's not to say the Sun Java team's technical choices for the JVM bytecode were "wrong"; they just used different assumptions for a different world.


Also, the start-up time for the JVM made running applets very sluggish. Java quickly became a synonym for "slow".


You can’t just compare across decades of software and hardware development. Even downloading native binaries would have been sluggish, given the download speeds of the time.


Isn't the cold-start for the JVM still relatively slow, even in [current year]?

EDIT: seems like yes[1], at least where AWS Lambda is concerned.

[1] https://filia-aleks.medium.com/aws-lambda-battle-2021-perfor...


I have a couple Quarkus apps that I've run in Lambdas that start in about a second. This is without using GraalVM too! Good enough for what I was doing (taking a list of file names, finding them in an S3 bucket and zipping them into a single payload)


But web pages were not so sluggish, hence people chose them over using applets.


Web pages at the time could at most <blink>; their interactivity was extremely limited compared to what we have now. Meanwhile a Java applet could include a full-blown IDE/CAD/what have you.


Well, web pages could submit forms, which was the main thing. I remember working on apps where we went with web pages because applets were too slow, regardless of the features we gave up. Images were generated on the back end instead, for example.


Lack of 64-bit ints didn't help either...


> WASM proved to be secure and JVM did not.

It is interesting to ask why that is the case. From my point of view the reason is that the JVM standard library is just too damn large, while WASM takes the lower-level approach of just not having one.

To give WASM the capabilities it requires, the host (the agent running the WASM code) needs to provide them. For a lot of languages that means using WASI, which moves most of the security concerns to the WASI implementation used.

But if you really want to create a secure environment you can just... not implement all of WASI. So a lambda function host environment can, for example, just not implement any filesystem WASI calls, because a lambda has no business touching the filesystem.
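
For example, here's a minimal sketch of that idea using Node's built-in WASI support (the ./guest.wasm path and the exact options are assumptions; the point is just that the host hands the module an empty set of preopened directories, so filesystem calls have nothing to reach):

    import { readFile } from 'node:fs/promises';
    import { WASI } from 'node:wasi';

    // The host decides the capabilities: no preopened directories, no env vars.
    const wasi = new WASI({ version: 'preview1', args: [], env: {}, preopens: {} });

    const wasmModule = await WebAssembly.compile(await readFile('./guest.wasm'));
    const instance = await WebAssembly.instantiate(wasmModule, wasi.getImportObject());

    wasi.start(instance); // the guest runs, but there is no filesystem for it to touch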

> An alternative is to use virtualization. So you can either compile your code to WASM blob and run it in the big WASM server, or you can compile your code to amd64 binary, put it along stripped Linux kernel and run this thing in the VM.

I think the first approach gives a lot more room for the host to create optimizations, to the point we could see hardware with custom instructions to make WASM faster. Or custom WASM runtimes heavily tied to the hardware they run on to make better JIT code.

I imagine a future where WASM is treated like LLVM IR


I'll just add one thing here: WASM's platform access is VERY small. There's almost no runtime associated with WASM, and thus nothing WASM is guaranteed access to.

When you throw WASM into the browser, its access to the outside world is granted by the JavaScript container that invokes it.

That's very different from how the old browser plugins operated. Plugins like the JVM or Flash were literally the browser calling into a binary blob with full access to the whole platform.

That is why the WASM model is secure versus the JVM model. WASM simply can't interact with the system unless it is explicitly given access by the host calling it. It is even more strictly sandboxed than the JavaScript engine that executes it.
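
A rough sketch of what "granted by the host" looks like in practice (the module and its env.now import are hypothetical; the point is that the import object is the module's entire view of the outside world):

    // wasmBytes: the module's bytes, fetched or bundled elsewhere (assumed).
    const imports = {
      env: {
        now: () => Date.now(), // the ONLY capability this host chooses to grant
      },
    };
    const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
    (instance.exports.run as () => void)();
    // No fetch, no DOM, no filesystem: if it isn't in `imports`,
    // the module has no way to reach it.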


Why can't wasm have its own invoker, instead of relying on JavaScript?


> I think the first approach gives a lot more room for the host to create optimizations, to the point we could see hardware with custom instructions to make WASM faster

Heh, there were literally CPUs with some support for the JVM! But it turns out that “translating” between different forms is not that expensive (and can be done ahead of time and cached), given that CPUs already use a higher level abstraction of x86/arm to “communicate with us”, while they do something else in the form of microcode. So it didn’t really pay off, and I would wager it wouldn’t pay off with WASM either.


> Heh, there were literally CPUs with some support for the JVM!

Jazelle, a dark history that ARM never wants to mention again


> JavaScript did deliver its promise of unbreakable sandbox

Aren't its VM implementations routinely exploited? Ranging from "mere" security feature exploits, such as popunders, all the way to full on proper VM escapes?

Like even today, JS is run interpreted on a number of platforms, because JIT compilation is not trusted enough. And I'm pretty sure the interpreters are not immune either.


I think "routinely" is overstating it, billions people are running arbitrary JS on a daily basis and no meaningful number of them are being infected by malware.

Browser surface attracts the most intense security researcher scrutiny so they do find really wild chains of like 5 exploits that could possibly zero day, but it more reflects just how much scrutiny it has for hardening, realistically anything else will be more exploitable than that, eg your Chromecast playing arbitrarily video streams must he more exploitable than JS on a fully patched Chrome.


Both Chrome and Firefox lock the JavaScript a site runs into its own box, using a standalone process and whatever mechanisms the system provides. A pwned site alone isn't enough to cause damage; you also need to overcome other layers of defense (unlike something like Flash, which could be owned from its script engine alone).

It usually requires multiple 0-days to overcome all those defenses and do anything useful. (It is also the highest glory at DEF CON.)

The browser is surely frequently attacked due to the high rewards, but it also gets patched really fast (as long as you are not using a browser from 10 years ago).


Flash/applets could have been isolated in a process too, right?


Yes but no, because they needed access to the OS for various services, all of which would have had to be isolated from the user code. Sun and Adobe would never have done this. Chrome did it, and Safari and Firefox followed. WASM runs in that environment; Flash/applets ran outside of it. They did that precisely to provide services the browser didn't back then.


Chrome did put a sandbox around Flash, didn't they? I thought the bigger reasons it died out were that it didn't integrate with the DOM and that Apple hated it.


There were a bunch of things missing from OP's description of the security considerations of Wasm; it has a lot of other stuff on top of what the browser provides when it's executing JavaScript.

The primary one is its idea of a "capability model", where it basically can't take any kind of risky action (i.e. touch the outside world via the network or the file system, for example) unless you give it explicit permission to do so.

Beyond that it has things like memory isolation, so even an exploit in one module can't impact another, and each module has its own operating environment and permission scope associated with it.


I was surprised when Google agreed to implement the capabilities model for Chrome. I would have guessed that asking the user for permission to access the microphone would not sit well with Google. On smartphones they own the OS, so they can ignore wasm's security model as much as they like.


I feel there's a bit of a disconnect here between Google's Ads division, who are looking to do basically the bare minimum to avoid getting repeatedly spanked (primarily by the EU, but now also with talk of a breakup in the US), and most other parts of Google, who, and I say this entirely unironically, are by far the best of all major options with regards to security in both the browser and their public cloud offerings. I'd even extend that possibly to operating systems as well. ChromeOS is miles in front of anything else out there currently, but on mobile Android has historically lagged behind iOS, although that gap is close to indistinguishable in 2024.


> on mobile Android has historically lagged behind iOS although that gap is close to indistinguishable in 2024.

This is true, but unfortunately in the negative sense: both are as insecure as each other, i.e. pwned. [1]

[1] https://discuss.grapheneos.org/d/14344-cellebrite-premium-ju...


It is not my intention to be contrarian, but honestly this might be the most incorrect comment I've ever read on Hacker News, in several different ways. Sure, some of these might be subjective, but for example ChromeOS is Linux with a shiny coat on top; how could it be any better than, well, Linux, let alone miles ahead?


ChromeOS uses the Linux kernel but unless you enable developer mode (which has multiple levels of scary warnings including on every boot and requires completely wiping the device to enable) everything runs in the Chrome web sandbox or the Android VM.

A ChromeOS user isn't apt-get installing binaries or copy/pasting bash one liners from Github. If you enable the Linux dev environment, that also runs in an isolated VM with a much more limited attack surface vs say an out of the box Ubuntu install. Both the Android VM and Linux VM can and routinely are blocked by MDM in school or work contexts.

You could lock down a Linux install with SELinux policies and various other restrictions but on ChromeOS it's the default mode that 99% of users are protected by (or limited by depending on your perspective).


Even when you enable "developer mode", which is essentially Debian in a VM, the level of care that went into making sure that no matter what happens there you will never suffer a full system compromise is truly impressive.

To give you a sense of where they were half a decade ago, you can already see in this video that it was, as I described, miles in front of anything that exists even today: https://youtu.be/pRlh8LX4kQI

And when we get to their total ground-up, first-principles approach with Fuchsia as a next-generation operating system, that is something else entirely, on a different level again.

I genuinely didn’t have a hint of irony in my original comment. They are actually that much better when it comes to security.


Most of all, the problem with Java applets was that they were very slow to load and required so many resources that the computer came to a halt.

They also took much longer to develop than whatever you could cook up in plain html and javascript.


Funnily enough, wasm also has the problem of “slow to load”. In that vein, a higher level bytecode would probably result in smaller files to transport. And before someone adds, the JVM also supports loading stuff in a streaming way - one just has to write a streaming class loader, and then the app can start immediately and later on load additional classes.


To be fair, they were slow to load if you didn't have the browser extension and correct JRE installed.


I would add that most of it was politics.

The JVM is not fundamentally insecure, in the same way that no Turing-complete abstraction like an x86 emulator is. It's always the attached APIs that open up new attack surfaces. Since the JVM at the time was used to bring absolutely unimaginable features to the otherwise anemic web, it had to be unsafe to be useful.

Since then, the web has improved a huge amount; a complete online FPS game could literally be programmed in just JS almost a decade ago. If a new VM can just interact with this newfound JS ecosystem and rely on it as the boundary, it can of course be made much safer. But that's not inherent to the new VM itself.


> WASM proved to be secure and JVM did not.

This is an oversimplification: there's nothing about the JVM bytecode architecture that makes it insecure. In fact, as an architecture it is quite a bit simpler than WASM.

Applets were just too early (you have to remember what the state of tech looked like back then), and the implementation was of poor quality to boot (owing in part to some technical limitations — but not only).

But worst of all, it just felt jank. It wasn't really part of the page, just a little box in it, that had no connection to HTML, the address bar & page history, or really anything else.

The Javascript model rightfully proved superior, but there was no way Sun could have achieved it short of building their own browser with native JVM integration.

Today that looks easy: just fork Chromium. But back then the landscape was Internet Explorer 6 vs. the very marginal Mozilla (and later Mozilla Firefox) and the proprietary Opera, which occasionally proved incompatible with major websites.


Yes it’s true that there’s more to the story, but also, Java really is more complicated and harder to secure than WASM. You need to look at the entire attack surface and not just the bytecode.

For example, Java was the first mainstream language with built-in threading and that resulted in a pile of concurrency bugs. Porting Java to a new platform was not easy because it often required fixing threading bugs in the OS. By contrast, JavaScript and WASM (in the first version) are single-threaded. For JavaScript it was because it was written in a week, but for WASM, they knew from experience to put off threading to keep things simple.

Java also has a class loader, a security manager that few people understand and sensitive native methods that relied on stack-walking to make sure they weren’t called in the wrong place. The API at the security boundary was not well-designed.

A lot of this comes from being first at a lot of things and being wildly ambitious without sufficient review, and then having questionable decisions locked in by backward compatibility concerns.


> back then the landscape was Internet Explorer 6 vs the very marginal Mozilla

Your timeline is off by about five years. Java support shipped with Netscape Navigator 2 in 1995, and 95/96/97 is when Java hype and applet experimentation peaked.

Netscape dominated this era. IE6 wouldn’t come out until 2001 and IE share generally wouldn’t cross 50% until 2000 https://en.m.wikipedia.org/wiki/File:Internet-explorer-usage...

By the time Mozilla spun up with open sourced Netscape code, Java in the browser was very much dead.

You nailed the other stuff though.

(Kind of an academic point but I’m curious if Java browser/page integration was much worse than JavaScript in those days. Back then JS wasn’t very capable itself and Netscape was clearly willing to work to promote Java, to the point of mutilating and renaming the language that became JavaScript. I’m not sure back then there was even the term or concept of DOM, and certainly no AJAX. It may be a case of JavaScript just evolving a lot more because applets were so jank as to be DOA)


ActiveX and Macromedia Flash were also popular alternatives to Java applets. Until v8 and Nitro were available, browser-based JavaScript was not a credible option for many apps.


> There's only practical difference: WASM proved to be secure and JVM did not.

The practical reasons have more to do with how the JVM was embedded in browsers than with the actual technology itself (though Flash was worse in this regard). They were linked at the binary level and had the same privileges as the containing process. With the JS VM, the browser has a lot more control over I/O, since the integration evolved that way from the start.


What would you say is the performance difference between, say, running a Qt app natively compiled vs running it in WASM? I've always been curious but never tried. I know it would vary based on the application, but I'm thinking of something that is maybe calculating a Monte Carlo model and then displaying the result, or something else along those lines that actually maxes out the CPU at times rather than waiting on human interaction 99% of the time.


> JavaScript did deliver its promise of unbreakable sandbox

I'm sure there's a big long list of WebKit exploits somewhere that will contradict that sentence...


JavaScript is all fun and games until a type confusion bug in V8 allows arbitrary code execution from a simple piece of JavaScript code…


Sure, and if you find one of those, you can trade it in for $25k or more [1]

[1] https://bughunters.google.com/about/rules/chrome-friends/574...


Unlike ActiveX, Silverlight, or Flash, it's an open standard developed by a whole bunch of industry players, and it has multiple different implementations (where Java sits on that spectrum is perhaps a bit fuzzier). That alone puts it head and shoulders above any of the alternatives.

Unlike the JVM, WASM offers linear memory, and no GC by default, which makes it a much better compilation target for a broader range of languages (most common being C and C++ through Emscripten, and Rust).

> Maybe I’m just old, but I thought we’d learnt our lesson on running untrusted third party compiled code in a web browser.

WASM is bytecode, and I think most implementations share a lot of their runtime with the host JavaScript engine.

> In all of these cases it’s pitched as improving the customer experience but also conveniently pushes the computational cost from server to client.

The whole industry has swung from fat clients to thin clients and back since time immemorial. The pendulum will keep swinging after this too.


> The whole industry has swung from fat clients to thin clients and back since time immemorial. The pendulum will keep swinging after this too.

Indeed, graphics pioneer and all-around-genius Ivan Sutherland observed (and named) this back in 1968:

"wheel of reincarnation "[coined in a paper by T.H. Myer and I.E. Sutherland On the Design of Display Processors, Comm. ACM, Vol. 11, no. 6, June 1968)] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.

"Several iterations of this cycle have been observed in graphics-processor design, and at least one or two in communications and floating-point processors. Also known as the Wheel of Life, the Wheel of Samsara, and other variations of the basic Hindu/Buddhist theological idea. See also blitter."

https://www.catb.org/jargon/html/W/wheel-of-reincarnation.ht...


That is why I stopped using the word 'tech' to refer to these things. You don't suddenly go back and stop using the wheel after a while, or suddenly decide the printing press was a bad idea after all. Those are techs. Many of the things we call tech nowadays are just paradigms. And frameworks are definitely not 'new technology'.


All it takes for something to be replaced is something that does the job better. You can only really apply your definition in hindsight, after something has stood the test of time. You can't tell the difference between sails and wheels until after the rise of the steam engine.


> Many of the things we call techs nowadays are just paradigms

More like fads sold to milk even more money from people.



WasmGC is there no matter what, unless we are talking about an incomplete implementation. Also, there have been plenty of linear-memory-based bytecodes since 1958.


WasmGC is a feature you can opt in to, rather than a core feature of the platform. It's more of an enabler for languages that expect a GC from their host platform (for things like Dart and Kotlin). Conversely, other forms of bytecode might have linear memory, but the JVM isn't one of them.

For the purposes of OP's question, the memory model difference is one of the key reasons why you might want to use wasm instead of a java applet.


The JVM is one bytecode among many since 1958; no need to keep bashing it as a way to champion WASM.

Opt-in or not, it is there in the runtime.


It seems relevant since we are in a thread asking to compare WASM to java applets.


Wasm has great benefits over those technologies:

- Wasm has a verification specification that wasm bytecode must comply with. This verified subset makes the security exploits seen in those older technologies outright impossible. Attacks based around misbehaving hardware, like Rowhammer, might still be possible, but you can't, e.g., reference memory outside of your wasm instance's memory by tricking the VM into interpreting a number you have as a pointer to memory that doesn't belong to you.

- Wasm bytecode is about as trivial as it gets to turn into machine code, so implementations can be smaller and faster than a heavyweight VM.

- Wasm isn't owned by a specific company, and has an open and well written specification anyone can use.

- It has been adopted as a web standard, so no browser extensions are required.

As for computation on clients versus servers, that's already true for JavaScript. More true, in fact, since wasm code can be efficient in ways that are impossible for JavaScript.


Btw, is WASM really more secure? JVM and .NET basically have capability-based security thanks to their OOP design together with bytecode verification: if you can't take a reference to an object (say, there's a factory method with a check), you can't access that object in any way (a reference is like an access token).

As far as I understand, in WASM memory is a linear blob, so if I compile C++ to WASM, isn't it possible to reference a random segment of memory (say, via an unchecked array index exploit) and then do whatever you want with it (exploit other bugs in the original C++ app)? The only benefit is that access to the OS is isolated, but all the other exploits are still possible (and impossible in JVM/.NET).

Am I missing something?


AFAIK you’re correct.

Also see: https://www.usenix.org/conference/usenixsecurity20/presentat...

„We find that many classic vulnerabilities which, due to common mitigations, are no longer exploitable in native binaries, are completely exposed in WebAssembly. Moreover, WebAssembly enables unique attacks, such as overwriting supposedly constant data or manipulating the heap using a stack overflow.”

My understanding is that people talking about wasm being more secure mostly talk about the ability to escape the sandbox or access unintended APIs, not integrity of the app itself.


For now, (typical) WASM is indeed more secure than (typical) JVM or .NET bytecodes primarily because external operations with WASM are not yet popular. WASM in this regard has the benefit of decades' worth of hindsight that it can carve its own safe API for interoperation, but otherwise not technically superior or inferior. Given that the current web browser somehow continues to ship and keep such APIs, I think the future WASM with such APIs is also likely to remain safer, but that's by no means guaranteed.


When discussing security it's important to keep in mind the threat model.

We're mostly concerned with being able to visit a malicious site, and execute wasm from that site without that wasm being able to execute arbitrary code on the host - breaking out of the sandbox in order to execute malware. You say the only benefit is that access to the OS is isolated, but that's the big benefit.

Having said that, WebAssembly has some design decisions that make your exploits significantly more difficult in practice. The call stack is a separate stack from WebAssembly memory that's effectively invisible to the running WebAssembly program, so return oriented programming exploits should be impossible. Also WebAssembly executable bytecode is separate from WebAssembly memory, making it impossible to inject bytecode via a buffer overflow + execute it.

If you want to generate WebAssembly code at runtime, link it in as a new function, and execute it, you need participation from the host, e.g. https://wingolog.org/archives/2022/08/18/just-in-time-code-g...


The downside of WASM programs not being able to see the call stack is that it makes it impossible to port software that uses stackful coroutines/fibers/whatever you want to call them to WASM, since that functionality works by switching stacks within the same thread.


Yes, you're missing something. Java applets and Flash sat outside of any security sandbox, and they ran the user's code in that insecure environment.

WASM, in browsers, runs entirely inside a secure environment with no access to the system.

    js->browser->os
     |
     +--Flash/java-->os

vs

    wasm->browser->os

Further, WASM and JS are in their own process with no OS access. They can't access the OS except by RPC to the browser.

Flash/Java, though, ran all user code in the same process with full access to the OS.


Seems like a trivial thing to fix though; it was a lack of will rather than an explicit design tradeoff. In the applets' time there was simply no such API surface to attach to and make useful programs.


It's not a trivial thing to fix. It took Apple, Mozilla, and Google years to refactor their browsers to isolate user code in its own process and then efficiently IPC all services to other processes.

Chrome started with that, but it also started without GPU-based graphics and spent 2-3 years adding yet another process to make that possible. Mozilla and Safari took almost 10 years to catch up.


>Wasm has verification specification. This verified subset makes security exploits seen in those older technologies outright impossible

Both Java and .NET verify their bytecode.

>Wasm bytecode is trivial (as it gets) to turn into machine code

JVM and .NET bytecodes aren't supercomplicated either.

Probably the only real differences are: 1) WASM was designed to be more modular and slimmer from the start, while Java and .NET were designed to be fat; currently there are modularization efforts, but it's too late. 2) WASM has been an open standard from the start, so browser vendors implement it without plugins.

Other than that, it feels like WASM is a reinvention of what already existed before.


AFAIK the big new thing in WASM is that it enforces 'structured control flow' - so it's a bit more like a high level AST than an assembly-style virtual ISA. Not sure how much of that matters in practice, but AFAIK that was the one important feature that enabled the proper validation of WASM bytecode.


I don't think there's any significant advance in the bytecode beyond e.g. JVM bytecode.

The difference is in the surface area of the standard library -- Java applets exposed a lot of stuff that turned out to have a lot of security holes, and it was basically impossible to guarantee there weren't further holes. In WASM, the linear memory and very simple OS interface makes the sandboxing much more tractable.


I worked on JVM bytecode for a significant number of years before working on Wasm. JVM bytecode verification is non-trivial, not only to specify, but to implement efficiently. In Java 6 the class file format introduced stack maps to tame a worst-case O(n^3) bytecode verification overhead, which had become a DoS attack vector. Structured control flow makes Wasm validation effectively linear and vastly simpler to understand and vet. Wasm cleaned up a number of JVM bytecode issues, such as massive redundancy between class files (duplicate constant pool entries), length limitations (Wasm uses LEBs everywhere), typing of locals, more arithmetic instructions, with signedness and floating point that closer matches hardware, addition of SIMD, explicit tail calls, and now first-class functions and a lower-level object model.
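
To illustrate the "LEBs everywhere" point, here's a quick sketch of unsigned LEB128, the variable-length integer encoding Wasm uses for sizes, indices and immediates (a toy encoder of my own, not code from any Wasm toolchain):

    // Unsigned LEB128: 7 bits of payload per byte, high bit means "more bytes follow".
    function encodeULEB128(n: number): number[] {
      const out: number[] = [];
      do {
        let byte = n & 0x7f;
        n >>>= 7;
        if (n !== 0) byte |= 0x80;
        out.push(byte);
      } while (n !== 0);
      return out;
    }

    encodeULEB128(64);     // [0x40]             -- small values stay one byte
    encodeULEB128(624485); // [0xe5, 0x8e, 0x26] -- larger ones grow as needed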


Are they validating code to the same degree though? Like, there are obviously learned lessons in how WASM is designed, but at the same time JVM bytecode being at a slightly higher level of abstraction can outright make certain incorrect code impossible to express, so it may not be an apples-to-apples comparison.

What I’m thinking of is simply memory corruption issues from the linear memory model, and while these can only corrupt the given process, not anything outside, it is still not something the JVM allows.


Wasm bytecode verification is more strict than JVM bytecode verification. For example, JVM locals don't have declared types; they are inferred by the abstract interpretation algorithm (one of the reasons for the aforementioned O(n^3) worst case). In Wasm bytecode, all locals have declared types.

Wasm GC also introduces non-null reference types, and the validation algorithm guarantees that locals of declared non-null type cannot be used before being initialized. That's also done as part of the single-pass verification.

Wasm GC has a lower-level object model and type system than the JVM (basically structs, arrays, and first-class functions, to which object models are lowered), so it's possible that a higher-level type system, when lowered to Wasm GC, may not be enforceable at the bytecode level. So you could, e.g. screw up the virtual dispatch sequence of a Java method call and end up with a Wasm runtime type error.


Thx for this perspective and info. Regarding "signedness and floating point that closer matches hardware", I'm not seeing unsigned integers. Are they supported? I see only:

> Two’s complement signed integers in 32 bits and optionally 64 bits.

https://webassembly.org/docs/portability/#assumptions-for-ef...

And nothing suggesting unsigned ints here:

https://webassembly.org/features/


Signed and unsigned are just different views on the same bits. CPU registers don't carry signedness either, after all; the value they carry is neither signed nor unsigned until you look at the bits and decide to "view" them as a signed or unsigned number.

With the two's complement convention, the concept of 'signedness' only matters when a narrow integer value needs to be extended to a wider value (e.g. 8-bit to 16-bit), specifically whether the new bits need to be replicated from the narrow value's topmost bit (for signed extension) or set to zero (for unsigned extension).

It would be interesting to speculate what a high-level language would look like with such sign-agnostic "Schroedinger's integer types".
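
A small sketch of the "same bits, two views" idea, using JS/TS 32-bit coercions as a stand-in for a Wasm i32 (the instruction names in the comment are real; the snippet itself is just an illustration):

    const bits = 0xFFFFFFFF;       // all 32 bits set
    const asSigned = bits | 0;     // -1          (two's complement view)
    const asUnsigned = bits >>> 0; // 4294967295  (unsigned view)
    // Signedness only matters when widening: i64.extend_i32_s replicates the
    // top bit, i64.extend_i32_u zero-fills. The i32 value itself is just bits.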


CPU instruction sets do account for signed vs unsigned integers, SHR vs SAR for example. It's part of the ISAs. I'm calling this out because, AFAIK, the JVM has no support for unsigned ints, and that in turn makes WASM a little more compelling.

https://en.wikibooks.org/wiki/X86_Assembly/Shift_and_Rotate


Yes some instructions do - but surprisingly few (for instance there's signed/unsigned mul/div instructions, but add/sub are 'sign-agnostic'). The important part is that any 'signedness' is associated with the operation, and not with the operands or results.


Well, it has compiler intrinsics for unsigned numbers, for what it’s worth.


Wasm makes no distinction between signed and unsigned integers as variables, only calling them integers. The relevant operations are split between signed and unsigned.

https://webassembly.github.io/spec/core/appendix/index-instr...

See how there's only i32.load and i32.eq, but there's i32.lt_u and i32.lt_s. Loading bits from memory or comparing them for equality is the same operation, bit for bit, whether you treat the value as signed or unsigned. However, less-than requires knowing the desired signedness, and so it is split between signed and unsigned.
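
A rough illustration in JS/TS of why comparison needs a signedness suffix while equality doesn't (again just a stand-in for the actual i32 instructions):

    const a = 0xFFFFFFFF, b = 1;
    (a | 0) === (b | 0);    // false -- equality is bit-for-bit, no suffix needed (i32.eq)
    (a | 0) < (b | 0);      // true:  -1 < 1           (what i32.lt_s answers)
    (a >>> 0) < (b >>> 0);  // false: 4294967295 < 1   (what i32.lt_u answers)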


I stand corrected! That’s great information, thanks. I didn’t know JVM bytecode had so many problems.


Java Applets and ActiveX had less-mediated (Applets, somewhat; ActiveX, not at all) access to the underlying OS. The "outer platform" of WASM is approximately the Javascript runtime; the "outer platform" of Applets is execve(2).


This article is about WASM on the server, so to answer your question: it's different because it's not pushing computational cost from the server to the client. It can, but it doesn't in all cases. That's a huge difference. Others have already commented on the rest (better sandboxing, isolation, etc.).


It's amazing how many people don't actually read the article and just start commenting right away. It's like leaving bad amazon reviews for products you haven't purchased.


> untrusted third party compiled code in a web browser.

WASM makes that safe, and that's the whole point. It doesn't increase the attack surface by much compared to running JavaScript code in the browser, while the alternative solutions were directly poking through into the operating system, bypassing any security infrastructure the browser had for running untrusted code.


WASM is a child of the browser community and built on top of existing infra.

Java was an outsider trying to get in.

The difference is not in the nature of things, but rather who championed it.


Pushing compute to the client is the whole point, and is often a major improvement for the end user, especially in the era in which phones are faster than the supercomputers of the 90s.

And otherwise, WASM is different in two ways.

For one, browsers have gotten pretty good at running untrusted 3rd party code safely, which Flash or the JVM or IE or .NET were never even slightly adequate for.

The other difference is that WASM is designed to allow you to take a program in any language and run it in the user's browser. The techs you mention were all available for a single language, so if you already had a program in, say, Python, you'd have to re-write it in Java or C#, or maybe Scala or F#, to run it as an applet or Silverlight program.


CLR means Common Language Runtime for a reason.

From 2001,

"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."

https://news.microsoft.com/2001/10/22/massive-industry-and-d...


It's not the same thing though. All of these languages have specific constructs for integrating with the CLR; the CLR is not just a compilation target like WASM is. C++/CLI even has a fourth kind of variable compared to base C++ (^, managed references to a type, in addition to the base type, * pointers to the type, and & references to the type). IronPython has not had a GIL since its early days. I'm sure the others have significant differences, but I am less aware of them.


As if WebAssembly doesn't impose similar restrictions, with specific kinds of toolchains, and now the whole components mess.

This WebAssembly marketing is incredible.


Are there any examples of how, say, C++ compiled for WASM is different from native C++, or Python on WASM vs CPython? I haven't really used or cared about WASM, so I'm happy to learn, I don't have some agenda here.


ActiveX wasn't sandboxed so it was a security joke. Flash and Silverlight were full custom runtimes that a) only worked with a specific language, and b) didn't integrate well with the existing web platform. WASM fixes all of that.


But that’s missing a few steps. First they banned all those technologies saying JavaScript was sufficient, then only later made wasm.

There never was a wasm vs applet debate.


Nobody banned Flash. Apple just sensibly didn't implement it, because it was shit on phones. Android did support Flash and the experience was awful.


They sure banned Java Applets.

> Nobody banned Flash.

What happened first? Chrome dropping support for flash, or flash stopped making updates?


WebAssembly has a few things that set it apart:

- The security model (touched on by other comments in this thread)

- The Component Model. This is probably the hardest part to wrap your head around, but it's pretty huge. It's based on a generalization of "libraries" (which export things to be consumed) to "worlds" (which can both export and import things from a "host"). Component modules are like a rich wrapper around the simpler core modules. Having this 2-layer architecture allows far more compilers to target WebAssembly (because core modules are more general than JVM classes), while also allowing modules compiled from different ecosystems to interoperate in sophisticated ways. It's deceptively powerful yet also sounds deceptively unimpressive at the same time.

- It's a W3C standard with a lot of browser buy-in.

- Some people really like the text format, because they think it makes Wasm modules "readable". I'm not sold on that part.

- Performance and the ISA design are much more advanced than JVM.


> This is probably the hardest part to wrap your head around, but it's pretty huge.

It's just an IDL; IDLs have been around a long time and have been used for COM, Java, .NET, etc.


> Can someone explain to me what the difference really is between WASM and older tech like Java Applets, ActiveX, Silverlight and Macromedia Flash

As well as the security model differences others are debating, and WASM being an open standard that is easy to implement and under the control of no single commercial entity, there is a significant difference in scope.

WebAssembly is just the runtime that executes bytecode-compiled code efficiently. That's it. No large standard runtime (compile in everything you need), no UI manipulation (message passing to JS is how you affect the DOM, and how you read DOM status back), etc. It does one thing (crunch numbers, essentially) and does it well.
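
For instance, a minimal sketch of that division of labor, assuming a hypothetical module that exports an add function: the Wasm side only crunches numbers, and the JS host does all the DOM work.

    const { instance } = await WebAssembly.instantiate(wasmBytes, {}); // wasmBytes assumed
    const add = instance.exports.add as (a: number, b: number) => number;

    // Wasm never touches the page; the JS host reads and writes the DOM.
    const out = document.querySelector('#result');
    if (out) out.textContent = String(add(2, 3));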


There have also been exploits of Chrome's JS sandbox. For me the greatest difference is that WASM is supported by the browser itself. There isn't the same conflict of interest between OS vendors and 3rd party runtime providers.


The replacement for those technologies is arguably JavaScript. WASM is more focused on performance, providing fewer abstractions and an instruction set closer to assembly (hence the name).

The issue with those older technologies was that the runtime itself was a third-party external plugin you had to trust, and they often had various security issues. WASM, however, is an open standard, so browser makers can directly implement it in browser engines without trusting other third parties. It is also much more restricted in scope (fewer abstractions mean less work to optimize them!), which helps reduce the attack surface.


> The replacement for those technologies is arguably javascript. WASM is more focused on performance by providing less abstractions and an instruction set closer to assembly (hence the name).

That is nonsense. WASM and JS have the exact same performance boundaries in a browser because the same VM runs them. However, WASM allows you to use languages where it's easier to stay on a "fast-path".


Conceptually, they aren't that different. The details do matter though.

WASM on its own isn't anything special security-wise. You could modify Java to be as secure or actually more secure just by stripping out features, as the JVM blocks some kinds of 'internal' security attacks that WASM only has mitigations for. There have been many sandbox escapes for WASM and there will be more; for example, this very trivial sandbox escape in Chrome:

https://microsoftedge.github.io/edgevr/posts/Escaping-the-sa...

... is somewhat reminiscent of sandbox escapes that were seen in Java and Flash.

But! There are some differences:

1. WASM / JS are minimalist and features get added slowly, only after the browser makers have put a lot of effort into sandboxing. The old assumption that operating system code is secure is mostly no longer held, whereas in the Flash/applets/pre-Chrome era it was. Stuff like the Speech XML exploit is fairly rare, whereas the earlier attempts added a lot of features very fast, so there was more surface area for attacks.

2. There is the outer kernel sandbox if the inner sandbox fails. Java/Flash didn't have this option because Windows 9x didn't support kernel sandboxing; even Win2K/XP barely supported it.

3. WASM / JS don't assume any kind of code signing; it's pure sandboxing all the way.


Not an answer, but I think it's unfair to group Flash with the others, because both its editor/compiler and its player were proprietary. I guess the same applies to Silverlight at least.


The ActiveX "player" (Internet Explorer) was also proprietary. And I'm not sure if you could get away without proprietary Microsoft tools to develop for it.


The big conceptual difference is that Flash, ActiveX etc allowed code to reach outside of the browser sandbox. WASM remains _inside_ the browser sandbox.

Also no corporate overlord control.


For starters, it gives you memory-safe bytecode computation that isn't coupled to one specific language.


You can't easily decompile WASM so it makes it harder to block inline ads.


You can already compile JavaScript into https://jsfuck.com/ and you could also very easily recompile the wasm into JS.

Obfuscation and transpilation are not new in JS land.



