Universal Binaries Using WASM (github.com)
335 points by grey-area on Feb 26, 2019 | 229 comments

It looks like that talk is from 2014. Ideas like that had been talked about for many years -- that is, moving all of computing to the browser.

July 2007:

Atwood's Law: any application that can be written in JavaScript, will eventually be written in JavaScript.


JSLinux by Fabrice Bellard in 2011:


As far as I remember, the addition of Typed arrays to JavaScript made this feasible.

In typical Bellard fashion, he didn't really talk about it -- he just demonstrated it!

> the text format defined by the WebAssembly reference interpreter (.wat)


I like to share this talk with new junior devs instead of ranting about strangely defined behavior in javascript. Saves time, and it's more fun.

Maybe prophetic in the sense of "people really wanted this for years and it has finally been implemented". People have been talking about it since at least when NaCl debuted in 2011.

A lot longer than that. There was a standard called ANDF in 1989 (Architecture Neutral Distribution Format) trying to solve the same problem. I'm curious why it didn't work out, but I imagine part of the problem was that it had to support many different flavors of Unix as well as multiple CPU architectures.


> I'm curious why it didn't work out

I worked at a multi-platform UNIX software vendor back then, and did the project of compiling our products to ANDF. At one point I counted 20 distinct UNIX variants that we had in our office (all workstations).

It was believed that UNIX systems complied with government procurement standards (e.g. this is where POSIX is relevant), but there were always bugs and differences in behaviour. In C code this is handled by #if pre-processor directives to adapt as needed. ANDF required turning that into runtime if statements instead. (An example would be two different network interface calls depending on platform.) That would have been a herculean task and was in some places impossible, such as the same API taking different numbers of parameters on different platforms. The ANDF compiler would have to pick one for its header files.

Even something like endianness is usually handled by #if and not an if statement, so you can see just how much effort would be needed just to have it compile. You still had to do all the multi-platform installation instructions and testing, so no one had an incentive to make things more complicated.

Java killed ANDF stone dead. While Java was most prominent for applets in the browser, it also competed with C++ on the backend. C++ compilers cost a lot of money, had licensing daemons that locked you to one platform, and implemented different subsets of the C++ standard and library. Java was more forgiving and the JVM provided standard features like multi-threading and networking. You will also note that Sun spent a lot of effort to keep Java standard.

“If I had asked people what they wanted, they would have said faster horses.” - Henry Ford

JavaScript by itself is not a “nice” language; I would ask what the root of your request is: an easy, garbage-collected language that is ubiquitous?

> I would ask what the root of your request is: an easy garbage collected language that is ubiquitous?

Honestly, I just want to be able to write the stuff I have to in JS today in my favorite language.

This "long-held fervor" that culminated in WASM started back before Node.js existed, so back then, "Javascript" was just a thing browsers did (except for, say, Windows Scripting Host's support of it.) The fervor back then wasn't really driven by a desire for a ubiquitous anything; it was driven specifically by people thinking about browsers, and what is required to program web-apps in browsers.

What people have wished for, since... oh, 2001 or so, is the ability to write web-app frontends without needing to grok and deal with the awful runtime semantics of Javascript—the way you do when you write Javascript, yes, but also the way you do when you write in a language that directly transpiles to Javascript, like TypeScript or ClojureScript.

Languages like TypeScript may add semantics on top of the JS runtime's semantics, but they can't get away from the fact that the JS runtime is the "abstract machine" they program, any more than e.g. Clojure can get away from the fact that the JVM is the abstract machine it programs. That's why none of these languages were ever seen as a "saving grace" from the "problem of Javascript", the way WASM is.

WASM has its own abstract machine (which runs efficiently in browsers), which finally frees people from the tyranny of the Javascript-runtime-as-abstract-machine.

It's great that it's also now replacing the Javascript-runtime-abstract-machine in other contexts (e.g. Node-like server-side usage, plain-V8-like embedded usage) but that was never really "the thing" that anyone cared about.


Mind you, the NaCl fervor was for a ubiquitous VM—but the NaCl fervor wasn't nearly as large, and isn't really what's propelling WASM to prominence right now. Even then, it wasn't about "an easy garbage collected language that is ubiquitous", no. The goal of it was to be able to take code that you already have—native code, written to run fast, like an AAA game—and put it in a browser-strength sandbox, such that it can be zero-installed just by visiting a URL, with full performance. You know, like ActiveX was supposed to be. But better.

NaCl didn't really get us there, because it happened right as the architectural split between x86 and ARM really started heating up, and NaCl's solution to that split—PNaCl, a.k.a. sandboxed LLVM IR—was both too late and not really efficient enough at the time to fully supplant the "Native" NaCl in NaCl messaging. (LLVM IR works well enough now for Apple to rely on it for being a "unified intermediate" of both x86 and ARM target object code, but that shift only began with ~8 additional years of LLVM development after the version of LLVM that PNaCl's IR came from.)

WASM seems to get us there. But do we care any more? Everyone already has other solutions to this problem. ChromeOS can run Android apps; Ubuntu Snappy packages can expose GUIs to Wayland; Windows has a Linux ABI to run Docker containers. Ubiquity is a lot easier now than it was back then, for any particular use-case you might want.

On the embedded-scripting side of things, everyone has seemingly settled on embedding LuaJIT or V8. Do people even need embedded scripts to be fast, in a way that "WASM as compilation target" would help with? Maybe for the more esoteric use-cases like CouchDB's "design documents" or the Ethereum VM (https://medium.com/coinmonks/ewasm-web-assembly-for-ethereum...) But I doubt WASM will hit ubiquity here. Why would OpenResty switch? Why would Redis? Etc. You're not writing-and-compiling native code for any of these in the first place, so adding WASM here would only break existing workflows.

> What people have wished for, since... oh, 2001 or so, is the ability to write web-app frontends without needing to grok and deal with the awful runtime semantics of Javascript

It was so even before 2001. Consider why the <script> element has the "type" attribute to begin with (and why it had the "language" attribute originally, before "type"). As I recall, W3C specs from that era gave examples such as Tcl! On Windows, you could use anything that implemented Active Scripting APIs - e.g. Perl. JS just happened to be the one that everybody had, because it was the first, and so it became the common denominator, to the detriment of the industry.

If by "people" you mean "a vocal minority", then sure.

This project has been known to copy code from other projects without respecting licenses. The founder of ethereum pointed this out a few weeks ago.

https://twitter.com/gavofyork/status/1094597005333151745 https://twitter.com/Sunfishcode/status/1075486397824327681

The extent to which the people in this project took "inspiration" from wasmjit has been questioned as well https://github.com/wasmerio/wasmer/commit/a81500e047fdd321b3...

Hi, thanks for bringing this to our attention!

We think we've resolved this issue here (https://twitter.com/syrusakbary/status/1094635228457426944), but our response was definitely later than it should have been. If you feel that we need to rectify further, feel free to email me at lachlan@wasmer.io.

Looks like they made a mistake, apologised, and have added attribution since?

When they were called out privately, no mistake was admitted and no action was taken. The attribution was only added after they were called out publicly.

A problem fixed under less-than-ideal circumstances is still a fixed problem, regardless.

Do you think they would have fixed the problem if they weren't called out publicly?

If it's a "one-off", then it's probably a bad idea to treat them with too much hostility and suspicion. That'd stop future first-time mistake-makers from fessing up and fixing things.

If however this project has a history of doing the wrong thing, covering up, and only fixing things for PR value... then yeah, not really a project to put time into. :)

i don't think it matters because they __did__ fix the problem. maybe that would have been a problem in the past before the problem became acknowledged, but the thing is, that isn't the case anymore and so to me it really doesn't matter. but even if it did matter, i still probably wouldn't care. sometimes people need convincing to do a thing and when it comes to presenting a consistent and positive image to others, i think it's quite a good thing if someone is responsive to the criticism of others and actually acts on it. maybe they should have fixed it before of their own accord. but maybe before they didn't see the issue that others saw. people aren't perfect and they make mistakes. constantly.

Do you think it's possible that this wasn't a mistake and instead they acted with malicious intentions and only fixed the problem to minimize the negative PR fallout? If not, why not?

i sure think it's possible but my decision is bound by Hanlon's razor in this case: don't attribute to malicious intent that which is attributable to stupidity/neglect/etc. so it's possible they had malicious intent, but really, what did they stand to gain by having it? to me it just seems more likely that they just didn't notice.

Do you think potentially malicious behavior should be overlooked or made known?

i think it's for the best if such is done, yes, but then the question becomes potentially malicious by whose standard? for instance, if we were to operate off of what your perspective appears to be, then you (appear to) have the preconceived notion that the actions were indeed malicious, and if anything it would seem you're seeking to gratify your inherent confirmation bias as a human rather than attempting to discern the actual nature of things. of course, i don't want you to think i'm saying you're doing any of this with absolute certainty, as i'm not sure we can know anything with such clarity, but more just that in these situations it pays to have a third party that is ultimately detached from either outcome to determine which interpretation is the most valid one.

I agree that attention should be drawn to potentially malicious behavior.

I don't know that the actions were indeed malicious but there is evidence to suggest that they may have been and I'm doing my small part to make the greater community aware. This will enable the community to make wiser judgments in the future, if the need arises.

For those thinking that this is just Java and Flash all over again: Here's the relevant blog post from Steve Klabnik explaining why this is _very_ different:


> There’s another reason Wasm succeeded: it’s tiny.

I'll agree with him that Java is very much not tiny, but I disagree about Flash. The last time I remember setting it up, the browser plugin was a single binary of a few MB (and I'm sure that could be made smaller.) SWF files are also a great example of a well-designed space-optimised vector graphics format --- remember that it was designed close to 20 years ago for the computers of the time.

Edit: care to explain, downvoters?

It's not tiny in the sense of file size; it's tiny in the sense of scope. It doesn't even have a standard library, let alone offer everything Flash and Java did. At its heart it's just a compact, efficient representation of a subset of JavaScript.

> At its heart it's just a compact, efficient representation of a subset of JavaScript

WASM is a well-specified encoding for a stack-based machine with four value types: i32, i64, f32 and f64. It is very much not JavaScript.


I'm speaking of asm.js. Other than i64, all of those types can be operated on in JavaScript. WASM is just a better representation of that.

WASM was intentionally designed to be able to integrate easily with existing JS implementations.
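That integration is visible in the standard JS API: an engine with wasm support compiles and instantiates a module directly from raw bytes, and its exports are then callable like ordinary JS functions. A minimal sketch, runnable in Node or a modern browser console; the bytes are my own hand-encoding of a tiny `add` module per the spec's binary format, not anything from the linked project:

```javascript
// Binary encoding of:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export func 0 as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const mod = new WebAssembly.Module(bytes);      // validate + compile
const instance = new WebAssembly.Instance(mod); // instantiate (no imports needed)
console.log(instance.exports.add(2, 3)); // 5
```

Note that every value crossing the boundary is one of those four numeric types; anything richer has to be marshalled through linear memory, which is where the "not JavaScript" part shows.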

It's so much not JavaScript that you cannot even compile JavaScript to wasm!

Wait... why? What's the limiting factor?

there’s not a lot of reason to. In the browser, you already have a JS environment. It’s easier to add a wasm environment to it than it is to re-write the world. Additionally, wasm doesn’t have a GC, and is not really optimized for dynamic languages, so there’s just not a lot of reason to do it.

Oh, I thought you mean that there's some fundamental reason why it can't be done. Ofc it's pointless today, but I wouldn't be surprised if JS in browsers eventually becomes a layer on top of wasm.

20 years ago, indeed, Java wasn't tiny. That was on "high end" desktop computers with roughly 64 to 128 megs of RAM. Today, in the days of browsers taking up gigabytes of memory, it's hardly a concern. That applet JVM would start lightning fast.

The Flash runtime was several hundred MBs. Sure, the actual applets were a few MB of mostly animations and resources, but the runtime was another thing.

The runtime is ~15MB, far from hundreds of MBs.

Back in the early 2000s, IIRC, Squeak Smalltalkers succeeded in getting a Smalltalk runtime down to 385 kB. Haters still complained about it being "too fat." There was an R&D project which got one Smalltalk image for a Unix-style command line utility to 45 kB. Back in those days, the standard class library for a VisualWorks Smalltalk image was about 12MB on disk.

In contrast, Perl had a runtime of something like 768 kB. Yet, VisualWorks could load its runtime faster, if you tweaked things, like shutting off the boot-up chime and notifier dialog.

> Yet, VisualWorks could load its runtime faster, if you tweaked things

For the same reason that Emacs loads quickly: the "image" is (an abstracted, runtime-dependent equivalent to) a memory dump. Perl has to parse Perl code when it starts up; Smalltalk just "is" Smalltalk when it starts up. Like booting up a DBMS where the DB already has data in it.

Honestly, I'm kind of surprised more modern languages/runtimes don't take this approach. The Smalltalk approach would work exactly as well for e.g. Ruby.

It leads to a lot of issues with the end design of everything. You either have to reconstruct the heap and all the pointers involved, or it has to be loaded at a known address. With things like ASLR, and a bunch of other security features, this ends up difficult or impossible to do directly anymore. Along with that, it also means that the runtimes aren't cross-compatible at load time, so weird differences might leak out (little or big endianness, etc.). It leads to a lot of headaches because of changing systems underneath it.

pepperflash plugin, the flash runtime that chrome includes, is about 20mb.

But flash was not an option for similar purposes because of its terrible security.

I'm not wholly convinced.

I'm quite happy that Wasm is a great step forward, but I think that is more by accident of history. Javascript was already in the browser so JS/Wasm was low-hanging fruit. Using it to claim "Wasm succeeded" is premature.

The post is not about this implementation (the one linked in the original post), though. This implementation is not about browsers.

That is a weird, unconvincing post.

His first point about why it's different is "Wasm won"? Really? Back in the day Java applets were everywhere. They had "won". They disappeared because browser makers kicked them out along with Flash, which had also "won". He then tries to redefine plugins, which were created by Netscape, had a standard cross platform API and their own HTML tags as somehow "not a part of the web platform".

Then he says that's "in some senses the final point", as if being forced into the platform by de-facto execution of all competitors says all you need to know about why WebAssembly is good.

But still, he does go on to make other points too. Firstly he picks up on loading screens, presumably claiming WebAssembly doesn't have loading screens. Well, no, the web platform makes you implement your own. But the reason apps like Gmail have loading screens is because it's waiting whilst the browser downloads, parses and initialises megabytes of JavaScript and WASM (maybe, assuming Gmail uses some). To the extent it's faster it's only because hardware is faster, not due to any fundamental difference in technology. Not having a standardised splash doesn't seem like an actual win.

He then goes on to argue:

> These platforms never gained the ability to interact with the rest of the platform provided by your browser. Well, technically they might have, but that’s not how these technologies were used in practice.

So he makes a false claim - that applets and flash couldn't interact with the hosting web page - and then immediately admits it's false, but that "this isn't how the tech was used in practice". But I remember quite a few Java and Flash applets that interacted with the host page and changed the HTML, most obviously in things like the original Google Talk. And to the extent Java applets rarely did, that's because HTML sucked in that era. Arguably it still does. The primary reason people were writing Java applets was to escape the woeful inadequacy of the web platform, so no big surprise not many applets rendered their UI using "the benefits of HTML and CSS".

> And users ended up solidly rejecting it. Outside of games, users hated Flash. Java Applets were heavy and slow.

Users didn't reject Flash. Flash went everywhere. It was the de-facto video streaming solution, the de-facto cross platform lightweight games solution, it was in every Gmail page, it was the standard way to make attractive animated websites. Site authors adopted it en-masse and users loved those sites, which were often doing things HTML users couldn't even begin to do.

And of course Java applets were "heavy and slow" in an era when the web itself was so heavy and slow, that people didn't even try to do the same things with it. The crime of Java applets was mostly that they raised developer expectations to unreasonable levels and devs ended up blowing their hardware budgets. The web was crippled and limited, so people didn't even try to make sophisticated UIs or modular software, and it ended up staying within the envelope of what slow internet connections could manage more effectively.

> WebAssembly, on the other hand, is much closer to JavaScript. It doesn’t inherently require taking over a part of your screen. It doesn’t expect to be its own little closed off world. Via JavaScript today, and on its own in the future, it’s able to interact with the surrounding environment. It just… fits.

Here he's just repeating an argument he already admitted himself is wrong. Flash applets and Java applets could both be invisible and could both "fit" with the surrounding environment. And he already knows that.

I have to admit I stopped reading at this point. WebAssembly is a poor re-implementation of the JVM, 20+ years later, and its primary selling point is that browser makers control it so they are forcing people to use it in the same way they did for JavaScript. The advocates for it know their own comparisons with prior tech are bogus but make them anyway. It's depressing to see.

This is one of the logical next steps towards mass WASM-adoption. Another was Cloudflare's Isolates [1].

There needs to be official OS support from vendors (macOS, Windows, and Linux distros) in order to avoid the stinking pile of non-shared shared-library mess that is Electron.

[1]: https://blog.cloudflare.com/cloud-computing-without-containe...

We are working towards that. There are two ABIs (sets of syscalls, such as reading a file or opening a socket) that we believe would help to bring universality between platforms:

* CloudABI
* POSIX

We wrote more about it here: https://medium.com/wasmer/webassembly-cloudabi-b573047fd0a9

What about packaging & distribution? Unless I missed something, it's up to us to package & ship our app complete with assets.

I firmly believe that this project is incomplete without a standard like .war (Web ARchive, from the Java ecosystem).

Of course, you could just use what's out there, e.g. .app (macOS) or .appimage (GNU/Linux), but there has to be a well-defined and clearly endorsed way of distributing wasmer'd apps.

This "ship all libraries in the package" approach is also quite scary; you never know if an app will use your patched local libraries or an ancient library with lots of exploits.

Doubly so for anything exposed to the internet.

WASM binaries needn't be exposed to the internet.

That said, the OS-sanctioned runtime could very well support a shared-object-like approach instead of one-binary-per-app but that would 1. complicate the runtime quite a lot 2. hinder startup times of wasm apps due to dependency solving.

Small nit, the product is Cloudflare Workers. https://workers.dev/

Isolates are a V8 feature that is used to implement Workers: https://v8docs.nodesource.com/node-0.8/d5/dda/classv8_1_1_is...

Mass adoption in what way?

So, we are officially campaigning for Javascript as the default language for everything now? :-(

WASM is not javascript

Funny, I hear often that "WASM is just javascript" whenever I compare it to Flash and Java, as a defense for why it is different this time and totally won't have any of the problems of those other runtimes.

Because wasm runs on the same engine as javascript, not as a separate plugin. It will be a major feature of the browser and be automatically updated with it. (It also allows for feature discovery.)

It has no relation to ecmascript, but you could say that it is integrated in (but independent from) javascript.

People say that it is not javascript because you can use ALL of the wasm toolchain without any trace of javascript.

WASM and javascript are run using the same underlying environment. WASM is not javascript on a language level.

actually, the exact opposite

This is Java 2.0 isn't it? "Write once, run everywhere".

I wonder how one would solve linking to native libraries and the like through this.

I still prefer just having the application source code available so one can compile it for a specific system.

No. Java is a language; WASM is a bytecode compile target. A better comparison would be the JVM: a language like Scala, which is definitely not Java, still compiles to JVM bytecode and can run anywhere there is a JVM.

> I wonder how one would solve linking to native libraries

Either use WASM outside the browser (I don't even know if that's a thing) or create some sort of import mechanism using JavaScript.
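That import mechanism already exists in the standard JS API: host functions are supplied through an import object at instantiation time, and the wasm code calls them like any other function. A hedged sketch; the "env"/"double" names are arbitrary illustrations, not a convention, and the bytes are my own hand-encoding of a module that imports one function:

```javascript
// Binary encoding of:
//   (module
//     (import "env" "double" (func (param i32) (result i32)))
//     (func (export "run") (result i32) i32.const 21 call 0))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version 1
  0x01, 0x0a, 0x02, 0x60, 0x01, 0x7f, 0x01, 0x7f,             // type 0: (i32) -> i32
  0x60, 0x00, 0x01, 0x7f,                                     // type 1: () -> i32
  0x02, 0x0e, 0x01, 0x03, 0x65, 0x6e, 0x76,                   // import from "env"
  0x06, 0x64, 0x6f, 0x75, 0x62, 0x6c, 0x65, 0x00, 0x00,       // field "double", type 0
  0x03, 0x02, 0x01, 0x01,                                     // func 1 uses type 1
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01,       // export func 1 as "run"
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x41, 0x15, 0x10, 0x00, 0x0b, // i32.const 21, call 0
]);

// The import object maps module/field names to plain JS functions.
const imports = { env: { double: (x) => x * 2 } };
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
console.log(instance.exports.run()); // 42
```

Native libraries would sit behind exactly this kind of shim: the host (JS here, or a standalone runtime) decides which functions the module is allowed to call.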

> No. Java is a language [...] A better comparison would be the JVM

I think the point was that it's a reinvention of "write once run everywhere", which had its own problems at the time (such as native libraries). Java introduced the idea, and arguing the fact that the JVM is the reason why it works is a touch pedantic in this discussion.

> Either use WASM outside the browser (I don't even know if that's a thing)

That's what TFA is doing.

Actually, Oberon first came with it, with JIT compilation of intermediate code into the target code.

The concept goes back all the way to the Xerox PARC workstations and UCSD Pascal.

> use WASM outside the browser (I don't even know if that's a thing)

Not mainstream, but absolutely a thing. There's people even looking at running it in Ring 0.


I thought Java was both a language and a virtual machine target. You can compile programs written in other languages, like Clojure, for its VM.

It should be far more open and lightweight than java.

Linking to native libraries isn't desirable - you don't want to leak your native vulnerabilities into your web browser.

If you have the source however (as you mentioned), using Clang to compile to WASM should be a viable approach.

But if this is to be a "universal" binary it has to interface with the OS, the DE and other native programs not just the browser.

If not you will end up with Swing type libraries that look very off compared to native applications.

Same for games really. You usually need to be aware of what platform (hardware and software) you run on and adjust for that to be able to use it optimally.

This is supposed to kill JavaScript (hopefully)

WASM is a parallel technology with no intention of impacting JavaScript, according to its spec. Therefore WASM will never replace JavaScript. JavaScript can be replaced, but it won't be WASM that does so.



Au contraire: if people can ship WASM to the browser to do what they’re currently doing with JS, but it’s faster, in nearly whatever language they like and just as nice/if not nicer, then they’ll stop writing JS.

I’ve long maintained that the reason that JS is so prevalent and popular is the result of it exclusively being the only language allowed in the browser.

Mozilla has already proven that a Rust WASM can take in the DOM, do all the necessary transformations and hand back the final result faster than JS can transform it. Host binding protocols are the first step in negating the necessity of JS. Once a direct DOM <-> WASM API comes along (and it will), JS will be as good as deprecated.

You're forgetting two relative benefits to JS - it can be written in a text editor and it doesn't require knowledge of a second language, toolchain and compiler. A lot of javascript is still hand written.

Also, deprecating JS would mean deprecating most of the functionality of the web itself. No one is going to rewrite all existing javascript in Rust or C or whatever and compile it to WASM.

Like it or not, as long as the web exists, javascript has to exist, browsers have to support it.

> it can be written in a text editor

Are there programming languages that we don’t use as targets that can’t be written in a text editor? I’m not sure what you mean here. I can write Rust, C, C++, Python, Swift, .Net, etc etc in a text editor with no more difficulty than JS?

> it doesn't require knowledge of a second language, toolchain and compiler. A lot of javascript is still hand written.

Have you not seen the borderline arcane processes and quantity of tooling involved in “compiling” and bundling JS applications? It’s nightmarish and far more convoluted than literally any other language I’ve dealt with.

> Also, deprecating JS would mean deprecating functionality most of the web itself.

Oh no, what a pity, large chunks of bloated websites will go away. I’m sure we’ll all miss them greatly, they made such a great difference to our lives. /s

> No one is going to rewrite all existing javascript in Rust or C or whatever and compile it to WASM.

You don’t have to write it in whatever low-level language, I’m sure any number of higher level languages will get support. Also, we dropped flash pretty hard and fast, and I distinctly, distinctly remember a huge number of websites being entirely flash.

It’s maybe not going to go away entirely, but it will hopefully cease to become as popular as it is now.

> Oh no, what a pity, large chunks of bloated websites will go away. I’m sure we’ll all miss them greatly, they made such a great difference to our lives. /s

I don't even know how to answer this... You are proposing to partition the web into a past and future version where almost no site of commercial interest has any incentive to leave the old internet.

Do you seriously believe that a browser without javascript (not a "noscript" addon, just no javascript) could be successful in any way? Or, say, more than elinks?

> Also, we dropped flash pretty hard and fast, and I distinctly, distinctly remember a huge number of websites being entirely flash.

And some sites still require it. It is one of the reasons I still keep Chrome on my computers.

Steve Jobs killed Flash. I fully believe that Flash would still be around and kicking (horribly) if he didn't step in and do something about it.

Jobs isn't here to save us anymore.

Really? He single-handedly killed Flash? What a powerful man.

He took the risk of creating a botched experience on his platform first. He did not fail, and others followed. He was maybe not the single reason why Flash declined, but he is surely the one who initiated it.

There is no issue rewriting JS. People do it every year anyway: new frameworks, new tools, etc. JS won't be deprecated. It will just cease to be used (like COBOL).

> if people can ship WASM to the browser to do what they’re currently doing with JS

If if if... That is not what WASM is or wants to be.

> I’ve long maintained that the reason that JS is so prevalent and popular is the result of it exclusively being the only language allowed in the browser.

People who hate JavaScript have long maintained that, without evidence. I can remember writing JavaScript long before it became popular and was still everywhere.

> WASM API comes along (and it will), JS will be as good as deprecated.

A DOM interop spec is underway, which will allow access into the WASM target from JavaScript. It will not allow access from WASM to the DOM, because that is insecure.

According to Mozilla : "There are future plans to allow WebAssembly to call Web APIs directly.".


Although I couldn't actually find any description of the intended capabilities.

MDN has a page for that as well:


The page DOM is not listed there. The closest thing is DOMParser, which will allow parsing of an XML/HTML island into a new DOM object.


I believe you are overthinking this bullet point:

> Ergonomics - Allow WebAssembly modules to create, pass around, call, and manipulate JavaScript + DOM objects

That would allow WASM functionality similar to behavior already present in the browser, but that page does not say where that behavior will occur. Given this is an update to WASM it will likely occur against WASM artifacts. An example would be allowing a WASM application instance to create JavaScript event bindings and DOM artifacts on an internal DOM tree.

It is wishful thinking that an update to WASM would modify the web environment outside of WASM.

I think I’m just really confused as to what you’re saying. In the browser, you will be able to manipulate the browser DOM directly from wasm. That’s all I’m saying. I don’t think I understand what you mean in this reply.

So currently there is demand for a DOM interface in WASM and WASM has none. I have seen several WASM applications that ship markup and then fabricate their own DOM-like instance to accompany that markup to allow web-like behaviors to the WASM application instance. That is a significant amount of overhead. If WASM had access to a DOM interface applications could be written in any language that execute DOM methods on accompanying markup without need for JavaScript. You get the benefits of a self-contained event-driven UI environment with minimal overhead and without need for JavaScript, but just because WASM ships with support for DOM method execution does not necessarily mean that execution reflects a state or environment outside the WASM sandbox.

Yeah, that does sound like a lot of overhead.

> just because...

It certainly will though, 100%. I know this from talking to all sorts of people who work on wasm. That’s what the above proposal I linked you to does.

> A DOM interop spec is underway, which will allow access into the WASM target from JavaScript. It will not allow access from WASM to the DOM, because that is insecure.

More insecure than allowing arbitrary JS like we do now? I wouldn’t be surprised if we do get a WASM DOM API, maybe it won’t start as an official spec, but it’ll eventually happen.

a DOM API for wasm is planned, but IIRC before that people need to standardize the whole concept of external references/GC, so it might still be quite far away.

This may not be a goal of the project, but it is still the hope of a lot of people. When parity with JS is reached in terms of ability to access the DOM and JS interop, then there shouldn't be much advantage to JS, and possibly a performance penalty.

Having one language in a 'privileged' position for a platform isn't ideal. At least with the mobile platforms ObjC could be supplanted by Swift, and Java by Kotlin.

> This may not be a goal of the project, but it is still the hope of a lot of people.

I hope for world peace.

> When parity with JS is reached in terms of ability to access the DOM and JS interop

There is work being done on DOM interop, but it's not what clueless people think it is. DOM interop allows JavaScript to interact with WASM via the DOM. It doesn't allow WASM to creep out of its sandbox to become some magical JavaScript replacement.

Even if it did, you would still have to learn to access the DOM. Most developers completely fail at this point. Somehow the DOM is scary.

> Having one language in a 'privileged' position for a platform isn't ideal.

Why not?

These threads always repeat themselves. There are the people painfully explaining why they would like to be able to program for the web in the languages they like. And then there are the JS advocates that take it personally and go out of their way to explain why JS is all you need. Even though no other platform forces you into a single language somehow the web is different.

But the web is different: web pages (HTML/CSS/JS + whatever) are scripts supposed to run in the browser and offer a fairly uniform interface. You can already compile to JavaScript, and platforms like Android also force you to use one of the native languages.

The difference is that the web has a very different security model, where for example it is unthinkable to ship binaries.

JavaScript was not designed as a compile target. Stop this insanity! You can sandbox wasm. It's not unthinkable. It's real!

WASM already is a sandbox by default, and so is JavaScript in the browser. I don't understand your point.

Web technologies are defined by standards that define the limitations of said technologies no matter how sad people get about it.

I don't understand what you mean. The standards have to be defined, and as demonstrated by wasm there's nothing preventing a runtime that all languages can target that isn't just compiling to JS. How that was ever considered a solution is just mind-boggling. What was delivered by wasm is something that we've wanted for 10 years and were repeatedly told was not possible. Even today there are people in this very thread arguing that what is already available is not possible...

The Pandora box is already open, haven't you noticed it?

Telling everyone what the purpose of WebAssembly is regarding JavaScript doesn't matter, unless browser vendors decide to backtrack on it.

Web technologies are based on standards otherwise there is no conformance. If you want something different advocate for a change to the standard.

> The Pandora box is already open, haven't you noticed it?

WASM has been widely available for a while now in all major browsers and yet the fantasy of replacing JavaScript still has not come to pass. I guess I haven't noticed it.

Companies are still migrating from IE 8 to IE 11 on their RFP requirements lists, and WebAssembly is at 1.0.

Flash will come back rest assured.

I am already using pure browser versions of WebEx and Citrix.

While Autodesk customers can play with their AutoCAD projects on the browser.

When talking about web technologies I am astonished by the repeated irrationality. Most JavaScript developers are deathly afraid of the DOM. They will happily sink their careers over it, perform 100x more work to avoid contending with it, and justify the strangeness of that reasoning in really creative, irrational ways. It's bizarrely fascinating. The phobia is stranger still considering it's about a skill that can be learned in 2 hours of active study and mastered in 4.

Then there are people who are equally fearful of JavaScript. Again, they will justify their phobia in all manners of ways at great energy. They will even form an alternate future reality to justify knowingly bad decisions in the present.

Both of the groups are really demonstrating the same phobia with all the same behaviors, just slightly shifting the subject. What's ironic, though, is that the people afraid of JavaScript, and thus having little to no experience working with or understanding the DOM, really want access to the DOM.

When you get past the irrational fear around these things, the motivations of the technologies and their direction become something completely different. This is why web technologies are defined one way and yet people strangely impose their own alternate realities, abstractions, and unfounded projections upon them. It's also why the standard technologies are slow, intentionally so, to reflect certain things even when those things are highly demanded.

I have encountered these phobias enough, even in person, that I am curious to learn more about them and how they arise.

It is the need to feel assured one made the right choice in belonging to a certain group.

As a polyglot developer I have a schizophrenic attitude towards technology: you will see me bashing tech X in one thread and praising it in another one.

Because at the end of the day, regardless of my personal opinions, it's what the customer wants that counts, and every tool has pluses and minuses.

Now, some people that sell themselves as Developer on X, User of Y, need to be sure that they made the right choice, otherwise the hard truth is that they need to go elsewhere.

Naturally this creates a religious attitude around technology decisions and us vs them.

As for the Web, from my point of view it should have stayed a platform for interactive documents, with native for everything else, using Internet network protocols as it was.

Everyone wanted to turn browsers into general purpose VMs instead, so naturally we are now here, with everyone porting their favourite stack into WebAssembly.

If DOM access to the page were to come to WASM, what is the expectation that developers will suddenly learn to appreciate and execute DOM instructions? On what basis do you form your answer?

Due to the phobias I described above my expectations of success are low, and rightly so. Rather, people are whining for something they don't have in their favorite language, and if they get it the results will be a shit show. I especially doubt advanced topics, such as accessibility, will be successful. If the subjects that are actually challenging were taken seriously, these developers wouldn't be crying about something simple, like executing the DOM from JavaScript.

There is no if, DOM access is already at Stage 3.


Besides there is already WebGL, WebAudio and WebGPU is being worked on.

You completely ignored the details of the thing you linked to, and you completely ignored my comment to do it. Delusion is a part of extreme reactions to phobias.

The details for the spec update you are talking about have already been addressed in these comments: https://github.com/WebAssembly/host-bindings/blob/master/pro...

You should read my previous comment if you want any further replies.

Thing is, I don't have any phobias; in case you haven't been paying attention, Web development is part of my job.

I am just stating that Flash like Web sites will come back, regardless of what WebAssembly haters think.

As for further replies, oh well.

Restating the goals of the WASM project and actually reading the spec doesn’t make anybody a hater. Also Flash had the equivalent of host bindings and could not modify the containing page.

Goals and spec are theory, what matters in the end is what people are actually doing on the field.

The goals of W3C were never to have a general purpose VM, rather hypertext documents, which nowadays only a minority cares about.

JavaScript exists because you can't run anything else in the browser. Wasm allows you to use your favorite language, so I can't see why anyone would use JS once wasm gets DOM access.

WASM is getting DOM access, but there is nothing indicating that DOM access is to the page containing the WASM instance.

OK, strawman using Rust (pick your favorite WASM-capable language): I write in Rust, which is already a decently portable language. Most likely WASM will be on many of the platforms LLVM allows Rust to target. Most Rust libs are cross-platform capable (Linux, macOS, Windows, etc).

Now, when I target WASM I get one obvious thing: compile once, deliver everywhere, with some overhead hit, versus producing binaries for every platform and distributing them.

Is gaining the universal binary (with WASM runtime) enough? The reason I’m excited about the Web target for WASM is that I gain the entire web platform from my favorite language.

I’m 50% excited and also baffled by this. Now, as an embeddable language runtime in other software, that’s interesting...

Maybe, maybe not, in this specific scenario.

But it's not the only scenario. Imagine, for a moment, that you're not distributing your own binaries; you're in the business of running them, possibly from untrusted sources. And that's just one other scenario.

So the Cloudflare perspective. Yes, it’s interesting.

Or the “script my application with wasm” perspective, or the “smart contracts for blockchain” perspective, yeah. Think “docker”, in some ways.

Amazing! It's like we could write once and run everywaaaait a moment...

Tech is rarely about tech (same as for most things in life: X is rarely about X). It's about money and power/control.

Java was proprietary for a long time and even now it's basically controlled by Oracle.

Also, due to various issues, it doesn't really run on Android and it doesn't run at all on iOS (I know about Codename One, I mean the OpenJDK).

WASM is a truly open standard. And it's quite likely to be present on all platforms, even though for obvious reasons platform vendors will drag their feet regarding WASM.

>WASM is a truly open standard

I guess in practice this means "open" like HTTP and HTML, for which Google decides what does/doesn't go into the standard.

WASM has been developed mostly outside of google (mozilla mainly) and is actually counter to a lot of the attempts google made to get similar features (e.g. NaCl). It's quite hard to claim google has had undue influence on its development.

Precisely what I was thinking... they have their hands in all these standards and most times seem to push the direction. But I will say this: I'd rather they work on these "open" standards than go off and keep to their own... so I guess you've gotta take some bad with the good.

But now you can choose your own programming language. The JVM is very Java centric, for obvious reasons.

Obviously its purpose was to support Java, but a number of languages - most prominently Scala, Kotlin, Clojure, Groovy and Ruby - run on the JVM. With perhaps the exception of Kotlin, these are not especially Java-like.

There is an interesting degree of independence. A lot of Java semantics is not baked into the JVM, and in the case of invokedynamic there was for a time a significant JVM feature that java didn't use.

The JVM is a pretty decent target for languages with garbage collection, objects (of some form), and methods / functions. Those facilities are useful for most languages these days, and provide a large degree of interoperability. I don't really have to think about Ruby calling Java and back again, no calling conventions or issues with garbage.

It will be interesting to see whether WASM will support those kinds of features.

Names matter, though. For as long as the JVM has been called the "JVM" it has been associated with the Java language.

Using the Java language on other VMs confuses people in exactly the same way that using the Java Virtual Machine for other languages confuses people.

The outcome might have been different had they called it something like HLVM instead of JVM.

Taking this further, wouldn't this imply that WebAssembly is only for the Web?

JVM is too high-level, is the problem. You're still tied to its memory model, its object model, its type system etc.

It's much better to build this in layers - a standardized lowest layer that's something like WASM, then a standardized object model on top of that etc. That way, you can have a single stack supporting a broad variety of languages, with degree of interop compatibility dictated by how much in common they have.

You mean, like with TruffleRuby and JRuby, and TrufflePython and Jython, which both already run faster on the JVM than their "native" implementations of Ruby and Python do otherwise?

Just like the CLR and LLVM.

I've never been very fond of CLR outside of Windows platforms.

As for LLVM, sandboxing has never been a core feature (AFAIK).

It was called PNaCl.

WebAssembly only exists because Mozilla went with asm.js instead.

yeah but Windows support is highly experimental - hmm, wonder why.

Very impressive. Is this compatible with any WASM-compliant language, or just the ones listed? EG AssemblyScript:


I'm looking forward to seeing some benchmarks comparing native binaries with the same program run via wasmer.

It says it runs compiled .wasm binaries, which should mean any language. I think the listed languages are those targeted for interoperability.

That's right.

Wasmer executes any wasm binary, including AssemblyScript generated ones :)

Guys from NEAR Protocol are using AssemblyScript and Wasmer together in production.

The hard part will be coming up with portable API's that can do the things that native apps can do.

Consider that browser standards include an enormous amount of functionality and new web API's are getting added all the time, and yet, many people claim that web apps can't compete with native apps on phones or desktop. Also, even if you just pick one of these, there have been many attempts to come up with a portable way to write apps for both iOS and Android, and it's getting better (see Flutter for example), but it's still a lot of work for the implementers.

Docker got around this by standardizing on Linux (including the filesystem), but the client side is much harder.

So I suspect that for a "universal" file format based on WebAssembly to get anywhere, it will have to succeed in some niche not served well by web browsers, Docker, or Unity.

WebAssembly itself is useful in the way a fast interpreter for a scripting language is useful - as an embedded component of some larger runtime. Its potential customers are teams that would otherwise need a scripting language and choose WebAssembly instead.

One realistic use case that comes to mind for WebAssembly modules outside the browser is extension plugins for tools like Maya, 3DSMax or Photoshop. For such cases it's usually ok to provide a restricted set of APIs, but you gain binary compatibility across platforms and performance that's much better than (for instance) Python or Lua scripting.

On the bright side - since this is a brand new thing built from bottom up, we're not limited by past API design choices.

In particular, I hope that this time we can get a standardized ABI that goes well above what C (the current de facto standard) had to offer. Language evolution went far beyond where things were back in 1980s, and the set of common features across various languages has also increased substantially - and that should be reflected in the ABI.

Speaking of which: https://github.com/Microsoft/xlang

Can’t wait to have popups on my desktop telling me to update my WASM runtime to the latest version!

Browsers already auto update

And each app shipping its own copy

Electron apps already do.

Indeed my comment was a variant on the electron meme to this effect

So, with this you can make `main process` in electron, obsolete?

Embedding in other applications seems useful - imagine using it as a general plugin system. It's a bit strange that security and sandboxing aren't mentioned; that seems like WASM's primary value proposition here.

Those plugins still need an API in some language to interface with the host. A natural choice would be JavaScript: it's the world's most popular programming language, relatively easy to embed, and runtimes are available in every operating system.

At that point, you don't need WASM separately — just execute JavaScript and allow plugin writers to embed WASM modules for high performance, just like browsers do.

Well, if you were doing that, I don't think you'd need wasmer. V8 alone would be enough. Wasmer seems like an attempt to get away from Javascript and interface more directly with WASM.

Any real, simple C++ toolchain compiling directly to .wasm?

Emscripten is not really user friendly.

Time is passing, WASM is here but I can't see decent toolchains available to work with it.

Depends on how you define "user friendly". IMHO emscripten does what a gcc-compatible C/C++ toolchain is supposed to do:

> emcc hello.c -o hello.html

...which gives you a complete WebAssembly application runnable in the browser.

With VSCode as IDE it's possible to build a fairly nice edit-compile-test workflow with Intellisense and error squiggles.


What's really missing is proper debugging support (can mostly be worked around by debugging a platform-native executable compiled from the same source, but working source-level debugging for WASM blobs would be nice).

You can use the LLVM toolchain directly if you know your way around it. I believe the GCC folks were working on a wasm target a while ago but I don't know the ETA for that.

If you are looking for an easy setup of LLVM's own wasm-target (which Emscripten will be switching to eventually), try this tool: https://github.com/appcypher/wasabi

Cheerp and emscripten are the only two choices for C++, as far as I know.

That said, the Rust team has been putting a ton of work into a real, simple toolchain for exactly these reasons: tools matter for adoption. We'll see if the pain of the toolchain is less or greater than the pain of a different language...

Is anyone working on something like a POSIX layer on top of it? And maybe, someday, a GUI? We have universal binaries now, but what APIs will we call?

What problem is this possibly solving? Why wouldn't you just compile your Rust/Go to a native binary?

A portable binary like wasm makes deployment easier and potentially makes it more secure/easier to sandbox.

Makes deployment harder surely? Now you need to both ship the wasm runner and the compiled code of the executable?

Projects like this one are the first step towards native wasm support on OS level across major platforms. It'll happen, the only question is how soon.

You can still ship your native code. So you have two options instead of one.

The host is supposed to support wasm, and most hosts will.

With about 50% of corporate desktops still using Windows 7 with IE11 as the default browser, it's going to take a while before wasm is truly ubiquitous.

That's not true: https://bit.ly/2GMbAN3

You can, if you've written it in Rust/Go. If you write in another language, you don't need a compiler (or VM) that supports multiple platforms; you just compile to WASM, and this sorts it out.

Fantastic! This makes me extremely excited about WASM! (Bonus points for Rust)

What's the strategy with dealing with 32-bit vs 64-bit machines?

Historic attempts at using LLVM IR as a universal binary have stumbled on that issue, and typically have ended up deciding to support the lowest-common-denominator machines i.e. 32-bit, meaning that it hasn't been viable to use them for server-style software.

Wasm is split between wasm32 and wasm64; they are separate: https://github.com/WebAssembly/design/blob/master/FutureFeat...

So yes, we're at the lowest common denominator for addressing memory. You would need to explicitly build for wasm64 in the future.

Within wasmer wasm is compiled into a different IR through cranelift. That generally compiles for 64 bit systems but can compile to 32: https://github.com/CraneStation/wasmtime/pull/44

Not sure if this entirely answers your question. Seems like there are still a handful of decisions wasmer would need to make in terms of what exactly is supported.

If you are having trouble wrapping your head around WASM (it is a bit funny -- almost but not quite analogous to several more familiar paradigms), I highly recommend this series of posts: https://hacks.mozilla.org/category/code-cartoons/a-cartoon-i...

Ah, Pascal UCSD!

We should bring back a platform like TAOS.

The TAOS operating system (1991) had a virtual Instruction Set Architecture (ISA). Except for a small kernel, the entire OS was compiled to the virtual ISA, then assembled into actual machine code Just In Time as fast as the executable was being read off disk.

This resulted in an operating system that could run with decent speeds (80-90% of native) on seemingly any hardware, which could be ported to a new platform in just 3 days.


Speaking as somebody who actually ported it (or at least the second-generation "intent/elate") to new platforms, it took more than three days (though you could probably get to "hello world" in three days or so if you were in a hurry and you only needed to handle new hardware or a new host OS, not a whole new CPU architecture).

How is this different from compiling Linux from C to LLVM to native code?

And why would I ever want to generate native code for an OS at runtime?

> And why would I ever want to generate native code for an OS at runtime?

Why? To get write once, run everywhere. If you can do the generation at the speed the data comes off the disk, you're paying nothing for the feature.

If you do things right, the abstract memory model can even be bit-identical. Even though there are differences in the underlying hardware, like endianness, all of the virtual ISAs can still implement the same virtual memory model. Smalltalk also does this, and it absolutely works across hundreds of combinations of ISA and OS.

Not necessarily at runtime.

OS/360 (IBM z nowadays) and OS/400 (IBM i nowadays) work kind of this way.

The whole userspace is based on binaries deployed in a bytecode format.

They are then AOT-compiled into native code at installation time, or anytime there are system-relevant changes, via a kernel JIT service.

Whole mainframe hardware migrations are possible without changing a single thing on the userspace, just doing a system refresh.

  > curl https://get.wasmer.io -sSfL | sh
Why do people still do this? What happens when the owner of the domain forgets to renew it and someone else buys it and puts up a copy of this site with the binary replaced? Or if the owner sells it to someone malicious who does that? Or someone messes with your DNS server to serve that?

Please please please never do this.

Oh, and SERIOUSLY question the ability of anyone who suggests doing this to at all reason in a secure fashion.

This is neat but it has me thinking. Is there a similar project that allows linking wasm libraries into your LLVM language of choice? Seems like that could allow for shared libraries across languages. The lib would take some perf hit but your core project perf could be unaffected.

So ActiveX / Java applets, then Flash, and now this. Yeah, just another fad. It will pass, and the next generation of developers will come up with something else to chase the "holy grail" that is cross-platform everywhere with native speed. And so on and so forth.

http://troubles.md/posts/wasm-is-not-a-stack-machine/ Wasm is already tainted by the legacy of JavaScript. It was intended as a replacement for asm.js and carries the burden of being designed around JavaScript, as a way to get "faster JavaScript" given how poorly JavaScript executes.

The result is what I'd consider a poor man's Java bytecode. The legacy of Java was bloat connected to the runtime and its massive standard library, coupled with the nasty Sun-then-Oracle burden. And Oracle being Oracle, with what feels like more lawyers than engineers doing the threatening.

So we tossed out a standard library and made a poor man's java virtual machine model.

This is Java again; it just has massive hype because it's better than JavaScript at actually running efficiently. But it can still be worse than any given other virtual machine out there.

I'm guessing that's the point everyone should see: WASM is going to be the new bytecode that works everywhere. If that's not the reason even Go started to support WebAssembly, I would love to hear your conjecture.

It's literally Java all over again.

Don't. Just. Don't.

Or p-code. This is my knee-jerk reaction too, can't wait till someone implements a portable browser in wasm like it's 1997. If someone actually builds a working and useful portable OS and UI layer around wasm, I might stop joking about it.

Most people don't have a p-code or java interpreter on their computer already, but 80% can already use webasm.


Why not? A single cross-platform compilation target seems like a great idea.

Java got caught up with some proprietary stuff thanks to Oracle, but there's a useful goal there. In the same way that the web is a useful target for cross-platform UI applications.

As if under Sun it was all flowers and such.

So he's basically stating that Wasm is better at doing what Java/Shockwave/Flash did. That's most likely correct, but also beside the point. The concept of "that one layer which enables cross-platform applications" is just something which has failed, does fail, and will fail in the future. I don't care if it's better at being Java, since it's still trying to be Java.

I’m saying that’s the case in the browser, but I’m also a big believer in wasm outside of the browser too.

The JVM, the .Net CLR: mature, stable, secure - why do we need another alpha-level VM thing that was devised to make Javascript faster that can awkwardly run slow(er than native) bytecode? How is that progress?

Because it’s not “devised to make JavaScript faster”. WASM is first and foremost designed as a compilation target for languages that work with a simple linear memory and don’t use a garbage collector, like C/C++/Rust (extensions to interact directly with the JavaScript GC are in progress, but quite a ways off). The progress it enables is the ability to run performance sensitive code written in these languages in a software-sandboxed environment at anywhere near native speed, as opposed to multiple times slower.

Also neither the JVM nor the CLR even seriously attempt to offer the security when executing untrusted code that WASM does. WASM has been deployed for over a year in all the major browsers, making vulnerabilities in its implementations very valuable. Yet we haven’t seen some huge explosion of CVEs due to its addition, unlike the early history of the JVM and CLR’s attempts to implement robust software sandboxes (before they both threw up their hands and stopped trying to market their sandboxing features as suitable for running untrusted code).

> unlike the early history of the JVM and CLR’s attempts to implement robust software sandboxes (before they both threw up their hands and stopped trying to market their sandboxing features as suitable for running untrusted code).

Is this a fair comparison? JVM and CLR allow general purpose programming and syscalls across several platforms. What can you do with WASM today? Only whatever is allowed by browsers.

WASM was designed with the intent that it run in the web as well as natively[0].

It just happens that the technology is still in its infancy and, because initial support was deployed in browsers, the web is currently the primary focus of developer interest.

And, unfortunately (to me), the most likely means of "native" deployment for WASM applications is going to be in Electron, but that's a limitation of developer culture, not of WASM itself.


IOW, JVM/CLR wanted primarily to support locally run, trusted application code and gave up on browser sandboxing. WASM is primarily run on browsers and hasn't attempted yet to support full applications. Apples vs. oranges.

I don't think it's apples and oranges, I think it's approaching the same destination from different starting points.

I wasn’t claiming that they don’t have distinct feature sets; WASM for example can’t currently do any kind of shared memory concurrency, let alone provide a GC with shared memory concurrency. The claim that it’s just a recapitulation of the JVM, however, is equally inaccurate. Nobody would seriously consider running multiple mutually-untrusted JVM programs in the same process, but WASM is offered in that manner in every major browser.

That said, nothing stops a WASM embedder from offering system calls, and you can already use whatever system resources you want via Node as easily as you can use the DOM in a browser.

Because it’s not another shitty language runtime like JVM or CLR, it’s a machine-code level target.

You can take your Java/Rust/.Net/Haskell/etc/etc apps and compile them down to WASM, much like you would if you were using LLVM to compile down to assembly.

I bet the existing AOT compilers for Java and .NET are able to produce much better optimized code than any available WASM implementations.

And LLVM can do better than those in turn. Would I prefer it if we were using LLVM as the underlying target for machine code? Hell yeah I would, but we're not quite there yet, and if it's a toss-up between JS-powered 3MB+ websites and endless shitty Electron apps, or WASM, I'm going to elect for WASM.

If WASM takes off, it can at least pave the way for a better and more appropriate (LLVM, AOT CLR, etc.) universal machine code later down the line. That'll never happen if we're chained to JS, though.

I'd rather have the AOT compile times from Java and CLR based toolchains.

As for LLVM as the underlying target for machine code, it is already a thing, nothing new here.



LLVM _is_ the backend for Mono. It's easier for LLVM to produce better code if it doesn't have to go via a bytecode VM like WebAssembly. Was discussed here not so long ago: http://troubles.md/posts/wasm-is-not-a-stack-machine/

>and if it’s a toss up between the JS powered 3MB+ websites and endless shitty electron apps or WASM, I’m going to elect for WASM.

I don't know why you think the situation would improve with WASM.

It's literally not. You can't compile C into efficient JVM bytecode, but you can with WASM.

The Graal project has another point of view.


Does it target pure JVM bytecode and runtime? This talks about GraalVM and how it's "optimized for many different languages", and they make the same claim on their page, specifically citing LLVM.

GraalVM is based on Graal, Java's future C2 JIT compiler implemented in Java, and already part of OpenJDK 11.

Sulong makes use of Graal and Truffle to create an LLVM AST interpreter (with native-code JIT via Graal), thus allowing seamless integration between C and Java code on the same JVM.

When you say "reuses the JIT", does it reuse the JIT API that gets JVM bytecode as input, and native code as output? Or is it a lower layer, that JVM bytecode is implemented on top of, and that is not constrained by the Java object and memory model? Like, if it can do unbounded pointer arithmetic without emulating it with arrays (which is slow and inefficient), surely the JIT needs to have the corresponding low-level opcodes for that, like e.g. CLR does?

Graal only knows about JVM bytecodes, and there isn't any emulation of pointer arithmetic.

You need first to understand the role of Truffle in this.

It is a framework to generate AST interpreters, that plugs into Graal.

So an interpreter for LLVM bitcode gets written with Truffle, and while running a specific LLVM application it specializes itself to that code; after a while the JIT comes into the picture and optimizes the specialized interpreter that is processing that LLVM code.

This goes a few rounds until it becomes almost indistinguishable from generating the code straight from LLVM.

It is the same principle behind using RPython to add new languages to PyPy.

JRuby is one of the projects making use of this infrastructure to run C extensions.

Naturally, due to the slow startup until everything is finally converted into native code, this approach is only usable for long-lived server-based applications.

You can get more information about it in some of these links, mostly outdated though.


"Using LLVM and Sulong for Language C Extensions - LLVM Cauldron 2016"


"Project Sulong: an LLVM bitcode interpreter on the Graal VM"


Regardless of how it gets there in the end - from an interpreter or otherwise - if it produces pure JVM bytecode, that means the memory model is also the JVM's. How does that accommodate the C memory model? Emulating the heap with giant arrays?

Most likely yes, which doesn't matter, because Graal will eventually optimize the bounds checks away anyway, as it does for regular Java code.

The links that I provided go into detail about all of this.

Why is this post generating so many low effort joke comments? Is the project idea really that bad or are people missing the point?

It's sad to see people wailing on a project that (I'm assuming) people have put a lot of time into without trying to understand what it is and what it isn't.

I think because it's very easy to see it as something familiar. People are always skeptical about new stuff, and that familiar thing was shit, so this must be shit too.

So no critical thinking leads to lame jokes and "OMG NO JAVA" style comments.

Well, the concept is arguably very close to what Java was marketed for. So I think it is quite natural to respond with some kind of Java association.

Nevertheless, I think this is a very interesting project.

It is close in wording but very different in almost every other sense. Wasm pushes for many independent implementations, it is meant to be very minimal, and importantly it is made to be embeddable. In a sense it is closer to Lua than Java.

At the risk of jumping into this, Java was intended to be small (small enough to run on processors that were below PC-class in the 1990s) and embeddable.


The idea of Java running on devices goes back AFAICR to the very beginning of the language. I remember the "Java everywhere" slogan and initiatives to put Java into every device. Java Card was a version of Java designed to run on smart cards. If memory serves, these were all design goals of the Java platform in the early 90s.

(Remarkably, this was all well before Linux really took off, before the Internet was ubiquitous, and when the most sophisticated electronics most households owned was a VHS player.)


There is indeed a risk that Wasm will end up bloated and fail at its main purpose.

Supporters hope that the web process will prevent excessive features. Actually, the dual presence of JS/wasm will reduce the pressure on JS to be a strong compilation target.

I agree that (almost) no single aspect of wasm is new, but it has a lot of good points.

Java also attempted to be all those things.

(this will be slightly a rant)

So what? Java failed, for good reason; wasm has essentially solved all the reasons why Java failed.

Almost everything of success used old ideas.

It is like telling the Wright brothers that Da Vinci had already failed at building flying machines, pretending that engines were not a relevant difference.

That's different than your previous comment. Wasm might certainly be better than Java but in terms of what you listed Java had very similar goals.

A Java subset runs on almost every SIM card, credit card chip and Intel ME out there.

So no, it isn't closer to Lua than Java.

But that is basically useless for almost everyone.

It is available to anyone that can get their hands on a pile of blank cards and the respective SDK.

With 5 minutes of Google search effort:




Yes, but that is the definition of a niche.

Also, I suspect that Java Cards use a stripped-down version of Java.

I explicitly said subset in my comment.

WASM in its current form also only allows for stripped-down versions of many languages.

> Build Once, Run Anywhere.

Build once, run where the runtime is available.

>Build once, run where the runtime is available.

To be fair, that's true of everything. We only take for granted that some runtimes are more common than others.

The only reason the web is a "universal" runtime is that browser vendors ship to every conceivable platform and more or less agree on standards.

Kind of like Java

Java is slow (to start) and memory hungry. That's why it failed so badly. Compare that with JavaScript/Electron.

>Compare that with JavaScript/Electron.

Electron is heavier and even more memory hungry.

I don't know what the GP was thinking. Hopefully exactly that.

Build once, eat memory anywhere.

Please explain how Java “failed so badly”?

It failed so badly "on the web/in browser".

*Build once, run almost Anywhere

Except for the Web part, it looks good. The portable object format that nobody could agree on now happens to be an eloquent JS representation. Sigh.

That isn’t what wasm is.

I remember running good old text WebAssembly about 7 years ago, which is admittedly cool - I was even waiting for Chrome to support it before I gave up on that. I remember how people had to ship a gzipped Unreal engine on the web for the first version.

My gripe is how the C++ community miserably failed to ship a portable object format that could have been used for module-based compilation/linking without the AST generation overhead.

Unless wasm really becomes the package/intermediate standard for code, I wouldn't really regard it as useful yet. It would be another Java for me.
