JSLinux by Fabrice Bellard in 2011:
In typical Bellard fashion, he didn't really talk about it -- he just demonstrated it!
I worked at a multi-platform UNIX software vendor back then, and did the project of compiling our products to ANDF. At one point I counted 20 distinct UNIX variants that we had in our office (all workstations).
It was believed that UNIX systems complied with government procurement standards (eg this is where POSIX is relevant), but there were always bugs and differences in behaviour. In C code this is handled by #if pre-processor directives to adapt as needed. ANDF required turning that into runtime if statements instead. (An example would be if there were two different network interface calls depending on platform.) That would have been a herculean task and was in some places impossible, such as the same API taking different numbers of parameters on different platforms. The ANDF compiler would have to pick one for its header files.
Even something like endianness is usually handled by #if and not an if statement, so you can see just how much effort would be needed just to have it compile. You still had to do all the multi-platform installation instructions and testing, so no one had an incentive to make things more complicated.
Java killed ANDF stone dead. While Java was most prominent for applets in the browser, it also competed with C++ on the backend. C++ compilers cost a lot of money, had licensing daemons that locked you to one platform, and implemented different subsets of the C++ standard and library. Java was more forgiving and the JVM provided standard features like multi-threading and networking. You will also note that Sun spent a lot of effort to keep Java standard.
Honestly, I just want to be able to write the stuff I have to in JS today in my favorite language.
Mind you, the fervor back then was for a ubiquitous VM—but the NaCl fervor wasn't nearly as large, and isn't really what's propelling WASM to prominence right now. Even then, it wasn't about "an easy garbage collected language that is ubiquitous", no. The goal of it was to be able to take code that you already have—native code, written to run fast, like an AAA game—and put it in a browser-strength sandbox, such that it can be zero-installed just by visiting a URL, with full performance. You know, like ActiveX was supposed to be. But better.
NaCl didn't really get us there, because it happened right as the architectural split between x86 and ARM really started heating up, and NaCl's solution to that split—PNaCl, a.k.a. sandboxed LLVM IR—was both too late and not really efficient enough at the time to fully supplant the "Native" NaCl in NaCl messaging. (LLVM IR works well enough now for Apple to rely on it as a "unified intermediate" for both x86 and ARM target object code, but that shift only began with ~8 additional years of LLVM development after the version of LLVM that PNaCl's IR came from.)
WASM seems to get us there. But do we care any more? Everyone already has other solutions to this problem. ChromeOS can run Android apps; Ubuntu Snappy packages can expose GUIs to Wayland; Windows has a Linux ABI to run Docker containers. Ubiquity is a lot easier now than it was back then, for any particular use-case you might want.
On the embedded-scripting side of things, everyone has seemingly settled on embedding LuaJIT or V8. Do people even need embedded scripts to be fast, in a way that "WASM as compilation target" would help with? Maybe for the more esoteric use-cases like CouchDB's "design documents" or the Ethereum VM (https://medium.com/coinmonks/ewasm-web-assembly-for-ethereum...) But I doubt WASM will hit ubiquity here. Why would OpenResty switch? Why would Redis? Etc. You're not writing-and-compiling native code for any of these in the first place, so adding WASM here would only break existing workflows.
It was so even before 2001. Consider why the <script> element has the "type" attribute to begin with (and why it had the "language" attribute originally, before "type"). As I recall, W3C specs from that era gave examples such as Tcl! On Windows, you could use anything that implemented the Active Scripting APIs - e.g. Perl. JS just happened to be the one that everybody had, because it was the first, and so it became the common denominator, to the detriment of the industry.
The extent to which the people in this project took "inspiration" from wasmjit has been questioned as well https://github.com/wasmerio/wasmer/commit/a81500e047fdd321b3...
We think we've resolved this issue here (https://twitter.com/syrusakbary/status/1094635228457426944), but our response was definitely later than it should have been. If you feel that we need to rectify further, feel free to email me at firstname.lastname@example.org.
If however this project has a history of doing the wrong thing, covering up, and only fixing things for PR value... then yeah, not really a project to put time into. :)
I don't know that the actions were indeed malicious but there is evidence to suggest that they may have been and I'm doing my small part to make the greater community aware. This will enable the community to make wiser judgments in the future, if the need arises.
I'll agree with him that Java is very much not tiny, but I disagree about Flash. The last time I remember setting it up, the browser plugin was a single binary of a few MB (and I'm sure that could be made smaller.) SWF files are also a great example of a well-designed space-optimised vector graphics format --- remember that it was designed close to 20 years ago for the computers of the time.
Edit: care to explain, downvoters?
WASM was intentionally designed to be able to integrate easily with existing JS implementations.
In contrast, Perl had a runtime of something like 768 kB. Yet, VisualWorks could load its runtime faster, if you tweaked things, like shutting off the boot-up chime and notifier dialog.
For the same reason that Emacs loads quickly: the "image" is (an abstracted, runtime-dependent equivalent to) a memory dump. Perl has to parse Perl code when it starts up; Smalltalk just "is" Smalltalk when it starts up. Like booting up a DBMS where the DB already has data in it.
Honestly, I'm kind of surprised more modern languages/runtimes don't take this approach. The Smalltalk approach would work exactly as well for e.g. Ruby.
His first point about why it's different is "Wasm won"? Really? Back in the day Java applets were everywhere. They had "won". They disappeared because browser makers kicked them out along with Flash, which had also "won". He then tries to redefine plugins, which were created by Netscape, had a standard cross platform API and their own HTML tags as somehow "not a part of the web platform".
Then he says that's "in some senses the final point", as if being forced into the platform by de-facto execution of all competitors says all you need to know about why WebAssembly is good.
He then goes on to argue:
These platforms never gained the ability to interact with the rest of the platform provided by your browser. Well, technically they might have, but that’s not how these technologies were used in practice.
So he makes a false claim - that applets and Flash couldn't interact with the hosting web page - and then immediately admits it's false, but that "this isn't how the tech was used in practice". But I remember quite a few Java and Flash applets that interacted with the host page and changed the HTML, most obviously in things like the original Google Talk. And to the extent Java applets rarely did, that's because HTML sucked in that era. Arguably it still does. The primary reason people were writing Java applets was to escape the woeful inadequacy of the web platform, so no big surprise not many applets rendered their UI using "the benefits of HTML and CSS".
And users ended up solidly rejecting it. Outside of games, users hated Flash. Java Applets were heavy and slow.
Users didn't reject Flash. Flash went everywhere. It was the de-facto video streaming solution, the de-facto cross platform lightweight games solution, it was in every Gmail page, it was the standard way to make attractive animated websites. Site authors adopted it en-masse and users loved those sites, which were often doing things HTML users couldn't even begin to do.
And of course Java applets were "heavy and slow" in an era when the web itself was so heavy and slow, that people didn't even try to do the same things with it. The crime of Java applets was mostly that they raised developer expectations to unreasonable levels and devs ended up blowing their hardware budgets. The web was crippled and limited, so people didn't even try to make sophisticated UIs or modular software, and it ended up staying within the envelope of what slow internet connections could manage more effectively.
Here he's just repeating an argument he already admitted himself is wrong. Flash applets and Java applets could both be invisible and could both "fit" with the surrounding environment. And he already knows that.
There needs to be official OS support from vendors (MacOS, Windows and Linux distros) in order to avoid the stinking pile of non-shared shared library mess that is electron.
We wrote more about it here: https://medium.com/wasmer/webassembly-cloudabi-b573047fd0a9
I firmly believe that this project is incomplete without a standard like .war (Web ARchive, from the Java ecosystem).
Of course, you could just use what's out there eg. .app (macos) or .appimage (gnu/linux) but there has to be a well-defined and clearly endorsed way of distributing wasmer'd apps.
Doubly so for anything exposed to the internet.
That said, the OS-sanctioned runtime could very well support a shared-object-like approach instead of one-binary-per-app but that would 1. complicate the runtime quite a lot 2. hinder startup times of wasm apps due to dependency solving.
Isolates are a V8 feature that is used to implement Workers: https://v8docs.nodesource.com/node-0.8/d5/dda/classv8_1_1_is...
I wonder how one would solve linking to native libraries and the like through this.
I still prefer just having the application source code available so one can compile it for a specific system.
> I wonder how one would solve linking to native libraries
I think the point was that it's a reinvention of "write once run everywhere", which had its own problems at the time (such as native libraries). Java introduced the idea, and arguing the fact that the JVM is the reason why it works is a touch pedantic in this discussion.
> Either use WASM outside the browser (I don't even know if that's a thing)
That's what TFA is doing.
Not mainstream, but absolutely a thing. There are people even looking at running it in Ring 0.
Linking to native libraries isn't desirable - you don't want to leak your native vulnerabilities into your web browser.
If you have the source however (as you mentioned), using Clang to compile to WASM should be a viable approach.
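As a sketch of that approach: recent Clang can target wasm32 directly, without Emscripten. The file name and flags below are an assumption about a Clang build with the WebAssembly backend and lld available, not the only way to do it:

```c
/* add.c -- compiled straight to WebAssembly, no Emscripten involved:
 *
 *   clang --target=wasm32 -nostdlib -Wl,--no-entry -Wl,--export-all \
 *         -o add.wasm add.c
 *
 * The result is a .wasm module exporting `add`, callable from any host.
 */
int add(int a, int b) {
    return a + b;
}
```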
If not, you will end up with Swing-type libraries that look very off compared to native applications.
Same for games, really. You usually need to be aware of what platform (hardware and software) you run on and adjust for that to be able to use it optimally.
I’ve long maintained that the reason that JS is so prevalent and popular is the result of it exclusively being the only language allowed in the browser.
Mozilla has already proven that a Rust WASM module can take in the DOM, do all the necessary transformations and hand back the final result faster than JS can transform it. Host binding protocols are the first step in negating the necessity of JS. Once a direct DOM <-> WASM API comes along (and it will), JS will be as good as deprecated.
Are there programming languages that we don’t use as targets that can’t be written in a text editor? I’m not sure what you mean here. I can write Rust, C, C++, Python, Swift, .Net, etc etc in a text editor with no more difficulty than JS?
Have you not seen the borderline arcane processes and quantity of tooling involved in “compiling” and bundling JS applications? It’s nightmarish and far more convoluted than literally any other language I’ve dealt with.
> Also, deprecating JS would mean deprecating functionality most of the web itself.
Oh no, what a pity, large chunks of bloated websites will go away. I’m sure we’ll all miss them greatly, they made such a great difference to our lives. /s
You don’t have to write it in whatever low-level language, I’m sure any number of higher level languages will get support. Also, we dropped flash pretty hard and fast, and I distinctly, distinctly remember a huge number of websites being entirely flash.
It’s maybe not going to go away entirely, but it will hopefully cease to become as popular as it is now.
I don't even know how to answer this... You are proposing to partition the web into a past and future version where almost no site of commercial interest has any incentive to leave the old internet.
> Also, we dropped flash pretty hard and fast, and I distinctly, distinctly remember a huge number of websites being entirely flash.
And some sites still require it. It is one of the reasons I still keep Chrome on my computers.
Jobs isn't here to save us anymore.
If if if.... That is not what WASM is or wants to be.
> I’ve long maintained that the reason that JS is so prevalent and popular is the result of it exclusively being the only language allowed in the browser.
> WASM API comes along (and it will), JS will be as good as deprecated.
Although I couldn't actually find any description of the intended capabilities.
The page DOM is not listed there. The closest thing is DOMParser which will allow parsing of an XML/HTML island into a new DOM object.
It is wishful thinking that an update to WASM would modify the web environment outside of WASM.
> just because...
It certainly will though, 100%. I know this from talking to all sorts of people who work on wasm. That’s what the above proposal I linked you to does.
More insecure than allowing arbitrary JS like we do now? I wouldn’t be surprised if we do get a WASM DOM API, maybe it won’t start as an official spec, but it’ll eventually happen.
Having one language in a 'privileged' position for a platform isn't ideal. At least with the mobile platforms ObjC could be supplanted by Swift, and Java by Kotlin.
I hope for world peace.
> When parity with JS is reached in terms of ability to access the DOM and JS interop
Even if it did, you would still have to learn to access the DOM. Most developers completely fail at this point. Somehow the DOM is scary.
> Having one language in a 'privileged' position for a platform isn't ideal.
The difference is that the web has a very different security model, where for example it is unthinkable to ship binaries.
> The Pandora box is already open, haven't you noticed it?
Flash will come back rest assured.
I am already using pure browser versions of WebEx and Citrix.
While Autodesk customers can play with their AutoCAD projects on the browser.
When you get past the irrational fear around these things, the motivations of the technologies and their direction become something completely different. This is why web technologies are defined in one way and yet people strangely impose their own alternate realities, abstractions, and unfounded projections upon them. It’s also why the standard technologies are slow, intentionally so, to reflect certain things even when those things are highly demanded.
I have encountered these phobias enough, even in person, that I am curious to learn more about them and how they arise.
As a polyglot developer I have a schizophrenic attitude towards technology: you will see me bashing tech X in one thread and praising it in another one.
Because at the end of the day, regardless of my personal opinions, it's what the customer wants that counts, and every tool has its pluses and minuses.
Now, some people that sell themselves as Developer on X, User of Y, need to be sure that they made the right choice, otherwise the hard truth is that they need to go elsewhere.
Naturally this creates a religious attitude around technology decisions and us vs them.
As for the Web, from my point of view it should have stayed for interactive documents, with native for everything else, using Internet network protocols as it was.
Everyone wanted to turn browsers into general purpose VMs instead, so naturally we are now here, with everyone porting their favourite stack into WebAssembly.
Besides there is already WebGL, WebAudio and WebGPU is being worked on.
The details for the spec update you are talking about have already been addressed in these comments: https://github.com/WebAssembly/host-bindings/blob/master/pro...
You should read my previous comment if you want any further replies.
I am just stating that Flash like Web sites will come back, regardless of what WebAssembly haters think.
As for further replies, oh well.
The goals of W3C were never to have a general purpose VM, rather hypertext documents, which nowadays only a minority cares about.
Now, when I target WASM I get one obvious thing, compile once deliver everywhere, with some overhead hit. Versus produce binaries for everything and distribute.
Is gaining the universal binary (with WASM runtime) enough? The reason I’m excited about the Web target for WASM is that I gain the entire web platform from my favorite language.
I’m 50% excited and also baffled by this. Now, as an embeddable language runtime in other software, that’s interesting...
But it's not the only scenario. Imagine, for a moment, that you're not distributing your own binaries; you're in the business of running them, possibly from untrusted sources. And that's just one other scenario.
Java was proprietary for a long time and even now it's basically controlled by Oracle.
Also, due to various issues, it doesn't really run on Android and it doesn't run at all on iOS (I know about Codename One, I mean the OpenJDK).
WASM is a truly open standard. And it's quite likely to be present on all platforms, even though for obvious reasons platform vendors will drag their feet regarding WASM.
I guess in practice this means "open" like HTTP and HTML, for which Google decides what does/doesn't go into the standard.
There is an interesting degree of independence. A lot of Java semantics is not baked into the JVM, and in the case of invokedynamic there was for a time a significant JVM feature that Java didn't use.
The JVM is a pretty decent target for languages with garbage collection, objects (of some form), and methods / functions. Those facilities are useful for most languages these days, and provide a large degree of interoperability. I don't really have to think about Ruby calling Java and back again, no calling conventions or issues with garbage.
It will be interesting to see whether WASM will support those kinds of features.
Using the Java language on other VMs confuses people in exactly the same way that using the Java Virtual Machine for other languages confuses people.
The outcome might have been different had they called it something like HLVM instead of JVM.
It's much better to build this in layers - a standardized lowest layer that's something like WASM, then a standardized object model on top of that etc. That way, you can have a single stack supporting a broad variety of languages, with degree of interop compatibility dictated by how much in common they have.
As for LLVM, sandboxing has never been a core feature (AFAIK).
WebAssembly only exists because Mozilla went with asm.js instead.
I'm looking forward to seeing some benchmarks comparing native binaries with the same program run via wasmer.
Wasmer executes any wasm binary, including AssemblyScript generated ones :)
Consider that browser standards include an enormous amount of functionality and new web APIs are getting added all the time, and yet many people claim that web apps can't compete with native apps on phones or desktop. Also, even if you just pick one of these, there have been many attempts to come up with a portable way to write apps for both iOS and Android, and it's getting better (see Flutter for example), but it's still a lot of work for the implementers.
Docker got around this by standardizing on Linux (including the filesystem), but the client side is much harder.
So I suspect that for a "universal" file format based on WebAssembly to get anywhere, it will have to succeed in some niche not served well by web browsers, Docker, or Unity.
WebAssembly itself is useful in the way a fast interpreter for a scripting language is useful - as an embedded component of some larger runtime. Its potential customers are teams that would otherwise need a scripting language and choose WebAssembly instead.
In particular, I hope that this time we can get a standardized ABI that goes well above what C (the current de facto standard) had to offer. Language evolution went far beyond where things were back in 1980s, and the set of common features across various languages has also increased substantially - and that should be reflected in the ABI.
Speaking of which:
Emscripten is not really user friendly.
Time is passing, WASM is here but I can't see decent toolchains available to work with it.
> emcc hello.c -o hello.html
...which gives you a complete WebAssembly application runnable in the browser.
With VSCode as IDE it's possible to build a fairly nice edit-compile-test workflow with Intellisense and error squiggles.
What's really missing is proper debugging support (can mostly be worked around by debugging a platform-native executable compiled from the same source, but working source-level debugging for WASM blobs would be nice).
If you are looking for an easy setup of LLVM's own wasm-target (which Emscripten will be switching to eventually), try this tool: https://github.com/appcypher/wasabi
That said, the Rust team has been putting a ton of work into a real, simple toolchain for exactly these reasons: tools matter for adoption. We'll see if the pain of the toolchain is less or greater than the pain of a different language...
Historic attempts at using LLVM IR as a universal binary have stumbled on that issue, and typically have ended up deciding to support the lowest-common-denominator machines i.e. 32-bit, meaning that it hasn't been viable to use them for server-style software.
So yes, we're at the lowest common denominator for addressing memory. You would need to explicitly build for wasm64 in the future.
Within wasmer wasm is compiled into a different IR through cranelift. That generally compiles for 64 bit systems but can compile to 32: https://github.com/CraneStation/wasmtime/pull/44
Not sure if this entirely answers your question. Seems like there are still a handful of decisions wasmer would need to make in terms of what exactly is supported.
The TAOS operating system (1991) had a virtual Instruction Set Architecture (ISA). Except for a small kernel, the entire OS was compiled to the virtual ISA, then assembled into actual machine code Just In Time as fast as the executable was being read off disk.
This resulted in an operating system that could run with decent speeds (80-90% of native) on seemingly any hardware, which could be ported to a new platform in just 3 days.
And why would I ever want to generate native code for an OS at runtime?
Why? To get write once, run everywhere. If you can do the generation at the speed the data comes off the disk, you're paying nothing for the feature.
If you do things right, the abstract memory model can even be bit-identical. Even though there are differences in the underlying hardware, like endianness, all of the virtual ISAs can still implement the same virtual memory model. Smalltalk also does this, and it absolutely works across hundreds of combinations of ISA and OS.
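One way a virtual memory model pins that down is to fix the byte order of loads and stores, so every host produces bit-identical images whatever its native endianness. A minimal C sketch of the idea (not any particular VM's actual rule):

```c
#include <stdint.h>

/* Store a 32-bit value least-significant byte first, regardless of
 * what the host CPU does natively. */
static void store_le32(uint8_t out[4], uint32_t v) {
    out[0] = (uint8_t)(v);
    out[1] = (uint8_t)(v >> 8);
    out[2] = (uint8_t)(v >> 16);
    out[3] = (uint8_t)(v >> 24);
}

/* Reassemble it the same way; a big-endian and a little-endian host
 * agree on every byte of the stored image. */
static uint32_t load_le32(const uint8_t in[4]) {
    return (uint32_t)in[0]
         | ((uint32_t)in[1] << 8)
         | ((uint32_t)in[2] << 16)
         | ((uint32_t)in[3] << 24);
}
```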
OS/360 (IBM z nowadays) and OS/400 (IBM i nowadays) work kind of this way.
The whole userspace is based on binaries deployed in a bytecode format.
They are then AOT compiled into native code at installation time, or whenever there are system-relevant changes, via a kernel JIT service.
Whole mainframe hardware migrations are possible without changing a single thing in userspace, just by doing a system refresh.
> curl https://get.wasmer.io -sSfL | sh
Please please please never do this.
Oh, and SERIOUSLY question the ability of anyone who suggests doing this to at all reason in a secure fashion.
The result is what I'd consider a poor man's Java bytecode. The legacy of Java was bloat connected to the runtime and its massive standard library, coupled with the nasty Sun and then Oracle burden. And Oracle again being Oracle, with what feels like more lawyers than engineers to threaten.
So we tossed out a standard library and made a poor man's Java virtual machine model.
Java got caught up with some proprietary stuff thanks to Oracle, but there's a useful goal there. In the same way that the web is a useful target for cross-platform UI applications.
Also neither the JVM nor the CLR even seriously attempt to offer the security when executing untrusted code that WASM does. WASM has been deployed for over a year in all the major browsers, making vulnerabilities in its implementations very valuable. Yet we haven’t seen some huge explosion of CVEs due to its addition, unlike the early history of the JVM and CLR’s attempts to implement robust software sandboxes (before they both threw up their hands and stopped trying to market their sandboxing features as suitable for running untrusted code).
Is this a fair comparison? JVM and CLR allow general purpose programming and syscalls across several platforms. What can you do with WASM today? Only whatever is allowed by browsers.
It just happens that the technology is still in its infancy and, because initial support was deployed in browsers, the web is currently the primary focus of developer interest.
And, unfortunately (to me), the most likely means of "native" deployment for WASM applications is going to be in Electron, but that's due to a limitation of developer culture, not of WASM itself.
That said, nothing stops a WASM embedder from offering system calls, and you can already use whatever system resources you want via Node as easily as you can use the DOM in a browser.
You can take your Java/Rust/.Net/Haskell/etc/etc apps and compile them down to WASM, much like you would if you were using LLVM to compile down to assembly.
At least with WASM, if it takes off, it can pave the way for a better and more appropriate (LLVM, AoT CLR, etc) universal machine code later down the line. That’ll never happen if we’re chained to JS though.
As for LLVM as the underlying target for machine code, it is already a thing, nothing new here.
I don't know why you think the situation would improve with WASM.
Sulong makes use of Graal and Truffle to create an LLVM AST interpreter (with native code JIT via Graal), thus allowing seamless integration between C and Java code on the same JVM.
You need first to understand the role of Truffle in this.
It is a framework to generate AST interpreters, that plugs into Graal.
So an interpreter for LLVM gets written in Truffle, which, while running a specific LLVM application, generates an interpreter for that specific code, so after a while the JIT comes into the picture, optimizing the generated interpreter that is processing that LLVM code.
This goes a few rounds until it becomes almost indistinguishable from generating the code straight from LLVM.
It is the same principle behind using RPython to add new languages to PyPy.
JRuby is one of the projects making use of this infrastructure to run C extensions.
Naturally due to the slow startup until everything is finally converted into native code, this approach is only usable for long lived server based applications.
You can get more information about it in some of these links, mostly outdated though.
"Using LLVM and Sulong for Language C Extensions - LLVM Cauldron 2016"
"Project Sulong: an LLVM bitcode interpreter on the Graal VM"
The links that I provided go into detail about all of this.
It's sad to see people wailing on a project that (I'm assuming) people have put a lot of time into without trying to understand what it is and what it isn't.
So no critical thinking leads to lame jokes and "OMG NO JAVA" style comments.
Nevertheless, I think this is a very interesting project.
The idea of Java running on devices goes back AFAICR to the very beginning of the language. I remember the "Java everywhere" slogan and initiatives to put Java into every device. Java Card was a version of Java designed to run on smart cards. If memory serves, these were all design goals of the Java platform in the early 90s.
(Remarkably, this was all well before Linux really took off, before the Internet was ubiquitous, and when the most sophisticated electronics most households owned was a VHS player.)
Supporters hope that the web standards process will prevent excessive features. Actually, the dual presence of JS/wasm will reduce the pressure on JS to be a strong compilation target.
I agree that (almost) no single aspect of wasm is new, but it has a lot of good points.
So what? Java failed, for good reasons, and wasm has essentially solved all the reasons why Java failed.
Almost everything of success used old ideas.
It is like telling the Wright brothers that Da Vinci had already failed at building flying machines, pretending that engines were not a relevant difference.
So no, it isn't closer to lua than Java.
With 5 minutes Google search effort:
Also I suspect that Java Card uses a stripped-down version of Java.
WASM on its current form also only allows for a stripped down version of many languages.
Build once, run where the runtime in available.
To be fair, that's true of everything. We only take for granted that some runtimes are more common than others.
The only reason the web is a "universal" runtime is that browser vendors ship to every conceivable platform and more or less agree on standards.
Electron is heavier and even more memory hungry.
My gripe is how the C++ community miserably failed to ship a portable object format that could have been used for module-based compilation / linking without the AST generation overhead.
Unless wasm really becomes the package/intermediate standard for code, I wouldn't really regard it as useful yet. It would be another Java for me.