Standardizing WASI: A system interface to run WebAssembly outside the web (hacks.mozilla.org)
434 points by bnjbvr 23 days ago | 231 comments



Been hoping a stdlib for WASM would spring up. So if I wanted to implement a WASI backend (e.g. on the JVM), are the function definitions I should implement the ones here: [0]? I assume string, array, struct, etc. layout in WASM memory is as C expects? Also, is it a goal to provide a test suite for checking conforming backend implementations (it may already be there; I didn't look)? Finally, pardon my lack of research before asking: does this mean that an LLVM-based WASM compilation can target this instead of emscripten/libc, and the final WASM could reference all of these API pieces as imports? Is there an expected posix/libc-to-wasi lib?

0 - https://github.com/CraneStation/wasmtime-wasi/blob/wasi/docs...


(Member of the team at Mozilla here)

Yes, that's the list.

And the layout of structs, strings, etc is up to the compiler, within the bounds of the restrictions WebAssembly imposes.

We'll definitely have a test suite, but this is all early days, so a lot of all that isn't yet in place.

And yes, this can be targeted by LLVM-based and other compilers. In fact, Emscripten could use this as the foundation for their POSIX-like libc and library packages. The syscalls are indeed exposed as Wasm function imports.


Will WASI normalize differences between platforms? e.g. convert argv or paths to a consistent character encoding?


Yes!


How will you deal with valid paths such as:

  /tmp/[DE][AD][BE][EF].txt # ext2 / linux
  # OR
  C:\stuff\[DEED][FFFE].txt # ntfs / windows
  # where [hex] indicates a single filesystem character with that value


One fun thing about the capability model is that at the system call level, there are no absolute paths. All filesystem path references are relative to base directory handles. So even if an application thinks it wants something in C:\stuff, it's the job of the libraries linked into the application to map that to something that can actually be named. So there's room for the ecosystem to innovate, above the WASI syscall layer, on what "C:\" should mean in an application intending to be portable.
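The capability-relative model described above has a close POSIX analogue in openat-style calls. Here's a minimal Python sketch of the idea, a toy stand-in and not the actual WASI `path_open` API: all file access goes through a directory handle the program was explicitly handed, and there is no ambient root to name `C:\` or `/` against.

```python
import os
import tempfile

# Toy illustration (not WASI itself): in a capability model there is no
# ambient root directory. Every open is relative to a directory handle
# the program was explicitly given, much like POSIX openat / Python's dir_fd.
base = tempfile.mkdtemp()                # stand-in for a "preopened" directory
dfd = os.open(base, os.O_RDONLY)         # the capability: a directory handle

# Create, write, and reopen a file purely relative to that handle.
fd = os.open("hello.txt", os.O_CREAT | os.O_WRONLY, 0o644, dir_fd=dfd)
os.write(fd, b"hi")
os.close(fd)

fd = os.open("hello.txt", os.O_RDONLY, dir_fd=dfd)
data = os.read(fd, 2)
os.close(fd)
os.close(dfd)
print(data)                              # b'hi'
```

Note that `"hello.txt"` never appears as an absolute path; the mapping from any absolute-looking name to a handle-relative one would live in libraries above this layer.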

Concerning character encodings, and potentially case sensitivity, the current high-level idea is that paths at the WASI syscall layer will be UTF-8, and WASI implementations will perform translation under the covers as needed. Of course, that doesn't fix everything, but it's a starting point.


That’s good to know, but the parent’s examples seem to be referencing the issue of filenames that aren’t valid Unicode. The Linux example is invalid UTF-8, since Linux filenames are natively arbitrary byte sequences. The Windows example contains an unpaired surrogate followed by the noncharacter codepoint U+FFFE, since Windows filenames are natively arbitrary 16-bit sequences (nominally UTF-16, historically UCS-2).


WTF-8 could solve the windows issue and I think for Linux it’s time to demand unicode filenames :)
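For the curious: WTF-8 is UTF-8 extended to also encode unpaired surrogates, so any Windows filename (an arbitrary sequence of 16-bit units) round-trips. Python's `'surrogatepass'` error handler produces the same bytes, which makes the idea easy to sketch:

```python
# WTF-8 encodes unpaired UTF-16 surrogates as 3-byte sequences that strict
# UTF-8 rejects, so any Windows-style filename round-trips losslessly.
name = "bad\ud800name"                  # contains an unpaired surrogate

# Strict UTF-8 refuses it...
try:
    name.encode("utf-8")
    raised = False
except UnicodeEncodeError:
    raised = True

# ...but 'surrogatepass' emits the WTF-8-style bytes and round-trips.
wtf8 = name.encode("utf-8", "surrogatepass")
assert raised
assert wtf8 == b"bad\xed\xa0\x80name"   # U+D800 becomes ED A0 80
assert wtf8.decode("utf-8", "surrogatepass") == name
```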


What about platform differences like how file permissions work on windows vs posix? (i.e., stuff that Python does not fully normalize)


I have a dollar that says all platform difference issues will be solved by just doing whatever POSIX does and expecting the host OS to figure it out if it isn't already POSIX. Whenever you try to abstract away arbitrarily different implementations while retaining their non-common functionality you either end up reimplementing one of them and expecting the others to work around it, or you end up forcing the programmer to bypass the abstraction anyway and implement logic for each implementation.


I wouldn't count on that.

I have worked on file APIs. There are so many differences between Windows and Posix that abstracting them away just doesn't work. Undoubtedly, there will eventually be platform-specific APIs that implement one or the other, and cross-platform APIs that implement the intersection.


It's a good question. WASI currently doesn't allow you to set custom access-control permissions when creating files. But we're just getting started, so if we can find a design that works, we can add it.


This is really interesting. As part of my undergrad research I've built `silverfish`, a tool that turns WASM binaries into LLVM bitcode. It's currently pretty chained to how I personally compile my C to WASM, but it'd be pretty cool to get it working with this standard!

Here's a link: https://github.com/gwsystems/silverfish -- although the README is pretty sparse, so you'll have to look at the code.


wow, thank you. that's amazing!


Wait a second...

I thought Java was supposed to do this with the JVM...

I thought .Net was supposed to do this with the CLR...

What's different now?


The JVM and CLR suck balls.

They put in huge amounts of effort into things that don't need solving, and the things that need to be solved they do in the most roundabout way possible.

What we _really_ need is a sandboxed VM for running C code securely and efficiently.

The JVM is pretty much the opposite of that; it's not sandboxed, not secure and certainly not efficient as a target for C code.

We're hoping WASM starts over from scratch and does things right this time around.


... 20 years later

JVM, CLR, and WASM suck balls. People have been breaking the sandbox for the better part of a decade.

We're hoping Cool New Thing™ starts over from scratch and does things right this time around.


But isn't that the point? We should be getting better at building software over time. Sometimes that means restarting.


UNIX is a good counter-argument.


UNIX is thoroughly obsolete. Its continued use is down to inertia; people rarely need anything but a tiny subset of its capabilities, for which there are simpler alternatives appearing every day.


are you kidding? do you know how many subsystems that people take for granted are built on UNIX? furthermore, how do you get simpler than UNIX? it's not complex..


I just didn't feel like responding since it was clear OP didn't know what he was talking about.


"it's not sandboxed" well, it is, and it's secure and it can run bytecode as fast as C


Both of those claims need qualification: it’s not quite secure, and it’s not quite as fast as C.


This time, things will be different!

I mean, maybe they really will be, but history suggests otherwise.


Isn't sandboxing solved at the host OS level?

Android apps are sandboxed Java apps.

Containers turn almost any Linux process into a sandboxed process.

Am I wrong?


What would a sandbox for C (native code) look like?


A decade and change has passed, so we get to do it against a new set of I/O concepts and performance concerns. Those are always the factors motivating new software standards.

I did evaluate WASM as a native runtime environment recently, pre-WASI. If what you're doing can coexist with memory mapped I/O, you don't have to have an explicit interface of this type, you can just say "lower region of linear memory is where the I/O goes" and hardcode all APIs against that. It's more "honest" in certain ways than a syscall interface since it doesn't lead to any parsing of intent or variable expectations around resource usage.
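The memory-mapped alternative described above can be caricatured in a few lines. This is a toy sketch with a hypothetical layout (a length-prefixed request buffer at a fixed low address), not any real runtime's convention: the guest writes into a reserved region of linear memory, and the host reads it back out, with no syscall-like imports involved.

```python
import struct

# Toy model of "lower region of linear memory is where the I/O goes":
# the host polls a fixed low address instead of exposing function imports.
memory = bytearray(64 * 1024)          # the module's linear memory
IO_BASE = 0                            # hypothetical layout: [len:u32][payload]

def guest_write_line(msg: bytes) -> None:
    """What compiled guest code would do: store a request at a fixed address."""
    struct.pack_into("<I", memory, IO_BASE, len(msg))
    memory[IO_BASE + 4 : IO_BASE + 4 + len(msg)] = msg

def host_poll() -> bytes:
    """The host side: read the request straight out of linear memory."""
    (n,) = struct.unpack_from("<I", memory, IO_BASE)
    return bytes(memory[IO_BASE + 4 : IO_BASE + 4 + n])

guest_write_line(b"hello from the guest")
out = host_poll()
print(out)
```

There is no intent for the host to parse beyond "bytes at this address"; that is the "honesty" trade-off versus a syscall interface.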


Different use cases. You don't necessarily want a 10 ton standard library for every use case, like on embedded devices or plugins for a software package. C/C++/Rust being 1st class citizens is also very useful.


Standard libraries are what make languages useable. In my experience there's a solid case to be made for a 10-ton standard library (for targets with an operating system) or basically none whatsoever (for embedded). A thin standard library is always going to be too small to do useful things easily at the high level and too big to be deployed on an embedded system. Rust is good because it can play in both sandboxes.


For embedded devices you'd always be better off just compiling C/C++/Rust directly to native code. You don't have any portability possibility there in the first place anyway, so why ship inferior codegen with arbitrary restrictions?


Because you want to be able to dynamically load semi-trusted code on a microcontroller without MMU and run it without having to worry it could crash the whole thing?


Do you have an actual example of such a thing? I'm struggling to come up with a case of multiple independent mini-programs in such a microcontroller that needs, or even benefits, from that level of fault isolation.


Rapid prototyping, replacement for a scripting facility, diagnostics filtering & output, a platform for "apps" for a small IoT type devices and just the ability to run the same plugin anywhere you please.

It can be an enabler for new innovations. Even if it runs 2-10x slower than native code.

And what if the answer was just: just because I can? That's how a lot in our industry got started.


Ah, just like JavaCard.


is it really a use case for WASM?


.NET's '10 tonne' standard library is a decent part of what makes it so usable. And it's not exactly massive either - I can publish a standalone .NET Core app that's only around 30MB in size. For the usability and GC, I'll take that over C any day.


> I can publish a standalone .NET Core app that's only around 30MB in size.

Honestly, it shouldn't need to be that big. There's plenty of cruft even in .NET core.


That will probably come down quite a bit when illink gets out of alpha. It is basically a tree-shaker that eliminates dead/unused stuff from the resulting standalone binary.

https://github.com/dotnet/announcements/issues/30


A tree shaking linker will definitely help a lot, but there's still some runtime baggage that probably can't be removed. For instance, dynamic loading and all of the related type and assembly metadata structures. I look forward to seeing what they can do!


Does it matter? I don't really see the issue with competing standards if the standards are open: The "best" (for some definition of best) should win out eventually.

Anything other than Electron if it comes to it


We could re-implement C as well, but is it sensible if we already have C?

If someone could explain the difference between WASI and JVM the way you can explain the difference between C, C++ or Rust, it would help a lot. If there is no such explanation, I think we should really question what is being accomplished here.

As it stands currently, WebAssembly seems to integrate nicely with browsers, whereas JVM works nicely in native environments. I'm a younger developer and only partly remember the days of Java on the web, but is there a reason for why JVM is not suitable in web environments? I.e. why WebAssembly -> Native, not JVM -> Web?


I've now found a blog post[0], which seems to outline two main reasons for WASM over JVM on the web:

1. WASM is an open standard

2. WASM spec is smaller, so it's easier to integrate into the Javascript VM, which itself brings many benefits

Since WASM wins on the web, I guess it makes sense for developers to want to develop one version of their code for both the web as well as native environments. Thus, WASM -> Native seems to totally make sense.

[0] https://words.steveklabnik.com/is-webassembly-the-return-of-...


> 1. WASM is an open standard

So is the web, but that didn't stop anyone from extending it in incompatible ways, with all implementations being subtly different enough that you still have to care about which one you're running on. And, oh yeah, it's also becoming dominated by a single player who steers the whole thing pretty much wherever they want it.


Someone correct me if I'm wrong, but don't the JVM and CLR assume they're working with GC'd languages and a number of other things, making them poor targets for many languages? AFAIK a WASM runtime is also a lot more lightweight than both of them.

The JVM and CLR seem like they're language platforms first and VMs second


"The JVM and CLR seem like they're language platforms first and VMs second"

Ding ding ding ding ding.

There are lots of extant application VMs out there, but effectively all of them were implemented specifically to be able to run a specific language (Java → JVM, C# → CLR, Perl 6 → Parrot/NQP/whatever, Erlang → EVM/BEAM, etc.). Other languages have been implemented on top of those VMs, but they have to adopt the semantics of the actually-targeted language; you'd have a hard time taking a pure C codebase and compiling/running it on the JVM or CLR or BEAM.

WebAssembly is not that much different, technically; its actually-targeted languages are C, C++, and Rust, and you can reasonably think of it as a VM for those languages. However, it seems to be making an effort to not get caught up in the semantics of any specific language, much like how machine code tends to not really care about whether the instructions it's executing were compiled from C or Rust or Java or Ruby or COBOL or Brainfuck.


Some points of note:

Parrot was intended as a more universal VM, and was started before the Perl6 design solidified. So in the end it didn't actually fit that well with Perl6.

NQP is a small subset of Perl6 that is used for bootstrapping Perl6, and provides an isolation layer from the vagaries of the various VMs it targets. It used to target Parrot, and now targets MoarVM, the JVM, and JavaScript. (NQP is an acronym for Not Quite Perl6.)

MoarVM (Metamodel On A Runtime) was built specifically for Perl6, but its design is minimal. Most of the Perl6 specific features are built using Perl6 and NQP.

For example, MoarVM provides only enough of an object/type system to bootstrap the full object system. That is, MoarVM gets taught how Perl6 objects work every time it loads the Perl6 runtime.

MoarVM also has a pluggable optimizer where you write the plugins in a higher level language. It could be any language that compiles down to MoarVM bytecode. The Perl6 ones are written in NQP.

I think this dynamism makes MoarVM an interesting target for other languages.


Makes me wonder: is there any language implemented on the JVM that works around the JVM GC the way asm.js works around the js GC? Would it even be possible, could it be performant?


Early Blazor prototypes used a compact .NET runtime (including assembly execution, garbage collection, threading) that compiled to a mere 60KB of WebAssembly. They have since moved to Mono, so that isn't true now.

https://github.com/aspnet/Blazor/wiki/FAQ


> If someone could explain the difference between WASI and JVM the way you can explain the difference between C, C++ or Rust, it would help a lot.

I'll take a stab at it. While both WASM and the JVM aim to be portable virtual machines, WASM prioritizes sandboxing / security and a smaller runtime. A host should be able to easily set up WASM with restricted access and expect that the WASM program is secure. As far as I know, the JVM doesn't have an equivalent feature, or if it does, it's a lot more complicated.


It really very much has :)

On the other hand, I seem to remember that the security model was (somewhat) adapted to the threat at hand when it was designed (~25 years ago). I haven't kept tabs on it, so I don't know how well it has aged, but there's a chance it doesn't work anymore.


Nah, just bytecode verifiers, security managers and access capabilities, but whatever.


If something, be it a language or a VM is not up to scratch, it would seem to be a good idea to reimplement it. Would you propose just sticking with the mediocre present instead?


Yes. C is an absolutely terrible programming language that we use because it got there first. I sincerely hope that C (and bad C++, so probably C++ as well) is phased out over the coming years, purely because it's too dangerous to use in critical software (one life lost to memory corruption is too many). This probably won't happen, but I hope it does.

JVM is pretty complicated and not open source(? I'm not entirely sure who owns the standard, if there is one). WebAssembly has the advantage of hindsight and being able to be designed to completion rather than evolved


OpenJDK is GPLv2.


Is it just me, or do I kinda feel the jvm running on wasm already? Even the uis would work by rendering via canvas in software ...


you can use html/css/js with it, I guess :/

Or, more accurately, it's easy to make UIs for it?

I'm personally bearish on the whole idea from a community / technical standpoint (in that I'd like us to return to more native stuff than follow JS into the browser), but I can't argue that WASM and co won't enable some really cool uses.


I can't shake the niggling idea that this, just like node before it, is more or less a way for JS web devs to not have to leave the ecosystem they're comfortable with and learn a new language. We already have hundreds of ways to write native UI applications, but web devs can't bother to use them.


When you put WASM into something that isn't a browser then it still won't be a browser afterwards. WASM is not DOM+CSS.

On top of that, JS devs would be the ones who get the short end of the stick, because JS is the one language that does not compile to WASM: where WASM comes from, you already have a JS environment, so there's no need to build a second one inside WASM.


Couldn't a native developer also take advantage of wasm/wasi to build their native application to a single build target/binary, which could then be run on Any machine?


Cross platform is in no way new. There are already plenty of ways for native devs to target multiple platforms at once if that's required.


But are any of those ways both at the assembly level (speed) and decoupled from the runtime (space)?

My takeaway from WASI is that it would allow for a single universal runtime with near-native execution. The near-native aspect being what other universal runtimes like the JVM and CLR lack due to their bundling of a GC and other non-essential features.


All the ways we have to write native UI applications aren't exactly great though, and most aren't cross platform.

Node always struck me as a strange trend but using web tech for UIs seems like a good fit for a lot of cases


Now it's got JavaScript in the mix ;) but honestly I can't tell either.


Solomon Hykes, co-founder of Docker, believes WASM+WASI could take the place of Docker.

https://twitter.com/solomonstre/status/1111004913222324225


This. I don't have a clear enough mental model for either docker or wasm+wasi, but I immediately thought of docker. What is docker except a bunch of purpose-built operating system instances? And why do we need Linux installations (as much as I am a fan of Linux) when we could just run on WASI? And Electron, where each instance runs on an individual instance of a browser?


This is the same problem that things like unikernels were built for. They haven't taken off because of how difficult it is to rebuild your application to also contain its kernel, but this kind of sandboxing with WASM and WASI doesn't require that and could likely make things a lot simpler to deal with.


Yep, just like JEE servers deployed bare metal, oh wait.


I'm not sure I get the reference? Could you explain a bit? I know what Java EE servers are, but don't recall anything about bare metal deployments. Comparing my experiences with using JEE and Docker was that they are nothing alike. Just unclear as to your point.


Java was at one point going to have a bunch of CPUs[1] built around the JVM bytecode, that would have allowed J2EE and such to run on the bare metal hardware, but they largely didn't go anywhere.

The most successful attempts that I'm aware of are Jazelle[2] and Java Card[3], plans to augment the software solutions with hardware support.

[1] https://en.wikipedia.org/wiki/Java_processor

[2] https://en.wikipedia.org/wiki/Jazelle

[3] https://en.wikipedia.org/wiki/Java_Card


This was my exact thought when I read the article as well. We currently have projects that are Linux-specific in their build step, so even when I use a language server I actually have to run Docker for it. I've patched the language server to run inside a Docker container and it works, but it's super slow and required two days of work plus random fixes here and there.

WASI would give me near-native performance for this one without any additional work, and also fix issues like how file watchers don't work correctly, provided the source language has a compile-to-WASM option.

It's a massive game changer that could solve this issue after the fact. Even a 2x performance cost from native would mean a massive boost in performance and management in comparison to running it through Docker Desktop.


After reading about all the AMP and Google stuff, Mozilla's blog is a welcome relief. Always a pleasure.


These Mozilla blogs are some of the best edited, articulated and illustrated explanations I've seen on the web. I wish more people could put in the resources to help communicate their mission like this.


The author, Lin Clark[0], is the person behind Code Cartoons[1]. She does absolutely amazing work. Unfortunately, there isn't really a single place to find everything she has done. She has done quite a few conference talks[2] as well.

[0] https://twitter.com/linclark

[1] https://code-cartoons.com/ (see also https://hacks.mozilla.org/category/code-cartoons/ and https://twitter.com/codecartoons)

[2] https://www.youtube.com/playlist?list=PLIIHcC8epcPqhBrQ1dRmI...


Adding a podcast Lin was on that details just how much work goes into creating Code Cartoons. It's pretty amazing and highly appreciated.

https://changelog.com/podcast/294


This year is an extremely interesting time for WebAssembly; a number of these different runtimes are popping up. The effects of all of these decisions, including stuff like this, won't be felt until next year... I wonder how much adoption this will get, but the lineup looks extremely strong.


ok, so I hate to be this guy and I'm going to do it anyway because it has to be said:

Did they just invent Java again? Aren't they promising what Java promised? Won't they hit the same problems that Java hits during its write-one-run-anywhere promises?

I'm asking honestly - why is WebAssembly outside of the browser needed?


The idea of isolation that Java espoused is not a bad one - it just wasn't executed well. It's not a Java-only idea either - see Native Client[0]. As with all things, execution and marketing matter. Java was lacking in both.

[0] - https://static.googleusercontent.com/media/research.google.c...


OK, that's all correct and valid.

Java still exists and is a valid deployment platform.

Why is WebAssembly outside of the browser needed?


Java is no longer a valid deployment platform for dynamically executing untrusted code.

https://www.java.com/en/download/faq/chrome.xml

The Java Plugin for web browsers relies on the cross-platform plugin architecture NPAPI, which had been supported by all major web browsers for over a decade. Google's Chrome version 45 and above have dropped support for NPAPI, and therefore the Java Plugin does not work on these browsers anymore.

All the other browsers killed it as well.

There's probably some way you can do it outside the browser, but I've never seen it used. Was it called Java WebStart or something?

I think it's because the sandbox had too many holes, and it was large and clunky. It didn't integrate well with the browser.

Java is obviously still extremely popular, but it's used in trusted contexts, like on the server, or "semi-trusted" contexts, like the Android app store. Although that is a different JVM which is probably easier to secure. And in that case there has to be a manual review process before allowing arbitrary code to execute (which is imperfect, but better than nothing.)


It remains to be seen if WASI will actually have fewer holes in its sandbox. Fundamentally it is not different.


I think an important difference is that in the new architecture, there's a separation between WASM and WASI.

That is, computation and I/O are treated separately. It's more like capability-based security. WASM modules have no capabilities except the ones explicitly injected when you instantiate it.

As far as I understand, the JVM wasn't as rigorous about this, although to be fair I don't know all the details.

So WASI could still have a lot of holes, but WASM would survive and be useful. And then maybe someone else could come around and do it in a different, better way. That hasn't happened with Java.
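That injection-at-instantiation idea can be caricatured in a few lines. This is a hedged toy model, not any real WASM runtime API: a "module" receives only the host functions explicitly passed in when it is instantiated, and the same code is inert when granted nothing.

```python
# Toy capability-style instantiation: the guest can reach only what it is
# handed as imports; there is no ambient filesystem, network, or clock.
def instantiate(module_main, imports: dict):
    """Bind a 'module' to exactly the capabilities in `imports`."""
    return lambda: module_main(imports)

def guest_main(env):
    """Hypothetical guest: uses a 'write' capability if it was granted one."""
    if "write" in env:
        env["write"]("hello")
        return "wrote"
    return "no write capability"

# Instance 1: granted a write capability.
log = []
run = instantiate(guest_main, {"write": log.append})
assert run() == "wrote" and log == ["hello"]

# Instance 2: granted nothing; the same code can have no effect on the host.
assert instantiate(guest_main, {})() == "no write capability"
```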


Java inside the browser doesn't exist. That's fine. That's not what I'm talking about or making comparisons to.

Java runs just fine outside of the browser. You don't even need a browser installed to run Java applications.

Javascript and WASM run fine inside the browser, and that's not what I'm talking about or making comparisons to.

I'm talking about running WASM binaries outside of the browser. Similar to how Java runs outside of the browser. Very similar, actually. Java needs a runtime to be installed or available, so will WASI. Java is a write-once-run-anywhere type of platform, so is WASI.

Why does WASI need to exist if Java already exists for the intended uses?


> Why does WASI need to exist if Java already exists for the intended uses?

IMO, the reason is that the market showed that Java never got used like that. I can only speculate on the reasons, but as far as I can tell it comes down to:

1) The sandbox isn't good enough. See my sibling comment -- the separation between WASM and WASI makes a lot of sense, and I don't think Java had that.

2) The fact that you can't port C code to the JVM easily. In fact WASI addresses exactly that -- the API is more like POSIX than a brand "new" bunch of JVM APIs, which makes it easier to port existing native apps (C, C++, Objective C, Rust, etc.)

In other words, there's still a lot more "value" in native code apps than JVM apps. And it's too hard to port native code applications to the JVM.

For example, Photoshop, Illustrator, Word, Excel, etc. were never ported to the JVM. There are actually better analogs in the browser than on the JVM.

I ran Windows for 15 years and Linux for 10-15 years. On neither of those platforms have I ever needed a JVM. I used to play online chess in a Java applet, and that's about it. Most common desktop apps avoid dependencies on the JVM. Probably the only reason I could think of to install one is to use Eclipse or IntelliJ.

The browser is the main reason why anybody had a JVM installed in the first place. Without that hook, the JVM becomes much less important on the client. It's still very important on the server.

Java is important and successful, but it empirically did not succeed at some of its design goals. Of course, WASM and WASI may also fail -- that remains to be seen. But to me they look like a good attempt, based on significantly different technical foundations.


What is blocking C -> JVM compilation?

There seems to be such a compiler here: https://github.com/bedatadriven/renjin/blob/master/tools/gcc...


I would say it's:

1) There's a large difference between a compiler that exists and an efficient one. The JVM is designed for Java and not C, which makes efficient compilation hard or impossible. Graal, as mentioned, uses an extremely unusual research technique that may make it feasible.

This comment is related:

https://news.ycombinator.com/item?id=19065829

Basically people underestimate how hard it is to make a VM that can run multiple languages. JRuby and Jython work, but it takes heroic efforts.

2) The interface to the OS, as mentioned. You can maybe run some algorithms like image processing, but try running the native sqlite code on the JVM, or something even hairier involving networking. It's not straightforward.

WASI provides something that's closer to what C programs expect than what Java provides. As I said, the market voted with its feet. All the friction adds up. If you try hacking on that compiler or manually porting code, you'll probably get a sense of why that is.


The JVM doesn't run languages that weren't designed for it, like C or C++ or Rust. WebAssembly does.

This is important for a lot of reasons- huge amounts of existing code you can now use without JNI or whatever, a higher ceiling for optimization, more freedom to implement new kinds of languages.

The core WebAssembly standard is also much smaller than Java, as a consequence of this design. This makes it easier and/or more feasible to deploy in more scenarios, even without subsetting things the way mobile/embedded Java does.


> The JVM doesn't run languages that weren't designed for it, like C or C++ or Rust.

You can run all those languages on top of the JVM!

The JVM also runs many other languages which weren't designed for it, like Ruby and Python.


True, I forgot about Graal. IIUC though that's somewhat different from what WebAssembly has done- Graal doesn't define any sort of stable compiler target for C, or produce JVM bytecode from C, but instead (via Truffle) JITs C source or LLVM IR via partial evaluation, right?

That would suggest (again IIUC) Graal/Truffle as a mechanism for using the JVM as a WebAssembly runtime. The WebAssembly format and its associated environment/binding system are a portable way to encode C binaries, with significant benefits over JVM bytecode- that's probably the comparison I should have made.


No that's right, the stable compiler target is someone else's job.

But I wasn't comparing to WebAssembly, so you'll need to argue that point with someone else.

You could build a WebAssembly interpreter on Truffle, yes. I'm not sure anyone's tried it yet so that project is available.


> You can run all those languages on top of the JVM!

Do you have a source for this? I'm sure it's theoretically possible in a Turing-completeness sense, but considering Java lacks any concept whatsoever of a pointer, and considering that most (if not effectively all) C code uses pointers pretty extensively, I find it unlikely that this statement is meaningfully true without some severe efficiency penalties.


For example Sulong runs C, C++, Rust https://llvm.org/devmtg/2016-01/slides/Sulong.pdf.

I use it to run unmodified C extensions for the Ruby programming language on top of the JVM.

Truffle C is another example https://www.manuelrigger.at/downloads/trufflec_thesis.pdf.


Thanks to GraalVM. Well, that wasn't always the case and it prompted the need for WASM.


WebAssembly came from asm.js, which came from Emscripten. These three are a lineage of the same C compilation model, targeting a JS Uint8Array as the C heap. Emscripten was published a long time ago, probably 10 years? So, the JVM has byte arrays too...
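A toy sketch of that compilation model, with a hypothetical bump-allocator `malloc`: the C heap is one big byte array, and C pointers become plain integer offsets into it. The same trick would work over a JVM byte array, which is the comparison being made here.

```python
import struct

# Rough sketch of the Emscripten/asm.js model: all of "C memory" is one
# byte array, and a pointer is just an integer index into it.
HEAP = bytearray(1 << 16)   # stand-in for the Uint8Array backing the C heap
_brk = 0                    # bump-allocator watermark (toy malloc, never frees)

def malloc(n: int) -> int:
    global _brk
    ptr, _brk = _brk, _brk + n
    return ptr               # a "pointer" is just an offset into HEAP

def store_i32(ptr: int, v: int) -> None:
    struct.pack_into("<i", HEAP, ptr, v)   # little-endian, like WASM memory

def load_i32(ptr: int) -> int:
    return struct.unpack_from("<i", HEAP, ptr)[0]

p = malloc(4)
store_i32(p, 42)
assert load_i32(p) == 42
```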


Python, Ruby and Common Lisp were designed for the JVM?


To the extent that their object and memory models match the JVM's, you might say they were. To the extent that they don't, their JVM implementations wind up with warts and inconsistencies.

(But this is pedantry, you clearly know what I meant.)


No, I did not know what you meant. There is a difference between guessing and knowing.


Because Java exists for a different purpose, namely to run Java. (Ever tried to run your C or rust code on the JVM?)


Well, the second paragraph in the article gives you the answer:

> Why: Developers are starting to push WebAssembly beyond the browser, because it provides a fast, scalable, secure way to run the same code across all machines.

It allows you to use the same code both inside and outside of the browser. For example, Figma might use it, they already have a web app and a regular app, both using WebAssembly, so WASI would perhaps simplify their code or allow them to add more native integration to their app.

Making a product that's a slight variation of an existing product is often enough to gain significant market share. For example, was Chrome needed when there was Firefox? Was Figma or Adobe XD needed when there was Sketch? Is Rollup or Parcel needed when there is Webpack and Gulp? This could go forever. I mean, should Java be the only universal deployment platform?


Using Oracle's Java Programming Language is an immeasurable legal liability.


There are open source and non-Oracle Java virtual machines you can buy support from if Oracle isn't your cup of tea.

It's not really a legal liability either. Lawyers for companies who use Java go over those agreements with a fine tooth comb and approve them before they're signed. There are no surprises (if your attorneys aren't frauds.)


Google was not using Oracle's virtual machine, and their attorneys were not frauds, but they've been tied up in litigation for most of a decade and are facing potentially ten billion dollars in penalties.

https://en.wikipedia.org/wiki/Oracle_America,_Inc._v._Google....

Java is not a responsible or ethical choice for anybody to use for any purpose.


Google's case is less one of "using" Java, and more one of reimplementing it. It's an entirely different legal beast.


They just screwed Sun, created Android J++ with applause from wannabe Google employees, and turned down the opportunity to buy Sun and own Java.

They deserve what they get.


I am pretty sure IBM has a couple of patents regarding VM implementations.


Signed types and a proper cross language target. There is very little technical difference between the two, esp now that GraalVM has shipped. WASM is now the portable executable standard.

https://www.graalvm.org/


I'm sure this was meant to say unsigned types - Java only uses signed types.


Correct, my mistake. unsigned types.


So you're saying the JVM wasn't good enough so they had to make a better one? Why does the HN crowd constantly compare WASM with the "obsolete" JVM, when even the creator admits that it is not adequate?


JVM could have been WASM but they made some unfortunate design decisions. They started as a target only for Java, GC, signed types. WASM started from the bottom, floats and ints (signed and unsigned) with plans to add GC, threads etc later.

It isn’t that the JVM is bad, it is that it isn’t the universal compilation target that WASM is.

Remember folks getting excited about LLVM IR, Bitcode? WASM is this, but with a clean sandbox and minimal, tractable semantics.

The JVM is awesome, esp with Graalvm and Truffle. WASM is the next evolutionary step.


> why is WebAssembly outside of the browser needed?

It's arguably a better standard runtime than Java, for certain use cases. You don't see modern compilers targeting the JVM as an output target, but you do see them targeting WASM, there's probably good reason(s) for that (both technical and political/legal).

Furthermore, my understanding of WASI is that it allows standardization of _multiple_ runtimes, each domain specific.

For example, Functions as a Service. Currently AWS Lambda requires building a blob specially for them targeting their APIs and ABI. But what if Fastly, Fly.io, and the other "App CDN" FaaS providers come up with their own standard for "syscalls" for a WebASM-FaaS-1.0 spec? As a really bad example, the syscalls could be:

  - 1 GET URL
  - 2 POST URL
  - 3 Cache Read
  - 4 Cache Write
Meanwhile, in a totally different ecosystem, Ethereum can publish their own WASI for eWASM, where the syscalls are:

  - 1 Storage Read
  - 2 Storage Write
  - 3 Contract Call
Again, these are bad/contrived examples, the real syscalls would be better thought out than an HN comment, but the point is that you have two totally different runtimes, running in totally different places (on a CDN server, vs. inside an Ethereum node), both using the same tech stack w/ different "system interfaces" abstracting away what things your WASM code should have access to at runtime. Any language that has a compiler that targets WASM can be used in either case.

Furthermore, the barrier for entry as a syscall provider becomes much simpler too! If I want to make a local sandbox for FaaS testing, I just need to implement the syscalls from WebASM-FaaS-1.0 using local stubs, and now I can test my FaaS locally. Way easier than the months/years probably spent reverse engineering the Lambda environment for local testing. And if I come up with a novel solution for servicing one of those syscalls, hey, maybe I'll take my own stab at being a FaaS CDN. :)
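To make that concrete, here's a toy sketch of such a local stub host in plain Python. Everything here is invented for illustration (the "WebASM-FaaS-1.0" name, the syscall set, and the `LocalFaasHost` class are hypothetical); a real runtime would expose these operations to the wasm module as function imports rather than a Python object:

```python
# Hypothetical sketch: a local test host for an imagined "WebASM-FaaS-1.0"
# syscall table. All names and semantics are made up for illustration.

class LocalFaasHost:
    def __init__(self):
        self._cache = {}   # in-memory stand-in for the CDN cache
        self._log = []     # record of outbound requests, for testing

    # syscall 1: GET URL -- stubbed so no real network traffic happens
    def sys_get(self, url):
        self._log.append(("GET", url))
        return b"stub response for " + url.encode()

    # syscall 3 / 4: cache read and write
    def sys_cache_read(self, key):
        return self._cache.get(key)

    def sys_cache_write(self, key, value):
        self._cache[key] = value

# A "function" under test only ever talks to the host it was handed,
# which is exactly what makes swapping the real CDN for this stub easy.
def handler(host, request_url):
    cached = host.sys_cache_read(request_url)
    if cached is not None:
        return cached
    body = host.sys_get(request_url)
    host.sys_cache_write(request_url, body)
    return body

host = LocalFaasHost()
first = handler(host, "https://example.com/data")
second = handler(host, "https://example.com/data")  # served from cache
print(first == second, len(host._log))  # the second call never hits sys_get
```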

Likewise for Ethereum and eWASM. Maybe I want to make a local test environment sandbox. Instead of re-implementing the EVM I "just" need to implement those syscalls outlined in the eWASM WASI spec (if such a spec existed). So now maybe light clients ship those syscalls over the wire somehow, and don't need to keep gigabytes of block chain data locally anymore.

edits: minor formatting


OK you answered a few of my questions and I still don't see why WASM is a requirement for any of this to happen.

No matter what language I use, no matter what platform I deploy to, if I want to interact with a service or with infrastructure, I'm going to have to use the provided API(s), WASM included.

If I deploy a pre-compiled binary to an AWS Lambda and an Azure Function, and it works on both, it's because I had to have that in mind when I wrote that software. Will that not be true for a runnable WASM binary? It sure seems like it would be true that I would need to detect which platform I'm running on and then execute the proper stuff for the detected platform.

If all APIs are going to be compatible with each other, so that, say, storing a file uses the same API call no matter what system or platform you're on (never going to happen; bear with me) implementing that in WASM outside of the browser does not suddenly make it possible.

This is an opinion, and on HN this is likely to be a very unpopular opinion. I don't expect anyone to adopt this opinion, nor do I expect to sway anyone's thought with this opinion. This really (really, really, really) feels like JavaScript/web people wanting to do things outside of the browser, with performance better than JavaScript alone can provide, and then, rather than simply using an existing language and runtime, deciding that the best route forward is to pave their own road to this destination with web technologies. This does not feel like the best route to take if you want to write software that performs well outside of a browser.

There are lots of other languages, runtimes, and platforms, that allow this today. And the web is a bag of chaos: I can't even get a consistent design or UI language across major websites, today. Buttons sometimes look like links, links sometimes look like menus, and buttons sometimes look like any of those, but somehow WASM outside of the browser is something everyone (except me) can agree on?

I'm skeptical that this is a good idea for anything serious.

Just like there are implementation problems with Java, there are going to be implementation problems with these runtimes that every platform is going to need in order to run this code, now. There are going to be many security concerns because (warning: incoming opinion) nearly all developers are terrible at thinking about preventing security vulnerabilities. Those same developers are usually OK when it comes to fixing them once found, but prevention rarely crosses their minds.


> OK you answered a few of my questions and I still don't see why WASM is a requirement for any of this to happen.

It's _not_ a requirement, but at least to me it seems like a pretty good _possible_ solution.

I sense you're pretty jaded about web development, and I get that, but I don't agree this is "web people" trying to do things outside the browser, but rather I see it more as "systems people" trying to bring _some_ order to the chaos.

> nearly all developers are terrible at thinking about preventing security vulnerabilities

I totally agree. Which is why you should have a solid abstraction between the sandboxed code and what it's allowed to do, which WASM seems particularly good at (by only exposing services as syscalls). In theory, sandbox escapes in WASM should be the same level of difficulty as modern kernel exploits, so still _possible_ but _pretty hard_. And if multiple runtimes are using WASM for different things, if one is exploited it can be a learning lesson for all the other WASM runtimes, vs. the bespoke scenario where each custom VM has to go through the security bug discovery process by itself.
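As a toy illustration of that abstraction (plain Python, not actual WASM or WASI machinery; every name here is invented), the "guest" below can only touch what its syscall table explicitly grants, the same shape as WASI's function imports:

```python
# Toy illustration: the sandboxed "guest" receives only the syscalls we
# explicitly hand it, so its entire attack surface is one small,
# auditable table of functions.

def make_syscall_table(allowed_files):
    # Capability: reads are restricted to a fixed allow-list of contents.
    def sys_read(name):
        if name not in allowed_files:
            raise PermissionError(f"no capability for {name!r}")
        return allowed_files[name]
    return {"read": sys_read}

def untrusted_guest(syscalls):
    # The guest can only call what's in the table; in a real WASM host
    # there is no ambient open()/socket() for it to reach for.
    return syscalls["read"]("config.txt")

table = make_syscall_table({"config.txt": b"port=8080"})
print(untrusted_guest(table))  # b'port=8080'
try:
    table["read"]("/etc/passwd")
except PermissionError:
    print("denied")  # no capability was ever granted for that path
```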

I _think_ what you're against is some sort of future where a "WASM-based Electron" becomes the de facto standard for modern apps, and _that_ I largely agree with. In fact, re-reading the WASI standard I do feel like they are already "baking in" too much w/ WASI-core; I was expecting the core to just focus on a simple syscall ABI, not to include syscalls like filesystem reads and writes.

So maybe what I'm really looking for is a "wasi-minimum-abi" that just standardizes _how_ syscalls are made, without actually standardizing any actual syscall numbers.

In summary, I feel like syscalls are a _very_ tried and true way to specify the API/ABI for a runtime, and that I hope we see more runtimes use WASM and a syscall ABI vs. rolling their own VM, but I agree that I'm in no rush to see WASM/WASI become the "Electron 2.0" of local app deployment.


I don't think that web technologies are going to be a good solution to anything that is not the web.

The web development community hasn't even solved its own problem space very well; spreading to new problem spaces will likely not suit them as much as they believe it will.


You should consider that you just have a bias against "web people".

But this technology comes not from JavaScript web programmers, but from browser makers.

People who have specialized in sandboxing, performance, and portability for years. And they had to do this with JavaScript... and now they want a better foundation, BECAUSE of the flaws of JavaScript.

So I don't know if they will succeed in making a better JVM either, but they certainly have the expertise to have a shot at it.


I have no bias against "web people", I have a bias against using the wrong tool for the job.

This REEKS of developers who are psyched about WASM and want to create solutions where no problem exists.

If they want performant code outside of the browser, use any one of the 2-3 dozen languages that already do this! WHY is a new runtime, with the inevitable serious security issues that will be exposed throughout its life, needed for this?


Look at it this way, this won't cause any harm and there's some chance that it will develop into something useful and successful.

Right now there's no reason to conclude that this will turn into something that's no better than Java.


Has a single sandbox ever been written that has not had an escape vulnerability? I don't know of any. Sandboxes aren't safe, and everyone thinks of them as being a perfect prophylactic when they provably, demonstrably, and historically are not, in any way.

The solution is not to create another sandbox to run software in.


Okay, I'll bite, what's the solution then? Personally, I don't have any novel software ideas lying around that are demonstrably better than what is proposed by WASI.

One advantage in this new push to build another sandbox is that Rust is leading the charge and is the de facto language of choice when building a greenfield WASM project. Given its propensity for memory safety and WASI's emphasis on capability-oriented security, I think the WASI team has a good chance of building something with a lot of value. Nothing here is novel (except perhaps Rust's extreme prioritization of memory safety) but that doesn't mean it can't or won't be an improvement over prior attempts.

I wonder how many of the previous sandbox vulnerabilities were viable due to some quirky memory manipulation techniques? Quite a few I would imagine.


> Did they just invent Java again?

No. Like Java, they just reinvented DEL again:

https://tools.ietf.org/html/rfc5


Targeting the JVM (or the CLR) requires that your language work nicely in the Java (or C#) memory model.

This basically means your language needs to work nicely with stop the world garbage collection.

(And yes, I know that modern garbage collectors aren't really stop the world, but the tradeoffs are still there.)


CLR was designed for multi-language since the beginning.

Go dig one of those .NET 1.0 SDK release CDs from 2002, plenty of sample languages on it.


I know that.

The problem is that the CLR comes with the C# memory model and garbage collection.

If you don't want that, then what's the point of the CLR? You can't use most of the .Net API without objects and garbage collection.

That basically rules out any kind of real-time application. The more modern garbage collectors allow soft real time applications, but you still can't do things like use the CLR to run an airplane.

If you want to use the CLR for a real-time application, you basically have to replace most of the .Net API with a completely pointer-based version, and consider implementing your own real-time memory management model.

At that point, you might as well target your language to compile natively and use standard libraries in the operating system.


The CLR was designed to support C and C++ as well, apparently new generations don't have a proper clue what CLR is all about.

Also, contrary to WASM, there are real hardware devices actually shipping stripped-down versions of it.

And if real time is a concern, then not using a real time OS is already the big first mistake, there is nothing here for WASM to prove, only to catch up with existing battle tested solutions.


WASI layer interface for AssemblyScript from Frank Denis: https://github.com/jedisct1/wasa


Oh, this is amazing!


Excuse me if I come across as naive, I'm kind of a noob on these architectural topics, but how is this different/better than what the JVM is/accomplishes? When WASM first popped up I thought about the similarities with Java Web Applets and Flash.


1. It's an open standard that the major browsers have agreed to, so it's not a plugin that you have to install. A WASM app will work seamlessly in your browser without any additional software.

2. WASM is designed from the ground up as a compile target, not a language. We already see many languages with support for building to WASM. C, C++, Rust, and eventually when WASM supports garbage collection we'll probably see Python, JS, Go, all with support for compiling to WASM.


Go already can be compiled to WASM, and it is likely that the Go GC will always perform better than the WASM one for Go applications (with some minor exceptions)


The current Go implementation is pretty slow for a number of reasons though.

GCs usually need low-level system access in a way that is not supported by WASM for obvious reasons (security, sandboxing, ...). Go also has problems with the way they implement goroutines, if I remember correctly.

WASM will definitely need some kind of GC bridge to make things efficient for garbage collected languages. Potentially with certain primitives exposed that make shipping your own GC efficient. We'll see how it develops.


UNCOL and TIMI reborn.


(1) could be resolved by shipping Java, let's say hypothetically.

(2) the JVM was designed from the ground up as a compile target also, for Java, but is also used by Kotlin, Clojure, Ruby (JRuby), Python (Jython) and Scala, among others. Even JavaScript (Rhino, Nashorn and Graal).


The JVM was designed as a compile target for a very particular kind of language. One with a Java-like object model, with some later concessions for more dynamic languages.

C, C++, Rust, etc. do not fit into that model at all. There's a lot of software written in C, and there's a lot of performance to be gained by dropping down to that level when necessary.


By later concessions do you mean invokedynamic? How much did it help? Clojure doesn't use it even now, 8 years after it came about, I think. Is it mostly for JRuby?

The JVM is (and was from the start) a lot more dynamic than Java.


But Oracle.


I wrote a blog post about that a few months back, here's the HN discussion https://news.ycombinator.com/item?id=17616459


This is an excellent write up, thank you!


Among other things, Oracle has been… not very friendly to people and organisations working with the JVM[1]. A standard with many implementations that isn't controlled by one corporation is desirable. With that said, I kind of wish that this standard was RISC-V, but I guess it doesn't work that well for virtual machines?

[1]: https://en.wikipedia.org/wiki/Oracle_America,_Inc._v._Google....


I have been wondering this too. It seems to me that it's potentially related to memory allocation: WASM's strategy needs to be more flexible than something like the JVM's.


Java applets and Flash are dead. WASM isn't.


I love cross-over episodes like this.

Patiently waiting for when eBPF gets its turn inside the browser.


I would love for WASI/Wasmtime to emphasize:

1. Backwards compatibility with existing libc. (Maybe pick musl)

2. Platform agnostic wasm generation: the same wasm file should run in the browser (with emscripten polyfills) and across ALL OS-es, mobile included. List: iOS, Android, Mac, Linux, Windows, FreeBSD

#1 shall enable decades of legacy programs to work with minimal porting, while #2 shall enable true cross-platform capabilities without multiple codebases. Specifically, don't repeat what Node did: digressing from browser JS semantics instead of polyfilling them.


Good news: we fully agree with these goals!

On 1, the libc we're working on[1] is based on musl. It won't ever be 100% compatible with all code, because that runs into constraints imposed by our security goals, but the vast majority of code should eventually just compile when targeting this. (Eventually, because this is all early days.)

On 2, yes, that is explicitly the goal. I'd add that it's not just about OSes, but also about platforms and hardware form factors.

[1] https://github.com/CraneStation/wasi-sysroot


> 1. Backwards compatibility with existing libc. (Maybe pick musl)

Honestly, that ought to be a non-goal. We already know that POSIX often enforces models that we don't want--filesystem permissions and the fork model are two good examples of things that are broken. So why start with "implement POSIX"?


The backwards compatibility isn't a testament to POSIX's merit. Sure, please introduce better, safer paradigms whenever you set standards, but enabling legacy software to run immediately is a great value proposition.

Also, goading legacy maintainers to adopt the newer, safer paradigms is likely more effective if you show them the traction the new platform is gaining


POSIX doesn't mandate fork, that's why it invented posix_spawn.
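For example, Python exposes it directly as `os.posix_spawn` (Python 3.8+), which asks the system to create the child directly instead of duplicating the parent process the way fork() does:

```python
# Spawn /bin/echo via posix_spawn rather than fork()+exec(): no logical
# copy of the parent's address space is ever made. Assumes a POSIX
# system where /bin/echo exists.
import os

pid = os.posix_spawn(
    "/bin/echo",                   # program to execute
    ["echo", "hello from child"],  # argv for the child
    os.environ,                    # environment to give it
)
_, status = os.waitpid(pid, 0)     # reap the child
print(os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0)  # True
```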


POSIX != C, and from what I understand from the official docs, they already have a (musl-derived, even) C library.


The C stdlib is so married to POSIX they may as well be the same thing.


Actually no, go read ISO C and compare it with POSIX expectations.


…on UNIX systems. C is still a thing for other platforms, e.g. embedded, where POSIX is not a thing.


I've heard people predict that the future of the web is to replace userland, to make every program cross-platform and sandboxed close to the metal, and for JavaScript to fade into a family of "web"-targeted languages. Looks like it's happening.


In this scenario, what's the point of having more than one OS?


Same as with POSIX - different implementations of the same standardized interface that target different hardware, and have different quality of implementation (possibly optimized for different use cases).


Differences of opinion in what/how to run code outside of whatever browser people decide the sandbox boundaries are.


For end-users, I'm not convinced that there is one.

Well, besides keeping competition/innovation alive.


This sounds great. Basically the web created a cross-platform & sandboxed API to access everything we need (camera, sensors, OpenGL through WebGL, etc.) but it was very opinionated (for historical reasons) as it imposes the slow JS, the DOM, CSS/HTML, etc.

Now this is the same idea but, this time, one layer lower so that we can have performance, the language we want, etc. This is kind of what I always wished would happen.

It will be interesting to see how the sandboxing will work with already-existing sandboxing solutions. E.g. for a WebAssembly app packaged as an Android/Flatpak/UWP/etc. app, there will be the need to make mappings between the permissions of the two sandbox systems. Or maybe one day we'll even have the WebAssembly sandbox as the only sandbox (like in https://github.com/nebulet/nebulet)


WASI will have some interesting milestones ahead of it:

* Self-hosting (WASI can run inside WASI)

* GCC can run in it

* Linux can run in it

* Quake can run in it

* Chrome can run in it


Doesn't Quake traditionally come first?


Isn't Doom the standard go-to?


How can self-hosting be achieved considering Spectre etc.? Surely, since WASI inside WASI couldn't be in different OS processes, there would be an opportunity to leak data between the inner and outer WASI?


There's a difference between asking if self-hosting can be achieved, and if self-hosting can be secure.

Second, I can run an emulator in WASI that implements WASI, including multiple processes. They might not be true OS processes. But as long as I have threads, I can do something.

Also, maybe we get WASI without high precision clocks. /shrug


This is super cool! We're going to have to rush to implement this in the wasmer runtime!


This is what I was thinking. How will this affect wasmer?


Can't help being reminded of that talk by Gary Bernhardt: “The Birth & Death of JavaScript”[0] — exploring a hypothetical future where JS takes over everything without (most) anyone using it of their own volition.

0: https://www.destroyallsoftware.com/talks/the-birth-and-death...


Except it's entirely irrelevant, as WASM is not JavaScript.


Can you show me an example of a WebAssembly app that runs in the browser with JavaScript enabled?



Meant to say with JavaScript disabled.


Sure, though it’s still a web technology taking over an otherwise unrelated space ¯\_(ツ)_/¯


I don't think it's unrelated. Despite the name, WASM isn't really a "web technology" - it's a sandbox technology and a compile-once-run-everywhere technology, and there has always been demand for that outside the web, even before the web existed. It might be that the web is what created enough demand for it to happen in the end, but what do we care?

The problem with JS was never that it's a web technology. It's that it's a bad technology that happened to be in the wrong place at the wrong time to get a first mover advantage.


Should they standardize to run WebAssembly on the Web first? There's like hundreds of things to do: https://webassembly.org/roadmap/, https://webassembly.org/docs/future-features/


We're working on that, too :) See this post from last Fall where we laid out a way to think about where WebAssembly is going, which use cases to enable, and how: https://hacks.mozilla.org/2018/10/webassemblys-post-mvp-futu...


No time, busy reinventing VMs.


How much of the Fuchsia model applies to WebAssembly? It seems like they could share similar security models.


We've mainly based the current design on CloudABI/Capsicum, but it's all early days, and Fuchsia is on our list of systems to at the very least take heavy inspiration from :)


Will this be backwards compatible with existing libc?


This tutorial gives an overview of how compatibility with existing portable C code works:

https://github.com/CraneStation/wasmtime-wasi/blob/wasi/docs...


Love it. This could be a huge innovation and really push the dream of cross platform development to the next level.

I suppose this would have pretty big implications for Electron or a similar successor to aid in the UI portion of this endeavor.


Yeah, so innovative that it has already existed for decades (a virtual machine executing some fixed bytecode).


Properly sandboxed VM that you can compile practically any language into?


AS/400 TIMI comes to mind, and CLR as well.


CLR sandbox should not be considered a meaningful security boundary at this point. CAS and partial trust are both officially obsolete.

The problem is always complexity. It can be difficult to reason about safety with so many features at the bytecode level. There have been exploits using dynamically-generated methods, exception filters, and vararg handling - note the near-lack of any common patterns here, other than that they're all obscure features of the platform, the security impact of which wasn't fully analyzed.

For example, a long time ago, I reported the vararg exploit, which boiled down to the fact that you can have ArgIterator over an argument list containing another ArgIterator, allowing the latter to be mutated via the former. Thus, you can stash away an ArgIterator to the arguments of a function call that has already returned. This is basically the same as having a managed reference to a managed reference, and it's exactly why CLR prohibits this - but they forgot that ArgIterator is also a kind of a managed reference.


And WASM remains to be battle tested in the wild, as in the juicy target of millions of black hats out there.

Forcing internal corruption of code produced from C compilers (like doing a stack out-of-bounds data overwrite due to incorrect parameter sizes) is perfectly viable.

Yeah, the exploit doesn't leave the sandbox, so what. It can still be used to direct the sandboxed code to produce a different outcome from the called functions.


The CLR is not a great target for running unmodified C code. I suspect that was included in "practically any language."


Yeah, Managed C++ and C++/CLI don't exist.


I was careful to say "unmodified." C++/CLI is hardly that. It's also not at all sandboxed the way wasm is.

Why the chip on the shoulder?


You're conflating C++/CLI with the MSVC /clr compiler switch. They're distinct.

With /clr:pure (which produces CIL only, although it is allowed to use memory-unsafe features like pointers), the entirety of ISO C90 is supported on CLR, with the sole exception of setjmp/longjmp.

C++/CLI adds language extensions that allow one to interact with the CLR object model from C++ code. It is only needed if you need to call into the .NET standard library, or other managed libraries - i.e. if your C code is not portable to begin with.


I'm not conflating anything. And as I said, none of this is sandboxed like WebAssembly so it's hardly a comparison anyway.


Neither is WASM, which allows for internal corruption due to lack of memory tagging.


C++/CLI is just ISO C++ plus a couple of language extensions, you know like the Linux kernel is ISO C plus GCC extensions.

I hardly see the difference.

No chip on the shoulder, just an old guy that has seen dozens of VMs since the mid-80s, delved into others from earlier decades, which doesn't buy into WASM marketing.


Yes, it's called DOSBox. Or QEMU with dynamic syscall translation if you want to be real fancy.


JVM and CLR.


But lack of good and safe isolation was a serious problem


Yep, just like....wait for it....UCSD Pascal.


Nobody is saying that the idea of a sandboxed VM is original. The interesting parts of wasm are the theoretically-uninteresting but important-in-practice features: JS interoperability, the LLVM backend, cross-browser support, etc.


Well, many are behaving as such.

Actually, the JVM got there first with JS interoperability.


Since it's based on web tech, is this going to be focused on async, rather than synchronous, calls?


This technology will become more compelling once there is a deployment target that runs WASM+WASI binaries but won't run native binaries.

Nothing like that exists (it's too early) but it would be interesting to know if anyone is planning anything.


And here I am still writing asm.js by hand. Do we know if that's fully deprecated, or are there any plans to keep it fast and loved like wasm?


If there are any WIP implementations utilizing CloudABI and Capsicum they would be interesting to look at.


The amount of hatred in this thread for non-web technologies is insane.


Funny, I see the exact opposite. People dismissing WASM as unnecessary, or just another Java, or just another Flash, or just another Node.js, or something that shouldn't exist outside of the browser.

If it didn't have "web" in the name, I think people would hate it a lot less.


What does this mean for host bindings?


There is already a WASI, and it's a standardized IQ test. I did a double take.


This is nice and I'm excited about it, however there is a concern that people are going to start building code that targets WASI and then shim it to run in the browser. This happened to JavaScript when Node.js was released; people began using APIs intended for servers and then shimmed them to also run in the browser resulting in code bloat. I'm worried that the same is going to happen here; the video actually encourages doing just that. The browser is not a platform that should be treated as secondary.


I think the browser shouldn't be a platform at all, frankly, and WASI is one possible way we can finally stop trying to shoehorn it into being one. If everything runs a WASI runtime, which is designed to run applications from the ground up, there really isn't any need for the browser to run applications anymore, is there?


I think the world in which the browser disappears and all we have is apps is a worse world than the one we have now.

With HTML/CSS we get the relatively easy ability to add extensions to dig deep into the data of a page. One of my favorite extensions is rikaikun which adds popup translation of Japanese words. It can only do this because it can inspect a standardized data structure (the DOM) with standardized APIs.

I also love the ad blockers. They let me not only block ads but also hide parts of websites I find distracting. For example I hide the "Hot posts from other stack exchange sites" section of stack overflow because I find I get sucked into 20-40-60 minute distractions if I don't hide it.

I can also copy and paste nearly any content and link to almost any content.

In a world of native apps, none of this is really possible. Every native app will use its own internal representations of data. Different UI kits or libraries. Much of it is un-linkable and un-copyable. It drives me nuts when I can't copy and paste a phone number or address in a native app.

I don't believe getting rid of the web for a WASI based app world would be a good thing for users.

PS: I think WASI is a great idea. I just think trying to replace HTML based apps with WebAssembly apps using WASI and rendering with WebGPU is the wrong direction for most web apps.


What you’ve mentioned about HTML/CSS is true for native apps too: there is a standardized API for drawing the UI (GTK+, Cocoa, whatever Windows has) and standardized data formats that back them. The issue is that apps are not necessarily required to use these (ironically, it’s Electron et al. that break these), and they don’t use code that is readily inspectable, unlike pure JavaScript. On the other hand, though, this has slowly been becoming true of the modern web: websites are ditching the DOM for React/Vue/Angular “SPAs” that use horrible selector names and minimized, nigh-impossible-to-comprehend JavaScript so they can reimplement things that the platform provides for free. So it’s not great either way…


> there really isn't any need for the browser to run applications anymore, is there?

Of course there is. You can open a new "application" without installing, just by clicking a link or entering a URL. The beauty of browser as a platform is simplicity of navigation between apps you haven't previously installed.

Also, note that the distinction between a document and an application is vague. Is an interactive document, perhaps containing an interactive map, an application? If yes, do you really want to install these interactive documents before accessing them?


Why would you have to install a wasm/wasi binary? You can pull things over a network without a web browser you know.


The horror of Java Web Start comes to mind, and its resulting failure as well.


Why do you think it failed, and how many of those same reasons are applicable here?


How?


That's like saying "now that we have Minix there isn't any need for Windows anymore". WASI is awesome, but the Web is here to stay.


For sure, but it would be nice if there were alternative ways to run "no-need-to-install" apps without bringing all the weight of a browser with it. There's a large class of apps that don't need to link out to other URLs, don't need CSS engines (if they just need opengl/vulkan for example), and might not need as strict of security as a web page.


> There's a large class of apps that don't need to link out to other URLs, don't need CSS engines (if they just need opengl/vulkan for example), and might not need as strict of security as a web page.

Yes, there are, and those uses are already solved with Steam, the Windows Store, Play Store, App Store, etc....


"Steam, the Windows Store, Play Store, App Store, etc...."

And there's also your answer: all of those are fragmented, proprietary platforms.

WASI is not.


No, they aren't. They are a variety of distribution mechanisms, which WASI is not and does not contain.

Also, WASI is literally fragmented by design. It does not have a singular target, it instead is a bunch of modules (aka, shared libraries), and what modules you get and how they behave is up to the platform.

Native platforms have long since solved distribution, dependency management, and portable abstractions. WASI does not appear to be doing anything interesting, new, or novel here.


Which non-proprietary platform can I use today to reach the widest audience?


Not a useful question to ask or answer, as the answer is either anything or nothing depending on how nitpicky you want to get or where you feel like drawing an arbitrary line.

For example do you consider C++ on Windows to be a proprietary platform, even though C++'s STL isn't proprietary? If so, then WASI on Windows must also be proprietary, no? And if you don't consider C++ to be proprietary, then, you know, you can pick just about anything. They almost all have a standard library that abstracts OS differences and are generally portable.

All that aside, .NET Core is MIT license and already exists. So you can literally be non-proprietary, multi-platform, single-binary today with a mature ecosystem, language & library support, and tooling.

Also in the context of shipping apps let's not forget that this isn't really viable on the 2 biggest consumer app platforms, iOS (no JIT) & Android (majority APIs require Java interop), and for games it's also not viable on the other dominant platforms in those markets - Xbox One & PS4.


With C++ you have to compile to a specific platform. So yes, that works, but it is different from WASI, where you compile once and can deploy wherever a runtime exists.

And apart from that, yes, .NET is the only alternative I see today. But don't you see the benefit of a new, lower-level option entering the field?


> With C++ you have to compile to a specific platform.

In a world where CI servers are plentiful, does this actually matter in the slightest?

And aren't you going to end up doing multiple compiles with WASI anyway since the list of required modules for a platform isn't mandatory?

And even if you only use wasi-core, you're still going to be doing platform-specific builds for either the installer to fetch the required runtime or a fat binary that just bundles it. Most likely the latter, like Electron does.

> But don't you see the benefit of a new, lower-level option entering the field?

Not unless it does something new or acknowledges why previous attempts didn't succeed and how this one will be different.

Planning to do all the same things, only years later and with worse tooling, isn't exactly a compelling story for why anyone should touch it.


From reading your other comment "you can pull things over a network without a web browser...", your argument appears to be: we could have a separate platform from the browser for running web applications, and lose the need for the browser to support applications. Correct me if I misunderstood.

A benefit of applications in the browser is that they share and benefit from the same security policy decisions which are made for web pages. I think the closest thing to what you're suggesting is mobile apps, which don't always compare favourably. Users are often coerced into giving up unnecessary permissions, and they have less control over the user experience (can't block ads or change styles).


> we could have a separate platform from the browser for running web applications, and lose the need for the browser to support applications.

Yes. And the benefit is that building a new WASI implementation is almost certainly going to be much easier than building a new web browser currently is. If web browsers were to revert to being simple hypertext renderers, those would be orders of magnitude easier to implement as well.

> Users are often coerced into giving up unnecessary permissions

How is that different from the web?

> and they have less control over the user experience (can't block ads or change styles).

If I have control over who the WASI VM can and cannot talk to, I can block ads using exactly the same mechanism that is used today. Styling would depend on precisely which interface ends up in common use to render graphics for WASI applications; it is entirely possible to style native widgets or something like Qt/GTK. You can't style anything that uses a canvas regardless.


So every web site you visit now balloons to 150MB of code that just badly reinvents what Chrome & Firefox did? And also just runs with, what, all permissions & capabilities? Or no permissions? GPU, video, audio, etc... are all just not allowed then?


Yeah, sure, you go ahead and reinvent 30 years of cross-platform application APIs. I'll keep using the web until your new ones match maturity.


We have had dozens of cross-platform application APIs running in VM sandboxes, yet the browser is intent on slowly reinventing them anyway.


Eh, I'm not trying to discourage you from building cool apps on top of WASI. I just hope you understand what you're up against. And most of it is not technical; you've got to convince Apple to include a WASI runtime on iOS (among others).


Just like any new feature added to the web standard requires that Apple actually implement it in Safari, right?

I'm not really pro-WASI so much as I am against the idea that the browser is a good application platform.


Right, just like that. They are totally comparable, adding a new feature to an existing runtime is just as hard to convince as adding a new runtime itself. Totally comparable.


My understanding is that Apple still doesn't implement some new web standards, so yeah actually.

Ultimately, if a vendor refuses to support something, it doesn't matter if it is an 'open standard' or not. The openness means absolutely nothing in this instance.



