Nginx on Wasmjit (wasmjit.org)
272 points by wofo 3 months ago | 118 comments

Sounds like the big idea from Singularity (run everything in the same hardware security ring, use static analysis for memory safety) is going mainstream, incrementally. Unfortunate that this is happening right as it becomes clear that such a design is fundamentally unsound on modern hardware.

+1 for pointing out the irony, but it's not really true.

1. Hardware design flaws are not irremediable and will be fixed in time.

2. It's still very useful to run software that is trusted to not be malicious and/but not trusted to be void of memory bugs.

> Hardware design flaws are not irremediable and will be fixed in time.

There's no particular reason to believe this. Neither Intel nor AMD has committed to any form of side-effect-free speculative execution or similar. That'd require a rather large chunk of transistors and die space, and if nobody is footing the bill for it, it's not going to happen.

Preventing side-effects from leaking between processes (and ring levels) entirely at the hardware level is definitely going to happen. Within a process, though? That's not going to happen without multiple major players demanding it. Since nearly all the major CPU consumers are currently happy with process boundaries being the security enforcement zones, there's no particular reason to believe that in-process sandboxing will ever have hardware fixes to prevent spectre attacks.

> That's not going to happen without multiple major players demanding it. Since nearly all the major CPU consumers are currently happy with process boundaries being the security enforcement zones, there's no particular reason to believe that in-process sandboxing will ever have hardware fixes to prevent spectre attacks.

The amount of untrusted code running in browsers is only increasing over time, and the CPU vendors have absolutely noticed this. That's why ARMv8.3-A added an instruction that literally has JavaScript in its name (FJCVTZS), for instance.

Once you have mitigations for colocating trusted and untrusted code in ring 3, doing the same in ring 0 almost certainly isn't a big deal.

Browser vendors have already rolled out process isolation to handle this. So why would CPU vendors spend silicon on something userspace has already solved with the building blocks that are already supported?

In-process sandboxing by all appearances is simply dead.

As for "doing the same in ring 0 almost certainly isn't a big deal" no, very extremely no. If you let code run in ring 0 it has everything ring 0 can do, period. You cannot put restrictions on it, that's what spectre proved. Give code access to a process and it has entire access to that process. Similarly give something ring 0, and it has the entirety of ring 0.

Untrusted code goes in ring 3 in an isolated process. That's the security model of x86, and it's the only model that CPU vendors have any pressure to fix.

It's worse than what you're stating. Spectre is remotely exploitable with network packets (NetSpectre). The current state of the world isn't just fine with process-level security, and the chip vendors are going to need to fix it, because ultimately you can exploit them if you control the data, not just the code. They have several options available to them, all of which also let you run sandboxed untrusted code in the same MMU context as trusted code.

It is unclear if NetSpectre works in a real environment, but either way it's fundamentally similar to an IPC call and can be treated as such. As in, that's where you'd insert hardening boundaries.

Chip vendors have & will fix process boundaries. But nobody is talking about any sort of protection of any kind that would let in-process sandboxing work again. It's just not even on the table at this point.

> Preventing side-effects from leaking between processes (and ring levels) entirely at the hardware level is definitely going to happen. Within a process, though? That's not going to happen without multiple major players demanding it. Since nearly all the major CPU consumers are currently happy with process boundaries being the security enforcement zones, there's no particular reason to believe that in-process sandboxing will ever have hardware fixes to prevent spectre attacks.

I do not think there is the degree of distinction between "processes" and "threads" that you think there is. If tools exist to isolate speculation state between processes, those tools can probably also isolate threads in a process.

Of course there is. Threads share page tables, processes don't. Literally the exact distinction between threads & processes in userspace is the one that exists in hardware, too.

Processes share the higher half of the page tables at least partially, even with KPTI turned on.

Wasmjit doesn't really address #2. Memory bugs in nginx will continue to corrupt data and potentially allow RCE; they just will not be allowed to spread to the rest of the kernel (in theory).

If wasmjit is sound, it won’t allow general RCE. WASM enforces a significant level of control flow integrity; for the most part the worst you can do with an indirect branch is take advantage of type confusion to execute an unintended function. You cannot generate arbitrary code, or use something like return oriented programming to execute portions of existing functions.

Not without a big performance hit most servers don't need because they don't cross security domains.

There's been remote Spectre exploits. https://lwn.net/Articles/761100/

The big news this year was that the majority of hardware isolation designs in existence were fundamentally unsound; this is a software sandbox. Are you referring to the recent discovery of a new hardware bug class here?

As I understand it, there were basically two bug classes. One was that code that makes faulty fetches across a hardware privilege boundary can infer data from how long the fault takes. Another was that characteristics of branch prediction within the same process can reveal information about the branch path not taken, even if that branch is itself a bounds check or other security check.

The first attack isn't relevant to designs that don't use hardware isolation, but the second one absolutely is. If your virtual bytecode (wasm, JVM, Lua, whatever) is allowed access to a portion of memory, and inside the same hardware address space is other memory it shouldn't read (e.g., because there are two software-isolated processes in the same hardware address space), and a supervisor or JIT is guarding its memory accesses with branches, the second attack will let the software-isolated process execute cache timing attacks against the data on the wrong side of the branch.

(I believe the names are more-or-less that Meltdown is the first bug class and Spectre is the second, but the Spectre versions are rather different in characteristics - in particular I believe that Spectre v1 affects software-isolation-only systems and Spectre v2 less so. But the names confuse me.)

For someone with only passing attention for this stuff, your comment explains it perfectly -- thanks!

Now I'm wondering how browsers presumably already cope with this for JS, or how CloudFlare workers cope with it, or .. etc.

The immediate response from browsers was to disable shared memory across JavaScript contexts (previously you were allowed to create a SharedArrayBuffer and share it across WebWorkers, which if you're familiar with UNIX you should read as "mapping a shared memory segment in multiple processes" or "threads with shared memory") and to disable high-resolution timers, which makes it hard to get useful information out of timing attacks. I believe SharedArrayBuffer is coming back now-ish and I'm not totally sure what the mitigations are.

The panic fix was to remove SharedArrayBuffer in the short term, since it was necessary to create a high-precision timer. But the real fix is http://www.chromium.org/Home/chromium-security/site-isolatio...

Browsers just resort to process sandboxing entirely. They assume JS can escape its sandbox, but since it can only read contents that were produced from its origin anyway it doesn't really matter.

Browsers and cloudflare workers removed/neutered high resolution timers (and things from which you can build high resolution timers) to make exploitation difficult.

The CPU speculates past the bounds check and loads values you should not be able to read into the cache. You then try to find out which value was loaded by timing accesses precisely, but because the timer is imprecise, you can't tell which value it was.

Ultimately, code from each security origin is going to have to run in its own process space.

The big news this year was that you can't run code of different trust levels in the same address space.

Actually I think the big news is that you can't run them on the same CPU.

So is the idea to take a page from the RISC and VLSI playbooks and outsource the isolation and security to the compiler/JIT VM?

Not only Singularity.

Xerox Parc workstations, IBM and Unisys mainframes, UCSD Pascal, Oberon, Inferno, Java, .NET, Flash.

Or, for more recent examples: Garmin apps, watchOS bitcode, DEX on Android.

Java applets, Flash, and Silverlight were famously examples of people being overconfident in software isolation. There's been an active effort to kill off browser plugins, and wasm (and asm.js and Native Client before it) is the result of an attempt to accomplish the same goal in an actually sound manner.

Dalvik on Android is not a software isolation mechanism. Each app has its own Dalvik VM using traditional UNIX processes and user accounts for isolation, which are in turn powered by hardware isolation. (I'm betting they did this because they knew that JVM software isolation had been a disaster in practice, although in part this also means apps can link native code libraries without changing the security model at all.)

The JVM architecture had and has proven safety characteristics. The WASM machine model is more constrained in order to make proof easier. The JVM is more complex but is intrinsically safe all the same. Sun just had more dedicated resources to work through the proof.

The promise of the JVM failed not because of design, but because of buggy implementations. The drive for features and performance brought risk. And WASM is heading down the same path. WASM can add GC with an unimpeachable proof of design safety, but it would be irrelevant.

The issue isn't design or architecture. The issue is that the motivation for adding GC and other features to WASM is principally for performance. Version 1.0 of WASM will be the last specification that made security paramount. Everything after that will be a managed retreat as more and more features and code are placed outside the sandbox in the endless pursuit of performance.

To reiterate: Java applets weren't insecure because the JVM was insecure. Java applets were insecure because the vast majority of the implementation of the environment--and particularly the most complex aspects--existed outside the confines of the sandbox. The more feature-rich and performant the sandbox environment, the more code and complexity must necessarily exist outside the sandbox.

ActiveX, Silverlight, and (worst of all) Flash were far more insecure, both in design and implementation. But that's a distinction without a difference; their relative inferiority didn't make Java applets more viable.

WASM will be insecure for all the same reasons: JITs and GCs are tremendously complex beasts, with implementations least amenable to verification methods relative to most other software projects. Rust's borrow checker is worthless for ensuring memory ordering and life cycle invariants of machine objects. DOM implementations have similarly become tremendously complex, yet the focus is on how to expose these implementations and their interfaces in entirely novel and brittle ways, prioritizing performance above all else. The safest alternative, and likely sufficient for 80% of use cases, would be a message-passing interface, after all. Instead, design proposals are focused on finding the thinnest possible abstraction over direct addressing of DOM object references from within the VM. Thus the prioritization of GC.

Java has had about 20 years to get the implementation right. In my day job I still run Java desktop code because that's how server remote consoles work and they've been slow to switch to HTML5, so there was obviously a market for getting Java applets to work soundly until very recently. If your design results in buggy implementations despite 20 years of work, it's a buggy design. If your "proven safety characteristics" are that it worked on paper and not any implementation actually secured anything in the real world, it's a useless definition of "proven."

I do not think that "JITs and GCs are tremendously complex beasts" is a meaningful argument against wasm, because plain old JavaScript on the web has JITs and GCs, and is overall a much more secure environment to run than Java. (For instance, approximately every enterprise disables Java for security reasons except for people with a business need for it, and approximately no enterprise disables JavaScript for security reasons. I'm not claiming JS is foolproof, just that it's been much much more secure in practice.) You might note that Sing#, where this subthread started, had a GC. So does Inferno's virtual machine. If you think GCs make things too complex for software fault isolation to work, do you believe Singularity and Inferno are insecure?

I don't claim to be an expert in this, but I think there's a coherent reason why Java's design failed: Java attempted to do isolation at the language level (and inside a language-specific bytecode), which is a richer interface. It should be entirely possible to develop a high-performance interface for software fault isolation as long as it's a small enough interface to successfully secure, and my sense is that that's where wasm is going.

One important advantage of wasm over Java, Flash, and Silverlight is that it can learn from their failures.

Message-passing doesn't need to be slow either - just write your messages in a high-performance but securely parseable format like Cap'n Proto. We know how to do such things now; we didn't 20 years ago. The state of the world keeps advancing.

"The promise of the JVM failed not because of design, but because of buggy implementations. "

Well said. That happened everywhere from desktop to server to embedded. However, the safety-critical side of embedded further supported your point by building real-time, safe implementations of the JVM (or a subset) designed for certification. There were also companies using Ada to get systematic protections against errors, with one of them, Praxis, doing semi-automated proofs with their SPARK language. A JVM implemented in Ada, SPARK, and (where necessary) C/C++ might have been much safer.



"JITs and GCs are tremendously complex beasts, with implementations least amenable to verification methods relative to most other software projects. "

There are actually verified JITs and GCs they can draw on. I doubt they will, though. History shows they'll go with a non-verified design followed by penetrate-and-patch.

"Rust's borrow checker is worthless for ensuring memory ordering and life cycle invariants of machine objects."

I've long pushed Abstract State Machines (or languages based on them) to do this kind of stuff better. The work on memory models can be ported to something like Asmeta. The algorithms can be checked against them by solvers. Then, equivalent software and/or hardware comes out the code generator with more analysis/tests in case it messed up. I got excited seeing Galois was using ASM's for hardware/software verification recently. They'd be great for checking security of interpreters against software and hardware level issues.

"The safest alternative, and likely sufficient for 80% of use cases, would be a message-passing interface, after all."

I haven't updated myself yet on advances in typing for that stuff. Pony's method for type checking might help here since it uses a capability-secure, actor model. Wallaroo also uses it for a high-performance database. So, it's not a slouch either.


Dalvik? That hasn't existed since Android 5.

ART on Android 5 and 6 is as much a runtime as any other programming language with an AOT compilation model.

And as of Android 7, there are multiple execution modes: a hand-optimized interpreter written in assembly, a JIT compiler with PGO feedback, and an AOT compiler that takes the JIT PGO data to generate a proper executable when the device is charging and idle.

Traditional native code is also pretty much clamped down in recent versions of Android via SELinux, seccomp, and a whitelist of shared objects.

Google doesn't want you to do more than implement native methods, high-performance 3D graphics, and real-time audio, or import "legacy" libraries.

I don't consider WASM that sound because it still allows for internal data corruption of modules written in unsafe languages, instead of supporting proper memory tagging like SPARC and the upcoming ARM architecture.

ART still translates Dalvik byte code. If you separate "Dalvik the interpreter implementation" from "Dalvik the specification", you see that Android is still fundamentally based on Dalvik.

I think the parent comment's point about Android application isolation being enforced by hardware rather than software still stands.

Inferno is close, but most of the others are really using a VM for portability and flexibility, not running code at ring 0.

Xerox Parc workstations with microcoded CPUs, IBM and Unisys language environments with JIT at installation time, Oberon, Java with ART on Android 5/6, .NET on Windows 8.x/UWP are certainly not using a VM.

I legitimately can't tell if you're being sarcastic or not.

It is not my fault that many devs still don't get the difference between a VM and a language runtime, or how multiple implementations relate to a single language specification.

You know you managed to list examples that are literally case studies of VMs in "Virtual Machines: Versatile Platforms for Systems and Processes", right?

Going through your list:

Xerox PARC workstations: these didn't really run Smalltalk in microcode, but ran an interpreter written in Data General Nova asm, with the microcode "emulator task" dispatching and executing Nova machine code. There were a few new instructions added for Smalltalk, but that was stuff like bit-blit instructions. All that being said, even if the Smalltalk VM interpreter were pushed down into microcode, it'd still be (a component of) a VM.

The big iron JITing environments are quintessential hardware/software codesigned VMs.

Oberon the language you might have a point about, but the secure loader/verifier/compiler in its only implementation is absolutely a VM.

Java with ART is absolutely a VM, unless you're going to make the argument that HotSpot isn't a VM.

.Net UWP is absolutely a VM too. Yes, it's partially compiled before it reaches end users, but it also includes the entirety of .Net Core linked in for cases where you're dynamically adding new code to your running process.

Essentially, I think you've come up with some weird definition of VM that doesn't match industry or academia, and are berating people who don't follow that non-standard definition.

What IBM systems are you thinking of?

System/360 had (and used) hardware privilege levels.

Seems like few of your examples involve not using hardware for process isolation. Java: no, except a few research OSes. Flash: no.

There's been quite a few of these systems over time. Especially sold commercially.

IBM System/38, which evolved into the AS/400 and IBM i, is one of the architectures described in this book:


Far as language-based security goes, the first mainframe for businesses used a high-level language combined with a CPU that dynamically checked the programs. It's still sold by Unisys, but I doubt the hardware checks still exist.


The Flex Machine implemented capabilities and trusted procedures in the microcode:


ASOS supported a mix of methods where each app was Ada for its safety features but a MLS kernel modeled in Gypsy separated various security levels:


SAFE explored tagging at CPU level which got commercialized as CoreGuard or Inherently Secure Processor:



In embedded, there's Java processors that run bytecode natively with some support for separation. They blur the line between VM's and native apps:


He's probably talking about the AS/400 and its hardware/software codesigned VM. For some reason people call those mainframes.

The AS/400 does use hardware heavily in its isolation model, though, going so far as to currently have a custom PowerPC variant that adds tagged memory.

Both the AS/400 and System/370 have so-called language environments.

And yes, I call them mainframes, because that's how I always heard people referring to them during my summer job back in the day, so the name stuck with me even if it isn't correct.

Language environment in IBM parlance is closer to "ABI" than "VM", or "sandbox" in the rest of the world. It's a common set of idioms to allow two languages (generally ASM and a higher-level language like C) to call each other and interoperate.

There is another layer of abstraction being heavily exploited: running layers of kernels. I'm really interested in seeing more advances in this and in the unikernel approach, where if you are running in the cloud, the hypervisor already provides you with a sandbox, so why run things at a lower privilege level than necessary? There are certain security challenges we see because we keep thinking in terms of user and kernel space. If we try to narrow (and slowly remove) the line separating the spaces, we can surely address these problems in a more efficient way than is done today.

gVisor is pretty interesting, it kinda takes this idea when you run it on KVM.


Except minus the static analysis, right?

WASM gets statically verified before it runs.

The intention here is to use wasm to allow you safely run user code _within_ the kernel. Their primary targets are nginx and FUSE. Conceivably, avoiding the context switch into and out of the kernel will have significant performance implications, but there aren't any numbers out yet for nginx specifically.

That's certainly a fascinating idea. My initial thought was "Wait, doesn't the kernel already provide a sandboxed execution environment -- called userspace?" This would still have scheduling overhead, but I assume the idea is to avoid a lot of the other context switching steps such as switching page tables. And instead rely on Wasm/JIT checks to ensure ahead of time that memory violations won't happen.

Once upon a time syscalls were slow, but architectures now provide features like syscall/sysenter for switching privilege levels, with costs comparable to userspace function calls.

Once upon a time switching page tables was slow, but now we have features like PCID that allow preserving buffers.

Soon, if not already, the principal cost of context switching will be the necessity to flush prediction and data buffers. In-kernel solutions like Wasmjit must incur the same costs. Quite possibly they may turn out to be slower overall: 1) they won't be able to take advantage of the same hardware-optimized privilege management facilities (existing and future ones--imagine tagged prediction buffers much like PCID), and 2) they still incur the extra runtime overhead of running in a VM which, JIT-optimized or not, eats into limited resources like those prediction and data buffers that have become so critical to maximizing performance.

Granted, if it's going to work well at all, then Nginx seems like a good bet, especially because of I/O. But there are many other solutions to that problem. Obsession with DPDK may be waning, but zero-copy AIO is still a thing and there are more ergonomic userspace alternatives (existing and in the pipeline) that let you leverage the in-kernel network stack without having to incur copying costs. And then there are solutions like QUIC that redefine the problem and which should work extremely well with existing zero-copy interfaces.

CPUs are incredibly complex precisely because so much of the security heavy-lifting once performed in the OS is being accomplished in the CPU or dedicated controllers. And these newer optimizations were designed to be integrated within the context of the traditional userspace/kernel split.

Wasmjit looks like an extremely cool project and I don't doubt its utility. There's plenty of room for alternative approaches, I just don't think the value-add is all that obvious.[1] Probably less to do with performance and more to do with providing a clear, stable, well-supported environment for solving (and subsequently maintaining!) difficult integration problems.

[1] I just want to reiterate that by saying the value-add isn't obvious I'm not implying anything about the potential magnitude of that value-add. I've been around long enough to understand that most pain points are invisible and just because I can't see them or people can't articulate them doesn't mean they don't exist or that the potential for serious disruption isn't there.

Just an honest question: could you elaborate on the methods you mean by "there are more ergonomic userspace alternatives (existing and in the pipeline) that let you leverage the in-kernel network stack without having to incur copying costs"? I've been curious about DPDK, F-Stack, Seastar, IncludeOS/MirageOS, etc., but am wondering if there are easier ways to get zero-copy I/O.

Off the top of my head:

Netmap - DPDK-like packet munging performance but with interfaces and semantics that behave more like traditional APIs. Signaling occurs through a pollable descriptor, meaning you can handle synchronization and work queueing problems much more like you would normally.

vmsplice - IIRC it recently became possible to reliably detect when a page loan can be reclaimed, which is (or hopefully was) the biggest impediment to convenient use of vmsplice.

peeking - Until recently Linux poll/epoll didn't obey SO_RCVLOWAT, which made it problematic to peek at data before using splice() to shuttle data or dequeueing a connection request. I have a strong suspicion that before this fix many apps like SSL sniffers simply burnt CPU cycles without anybody realizing. Though in the Cloud age we seem much more tolerant of spurious, unreproducible latency and connectivity "glitches".

AIO - There's always activity around Linux's AIO interfaces. I don't keep track but there may have been a ring-buffer patch merged which allows dequeueing newly arrived events or data without having to poll for readiness first.

Device Passthru - CPU VM monitor extensions make it easier to work with devices directly. Not quite the same thing as traditional userspace/kernel interfaces, but it seems like people are increasingly running what otherwise look like (and are implemented like) regular userspace apps within VM monitor frameworks. Like with Netmap, all you really need is a singular notification primitive (possibly synthesized yourself) that allows you to apply whatever model of concurrency you want--asynchronous, synchronous, or some combination--in a way that is composable and friendly to regular userspace frameworks. VM monitor APIs and device passthru permit arranging the burdens between userspace/VM and the kernel more optimally.

> costs comparable to userspace function calls.

You're going to have to show me what CPU this is true on. A syscall is nowhere near as fast as a function call.

An indirect function call is ~50 cycles. (http://ithare.com/infographics-operation-costs-in-cpu-clock-...)

The entry and exit cost of a syscall is ~150 cycles. (Source: Many Google hits--blogs, papers--show people reciting 150 cycles exactly, so I assume there's a singular, primary source for this. Maybe I'll track down the paper later.)

I'd say that's comparable. Many syscalls take much longer, but that's just because syscalls tend to be very abstract interfaces where each call performs costly operations or bookkeeping, especially on shared data structures requiring costly memory barriers. That doesn't mean the syscall interface itself is expensive. Microkernel skeptics stopped arguing syscall overhead a long time ago, and proponents are no longer defensive about it.

A direct call without args is nearly 10 cycles on newish hardware; a vDSO call is probably +5-10 cycles on top of that. A real syscall on the same CPU that returns something like that function will probably be 4-10x the cost.

Sure, but the context is relative interface and abstraction costs. Nginx running in Wasmjit in the kernel is unlikely to be making direct calls to internal kernel functions. Even if the JIT and framework were capable of that, I would think that Nginx would still be calling through an abstraction framework that provides proper read/recv semantics. It would be the sum of those intermediate calls until reaching the same point in the kernel that you'd want to compare.

This talk is relevant: https://www.destroyallsoftware.com/talks/the-birth-and-death.... tl;dw: an in-kernel JIT has the potential to be 4% faster than direct execution. I am still dubious, however, as a JIT requires far more resources than running a binary directly.

While I'm far from convinced this is a useful thing: nginx is "untrusted code"?

Anything receiving potentially malicious input should be untrusted and sandboxed if possible. That includes the network stack itself in high-assurance security products. We also prefer simple, rigorously-analyzed software with high predictability. Other stuff often has vulnerabilities. Nginx is nearly 200,000 lines of code, per an interview with its CEO that I just skimmed. Lwan, made for security and maintainability, is about 10,000 lines of code in comparison:


Lwan's actually small enough that mathematical verification for correctness against a spec is feasible, even though costly. Unlike Lwan, I could never have any hope of proving the correctness of Nginx. Even its safety would be difficult just because of all the potential code interactions on malicious input. Leak-free for secrets it contains? Forget about it. Best bet is to shove that thing either in a partition on a separation kernel/VMM or on a dedicated machine. The automated tooling for large programs does get better every year, though. One can use any compatible with Nginx. And still shove that humongous server into a deprivileged partition just in case. ;)




User code is generally not trusted with kernel privileges, no.

Last time I measured this, the time it took the Linux scheduler to decide what task to schedule was far more than the time it took the entry code and CPU to switch from user to kernel or vice versa. Meltdown changes this, but Meltdown-proof AMD CPUs are all over and Meltdown-proof Intel CPUs should show up eventually.

So I don’t see the point.

One step closer to METAL[1]

[1]: https://www.destroyallsoftware.com/talks/the-birth-and-death... (at 18:46)

Everything said in the talk came true. Which means we are very close to a nuclear war. (And it certainly looks like a possibility the way things are going.)

Use sendfile, and if that's not enough try netmap or DPDK. Why would I want to run Nginx in kernel space?

I don’t understand. How is this possible? The POSIX API is not implemented for WASM. Non-web embeddings have not yet been standardized, nor have threads. How have they implemented this? Are they implementing a non-standard embedding and pthreads?


It seems like wasm will finally enable the "write once, run everywhere" promise that Java made but never truly delivered on, by starting with the premise that you don't need "one true language" (Java), but rather just the VM.

Yeah, I know the JVM supports several languages these days but most require non-superficial similarities to Java (garbage collected, etc.)

This ignores the tons and tons and tons of work that would really have changed that. I don't think it really has anything to do with whether you have one true language or not.

There were plenty of well-funded efforts to have "write once, run everywhere" in the past that were just VMs and formats (ANDF, etc).

In practice, a lot of things have changed since Java that have made this kind of approach feasible. As a simple example: good compiler infrastructure to build on top of is much more available than it was then. These days you pretty much just have to write a frontend.

Even though GCC existed then, it was still compiling one statement at a time!

I know there have been many efforts to write a universal VM. Two things I think will make wasm more successful than prior attempts are:

1. it already comes with platforms, via the browser or Node.js

2. like you said, the tools are here now (really LLVM made most of this possible)

Sure, I'm just saying your comment seemed to be saying "Java's write-once-run-everywhere failure was due to trying to have one true language", and I think that part is fairly orthogonal.

I don't believe its failure is entirely based on it shipping with a prescribed language--but I wouldn't say it's orthogonal. Java, the language, promised "write once, run everywhere" whereas the JVM in the beginning was just an implementation detail. The JVM slowly evolved into a universal VM concept, mostly at the hands of the community and not those (Sun, Oracle) that had the most control over it.

You literally can't write a portable "hello world" command line app in WASM. It's the worst "write once, run everywhere" of anything. Which is expected because it distinctly does not provide any standard APIs or syscalls. There's no standard library. At all.

Someone may attempt to add a batteries-included system that uses WASM with a bunch of platform-abstraction libraries, but WASM itself does not provide that. And isn't going to provide it.

You can make a portable library with WASM, assuming you have zero dependencies on anything, but that's about it.

> but most require non-superficial similarities to Java (garbage collected, etc.)

It should be noted that doing this requires non-superficial similarities to *nix/POSIX (signals, files, threads, etc). It's not like you could run Nginx on this without its POSIX impl or in the browser w/out Emscripten's POSIX impl or on any other WASM runtime w/out a POSIX impl.

> most require non-superficial similarities to Java (garbage collected, etc.)

You can (obviously) implement any language without garbage collection semantics using garbage collection, so this requirement is false.

See for example languages like C and C++ running on the JVM.

Has this been done without jumping through hoops like compiling to a specific CPU target with GCC then doing a binary conversion to JVM bytecode (NestedVM does this via MIPS I believe)?

Depends on what you mean by hoops (a la clang), but there is Graal which can run LLVM bitcode [0], though I am unsure if it can compile it. I have personally compiled C code to WASM and then to the JVM via [1]. C/C++ are complex (and/or have complex optimizations) so a compiler should be used; it doesn't necessarily have to target a specific CPU, but it still has to pick a bit width and so on. And of course once you interact with the system, some abstraction has to occur somewhere.

0 - http://www.graalvm.org/docs/getting-started/#running-llvm-in... 1 - https://github.com/cretz/asmble

GraalVM seems like an incredibly useful tool for sandboxing non-Java languages with Java/JVM interoperability. LLVM really is enabling a renaissance of language interop. This is fantastic stuff.

Yes and it traces back to MaximeVM at Sun Research Labs, a project that Oracle was willing to keep sponsoring.

How do you avoid having to copy buffers from the kernel into WebAssembly? As far as I know WebAssembly does not provide a way to access memory outside the linear memory block, but maybe in this specific case (nginx) it is possible to have all syscalls write to a buffer allocated by WebAssembly.

Wasmer just got nginx working on their wasm runtime as well!

I might be getting too old, because I truly don't get the benefit of doing this... Nobody I know really cares about portability, and I don't see how running nginx with WASM is in any way better than running it directly on the system. Does anyone care to ELI5 for me?

This is satire but still very interesting: https://www.destroyallsoftware.com/talks/the-birth-and-death...

Running it in the kernel removes the overhead of context switching into the kernel for I/O (and other things), so it could be faster. Although as this post says, right now it's only running in user space.

Does anyone else look at this title and see a jumble of letters?

I miss the days of pronounceable acronyms

"engine-ecks on whaz'm-jit" It's certainly pronounceable, though perhaps that pronunciation is non-obvious.

At a glance it actually looked like Dutch to me (with apologies to my Dutch buddies!)

Obligatory birth-and-death-of-javascript


I suppose calling it METAL would have been too on-the-nose.

Not sure why you're downvoted, this is a very interesting talk.

It's pretty relevant to the discussion of "why would you want to run wasm in the kernel", but I'm not too worried about the votes.

It's not, though... WebAssembly doesn't really have all that much to do with js, any more than Flash or Java plugins would if they ended up being standardized instead. Every time there's a wasm thread it gets posted, but it misses the point of the talk to suggest that wasm is the prediction bearing fruit.

The talk is great, but I'd suggest that's the reason for the downvotes.

> it misses the point of the talk to suggest that wasm is the prediction bearing fruit

Does it? It seems that the talk has two main points:

1. Javascript succeeded because it was (at least initially) just good enough not to be completely unbearable, but bad enough that people ended up using it primarily as a target for other languages.

2. Ring 0 JIT can be 4% faster than normal binaries.

WASM is primarily a target for other languages, and qualifies as a language that can theoretically be JITted 4% faster than native code can be run.

Point number one isn't applicable to wasm.

The execution inside the kernel is related, but nobody replies to a lua in kernel post with a link to the talk.

Because Lua isn't related to Web development, while JavaScript and WASM are.

Right, but that's my point. Wasm is tenuously connected with JavaScript because both are web technologies, so people link the talk.

But they couldn't be more different technically, and if wasm does indeed become the lingua franca of future computing it will be much more boring than the craziness of js doing the same.

The talk was great because it was about an insane yet plausible future. We now have a boring and probable future.

I don't believe in that, too much experience to believe in a miracle bytecode format that will magically succeed where others failed.

It will just be yet another VM platform.

Even less reason to link to the talk then :-)

Hi HN!

I'm Syrus, from the Wasmer team. We have been working on something similar, but with a special focus on maintainability and with bigger goals in mind:


Here is the article about our journey on Running Nginx (which funnily enough we actually accomplished just before wasmjit):


> but much more maintainable.

> we actually accomplished just before wasmjit

> Wasmer is the first native WebAssembly runtime [...]

Whoa there. I like both projects and respect competition as much as the next guy, but maintainability is subjective and being first is of little importance.

True! Statements like these without analysis hold no water.

In the article I've linked there is a better analysis on why:

1. in wasmjit, the machine instructions are hardcoded into the runtime (this is like creating your own LLVM, by hand... and only available for x86)

2. it doesn't have a single test

I was talking about my own experience here, because I tried to contribute to wasmjit before creating Wasmer... and it was quite challenging!

It might be useful to check how many people interacted with the code in each of these projects! ;)

Also note how long it took each of these projects to accomplish the same thing: Wasmer (<2 months) vs. wasmjit (6 months).

You may be correct on all those points but still come off as rude. And I think you're just trying to be helpful and steer people in the direction that you think is right. However, given the context of where you're doing it, it feels like someone is trying to ruin someone else's parade.

They are competing and think theirs is better. They are trying to get people to look at the competition. That’s not ruining anything for anyone

Then the discussion should have been a post unto itself, not a comment piggy-backing on another project's thread. I dunno, seems like basic netiquette to me.

But I agree that if you think your solution is better, there's really nowhere better to put it out there than in front of eyes that are looking at something similar.

Piggy backing rubs the wrong way, but sometimes there is interesting content.

Personally, I would challenge the piggybacker to show me something.

Talk is cheap.

I apologize if it felt that way.

That was not the intention, but rather to showcase and make sure everyone understands the tradeoffs of each of these projects :)

As an alternative form of promotion, it might be interesting to write an article about the performance issues involved in writing a wasm engine.

Is that an honest time comparison? It looks like Wasmjit implemented a parser and jit compiler from scratch. Also, their emscripten implementation is much more fleshed out and it works in kernel space. Wasmer doesn't handle signals or multiple processes in Nginx, while Wasmjit does.

We just focused on getting to market faster.

Because of that, we prefer to leverage existing open-source projects (for example, for parsing or for the IR) that already work, rather than create everything from scratch.

Implementing a half-broken nginx is getting to market faster? That's an interesting strategy.

This competition seems unnecessarily heated, but I'm at least happy there are so many exciting wasm projects out there.

Also, your two projects must be collaborating somewhat, because it really looks like the nginx.wasm file you're distributing is the one wasmjit compiled! Correct me if I'm wrong, but I don't think there's any other way they'd end up being byte-for-byte identical.

Here is our compiled version of Nginx... if you want to take a look! (or compile it yourself)


looks like your docs updated recently, but they used to refer to this identical nginx.wasm which made me think you were collaborating somewhat! https://github.com/wasmerio/wasmer/blob/master/examples/ngin...

I don’t have a horse in the race but I could at least hack around with wasmer in 15 minutes. It definitely gets the approachability thing right.

Can you run Wasmer in the Linux kernel? What makes Wasmjit stand out from all other WebAssembly virtual machines (WAVM, Wasmer, Life, wasmi, wagon, ...) is that its main goal is to run WebAssembly in kernel space.

That's right. You can run Wasmer in Linux, but not yet in the kernel (Ring 0). It will still run at native speed though.

Running in Ring 0 might open bigger risks regarding security, and we want to make sure everything is under control (with external security audits) before approaching that space.

Here's a more detailed answer about its risks: https://news.ycombinator.com/item?id=18587353

I was looking into something like this in order to port a POSIX-only program to run on Windows.

Does anyone know how to link in native OpenGL system libraries? I'm looking to link to native graphics libraries so that I don't have to pass through Emscripten's OpenGL -> WebGL emulation layer. I'd like to drop the browser render layer all together and just have GLFW or SDL take care of rendering native client windows.

That's something we are super interested in as well. It should be easy to accomplish with the right tools :)

I was looking into (ab)using wasmer for compiling dylibs to wasm and loading them with Python. Do you know if someone already tried something similar?

It should be feasible! No one has tried it yet afaik, but I think it's a great idea :)

How are you different from wasmtime?
