
WebAssembly in the Linux Kernel, with Faster-Than-Native Performance - k__
https://github.com/wasmerio/kernel-wasm/blob/master/README.md
======
andrekandre
> Since WASM is a virtual ISA protected by a virtual machine, we don't need to
> rely on external hardware and software checks to ensure safety

famous last words...

> faster than native

if you used a statically compiled (and safe) language in-kernel, i would
assume that would still be faster than wasm, or am i missing something...?

~~~
krageon
There is _absolutely_ no way that a more abstracted piece of software is
faster than a less abstracted piece of software, unless there is some defect
in the latter. Even the hypothetical equal performance is more like a holy
grail (though I have had professors swear to me that they have seen it reached
in isolated circumstances, so maybe a little less hypothetical than that).

~~~
AnIdiotOnTheNet
I don't think that's a 100% accurate statement. My understanding is that JIT
techniques do often allow non-native code to run faster than native code by
taking advantage of information available during the program's runtime for
optimization,
in addition to specific hardware features that the compiled binary version may
not have taken advantage of for compatibility reasons.

~~~
Accujack
In other words, it's possible for a non native program to be faster than a
native program provided the native program is less optimized for the execution
platform.

This is not exactly news.

~~~
Someone
A VM can also optimize for the workload and the configuration of a program in
ways that a compiled-once program cannot (profile-guided optimization can help
there).

A simple example is:

    log.debug("foo:" + bar + "(" + baz + ")")

A native C++ compiler would have to do the string concatenation and must call
_log.debug_. A VM can (sometimes) detect that “log.debug” is a no-op, and skip
computing its argument and calling it, and can even recover when “log.debug”
stops being a no-op. It can even do that if the code for _log_ gets loaded
dynamically at runtime.
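
A rough sketch of that control flow (hypothetical code, not any real VM's
internals; a JIT would do this at the machine-code level, but the shape is the
same): the specialized version guards on the condition the JIT observed, skips
the string concatenation entirely, and falls back to the general path if the
guard fails.

```python
# Hypothetical sketch of guard-based dead-call elimination.
class Logger:
    def __init__(self):
        self.level = "INFO"
        self.messages = []

    def debug(self, msg):
        # No-op unless debug logging is enabled.
        if self.level == "DEBUG":
            self.messages.append(msg)

log = Logger()

def emit(bar, baz):
    # Naive version: always pays for the string concatenation.
    log.debug("foo:" + str(bar) + "(" + str(baz) + ")")

def emit_specialized(bar, baz):
    # What a JIT might emit after observing that log.debug is a no-op:
    # a cheap guard; the concatenation and call are eliminated otherwise.
    if log.level == "DEBUG":   # deoptimization guard
        emit(bar, baz)         # recover: fall back to the general path
```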

~~~
krageon
What this means in short is exactly what was already said, but in more words
and in such a way that it seems like that wasn't what was already said. You've
described a native program that wasn't optimised competing against an
optimised non-native program. It's already been established that this is a
scenario where the latter might be faster.

------
bppete
WebAssembly in the kernel faster-than-native reminds me of the comedic talk by
Gary Bernhardt, "The Birth & Death of JavaScript" [1]. Great foresight.

[1] [https://www.destroyallsoftware.com/talks/the-birth-and-
death...](https://www.destroyallsoftware.com/talks/the-birth-and-death-of-
javascript)

------
billconan
How do you debug wasm? I previously tried compiling LLVM into wasm, and the
debuggability was very poor. I could only rely on limited stack traces and
logging.

------
klysm
I don’t see how WASM could possibly be faster than native, since it has to be
translated into native code first, right?

~~~
fulafel
There are a lot of ways that install-time or JIT compilation can have
advantages over static binaries. For the theory see partial evaluation,
feedback directed optimization, dynamic recompilation etc.
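
A toy illustration of partial evaluation (hypothetical code, not tied to any
wasm runtime): a generic function is specialized once one of its inputs is
known, the way an install-time compiler could specialize code for the target
configuration.

```python
# Hypothetical sketch of partial evaluation: a generic power function
# specialized once the exponent is known, so no loop or exponent check
# remains at call time.
def make_pow(n):
    # Generate straight-line code for a fixed n.
    body = " * ".join(["x"] * n) if n > 0 else "1"
    ns = {}
    exec(f"def pow_{n}(x): return {body}", ns)
    return ns[f"pow_{n}"]

pow3 = make_pow(3)   # effectively: return x * x * x
```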

But yes, "faster than native" is not precise enough wording for this; it
should say "faster than conventional statically compiled C" or something.

------
justinclift
> Faster than native (partially achieved)

So, bogus then.

------
cjbprime
Yavascript!

