Making unsafe languages safe is not a design goal of WebAssembly. The author seems to believe that WebAssembly should provide many things that are the responsibility of the operating system and standard C library.
I do not believe that is a good way of thinking about WebAssembly. It shouldn't be thought of as an operating system target (like Windows, Linux, or Mac); it should be thought of as an architecture target (like Arm vs x86). What the author wants is an operating system and runtime, which could be written for (and standardized on) WebAssembly, and into which you then load your program.
Many people seem to think it is. I work in the area of secure software stacks, and the #1 "helpful" suggestion I get is "why don't you use WebAssembly?" As if that made any significant difference to the attack surface at all.
WebAssembly is designed to provide safety, in the sense that WebAssembly programs are sandboxed and unable to do anything you don't give them permission to do. WebAssembly won't make the runtime behavior of your programs correct, which is what the article seems to be getting at, but that doesn't make the WebAssembly VM model unsafe or insecure from a host perspective.
If you isolate untrusted code in a WebAssembly VM, that should reduce the attack surface for the system as a whole to whatever functions you expose into the WebAssembly VM.
Personally, my bias is to suggest that using Rust would significantly cut down on (but not eliminate) the attack surface inside the WebAssembly program... but some people don't like to hear that.
WebAssembly does protect the host from a compromised process. WebAssembly doesn't do jack to prevent a process (and whatever data it controls) from being compromised in the first place.
It does, in the sense that it fully protects the stack and doesn't allow creating new executable memory. Control flow can still be manipulated, but only by changing a function pointer so it targets another function of the same type.
It doesn’t even do a particularly good job of that, given the size of the WebAssembly code base and reasonable bugs-per-kLOC assumptions. If you really want to protect the host, run a small, probably-correct interpreter or JIT: one simple enough to prove properties about, with an implementation small enough to be potentially bug-free.
Well, it certainly could be helpful, if part of your stack wants to run inherently untrusted code. I think this is what gets people excited by the idea of putting it into the kernel.
It is a bit surprising, though, that many people do not realize that WebAssembly doesn't make C code safe against itself.
Making unsafe languages safe (at least to some degree) has to be a design goal of WebAssembly; else, it has zero business being used to run arbitrary code that's automatically downloaded from the Internet.
There are different kinds of safety, though. There's sandboxing, which means that applications running in WebAssembly should not have access to resources they are not privileged to access. And there's memory safety, which means the WebAssembly application itself shouldn't be able to break guarantees about memory. These are two separate things, and only a failure of the first lets anything happen beyond breaking the website you're on.
WebAssembly protects you from untrusted code getting out of its sandbox and accessing data outside its own context, and that's the whole point.
But it will not (and hasn't been designed to) magically remove crashing bugs from your code, just as (e.g.) JavaScript doesn't protect you from calling an undefined function.
But those bugs can't do any harm to your machine except crashing the WebAssembly application itself.
*This doesn't mean the sandbox itself can't contain bugs, of course, but it's the sandbox's job to prevent the stuff running inside it from accessing stuff outside.
Your system is safe from the arbitrary code. The article points out that WASM makes it really hard to catch memory bugs inside the app itself even though these bugs can't escape the sandbox.
> it has zero business being used to run arbitrary code that's automatically downloaded from the Internet
That has always been the very nature of the web.
WebAssembly, like other web applications, will be executed in a sandbox separate from other things. Isolation alone does not make it safe, but it does prevent corruption of other things.
This is why many websites execute advertisements in an iframe: the code that comes across is not trusted and often not safe. The unsafe code will still attempt to stalk and/or attack the end user, but the iframe prevents that malicious code from corrupting the serving page. There is no reason to expect anything different from WASM.