Imagine extensions for applications or databases, written in any language you want, with no ability to exfiltrate data. Imagine supporting a safe plugin API that isn't just for C and languages that FFI to C, but works natively with safe datatypes.
Today, if you want to be extensible, you typically expose a C API for people to link to, or you embed a specific language like Lua or Python. Imagine embedding a WebAssembly runtime like wasmtime, and telling people they can write their extensions in any language they want.
> tiny IoT device
But, I've looked at this in the past and concluded wasm was a pretty poor fit for small devices: 1) it only grows memory in 64KB pages (perhaps more SRAM than you might have), and 2) it requires both single- and double-precision float support.
(1) might be possible to work around by backing the memory space with smaller page allocations, at the cost of indirection for every memory access (see the sketch below). (2) seems pretty insurmountable; it'll just need a chunk of extra C library.
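Here's a minimal sketch of that indirection; the page size and lazy allocation are assumptions, not anything a real runtime prescribes:

// Sketch of idea (1): back wasm's flat address space with small host pages
// so a microcontroller never needs a contiguous 64 KiB block. The cost is
// one table lookup on every memory access.
const PAGE_SIZE: usize = 256; // far smaller than wasm's 64 KiB pages

struct PagedMemory {
    pages: Vec<Option<Box<[u8; PAGE_SIZE]>>>, // allocated lazily
}

impl PagedMemory {
    fn load8(&self, addr: usize) -> u8 {
        let (page, offset) = (addr / PAGE_SIZE, addr % PAGE_SIZE);
        // Untouched pages read as zero, matching wasm's zeroed memory.
        self.pages[page].as_ref().map_or(0, |p| p[offset])
    }

    fn store8(&mut self, addr: usize, val: u8) {
        let (page, offset) = (addr / PAGE_SIZE, addr % PAGE_SIZE);
        self.pages[page]
            .get_or_insert_with(|| Box::new([0; PAGE_SIZE]))[offset] = val;
    }
}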
Are you aware of any projects working on this for smallish microcontrollers?
Similarly, compilers like GCC will synthesize 64-bit arithmetic inline for i386 and other targets lacking 64-bit support. These days few people think twice about using 64-bit data types even though 32-bit processors are still in heavy use. (Though, AFAIU emulation has largely moved from compiler to the instruction decoder or microcode.)
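For the curious, here's roughly the trick being synthesized, as a hedged sketch rather than GCC's actual output:

// A 64-bit add expressed with 32-bit operations, the way a compiler would
// synthesize it on a target without native 64-bit arithmetic.
fn add64(a_lo: u32, a_hi: u32, b_lo: u32, b_hi: u32) -> (u32, u32) {
    let (lo, carry) = a_lo.overflowing_add(b_lo);
    let hi = a_hi.wrapping_add(b_hi).wrapping_add(carry as u32);
    (lo, hi)
}

fn main() {
    // 0xFFFF_FFFF + 1 carries into the high word.
    assert_eq!(add64(0xFFFF_FFFF, 0, 1, 0), (0, 1));
}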
Is this mandated by the standard, or is it implementation-specific?
As in, if you write your program in C, Python, whatever, and put it in wasm, then have a wasm extension system of some sort: if that C or Python is exploitable, then whatever external access that wasm instance has is now usable. And whatever memory is in the memory space can be modified.
This is an important distinction. Wasm should be thought of as a tool to containerise execution and memory, if the implementation is proven to do so.
Is that desirable? Won't it lead to a bloated mess with every extension dragging in a different language runtime with it?
no ability to exfiltrate data
"There is unfortunately one less straightforward way that attackers can access another module’s memory—side-channel attacks like Spectre. OS processes attempt to provide something called time protection, and this helps protect against these attacks. Unfortunately, CPUs and OSes don’t offer finer-than-process granularity time protection features."
"[snip] Making a shift to nanoprocesses here would take careful analysis."
"But there are lots of situations where timing protection isn’t needed, or where people aren’t using processes at all right now. These are good candidates for nanoprocesses."
"Plus, as CPUs evolve to provide cheaper time protection features, WebAssembly nanoprocesses will be in a good position to quickly take advantage of these features, at which point you won’t need to use OS processes for this."
What's your take on the browser, instead? Curious to hear your point of view.
p.s. Thanks for working with the Alliance, I think it's great news overall.
How does Java preclude plugins exfiltrating data?
Downvoters: I’m not trying to be sacrilegious; I genuinely don’t know the answer.
Unfortunately, it's fraught with danger because of the confused deputy problem (you'd need to give parts of the app permission but not others, and doing so isn't trivial).
Now, I get that sharing memory is a huge safety issue; it kind of inherently breaks the sandbox. But when I see the "nanoprocesses" bit in the article, I worry about death by a thousand paper cuts (lots of tiny WASM modules spending more time copying data than processing it). Are there ways/plans to minimize memory copies that don't conflict with the safety concerns?
Shared memory doesn't break the sandbox. Sharing all memory by default would, but controlled sharing of specific memory doesn't. Think of it like inter-process shared memory, rather than threads sharing an entire address space.
I was wondering why the focus on copying memory between the processes. It should be possible to make capabilities which represent a pointer and a length (and maybe an access mode), which could be used to give a process direct access to shared memory safely. I don't know if that could be done with low enough overhead for small objects, it's possible that there would need to be a threshold below which you would just copy values to be efficient, but it seems like exploring fat pointer capabilities would be worthwhile.
Memory mapping files or inter-process shared memory is the very coarse-grained version of this, using hardware protection. I feel like it should be possible to do something more efficient and finer grained with pointer capabilities and the static type checking that is done to verify WASM during compilation, but it may be a significant research project of its own.
For languages that can express unforgeable pointers as a first-class concept, that is indeed a very attractive, fine-grained approach. Unfortunately, bringing that to languages like C/C++/Rust is a different matter altogether.
Since we want to support those languages as first-class citizens, we can't require GC support as a base concept, so we have to treat a nanoprocess as the unit of isolation from the outside.
Once we have GC support, nothing will prevent languages that can use it from expressing finer-grained capabilities even within a nanoprocess, and that seems highly desirable indeed.
(full disclosure: I'm a Mozilla employee and one of the people who set up the Bytecode Alliance.)
The semantics of these languages aren’t incompatible with unforgeable references, though: it generally works in practice, but it’s technically undefined to create pointers out of thin air. Why can’t we take advantage of the standard here to disallow illegally created references? (Which, as I understand it, many other vendors are already beginning to do with e.g. pointer authentication and memory tagging.)
Forging a pointer is UB in all of these languages as far as I know.
It seems like you should be able to have opaque types that represent these unforgeable pointers which you can't do arithmetic on or cast to raw pointers, but can access values in type safe ways, or provide a view to a byte slice which does bounds check on access.
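Something like this hypothetical sketch, say (the names are made up, not from any real proposal):

// An opaque, unforgeable handle to host-owned memory: no arithmetic, no
// cast to a raw pointer, and every access is bounds-checked.
pub struct MemCapability {
    base: *const u8, // private, so guest code can never see or forge it
    len: usize,
}

impl MemCapability {
    /// The only way to read through the capability.
    pub fn read(&self, offset: usize) -> Option<u8> {
        if offset < self.len {
            // Sound as long as the host guarantees base..base+len is valid.
            Some(unsafe { *self.base.add(offset) })
        } else {
            None // out of bounds: no peeking at a neighbor's data
        }
    }
}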
Is there a good place for discussion of this design? I seem to be having this conversation with you and Josh both here and on Reddit, and it seems like a lot of the discussion is spread out in a lot of places.
Given that most code in Rust is safe code and includes bounds checks before access, you should be able to have the verifier rely on those when they exist, and add in bounds checks in cases in which the access is not protected by a bounds check.
Maybe that would be intractable, or too inefficient to be worth it with all of the extra bounds checks. I'm not sure. I'm asking because it's something that I feel should be possible, but I haven't been involved in the research or development, so I'm wondering if those who have been more involved have references to discussion about the topic.
I feel like you're overthinking it. Can't you just have a table that holds GCable objects and only hand out indexes to C and co?
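A hypothetical sketch of that table idea (nothing here is wasm-specific):

// The host owns the real objects; wasm (and the C compiled to it) only
// ever sees opaque integer handles it cannot dereference or forge.
use std::collections::HashMap;

struct HandleTable<T> {
    next: u32,
    objects: HashMap<u32, T>,
}

impl<T> HandleTable<T> {
    fn new() -> Self {
        HandleTable { next: 1, objects: HashMap::new() }
    }

    fn insert(&mut self, obj: T) -> u32 {
        let handle = self.next;
        self.next += 1;
        self.objects.insert(handle, obj);
        handle // this integer is all the guest gets
    }

    fn get(&self, handle: u32) -> Option<&T> {
        self.objects.get(&handle)
    }
}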
It doesn't help with making capabilities more fine-grained, though: we have to treat all code that has access to that table as having the same level of trust.
Anyway, that's great to hear! Any suggestions for discussions and such that one could follow to see how it develops?
Web binary lisp sans tail call optimization is a little more fitting IMO.
But the main question I've had is how big the overhead is, specifically since modules don't share their wasm Memory. That means data will be constantly copied between them. Compared to regular static or even dynamic linking, that may be a noticeable slowdown.
We're working on that. We want to make it possible for modules to share memory in a controlled way, without giving access to their entire address space.
Other than that, I wonder if nested page tables that are normally used for virtualization can be used to separate address spaces within a process.
(Sorry for the lack of more concrete information, documentation is still in progress.)
It is possible today to have modules import memory from the host, which means the host can provide the same imported memory to multiple modules. Then modules can pass pointers to each other without needing any memcpys.
It does require the modules to not tread on each other though, such as by requiring them to have distinct static offsets (poor man's linking), or by requiring them to be PIC / compiled with relocation info so that they can be relinked by the host loader.
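A hedged sketch of that setup with the current wasmtime Rust API ("producer.wasm", "consumer.wasm", and the "env"/"memory" import names are assumptions):

use wasmtime::{Engine, Instance, Memory, MemoryType, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let mut store = Store::new(&engine, ());

    // One host-created linear memory, at least 1 wasm page (64 KiB).
    let memory = Memory::new(&mut store, MemoryType::new(1, None))?;

    // Both modules declare `(import "env" "memory" (memory 1))`.
    let producer = Module::from_file(&engine, "producer.wasm")?;
    let consumer = Module::from_file(&engine, "consumer.wasm")?;
    let _a = Instance::new(&mut store, &producer, &[memory.into()])?;
    let _b = Instance::new(&mut store, &consumer, &[memory.into()])?;

    // `producer` can now write at some agreed offset and `consumer` can
    // read it back: pointers are passed as integers, with no memcpy.
    Ok(())
}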
It doesn't seem to me from reading TFA that this is going away, merely that they are planning to add alternatives to make it more fine-grained.
The article does mention a possible future extension of multiple wasm modules in a single nanoprocess (the section with "allowing native-style dynamic linking"), and JoshTriplett mentions in another comment some future ideas of sharing parts of memory but not all. Those things will compromise the strict initial requirement of no shared memory, and improve performance.
Those are the interesting questions for me - is not sharing memory too much overhead, and if it is, is there a way to relax that which preserves enough security with enough performance.
There are still a lot more details to figure out, of course, but that's true for wasm dynamic (shared-everything) linking in general.
Starting with speed and trying patch in security doesn’t seem to make things actually secure.
$ strace /bin/ls 2>&1 | grep brk
brk(NULL) = 0x55fb032b3000
brk(NULL) = 0x55fb032b3000
brk(0x55fb032d4000) = 0x55fb032d4000
$ ktrace /bin/ls >/dev/null
$ kdump | grep brk | wc -l
$ kdump | grep mmap | wc -l
Hopefully once WASM has demonstrated the principle other languages will follow. This seems like it'd be especially useful in the JS ecosystem, where many modules are already small enough that they'd likely be able to run with no permissions at all.
Many things went wrong. Microsoft was actively sabotaging the JVM. They implemented a very fast JVM for Internet Explorer and their operating system that intentionally broke the JVM 1.1 standard. See Sun vs. Microsoft, 1997. Microsoft lost and paid damages. .NET was created to do more damage.
Oh, and debugging is still printf-style and reading raw bytecode.
Wasmtime has had support for source-level debugging via lldb or gdb for a few months https://hacks.mozilla.org/2019/09/debugging-webassembly-outs...
Now, the preferred way to ship a Java application is to bundle the runtime and the application into a single installer, similar to, for example, Electron apps.
.NET CIL is much better in that regard precisely because it was designed to be low-level enough to compile C to it - it has stuff like raw pointers and pointer arithmetic, stack-allocated arrays, unions etc. The reason why not many languages bothered is longstanding lack of cross-platform support - Mono was around for a while, sure, but it was not the official implementation, and nobody knew whether it'd still be there next year. Today that's not really an issue anymore, but by now wasm is a better choice.
The JVM is definitely more important than Java. But it might not be the most important VM, let alone the most important thing in software.
And other languages/platforms emulating that plan doesn't diminish the JVM, except numerically.
The wasm committee made many mistakes by treating it as an idea on paper and not writing the actual implementing software.
This has resulted in significant fragmentation of implementations, each less trustworthy than the last.
If any software is going to advertise safety, it must prove it. That's done through the feedback cycle and careful development. The only ones that have this are in browsers.
Unless wasm decides on a set of standard runtimes that become trustworthy there will not be a wasm outside the browser.
For example, when I search for Python, I get Python. There are still other Pythons, but there's the Python. The same is true of all other widely used software. Wasm is failing to do that.
All of those things you've listed have only a few big players, with enough use and feedback cycles that there can be multiple different implementations.
Those different implementations are very similar. There isn't a different C per compiler; each has its own quirks and benefits and targets different systems.
SQL has multiple big implementations that deviate completely. This has a lot to do with marketing, and with the fact that data storage/retrieval is a significantly diverse area of computer science.
In each of these known technologies there is a well-known, well-backed, named software project whose name people use synonymously with its supposed standard.
Wasm doesn't have this, and that's because the implementations took place in browsers, which AFAIK did not implement a dedicated engine for it; they just integrate wasm instructions into their JS engines.
Which then led to there being dozens of different self-proclaimed 'safe' implementations. None of these have the users, time, and open source community necessary to establish this. Which will inevitably lead to exploits.
Given that situation, the only way wasm is going to live up to its goal of safety is by backing a well-made runtime.
The problem of safety is what makes this very different from other software. Your own code with bugs is an annoyance, and maybe you can't do what you wanted. But code that is meant to be safe that instead allows for damage to you, your company - that deserves a much higher level of scrutiny.
So you would no longer be able to amplify a string value into a file descriptor as you can in systems with ambient authority (which is nearly every system in widespread use today).
So "secure by default" here means, "programs conform to least privilege".
Bringing the web up to speed with WebAssembly
WebAssembly also provides a fine-grained API surface area; you can run a WebAssembly sandbox with no external functions provided, or just a few.
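As a minimal sketch with the wasmtime Rust API (the file name "pure.wasm" and its exported "add" are assumptions): instantiate with zero imports, and the module gets zero capabilities.

use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "pure.wasm")?;
    let mut store = Store::new(&engine, ());

    // Empty import list: the module can compute, but can't touch files,
    // the network, or anything else in the host.
    let instance = Instance::new(&mut store, &module, &[])?;

    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}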
WebAssembly's sandboxing isn't tied to the web; we're keeping all the same security properties when running code locally, and we're protecting modules from each other too.
Also, the WebAssembly bytecode format is designed from the beginning to support many different kinds of languages, including languages that directly store types in memory, rather than keeping everything as garbage-collected or reference-counted objects on the heap.
Are there concerns about compatibility when browsers are inevitably forced to reduce that surface area because of a security flaw?
And how are we going to keep all browsers on the same page with regard to what functionality they provide? It will suck if wasm turns into a cross browser compatibility nightmare.
Also the fact that many vendors have reached a consensus on both a MVP and an update process to future features looks like a win. If Sun, Microsoft, Apple, and Google had all been on board with the (supposedly free software) JVM the story would have been very different.
So why aren't people just shipping straight up LLVM intermediate language VMs and instead go through wasm?
Incidentally, would you consider wasm as a destination language for virtual machines for obfuscation? I.e. is it reasonable enough to implement it all in about a week? Plus I fear like the decompilation/disassembly tooling might be there too soon for it to be a real viable option, but maybe nobody's been working on that yet.
This is answered in the WebAssembly FAQ: https://webassembly.org/docs/faq/#why-not-just-use-llvm-bitc...
Still leaves open the question as VM for obfuscation (RE tooling, ease of implementation from scratch) though.
Now if I don't need sandbox, it's going to be a tougher sell. But who knows, may be it'll outperform JVM on bare metal some day.
Lack of bounds checking for multiple data accesses mapped to the same linear memory block.
Secondly, neither ISO C nor ISO C++ forbids implementations that do bounds checking by default. In fact, that is what most modern compilers do by default in debug mode.
Finally, look at memory tagging in Solaris SPARC ADI, Apple iOS, or the upcoming ARM extensions support on Android for how bounds checking can be enforced at the hardware level while supporting unsafe languages.
So hand waving such security issues is rather strange, when it should be the top concern when selling an infrastructure to run code from unknown sources.
The difference is whether the exploit is run by the browser or the VM. If the VM has a logic error and gives back the wrong result, and the browser decides to trust the result and gets exploited, then it is a browser bug, not a sandbox issue.
The other situation is when a browser spins up a VM to add 2 and 2 but then the VM starts downloading malicious files from the internet.
No kind of safe language can avoid the first case; wasm avoids the second.
Assuming that “debug mode” is -g or equivalent, then I have not seen a modern compiler that does this.
Or Solaris SPARC and iOS compilers that make use of hardware memory tagging.
Clang and GCC do need extra flags to enable FORTIFY mode though.
However on Android FORTIFY is now a requirement and future versions will make use of memory tagging on ARM hardware.
So ironically, something like Android does have a better sandboxing model than WebAssembly.
No iOS devices ship with ARMv8.5, so while I think compilers are implementing this today I am not sure if Xcode ships with this or if it's functional.
> So ironically, something like Android does have a better sandboxing model than WebAssembly.
…you're not understanding what sandboxing means.
I do understand what sandboxing means, and how WebAssembly advocates keep overselling its security capabilities by ignoring issues that other bytecode formats have taken more seriously since the mid-'60s.
Essentially the promise here is that you can download a random wasm from anywhere, run it with little to no privileges, and be sure nothing bad can happen.
There was an article here many months ago detailing how wasm on the server makes it harder to mitigate attacks due to lack of wasm-inspecting tooling compared to system utilities for processes/native binaries.
But in part that is because the attack model of wasm is "literally executing malicious code".
That’s part of why running instrumented code feels like running python.
WebAssembly memory access bytecodes only do bounds checking of the linear memory block that gets allocated to the module.
You then do your own memory management taking subsets from that memory block and assigning it to the respective internal heap allocations or global memory blocks.
So you just need a couple of structs or C strings/arrays living alongside each other, and you can overwrite one of them by writing too much data due to miscalculating the memory segment holding the data.
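A hypothetical, self-contained model of that failure mode (plain Rust standing in for the compiled module; no real wasm involved):

// Model linear memory as one byte array: the runtime's bounds check only
// guards the block as a whole, not the allocations inside it.
fn store8(linear_memory: &mut [u8], addr: usize, val: u8) {
    // This is the only check wasm performs: is `addr` inside linear memory?
    assert!(addr < linear_memory.len(), "trap: out of bounds");
    linear_memory[addr] = val;
}

fn main() {
    let mut mem = vec![0u8; 64 * 1024]; // one 64 KiB wasm page
    let buf = 0usize;    // the module's malloc put a 4-byte buffer here...
    let secret = 4usize; // ...and an unrelated secret right after it
    mem[secret..secret + 4].copy_from_slice(b"key!");

    // Overflow: write 8 bytes into the 4-byte buffer. Every store passes
    // the linear-memory bounds check, yet the neighbor is clobbered.
    for i in 0..8 {
        store8(&mut mem, buf + i, b'A');
    }
    assert_eq!(&mem[secret..secret + 4], b"AAAA");
}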
I'd rather let WebAssembly turn out to be the next Flash, when black hats start looking at it with the same care they gave Flash; no need to waste cycles myself, as it's a lost battle against WebAssembly advocacy.
With all due respect, that doesn't make WebAssembly unsafe in any way.
By the same logic, any program that takes any kind of user input is unsafe because the program could trust data it should not trust and then execute incorrectly.
If a program does not validate (untrusted) input then it is the program's fault. Not the input's fault or the input-producing-method's fault.
I agree with where you're coming from though. People are going to make mistakes and if the average developer has to interpret blobs of bits as meaningful data structures just to get things done, then we are going to see a lot of these types of problems. However, there are already projects in the works that are automating the $LANGUAGE to Wasm glue code which should completely mitigate this issue.
This really sounds like you have an axe to grind with webasm for some reason; the things you're saying seem like grasping at straws. It already works in browsers, so if there is something to exploit you could demonstrate it with a few files on GitHub.
I guess everyone knows how to corrupt memory with C code, no need for github files.
Apparently the force is strong with WebAssembly advocacy.
If there are actual exploits or security flaws, then demonstrate them with a working implementation. You seem to be trying to turn a technical discussion into an emotional one.
Why should I need to provide new examples when we have plenty to chose from?
From a technical standpoint, I consider politics, hype, parasitic corporate behavior, etc., "random chance."
Where are Google, Microsoft and the other big names you would expect?
Those folks are the incumbents. What do they have to gain from joining? Google especially has developed a colossal amount of tooling in-house that gives it an edge over others; new tech may make the lives of competition easier. Ditto to an extent for MS, though there, I suspect it's more bureaucracy and not seeing anything to gain.
> Imagine extensions for applications or databases, written in any language you want, with no ability to exfiltrate data
That's what we are working towards on Wasmer, the server side WebAssembly runtime - https://github.com/wasmerio/wasmer
In fact, we already have a lot of different language integrations (maintained by us and the community), and our software is the pioneer in the space.
Is there any reason why you think it's a good idea to do a side alliance instead of collaborating with us and the community, so users and developers can be the ultimate beneficiaries? (It's good; it just seems it's not an alliance made for the users, which is a bit weird from my perspective.)
I think a good analogy is: wasmer is to qemu as BytecodeAlliance is to rust-vmm.
Wasmer is a general purpose WASM runtime. But BytecodeAlliance is a place to build tooling for the construction of runtimes (WASM or otherwise), including special purpose runtimes. I think there is a lot of space for both in the growing non-browser WASM market.
Not sure if I got the analogy, but all I can say is that I'm very excited about the Enarx initiative :)
No Google on board; that means no Chrome support.
¹ - https://man.openbsd.org/pledge.2
I fear that at the end of the day, capabilities will have the same fate as other sandboxing mechanisms: nobody will use them. And, just so that their application works and avoid support burden, developers will tell people to use a setup that enables access to everything.
pledge(2) and unveil(2) learned from the past and are way simpler. I really wish WebAssembly had adopted similar mechanisms.
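For reference, a minimal pledge(2) sketch in Rust (assumes an OpenBSD target; the promise string and the hand-declared binding are just for illustration):

use std::ffi::CString;
use std::os::raw::c_char;

// pledge(2) is OpenBSD-only; declared by hand here for the sketch.
extern "C" {
    fn pledge(promises: *const c_char, execpromises: *const c_char) -> i32;
}

fn main() {
    let promises = CString::new("stdio rpath").unwrap();
    // From here on, the process may only do stdio and read-only filesystem
    // access; any other syscall terminates it.
    let rc = unsafe { pledge(promises.as_ptr(), std::ptr::null()) };
    assert_eq!(rc, 0, "pledge failed");
    println!("now running with reduced privileges");
}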
One important aspect here is that this doesn't just target whole apps. It also targets developers using dependencies: while it's desirable to restrict an application's capabilities, there's a lot of value in developers only giving packages they depend on very limited sets of capabilities. And that seems much more tractable, given that kitchen-sink packages aren't what most people want to use anyway.
Everywhere you have to think: who can load/run a module/process and from where, how to authenticate and authorize, which API to give it, etc...
A historical note:
Bell Labs Plan 9 had a universal OS-level solution, which Linux has somewhat adopted but could not make general enough, partly due to the higher-level ecosystem being stuck in old ways:
- per-process namespaces with mountable/inheritable/stackable union directories and optionally shareable memory (the Linux lightweight process, LWP, comes close; it was also historically copied from Plan 9)
- almost all APIs (even "system calls") as synthetic file systems (where do you think /proc came from?)
- which you could mount and access (efficiently) locally or through a secure unified network protocol (9P)
Note that Docker kind of retrofits Plan 9 ideas into the Linux kernel, embracing and extending the original ideas of Plan 9...
UNIX 8th Edition (http://lucasvr.gobolinux.org/etc/Killian84-Procfs-USENIX.pdf) and SRV4 (https://www.usenix.org/sites/default/files/usenix_winter91_f...)
A better analogy would be /dev, but that was already part of Unix from the beginning. Plan 9 is really about per process, user mountable namespaces implemented by 9P-speaking user processes; basically what you said, sans the origin of synthetic file systems and file-like objects.
Just so it's clear, the definition of "malware" used here is in-browser crypto mining and obfuscation. Only 1 in 600 of the top 1 million sites use WebAssembly. WebAssembly doesn't actually provide a new vector for malware.
I think the real problem with webassembly would be if it becomes too popular and starts to become a JS competitor rather than a complement to JS.
This doesn't seem like much of a red flag to me. If one out of every ten thousand unique sites I visit uses one hypercore while it is opened that isn't going to keep anyone up at night.
On the other hand full video editors, image editors, CAD, 3D content creation programs, silky smooth 3D games, custom video codecs and more have already been made possible due to webasm. Not bad huh?
Thanks to service workers, the miner won't go away when you close the browser, as by default (with settings normal users don't even know exist) service workers run in their own processes.
Webasm + webgl is a potent combination. C and C++ libraries can be used directly instead of trying to squeeze fast matrix and vector math out of a Java JIT that needs special wizardry just to avoid heap allocating everything.
And let's not forget that you implied that someone had to be careful of what sites they visited when it is actually a case of 1 in 10,000 sites carrying mild consequences and not security exploits.
I get that you love Java and don't know modern C++, but denying reality doesn't change reality.
I am quite up to date with modern C++, in fact more than many regular HNers, thank you very much.
Yes, and it was still unsafe as well.
As a security exercise that everyone keeps asking me about: just compile Heartbleed to WebAssembly.
I think wasmtime would likely be a good fit for your use case; you could either use the wasmtime C API, or use Rust to bind to Tcl and to the wasmtime-api Rust crate.
I've found it quite easy to embed wasmtime and run a simple WebAssembly module.
See https://github.com/bytecodealliance/wasmtime-demos for some samples of how to do so.
Right now Tcl loads binary extensions using the "load" command, this will load a shared object (.so, .dylib, .shlib, .dll, .a, whatever your OS supports for runtime loading) and call <ExtensionName>_Init.
My plan is to create a new command named "webassembly::load" that will open a WebAssembly module and provide ... some way ... to call <ExtensionName>_Init, as well as some way for the extension to call back into the ~200 Tcl API functions and some selected other APIs per platform (Solaris, AIX, FreeBSD, Linux, Windows, macOS, HP-UX, etc).
Additionally, a mechanism (probably in the form of an SDK) for Tcl extension maintainers to compile their extensions to target being loaded by "webassembly::load", with the APIs mentioned above available. Most Tcl extensions right now are written in C or C++.
You may find the witx project (https://github.com/WebAssembly/WASI/tree/master/tools/witx) useful when writing the bindings from WebAssembly to the Tcl APIs.
Is the security limited to sandboxing of the code itself or is there some sort of verification process involved?
We're providing mechanisms here, not identity-based policies.
This is, IMO, pretty huge. I'm building a game right now that supports clientside NodeJS mods, and figuring out sandboxing has been a huge pain. Similarly, I've been trying to figure out how to sandbox some of our dependencies at work and in personal projects.
I want to be able to let someone mod my games in any language, and distribute them however they want, while still providing guarantees to my users that the worst a mod can possibly do is maybe freeze your computer or something.
So much of the problems the OP describes ring true to me; it's a very exciting project.
This leaves the door open for trying to influence the behaviour of C- and C++-generated WebAssembly modules by corrupting their internal state via invalid data.
In other words, finegrained sandboxing does not solve the problem. It may be an improvement on the current -dismal- state of affairs as far as ecosystems like pypi or NPM are concerned, but I don't see how it addresses the main issues in any sort of practical, real-world environment.
Something that definitely works is that which security-conscious orgs/teams/persons currently do: ownership and curation.
Ownership implies minimization of 3rd party dependencies.
Curation implies strict quality (incl security) reviews and relentless culling of code that fails them.
The distributed engineering model that you advocate for where code is being pulled-in from hundreds of disparate sources outside of one's control is _fundamentally broken_.
WebAssembly, through fine-grained sandboxing, promotes software decoherence by amplifying the number of dependencies (since the major downside of working in this fashion is now advertised as being reined in).
When the number of dependencies goes up, combinatorial explosion ensures that the state-space is full of possible attacks. Fine-grained sandboxing does not solve this anti-pattern but can in fact make it a lot worse. You can examine each and every dependency and make sure that its sandbox is kosher but that does not guarantee anything about the interactions and transitive relationships between dependencies. The metasystem is now an amplified (by sheer number of dependencies) state-space that attackers can seek to manipulate.
Since security is a systemic rather than an isolated affair, the model that the OP advocates for is broken.
In-process sandboxing, where wasm competes, is, if anything, the security failure of the past decade. JS in browsers has been a constant, never ending battle. And it just hard-failed thanks to spectre.
The idea of everyone rolling their own, hardened syscall interfaces is a straight up terrible idea if security is your goal.
>You could subscribe to a monitoring service that alerts you when a vulnerability is found one of your dependencies. But this only works for those that have been found. And even once a vulnerability has been found, there’s a good chance the maintainer won’t be able to fix it quickly. For example, Snyk found in the npm ecosystem that for the top 6 packages, the median time-to-fix (measured starting at the vulnerability’s inclusion) was 2.5 years.
Sometimes it looks like writers think the average reader is a complete idiot. How is that supposed to be an example? First they say that it takes a long time to fix once the bug is found, and as illustration they give a period starting with the introduction of the vulnerability?
It would also be interesting to have an embedded WebAssembly plugin runtime, much like Lua is used all over now that you mention all those examples.
Absolutely. The spec provides a set of instructions and their semantics. Browsers provide a set of common runtime APIs. Non-browser environments can provide the sandbox with any API surface area they want.
> in any runtime this includes NodeJS but also excludes it as we see future runtimes
> Aside from WebAssembly being in every modern browser what do you think will be the next killer feature for WebAssembly?
Shared-nothing linking; libraries that don't have to trust each other with their entire address space. I see WebAssembly as the future plugin interface for any software that wants to be extensible.
> It would also be interesting to have an embedded WebAssembly plugin runtime, much like Lua is used all over now that you mention all those examples.
wasmtime is easy to embed; it only takes a handful of lines to load a WebAssembly file, hand it a few functions of your choice, and run it.
See https://github.com/bytecodealliance/wasmtime-demos for various demos of how to embed wasmtime.
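For a feel of those "handful of lines", a hedged sketch against the current wasmtime Rust API ("plugin.wasm", its exported "run", and the single "log" import are assumptions):

use wasmtime::{Engine, Func, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "plugin.wasm")?;
    let mut store = Store::new(&engine, ());

    // The only capability this plugin gets: a `log` function.
    let log = Func::wrap(&mut store, |x: i32| println!("plugin says: {x}"));

    let instance = Instance::new(&mut store, &module, &[log.into()])?;
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())
}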
Here are some examples on how to embed Wasm in different languages:
* Python - https://github.com/wasmerio/python-ext-wasm
* PHP - https://github.com/wasmerio/php-ext-wasm
* Go - https://github.com/wasmerio/go-ext-wasm
* .Net - https://github.com/migueldeicaza/WasmerSharp
...and many more! (just check the Wasmer repo)
> If so, you can probably exfiltrate data between nanoprocesses with spectre.
Right, this is mentioned in the article. (TL;DR: if this is a concern for you, don't use the sandbox, at least not until someone's figured out how to implement timing protection.)
If it weren't so impossible to work with the W3C, I think it would probably make more sense for the web to work towards something like a stricter, compilable TypeScript. Then sites could download the source, compile, and cache.
And that was years ago.
Seems like an awful waste to have many millions of CPUs with AVX, for instance, going unused.
It might be nice to have features for enforcing memory bounds within a module, but I wouldn't call those sandboxing, or call the lack of those features a deficit of the sandbox.
Some other bytecodes, like those on Unisys ClearPath mainframes, follow the same approach when using unchecked bounds accesses.
WebAssembly folks just hand wave it as not an issue.
Even if they were at word level, it would already be an improvement over the actual design.
For instance, memory safety does not really matter so much that people would stop using unsafe languages.
What's old is new again!
(One kid talking to another, with dad in the background hunched over an Apple ][.)
"Daddy's playing UCSD Pascal. That's the game where you try and see how many dots you can get before it beeps and you say nasty words."
UCSD p-System Program Development (p. 2-4)
While the compiler is running, it displays a report of its progress
on the screen in this manner:
Pascal compiler - release level VERSION
< 0> ...................
< 19> .......................................
< 61> .......................................
< 111> .......
< 119> ......................................
237 lines compiled
However, I'd discourage you from building complex GUIs in WA. If you do that, it means you have to reimplement everything about how the browser works and what your end users are accustomed to.
FF 70 on Arch Linux.
If you want to fiddle with your own computer, you can write your own code to do exactly that
>Don't you think there is lockdown now?
No? Anyone can author and host a website, anyone can view html+js saved on their own machine, etc. Open source web browsers exist, are fully-featured, and are popular.
* Obfuscating dark patterns, e.g. aggressive fingerprinting based on hardware, OS, cache contents, timing
* Breaking ad blockers
* Breaking tracking blockers like Privacy Badger
* Preventing uBlock Origin from removing annoying elements
* Preventing scraping
* Offering different prices for goods and services based on the hardware and software (as it happened before)