Using WebAssembly threads from C, C++ and Rust (web.dev)
183 points by belter 3 days ago | 89 comments

The bummer with WASM threads is the COOP/COEP header requirement: it only works if you have control over the web server configuration (to set the required response headers).

This means WASM threads can't be used with popular hosting solutions like Github Pages. For library authors like me, the feature is therefore nearly useless: it splits the audience into people who have control over their web servers and those who don't, which means maintaining two code paths, one with and one without threading support. And even if you theoretically have control over the web server configuration, it's still more hassle, because the people who write the code are usually not the same people who configure the web servers. So instead of just handing a couple of WASM files to IT, you also need to find the right people and ask them to tweak the web server configuration just for your use case.
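Concretely, cross-origin isolation (the precondition for SharedArrayBuffer and thus WASM threads) requires exactly these two response headers on the top-level document:

```http
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```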

Sometimes I don't understand what's going on in the heads of the people thinking this stuff up :/

> Sometimes I don't understand what's going on in the heads of the people thinking this stuff up :/

So this post made me look up[1][2] COOP/COEP, and as far as I can tell this is a security measure. Seemingly nobody knows, at this point in time, how else to enable shared memory in WASM without this limitation.

So what in your mind could have been done better? I agree it really sucks having your WASM apps live in two camps, single and multithreaded, but it seems like we, as users conceptually have two choices:

1. Don't get shared memory at all, or 2. Get shared memory in limited scenarios.

#2 still seems better than #1, no?

Or do you perhaps think the performance opt-in is overly aggressive? I.e. if we just enabled shared memory always, we'd reduce the WASM split with minimal issues. Alternatively we could do the reverse, opt-out, such that for resource-constrained environments the phone/whatever could connect to `mobile.example.com`.

[1]: https://web.dev/coop-coep/ [2]: https://www.youtube.com/watch?v=XLNJYhjA-0c&t=4s

Well, "obviously" the web should have a mechanism in place that allows requesting security-sensitive features without having to poke around in the web server configuration, because in many cases this is "off limits" to the people who author the files being hosted. How this is achieved I don't really care; I only know that the current solution is half-baked.

The underlying problem is that this is a classic finger-pointing situation that will probably never be fixed, because the web security people point the finger at the web hosters, and the web hosters shrug it off because 99% of their customers don't need the features, since they just host their static blog there.

HTML meta headers used to be the solution to this kind of stuff, like the <meta charset="UTF-8"> tag for example (which contains information you can also provide in an HTTP header).

Good use of past tense, given that they aren't the solution any longer.

Maybe, depending on the app, a layer could be created, like a wasm inside a wasm, kind of like a Docker-type thing that would allow an app to live inside a wasm virtual machine.

> So what in your mind could have been done better?

If it's a security risk, there shouldn't be an option. Setting up a web server is a low bar for malicious actors.

It's not a security risk. The security risk was removed, and the provided feature is an expensive workaround to avoid the security risk.

The Github Pages situation is definitely unfortunate. We tried to contact them for a couple of months to ask them to add an option for COOP/COEP before the deprecation, but didn't have any luck with our contacts.

Well something as basic as CSP is also crippled without access to the server to set headers. And a lot of other stuff.

The more relevant question here is why so few (if any) web hosting solutions allow the user to configure custom server rules.

TBH I wonder why response header requirements can't be set right in the meta tags in index.html. Would this be any different than having a provider-specific config file next to index.html from a security pov?

Some of them can be, but not all. Some headers are needed before the content is loaded. With ubiquitous virtualisation these days, I don't see why hosting providers can't allow users more control over their server without compromising security.

You don't even need virtualization at all. Apache has allowed this for twenty years!

All the PHP (Apache) hosting providers I know allow user-defined configuration on a per-directory level.
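On such hosts this can be as simple as dropping a per-directory config file next to the hosted files; a sketch, assuming the host's Apache has mod_headers enabled:

```apacheconf
# .htaccess (requires Apache's mod_headers module)
Header set Cross-Origin-Opener-Policy "same-origin"
Header set Cross-Origin-Embedder-Policy "require-corp"
```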

> something as basic as CSP is also crippled without access to the server to set headers

Why would communicating sequential processes be crippled by this?

Ah, I see. A different namespace.

I've not tried that, but wouldn't the HTML meta tag work as well?

Something like

    <meta http-equiv="Cross-Origin-Embedder-Policy" content="require-corp" />
    <meta http-equiv="Cross-Origin-Opener-Policy" content="same-origin" />

No, http-equiv only supports a hard-coded whitelist of headers. https://html.spec.whatwg.org/multipage/semantics.html#pragma...

It seems the answer to the question "wouldn't this work" is "no", but I'd ask "shouldn't this work?" what's the problem with this being configured in a meta tag?

Apparently not, but I haven't tried myself:


You should still provide a fallback without SharedArrayBuffers and atomics, because Safari doesn't provide them.

> COOP/COEP headers requirement

Just like CORS proxies maybe someone can make a public COOP/COEP proxy.

Stupid requirements will lead to stupid workarounds.

Actually you could probably just use CloudFront if you don't mind spending a couple bucks a month and configure it to add some headers.

Ran into the same issue using Github Pages with an FFmpeg/Wasm project. Hoping they add support for these headers soon.

I could use something like Netlify, but I like the convenience of the gh-pages branch and deployment.

Agreed, couldn't "I hereby relinquish my ability to bring other-origin content into this process without their opt-in" be a JS API call (one-way)?

HTTP headers should not affect app functionality in this way.

This seems like a GitHub pages problem more than anything. Setting a header is a reasonable way to direct the client to behave a certain way, GH just doesn’t want to set that yet?

It's worth noting that Netlify is a much superior solution to GitHub pages that's just as easy to use, and it supports custom headers.

That may be true, but it's up to the library users what hosting service they use. The people that decide to use (for instance) Github Pages shouldn't be left in the cold.

I'm not a native English speaker. The title got me confused.

When looking at the title, I thought the article was about running a WASM module in a separate thread from a C/C++/Rust program. For example: https://github.com/WebAssembly/wasm-c-api

However, after reading it, it seems to be the opposite.

It's about running a WASM module that uses threads, implemented in C, C++ or Rust. In that situation, I think the title should be rephrased as:

Running WASM threads written in C, C++ or Rust.

Is it just me, or is the use of "from" misleading? Is it "executed by/called from" or "compiled from"?

Sorry for asking this unrelated question. As a non-native speaker, every time I have this type of situation, I start to self-doubt.

I am a native English speaker and I thought the same thing.

Substitute "posix threads" to see why it was written this way.

WebAssembly is the platform (aside: and an assembly language) that provides WebAssembly threads. It makes sense to say "use posix threads from C or C++". Posix is underneath whatever language you use.

Saying "running posix threads written in C or C++" seems to suggest that someone is running their own custom threading library in C or C++.

I don't think "use posix threads from C or C++" is a good example, because posix threads are natively available to C and C++, whereas WebAssembly threads are not; the code has to be compiled into wasm first. That's why I asked whether the word "from" meant "compiled from" or "executed from".

Yes, wasm is a platform, but a C++ runtime can also be a platform. You could build a C++ runtime/compiler into wasm to compile and run C code. Or you could embed a wasm runtime into a C++ program (i.e. the GitHub project I mentioned). You can run a Linux VM inside a Windows host, or a Windows VM inside a Linux host; which one is the platform?

And what if we say "use posix threads from Rust"? I would interpret it as a Rust program calling posix threads, perhaps implemented in C, or a Rust program calling a Rust crate wrapping posix threads.

Your suggested title makes me (native English speaker) interpret this the opposite of the way you intend.

I don't think the original title is bad. The ambiguity as to whether the C/C++/Rust code is guest or host seems natural to me.

WebAssembly "threads" depend on SharedArrayBuffer, which is not available on Safari. Whatever your opinion of Apple or web feature adoption in Safari, this definitely puts a damper on things if you need to support mobile browsers.

Web workers already exist, and there are wasm threading libraries that target just web workers.


It's just that with SharedArrayBuffers and atomics we unlock better performance and can match the APIs of native OSes. But that's okay; we can still have a fallback for Safari (especially if we can architect the app so the threads only communicate by passing messages) and it shouldn't be a big deal.
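For what it's worth, that message-passing architecture can be sketched in plain Rust with std channels; the same shape maps onto postMessage between Web Workers when SharedArrayBuffer is unavailable. The function and its chunking are invented for illustration:

```rust
use std::sync::mpsc;
use std::thread;

// Threads communicate only by passing messages over channels, never
// through shared memory, so no SharedArrayBuffer/atomics are needed
// when this pattern is ported to Web Workers.
fn parallel_sum(chunks: Vec<Vec<i32>>) -> i32 {
    let (tx, rx) = mpsc::channel();
    let n = chunks.len();
    for chunk in chunks {
        let tx = tx.clone();
        thread::spawn(move || {
            // Each worker sends its partial result back as a message.
            tx.send(chunk.iter().sum::<i32>()).unwrap();
        });
    }
    // Collect exactly one message per spawned worker.
    rx.iter().take(n).sum()
}
```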

Yes, I've been using WebWorkers for 10 years. My note was intended to give folks a heads-up before they go rewriting their web app in C++ or Rust to take advantage of concurrency via SharedArrayBuffer.

Even on the beta of the next Safari?

SharedArrayBuffer was disabled in Safari 11 due to Spectre and Meltdown vulnerabilities. https://bugs.webkit.org/show_bug.cgi?id=212069 re-enabled it behind a flag, and talks about making it public "Once cross-origin isolation is done". However, it links to https://bugs.webkit.org/show_bug.cgi?id=215407 as representing when cross-origin isolation is complete, which as I understand it hasn't seen any activity in the past year.

As far as I can tell, it's not enabled in the beta, but looks like it's available on the Safari Technical Preview on the Mac behind a flag:


According to https://caniuse.com/sharedarraybuffer it's default disabled on Safari due to spectre/meltdown, but can be enabled.

In addition to SharedArrayBuffer, it looks like the atomics themselves are in-progress on Safari[1]. They don't work for me on Safari Technology Preview, FWIW.

As far as I can tell, “threads” from a WebAssembly implementation's perspective is just the combination of SharedArrayBuffer and atomics; support for the actual threading comes from Web Workers which are already a separate standard. Compilers like Emscripten then use the combination of the three to implement threading.

[1] https://webassembly.org/roadmap/

What doesn't work about them?

I use `Atomics.wait` and `Atomics.compareExchange` and maybe a couple other APIs and it works well on Safari Technology Preview (if you enable SharedArrayBuffer).

Ah, I should have clarified that I mean the assembly instructions for atomics, rather than the JavaScript API. I.e. the opcodes listed here: https://github.com/WebAssembly/threads/blob/master/proposals...

Oh right, that makes sense in the context of this thread :)

> As far as I can tell, “threads” from a WebAssembly implementation's perspective is just the combination of SharedArrayBuffer and atomics;

FWIW that's exactly what the linked post describes :)

Unpopular opinion: We would've been better off keeping web plugins and isolating those in a "click to play"-container rather than turning the web browser into an operating system.

The problem is that the current state of operating systems is terrible. There's almost no isolation, tons of complexity, tons of attack surface, lock-in, etc.

Browsers are a massive improvement as a platform for developing code, or at least they have the potential to be.

Only if you define operating systems as "Windows, MacOS, and desktop Linux"

Otherwise mobile operating systems have had isolation and a robust native platform for years. It's kinda why mobile apps absolutely exploded and browsers have been desperately trying to keep up for almost a decade now.

They also have proper graphical APIs, instead of trying to use an API that targets 2012 hardware in 2021.

Yeah, mobile platforms are moderately better.

It’s almost like operating systems should keep up or something.

It's really unfortunate that multiple attempts at capability-based microkernels have basically failed to come to fruition.

There should be a FOSS OS with powerful OS-level security features, but it just hasn't happened.

I think every major OS kernel (Windows NT, Linux, Darwin) actually has optional sophisticated security capabilities these days. I think the problem is that it's really hard to make these work nicely with legacy software? The whole classic UNIX-ish model sort of falls apart.

I assume by capabilities they mean specifically a capabilities-based security model.

Well, doesn't Linux have that? Maybe you could go into more detail about what you feel is missing today.

What linux calls capabilities is a misnomer that isn't particularly effective - it's sort of splitting root up into smaller pieces.

This is totally different from the actual capabilities security model, which is closer to a "if you can name a resource you can access that resource". So, for example, imagine every file on your system has a path with a uuid, and there is no "ls" - you'd only be able to access files whose names had been told to you/ that you created yourself.

That's a capabilities model, and Linux does not have that.

I always thought that's what Fuchsia is about?

Writing JavaScript and making sure my web UI gets laid out exactly as I intend makes me feel like I'm making a UI in a word processor that has a very advanced macro system — but it's still a word processor at its core.

Flash on the other hand felt more like a proper programming environment. A shame it was proprietary. And a big shame Adobe didn't open-source Flash Player after discontinuing it.

I mean what is the use of a "Web Browser" anyway? Couldn't each OS app communicate with its server over TCP/IP?

At what point did people decide that HTML/CSS/JS is the epitome of user interaction for dynamic applications such as document editing, mapping and routing, media playing, chatting...?

Navigating to a web page is so much easier than installing an application, even with modern operating systems like iOS. And while web pages are getting more and more bloated, applications are like 10x more bloated. And it's even officially supported: modern Android applications require embedding something like a megabyte of various "compat" libraries. I don't know what the state of things is on iOS, but in the past the Swift runtime had to be embedded as well (you could avoid it with Objective-C, but that's so out of fashion).

Another factor is cross-platform apps. Web app is still the most cross-platform solution. Also you can create and update web apps without gatekeepers approval.

> Navigating to a web page is so much easier than installing an application.

Perhaps, but I don't spend a lot of time installing applications, nor am I concerned about megabytes of disk usage. None of the web applications I use do anything particularly interesting - all of the interesting applications I use are not web apps.

Web apps give maximum convenience to developers and sysadmins and in some cases, this is a worthwhile tradeoff, but often enough it just makes for an inferior product.

But the problem is that the premise of web falls apart if you have to install a separate app for every kind of content. I want to be able to have a hyperlink on my website to a TikTok video or an Instagram post and have people actually see it rather than go "ugh I have to install an app to see the content you are linking to". I want to be able to just click around the web without having to ever think about the boundaries between "apps". This is also why a lot of us hate authwalls: "oh god another account I need to set up"... it doesn't take much time to set up an account, but in a world where every single damned website would want an account you will never be done with it. The dual reasons then why you don't spend much time installing apps is because on the one side the web works well enough that people just link to stuff on the web and it works most of the time... and yet on the other side you also see people choose to centralize things into places that are easier to link stuff from, including apps people are more likely to have installed, which is a background force of centralization.

The world people (like you) seem to want is one where the web is essentially dead and everyone just sits around in some subset of walled gardens that they don't really leave, with some minimal amount of hyperlinks available to get content in and out of other products. I have a link to a map? Well, you better have Google Maps installed or it won't work. That just seems like a depressing failure of vision to me: the goal should be to get the web to the point where it replaces all of these stupid apps we have to have, so no one ever has to decide if an app is "installed or uninstalled" ever again: all content is immediately consumable at all times with nothing other than a URL. And we could be there if people with vested interests--like Apple--stopped actively trying to hobble web standards so people are forced to install more apps constantly.

> I want to be able to have a hyperlink on my website to a TikTok video or an Instagram post and...

To me, that's not an interesting application. If you want to use the web to share images or videos, that's fine.

If you want to create such content however, why not have a native application that can actually take advantage of the hardware?

> The world people (like you) seem to want is one where the web is essentially dead and everyone just sits around in some subset of walled gardens that they don't really leave, with some minimal amount of hyperlinks available to get content in and out of other products.

Hyperlinks don't necessarily make my life easier. Let's say I want to get <large file> from application A and into application B: The web would just get in the way. I have a file system and all (native) applications can read/write/mmap it. Import/Export workflows exist. It's a solved problem, the web doesn't need to eat everything.

> That just seems like a depressing failure of vision to me: the goal should be to get the web to the point where it replaces all of these stupid apps we have to have, so no one ever has to decide if an app is "installed or uninstalled" ever again: all content is immediately consumable at all times with nothing other than a URL.

But then how would the largest companies in the world extract a 15% to 30% tax on the entirety of the mobile app market, and soon the desktop app market as well?

> I don't know how iOS state of things is, but in the past Swift runtime was required to embed as well (you could avoid it with Objective C, but that's so out of fashion).

This is no longer true on iOS and hasn't been for at least a couple years.

So essentially it was a race between sandboxing environments like the JVM, .NET and browsers? I think the JVM had big potential to dominate this sector...

The JVM sandbox was mostly an afterthought IMO. There were a lot of vulnerabilities for escaping the sandbox, many more compared to browsers.

But the nail in the coffin IMO was Steve Jobs, who decided to adopt neither Flash nor the JVM in mobile Safari.

It's amazing how much of our everyday experience results from the decisions of a few people or from completely random coincidences.

There were earnest early attempts at making Java and .NET the language to run webapps, that all failed. For reasons.

So, it wasn't for lack of trying. Javascript won by default, just by being slightly less nasty. Many of us run (selective) Javascript blockers to keep it from being too nasty. They don't always succeed.

The point of sandboxing and ease of deployment. I don't want to run a separate program for each random site I visit.

I think it's because the document-centric web was already there, the infrastructure was there, and it was much easier to extend it than to come up with a million NIH solutions. Each app could communicate over its own protocol, but the web allows interoperability, and that is its greatest strength IMO.

I think it goes as far back as the dot-com bubble, when managers started to realize that it's easy to earn money by doing X, but on a website.

While I share your nostalgia for the web of yore, it depends on the use case. The web browser is and has been an execution environment, long before WASM. Giving developers better low level tools unlocks more possibilities. I'm not really seeing the connection with applets/activex/flash. Sure you can do some things with WASM that you could do with those old tools, and much more.

“isolation” in that context is a little misleading - Flash had countless security vulnerabilities that blew open any isolation you might have hoped to achieve.

When you say “we”, do you mean web users or developers? As a user, I'm sure happier that I don't have to deal with needy plugin installers (“what do you mean you don't want Bonzi Buddy?”).

An opinion that I agree with, despite doing a lot of web development, but since Google basically wants to turn the Web into ChromeOS, with Microsoft's help, that is what it is.

Web Assembly is the secure web plugin

Hardly, when linear memory segments don't have protection against internal corruption.

Do you have some resource to read up on that?

This idea is not unpopular on HN, I guess.

Nice! Just noticed the backlink from those release notes to the article as well.

`.map(x => x * x)`

This isn't how you make a rust closure. It should be `.map(|x| x * x)`. Besides the mistake, it's exactly the example from the readme for rayon.

1) Oops, thanks for pointing out the mistake. Will fix in a few minutes. 2) The freaky thing is, I'm fairly sure I didn't even see or look at Rayon's official example in the README when I came up with that example for my crate. I only came up with it because I was _just_ working on root mean square errors in another side project. That's a hell of a coincidence.
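For anyone skimming, the corrected closure syntax in a std-only form (with rayon, `.iter()` would become `.par_iter()` to run in parallel):

```rust
// Rust closures are written |args| body, not JS-style (x => ...).
fn sum_of_squares(input: &[i32]) -> i32 {
    input.iter().map(|x| x * x).sum()
}
```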

I’m subscribed to all the current threads and issues relating to WebGPU multithreading. Very much looking forward to trying to put all this stuff together in order to enable multithreaded 3D rendering on the web.

This stuff is right around the corner, it’s all very exciting. We’re really close to having nearly native performance 3D games in the browser.

Async is another problem, detailed in the article, that still plagues building wasm programs.

If your program has to read a file using fetch, the operation is asynchronous. C and wasm don't understand this. You cannot await a response inside wasm the same way as in JS.

The caveat of asyncify is detailed in the docs: "It is not safe to start an async operation while another is already running. The first must complete before the second begins."[1]

It's a hack, which does work in some instances, but I'm not comfortable putting stack winding hacks into code that's meant to be debuggable. Spooky things could happen.

I'm still looking for alternatives; maybe Rust can do it, since Rust has native async/await support. I haven't quite understood what !?!!!?!().?()! means in Rust yet. :)

[1] https://emscripten.org/docs/porting/asyncify.html

Not sure how much this helps, but in Rust "!" usually means "this name refers to a macro" and "?" usually means "if the value of this expression is an error, short-circuit the current function by returning that error." Usually when you're reading working code, you can ignore those two symbols and still get the meaning. (Those symbols also have some more advanced meanings for when you're dealing with generic/templated types, but those are pretty rare and unlikely to show up in examples.)
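A minimal sketch of both symbols in action (the function name is invented for the example):

```rust
use std::num::ParseIntError;

// `?` propagates an Err to the caller instead of panicking;
// `!` marks a macro invocation, like println!.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // early-returns the error if parsing fails
    println!("parsed {}", n);
    Ok(n * 2)
}
```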

Yeah Rust's wasm-bindgen has integration between Rust async-await and JavaScript promises which doesn't have same restrictions: https://rustwasm.github.io/wasm-bindgen/reference/js-promise...

However, Asyncify is useful in other scenarios - where you want to shim out an API that is normally synchronous in C / Rust land, with an async JS implementation.

async-await, like any coroutine-based mechanism, requires changing every single function in the entire callstack to be async as well, which is usually not feasible in such scenarios (e.g. when shimming out some basic filesystem API). That's where Asyncify helps a lot.

If you are looking for alternatives, you may want to try Zig; I think async Zig works in wasm, and I think you don't need to put stack winding hacks into the code.

