WebAssembly Threads ready to try in Chrome 70 (developers.google.com)
272 points by feross on Oct 30, 2018 | 89 comments

Quick notes... The threads feature is in "proposed" status while others are in "implementation" status [0], yet it appears to have been implemented before things like multi-value. I wonder why threads came first; I assume just for experimentation. It should be noted that threads here are essentially atomic memory accesses plus wait/notify, with nothing for starting a worker inside of WASM itself; that is still in JS land. The test cases don't appear to include wait/wake, nor does the test-case interpreter that I saw [1]. Also, "Report issues" in the blog post links to https://TBD.

0 - https://github.com/WebAssembly/proposals
1 - https://github.com/WebAssembly/threads
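
The "atomic memory accesses and wait/notify" surface maps directly onto what SharedArrayBuffer already exposes on the JS side. A minimal sketch (index choices are arbitrary):

```javascript
// The threads proposal boils down to shared linear memory plus atomic
// ops and wait/notify. The same primitives exist in JS via SharedArrayBuffer:
const sab = new SharedArrayBuffer(1024);
const view = new Int32Array(sab);

// Atomic read-modify-write (the wasm analogue is i32.atomic.rmw.add).
Atomics.add(view, 0, 1);
console.log(Atomics.load(view, 0)); // 1

// wait/notify: a worker would block with Atomics.wait(view, 1, 0);
// with no waiters, notify wakes nobody and returns 0.
console.log(Atomics.notify(view, 1)); // 0
```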

The example in the article creates a thread. Is that what you mean by "starting a worker"? Because it does seem to be able to do that.

Just to clarify, it creates Web Workers with JavaScript which share a WebAssembly instance. This plumbing is done automatically when you compile with Emscripten with the correct flags. You can't just take code that spawns threads, create and run a single WebAssembly instance, and have it work; you need something like Emscripten to do that plumbing for you. It must be done via Web Workers, which are heavyweight, so you wouldn't want to spawn lots of them.
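
Roughly, that plumbing amounts to allocating one shared WebAssembly.Memory up front and handing it to each worker. A hedged sketch; `worker.js` and the message shape are hypothetical, not Emscripten's actual protocol:

```javascript
// One shared linear memory for all threads. `shared: true` makes the
// backing buffer a SharedArrayBuffer, so every worker sees the same bytes.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 16, shared: true });
console.log(memory.buffer instanceof SharedArrayBuffer); // true

// In a browser, each wasm "thread" is a Web Worker that instantiates the
// same module against the same memory (hypothetical worker.js protocol):
//   const w = new Worker('worker.js');
//   w.postMessage({ module: compiledModule, memory });
```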

My understanding is that the atomic memory operations you're talking about are a big step towards enabling more interesting features in the future.

I did a glancing read of the overall spec a while back and I remember seeing a few asides to the effect of "we'll have to tackle this as part of the threads effort."

Take it with a grain of salt though, I'm still digging into this stuff.

It is a big step, and leaving thread management up to the host system is ideal even in non-web contexts. I wouldn't expect there to be any larger threads effort beyond this at least any time soon.

What's a solid language choice for targeting wasm if I want to kick the tires? Go?

Rust has a pretty great WebAssembly story:


Learning Rust at the same time as Rust+WebAssembly might be a bit of a reach, though.

Is there documentation on Rust+WebAssembly that is presented more like a conventional book or tutorial?

I don't do well with example-driven documentation like these. Resources like the original rust book are excellent though.

I'm going to have to be that guy and say, did you try googling it?

Here's one guide: https://hackernoon.com/compiling-rust-to-webassembly-guide-4...

I was under the impression that the wasm emscripten target was deprecated in favor of the rust-specific wasm target. Am I wrong?

You're not. Well, it's not officially deprecated, but it pretty much is, yeah.

Is there really that much to learn?

You can just target WebAssembly like this:

cargo +nightly build --target wasm32-unknown-unknown

Okay, then how do I run that? How do I interface with the JS/DOM world? (Those are rhetorical questions. I just mean to show that there is, in fact, more to learn.)

Try (after installing Rust and Cargo via rustup):

  # Install support for "wasm" command for cargo:
  cargo install cargo-wasm
  # Install wasm target and wasm-gc tool:
  cargo wasm setup
  # Create new wasm project:
  cargo wasm new hello_world
and then in the hello_world directory:

  # Build and run wasm project:
  cargo wasm run

C or C++. Everything started with C and the first language for everything will be C.


"WebAssembly is being designed to support C and C++ code well, right from the start..."

I just started playing with Go's WebAssembly support last night. Support is still nascent but I was able to get up and running very quickly, and the API is dead simple: https://github.com/golang/go/wiki/WebAssembly

On the Go front, TinyGo (a subset of Go targeting embedded dev) recently added Wasm support:


Its output files are very small (~1kb for the toy examples), as it does dead-code elimination, which is missing from the mainline Go approach (for now).

Both Go and TinyGo are in very early stages of their Wasm support though, so things are changing and updating fairly quickly. :)

The go compiler does dead code elimination. I'm not familiar with TinyGo, but from the readme, one thing it does differently is drop the whole scheduler if there are no blocking goroutines.

> The go compiler does dead code elimination.

For Wasm output? It really doesn't seem like it. Output binaries from the mainline Go compiler (atm) are ~2MB. Most of which is dead code.

Rust or C++ were easiest when I tried it last.

.Net with Blazor.net is pretty neat.

C# doesn’t get a lot of love on HN, but we’re launching our first Blazor app in production this week and it’s been pretty easy to use.

Blazor is (currently) using the mono .net interpreter compiled to WebAssembly. This seems to me more of a proof of concept than something you would really want to use.

With garbage collection support landing in WebAssembly we might see compilation of .net IL to WebAssembly, but that’s still out in the future.

The really cool thing about Blazor is that, with a couple of small changes, you can currently switch it to a server-side "Razor Components" app. The idea behind the server-side concept is that it uses WebSockets to tell the browser what changes to make. All the logic happens on the server side.

There are a lot of pros and cons to this approach. Some pros are a fast initial render and a small number of bytes needing to be sent. Cons are that you need internet access and render changes aren't as fast as client side.

The biggest pro I see, though, is being able to develop a front-end app that you can later switch to Blazor/WebAssembly, with the client taking over part of the logic, once Blazor and WebAssembly are less experimental. For internal apps, or apps where you know the internet/network connection should always be good, this is a great way to go.

We found it good enough to build internal apps. Better than our current JS setup because it’s far more productive.

It’ll make its way into .net core 3.0, so I think it’s more than just a poc.

It’s early of course and not very flexible; if you want to build something it wasn’t meant for, it breaks.

What's the performance penalty of the mono VM?

Performance is fine, but it’s still too big for slow internet. A simple form for turning in signed data and starting an internal service is easily a few MB. Which isn’t a problem for internal solutions that never leave the administrative network, but we’ll keep doing vue and graphql Apollo for our apps that have to function on mobile networks for the time being.

We had a look at Blazor at work the other week. My colleagues liked it, but on my computer the tooling crashed when asked to build the standard template project.

Like a lot of things these days, Blazor has a website way more polished than the actual product. It's very interesting, but I'm surprised you find it ready for use in production.

Currently, the C++ wasm toolchain is the most mature one.

Go has a large footprint, with a 2.5MB hello-world wasm and some glue-code dependency.

Rust has a similar footprint at first sight, but it can be shrunk significantly:

wasm-gc test.wasm # 1.7MB -> 100 bytes

IMHO the Rust language is quite ugly, so you might take a fresh look at C++, which has become surprisingly readable with its new features (auto, initializer_lists, nice for loops, ...)

I wrote a PoC tool the other day that can shrink the Go emitted WASM by a little bit: https://github.com/cretz/go-wasm-bake

C++ is already used in production by big companies. It currently is deployed as asm.js. But you can use an equivalent toolchain to create WebAssembly.

This way you can create production-ready code and be ready to move to WebAssembly as soon as it gets feature ready.

The fine control that C++ offers will help you to understand the underlying architecture and put its performance to the test.


Go doesn’t have thread support yet AFAIK.

While it may not support atomic ops yet, it still supports all of the language's concurrency via suspend points throughout the generated WASM.

Depending on what you are doing you shouldn't dismiss Unity3D and C#.

Adding multi-threading support for WebAssembly is on the roadmap for 2019.1

What happened to concerns on Spectre/Meltdown being exploitable through high-resolution timers implemented through shared buffers/multithreading in Chrome?

According to [1]

> Note that SharedArrayBuffer was disabled by default in all major browsers on 5 January, 2018 in response to Spectre. Chrome re-enabled it in v67 on platforms where its site-isolation feature is enabled to protect against Spectre-style vulnerabilities.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

It's not fully fixed by those, for what it's worth; I'm happy to provide examples outside a public venue. Chrome just decided it doesn't care about the security holes it's creating in websites.

Other browsers are working on rolling out other mitigations they consider security-critical before being OK with re-enabling SharedArrayBuffer.

I believe the solution is treating same-process code as same-security code, as probably always should have been the case.

So, what's the memory model going to be? Without guarantees about the safety of operation, how can anyone reason about the safety of the program?

Some discussion here: https://github.com/WebAssembly/threads/issues/9

tl;dr: if WASM shared memory is specified to follow the same rules as SharedArrayBuffer, then atomic operations are sequentially consistent. Normal memory accesses in the presence of data races may be inconsistent, but they are still constrained (e.g. they must obey happens-before relationships).
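
As an illustration of what sequentially consistent atomics buy you, here is a minimal lock-flag sketch in JS terms (my own example, not from the linked discussion):

```javascript
// Because atomic ops on shared memory are sequentially consistent, a
// compare-exchange on a shared flag is a valid lock across threads.
const flag = new Int32Array(new SharedArrayBuffer(4));

function tryLock() {
  // Atomically move the flag from 0 (unlocked) to 1; returns the old value.
  return Atomics.compareExchange(flag, 0, 0, 1) === 0;
}

function unlock() {
  Atomics.store(flag, 0, 0);
  Atomics.notify(flag, 0); // wake a waiter blocked in Atomics.wait, if any
}

console.log(tryLock()); // true: acquired
console.log(tryLock()); // false: already held
unlock();
```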

I tried searching; does anyone know if this is available in Firefox yet?

Nope: https://bugzilla.mozilla.org/show_bug.cgi?id=1389458

WebAssembly threads mostly depend on SharedArrayBuffer, which was yanked by everyone for security reasons. Chrome rearranged their browser memory model and re-enabled it, which let them deliver this feature while everyone else still has blockers.


To be clear, Chrome re-enabled it without implementing mitigations that other browsers are likely to consider a basic requirement before re-enabling. A clear case of product or marketing trumping security...

They implemented site isolation and cross-origin read blocking. Unless the OS is also vulnerable, shouldn't that be enough to ensure sites can't access anything not already accessible to the current origin?

No, that's not enough. I can provide examples in a non-public venue.

Bold claim. If you can prove that, I'm sure google will give you a nice chrome security payout for letting them know. If they won't, why not pull a Google security style blog post exposing it?

Google won't give me a payout for a problem they already know about. I know they know about it, because we have discussed it with them. In case it wasn't clear: I work on a web browser that is not Chrome.

> why not pull a Google security style blog post exposing it

Because I don't think publicly exposing specific security problems in shipping software is necessarily the right thing to do.

> Because I don't think publicly exposing specific security problems in shipping software is necessarily the right thing to do.

I'm no security engineer, but isn't it best practice to expose security risks instead of letting them get knowingly exploited, especially if the parent company considers it designed as intended?

It depends on the exact situation. In this case there is coordination happening among multiple vendors who are working on various fixes in this space, which complicates things. Also complicated is that apart from "don't use Chrome" there's no useful mitigation users can do for this risk, and most Chrome users would never see any security-related blog post, nor act on it if they did.

There are times when "screw you, I'm going public with this" is in fact the right thing to do. Again, in this particular case I don't think it is.

What are some of these additional mitigations other browsers are eyeing?

The last proposal I saw floated involved sites sending a header to opt in to SharedArrayBuffer and the browser ensuring that all successful loads such a site does are done with CORS enabled. That ensures that such a site only has access to data that is either same-origin or explicitly shared by the other sites it's pulling data from, so there's no point in doing a spectre attack to start with.
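
For reference, the opt-in mechanism that eventually shipped (as "cross-origin isolation") takes roughly this shape; these header names come from the later spec, not from this 2018-era proposal:

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```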

Chrome is the new IE.

As it says in pretty_bubbles' link, enable javascript.options.shared_memory in about:config.

You can enable it on Firefox nightly. I think the config name is something with shared_mem.

It's javascript.options.shared_memory. It's currently set to false in Firefox 63.

As other people already wrote, it was disabled at the beginning of the year because of timing attacks related to Spectre and Meltdown. Details at https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

We had this thread on HN about multithreading Rust with WASM https://news.ycombinator.com/item?id=18283110

There's no mention of whether applications will gracefully degrade if there's no thread support. What happens in that case? Or do developers have to check whether threads are enabled?

Devs need to do feature detection like with any other feature e.g. for the presence of atomics and sharedarraybuffers.
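
A hedged sketch of what that detection might look like (the function name is mine, not a standard API):

```javascript
// Detect wasm-threads prerequisites: SharedArrayBuffer, Atomics, and
// support for shared WebAssembly.Memory. Any failure means no threads.
function threadsSupported() {
  try {
    return typeof SharedArrayBuffer === 'function'
      && typeof Atomics === 'object'
      && new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true })
           .buffer instanceof SharedArrayBuffer;
  } catch (e) {
    return false; // constructing shared memory throws where it's disabled
  }
}

console.log(threadsSupported());
```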

I hope WebAssembly threads are only enabled in contexts where site isolation is active, lest we have another Spectre.

Yes, atomics and SABs are tied to SI being active.

Documentation on how this is implemented in Emscripten:


How does synchronization, or the lack of it, carry over and translate to WASM-based threading?

I’m assuming all the typical threading problems are inherited by design, such as the potential for deadlocks, memory corruption, priority inversion, etc.?

Also, I’m assuming these threads will allow true parallelism to occur?

Exactly right: everything sharing the same SharedArrayBuffer will have all those same issues and will run in parallel. It's a space where I suspect newer languages like Go, Rust, D, etc. will really start to make a mark on wasm, because of the concurrency models and other safety features they contain.

Without true parallelism there'd be no point dealing with all the mess the raw threading primitives get you!

But yes, it's basically good old pthreads in WebAssembly, with all the snake pits that implies. Although I would hope the runtimes will be robust enough that this can't take out the browser when you trip over a concurrency bug.

Every time I hear the word WebAssembly, I imagine that one day the browser itself will be written in/compiled to WebAssembly, and eventually the entire operating system will be too. Isn't that the natural end game for WebAssembly?

It's a bad idea. You would give up all the benefits of running native code: performance and access to low-level features. You would not gain portability, because every platform will have different APIs; those are not part of the wasm project.

That bad idea is how mainframe user-space applications work, and it is partially adopted across watchOS, Android, ChromeOS, webOS, Windows/UWP and Garmin apps.

It's certainly not how the OS and its runtime work. It may make sense when you have to support a range of different hardware architectures, but it's not really something you want by design. A limited range of supported hardware and native code will always be better.

I think colek42's comment was more tongue-in-cheek.

So everything is eventually going to WebAssembly. Eventually.

Regarding the OS, experience tells me this won't happen - very specialised tools are almost always better at very specialised tasks.

Huge swathes of Firefox are already written in JavaScript.

Very cool. I remember using the pthread flag from C. But I don't recall ever provisioning a set number of thread-pool workers at compile time. I assume that will change at some point.
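
For what it's worth, with Emscripten the pool size is set at link time via settings flags. A hedged example invocation, with file names made up:

```shell
# Build main.c with pthreads enabled and pre-spawn 4 pool workers.
# (PTHREAD_POOL_SIZE avoids spawning workers lazily at pthread_create time.)
emcc main.c -o main.js -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=4
```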

Does this mean we can have MPV or VLC compiled to WASM?

One nice feature missing that's important for video decoding efficiency is SIMD. It's being worked on, but not ready yet: https://github.com/WebAssembly/simd

Note the use case of this is pretty limited without a direct API to access GPU accelerated decoding. Otherwise you can basically decode on the CPU or fake it by rendering HTML video and copying that buffer into the application skin.

In theory, absolutely. I just compiled a mid-size C++ project without any major hurdles.

Why illustrate it with a program which has undefined behavior? It has signed integer overflow.

Does that translate in having support for std::thread?

Yes, as std::thread is built on pthreads (for platforms which use pthreads, like emscripten.)

Ok, Qt/QML is happier now.

How so?

Are the first WebAssembly attacks out yet?
