Oo, I'm really interested that you're working on that, although I don't love the syntax (particularly `worker<endpoint>` seems awkward).
It seems like you should stick with function invocation syntax:
`worker = (endpoint) => {| block |}`
`worker(endpoint) // does what you want`
with the difference that the result isn't a closure. As a bonus, this syntax would let you write 'pure functions' even if you weren't working with workers. Perhaps the worker version then is `worker = async (endpoint) => {| block... |}`.
Although considering the precedent of `async function`, maybe the way to do this ought to be `pure function`.
If you take a look at the issues on the repo, a lot of people agree with you. The syntax was not final at all; we need to convince TC39 first before we can start bikeshedding ^^
I don't actually know the answer, but my guess would be it's because of scoping. Web workers aren't a language-level feature, so they can't really dictate that the function body has no access to the outer scope, which functions in JS typically do. And if worker functions like that did have access, how would you ensure thread safety? Making you point to a different module altogether seems like a decent way to enforce that boundary.
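To illustrate the scoping problem: the only thing a worker can receive is the function's source text, so any variables the function captured from its enclosing scope are simply gone when it's rebuilt on the other side. A minimal sketch (no actual worker needed, `new Function` stands in for re-evaluating the source in a fresh scope):

```javascript
// `scale` captures `factor` from the enclosing (module) scope.
const factor = 3;
const scale = (n) => n * factor;

// This is effectively all a "function worker" shim can transfer:
const source = scale.toString(); // "(n) => n * factor"

// Rebuilding the function in a fresh scope, as a worker would,
// loses the capture -- `factor` no longer exists there.
const rebuilt = new Function(`return (${source});`)();

let errName = null;
try {
  rebuilt(2);
} catch (e) {
  errName = e.name;
}
console.log(errName); // in a module context: "ReferenceError"
```

Any design that silently re-binds those captures would be lying about what the code does, which is presumably why the standard makes you hand workers a separate module instead.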
const findPi = uwork((iterations = 10000) => {
  let inside = 0;
  for (let i = 0; i < iterations; i++) {
    const x = Math.random(), y = Math.random();
    if (x * x + y * y <= 1) inside++;
  }
  return 4 * inside / iterations;
});
// Run this inside an async context:
const pi = await findPi(200000000);
> Can someone explain why the ES standard doesn't have something like this:
>
>     new Worker(function(params) {
>       // ... compute heavy stuff here ...
>     });
In 2015, as a personal project, I worked on a framework that functioned like this. It was even context-aware and spun itself up differently depending on the environment it was in; workers and shared workers loaded the same .js file the UI thread did. As such, it chose speed over memory use, since the code was effectively pre-loaded everywhere.
It was never completed, and the reasons for that were:
1. SharedArrayBuffer hadn't landed yet. Transfer cost of computable data between workers was high, and transferables had shortcomings of their own.
2. Closures were disallowed, and AST transformations were one possible solution. It seemed like a very bad idea to go down that road.
3. Back then the differences in API behavior between even Chrome and Firefox were terrible to deal with. The APIs were heavily neglected across the board.
These days, I don't see the need. If you need the performance, WASM is a better choice anyway, and depending on what you're doing it can use SharedArrayBuffer much more efficiently by way of something like pthreads.
> WASM is a better choice anyway, and depending on what you're doing it can use SharedArrayBuffer much more efficiently by way of something like pthreads.
I think you misunderstand the concept of WASM. WASM does not get its own thread; it runs in the same context as JS and blocks the main thread. WASM still must spawn Web Workers to get multithreading in a browser.
I do understand, was just mobile and in a hurry so I failed to fully qualify the statement.
What I was saying is if you're in the WASM toolchain/ecosystem, something like pthreads would already be implemented by Emscripten[0]. In other words, spinning up web workers is handled for you seamlessly. It's all dealt with at a very low level of abstraction. You're literally using the SharedArrayBuffer as memory.
Contrast that with JS, where you have to serialize/deserialize types before they can even be stored in a SharedArrayBuffer. It's costly and far from seamless.
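To make that friction concrete, here's a minimal sketch of sharing data through a SharedArrayBuffer in plain JS (the offsets and sizes are arbitrary illustration values): numeric data maps straight onto a typed-array view, but anything else has to be hand-encoded into bytes and copied back out:

```javascript
const sab = new SharedArrayBuffer(1024);

// Numbers need no serialization -- a typed-array view over the buffer suffices.
const nums = new Float64Array(sab, 0, 4);
nums.set([3.14, 2.71, 1.41, 1.61]);

// A string, by contrast, must be serialized into bytes by hand...
const encoded = new TextEncoder().encode("hello worker");
const bytes = new Uint8Array(sab, 32, encoded.length);
bytes.set(encoded);

// ...and decoded on the other side. The view is copied out first with
// slice(), since decoding shared memory directly is another point of friction.
const decoded = new TextDecoder().decode(bytes.slice());
console.log(decoded); // "hello worker"
```

In the Emscripten/pthreads world this bookkeeping disappears, because the SharedArrayBuffer simply *is* the program's linear memory.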
As an aside, it's unfortunate SharedArrayBuffer was hit so hard by Spectre. The poor standard is cursed or something.
As you already hinted, SABs have been widely disabled. Chrome is currently the only browser that ships SABs, and therefore the only browser with support for WebAssembly threads.
Workers, on the other hand, are available everywhere. And serialization seems to be far less costly than most people think. In the apps that I have written that make use of an off-main-thread architecture, structured clone has not been my bottleneck.
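The cost claim is easy to sanity-check yourself: `structuredClone` runs the same algorithm `postMessage` uses, so timing it gives a rough upper bound on per-message serialization cost (the payload shape and size here are arbitrary; actual numbers depend entirely on your data and machine):

```javascript
// A payload of 100k small objects, standing in for a chunky worker message.
const payload = {
  rows: Array.from({ length: 100_000 }, (_, i) => ({ id: i, v: Math.random() })),
};

const t0 = performance.now();
const copy = structuredClone(payload); // same algorithm postMessage uses
const ms = performance.now() - t0;

console.log(copy.rows.length, `cloned in ${ms.toFixed(1)} ms`);
```

If the measured time is small relative to your frame or task budget, structured clone isn't your bottleneck either.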
> And serialization seems to be far less costly than most people think. In the apps that I have written that make use of an off-main-thread architecture, structured clone has not been my bottleneck.
If you're working within a render loop it does hurt, but I agree it isn't an issue for a typical use case. I think what detracts more is the ergonomics of the serialize/deserialize operations. Those are largely seamless on the Emscripten side. On the other hand, a full WASM/Emscripten toolchain in your project arguably comes with its own ergonomic cost.