When to use web workers (dassur.ma)
257 points by tiniuclx 30 days ago | 107 comments



I have a large ClojureScript app (https://partsbox.io/) and I really, really wanted to use WebWorkers. I wanted to run larger tasks like indexing for search, or pricing calculations in WebWorkers.

But it seems to me that Web Workers were designed with an extremely narrow use case in mind. The restrictions placed on them make them essentially useless for me. To do anything even remotely useful, I would need to duplicate my entire database in web workers and then communicate with them through a thin straw. Database updates would need to be performed both in the main app database and in the web worker threads.

The way they are restricted, I just can't figure out how to make any meaningful use of them.


IndexedDB is shared between workers and main thread, in case that helps.
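
For reference, both contexts can open the same database by name; a minimal sketch (database and store names are made up):

  // Works identically on the main thread and in a worker
  const req = indexedDB.open('app-db', 1);
  req.onupgradeneeded = () => req.result.createObjectStore('parts');
  req.onsuccess = () => {
    const store = req.result.transaction('parts', 'readonly').objectStore('parts');
    store.get('some-id').onsuccess = (e) => console.log(e.target.result);
  };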


Indeed, and it is something I'm looking at. But at the moment my database is really native Clojure data structures, which has fantastic advantages. I would need to start storing them in a "traditional" database, introducing additional layers, serialization, overhead and bugs.


> I would need to start storing them in a "traditional" database, introducing additional layers, serialization, overhead and bugs.

If the app state is stored in a single atom, I would use localForage[1] (available as a cljsjs package[2]) to keep it in sync between web workers and the main thread. It would also make the app accessible offline.

[1] https://github.com/localForage/localForage

[2] https://clojars.org/cljsjs/localforage
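
A rough sketch of what that could look like in plain JS using localForage's promise API (`onStateChange` is a hypothetical hook, not part of localForage):

  // Main thread: persist the serialized app state whenever it changes
  onStateChange((appState) => {
    localforage.setItem('app-state', appState);
  });

  // Worker (with localForage loaded, e.g. via importScripts): read the snapshot back
  async function loadState() {
    return localforage.getItem('app-state');
  }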


As a UI developer I want to move scrolling and style updates out of the main thread. Every WebWorker tutorial talks about how a “computationally heavy task” can go off the main thread. But rarely does anyone do heavy computation on the client.

I would say Animation Worklets are the workers that will make an impact on delivering multithreaded UIs on the web.

https://www.chromestatus.com/feature/5762982487261184


If you haven't seen this before it might interest you. Nolan worked on offloading most of the work to threads to achieve 60 fps on mobile.

http://www.pocketjavascript.com/blog/2015/11/23/introducing-...


That's pretty cool!


Yes, that’s exactly what AnimationWorklet is for.

That being said, I think even smaller-scale things like state management and hitting APIs should be moved to workers. It all increases resilience and buys headroom for low-end phones.
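
To illustrate the "hitting APIs" part, a minimal sketch (file names and `renderList` are made up):

  // api-worker.js
  self.onmessage = async ({ data: url }) => {
    const response = await fetch(url);
    // parsing the JSON happens off the main thread too
    self.postMessage(await response.json());
  };

  // main thread
  const apiWorker = new Worker('api-worker.js');
  apiWorker.onmessage = ({ data }) => renderList(data); // renderList is hypothetical
  apiWorker.postMessage('/api/items');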


Agreed. I’ve always wondered why this isn’t done more often; it seems like the perfect place for tasks such as those.

That being said, I’ve been a web developer for over 5 years and have never actually decided to move tasks to web workers; it’s about time I change that.

I also find it weird when devs complain about the lack of shared state and how it’s somehow a limitation. Message-passing architectures have proven key for modelling concurrency in many powerful languages and frameworks, like Erlang/Elixir and Go channels, to name a few.

Anyways thanks for writing this up. Proper breakdown and grouping will have unseen benefits for sure, and in this case, very seen :)


My experience with Web Workers was terrible. It's not because of Web Workers themselves, but because there is no proper standard way to use modules inside a worker.

Because a worker requires a separate .js file to instantiate, bundlers like webpack tend to introduce weird, non-standard ways to work with them [1].

It is usually fine when you use one in your own project, since you are going to stick with whichever bundler you chose anyway, but the worst part happens when you try to publish a JS library that uses a web worker internally. You cannot just publish the source, because bundlers won't recognize require() and import syntax in the web worker source code out of the box.

There is an ES standard:

  new Worker("worker.js", { type: "module" }); 
but there are no browser implementations yet, and some people aren't happy with this because it is not async [2].

[1]: https://github.com/webpack-contrib/worker-loader/blob/master...

[2]: https://bugs.chromium.org/p/chromium/issues/detail?id=680046


Agreed. I am quite annoyed that no browser has shipped modules in workers. There is little incentive as Workers are sorely underused.

Since I mostly use rollup, I ended up writing a plugin[1] that allows you to pretend that modules are in workers. It compiles down to AMD under the hood and uses a very minimalistic worker.

[1]: https://github.com/surma/rollup-plugin-loadz0r


Can someone explain why the ES standard doesn't have something like this:

    new Worker(function(params) {
    ... compute heavy stuff here ...
    });


The problem is mostly that all these functions are closures, and scope can’t easily be transferred.

Domenic and I have been working on a proposal[1] called Blöcks to introduce transferable functions to JavaScript.

In the meantime, I wrote Clooney[2] on top of Comlink[3] that gives you almost that.

[1]: https://github.com/domenic/proposal-blocks

[2]: https://github.com/GoogleChromeLabs/clooney

[3]: https://github.com/GoogleChromeLabs/comlink


Oo, I'm really interested that you're working on that, although I don't love the syntax (particularly `worker<endpoint>` seems awkward).

It seems like you should stick with function invocation syntax:

`worker = (endpoint) => {| block |}`

`worker(endpoint) // does what you want`

with the difference that the result isn't a closure. As a bonus, this syntax would let you write 'pure functions' even if you weren't working with workers. Perhaps the worker version then is `worker = async (endpoint) => {| block... |}`.

Although considering the precedent of `async function`, maybe the way to do this ought to be `pure function`.


If you take a look at the issues on the repo, a lot of people agree with you. The syntax was not final at all, we need to convince TC39 first before we can start bikeshedding ^^


Web workers would make a good http203 episode, uses/pitfalls etc


I don't actually know the answer, but my guess would be it's because of scoping. Web workers aren't a language-level feature, so they can't really dictate that the function body not have access to the outer scope, which functions in JS typically do. And if worker functions like that did have access, how would you ensure thread safety? Making it so you have to point to a different module altogether seems like a decent way to enforce that boundary.


Since you seem not to like Clooney, what about uwork? I wrote it a while ago:

https://github.com/franciscop/uwork

    const findPi = uwork((iterations = 10000) => {
      let inside = 0;
      for (var i = 0; i < iterations; i++) {
        let x = Math.random(), y = Math.random();
        if (x * x + y * y <= 1) inside++;
      }
      return 4 * inside / iterations;
    });

    // Run this inside an async context:
    const pi = await findPi(200000000);


>Can someone explain why the ES standard doesn't have something like this:

    new Worker(function(params) {
    ... compute heavy stuff here ...
    });

In 2015 I worked on a framework that functioned like this as a personal project. It was even context-aware and spun itself up differently depending on the environment it was in; workers and shared workers loaded the same .js file that the UI thread did. As such, it chose speed over memory use, since the code was effectively pre-loaded everywhere.

It was never completed, and the reasons for that were:

1. SharedArrayBuffer hadn't landed yet. Transfer cost of computable data between workers was high, and transferables had shortcomings of their own.

2. Closures were disallowed, and AST transformations were one possible solution. It seemed like a very bad idea to go down that road.

3. Back then the differences in API behavior between even Chrome and Firefox were terrible to deal with. The APIs were heavily neglected across the board.

These days, I don't see the need. If you need the performance, WASM is a better choice anyway, and depending on what you're doing it can use SharedArrayBuffer much more efficiently by way of something like pthreads.


> WASM is a better choice anyway, and depending on what you're doing it can use SharedArrayBuffer much more efficiently by way of something like pthreads.

I think you misunderstand the concept of WASM. WASM does not have its own thread; it blocks the main thread and runs in the same context as JS. WASM still must spawn a Web Worker to emulate multithreading in a browser.


I do understand; I was just on mobile and in a hurry, so I failed to fully qualify the statement.

What I was saying is if you're in the WASM toolchain/ecosystem, something like pthreads would already be implemented by Emscripten[0]. In other words, spinning up web workers is handled for you seamlessly. It's all dealt with at a very low level of abstraction. You're literally using the SharedArrayBuffer as memory.

Contrast that to JS, and you have to serialize/deserialize types before they can even be stored in SharedArrayBuffer. It's costly and far from seamless.
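
For contrast, a minimal sketch of the JS side (assuming a `worker` already exists and SABs are enabled):

  const sab = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
  const shared = new Int32Array(sab);
  worker.postMessage(sab);       // the memory itself is shared, not structured-cloned
  Atomics.store(shared, 0, 42);  // visible to the worker without another message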

As an aside, it's unfortunate SharedArrayBuffer was hit so hard by Spectre. The poor standard is cursed or something.

[0] https://emscripten.org/docs/porting/pthreads.html


As you already hinted, SABs have been widely disabled. Chrome is currently the only browser that has SABs and therefore the only browser with support for WebAssembly threads.

Workers, on the other hand, are available everywhere. And serialization seems to be far less costly than most people think. In the apps that I have written that make use of an off-main-thread architecture, structured clone has not been my bottleneck.


>And serialization seems to be far less costly than most people think. In the apps that I have written that make use of an off-main-thread architecture, structured clone has not been my bottleneck.

If you're working within a render loop it does hurt, but I agree it isn't an issue for a typical use case. I think what detracts more is the ergonomics of the serialize/deserialize operations. Those are just largely seamless on the emscripten side. On the other hand, a full wasm/emscripten toolchain in your project arguably comes with its own ergonomic cost.


But wasm isn’t asynchronous, right? It still blocks the main thread.


This is actually similar to how it's done in Go. Imagine you define a function:

   func hello(value string) {
      // compute heavy stuff
   }
To run it sync

    hello("World")
To run it async, just add 'go'

    go hello("World")
IMO when it has generics, it'll be the best language I have worked with.


I don't really publish libraries so I didn't run into your main gripe.

Is there any other reason why you didn't like worker-loader?

I experimented with it and found it pretty straightforward. For anyone who wants to try it, you can even get it working from a create-react-app project with the webpack inline loader syntax without ejecting:

  /* eslint import/no-webpack-loader-syntax: "warn" */
  import Worker from 'worker-loader!./Worker.js';
My use case was to run tensorflow.js in the background. This actually mostly worked out of the box, with only some awkwardness around serializing images as messages. The only thing that didn't work out was that you don't have access to WebGL there, which defeated my original use case (since there's no point in offloading computation if that computation is going to be several orders of magnitude slower in the background).


Because if you use it in a library you immediately create a hard dependency on webpack. Libraries should be designed to work with any bundler, or ideally even without a bundler.

Besides, I personally don't feel good about webpack's do-everything-with-import approach. It is straightforward, of course, but is the ES6 import statement supposed to be used like this?


Would it be possible to draw to an offscreen canvas somehow? To preserve WebGL compatibility while still using the workers?



I did a thing where I used data URIs instead of explicit files to load up web workers. Don't know if that still works, as this was a few years ago, but it definitely helped a lot with those sorts of issues at the time.


Yes, it is still one way to workaround the problem. For those who don't know:

  // Define the worker body as a regular function in the main bundle
  function workerFn() { /* ...do worker stuff... */ }

  // Stringify it, wrap it in an IIFE, and load it as a Blob URL
  let worker = new Worker(URL.createObjectURL(new Blob(['(' + workerFn.toString() + ')()'])));
Though this hack works, it still has problems to be addressed:

1. It still does not resolve import/require(). This introduces another hack: dumping the output of a bundler into workerFn.

2. It makes it much harder to debug, since you create the worker out of data rather than a source file.


I wish SharedWorkers had not been removed from Safari. They are super useful for sharing a single websocket connection between multiple open tabs, something you can’t do with ServiceWorkers.
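
For anyone who hasn't seen the pattern, a minimal sketch (file names and the endpoint are made up):

  // shared-socket.js (the SharedWorker): one WebSocket, many tabs
  const socket = new WebSocket('wss://example.com/feed');
  const ports = [];
  onconnect = (event) => {
    const port = event.ports[0];
    ports.push(port);
    port.onmessage = (e) => socket.send(e.data);                          // tab -> server
  };
  socket.onmessage = (e) => ports.forEach((p) => p.postMessage(e.data));  // server -> every tab

  // in each tab
  const worker = new SharedWorker('shared-socket.js');
  worker.port.onmessage = (e) => console.log('from socket:', e.data);
  worker.port.postMessage('subscribe');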


Yeah, shared workers are awesome. I love having a single instance of the core of my app running in a worker, and then each tab the user opens is just a UI view of that one instance. It greatly simplifies keeping things in sync across tabs.

But it's just not possible for Safari... you either have to make your code much more complicated, or add some hack to prevent users from opening your app in multiple tabs at the same time :(


What percentage of your users use Safari? If it isn't impactful to your business, you could ask them to use Firefox or Chrome.

I realize we need more browser diversity and not less, but at the same time we should encourage all participants to keep up so that the burden doesn't fall on smaller players.


Pro tip: if you ask a user to install a different browser, they're just as likely to use a different site instead.


Depends on the market; I don't think it's that uncommon in enterprise, for instance, to have Windows 10-using orgs with virtually no use of the default (Edge) browser but large numbers of users with both IE and Chrome because of (legacy and modern) app demands.


Depends on the user. My GF for some reason uses FF, Edge and Chrome at the same time, most of the time (she's not in/into tech).

So if someone asked her, she would just switch windows.


That sounds like a fairly rare workflow. I've heard of people using different browsers for different things but I don't think it's common.


I guess some people don't care about the browsers very much, and will just use whatever is installed. If there are multiple browsers installed, they may learn to use different browsers as a simple way to have different easily distinguishable browsing contexts (different icon/app name) - instead of having multiple windows of the same browser that looks pretty much indistinguishable.


> they're just as likely to use a different site instead.

Just as likely, or more likely?


Depends on the site. In some cases, less likely. E.g. at this point in time, Facebook or YouTube or even Slack could make a browser change demand with little to no consequences.


YouTube would get the antitrust hammer thrown at it. Probably same with Facebook.


Agreed. I personally find BroadcastChannel more flexible, but they are not in Safari either :(


I don't like how the browser has to structured-clone objects sent between threads. Seems like on low-end devices that could tie up tons of RAM and cause out-of-memory failure modes not seen on beefier phones. Reading the article, the author seems to advocate sending JS events over from the DOM thread to the worker, doing all event processing on the worker, and sending commands from the worker back to the DOM thread that get processed into DOM mutations. It strikes me that OP's approach will offload the bulk of the CPU time spent doing clones onto workers, so at least we got that.

Short of the experimental OffscreenCanvas API[1], I don't see any way to avoid the structured clone memory tie-ups when communicating from worker to dom. Are there any other patterns or approaches that can help limit the memory consumption when employing workers?

[1] https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCa...


You could always use transferable ArrayBuffers and serialize your data with Flatbuffers or something similar.
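
A sketch of the transfer-list form of postMessage (assuming a `worker` object already exists):

  const data = new Float32Array(1024).map(() => Math.random());
  worker.postMessage({ samples: data }, [data.buffer]); // the buffer is moved, not copied
  // after this call, data.buffer is detached in the sending thread

  // in the worker
  self.onmessage = ({ data: { samples } }) => {
    // `samples` is still a Float32Array view over the transferred buffer
  };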

That being said, in all the off-main-thread apps I have written so far, the cost of structured cloning was pretty much irrelevant.


Good to know, thanks


Web Workers were added to Firefox ten years ago this month, so they're definitely not new technology. I think this is a great call to action to actually start taking advantage of them.


Reading this thread, I was thinking this seems like it's turning into XHR all over again. It was a long time from XHR to 'ajax'.


People have been taking plenty of advantage; they are an indispensable tool for turning all of these CPU bugs or DRAM attacks into working browser exploits. Also those bitcoin miners, of course.

Maybe if no one else is taking advantage it is time to simply throw them out.


“I don’t use it, therefore no one else uses it”

As an example, emscripten uses WebWorkers and SharedArrayBuffers to implement multithreading (unfortunately or not, Chrome is the only browser that supports SharedArrayBuffer, others have disabled it).


I suggest you look up why they disabled it. And that is still not an application of WebWorkers!

It would be much more helpful to the cause if you could point to some actually used app out there principally relying on WebWorkers. Until then, it's been a lot of pain, zero gain.


a bit niche, but here's one of my projects - svgnest.com

in addition to the render blocking issue brought up by this article, there are a lot of applications for cpu-bound web apps (eg. anything requiring machine learning or computer vision), but in my experience web workers are not very reliable for this purpose.


I use webworkers to run an in-memory streaming pivoting engine (compiled from C++ to wasm) to build real-time Tableau-like dashboards that run fully in the browser (which runs fairly well for 10M row datasets or less). Network IO and the compute is handled in the webworker, which allows the main UI thread to remain responsive.


This whole thread was quite overwhelming. I was starting to feel, can we just have a normal CPU machine and some C in it? That is horrible too, but in a way I can understand. And sure enough, emscripten gets mentioned. :-D


are you updating diep.io? it is showing lastModified 17th June 2019


They're pretty important for this free online graphing calculator (widely used by high school students and teachers and even starting to be used on some standardized tests): http://desmos.com/c

There is also some niche use as a component of sandboxing: https://gist.github.com/pfrazee/8949363

Your point is well-made though. Very few kinds of webapps are CPU-bound, graphing calculators being notable as an exception (and even then, Desmos made do with the main thread before Web Workers were usable). And while lots of webapps could use plugins, browsers could provide an API for sandboxing code that runs in the main thread, or with a GIL.

The reality is, though, the cat is out of the bag. WHATWG is committed to making the Web into a fully-featured platform for WORA apps with all the same capabilities as a desktop app, and multithreading is one of those capabilities. Even if the next Spectre/Meltdown dooms Web Workers in their current incarnation, browsers will run workers in Docker containers if they have to.


FWIW I believe that web worker adoption is more limited by tooling/knowledge than APIs. Web developers haven't thought about thread safety for decades (at least for those who have been around for that long) so it's more of a mental leap for web developers to think in terms of multiple threads.

I had some promising (at the very least interesting) results with react-native-dom which runs all of React's reconciliation & your own business logic in a web worker: https://rndom-movie-demo.now.sh. I'll fully admit there's a lot more exploration/experimentation left to be done in this space though.


One correction: I don't think thread safety is the issue at all. First of all, existing Javascript may not have parallelism in user code, but it does have concurrency. This means you absolutely have to deal with race conditions and resource sharing. Secondly, web workers can't access the mutable state in parallel, because they can only share data via message passing of copied data. This is what prevents thread safety from being a concern, while still enabling parallelism.


If you really care about the performance of your site on the mobiles that poor people use then render server-side as much as you can.


Yeah, that's been pretty much our guidance (I'm on Surma's team). But we know you can build interactive experiences on the client on these low-end devices too; it just has to be off the main thread. In the end, it's a blend.


But they may have a bad network connection as well.


Correct me if I'm wrong, but isn't one of the uses of web workers to do resource management via local storage, so that multiple tabs can share one set of resources?

Not everybody is using web workers for that, but that could address both issues to a degree.


Not related to the article, but the design and CSS styling of the blog is really interesting. It makes use of CSS variables as well as a couple of properties that I've never seen before (--mask, font-variation-settings).


The --mask is probably a CSS variable.


Thank goodness for Safari's reader mode. This article was unreadable without it. Why do people do this?


Can you elaborate? I enjoyed the layout, colors, fonts etc., but I'm curious why the page is not acceptable.


Really, really wish you could do canvas manipulation with web workers. There's some experimental browser stuff, but nothing standard. I'd like to be able to do image manipulation with web workers.


OffscreenCanvas is exactly that! It's standardized, but only chrome has implemented it so far, sadly.


I don't think it's entirely true that desktop applications use separate threads for all tasks except UI-related tasks. At least not for old Win32-based applications. I think it's pretty common to do other small tasks in the main thread too, and to spawn new threads only for queues or other work that you know will perceptibly halt the main user interface thread.


I really enjoyed the article! Thanks! Your other posts look interesting and I have bookmarked your blog.

A tiny typo in the second-to-last section: specture -> spectrum

Thanks again!


Hey @dassurma, can you explain your task scheduling function? As far as I can tell it schedules a micro task, and there is no meaningful difference between that and other microtask scheduling systems: https://codepen.io/ruphin/pen/qzbgYr?editors=0012


All your logs are executed immediately. .then() takes a callback function!!

FYI: AsyncTask and MicroTask are equivalent, and so are my task() function and your OnMessageTask


Ah, I messed up in my haste to experiment :)

I expanded my tests a bit, and your version does indeed queue a task and not a microtask. I did find it to be somewhat less reliable than setTimeout; is there a particular reason you chose this scheme instead of a setTimeout-based solution to queue tasks?

Also, AsyncTask and MicroTask are not fully equivalent. MicroTask is equivalent to a bare `await 0;`, but because AsyncTask wraps that in another async function it will queue a second micro task when the first one resolves, and resolve the promise when the second micro task resolves :)


Why do you think it's less reliable? setTimeout gets clamped by the browser to a minimum of 4ms, so you waste a lot of time when all you want is a task boundary.


I tried so many janky different ways that I don't quite remember how to reproduce it, but I found that setTimeout was more likely to always execute once per animationFrame, where using MessageChannel would sometimes execute more than once per frame. I guess the minimum timeout added to setTimeout makes it more likely to not hit twice between paints. The test setup I have now gets pretty clean results.

One final question would be why you have this uid system with attaching and removing event listeners? I think MessageChannels are order-preserving, so you can make do with a single EventListener that consumes events off a queue, or am I missing something?

  const { port1, port2 } = new MessageChannel();
  const taskQueue = [];
  port2.start();
  port2.addEventListener("message", () => {
    taskQueue.shift()();
  });

  const task = () => {
    return new Promise(resolve => {
      taskQueue.push(resolve);
      port1.postMessage(0);
    });
  };


One good use case I had for WebWorkers was running heavy regexes on strings. It was easy to decouple from the web app, as I just had to pass the two strings to the worker (the text to scan and the regex expression) and get back the result array. There is, however, a performance penalty there, as the browser will ‘favor’ the main thread for rendering and UX.
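
Roughly the shape of that setup, as a sketch (file and variable names are made up):

  // regex-worker.js
  self.onmessage = ({ data: { text, pattern } }) => {
    const matches = text.match(new RegExp(pattern, 'g')) || [];
    self.postMessage(matches);
  };

  // main thread
  const worker = new Worker('regex-worker.js');
  worker.onmessage = ({ data }) => console.log('matches:', data);
  worker.postMessage({ text: someLongString, pattern: '\\bfoo\\w*' }); // someLongString is hypothetical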


I’ve been experimenting with all kinds of things and ideas for making a Rust-powered web frontend system that is designed to be completely functional with server-side rendering, with scripting on the frontend being deliberately optional, bringing things back closer to the old-style non-isomorphic server-side rendering. That means doing things like wrapping all the buttons in forms so that, in the absence of local scripting to intercept the click and handle it on the client side in a potentially more efficient way or with transitions or such, it is left to the server to render the result as it would have been after that button was clicked.

In conjunction with this, I’ve long been thinking about a new style of isomorphic rendering: where instead of running on the server and in the DOM, you run the server code on the server and in a service worker on the client side. (Aside: Svelte has been a major inspiration in the last few months; learn from it that SSR doesn’t need to mean VDOM: if you’re a compiler you can take different approaches.) Cloudflare Workers, when it came along, brought clarity to what I had been thinking at about that time, that it might work to have a pure API backend, and an HTML renderer that speaks to that API and can run either on the server or locally in a service worker on the client: truly use the same interface for both. (To clarify: this approach does not preclude additional client-side scripting for interactivity; but it would lend itself to a light hand on client-side interactivity.)

Now to the relevant point here: a couple of weeks ago I started toying with the idea of doing just about all of the client-side scripting (I mean the stuff for interactivity without full page loads) in workers, leaving the work done on the UI thread being purely applying UI changes that were even calculated in a worker, and event dispatch. This might be able to be slotted into the service worker in some way, or it might need to be another worker.

The essence of what I have in mind is that rendering in the worker would, instead of applying changes to the DOM, emit a byte code (I’ve been looking a very little into Glimmer.js’s), which can then be fairly efficiently passed back to the UI thread through one ArrayBuffer or SharedArrayBuffer, to be applied by a small unit of code (probably JS rather than Rust) to the DOM. In the last few days I’ve been playing with events, and adding the listeners on the document root and doing dispatch manually, through components more than through elements, in a way that lets the framework deal with hierarchical ownership (very Rusty, allowing you to skip GC/RC types) rather than something closer to the ECS style (what is mostly done for UI things in Rust), and I think the approach has promise. (Apart from the hierarchical ownership aspect, this is basically what our framework Overture that we use for FastMail does—and I should clarify at this point that these experiments of mine are personal and nothing whatsoever to do with FastMail, where we have no workers at all, like almost all sites—though I wrote one this very day that might be deployed in the coming week).

Events would be serialised on the UI thread and passed through to the worker. All events would thus need to be passive (i.e. no preventDefault()); though there will doubtless need to be some alternative channel for events that need to preventDefault, most notably clicking on links that should route instead, and form submit; I’m not sure how that will work.
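
Grossly simplified, the shape could look something like this (plain JSON commands standing in for the byte code; element ids and names are purely illustrative, not an actual design):

  // ui-thread.js
  const worker = new Worker('render-worker.js');
  document.addEventListener('click', (e) => {
    // forward a serialisable subset of the event; the listener stays passive
    worker.postMessage({ type: 'click', targetId: e.target.id });
  });
  worker.onmessage = ({ data: commands }) => {
    // the only DOM work on the UI thread: applying the worker's mutation list
    for (const cmd of commands) {
      if (cmd.op === 'setText') {
        document.getElementById(cmd.id).textContent = cmd.value;
      }
    }
  };

  // render-worker.js
  let count = 0;
  self.onmessage = ({ data: event }) => {
    if (event.type === 'click' && event.targetId === 'increment') {
      count++;
      self.postMessage([{ op: 'setText', id: 'counter', value: String(count) }]);
    }
  };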

Some parts of this I’ve written code for, to experiment with ideas, but especially the parts involved with workers have almost entirely been thought experiments. I’ve been thinking about the approaches SwiftUI takes too. Lots of interesting stuff to learn from it as well.

I’ve written all of this purely for thinking about. Maybe others will find it interesting. (I shan’t be able to respond to anyone that replies for the best part of a day.)


> there will doubtless need to be some alternative channel for events that need to preventDefault, most notably clicking on links that should route instead, and form submit; I’m not sure how that will work.

Naively I would expect that to be part of the rendered DOM ("<a prevent-default='click'>"), or otherwise in a declarative data structure available to the UI thread. Are there many events that need conditional preventDefault, such that the JS handling code would grow nontrivial?


I thought about it a bit longer after I wrote it and decided that what you need is some sort of pure function that takes the event target DOM element only, and decides whether the default should be prevented. For example, in FastMail we use a function to intercept links that will essentially check whether the href is routable (fiddlier than you might guess, unfortunately) and not target=_blank; that could be reduced to such a pure function, though various routes would now need to be specified in two places. Either that, or just always pipe generated <a href> elements through some function in the worker that annotates them—I like your idea, thanks for the thoughts!

Either way, I can’t think of anything where you need fully conditional preventDefault. Definitely something closer to declarative than imperative is good for these sorts of things.


This sounds very interesting. Keep us posted.


For those interested in moving apps (or parts) into a web worker, this project by the AMP team has some fascinating ideas (via DOM API replication in a worker): https://github.com/ampproject/worker-dom


Last time I used web workers (admittedly quite some time ago), they made everything slower. I encourage everyone who uses them to perform tests to ensure they're getting the benefit they think they are.


If there was a vdom library that moved the diffing part to a web worker, would you have used it? Or is it still costly to do any diffing at all?


It's still costly, but at least it'd not be blocking the main thread. I'd definitely give it a try!


Did you ever consider using svelte? Or was it not around when you developed this site?


I have played with Svelte, not used it for any project yet.

When you say, “this site”, do you mean my blog? Because that one’s fully static using 11ty.


Sorry I meant proxx.app


Ah, no Svelte was around, but we were on a pretty tight deadline and had a good amount of experience with Preact in the team. We didn’t feel comfortable increasing the risk by using an unknown framework.


Got it. Thanks!


Has anyone used web workers at the edge ... like with cloudflare workers?


Despite the similar naming, "Web Workers" is a specific browser API that has nothing to do with "the edge" or any server-side thing.


Point is well taken. I have not tried cloudflare workers yet. It seemed like at least the architecture was the same. But I guess you would not have access to the full browser API. It seems like you can still play around with request / response side of things though.


Cloudflare Workers is so-named because it uses the same API as Service Workers, which are one kind of Web Worker. That said, Cloudflare Workers currently implements only a subset of the APIs that are normally available in the browser, though missing APIs are being added all the time.

So, your understanding is basically correct.

(I'm the tech lead for Cloudflare Workers.)


What is the use-case?

Why is static not better?

What are the edges?


Static _is_ better for a good bunch of performance metrics. But what about interactive content like https://proxx.app?

The article is mostly aimed at use-cases where some sort of logic in JavaScript is required.


Yes @dassurma. But web workers are not everything. Your own blog post does not score 100: https://developers.google.com/speed/pagespeed/insights/?url=... And those issues are not necessarily addressed with web workers. Good basic static design matters too.


I never claimed that Web Workers are the silver bullet that will absolve us from all our problems. On the contrary, I don’t think such a silver bullet will ever exist.

I am saying that we should be using web workers to keep the main thread free. That is completely orthogonal to good static design, proper bundling, right caching headers, code splitting, asset hashing etc etc.


I agree with you on keeping the main thread free. What are your thoughts on WASM via workers?


We are using loads of Wasm in Workers in https://squoosh.app

Since Wasm is synchronous, I’d say it should almost always be run in workers except if the module needs access to some main-thread-only API.
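
The basic shape, as a sketch (module and export names are placeholders):

  // worker.js
  const exportsPromise = WebAssembly.instantiateStreaming(fetch('module.wasm'))
    .then(({ instance }) => instance.exports);

  self.onmessage = async ({ data }) => {
    const exports = await exportsPromise;
    // the synchronous Wasm call blocks only this worker, never the main thread
    self.postMessage(exports.heavyFunction(data));
  };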


They scored a 96. There is exactly one issue, and it’s esoteric in my opinion.


Heavily interactive content in the browser is an edge case for the vast majority of mobile users.

Outside of that edge case, the best solution is usually to use JS to sprinkle in functionality, not to do heavy lifting.


Within that edge case, an app (possibly using react native, if you love js) is often a better choice.


I think this is BS; practically all of the performance problems on the Web are related either to the DOM or CSS, which Web Workers can't fix.

If you have a phone from 2014, something's gonna jank. It doesn't mean you're excluded. It means you'll have to live with jank. You'll get over it.


Many websites have too much of everything, but UI-thread JS is definitely the most common big performance problem that prevents you from using pages. It’s the area where an architectural re-think has the biggest effect, though certainly just cutting down on everything will improve performance too.


Can you show me an example? Like I said, I call BS on this. What website does a non-trivial amount of pure JS on the main thread? What even is the use-case? React reconciliation?

Secondly, how does it prevent you from using a website? Sure, jank is annoying, but it doesn't entirely prevent you from using the site. Perhaps you give up on the site, but that's up to you.


> What website does a non-trivial amount of pure JS on the main thread?

I challenge you to run a browser with no ad-blocker installed for one week, then block JavaScript (e.g. with NoScript; or with uMatrix and block scripts; or by setting javascript.enabled to false in Firefox).

Your eyes will be opened at just how much JS is run and how much it slows things down.

> What even is the use-case?

Mostly analytics, ad-serving, privacy invasion and laziness/inefficient coding.

> how does it prevent you from using a website?

When it’s bad enough (which it often is, on slower mobile devices), you reach the point where you can’t perform tasks in a timely fashion. I chose the word “prevents” deliberately, because that is how it ends up.


> I challenge you to run a browser with no ad-blocker installed for one week, then block JavaScript (e.g. with NoScript; or with uMatrix and block scripts; or by setting javascript.enabled to false in Firefox).

I was looking for an example of a "bad website" that I could profile, not to see how disabling Javascript makes things faster. Of course it does make things faster. It also breaks everything.

> Mostly analytics, ad-serving, privacy invasion and laziness/inefficient coding.

I remain unconvinced that most any of this could be moved out to a web worker. It sounds more like a "death by a thousand paper cuts".

> When it’s bad enough (which it often is, on slower mobile devices), you reach the point where you can’t perform tasks in a timely fashion.

I do use slower mobile devices for testing other websites; I have yet to come across one site that's absolutely unusable. Granted, I didn't test the whole WWW.

I'll ask you again: Show me one example of such a site, where I can fire up the profiler and observe that it's really the Javascript code and not the DOM or CSS/Layout reflows that's causing the performance issues. Only then can we start figuring out if Web Workers might help - which often they can't, because they are very limited.



