Show HN: Parallel DOM – Upgrade your DOM to be multithreaded (pdom.dev)
73 points by ashubham 6 days ago | hide | past | favorite | 78 comments

The demo fell a little flat because the "before parallel DOM" and "after parallel DOM" boxes were almost exactly in sync for me (Firefox, MacOS).

It does say right on the homepage:

> What browsers are currently supported ? Since we depend on the `Origin-Agent-Cluster` header being honored by the browser, we are currently limited to Chrome and Edge.

Maybe it's just me, but I'd argue that they should bring compatibility info closer to the demo section. While I do appreciate the mention in the FAQ at the bottom, that is pretty far away from where the demo occurs.

So, Chrome. Disappointing to announce it as 'upgrade your DOM'.

And Edge. This feature has positive signals from Safari/Firefox.

In case you didn't know, Edge (like Brave, Vivaldi, Opera GX, etc.) is these days just the Chromium engine (Blink) with a Microsoft skin.

I used chrome to confirm whether my DOM was upgraded, and saw the same result. The graphs were very synched up.

It's unintuitive, but I'm pretty sure we're supposed to be looking at the spinning square and not the graphs.

After a while the computations get harder to the point that one of the square's animations starts to become janky, while the other stays smooth.

This example is not intuitive.

The square should be more prominent than the graph, otherwise users will think that they're supposed to be looking at the graphs.

You might have to wait a few seconds to see the "before" pane's FPS drop on Chrome/Edge.

For 80% of all web traffic, I would still feel it's an upgrade.

This is why Chrome is the new Internet Explorer. I dread the day that common websites show a “requires a Chromium browser” pop up.

Except it’s literally the opposite. Chrome gets a lot of hate for being associated with Google, but it’s always innovating and implementing the latest standards. Internet Explorer, at least in its last years, was the complete opposite.

Sure, let's take 80% of market share and track users for our own benefits. Standards.

Use Brave. No tracking, latest features. FF needs to get their shit together honestly.

80% of all desktop traffic. Overall maybe 30-45% of all web traffic.

The non-parallel one was faster for me... but also on FF/macOS.

Unfortunately, the new browser features used in the project are currently supported on Chrome/Edge only. But, FF/Safari have shown positive intent.

I am an FF user myself and super sad to see that :(

Yeah, I did a comparison between Chrome and Firefox, as I did not see any difference between the panes on Firefox.

It does seem to work in Chrome though.

Same here. Pdom stayed at 60fps in Chrome, but in Firefox it had the same framerate drop-off as the sync DOM.

On an M2 Mac using Chrome, the before dropped down to 60 FPS / 10 MB, while the after was 120 FPS / 5 MB.

You have to wait a while until the before box starts to stutter, maybe 30-60 secs.

It seems the “threading” mechanism depends on the browser’s own process model? As in, you’ll get another “thread” iff the browser creates a separate process for the cross-origin iframe?

That’s a clever hack where it works. For what it’s worth, it doesn’t seem to work on mobile Safari, at least judging by both examples slowing down at roughly the same rate. In which case, the “parallel” example is very slightly slower, presumably due to marginal overhead of the cross-frame mechanism.

Yes, correct. Safari/FF have shown positive intent to implement this, FWIW.

They say they’re looking at implementing origin-agent-cluster. That doesn’t mean they’ll change their internal process model. There is no specification for how a browser runs processes. Even Chrome might decide to use threads instead of processes for iframes.

Correct. A different thread is good too, as long as it gets resources separate from the main DOM.

I don't understand what it does and why.

What kind of DOM computations are so intensive they need to be parallel?

Most intensive computations are done in JS, which is not related to the DOM and can be run in Web Workers if you need parallel execution.
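To make the parent comment concrete, here is a minimal sketch of that pattern: a pure computation (a prime sieve as a stand-in for "heavy work") that could be moved off the main thread. The Blob-URL worker part is browser-only and shown as comments; all names are illustrative.

```javascript
// Pure computation we want off the main thread: count primes below n
// (sieve of Eratosthenes).
function countPrimes(n) {
  const sieve = new Uint8Array(n).fill(1);
  let count = 0;
  for (let i = 2; i < n; i++) {
    if (!sieve[i]) continue;     // already marked composite
    count++;
    for (let j = i * i; j < n; j += i) sieve[j] = 0;
  }
  return count;
}

// In a browser, the same function can run in a Web Worker built from a
// Blob URL, keeping the main thread (and thus the DOM) responsive:
//
//   const src = `self.onmessage = e => postMessage((${countPrimes})(e.data));`;
//   const worker = new Worker(URL.createObjectURL(new Blob([src])));
//   worker.onmessage = e => console.log("primes:", e.data);
//   worker.postMessage(1_000_000);
```

Note the limitation the thread keeps circling back to: the worker can compute the primes, but it cannot touch the DOM to plot them.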

I'm wondering the same thing. It seems like it'll primarily be used as a bandaid for poorly written applications to run slightly less slowly. Lovely, now some garbage page can hog more of my CPU, this is exactly what I wanted.

Heavy data visualizations, interactive infographics, etc. Also, if your app has the capability to run third-party plugins, you generally want to run them in a separate context for security reasons. With PDom, you also isolate yourself from the perf implications the third-party code may have.

> Heavy data visualizations, interactive infographics etc

If they are so heavy, shouldn't the visualization be WebGL? The DOM is for hierarchical information.

There are no good WebGL charting libraries. SVG is the gold standard, as the fidelity is way higher. No one wants to look at charts that look like science experiments.

What are "third party plugins" in this context?

Let's say your application supports a plugin marketplace where the community can build plugins for your app.

For e.g., a TestRail plugin for JIRA or a diagramming plugin in Google Docs.

You would want to run these plugins in their own DOM so that they don't accidentally slow down the main app.

Furthermore, how can a racey multi threaded DOM end well? AFAIK, most GUI rendering in apps is single threaded.

I assume you meant "race conditions" when you say "racey multithreaded DOM". The multithreaded part here is still isolated in its own context (iframe); you should never have a race condition with your main DOM thread.
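The reason there are no races is worth spelling out: the host page and the frame only exchange structured-clone copies via postMessage, so each side mutates only its own state. A sketch of that shape (illustrative names, not PDom's actual API):

```javascript
// Host page side (browser-only, shown as comments):
//   const frame = document.querySelector("iframe#pdom");
//   frame.contentWindow.postMessage({ type: "render", points: data }, "*");
//
// Frame side, applying messages to its own copy of the state:
//   let state = { points: [] };
//   window.addEventListener("message", (e) => {
//     state = applyMessage(state, e.data);
//   });

// The only shared surface is the cloned message itself; applying it is a
// pure state transition, so neither side can race on the other's memory.
function applyMessage(state, msg) {
  switch (msg.type) {
    case "render": return { ...state, points: msg.points };
    case "clear":  return { ...state, points: [] };
    default:       return state; // ignore unknown message types
  }
}
```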

I don’t know if I agree with that last bit. “Don’t do work on the GUI thread” is basically a mantra. It’s also not very effective to show a progress bar in a GUI if the rest of the GUI is unresponsive.

classic web engineering demo:

  - small, well-contained math operation that could be put in a worker
  - instead of putting it in a worker, start up a whole parallel page and shuttle messages via IPC for the heaviest part of the process, doing a lot of the work twice
why are the solutions always backward?

Even the visualization is drawn in the parallel frame, which CANNOT be done in a worker. If you wait long enough on the demo page, you will see how much time just redrawing the DOM consumes.

Requesting performance isolation with the Origin-Agent-Cluster header:


We've strayed too far from God's light. Return to Gopher.

Can't call it hypertext without using hyperthreading!

For anyone interested, I made a public Docker image of the "backend" (which is basically just an index.html and a JS file)[0].

This should make self-hosting a bit easier. You can also find the GitHub repository for the (pretty simple) Dockerfile on GitHub[1].

[0] https://hub.docker.com/repository/docker/uninspiredstudioops...

[1] https://github.com/UninspiredStudio/parallel-dom-docker

Edit: Formatting

I think you’re burying the lede here.

It looks like you have the ability to run an isolated portion of the dom on another thread and you can communicate with it with this library? Am I close?

I think more examples could help. Could you have 2-3 real time analytics viz/charts running in 2-3 different threads? Is this mostly for desktop or does mobile benefit also?

Thanks for the feedback.

You could run any number of parallel threads.

This is applicable on both desktop and mobile.

It's crazy that people say that WASM won't replace JavaScript. This demonstrates a use case better suited for languages with better threading models than JavaScript.

Feels like the web world is desperately in need of competent threading capabilities. Can I (practically) use Rust or Go in the browser via wasm already!?

The language has nothing to do with it.

You can use “WASM threads” from Rust or Go. Or JavaScript for that matter. Under the hood it’s all workers and shared array buffer, nothing to do with the language. Last time I checked you can’t even use std::thread for instance in Rust, since the compiler doesn’t understand workers (because workers aren’t actually part of WASM).

None of which will help you when updating the DOM anyway, since that happens in its own thread. It’s up to the browser how they run their rendering engine and I doubt they’re ever likely to allow direct control over that.

> Can I (practically) use Rust or Go in the browser via wasm already!?

You can (built into Rust, and TinyGo for Go)! And WASM threading has been around for a while! https://webassembly.org/features/ But I don't think these WASM threading capabilities are well harnessed by any language yet.

There's been a decade of Web Workers being entirely possible. That does get you multi-threading! With transferable memory and everything. So this isn't some new super non-JS thing. Some folks do use workers, but alas they're not popular enough, and not something React or Angular helps steer folks toward.

What's super neat and tricky here is that the iframe in a separate origin-agent-cluster has a full DOM implementation that's there and raring to go. We could already have tried using jsdom or happy-dom or a WASM-powered alternative in a worker to do something like this. This is super cool, though, because it's much less code! It uses a DOM runtime that's already loaded and available (the browser's) rather than having to ship and load a separate DOM runtime.

This use case is purposefully hamstrung for demo purposes. Rust or Go won't change anything here; you still need to pass messages to/from the worker. Same exact mechanism as if you used a Web Worker, which is already widely available.

But to answer your question, yes you can use Rust and Go practically in the browser. It just isn’t all that helpful except in narrow circumstances.

Fair, you're right that typically you'd have a dedicated UI thread with workers handling business logic.

The difference with Rust and Go is that they have better synchronization capabilities, whereas JavaScript largely forces you to clone data between threads.

So while you may still be messaging worker threads in Rust/Go, threads share memory (which is fast) and you have access to things like atomics and mutexes.

We can use Rust in the browser today with WASM and it's super cool - but without something like WASI, extensive thunking through JavaScript is required, plus there are issues with threading.

I hope that one day we can initialize a wasm module via a script tag with the browser offering a wasi interface

`<script src="module.wasm" type="application/wasi">`

And that browsers allow for threading without the restrictive security headers we have today

There is a proposal to add threading to WASM itself


But that’s not going to help when accessing the DOM.

Oh nice! That will be helpful. I guess WASI doesn't help with DOM access either - that's more about file access and such?

Is there a proposal(s) covering DOM access?

I remember interface types were a thing a few years back. If you have access to a dom object (I assume by pointer?) and it is removed/GC'd - would the wasm module have a null pointer?

> JavaScript largely forces you to clone data between threads.

Isn't that what SharedArrayBuffer is for (avoiding)?

Indeed. And there’s a whole Atomics namespace in the standard library for thread safety while using it.
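A minimal sketch of the SharedArrayBuffer + Atomics combination being discussed (this runs as-is in Node; in browsers, SharedArrayBuffer additionally requires the cross-origin-isolation headers mentioned elsewhere in the thread):

```javascript
// One shared Int32 slot that multiple threads could safely increment.
const sab = new SharedArrayBuffer(4);   // 4 bytes = one Int32
const counter = new Int32Array(sab);

// In a real app, `sab` would be postMessage'd to workers (it is shared,
// not cloned); each side can then operate on it atomically:
Atomics.add(counter, 0, 1);             // atomic increment, no data race
const seen = Atomics.load(counter, 0);  // atomic read

// Atomics.wait / Atomics.notify additionally let a worker block until
// another thread signals, which is how mutexes are built on top of SABs.
```

This also illustrates the limitation raised in the sibling comments: the shared region is just raw numbers, not objects or class instances.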

Again, there are very few actual uses for WASM. The main one is when you have a preexisting library in a different language; other than that you don’t gain any real functionality from using it.


SharedArrayBuffers are helpful for simple data types, but because they are a fixed-size array of bytes, you cannot share things like objects, class instances, or arrays, and you need to manually extend the SAB by creating multiple SAB "pages" and gluing them together. Ultimately it's not useful for most practical use cases.

In the past I have used a SharedArrayBuffer to store a custom Graph implementation sent between threads. I stored the adjacency list in the SAB with the rest of the data being stored as a plain object and cloned/sent via postMessage between threads.

This worked, but it was extremely janky: it required an unsafe custom data structure and manual memory-page management, was difficult to read and review, and the performance was lacklustre because half of the data structure was plain JS objects that needed to be serialized and cloned anyway.

Simply, JavaScript sucks for multithreading and it would be great to have the ability to write high performance, multithreaded, complex web applications in a language designed with parallelism in mind - like Rust or Go.

While wasm _today_ cannot offer that (at least not practically) - there is certainly a use case for replacing JavaScript in that context and I hope that it does.

JS will still be useful for basic applications that just need a little scripting

SAB solves the problem of having a small number of dedicated workers that efficiently handle specific well constrained data processing tasks.

I must question whether the problem of random sites wanting to spin up a whole ton of cores that consume arbitrary amounts of compute and memory really needs to be solved.

Even with multithreaded WASM, the DOM will still only be usable from the page's (or iframe's) main thread, so if you want to do DOM operations not on the page's main thread you'll still need to do the same steps OP's project does (create an iframe and do stuff there).

Call me crazy but, WASM won't replace JavaScript. There are so many cases where a simple script will do. Firing up a terminal, writing in another language, compiling and going through the trouble to turn something simple into WASM is actually the crazy thing to do. Also, I am in no way approving of the techniques described in the article.

Yeah but most big web apps aren’t simple. If wasm ever replaces JS in a significant way, it won’t be replacing simple scripts. It’ll be replacing webpack lol

>It's crazy that people say that WASM won't replace JavaScript.

>big web apps

So you've defined the goalpost as "big web apps". There are billions more use cases for Javascript than however many "big web apps" might exist. So your phrasing is too broad, maybe consider:

>>It's crazy that people say that WASM won't replace JavaScript in big web apps.

Nobody's saying that. You are free to use WASM for your "big web app". Nobody and nothing is stopping you.

WASM certainly has its place, but it also certainly won't replace JavaScript in most of the places JavaScript is used every day. "Big web apps" are (my guess) maybe 1,000,000 of the 49,501,698 websites that use JavaScript, but it would depend on how you define "big web app" (and I really don't want to go down that goal-posting rabbit hole). There are 13.8 million people writing JavaScript every day; how many of them do you think are working on "big web apps" that actually need WASM? How many of them do you think want to switch from JavaScript to Rust? "Developers" always seem to think everyone else should be some rockstar 10x programmer, but most use cases for JavaScript are pretty simple, and yet totally effective for what the requirements typically are.

Damn, I’m just playing devil’s advocate to your point that “WASM won’t replace JS because there is a build process”.

I don’t think everybody needs to be building big web apps, I just think that most full time developers writing JS (or TS) are probably on a team working on a web app, so the introduction of a build process isn’t really an issue.

Of course it’s valid to write good ol’ javascript. I do it all the time. I’m always happy to avoid dealing with webpack

I do love the idea of a dedicated compiler handling my builds. Even with the most modern build tools today, large web applications can take minutes to build which doesn't include type checking, linting and testing. If you're not prioritizing build times as your app grows, this can quickly balloon.

Why would wasm be needed for build time tools? We already use fast compiled languages for modern webpack replacements (esbuild, swc, Bun :: Go, Rust, Zig), and they don’t use wasm.

The demo is computing prime numbers - what does that have to do with the DOM?

It also plots the prime numbers on a visualization, which is DOM work. You will see that, over time, plotting more points becomes time-consuming in the demo.

Pretty interesting. Origin-Agent-Cluster only exists on Chrome/Edge so far, though, and it sounds like it is only intended as a browser hint. One thing I don't quite understand from the example: why would the non-parallel version already be so laggy if it runs on the main thread (1 thread, nothing much else happening on the page), versus the iframe-parallel version, which also just gets 1 thread?

I think the idea is that part of the page would be rendered outside the iframe and part inside it.

That initial load lag seems to be something unrelated to this project.

Please no, this is a solved problem for decades, it will not actually multithread the DOM and more likely just add overhead.

Can you please give more details? "Origin-Agent-Cluster" frames are actually run in a separate subprocess.

See also: https://github.com/krakenjs/zoid which allows you to present a simple interface to sites that want to embed your application and send parameters/register callbacks. All the caveats of frames still apply, of course, and scrolling glitchiness alone is a reason to avoid frames altogether... but if you absolutely need to present your application as a frame, it's a developer-friendly way to do so.

Pdom seems to be a way to do that to yourself, if you can't trust that your content won't cause performance degradations and you absolutely want the context outside that content to stay responsive. I'd only use it, and really any frame, for components where the size is known ahead of time, though; asking a frame to resize itself to fit its contents, when its contents may themselves be resizing to the size of the frame, is a recipe for disaster. Which brings me around to why I try to avoid frames for anything user-facing when at all possible.

Firefox on M1 Max. After Parallel DOM is slower here.

The new browser features used for this project are only supported on Chrome/Edge right now. But FF/Safari have shown positive intent.

Worker threads are so frustrating. Good work

Why reach for iFrames over other technology like WebWorkers?

WebWorkers do not have access to the DOM.

I understand, but things like https://github.com/GoogleChromeLabs/comlink enable it. Similar to how iframes don't have access to their parent page, you need a facilitator. My question is why not use a JS facilitator that could work in all browsers, rather than just Chrome.

I find it an interesting choice that the author decided to invest in new iFrame technology rather than existing multi-thread technology in the browser.

I don't think Comlink supports the DOM either. It's just syntax sugar over Web Workers, making them easier to use rather than providing new functionality over them.
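To illustrate the "syntax sugar" point: a Comlink-style wrapper is essentially a Proxy that turns async method calls into postMessage round-trips. Here is a toy version using an in-process endpoint pair instead of a real Worker (purely illustrative; real Comlink uses actual Workers/MessageChannels and handles much more):

```javascript
// Two fake "endpoints" whose postMessage delivers to the other's handler,
// standing in for the two sides of a worker boundary.
function makeEndpointPair() {
  const a = { handler: null, postMessage: (m) => queueMicrotask(() => b.handler(m)) };
  const b = { handler: null, postMessage: (m) => queueMicrotask(() => a.handler(m)) };
  return [a, b];
}

// "Worker" side: answer each call message with a result message.
function expose(api, endpoint) {
  endpoint.handler = ({ id, method, args }) =>
    endpoint.postMessage({ id, result: api[method](...args) });
}

// "Main thread" side: a Proxy that turns api.foo(x) into a message
// round-trip and resolves the returned Promise with the reply.
function wrap(endpoint) {
  let nextId = 0;
  const pending = new Map();
  endpoint.handler = ({ id, result }) => {
    pending.get(id)(result);
    pending.delete(id);
  };
  return new Proxy({}, {
    get: (_target, method) => (...args) =>
      new Promise((resolve) => {
        const id = nextId++;
        pending.set(id, resolve);
        endpoint.postMessage({ id, method, args });
      }),
  });
}
```

Note that everything crossing the boundary is a plain message; nothing here gives the remote side a handle to the caller's DOM, which is exactly the limitation being discussed.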

That's part of the FAQ?

I see it now, I don't know how I missed it.
