Hacker News
Intent to Implement: Display Locking (docs.google.com)
87 points by bpierre 6 months ago | 43 comments

This reminds me of the time many years ago when we worked on a VB app that had to resort to the sledgehammer of Win32's LockWindowUpdate API call to do some hefty calculations and not "freeze" the UI even though we were, in fact, freezing the UI.


Interesting. My (perhaps wrong) mental model of the jank effect, absent display locking, involves resizing or showing a column or pane in a web app: during the resize, the content in both panes is not updated until the resize is complete, under certain (most?) conditions.

"If an undue delay is likely to be caused, the work already completed is processed and the update phase yields to other update phases for unlocked content."

My interpretation for a pane/column resize or hidden-to-visible operation: display locking reduces jank if I can operate on these elements with the display-lock tools, and that reduction produces more fluid updates in the elements not affected by the lock.

Question: if sub-elements have complicated draw/render cycles, how will the interplay of locks at different layers affect the result? I wonder about this when composing objects from libraries that use these locks (or when building sub-components myself).

Wow! When I first learned web development way back when, I was surprised that something like HyperCard's lock screen didn't exist/wasn't possible. Being able to do this on an element level is a huge improvement over that. This is great.

Has there been substantial discussion about this feature? Neither of the two links provided lead to deep discussions about this. It doesn't seem like this is related to any agreed upon standards process.

As far as I can tell, this is being presented as a change to the programming model which will be undertaken unilaterally. Are we back to the days where one browser gets to decide how the web works, and everyone else can catch up if they like?

This is just about implementation, not shipping. It's still extremely early in the process. See:


Is the point of this just to reduce visible jank, or to obsolete shadow-dom implementations by replacing them with lock->modify->unlock?

There's a more detailed explanation of the proposed API at:


TIL jank is an actual term people use in formal documents.

Both as a noun ("causes jank") and as a verb ("the page janks")!

Unlike formal languages, human language doesn't evolve via RFCs or ItIs.

(OK, formal languages don't, really, either. But they do tend to stick stakes in the ground via those mechanisms, for which most human languages don't have any analogous concept.)

Jank is an actual term, it’s the time-derivative of acceleration. Page jank looks how motion jank feels; unexpected and uncomfortable.

Jerk is the derivative of acceleration. I haven't heard jank used in that context.

Brainfart, my bad.

It's a daft and janky adverb.

Will this allow content providers to lock content so that ad blockers can't change the content ?

No, this is about increasing performance by taking elements out of the slow DOM update cycle

>> No, this is about increasing performance by taking elements out of the slow DOM update cycle

The question was not about the intent of the feature, but a possible use case that may not have been intended. A more specific reason for the "No" could be more informative.

Interesting idea - though I think any kind of script API can be circumvented by ad blockers pretty easily: They can always inject their own script before anything from the page is executed and manipulate the API functions or replace them with "do nothing" implementations.


It was a really short doc and I didn't see it answer this question at all. Where do you see it answering it?

Yup that is it.

This is one of multiple proposals to make DOM read/writes more performant. More are listed here: https://github.com/chrishtr/async-dom

For someone who doesn’t follow lower level browser dev that closely, this seems like an optimization pretty specifically for a react style ‘nested redraws’ use case. Is this at all correct?

Basically, "<ul id='foo'><li>...</li></ul>" && #foo.append( ...li... ) may cause a redraw, recalc, and reflow of layout + styles for each LI you add to the UL, as well as for the rest of the DOM containing the UL.

However, if you could do "BEGIN TRANSACTION; ... #foo.append(...); ... END TRANSACTION;", you'd have a mechanism to "freeze" all or some of the display (think of it as a subset double buffer) and "blit" the changed DOM to the UI when you're done (with the possibility of a "CONTINUE TRANSACTION").

Imagine a simple list of search results with alternating row colors (white/grey backgrounds) specified by CSS (:nth-child(even)).

If you prepend elements instead of appending them, it might cause all elements to change color and re-render on each insertion.

If you append elements instead, it's likely more efficient on a per-element basis than prepending; but if you could "lock" the affected display area until your for-loop is done, the browser could avoid updating the area at all until the "COMMIT" call (multiple actions, with a single commit resolution at the end).
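The parity shift behind the prepend case can be sketched with a toy model of the :nth-child(even) rule (a sketch only, with row indices 1-based as in CSS; no real DOM involved):

```javascript
// Toy model of the zebra-stripe example: under an :nth-child(even)
// rule, a row's colour depends on its index, so prepending shifts
// every index by one and flips every row, while appending leaves
// existing rows untouched.
function rowColors(n) {
  return Array.from({ length: n }, (_, i) => ((i + 1) % 2 === 0 ? 'grey' : 'white'));
}

const before = rowColors(4);                  // colours of a 4-row list
const afterAppend = rowColors(5).slice(0, 4); // old rows after appending one
const afterPrepend = rowColors(5).slice(1);   // old rows after prepending one

// Appending: no existing row changes colour.
console.log(before.every((c, i) => c === afterAppend[i]));  // true
// Prepending: every existing row changes colour.
console.log(before.every((c, i) => c !== afterPrepend[i])); // true
```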

Think of it as "batch these DOM updates...": Git/SVN multi-file commits vs. CVS (single-file) commits.
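That batching can be sketched with a toy cost model (a stand-in for the browser, assuming each mutation of "live" DOM costs one layout pass; none of this is real DOM code):

```javascript
// Toy cost model of batched vs. unbatched DOM appends. Mutating a
// "live" node costs one layout pass; building off-screen and
// attaching once costs a single pass.
class ToyNode {
  constructor(live = false) {
    this.children = [];
    this.live = live;      // attached to the visible "document"?
    this.layoutPasses = 0; // how often the "browser" had to relayout
  }
  append(child) {
    this.children.push(child);
    if (this.live) this.layoutPasses += 1;
  }
}

// Unbatched: 100 appends straight into the live list -> 100 passes.
const liveList = new ToyNode(true);
for (let i = 0; i < 100; i++) liveList.append(new ToyNode());

// Batched: build off-screen (cf. DocumentFragment, or the proposed
// lock/commit), then attach once -> 1 pass.
const batchedList = new ToyNode(true);
const batch = new ToyNode(false);
for (let i = 0; i < 100; i++) batch.append(new ToyNode());
batchedList.append(batch);

console.log(liveList.layoutPasses);    // 100
console.log(batchedList.layoutPasses); // 1
```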

For many such cases, i.e. layout changes triggering the recalculation of siblings or ancestors, contain[0] might be a more lightweight alternative.

[0] https://developer.mozilla.org/en-US/docs/Web/CSS/contain
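For reference, a minimal sketch of what that might look like (the class name is made up; see the MDN page above for the accepted values):

```css
/* Hint that this pane's layout and paint are self-contained, so
   mutations inside it need not invalidate siblings or ancestors. */
.results-pane {
  contain: layout paint;
}
```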

Isn't that what document fragments are for? If your changes are all one tree you can build it then insert it all, right?

(Or set inner html? :-)

I would actually say that non-React-style code would benefit even more. Suppose the user clicks a different sort tab on a table, and the code is busily removing and re-inserting elements in the DOM, which the browser attempts to lay out and display at the same time, leading to a lot of wasted calculation and visible "jank".

"...without jank."

Is this a technical term?

I think it's a reasonably common technical term in Android circles (possibly elsewhere) that means "non-smooth animations" or "skipped frames".


I've heard "jank" used almost always when referring to visual stutter caused by mass-style changes, both in my current role at Atlassian as well as Microsoft. I think it's pretty standard.

It's a term of art in the graphics community, you'll see articles and talks by people like John Carmack on VR mentioning it quite a bit.

I think what they're describing is actually a glitch in the technical sense: an unintended output state caused by not being able to change all inputs at exactly the same time.

Why not just double-buffering the DOM? That'd be much simpler. Mutate your DOM all you want, then swap it in when it's ready.

This allows the browser to delay all style, layout, and paint work on attached DOM, and then also spread that work across multiple frames during a commit, reducing jank for the unlocked portions of the page.

If you synchronously swap a large portion of DOM, style, layout, and paint will likely blow your frame budget.

Either I don't understand double buffering very well (not unlikely) or I don't know what you mean by "frame budget". Wouldn't swapping one buffer for the other be faster than most or all of the computations going into calculating that background buffer and easily doable in a single frame?

What am I missing?

Normal double buffering involves doing something trivial at the moment of the buffer swap, like changing a pointer or, at most, doing a large-ish memory copy. Double-buffering the DOM would either not solve the problem (in the case of doing the layout computations in the background and then swapping in the already-laid-out DOM) or would involve doing all the layout computations at once when the swap happens, which would be mostly equivalent to loading a new page: slow, and in many cases too slow to happen at 30, 60, or whatever Hz.

We can already do that: clone first and replaceChild afterwards.

It breaks already attached event handlers, undo buffers for input elements, and any existing references you have to DOM nodes.
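That pitfall can be illustrated with a toy (plain objects stand in for DOM nodes; this is not real DOM code):

```javascript
// Toy illustration of the clone-then-replace pitfall: the swap works,
// but references captured before it (event targets, input state) now
// point at the detached old subtree.
const parent = { child: { text: 'old', onclick: () => 'handler' } };
const oldRef = parent.child;                   // e.g. a saved node reference

const clone = { text: parent.child.text, onclick: null }; // handlers don't clone
clone.text = 'new';                            // mutate the off-screen copy
parent.child = clone;                          // the "replaceChild" swap

console.log(parent.child.text);        // 'new'  -- the swap itself worked
console.log(oldRef === parent.child);  // false  -- saved reference is stale
console.log(parent.child.onclick);     // null   -- listener was lost
```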

The DOM isn't a buffer, it's a tree; doing a deep copy would be an expensive operation incurring lots of little allocations.
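A toy model of that cost (assuming one allocation per copied node; not a real DOM):

```javascript
// Double-buffering a tree means one allocation per node, unlike a
// framebuffer swap, which is a single pointer exchange. Here we just
// count allocations during a deep copy.
let allocations = 0;
function makeNode(children = []) {
  allocations += 1;
  return { children };
}
function deepCopy(node) {
  return makeNode(node.children.map(deepCopy));
}

// A small "page": root -> 10 children -> 10 grandchildren each = 111 nodes.
const root = makeNode(
  Array.from({ length: 10 }, () =>
    makeNode(Array.from({ length: 10 }, () => makeNode()))));

allocations = 0;             // reset; now count only the copy
const back = deepCopy(root); // the would-be back buffer
console.log(allocations);    // 111: one allocation per copied node
```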

That's essentially what is already done, without the browser's assistance, with "virtual DOM"-style libraries. The point of this work is to obviate the need for a virtual DOM.

Possibly race conditions between scripts which see different versions of the DOM.

What is the difference? This allows you to mutate the DOM all you want and then have it redrawn when it’s ready.

Gamers know triple-buffering is obviously superior.

My buffers go to eleven.

