
CSS Containment Specification - _bxg1
https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Containment
======
_bxg1
For those who are a little unclear on what this is for, there are typically
several layers of rendering at play in a modern web app:

1) Re-rendering of virtual DOM content

2) Mutating the real DOM to match that new virtual DOM

3) The browser actually drawing the new state of the DOM/styles

#1 is your JS code. #2 is covered opaquely by React and peers. #3 is
traditionally covered opaquely by the web browser.

This API is a way of giving hints to #3 when that becomes your bottleneck
(which is really quite rare, but when it happens, it's extremely nice to have
a recourse). If you're experiencing "jank" and are unsure whether this will
help, profile your app in Chrome and look for the purple "Reflow" bars in the
flame chart. If your chart is mostly orange, your JavaScript is still your
bottleneck.

This API is analogous to the compiler hints (assertions, etc.) you can add in
certain native languages which allow the compiler to make certain assumptions
and therefore optimize more aggressively, where it otherwise might have had to
play it safe to ensure correctness.
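
For example, the core of the API is a single CSS property. A minimal sketch (the class name is made up):

```css
/* Hint to the browser that layout and paint effects of each .card stay
   inside that card, so a change within one card doesn't force the
   engine to re-check the rest of the page. */
.card {
  contain: layout paint;
}
```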

~~~
The_rationalist
nit: I've never understood why people use the term re-rendering for vdom
recomputations. I'm not a native English speaker, but as I understand it,
rendering means calculating and/or firing pixels at the screen. So the term
should only be used for step 3).

For the vdom, the term should simply be re-computing, re-calculating,
updating, or diffing.

~~~
dwb
Yes and no. "Rendering" in computing has historically mostly meant what you
say, but it is defined much more broadly in the dictionary. There's clearly no
pressing need to distinguish the pixely variety from virtual DOM calculation
with the term (otherwise it wouldn't have caught on), and it works by analogy,
so I think it's fine. There's no "should" with language unless people can't
understand you.

~~~
The_rationalist
_There's clearly no pressing need to distinguish the pixely variety from
virtual DOM calculation with the term (otherwise it wouldn't have caught on)_

There is a need: to maximise performance, we often talk about avoiding
needless vdom recomputations, and also about avoiding making the browser
recalculate the layout, aka re-render. If I say in a commit "avoid needless
re-rendering of CardList when a Card is added", you cannot understand the
nature of the performance issue if "re-rendering" is used for both vdom and
layout.

Indeed, language evolves toward a local minimum of being good enough for
understanding each other in most cases. I would have hoped language evolution
would improve over time, but the world is not ready to create an institution
that would allow that. In the meantime, I and others who understand that
maximising semantic understanding matters will keep voicing it.

------
nicoburns
IMO this sort of initiative is the most promising path towards solving the
web's remaining performance issues.

It effectively allows developers to opt in to a more restricted set of
functionality in return for more optimisations. Kind of like gradual typing
for UIs.

I'd like to see more emphasis on these kinds of improvements from browser
vendors, because most things can be worked around with custom implementations,
but performance cannot!

Perhaps we could get limited capability lightweight DOM nodes too?

~~~
overeater
This kind of thing certainly does help, but there are other promising paths
that are complementary. For example, adopting better compression for images
and video would cut loading times almost in half. All it takes is something
small, like Safari adopting WebP, which performs much better than JPEG and
PNG for lossy and lossless compression respectively. The new generation of
video codecs will also save a great deal of bandwidth and energy.

~~~
teddyh
I fear that all of the bandwidth thus saved would immediately be consumed by
induced demand.

~~~
overeater
If so, wouldn't the opposite be true too -- making less efficient images will
reduce demand? If that sounds absurd, then I think the fear is unfounded.

~~~
patrec
[https://en.wikipedia.org/wiki/Jevons_paradox](https://en.wikipedia.org/wiki/Jevons_paradox)

------
destructionator
I don't understand why the browser can't infer this from existing knowledge.
If an element doesn't have any absolutely positioned or floating child
elements, wouldn't that imply contain: content? And if you add overflow: auto
or hidden, wouldn't that imply contain: strict?

I like the idea of containing floats in a div easily, no more clearfix hacks
lol. But... I must be missing something, since it really seems like these
optimizations ought to already be possible.

~~~
frivoal
> If it doesn't have any absolutely positioned or floating child elements,
> wouldn't that imply contain: content?

Not just children, but descendants at arbitrary depth. Also no relative
positioning, no negative margins making anything stick out, no transforms, no
shadow with a radius large enough to poke out of its parent...

> And if you add overflow: auto or hidden, wouldn't that infer to contain:
> strict?

No, you also need to make sure that the size of the parent element is not
influenced (horizontally or vertically) by the size of any descendant.

Well, some of these are allowable if the effects don't poke out of the
particular element you're interested in.

Effectively, this is all stuff the browser could check before turning on an
optimization that depends on these conditions being verified, but due to the
number of things you need to check on a tree of arbitrary depth, the
verification would cost more than the saving you'd get by applying the
optimization. So browsers don't do it.

css-contain gives you a mode where there's nothing to check, because the basic
rules of layout/painting/sizing are changed so that things can never leak up
if you've turned on the relevant type of containment.
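
As a rough illustration of that mode switch (the selector is hypothetical):

```css
/* With contain: content (shorthand for layout plus paint containment),
   floats, negative margins, etc. inside .widget can no longer affect
   anything outside it; the rules of layout are changed so leaks are
   impossible, not merely checked for. */
.widget {
  contain: content;
}
```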

~~~
_bxg1
And then there's another layer of complication when you assume that any and
all styles can change at any time. A big part of `contain`'s use case is when
you can guarantee that your JS will never apply certain styles to certain
elements. There's no way the browser could ever verify that.

~~~
frivoal
> when you can guarantee that your JS will never apply certain styles to
> certain elements

It goes further than that: using contain means that even if you tried applying
styles that would break the optimizations, the browser will not obey you.
css-contain isn't just a hint to the browser that you're not screwing things
up; it's a mode switch that prevents you from doing so.

~~~
_bxg1
True, but the things it prevents you from doing are virtually never something
you would _want_ to prevent via a mechanism like this. You'd want to catch
them higher up. I don't think it would really be helpful as a constraints
system.

------
reaperducer
I see that this is supported in the various flavors of Chromium, but not
Safari.

I wonder which will come first: Safari support for contain or Safari support
for input type=date.

------
jtolmar
It sounds like you want "contain: content" for almost every element, except
when you're doing something like absolute positioning that's intentionally
breaking containment. I get that you can't make containment the default
because of backwards compatibility, but I'd think there should be a "contain:
none" option so you can open with "* { contain: content; }" and turn it off
for the subtrees where you're actually using containment-breaking layout.

~~~
frivoal
`contain: none` is a thing: [https://www.w3.org/TR/css-contain-1/#valdef-
contain-none](https://www.w3.org/TR/css-contain-1/#valdef-contain-none)
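
So the opt-out pattern sketched above would look something like this (an
untested sketch; the opt-out class name is made up):

```css
/* Contain everything by default... */
* { contain: content; }

/* ...then switch it off for subtrees where layout is intentionally
   allowed to escape, e.g. an absolutely positioned popout. */
.popout { contain: none; }
```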

------
spankalee
Containment should mostly eliminate any perf benefit from reimplementing text
layout in WebGL for things like text editors and terminals that typically
already break text into lines. Put contain: strict on lines, gutters, etc.,
and contain: content on all inline elements.
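
Something like the following, assuming a line-based editor DOM (the class name
is hypothetical):

```css
/* contain: strict includes size containment, so each line must get its
   size from its own styles, never from its contents. */
.editor-line {
  contain: strict;
  height: 1.4em; /* fixed line height; contents cannot change it */
}
```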

I've tried this with CodeMirror 5 and it doesn't break anything. Hopefully
over time VS Code and other editors can back out their complex custom
renderers.

~~~
kevingadd
I suspect custom renderers will stick around because of the additional control
over rasterization you get and the ability to rasterize from the same internal
buffers that you use for editing and syntax highlighting.

If a file contains tens of thousands of lines (not uncommon these days) the
cost of duplicating all of that to a complex HTML+CSS tree is not
insignificant, even if contain reduces the cost of it.

Even now it's not hard to get the GitHub code view of a large file to lag a
bit in Firefox when scrolling, because of how much layout and rasterization
has to happen (in part due to the way it aggressively syntax-highlights the
text and splits lines into elements to include line numbers).

Another example: if you haven't done it recently, try opening a 10 MB+ text
file in Chrome or Firefox. The experience is still somewhat bad.

~~~
spankalee
Those renderers should use some kind of viewport rendering / virtualization so
that they're not creating DOM for the entire document.

------
The_rationalist
1) For optimal performance, should contain: strict be applied to every type of
element, or just to containers (div, span, ul, etc.)? Is adding contain:
strict or contain: content to <p> pointless bloat that will decrease
performance (because a <p> never has descendants, but would still trigger a
check by the browser and make the CSS bigger)?

Also 2): _if_ I understand correctly, if contain: content is applied to a
parent div, it will apply to all its child divs. So would it be pointless
bloat and a noop to apply contain: content to its child divs as well? Is the
optimal selector therefore body > div { contain: content; }, or
div { contain: content; }, or * {} (and if so, with or without pseudo-
elements)?

Edit: regarding 2), it seems to be refuted: I've read that contain is not
inherited, so contain on a parent does not set contain on its children. But 1)
is still unknown and not answered anywhere, which is a shame.

------
Everlag
Is the value proposition here the CSS performance benefits of shadow DOM
without needing shadow DOM? If so, it's surprising there's no direct
comparison in the link.

I could actually see this being pretty compelling for easy performance wins.
It's much easier to adopt a single CSS property than a different model for
DOM interaction. The churn in the spec and the various implementations didn't
help shadow DOM adoption at all.

It's 2020 and I've been using shadow DOM (plus web components) for 6 years in
side projects. When the stack has native support it's great; it runs fast and
is a pleasure to develop with. The downside is you have to be willing to
swallow polyfills that degrade performance when you don't have native support.

If 2020 isn't the year of shadow DOM, maybe 2022 can be the year of shadow DOM
polyfills that don't ruin performance.

~~~
randomfool
Nope.

Shadow DOM isolates selectors, style sheets, etc. It’s about a public/private
API for elements.

Contain is about layout and rendering hints to the browser.

------
hinkley
I worked on a mobile web browser what seems like ages ago. The previous
maintainer was recalculating the layout on every scroll operation, and then
rendering the entire thing. It was, uh, slow.

The first thing I did was rip out the layout engine and give everything on the
page extents at load time. Eventually, scrolling involved finding the range
that fell within the viewport and rendering that. That seemed like what any
video game designer would do for a side scroller. I just turned it 90°. Ended
up about as fast as Openwave and much smaller.

This article sounds as if CSS3 browsers don’t or can’t do that. I find that
quite surprising. Unless we’re using a different definition of render (I still
had to lay everything out).

~~~
frivoal
Browsers can do that, scrolling is cheap (unless you highjack it with JS, but
that's another story).

However, if you change the layout or content of some element on the page,
making it bigger, or smaller, or positioned differently, etc., that has an
impact on its parents, recursively. So browsers mark subtrees as dirty when
something changes and work their way back up.

In certain cases, there are changes that cannot have an effect on the parents,
so you should be able to stop the recursion. And browsers typically are
reasonably smart about that.

However, there are a surprisingly large number of cases where checking whether
or not the parents could possibly be affected by the change is itself an
expensive operation, so you're better off re-laying out your parents
(recursively) than checking whether you could safely skip doing it.

css-contain changes the rules of how CSS works to eliminate these
expensive-to-check cases. contain: content makes a few useful things
impossible. contain: strict is even harsher, as the parent's size cannot
depend on its children's size. But if you happen to have an element that can
be set up to fit within these constraints, turning containment on for the
right element lets the browser know that a whole bunch of optimizations are
safe, without checking for preconditions.
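
A sketch of an element set up to fit those constraints (names are made up):

```css
/* The panel's size is fully determined by its own styles, so with
   contain: strict the browser never has to measure its descendants to
   lay out the rest of the page; anything too tall scrolls instead. */
.sidebar-panel {
  contain: strict;
  width: 300px;
  height: 100vh;
  overflow: auto;
}
```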

------
bhk
This is baffling to me.

`contain: size` means that the element's size does not depend on its children.
Isn't this already given by height, width, etc.? Why is this needed? If these
properties disagree, who wins?

`contain: paint` means descendants cannot display outside the element's
bounds. Isn't this already given by `overflow: hidden`? Why is this needed? If
these properties disagree, who wins?

`contain: layout` means that its internal layout is not affected by anything
outside. It's not clear to me what behavior they have in mind. Is it something
`position: relative` would fix?

~~~
frivoal
> `contain: layout` means that its internal layout is not affected by anything
> outside.

The other way around:

* normally, a float can poke out of its parent div, and affect stuff around

* an absolutely positioned element can poke out of its parent, and if a further ancestor is overflow:auto and the absolutely positioned thing goes far enough, it could trigger scrollbars, whose appearance could cause a relayout

* margins of children can collapse with the margins of parents (recursively), and affect the layout of ancestors

* there are more like that
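
The first bullet, as a minimal CSS sketch (class names hypothetical):

```css
/* Normally a 200px-tall float inside a shorter .box pokes out of it and
   affects the content that follows .box. Layout containment makes .box
   an independent formatting context, trapping the float inside. */
.box { contain: layout; }
.box .tall-float { float: left; height: 200px; }
```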

> `contain: paint` means descendants cannot display outside the element's
> bounds. Isn't this already given by `overflow: hidden`?

Almost: `overflow:hidden` actually makes the element programmatically
scrollable. It doesn't discard everything that sticks out, because you might
just start scrolling using JS, so the browser needs to keep a buffer with the
out-of-view stuff ready, just in case, even if it's unlikely. Or maybe it'll
optimize a bit more and create these buffers on demand, but it still needs
facilities to keep track of which buffers it has, which it could create, what
font would need to be downloaded to render the out-of-view part…

> If these properties disagree, who wins?

contain wins. The point of contain is that if it's on, the behavior is
guaranteed, and the browser doesn't need to check a dozen properties on an
arbitrary number of elements before knowing whether certain optimizations are
safe to do. So if it's on, it's on, and there's no way to break out.

> `contain: size` means that the element's size does not depend on its
> children. Isn't this already given by height, width, etc?

* there's an etc. here, and it turns out that there are a few more properties than you'd expect that need to be checked. Doable, but checking 13 properties takes more time than checking 1, and we're trying to turn on optimizations. Expensive checks before you can optimize can make the optimization not worth pursuing.

* There are cases where even with `width` set to something other than a fixed size, the width of the parent doesn't depend on children. But checking if you're in one of these cases can be complicated, if it isn't being guaranteed by something like contain.
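
For example: a block-level div in normal flow with `width: auto` fills its
containing block regardless of its contents, but the very same div as a float
or a flex item is sized by its contents; detecting which case applies is the
expensive part. With size containment there is nothing to detect (a sketch,
with an invented class name):

```css
/* Size comes entirely from these declarations; the element is sized as
   if it were empty, so descendants never need to be measured first. */
.fixed-box {
  contain: size;
  width: 50%;
  height: 10em;
}
```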

~~~
bhk
Thanks for the explanation.

Regarding `contain: paint`:

In <https://www.w3.org/TR/css-contain-1/#containment-paint> it says:

> ... This does not include the creation of any mechanism to access or
> indicate the presence of the clipped content; nor does it inhibit the
> creation of any such mechanism through other properties, such as overflow,
> resize, or text-overflow. This is as if overflow: visible was changed to
> overflow: clip at used value.

That seems to imply that `overflow: hidden` could still be used with `contain:
paint` to "access" clipped content. And I still don't see how the
optimizations they list are not available with `overflow`.

BTW, I think it's unfortunate that the MDN article introduces `contain` as if
it were a hint...

> This information is something that is usually known, and in fact quite
> obvious, to the web developer creating the page. However browsers cannot
> guess at your intent ...

... when actually it modifies layout and display behavior, overriding other
properties.

~~~
frivoal
> That seems to imply that `overflow: hidden` could still be used with
> `contain: paint` to "access" clipped content.

Ah, you're right. This changed at some point in the history of this property,
and I was remembering the old version.

> And still I don't see how the optimizations they list are not available with
> `overflow`.

Here's one: Setting overflow to something other than visible doesn't cause the
element to be a containing block for absolutely positioned children, so they
can escape.
[https://jsbin.com/wesirup/edit?html,css,output](https://jsbin.com/wesirup/edit?html,css,output)
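
That point in CSS terms (a sketch; class names are made up):

```css
/* overflow: hidden does NOT make .clip the containing block for an
   absolutely positioned child, so the child resolves its position
   against some ancestor and can escape .clip entirely. Paint
   containment does make the element a containing block for absolutely
   and fixed positioned descendants, so the child stays inside and
   stays clipped. */
.clip           { overflow: hidden; }
.clip-contained { contain: paint; }
.child          { position: absolute; top: -50px; }
```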

Here's another one: stacking contexts are weird.
[https://jsbin.com/hepiqof/edit?html,css,output](https://jsbin.com/hepiqof/edit?html,css,output)
contain: paint puts sensible boundaries in place.

> BTW, I think it's unfortunate that the MDN article introduces `contain` as
> if it were a hint... ...when actually it modifies layout and display
> behavior, overriding other properties.

Agreed.

~~~
detaro
It's a wiki, so if you're sure enough about your understanding of it (I'm not
about mine, otherwise I'd do it), you can fix the unfortunate wording!

------
gluxon
I believe this would eliminate many uses of large list rendering libraries
(such as react-virtualized/react-window). That's pretty neat.

The same caveats seem to be present for strict containment mode, though: the
width/height of a strictly contained element won't update if its contents
change.
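
That caveat in CSS terms (a sketch; the numbers are arbitrary):

```css
/* With size containment the list's height must be set explicitly,
   typically itemCount * rowHeight, exactly the value virtualization
   libraries already compute; the box will not grow when items are
   appended. */
.virtual-list {
  contain: strict;
  height: 400px; /* must be updated from JS when the item count changes */
  overflow-y: auto;
}
```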

~~~
_bxg1
There are several layers of rendering at play:

1) Re-rendering virtual DOM content

2) Mutating the DOM to match that new virtual DOM

3) The browser actually drawing the new state of the DOM/styles

#1 is your JS code. #2 is covered opaquely by React and peers. #3 is
traditionally covered opaquely by the web browser.

Those libraries typically focus on optimizing step #1. The OP is a way of
giving hints to #3 when that becomes your bottleneck (which is really pretty
rare, but when it happens, it's extremely nice to have a recourse).

------
truebosko
This feels odd to live as part of the CSS spec, given its focus on style. Now
we're mixing performance specification in with styling.

I guess that's already been done anyway. Anyone care to help me understand why
this lives here?

------
foota
Sounds like this is... not what people are assuming it is?

------
oefrha
Is this somehow going to lead to the resurrection of scoped CSS?

