
Firefox: Retained Display Lists - agnivade
https://mozillagfx.wordpress.com/2018/01/09/retained-display-lists/
======
mrec
Interesting post. It feels quite closely analogous to the virtual-DOM diffing
approaches that have dominated client-side JS frameworks lately.

Is there any history of this technique being used in game engine design? Or is
that field more concerned with optimizing for the worst case (thou shalt not
drop frames) than for the average case?

~~~
royjacobs
Game engines have never had the approach of recalculating everything every
frame. Instead, aggressively reusing data and only changing what you need is
the default approach (as it should be if you want to squeeze the most out of a
CPU/GPU). Things like command lists, vertex and index buffers and textures are
retained from frame to frame as much as possible.

Anything dynamic is usually handled on the GPU-side by taking a static input
and applying simple (animated) parameters on top or, more commonly now,
feeding back into the original buffer thereby updating it from the GPU
calculations themselves.
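A minimal sketch of the retained-buffer pattern described above (hypothetical names; a real engine would do the upload and transform on the GPU via something like glBufferData and a shader uniform): the vertex data is uploaded once and reused every frame, while the per-frame animation only changes a tiny parameter.

```python
# Sketch: retained vertex data plus a small animated parameter per frame.
# The expensive upload happens once; per-frame work touches only the uniform.
import math

class RetainedMesh:
    def __init__(self, vertices):
        self.vertex_buffer = list(vertices)  # retained from frame to frame
        self.uploads = 0

    def upload(self):
        self.uploads += 1  # stands in for a (rare) buffer-upload call

    def draw(self, angle):
        # only the rotation angle changes per frame; the buffer is untouched
        c, s = math.cos(angle), math.sin(angle)
        return [(c * x - s * y, s * x + c * y) for x, y in self.vertex_buffer]

mesh = RetainedMesh([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)])
mesh.upload()               # one upload...
for frame in range(60):
    mesh.draw(frame * 0.1)  # ...then sixty frames of reuse
assert mesh.uploads == 1
```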

The fact that web development has finally caught up to this approach is
certainly welcome, but this is nothing new at all.

~~~
pcwalton
This is something different from what game engines do. A game engine can
typically just walk the scene graph, emitting draw calls as it encounters
potentially visible objects. The equivalent approach on the Web would be
walking the render tree/frame tree, issuing draw calls as it goes.

This is indeed the traditional method that Gecko and WebKit used to use, but
it has a lot of problems, which is why Blink and Gecko use display lists:

1\. The CSS 2.1 Appendix E painting order is complex: it's not enough to
traverse the tree in breadth-first or depth-first order. You could get around
that by traversing the tree multiple times, as Blink and WebKit do, but that
gets complex. It's a lot more straightforward, not to mention cache
friendlier, to build display items into a list and then later sort them into
the proper CSS stacking order. This is actually similar to what game engines
do for transparent objects (assuming they actually care about rendering them
properly and aren't using any fancy OIT techniques).

2\. For security reasons, these days you don't want direct GPU access in the
same process as JavaScript. Display lists enable an elegant solution to this,
as they can be transmitted over IPC to a trusted process.

3\. Display lists can be rendered in a separate thread from the layout and/or
JavaScript execution, enabling better use of concurrency.

4\. Unlike a typical game, which has the luxury of being able to build up
vertex and index buffers beforehand and can cache them on the individual nodes
in the scene graph, on the Web each individual display item only has a tiny
amount of GPU data. Slow rendering on the Web is often the result of "death by
a thousand cuts", where the GPU is swamped in draw calls from lots of
unrelated items. The only scalable solution to this problem is to batch
_across_ display items, using knowledge of the _entire_ scene to construct
VBOs and IBOs. This is only possible if the renderer has a global view of the
entire scene, and the display list is a natural way to achieve that.
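Points 1 and 4 can be sketched together with toy data (invented fields, not Gecko's or WebRender's real structures): display items are collected into a flat list, stably sorted into stacking order, and then consecutive items sharing GPU state are merged into one batch, cutting the draw-call count.

```python
# Sketch: flatten display items into a list, stably sort into stacking order,
# then batch across items that share GPU state (here, a texture) so several
# items become a single draw call.
items = [
    {"z": 0, "texture": "glyphs", "rect": (0, 0, 10, 10)},
    {"z": 2, "texture": "image",  "rect": (5, 5, 20, 20)},
    {"z": 0, "texture": "glyphs", "rect": (12, 0, 10, 10)},
    {"z": 2, "texture": "image",  "rect": (30, 5, 20, 20)},
]

# Stable sort: paint lower stacking levels first; ties keep encounter order,
# standing in for the full CSS 2.1 Appendix E ordering.
items.sort(key=lambda it: it["z"])

# Batch consecutive sorted items that share GPU state into one draw call.
batches = []
for it in items:
    if batches and batches[-1]["texture"] == it["texture"]:
        batches[-1]["rects"].append(it["rect"])
    else:
        batches.append({"texture": it["texture"], "rects": [it["rect"]]})
# four items collapse into two batched draw calls
```

The global view matters here: batching is only possible because the whole sorted list is in hand before any drawing starts.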

I certainly think that Web browsers can learn a lot from game engines (and
WebRender is designed to do just that). But display lists are one of those
cases in which the problem space is just different: trying to do exactly what
a game engine does doesn't work well.

~~~
Jasper_
I'd also add:

Game content is not built in a vacuum, and is built by a team of people who
were trained on the engine and have a large bag of tools to help improve
performance. You can't just plug a random scene into any game engine and
expect it to perform beautifully. There is good tech in game engines, but a
lot of it comes down to an incredible amount of manual labor because you can
target the internals of the engine. The web has to handle any random content
thrown at it.

------
Vinnl
So I'll be that person: if it can be enabled in the 58 beta, does that mean
it's on track for being included in that or another specific release? Or is it
not planned yet?

~~~
Manishearth
If it can be enabled in 58 beta, you'll likely be able to enable it in 58
release. But there may not be any stability guarantees on it working
correctly.

I believe the target is 59, though it might stretch to 60.

~~~
Vinnl
Thanks, that's the info I was looking for!

------
Misdicorl
That second graph and its discussion are really frustrating. The claim is that
latencies from 38-50 get shifted down into other buckets, and that the
increase in the 51+ bucket is an artifact of the lower aggregate numbers.

Maybe the data supports this, but there's no way to know looking at the graph.
The histogram should be scaled against total counts rather than the subcounts
(just zoomed in appropriately for the dataset). As shown, it's bad statistics
at best, and disingenuous lies at worst.

------
cryptonector
Will this improve remote X11 display performance for Firefox?

~~~
moosingin3space
Unlikely -- remote X11 display performance for anything GPU-accelerated isn't
great due to X11's design, not Firefox's.

~~~
cryptonector
Firefox repaints too often.

------
DiabloD3
Previously on HN:
[https://news.ycombinator.com/item?id=16112283](https://news.ycombinator.com/item?id=16112283)

~~~
dang
No, agnivade's post was earlier, as you can see from its ID as well as from
[https://news.ycombinator.com/from?site=mozillagfx.wordpress....](https://news.ycombinator.com/from?site=mozillagfx.wordpress.com).
We put it in the second-chance queue, described at
[https://news.ycombinator.com/item?id=11662380](https://news.ycombinator.com/item?id=11662380)
and links back from there. When we see a good article that fell through the
cracks and put it in the second-chance queue, we try to pick the first
submission of the article.

When we do this, the submission timestamp on the front page becomes the time
it was put in the queue, approximately. That's why it looked like a later post
than yours. I don't like doing this, but not doing it invariably triggers a
barrage of questions and complaints ("what is this 2-day old story with 3
upvotes doing on the front page"). You can always see the original timestamp
by looking at the submission in the user history
([https://news.ycombinator.com/submitted?id=agnivade](https://news.ycombinator.com/submitted?id=agnivade))
or the site history
([https://news.ycombinator.com/from?site=mozillagfx.wordpress....](https://news.ycombinator.com/from?site=mozillagfx.wordpress.com)).

More here if anyone cares:
[https://hn.algolia.com/?query=by:dang%20timestamp%20front&so...](https://hn.algolia.com/?query=by:dang%20timestamp%20front&sort=byDate&dateRange=all&type=comment&storyText=false&prefix&page=0)

~~~
DiabloD3
Huh. That's fine, but it should have caught my submission as a duplicate of
his then, or at least told me someone else had already submitted it.

~~~
dang
That's not so easy, because in cases where we don't notice the story and re-up
it, reposts are key to giving good articles multiple chances at attention.

