The flow tree (render object tree) in Servo (or any other browser engine) must use inheritance: we have a heterogeneous tree of objects that all share a common set of fields (position, intrinsic widths, collapsible margins, various bits that store state during reflow), but they all need virtual methods because they must lay out their contents differently.
We can't use composition because we wouldn't get virtual methods. We can't use an interface because then we would be forced into virtual dispatch for all of those fields that are shared between flows.
Rust doesn't have OO yet either, so we're forced to hack around it in weird ways (usually via a small amount of unsafe code to simulate inheritance).
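To illustrate the kind of unsafe hack involved, here is a minimal sketch (not Servo's actual code; the names `FlowBase`, `BlockFlow`, etc. are made up for illustration): every flow type embeds the shared fields as its first member with `#[repr(C)]`, so a trait-object pointer can be reinterpreted as a pointer to those fields without paying for a virtual call, while layout itself stays genuinely virtual.

```rust
// Shared fields every flow carries. #[repr(C)] pins the layout so the
// cast below is well-defined when FlowBase is the first field.
#[repr(C)]
struct FlowBase {
    position: (f32, f32),
    intrinsic_width: f32,
}

// The genuinely virtual part: each flow lays out its contents differently.
trait Flow {
    fn layout(&mut self);
}

#[repr(C)]
struct BlockFlow {
    base: FlowBase, // must be the first field
    children: Vec<Box<dyn Flow>>,
}

#[repr(C)]
struct InlineFlow {
    base: FlowBase, // must be the first field
}

impl Flow for BlockFlow {
    fn layout(&mut self) {
        for child in &mut self.children {
            child.layout();
        }
    }
}

impl Flow for InlineFlow {
    fn layout(&mut self) {}
}

// Simulated "upcast to base": the data pointer of the trait object points
// at the concrete struct, whose first field is FlowBase, so dropping the
// vtable half of the fat pointer yields the shared fields directly.
fn base(flow: &dyn Flow) -> &FlowBase {
    unsafe { &*(flow as *const dyn Flow as *const FlowBase) }
}
```

Field access through `base()` compiles to a plain load with no indirect call; only `layout` goes through the vtable.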
> I have yet to be presented with one of these "killer" is-a relationships that must exist. If the sole excuse is "performance", then sure, I'll concede. I guess I don't work in environments where member access is the performance bottleneck, so I guess I can't relate.
A browser engine is exactly that sort of environment. Forcing all member access to go through virtual dispatch would murder the performance of any browser.
Note that this was exactly the sort of thing that OO was designed for in Simula: heterogeneous trees of objects that all share some common fields but have different virtual methods. This generalizes to GUI libraries, game worlds etc—in short, simulations :)
Haven't used Servo, but one of the big eye-opening composition experiences for me was Unity's scene graph. Whereas Cocoa uses an inheritance model for its view tree, Unity has a tree of transforms that you do not subclass or change in any way, and then you add behaviors to those transforms. If you want a node to render, you attach a renderer; if you want to hit-test it, you attach a collider. If you want any other arbitrary thing to happen, you create that behavior. It's really nice: the idea of "tree" is completely separate from all other concepts. Rendering a 3D game, on mobile, at 60fps (on GC-ed Mono, no less) makes me feel pretty good about its performance characteristics. Most of our perf issues were with limiting draw calls and optimizing shaders, not method calling.
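The idea reads roughly like this in code. A hedged sketch, not Unity's API: the tree node type is fixed and never subclassed, and all varying behavior hangs off it as attached components (the `Transform`, `Behavior`, and `Renderer` names here are illustrative).

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Anything attachable to a node. This is the only trait; the tree
// itself has no virtual methods at all.
trait Behavior {
    fn update(&mut self, position: (f32, f32, f32));
}

// An illustrative "renderer" behavior; it just counts how many times
// it would have issued a draw call.
struct Renderer {
    frames_drawn: Rc<RefCell<u32>>,
}

impl Behavior for Renderer {
    fn update(&mut self, _position: (f32, f32, f32)) {
        *self.frames_drawn.borrow_mut() += 1; // stand-in for a draw call
    }
}

// The tree node: a concrete struct you never subclass. Wanting a node
// to do something new means attaching another behavior, not changing
// the node type.
struct Transform {
    position: (f32, f32, f32),
    children: Vec<Transform>,
    behaviors: Vec<Box<dyn Behavior>>,
}

impl Transform {
    fn update_all(&mut self) {
        let pos = self.position;
        for b in &mut self.behaviors {
            b.update(pos);
        }
        for child in &mut self.children {
            child.update_all();
        }
    }
}
```

The "tree" concept (position, children) is completely separate from rendering, hit-testing, or anything else, which is exactly the separation described above.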
Similarly, I worked a lot on a browser engine in the past, and virtual method dispatch was again not this clear-cut performance killer.
Sounds like the work done by tree traversals wasn't high-overhead in general for your workload. But it does matter for some workloads.
> Similarly, I worked a lot on a browser engine in the past and virtual method dispatch was again not this clear cut performance killer.
We're seeing large gains from, as far as we can tell, having fewer virtual method calls than other engines. Eliminating virtual dispatch opens up a huge range of call-site optimizations since the methods can often be statically inlined (as well as reducing the load on the branch target buffer).
That doesn't match my experience. Devirtualization opens up lots of inlining opportunities, and inlining is one of the most critical optimizations that compilers can do (mostly because of the other optimizations that it opens up; e.g. const propagation, GVN, etc. etc.)
See this study: http://hubicka.blogspot.com/2014/04/devirtualization-in-c-pa...
Devirtualization optimizations improve Dromaeo by 7-8%. That's a significant win, especially since devirtualization is only a best-effort optimization and Dromaeo has a lot of JS in it.
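The inlining point can be made concrete with a toy contrast (illustrative names, not engine code): through a trait object every call is an opaque indirect jump, while a closed `enum` lets the compiler see every variant at the call site, inline each arm, and then apply the follow-on optimizations mentioned above.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}
impl Shape for Circle {
    fn area(&self) -> f64 { 3.14159 * self.0 * self.0 }
}

// Dynamic dispatch: each s.area() is an indirect call through a vtable,
// opaque to the inliner (and a load on the branch target buffer).
fn total_dyn(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Static dispatch: a closed set of variants; every match arm is visible
// to the compiler and can be inlined at the call site.
enum ShapeE {
    Square(f64),
    Circle(f64),
}

fn total_enum(shapes: &[ShapeE]) -> f64 {
    shapes
        .iter()
        .map(|s| match s {
            ShapeE::Square(x) => x * x,
            ShapeE::Circle(r) => 3.14159 * r * r,
        })
        .sum()
}
```

Both compute the same result; the difference is only in what the optimizer can see at each call site.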
> We can't use composition because we wouldn't get virtual methods.

> We can't use an interface because then we would be forced into virtual dispatch for all of those fields that are shared between flows.

> we have a heterogeneous tree of objects that all share a common set of fields (position, intrinsic widths, collapsible margins, some various bits that store state during reflow),
I'm not an expert in performance, though, so I don't know if that is just as bad as the other alternatives you mentioned. But you didn't mention it, so I thought I'd ask whether you considered it a valid approach and, if so, why you rejected it.