The thing about having access to the Skia render graph is that all of a sudden you're no longer limited to product screenshots and screen recordings. Imagine a pipeline where you can export someone's interaction session with a site, pixel-perfect, into DaVinci Resolve or Blender or Unity as a fully annotated, DOM-advised render-node hierarchy: every rendered element on the page, with consistent node identities as it changes across frames. That's way more powerful than just pixels.
Imagine flying through your site in 3D (or even VR) with full control over timing, being able to explode and un-explode your DOM elements as they transition into being - the type of thing that only Apple would do for their WWDC demos with dedicated visualization teams.
The first step is to see the rendering engine as a generator of not just raster data over time, but vector data over time. Of course, there's a lot of work to do from there, but that's the core leap.
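Concretely, here's a minimal sketch of the kind of frame stream I have in mind. Every name below is hypothetical - this isn't what the patch emits, just the rough shape of the data:

    // One node in the DOM-advised render hierarchy for a single frame.
    // All field names are hypothetical; the real Skia data would differ.
    interface RenderNode {
      stableId: string;      // identity preserved across frames, so the same
                             // element can be tracked as it moves or animates
      domPath: string;       // advisory link back to the originating DOM
                             // element, e.g. "html>body>div#hero>svg"
      transform: number[];   // flattened 4x4 matrix in page space
      vectorOps: string[];   // serialized vector draw ops (paths, text runs)
      children: RenderNode[];
    }

    // The session becomes vector data over time, not pixels over time.
    interface Frame {
      timestampMs: number;
      root: RenderNode;
    }

    type Session = Frame[];  // what a Blender/Unity importer would consume

With stable IDs, an importer can treat each node as one animated object instead of re-discovering geometry every frame.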
Cynical take: it won't hold. It'll get neutered by vendors and eventually deliberately removed as a possibility by Google.
Here's the thing: this "core leap" you mention isn't new. It was made long ago, on the input side: that's what HTML is. All those use cases you mention should be possible, but aren't. Why? Because the for-profit web doesn't want that.
Most websites on the Internet today exist not to be useful, but to use you. For that, it's most important that the website author has maximum control over what the users see. This allows them to effectively place ads, do A/B tests for maximum manipulation (er, "engagement"), force a specific experience on you, and expose you to the right upsells in the right places. If they could get away with serving you clickable JPEGs, they would absolutely do that. Alas, HTML + JS + CSS is the industry standard and the cheapest option, all things considered - so most vendors instead just resort to going out of their way[0] to force their sites to render in specific ways, and supplement that with anti-adblock scripts/nags, randomized DOM properties, pushing mobile apps, etc.
To be fair, they do have a point. Look at this very thread: currently, the top comments talk about using the Chrome -> SVG pipe to access accidentally leaked commercial and government data, such as hidden layers in product CAD drawings, or improperly censored text[1]. Your own example, "export someone's interaction session with a site, pixel-perfect, into DaVinci Resolve or Blender or Unity", is going to be used mostly adversarially (e.g. by competitors). My own immediate application would be removing ads, which presumably have a distinct pattern in such a rendering, as they get inserted at a different stage of the render pipeline than the content itself.
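If the exported hierarchy carried per-node origin information (third-party ad iframes would stand out immediately), the filter could be as dumb as this - a purely speculative sketch, with every field and heuristic made up:

    // Speculative sketch of render-node-level ad removal. Assumes each node
    // records the origin of the document that produced it; real detection
    // would need much better heuristics than a hardcoded host list.
    interface RenderNode {
      sourceOrigin: string;   // hypothetical, e.g. "https://doubleclick.net"
      children: RenderNode[];
    }

    const AD_ORIGINS = ["doubleclick.net", "adnxs.com"]; // illustrative only

    function stripAds(node: RenderNode): RenderNode {
      return {
        ...node,
        children: node.children
          .filter(c => !AD_ORIGINS.some(o => c.sourceOrigin.includes(o)))
          .map(stripAds),
      };
    }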
This is just the usual battle over control of the UX of a website. The vendor wants to wear me down with their obnoxious, bullshit UX[2], serve me ads, and use DRM to force me to play by their rules. I want my browser to be my user agent. We can't have it both ways[3], and unfortunately, Google is on the side of money.
--
[0] - Well, to be fair, a lot of this is done by default by frameworks, or encoded in webdev "best practices".
[1] - Obligatory reminder: the only foolproof way of publishing partially censored documents or images is to censor them digitally, then print the result out, scan it back in, and distribute the scan. If you don't go through analog, you risk accidentally leaking the censored information or relevant metadata.
[2] - Like e.g. every single e-commerce platform. The vendor hopes I'll get tired and make a suboptimal choice. I want to pull the vendor's data into a database and run SQL queries on it, so I can make near-optimal purchase decisions in a fraction of the time. This "core leap" you mention would be a big win for me, which is why it won't last.
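To illustrate what I mean (a hypothetical sketch - the schema, and the scraping step that would feed it, are entirely made up):

    // Hypothetical illustration: vendor catalog scraped into SQLite, then
    // queried for a near-optimal purchase instead of paging through the UI.
    import Database from "better-sqlite3";

    const db = new Database("catalog.db");
    db.exec(`CREATE TABLE IF NOT EXISTS products (
      name TEXT, price_cents INTEGER, unit_grams INTEGER, rating REAL
    )`);
    // ...scraping step that fills the table elided...

    const best = db.prepare(`
      SELECT name, price_cents * 1.0 / unit_grams AS cents_per_gram
      FROM products
      WHERE rating >= 4.0
      ORDER BY cents_per_gram
      LIMIT 5
    `).all();
    console.log(best);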
[3] - At this point, accessibility is the only thing keeping websites somewhat sane. There are plenty of apologists for all the underhanded and malicious techniques that are core to webdev these days - but they usually can't dismiss the complaint that a website is unusable with a screen reader. For some sites, dismissing it would even be illegal.
I feel like we're talking at cross purposes. My point was primarily that OP's Chromium patch could be the start of an excellent tool for website creators to unlock their own sites' and web applications' rendering potential, and level the playing field between smaller startups and much larger technology companies.
I'm quite familiar with anti-scraping and anti-ad-blocking countermeasures, and the first thing any such countermeasure would block is a non-standard rendering engine like this one - so unless the website creator consents, this really doesn't hurt or help consumer-friendliness (which, I agree, is in a sorry state these days) in any meaningful way.