I feel like something's wrong, but I don't even know where to start debugging this. Do I need to enable Chrome feature flags? Or do MacBook Pros just have less powerful GPUs than iPhones?
render busy:  99%        render space:   58/16384
task   percent busy
CS:    98%               vert fetch:     422129727 (6178461/sec)
GAM:   97%               prim fetch:     140710713 (2059497/sec)
GAFS:  90%               VS invocations: 422116511 (6178270/sec)
VS:    89%               GS invocations: 0 (0/sec)
CL:    89%               GS prims:       0 (0/sec)
VF:    89%               CL invocations: 140703527 (2059406/sec)
SF:    89%               CL prims:       140659556 (2059072/sec)
SDE:    3%               PS invocations: 33276080884 (482451768/sec)
GAFM:   0%               PS depth pass:  35922946732 (507385276/sec)
One of the reasons I didn't go with D3 or any of the existing charting frameworks is that they simply do not seem built with that quantity of data in mind. And not only the rendering: the whole model. Data is all packed into trees of JS objects (at least in the examples I've seen).
I have simple data that can be represented as dense typed arrays, with all the performance benefits that brings, yet no library seems to make good use of that. If I read the documentation for Stardust correctly, it's an exception to that rule (which makes sense, because typed arrays were mainly introduced for the sake of WebGL).
Similarly, it also seems to do smart things to avoid re-uploading data to the GPU. I really hope this lives up to my expectations; it would save me a whole lot of work!
We've been drawing on canvas without proper acceleration for too long.
I'm excited to see both Stardust and the related DeckGL project. We've internally built several related framework layers (e.g., streaming for cloud GPU offloading), and picking accessible abstractions for different developer personas is hard. Stardust and DeckGL are both (potentially) enabling dedicated visualization engineers to work more closely with high-performance-computing engineers. That's impressive: we deferred that problem to focus energy on enabling embedding for regular web developers (which goes above Stardust) and on scaling to the next 1000X (which goes below/adjacent)... but at the expense of making it harder for non-WebGL visualization engineers to work on parts of our stack.
Long-term, I think their mindset is right, so, I've definitely been enjoying these projects!
I thought there were already many GPU-accelerated paths for graphics in the browser, including basic CSS functionality that leverages hardware acceleration in a significant number of scenarios.
It's great to see this library; I just thought that since a lot of visualizations don't require advanced graphics techniques, a lot of the GPU benefit was already being realized.
That is true, and therein lies the catch. Those paths must support a significant number of scenarios, which means they must be very generic. This incurs (comparatively) huge performance penalties.
Using WebGL trades genericity for performance. Putting a bunch of 2D coordinate points on a canvas, as this library does, only requires the equivalent of 1-2% of the CSS machinery that browsers have to go through for every DOM element. It also means that whatever you're drawing has no impact on the rest of the page layout, so there's another ton of overhead the browser can skip.
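For a sense of scale, this is roughly everything "a bunch of 2D points" costs in raw WebGL: a trivial shader pair and a single draw call, with no layout, no style resolution, and no DOM nodes. A generic sketch, not Stardust's actual implementation:

```javascript
// Trivial shader pair: each vertex is a clip-space position, drawn as a
// small fixed-size point. No CSS cascade, no layout pass, no DOM.
const VERT_SRC = `
  attribute vec2 a_position;
  void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
    gl_PointSize = 2.0;
  }`;
const FRAG_SRC = `
  precision mediump float;
  void main() { gl_FragColor = vec4(0.1, 0.4, 0.9, 1.0); }`;

// Pure helper: map pixel coordinates to WebGL clip space ([-1, 1], y up).
function pixelToClip(x, y, width, height) {
  return [(x / width) * 2 - 1, 1 - (y / height) * 2];
}

// One draw call renders every point in the already-uploaded buffer.
function drawPoints(gl, program, buffer, count) {
  gl.useProgram(program);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  const loc = gl.getAttribLocation(program, "a_position");
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.POINTS, 0, count);
}
```

Whether you draw ten points or a million, the per-frame browser-side work is basically that one `drawArrays` call.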
Edit: from their paper: "We see Stardust as a complement to D3 instead of a replacement. Stardust is good at rendering a large number of marks and animate them with parameters, while D3 has better support for fine-grained control and styling on a small number of items. For example, to create a scatterplot with a large number of items, we can use D3 to render its axes and handle interactions such as range selections, and use Stardust to render and animate the points"
Long live d3.js!