If you're already using Rust, an alternative might be raqote, which would avoid the need to interface with C++ at all... and it's the canvas library used by Firefox's Servo:
Wonder if it has fewer dependencies; that would be a win. For example, on Mac you need cairo, pango, libpng, jpeg, giflib, and librsvg for Node Canvas. So you have to note those down for devs to install on their machines, and then remember to add the same packages to the containers for deployment (or, if not using containers, install them on the server directly). I was playing with Node Canvas ages ago to create placeholder profile pic PNGs using a random color background and a letter.
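The deterministic part of that placeholder-avatar trick (stable color per name, plus the initial) doesn't need a canvas at all; a rough sketch, with function names of my own invention and the node-canvas drawing step only indicated in comments:

```javascript
// Sketch: map a name to a stable background color and an initial.
// The actual PNG step (commented at the bottom) would use node-canvas.
function hashCode(str) {
  let h = 0;
  for (const ch of str) h = (h * 31 + ch.codePointAt(0)) | 0;
  return Math.abs(h);
}

function avatarSpec(name) {
  const hue = hashCode(name) % 360; // same name -> same color every time
  return {
    letter: name.trim().charAt(0).toUpperCase(),
    background: `hsl(${hue}, 60%, 50%)`,
  };
}

// With node-canvas you would then do roughly:
//   const { createCanvas } = require('canvas');
//   const canvas = createCanvas(128, 128);
//   const ctx = canvas.getContext('2d');
//   ctx.fillStyle = spec.background; ctx.fillRect(0, 0, 128, 128);
//   ctx.fillText(spec.letter, ...);  canvas.toBuffer('image/png');
```

Hashing the name instead of picking a random color means the same user always gets the same placeholder.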
If you are looking for a Rust native, not too complicated implementation of canvas api, check out https://github.com/cytecbg/gpucanvas. It's a rust port of nanovg.
Check the demo by running `cargo run --example demo`.
I'll be adding more backends to this. I'm almost done with a metal backend (https://github.com/adamnemecek/gpucanvas), I'll add vulkan and wgpu next. If you want to get involved, we are definitely looking for contributors!
Also depends on what you mean by rendering. Skia also uses system libraries for font shaping (CoreText on macOS, DirectWrite on Windows), but it also supports using HarfBuzz instead (which works on any platform, even for WebAssembly).
It depends. Skia can either be used as software or GPU renderer.
GPU AA rendering can differ slightly when using MSAA. When MSAA is turned off, Skia falls back to its own path renderer strategies (which can be CPU-based, like SDF, or GPU-based using coverage counting). In that case (MSAA turned off), the AA result should be identical on all platforms, at the cost of performance.
Having looked at Skia against a lot of different OS/GPU/driver combinations, I would be surprised to see bit identical results from GPU rendering except perhaps across different nvidia cards. I wouldn't even necessarily expect identical results in replicated runs on the same machine.
Worth keeping in mind that what Skia's GPU backend decides to do depends on GPU features, and sometimes on GPU workarounds. There's no guaranteed way to force it to render identically on all platforms short of something like swiftshader.
Hey, thanks for answering! You're one of the main Skia contributors, right?
I thought GrGlCaps was supposed to fall back most of the time to equivalents. I also thought that golden-image baselines were pixel-tested against a large set of different hardware.
But yeah, Skia blacklists MSAA for Intel GPUs, so if you opt in for sampling you're not guaranteed it's going to be enabled, and you can therefore observe differences.
You can force CPU-only rendering and also disable MSAA on the GPU renderer.
Enabling the GPU renderer doesn't guarantee that everything will be generated on the GPU. Skia can still decide to rasterize a path on the CPU before uploading it to the GPU.
Nice work! The other day I was building an SVG renderer for Shiki[0] (a code syntax highlighter), but found no easy way to measure font width/height (SVG needs absolute positioning) in Node.js. The only library that does it records fonts' metadata in the package source[1].
I ended up using puppeteer[2]. Would love to switch to `skia-canvas`, however `ctx.measureText` doesn't seem to work correctly yet. Measuring a cap `M` produces a positive `actualBoundingBoxDescent`, although `M` should sit comfortably on top of the baseline.
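The complaint can be stated as a small invariant check. The `metrics` objects below are made-up stand-ins for the `TextMetrics` result of `ctx.measureText('M')`, and the helper name is mine:

```javascript
// A cap-height glyph like "M" has no descender, so its ink should not
// extend (much) below the alphabetic baseline. Per the spec,
// actualBoundingBoxDescent is the distance below the baseline
// (positive = below it).
function sitsOnBaseline(metrics, tolerance = 0.5) {
  return metrics.actualBoundingBoxDescent <= tolerance;
}

// Fabricated example values, not real skia-canvas output:
const expected = { actualBoundingBoxAscent: 14.2, actualBoundingBoxDescent: 0 };
const reported = { actualBoundingBoxAscent: 14.2, actualBoundingBoxDescent: 3.1 };
```

With the bug described above, `sitsOnBaseline(reported)` would come back false for a glyph that should pass.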
> can generate output in both raster (JPEG & PNG) and vector (PDF & SVG) image formats
I thought browser canvas was a completely raster tool? What if I drew a million lines onto skia-canvas -- which is something you can do in a browser canvas -- would that result in a huge stack of vector operations getting saved somewhere for the possibility of future vector export?
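One way an implementation can offer both raster and vector output is to record draw calls into a display list and replay or serialize them per backend. A toy sketch of that idea (my own class names, not how skia-canvas is actually built):

```javascript
// Toy display list: record drawing commands once, then replay them
// against different backends (a raster context, an SVG serializer, ...).
class RecordingContext {
  constructor() { this.ops = []; }
  moveTo(x, y) { this.ops.push(['moveTo', x, y]); }
  lineTo(x, y) { this.ops.push(['lineTo', x, y]); }
  stroke()     { this.ops.push(['stroke']); }

  // "Vector export": serialize the recorded ops as an SVG path string.
  toSvgPath() {
    return this.ops
      .filter(([op]) => op !== 'stroke')
      .map(([op, x, y]) => (op === 'moveTo' ? `M ${x} ${y}` : `L ${x} ${y}`))
      .join(' ');
  }
}

const ctx = new RecordingContext();
ctx.moveTo(0, 0);
ctx.lineTo(10, 10);
ctx.stroke();
```

A million lines really would mean a million entries in `ops`, which is exactly the memory cost the question is asking about.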
And true, on Linux an X11 frame buffer DISPLAY is required, but I would still consider this to be headless. On other platforms it should be truly headless. (I could be sorely mistaken.)
Yes! We pass conformance tests[1] (for the core of WebGL 1 + the extensions that we support).
That said, the Angle version we use is decently old. If we updated it, we would be able to implement WebGL 2 (OpenGL ES 3.0) support. Unfortunately, it's no easy feat to update and integrate the latest Angle, and the project is more-or-less in maintenance mode until someone else comes to take on the task (the project's been revived several times before, so not totally unimaginable!)
This is a native dependency, whereas that's compiling Skia itself to wasm. The native dependency should be able to be faster by a wide margin by taking advantage of hardware acceleration, but it's not obvious that it does. (Similarly, Skia compiled to wasm must be much slower than using the proper canvas API in the browser.)
Why is wasm necessarily slower than the browser if the browser is already implementing canvas via skia? Just the hardware acceleration? Does that mean only gpu access or something more?
Not that that would make much sense. I’ve not been very successful at understanding what kind of problem canvaskit is trying to solve
What I am trying to do is the same as it was more than a year ago (https://news.ycombinator.com/item?id=20339574): a tile-based, web-based view with prerendered image tiles from the server and on-the-fly rendered tiles on the client. Ideally the client-side rendering is super fast; whether that means GPU is enough or multi-threaded CPU, I don't know yet.
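Whichever renderer ends up producing the tiles, the index math that decides which tiles a viewport needs is the same pure logic on client and server; a minimal sketch with hypothetical names:

```javascript
// Which tiles does a viewport need? Pure index math, independent of
// whether each tile is prerendered on the server or drawn on the client.
function visibleTiles(viewport, tileSize) {
  const x0 = Math.floor(viewport.x / tileSize);
  const y0 = Math.floor(viewport.y / tileSize);
  const x1 = Math.floor((viewport.x + viewport.width - 1) / tileSize);
  const y1 = Math.floor((viewport.y + viewport.height - 1) / tileSize);
  const tiles = [];
  for (let ty = y0; ty <= y1; ty++) {
    for (let tx = x0; tx <= x1; tx++) tiles.push([tx, ty]);
  }
  return tiles;
}
```

Each `[tx, ty]` pair then maps to either a cached server tile URL or a client-side render job.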
So what I understand so far is that I can’t have anything multithreaded (yet?) in the web client and probably native canvas is the most performant. Even canvaskit with webgl won’t reach that performance (not sure here)
Otoh canvaskit could possibly be used in other browsers if they don’t natively implement canvas via skia itself to minimize render diffs
And lastly, this project here could be the backend renderer instead of either a headless browser via the canvas API (hacky) or CanvasKit in Node (slow because it's CPU only).
But none of these offer any kind of performance increase over native canvas, which is limited to a single thread because of Spectre and Meltdown.
Yeah, Skia itself has multiple backends, including CPU and GPU implementations. The wasm one must use the CPU path, where Skia in your browser can use a CPU or GPU path based on whatever is fastest. So already wasm is necessarily equal-or-slower.
Wasm also has some overhead; some of that will get better with upcoming changes. At the very least, wasm SIMD support and threads are both only available in some environments.
But don't get me wrong, I'm certainly not suggesting the skia-wasm project is pointless, there's just trade-offs here.
So far I’ve mostly heard about porting apps that require native skia to the web. But that wouldn’t explain why google spent all that effort building a canvas compatible api and didn’t just stick to the skia api
I was hoping that skia would be more efficient in this tiling context where rendering tile images and putting them together in the final view isn’t that great with browser canvas. But I realize now that lack of multithreaded tile rendering means it doesn’t matter much anyways whether image tiles can be copied to another image quickly or not
Now I have a feeling that webgl might be a necessary step for me to look at.
This project has been dormant for a year now and I have to start again. Not an easy area to navigate with all those different and not necessarily comparable solutions
Last I played around with canvaskit I found the performance in the browser to be better than native canvas for certain things but worse for most. Why that is or if this follows any rules I don’t know. Perhaps I’m also mistaken
WebAssembly is essentially a bytecode that needs to be JITted before running. That's pretty much the same situation as with Java: you may get comparable performance, but usually comparable Java code runs slower than native.
Wouldn't that require a browser? What point is the browserless feature? Are non-browser UI applications being used that take advantage of Node.js? Is it for apps?
I have definitely needed this exact functionality in the recent past.
We had an application where a canvas-based editor was used to create diagrams, and these diagrams were then saved to a library. We wanted each saved diagram to have a preview image to better visually distinguish between them while scrolling. The ability to render a PNG on the server from the canvas would have saved us a lot of trouble.
> Why would a drawing API be needed on a server? For headless rendering?
15 years ago I was driving GD and ImageMagick from PHP to render things like staff ID cards, barcode labels for things, etc.
It was trivial to write a script that would render something quickly and just output raw image data, so you could <img src="whatever.php?name=Joe&department=Meat" /> and the image would be dynamically created and sent with the right MIME type.
I was thinking of creating a turn-based multiplayer game for group chat in an app like Telegram. Then, whenever a player gave a command, the server would draw the board and send the updated image to the chat.
It could start with something simple like tic-tac-toe and only two players. A player would tell the app a number between 1 and 9 corresponding to the place he/she chose. Then the server/bot would send the updated image to the conversation.
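The game-state side of that is tiny; a sketch with made-up names, rendering the board as text here (the server version would draw the same grid with a canvas library instead):

```javascript
// Cells are numbered 1-9, row-major, matching the message a player sends.
function applyMove(board, cell, mark) {
  const i = cell - 1;
  if (i < 0 || i > 8 || board[i] !== '.') throw new Error('illegal move');
  const next = board.slice();
  next[i] = mark;
  return next;
}

// Text rendering of the 3x3 grid; an image renderer would walk the
// same array and fillText each mark into its cell.
function renderBoard(board) {
  return [0, 3, 6].map(r => board.slice(r, r + 3).join(' ')).join('\n');
}

const empty = Array(9).fill('.');
const afterTwoMoves = applyMove(applyMove(empty, 5, 'X'), 1, 'O');
```

Keeping `applyMove` pure (it returns a new array) makes it easy to store each turn's state per chat.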
Not sure why that would have to be done server side though, couldn’t the server only be used for coordination/maintain state and then submit the current state/positions to the client and the client would render the board?
In his example, there is no client browser. The users are just sending commands into a Telegram chat, and the app is presumably only able to send API calls in response to chat events. It's unlikely the chat client would be sufficiently featured to do the board rendering itself, although I am not really familiar with this space, so I'm making assumptions.
As such, the backend would do the heavy lifting of rendering the board, and send the rendered board back into the message group.
I think one is constrained by the chat application's functionality. Telegram doesn't allow a bot to perform such operations on the client, at least as of the last time I built one.
Of course one might be able to use emoji or text as a representation of said state if the game allows for it.
I once built a simple website for a Netflix show where you could enter some text and it'd be played in an animation, but you could also save/share a video of it.
It was so much easier to build the animation in canvas for the site and just replay it to get frames for a video on the server than to try to make the video in some other environment and then show it in the browser.
There are plenty of reasons to want to generate an image dynamically on a server. Dynamic image resizing to fit client requirements is one that immediately comes to mind.
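The arithmetic behind "resize to fit client requirements" is a one-liner worth getting right (preserve aspect ratio, never upscale); a sketch with a hypothetical name:

```javascript
// Scale an image's dimensions down to fit inside a bounding box while
// preserving aspect ratio. The `1` in Math.min prevents upscaling small
// images past their native size.
function fitWithin(width, height, maxW, maxH) {
  const scale = Math.min(maxW / width, maxH / height, 1);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```

The resulting dimensions would then be handed to whatever does the actual resampling (canvas `drawImage`, sharp, ImageMagick, ...).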
Reminds me of how some games and other sites had an image you could put in your forum signature. I think Steam had one, or some third party built it. It would show things like the games you were playing and whether you were in game or not. I think Skype and MSN had icons like that too: an image URL with your profile id in it.
I once set up a large bank (you've probably heard of the bank, you've almost certainly not used this specific project though) with server-side Adobe Flex. We were using Flex to display charts on the client, and needed a way to render the same charts for use in reports.
This would be really useful for similar modern projects.
Server-side rendering is a good use case. If you want to pre-render a data visualization that can be loaded with a web page, you can use something like this in place of the browser's canvas, while using the same code to generate the visualization.
I previously worked on an in-browser Hearthstone card renderer that uses Canvas. The rendering needed to be client-side for dynamic use cases (where the card would receive various effects thus change its look).
However I also needed to implement a server-side version of it that prerendered all the cards to PNG so that they're browsable without having clients consume excessive cpu/battery. Doing it using a separate codebase would have been hell to reconcile. Hence node-canvas.
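The single-codebase trick usually comes down to writing the draw routine against the 2D-context interface and injecting whichever implementation is at hand (a browser context client-side, node-canvas server-side). A sketch with invented names, using a recording stub in place of a real context:

```javascript
// One draw routine, two hosts: the function only touches the standard
// 2D-context surface, so the browser passes a real context and the
// server passes a node-canvas one.
function drawCard(ctx, card) {
  ctx.fillStyle = card.rare ? 'gold' : 'gray';
  ctx.fillRect(0, 0, 200, 300);
  ctx.fillText(card.name, 20, 40);
}

// Minimal context stub that just records calls, enough to exercise
// drawCard without any canvas implementation at all.
function makeStubContext() {
  const calls = [];
  return {
    calls,
    set fillStyle(v) { calls.push(['fillStyle', v]); },
    fillRect(...a) { calls.push(['fillRect', ...a]); },
    fillText(...a) { calls.push(['fillText', ...a]); },
  };
}
```

Because `drawCard` never touches DOM or Node APIs directly, the same file can be bundled for the browser and required by the batch prerenderer.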
I have built quite a few rendering servers for various uses with canvas.js and Node.js. I also built a prototype of a dynamic video rendering server once, also based on Node.js and canvas.js. It was slow, but it worked out as a proof of concept that became a fairly successful startup.
You're wayyy off here. There are all sorts of reasons to generate images on a server. Just look up ImageMagick. And canvas has a much better drawing API, plus drawing libraries that would be compatible.
https://github.com/jrmuizel/raqote