Show HN: Canvas engines performance comparison – PixiJS, Two.js, and Paper.js (slaylines.io)
193 points by gnykka 58 days ago | 102 comments



Something is definitely up with this benchmark - PixiJS can handle over 60k objects at once in this benchmark without even dipping below 60FPS:

https://www.goodboydigital.com/pixijs/bunnymark/

I checked the source briefly, and the problem is that the source draws every single rectangle manually every single tick. That's very inefficient. Just draw a rectangle to a Texture once, and create a bunch of sprites using that Texture. Pixi should be able to handle at least 10x the amount of rectangles if you do this.


> just draw a rectangle to a Texture once, and create a bunch of sprites using that Texture.

But then all of your boxes are the same. Which is not what this benchmark is.

We can conclude that Pixi.js is probably faster for drawing sprites, but is definitely slower than Two.js at drawing randomly sized rectangles.


Not on my phone it isn’t. PixiJS is the only library that stays at 60fps, Two is at 30fps and Paper is at 15fps.


I'm on a very slow phone but even here PixiJS is usable @ 10fps while the others barely touch 2fps.


Same here (Samsung Note 8)


> but is definitely slower than Two.js at drawing randomly sized rectangles.

I pushed a fix, the way it was drawing randomly sized rectangles was an unfair comparison. It is much faster than Two.js now (tried it at 10000 rectangles too) and initialises much faster also.


The comparison was unfair: for Paper.js and Two.js it was only moving the positions of the rectangles each frame, but for PixiJS it was redrawing them from scratch, which means it had to retessellate everything every frame.

I made a PR to fix it and it's much better: https://github.com/slaylines/canvas-engines-comparison/pull/...

It is still not the optimal way to do it in PixiJS, but it at least matches the other examples now.


I'm not familiar with PixiJS in any way other than knowing that it's a game engine. What is the difference between using a single PIXI.Graphics for the whole canvas vs a PIXI.Graphics for every rectangle? Pros/cons?


I see you saw the response on the issue, for anyone else this was the response:

The golden rule is: the less you have to clear() and redraw the insides of the graphics, the faster it's going to be, because PixiJS has to convert your draw commands to triangles to pass to WebGL. If you're just moving the position of something, it can reuse the already calculated triangles and just add an offset before drawing, whereas previously you were generating new triangles and hard-baking the position into them each frame.

Because you move each rectangle at a different rate, I used a separate Graphics instance for each so that we could move them without having to redraw them.
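As a plain-JS illustration of that rule (not PixiJS's actual internals, just the "reuse the triangles, apply an offset" idea):

```javascript
// Slow path: regenerate every vertex each frame with the position baked in.
function bakeRect(x, y, w, h) {
  // Two triangles per rectangle; positions are hardcoded into the data.
  return [x, y, x + w, y, x, y + h, x + w, y, x + w, y + h, x, y + h];
}

// Fast path: tessellate once at the origin...
function baseRect(w, h) {
  return bakeRect(0, 0, w, h);
}

// ...then only apply an offset when drawing.
function translate(verts, dx, dy) {
  const out = new Array(verts.length);
  for (let i = 0; i < verts.length; i += 2) {
    out[i] = verts[i] + dx;
    out[i + 1] = verts[i + 1] + dy;
  }
  return out;
}

// Both paths produce identical geometry, but the second reuses the
// tessellation across frames and only pays for the offset.
const baked = bakeRect(10, 20, 5, 5);
const offset = translate(baseRect(5, 5), 10, 20);
```

In WebGL the offset would typically be a uniform applied in the vertex shader, so the fast path doesn't even need the CPU-side copy.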


At your link, on my phone, performance dips when new bunnies appear, then gets back up in a few seconds. Which is weird, since afaiu objects should be handled in exactly the same way throughout their lifetime.


I see the same on the desktop. It's probably some procedural code that is updating a data structure as bunnies are added. Just speculation - I haven't looked at the code.


Holy fuck, you just sold me on using PixiJS if I should ever need to do something with canvases.


150k bunnies at 60fps on an iPhone 8. Impressive.


That demo is insane. Didn't know the browser was capable of this.


All thanks to WebGL.

I've been using this exact test for years now to judge a phone/tablet before buying/recommending - devices with the exact same SoC can differ wildly in performance.

Nowadays any phone that can't display 20k bunnies at 20FPS likely has:

1. An underpowered GPU.

2. Badly designed cooling.

3. A screen with too high a resolution.

Of course there are many more thorough and appropriate benchmarks, but this one doesn't require you to install anything and will give you an answer before you're approached by the store staff inquiring what it is you're trying to do with the merchandise.


I use that as one of two tests. The other being

https://www.shadertoy.com/view/Xds3zN

Something with lots of work per pixel to complement Bunnymark's lots of pixels.


> Nowadays any phone that can't display 20k bunnies at 20FPS likely has

Given my aging Samsung Galaxy S6 averages about 30FPS for 20k bunnies in Firefox mobile, your criteria might actually be too lax...

Edit: And in Chrome on the same phone I'm in the mid 30's FPS for 50k bunnies.


Back in 2014, when I started, 20k was an impressive number.

To give an example: the Sony Xperia Z2 tablet appeared to be decent, because it had the - back then - state of the art Snapdragon 801 SoC.

I think I got less than 10k bunnies on it - no idea why (no power saving mode or anything), but it was a deal breaker for me.


It’s WebGL underneath, which explains the ability to render really fast. You’re not drawing individual shapes so much as shoving data structures into the graphics card.

(That’s not a statement on how impressive this is, just a “oh that’s how it’s done.”)


Agreed.

One thing that I didn't realize when I first saw it was that the demo is able to render so many sprites because it's doing the simplest possible thing - rendering just a few textures. If you render a bunch of different sprites with different textures, you'll get a big FPS hit.


PixiJS is the only one to reach above 100 FPS on my desktop, though that's in between frequent GC pauses. Two.js gets ~70 and Paper.js gets ~48.

There would probably be fewer GC pauses if the benchmark code wasn't doing things like

  [...Array(this.count.value).keys()].forEach(...)
instead of a for loop.


I see people doing this crap all the time in the middle of tight loops that do a lot of work in C#, JS, TypeScript, and justifying it on the grounds of "productivity" and "readability". It seriously gets on my nerves. Do not do this.

Functional code might look nice but often creates excess work for the GC and kills performance. We had a situation within the last week where a piece of code was blowing through 350MB of memory unnecessarily, and massively slowing down a heavy set of calculations, because of exactly this kind of issue.


> justifying it on the grounds of "productivity" and "readability". It seriously gets on my nerves. Do not do this.

On the contrary, please do this. "productivity" and "readability" are important aspects to consider when writing code, especially if someone else is going to be reading it.

When you've identified a bottleneck, feel free to write the code in the bottleneck more performantly, if necessary. But please do not sacrifice readability across the entire codebase for a couple of hot loops.


I can agree with your point in general, but [...Array(N).keys()].forEach() is not the most readable way to write "do this N times".

It creates an array of length N, but for obscure-to-most-people reasons Array(N).forEach() doesn't work (the array is all holes, which forEach skips), so they Rube Goldberged their way to an array that they could call forEach() on. Their solution was to use Array#keys to get an iterator over the indices. But an iterator doesn't have a .forEach() method, so they spread the iterator into another array just to iterate it again with Array#forEach. Frankly, the only thing this seems optimized for is solving the problem without a for loop.

The for-loop, on the other hand, is an instantly obvious solution. It's how programmers have been expressing "do this N times" for decades across languages.
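For comparison, the two idioms side by side (plain JS; the squaring is just a stand-in for per-item work):

```javascript
const N = 5;

// Allocates an array of holes, an iterator, and a second array just to count to N.
const out1 = [];
[...Array(N).keys()].forEach((i) => out1.push(i * i));

// Allocates nothing beyond the output.
const out2 = [];
for (let i = 0; i < N; i++) out2.push(i * i);

// Both produce [0, 1, 4, 9, 16].
```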


Especially when Array(N).fill(0).forEach((_, key) => ...) gives you the same exact functionality without the second eager array if you're seriously trying to avoid for loops.


    Array.from({ length: N }).forEach()


  for(var i = 0; i < N; i++) {}


This is the best and cleanest way.

Unless there's a reason to use Array.forEach, such as automatic parallelization or some other SIMD-like optimization that can be done there but cannot be done in a for loop.

A lot of cargo-cult programming seems to take functional programming to mean programming using the map/reduce/forEach functions, and you end up with shitty code like [...Array(N).keys()].forEach(), just so you can somehow claim that you're doing functional programming.


That was already proposed upstream.

The person above wanted a way to actually create an array of length N that could be used with .forEach().


`let` instead of `var`. No need to leak the variable into the outer scope.


I sometimes wish there was a way to demand the same level of proof for claims of readability and productivity that we demand for claims about performance.

Measuring before optimizing is of course a good idea, but it's also time consuming. There's a lot of latitude to make different reasonable choices about how you write code in the first place before you get to the measuring stage.

Should we criticize someone who reaches for a for loop because they know it doesn't allocate without proving that that matters more than someone who reaches for a map because they think it's more readable and productive without any great way of proving that that's true?


>map because they think it's more readable and productive without any great way of proving that that's true?

I mean, doing

    sendUserIds(users.map(u => u.id))
is very obviously more readable than what you're forced to do in languages like Go

    ids := make([]int, len(users))
    for i, user := range users {
        ids[i] = user.Id
    }
    sendUserIds(ids)
The difference only grows once you start needing to do more operations, grouping, putting stuff into map by a certain key, etc. Many languages also have lazy sequences for such operations (e.g. https://kotlinlang.org/docs/reference/sequences.html, though they aren't always faster, it depends)

The code in the comment above is however not an example of this, and is actually less readable in my opinion. It seems more like an author wishing to be using a different language instead of accepting what they have in front of them.


There is nothing about this statement that would make it more readable than a simple for loop.


You really don't need to measure to know that a tight loop inside a render function called at 60 fps is performance critical. The post does call this out explicitly as "in the middle of tight loops", which is a very, very different situation than code that pretty much only runs once.

This is pretty far from premature optimization even if you never measure the effect, it is entirely predictable that code like this would have unnecessarily bad performance. If you write performance sensitive code you don't allocate stuff inside a tight loop unless you can't avoid it. The consequence of this is typically that the code is more straightforward imperative with fewer abstractions, but that is not inherently less readable.


>it is entirely predictable that code like this would have unnecessarily bad performance

Unless the compiler optimizes it out...


Is that really readable? I may be a crusty old fart, but that kind of shit is the antithesis of readable to me.


Sir, I have to upvote you


I changed the loops to use a for() cycle, thanks


Good point. Usually I like to use map or reduce for arrays, but here a simple for loop is easier.



https://jsfiddle.net/vas71r68/3/

Don't do stuff like pushing rectangles to an array just to have their positions wrapped later, instead of simply doing it right there - unless you want to test the GC.
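A minimal sketch of what "doing it right there" means (illustrative names, not the fiddle's actual code): update and wrap each position in one pass, with no per-frame temporary arrays:

```javascript
// Advance every rectangle and wrap it at the right edge, in place.
function step(xs, speeds, width) {
  for (let i = 0; i < xs.length; i++) {
    xs[i] += speeds[i];
    if (xs[i] > width) xs[i] -= width; // wrap right here; no intermediate array
  }
}

const xs = [95, 10];
step(xs, [10, 5], 100); // xs is now [5, 15]
```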


My point was more to compare libraries than to create the best canvas performance. But you are right: simple plain canvas is usually pretty fast.


Yeah, by default it's accelerated anyway, at least depending on browser and hardware. So, at least for something as basic as drawing a lot of things with the same color, engines and additional code can only make it slower.

Generally speaking, avoid GC, and avoid setting state that's already set... if you do these two things and use engines that do these two things, it's usually going to be more than fast enough :)


It's nice if you can avoid creating new objects, but in a recent optimization I've seen no difference between reusing objects vs immutable objects. Removing unnecessary code is always nice though.


What might be more interesting is drawing performance of images. And yes, a comparison with plain old canvas would be nice too.


LOL, this looks like 60 FPS for 5000 rectangles and _no_ canvas engine, stock. The three from the article are at 10-20 FPS. (Pixelbook c0a)

Correct me if I am wrong.

Nice optimization.


I might be the only one but for such a simple website I found the menu confusing. When changing renderers, the order appears to change every time so I forgot which one I was previously looking at. Also, the Count is always reset back to 1000 so it's annoying to compare renderers at the highest count.


Not just you, super frustrating for such a simple thing.


You're not alone. Super annoying, especially on a phone


I wasn't thinking about this as a menu; these are just links to switch between renderers. Count saving is a good point, I'll add this feature


You were not the only one


I use Paper.js in my main project, and if you want to make a 2D game or something it would be a terrible choice. It isn't fast. However, it gives you a ton of extremely useful tools (like calculating intersections between arbitrary paths) with a very well thought out API. For interactive diagram generation it's great. The performance is still a solid 60fps if you limit what's getting updated to fewer than 20 or so things at a time.


> if you want to make a 2D game

all of these are quite low-level engines, nothing wrong with that, but there are wrappers around them, like Phaser, which uses Pixi as a backend and provides some useful abstractions on top of the rendering.


FYI, Phaser 3 doesn't use Pixi as backend.


Paper.js simply isn't meant to be used to draw thousands of rectangles. You should only use something GPU-based for that, like Pixi or Three.

I love Paper.js because it has a TON of super useful vector features and is still reasonably fast :D


The boolean functionality in Paper.js is quite outstanding. I am not aware of any other Javascript library that features such a robust implementation of path unite/intersect/subtract/exclude operations.


I simply loved the examples on the Paper.js webpage and just wanted to try it


https://matttt.github.io/sotu

Phew, glad I picked Pixi.js for this then. I’m building the next version of Scale of the Universe

(For windows/mac/linux only btw)


This is really cool. Do you selectively render depending on the zoom level? (I looked for source but could only find the minified js on your github)


Yeah, I have basic culling implemented. Objects that are larger than, say, 3x their normal scale are not rendered. Likewise with those 1/100th their normal scale.


> (For windows/mac only btw)

What do you mean? It's a website.


You can try it on a phone but it's not designed for it. There's a separate app version of it for mobile.


I guess you should have said it's for desktop then, since I felt like you were implying Linux browsers won't work (which they do).


Not exactly on topic, but cycling the links is extremely annoying.


Two.js was fastest on my flagship Android phone with 5k elements. PixiJS was expected to perform best?

But on my flagship iOS device PixiJS got top spot, two.js close behind.


Interesting. On my iOS device and on MBP PixiJS is the fastest. Any idea why PixiJS can be slower on Android devices?


On my laptop PixiJS was easily the winner when it went to 5000. At 1000, Pixi and Two were equal at 144fps, but Two really sucked at all higher counts


Same on my Pixel 3a when rendering 2,000. two.js is faster. At 1,000 they're neck and neck.


How do these compare to https://p5js.org/ in terms of performance? I've only used p5.js.


To quote my dear teacher on this "Anything will be faster than p5."


Shameless plug here, but I made a quick-and-dirty perf test that does something similar in our HTML5 game engine Construct 3, and it appears to run way faster even with 20000 boxes: https://www.scirra.com/labs/boxperf/index.html

This kind of test is easy work for a well-batched renderer. You can accumulate everything into a single big typed array, copy to a single vertex buffer, and do one call to drawElements() in WebGL, and bingo, tens of thousands of sprites drawn in one go.
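A rough sketch of the accumulation step (plain JS, positions only; a real renderer would interleave colors/UVs too):

```javascript
// Pack many rectangles into one vertex buffer + one index buffer,
// ready for a single gl.drawElements(gl.TRIANGLES, ...) call.
function batchQuads(rects) {
  const verts = new Float32Array(rects.length * 4 * 2); // 4 corners, x/y each
  const indices = new Uint16Array(rects.length * 6);    // 2 triangles per quad
  rects.forEach(({ x, y, w, h }, q) => {
    verts.set([x, y, x + w, y, x + w, y + h, x, y + h], q * 8);
    const b = q * 4;
    indices.set([b, b + 1, b + 2, b, b + 2, b + 3], q * 6);
  });
  return { verts, indices };
}
```

Upload `verts` once per frame with gl.bufferData, and every rectangle is drawn in a single call.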

I've got other similar performance tests that can do hundreds of thousands of sprites @ 30 FPS (on a high-end machine), which I believe are bottlenecked mainly on memory bandwidth, because I only managed to make it go faster by reducing the size of the JS objects involved.

Modern JS is ultra fast - if you have the right performant coding style.


FWIW the code in the post doesn't use sprites. I think that's intentional, because not all of the engines support sprites. As someone mentioned in this thread, if PixiJS used sprites it could be 10x faster.


Makes sense because PixiJS uses WebGL2 by default, falling back to WebGL1 and then 2D canvas.


Yes, I agree. I also found out that Two.js redraws 5k elements even a little bit faster, but it takes a couple of seconds to render them for the first time.


Seems performance is much better with sprites:

https://pixijs.io/bunny-mark/

I get 10,000 bunnies at 60fps but only 2,000 rectangles.


My iPhone XR has better performance than both my 15" and 13" Macbook Pro laptops. Quick, someone put an A13 Bionic in a real machine.


I had to code up this test for my canvas library.

Results are ... well, Scrawl-canvas isn't Pixi.js fast!

But the results aren't too bad - I can live with that sort of speed. Especially as the library is entirely 2D with no WebGL magic added to the mix.

https://codepen.io/kaliedarik/pen/PoPQGxz


Could you add a switch to change the renderer, where possible (e.g. on PixiJS)? It would be interesting to compare all engines on the same renderer.


Actually I made a switch at first, since I started with Two.js and it has 3 renderer types (svg, canvas and webgl). But then there was Paper.js, which only had plain canvas, so I removed the switch altogether and used the fastest renderer for every engine (webgl or canvas).


From this I conclude I should try using Two.js for my game. I have been limited to drawing 4k neutrons for performance reasons, but Two.js might be a better solution.

https://darrell-rg.github.io/RBMKrazy/


Why is it that if I have a video playing in another tab, the FPS goes from 60 to 10?


Timers and JS get throttled for inactive tabs and windows.


requestAnimationFrame stops its timers when tabs become inactive. To fix this, reset timer values when the browser tab receives focus.

https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibi...
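A common complementary trick (sketch only, names assumed) is to clamp the per-frame delta, so a long throttled gap doesn't make the animation leap forward when the tab regains focus:

```javascript
const MAX_DT = 100; // ms; assumed cap for one frame's worth of simulation

function frameDelta(lastTime, now) {
  // A backgrounded tab can leave a multi-second gap between rAF callbacks;
  // treat anything larger than MAX_DT as a single capped step.
  return Math.min(now - lastTime, MAX_DT);
}
```

Combined with a visibilitychange listener that resets lastTime, the animation resumes smoothly instead of jumping.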


It's a browser feature to save performance


I'm a total newb with GPU stuff on the web. But I'm curious why these GPU frameworks aren't used to render most websites. Things such as Facebook, for instance, I imagine would be a lot snappier. Am I missing something?


Lots of reasons.

Font rendering for all of unicode is extremely hard.

https://gankra.github.io/blah/text-hates-you/

Multi-language input is extremely hard

https://lord.io/blog/2019/text-editing-hates-you-too/

Having to download an extra meg or 10 of code does not make your website responsive to start, and it's worse if you're updating it constantly so your users have to re-download that code every few days or hours.

Support for assistive technologies disappears. A page of HTML is relatively easy to scan for text to read aloud, turn into braille, or translate to another language. A screen (not a page) of pixels is not.

Similarly, extensions all break. Extensions work because there is a known structure to the page (HTML).

UI consistency disappears. Of course pages already have this issue, but it will be much, much worse if every site rolls its own pixel-rendering GUI, because none of the standard keys will work. Ctrl/Cmd-Z for undo, Ctrl/Cmd-A for select all? Similarly, maybe the user has changed those keys or is using some other assistive device, which all works because things are standardized.

Letting the browser handle what's best for the device probably disappears. ClearType for fonts? Rendering text or SVG at the user's resolution (yes, that can be handled by the page, but will it? It's up to the site).

Password managers break including the browser's built in one. There's no text field to find so no way to know if this is the place to fill them in.

Spell checking breaks. Same as above.

Basically your site will suck for users if you do this. Some will say frameworks will come along that try to solve all of these issues, but that will just mean every page is on a different version of the framework with different bugs not yet resolved. Sounds like hell.


Very good roundup! Thanks!

But don't you think there are a whole lot of apps that could benefit from being on a canvas rather than being slowed down by browser stuff? Editors come to mind: CodeSandbox and VS Code


Editors in particular are worse for some people if they can't render Unicode text properly, have poorer font rendering, can't be analysed by assistive tools such as screen readers and braille displays, and don't interact with the OS and tools outside the browser the same way HTML and native elements do.

Sometimes a good compromise is to use canvas for rendering some things on a page (the way Google Sheets does), but create HTML elements on top as needed for particular behaviour.


Because of browser support honestly.

Outside of grid layout, there is nothing salvageable from the CSS layout engine that cannot be done better inside a programmable view such as a canvas.

Give it 5 years or so for WebGL/WebGPU to standardize, and new UI libraries with a better layout engine and UI constructs far better than what can be built on top of HTML/CSS will show up.

Nobody thought we would take HTML/CSS as far as we have, but it has already served its purpose.

Your comment reminds me of this article back then: https://engineering.flipboard.com/2015/02/mobile-web


I think what we'll see in 5 years will be browsers using even more hardware acceleration/GPU acceleration for its rendering -- so it'll be entirely abstracted away from us. Obviously, direct canvas rendering will still be more performant.


I don't think anything will stop browser vendors from advancing the HTML/CSS standards, but I also believe that we will see the emergence of libraries that have a different opinion around layout declarations and core components. These libraries will benefit from browser vendors opening up the GPU with each new iteration.


I recently found out that Google Sheets uses Canvas for its spreadsheets. That's probably a part of why it's so performant.


Well, the browser already uses the GPU to render. The difference is how you manage state. A lot of web applications do that in a slow way, by storing data in the DOM for example, or by doing things that require the browser to re-render the whole scene.

You could port FB to Canvas and it'd probably be faster until you add a ton of abstraction to give you what HTML does.


A few screws, if you think websites need to be even more bloated ;)


What can I use if I need more performance than 2d canvas can give me? WebGL ?


WebGL is the fastest for now. But not all libraries and browsers support it


WebGL 1 support in the wild is roughly 97% and WebGL 2 is 54% based on stats collected from a number of participating sites. https://webglstats.com/


Paper.js looks the best on my Mac on a 4k monitor at 200% scaling. The rectangles in the others look like they are upscaled from a lower resolution. Canvas vs WebGL?


AFAIK Paper.js doesn't support WebGL and uses canvas. WebGL also applies anti-aliasing, which may be a reason for the blurring.


My understanding is canvas also applies AA; the rectangle lines still look anti-aliased, just at a higher DPI.

It looks to me like the rectangles done in WebGL are drawn at the scaled resolution (1920 x 1080) vs the unscaled resolution (3840 x 2160), whereas the canvas is DPI-aware and drawing at the full 4k resolution.

I would need to dig further, but basically in WebGL the rectangles are drawn to a texture at the lower res and then upscaled, just like a prerendered image of a rectangle at the lower DPI would be.

Edit:

Looks like Paper.js is specifically HiDPI-aware, and it can be turned off for better performance, which would be fairer when compared to the other implementations:

http://paperjs.org/tutorials/getting-started/working-with-pa...

hidpi="off": By default, Paper.js renders into a hi-res Canvas on Hi-DPI (Retina) screens to match their native resolution, and handles all the additional transformations for you transparently. If this behavior is not desired, e.g. for lower memory footprint, or higher rendering performance, you can turn it off, by setting hidpi="off" in your canvas tag. For proper validation, data-paper-hidpi="off" works just as well.

Also, PixiJS seems to support HiDPI if the resolution is set properly:

https://pixijs.download/dev/docs/PIXI.settings.html

  // Use the native window resolution as the default resolution
  // will support high-density displays when rendering
  PIXI.settings.RESOLUTION = window.devicePixelRatio;
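For plain canvas (no library), the equivalent HiDPI setup is roughly this (a sketch; browser-only parts shown as comments):

```javascript
// Compute the backing-store size for a canvas from its CSS size and
// the device pixel ratio.
function backingSize(cssWidth, cssHeight, dpr) {
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
  };
}

// In a browser you'd then do something like:
//   const dpr = window.devicePixelRatio;
//   const { width, height } = backingSize(800, 600, dpr);
//   canvas.width = width;  canvas.height = height;
//   canvas.style.width = '800px';  canvas.style.height = '600px';
//   ctx.scale(dpr, dpr); // draw in CSS pixels, render at native resolution
```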


The performance on my phone makes me sad. PixiJS can barely hit 30 fps and the others do worse than that. FWIW, I have a Pixel 2 phone and I'm using Firefox.


Don't worry - the benchmark is not optimized and only tests a very specific thing; you can still do awesome complex stuff running at 60fps with any of these libraries :)



