
Show HN: Canvas engines performance comparison – PixiJS, Two.js, and Paper.js - gnykka
https://benchmarks.slaylines.io
======
johnfn
Something is definitely up with this benchmark - PixiJS can handle over 60k
objects at once without even dipping below 60FPS:

[https://www.goodboydigital.com/pixijs/bunnymark/](https://www.goodboydigital.com/pixijs/bunnymark/)

I checked the source briefly, and the problem is that it draws every single
rectangle manually every single tick. That's very inefficient. Just draw a
rectangle to a Texture once, and create a bunch of Sprites using that Texture.
Pixi should be able to handle at least 10x the number of rectangles if you do
this.
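
A rough browser-only sketch of that approach, assuming a PixiJS v5-style API (the rectangle size and sprite count are arbitrary):

```javascript
// Draw the rectangle into a Graphics object once...
const app = new PIXI.Application();
document.body.appendChild(app.view);

const g = new PIXI.Graphics();
g.beginFill(0xffffff);
g.drawRect(0, 0, 10, 10);
g.endFill();

// ...bake it into a Texture once...
const texture = app.renderer.generateTexture(g);

// ...then reuse that one Texture for thousands of cheap Sprites,
// which the renderer can batch, instead of re-tessellating the
// rectangle geometry every tick.
for (let i = 0; i < 10000; i++) {
  const sprite = new PIXI.Sprite(texture);
  sprite.x = Math.random() * app.screen.width;
  sprite.y = Math.random() * app.screen.height;
  app.stage.addChild(sprite);
}
```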

~~~
jansan
That demo is insane. Didn't know the browser was capable of this.

~~~
Tade0
All thanks to WebGL.

I've been using this exact test for years now to judge a phone/tablet before
buying/recommending - devices with the exact same SoC can differ wildly in
performance.

Nowadays any phone that can't display 20k bunnies at 20FPS likely has:

1. An underpowered GPU.

2. Badly designed cooling.

3. A screen with too high a resolution.

Of course there are many more thorough and appropriate benchmarks, but this
one doesn't require you to install anything and will give you an answer before
you're approached by the store staff inquiring what it is that you're trying
to do with the merchandise.

~~~
kbenson
> Nowadays any phone that can't display 20k bunnies at 20FPS likely has

Given my aging Samsung Galaxy S6 averages about 30FPS for 20k bunnies in
Firefox mobile, your criteria might actually be too lax...

Edit: And in Chrome on the same phone I'm in the mid 30's FPS for 50k bunnies.

~~~
Tade0
Back in 2014, when I started, 20k was an impressive number.

To give an example: the Sony Xperia Z2 tablet appeared to be decent, because
it had the (back then) state-of-the-art Snapdragon 801 SoC.

I think I got fewer than 10k bunnies on it - no idea why (no power saving mode
or anything) - but it was a deal breaker for me.

------
negativegate
PixiJS is the only one to reach above 100 FPS on my desktop, though that's in
between frequent GC pauses. Two.js gets ~70 and Paper.js gets ~48.

There would probably be fewer GC pauses if the benchmark code wasn't doing
things like

      [...Array(this.count.value).keys()].forEach(...)

instead of a for loop.

~~~
bartread
I see people doing this crap all the time in the middle of tight loops that do
a lot of work in C#, JS, TypeScript, and justifying it on the grounds of
"productivity" and "readability". It seriously gets on my nerves. Do _not_ do
this.

Functional code might look nice but often creates excess work for the GC and
kills performance. We had a situation within the last week where a piece of
code was blowing through 350MB of memory unnecessarily, and massively slowing
down a heavy set of calculations, because of exactly this kind of issue.

~~~
johnfn
> justifying it on the grounds of "productivity" and "readability". It
> seriously gets on my nerves. Do not do this.

On the contrary, please do this. "productivity" and "readability" are
important aspects to consider when writing code, especially if someone else is
going to be reading it.

When you've identified a bottleneck, feel free to write the code in the
bottleneck more performantly, if necessary. But please do not sacrifice
readability across the entire codebase for a couple of hot loops.

~~~
hombre_fatal
I can agree with your point in general, but [...Array(N).keys()].forEach() is
not the most readable way to write "do this N times".

It creates an array of length N, but for obscure-to-most-people reasons,
Array(N).forEach() doesn't work (the array is all holes, which forEach
skips), so they Rube Goldberged their way to an array they could call
forEach() on. Their solution was to use Array#keys to get an iterator over
the indices. But an iterator doesn't have a .forEach() method, so they spread
the iterator into another array just to iterate it again with Array#forEach.
Frankly, the only thing this seems optimized for is solving the problem
without a for loop.

The for-loop, on the other hand, is an instantly obvious solution. It's how
programmers have been expressing "do this N times" for decades across
languages.
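
The chain of allocations is easy to see side by side; a small sketch with an arbitrary N:

```javascript
const N = 5;

// The roundabout version: Array(N) builds a sparse array, .keys() wraps
// it in an iterator, the spread copies the indices into a *second*
// array, and only then does forEach run. Two throwaway arrays plus an
// iterator object, all garbage once the loop finishes.
const a = [];
[...Array(N).keys()].forEach((i) => a.push(i));

// The for loop expresses "do this N times" while allocating nothing
// for the iteration itself.
const b = [];
for (let i = 0; i < N; i++) b.push(i);

console.log(a); // [0, 1, 2, 3, 4]
console.log(b); // [0, 1, 2, 3, 4]
```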

~~~
akiselev
Especially when Array(N).fill(0).forEach((_, key) => ...) gives you the same
exact functionality without the second eager array if you're seriously trying
to avoid for loops.

~~~
hombre_fatal

        Array.from({ length: N }).forEach()

~~~
Aeolun

      for(var i = 0; i < N; i++) {}

~~~
chii
This is the best and cleanest way.

Unless there's a reason to use Array#forEach, such as automatic
parallelization or some other SIMD-like optimization that can be done there
but not in a for loop.

A lot of cargo-cult programming seems to take functional programming to mean
programming with the map/reduce/forEach functions, and you end up with shitty
code like [...Array(N).keys()].forEach(), just so you can somehow claim that
you're doing functional programming.

------
mellow2020
[https://jsfiddle.net/vas71r68/3/](https://jsfiddle.net/vas71r68/3/)

Don't do stuff like pushing rectangles to an array just to have their
positions wrapped in a second pass, instead of simply doing it right there,
unless you want to test the GC.
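
A minimal sketch of the in-place version (the rectangle fields, velocity, and width are made up for illustration):

```javascript
const WIDTH = 100;
// Hypothetical rectangles moving right; only x and vx matter here.
const rects = [{ x: 85, vx: 10 }, { x: 95, vx: 10 }];

// GC-friendly: move and wrap in the same pass, mutating in place,
// instead of pushing out-of-bounds rects into a temporary array and
// fixing them up in a second loop.
function tick(rects) {
  for (let i = 0; i < rects.length; i++) {
    const r = rects[i];
    r.x += r.vx;
    if (r.x > WIDTH) r.x -= WIDTH; // wrap right there, no extra array
  }
}

tick(rects); // first rect moves to 95; second wraps from 105 to 5
```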

~~~
gnykka
My point was more to compare libraries than to achieve the best canvas
performance. But you are right: simple plain canvas is usually pretty fast.

~~~
mellow2020
Yeah, it's accelerated by default anyway, at least depending on browser and
hardware. So, for something as basic as drawing a lot of things in the same
color, engines and additional code can only make it slower.

Generally speaking, avoid GC, and avoid setting state that's already set... if
you do these two things and use engines that do these two things, it's usually
going to be more than fast enough :)

~~~
z3t4
It's nice if you can avoid creating new objects, but in a recent optimization
I've seen no difference between reusing objects and immutable objects.
Removing unnecessary code is always nice, though.

------
nogridbag
I might be the only one, but for such a simple website I found the menu
confusing. When changing renderers, the order appears to change every time, so
I forgot which one I was previously looking at. Also, the Count is always
reset back to 1000, so it's annoying to compare renderers at the highest
count.

~~~
cchance
Not just you, super frustrating for such a simple thing.

------
onion2k
I use Paper.js in my main project, and if you want to make a 2D game or
something it would be a terrible choice. It isn't fast. However, it gives you
a ton of extremely useful tools (like calculating intersections between
arbitrary paths) with a very well thought out API. For interactive diagram
generation it's great. The performance is still a solid 60fps if you limit
what's getting updated to fewer than 20 or so things at a time.

~~~
LoSboccacc
> if you want to make a 2D game

all of these are quite low-level engines, nothing wrong with that, but there
are wrappers around them, like Phaser, which uses Pixi as a backend and
provides some useful abstractions on top of the rendering.

~~~
darkwinx
FYI, Phaser 3 doesn't use Pixi as backend.

------
yokto
Paper.js simply isn't meant to be used to draw thousands of rectangles. You
should only use something GPU-based for that, like Pixi or Three.

I love Paper.js because it has a TON of super useful vector features and is
still reasonably fast :D

~~~
jansan
The boolean functionality in Paper.js is quite outstanding. I am not aware of
any other Javascript library that features such a robust implementation of
path unite/intersect/subtract/exclude operations.

------
mattmar96
[https://matttt.github.io/sotu](https://matttt.github.io/sotu)

Phew, glad I picked Pixi.js for this then. I’m building the next version of
Scale of the Universe

(For windows/mac/linux only btw)

~~~
haack
This is really cool. Do you selectively render depending on the zoom level? (I
looked for source but could only find the minified js on your github)

~~~
mattmar96
Yeah, I have basic culling implemented. Objects that are larger than, say, 3x
their normal scale are not rendered. Likewise with those 1/100th their normal
scale.

------
Freeboots
Not exactly on topic, but cycling the links is extremely annoying.

------
vardump
Two.js was fastest on my flagship Android phone with 5k elements. Wasn't
PixiJS expected to perform best?

But on my flagship iOS device PixiJS got the top spot, with Two.js close
behind.

~~~
mkalygin
Interesting. On my iOS device and on MBP PixiJS is the fastest. Any idea why
PixiJS can be slower on Android devices?

~~~
cchance
On my laptop PixiJS was easily the winner when it went to 5000. At 1000, Pixi
and Two were equal at 144fps, but Two really sucked at all the other levels.

------
jakecopp
How do these compare to [https://p5js.org/](https://p5js.org/) in terms of
performance? I've only used p5.js.

~~~
yokto
To quote my dear teacher on this "Anything will be faster than p5."

------
AshleysBrain
Shameless plug here, but I made a quick-and-dirty perf test that does
something similar in our HTML5 game engine Construct 3, and it appears to run
way faster even with 20000 boxes:
[https://www.scirra.com/labs/boxperf/index.html](https://www.scirra.com/labs/boxperf/index.html)

This kind of test is easy work for a well-batched renderer. You can accumulate
everything into a single big typed array, copy to a single vertex buffer, and
do one call to drawElements() in WebGL, and bingo, tens of thousands of
sprites drawn in one go.

I've got other similar performance tests that can do hundreds of thousands of
sprites @ 30 FPS (on a high-end machine), which I believe are bottlenecked
mainly on _memory bandwidth_, because I only managed to make it go faster by
reducing the size of the JS objects involved.

Modern JS is ultra fast - if you have the right performant coding style.
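
The batching idea can be sketched without any engine: pack every quad's vertices into one typed array per frame, then hand the whole thing to the GPU in a single call. The layout below (two floats per vertex, four vertices per quad) is illustrative, not any engine's actual format:

```javascript
// Accumulate all quads into one Float32Array. In WebGL you'd then do a
// single gl.bufferData(...) upload and a single gl.drawElements(...)
// call with a shared index buffer, instead of one draw call per
// rectangle.
function buildQuadBuffer(quads) {
  const verts = new Float32Array(quads.length * 8); // 4 verts * (x, y)
  let o = 0;
  for (const q of quads) {
    verts[o++] = q.x;       verts[o++] = q.y;       // top-left
    verts[o++] = q.x + q.w; verts[o++] = q.y;       // top-right
    verts[o++] = q.x + q.w; verts[o++] = q.y + q.h; // bottom-right
    verts[o++] = q.x;       verts[o++] = q.y + q.h; // bottom-left
  }
  return verts;
}

const buf = buildQuadBuffer([{ x: 0, y: 0, w: 10, h: 5 }]);
```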

~~~
mkalygin
FWIW the code in the post doesn't use sprites. I think that's intentional,
because not all of the engines support sprites. As someone mentioned in this
thread, if PixiJS used sprites, it could be 10x faster.

------
huy-nguyen
Makes sense because PixiJS uses WebGL2 by default, falling back to WebGL1 and
then 2D canvas.

~~~
gnykka
Yes, I agree. I also found out that Two.js redraws 5k elements even a little
bit faster but it takes a couple of seconds to render them for the first time.

------
andai
Seems performance is much better with sprites:

[https://pixijs.io/bunny-mark/](https://pixijs.io/bunny-mark/)

I get 10,000 bunnies at 60fps but only 2,000 rectangles.

------
sunpazed
My iPhone XR has better performance than both my 15" and 13" Macbook Pro
laptops. Quick, someone put an A13 Bionic in a real machine.

------
rikroots
I had to code up this test for my canvas library.

Results are ... well, Scrawl-canvas isn't Pixi.js fast!

But the results aren't too bad - I can live with that sort of speed.
Especially as the library is entirely 2D with no WebGL magic added to the mix.

[https://codepen.io/kaliedarik/pen/PoPQGxz](https://codepen.io/kaliedarik/pen/PoPQGxz)

------
butz
Could you add a switch to change the renderer, where possible (e.g. on
PixiJS)? It would be interesting to compare all engines on the same renderer.

~~~
gnykka
Actually I made a switch at first, as I started with Two.js, which has three
renderer types (svg, canvas and webgl). But then there was Paper.js, which
only has plain canvas, so I removed the switch altogether and used the fastest
renderer for each engine (webgl or canvas).

------
opwieurposiu
From this I conclude I should try using Two.js for my game. I have been
limited to drawing 4k neutrons for performance reasons, but Two.js might be a
better solution.

[https://darrell-rg.github.io/RBMKrazy/](https://darrell-rg.github.io/RBMKrazy/)

------
xmonkee
Why is it that if I have a video playing in another tab, the FPS goes from 60
to 10?

~~~
tobyhinloopen
Timers and JS get throttled for inactive tabs and windows.
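
A common way to cope with that throttling - sketched below with made-up velocity and frame timings - is to scale movement by real elapsed time rather than assuming a fixed frame rate:

```javascript
// Frame callbacks may arrive at ~10Hz in a background tab instead of
// 60Hz. Advancing by elapsed milliseconds keeps the simulation correct
// regardless of how often frames actually fire.
function advance(x, velocity, dtMs) {
  return x + velocity * (dtMs / 1000); // velocity in units per second
}

// 60 frames of ~16.7ms and 10 frames of 100ms cover the same distance:
let a = 0;
for (let i = 0; i < 60; i++) a = advance(a, 120, 1000 / 60);
let b = 0;
for (let i = 0; i < 10; i++) b = advance(b, 120, 100);
```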

------
sktrdie
I'm a total newb with GPU stuff on the web, but I'm curious why these GPU
frameworks aren't used to render most websites. Things such as Facebook, for
instance, I imagine would be a lot snappier. Am I missing something?

~~~
greggman3
Lots of reasons.

Font rendering for all of unicode is extremely hard.

[https://gankra.github.io/blah/text-hates-you/](https://gankra.github.io/blah/text-hates-you/)

Multi-language input is extremely hard

[https://lord.io/blog/2019/text-editing-hates-you-too/](https://lord.io/blog/2019/text-editing-hates-you-too/)

Having to download an extra meg or ten of code does not make your website
responsive to start, and it's worse if you're updating it constantly, so your
users have to re-download that code every few days or hours.

Support for assistive technologies disappears. A page of HTML is relatively
easy to scan for text to read aloud, turn into braille, or translate to
another language. A screen (not a page) of pixels is not.

Similarly, extensions all break. Extensions work because there is a known
structure to the page (HTML).

UI consistency disappears. Of course pages already have this issue, but it
will be much, much worse if every site rolls its own pixel-rendering GUI,
because none of the standard keys will work. Ctrl/Cmd-Z for undo? Ctrl/Cmd-A
for select all? Similarly, maybe the user has remapped those keys or is using
some other assistive device, all of which works because things are
standardized.

Letting the browser handle what's best for the device probably disappears.
ClearType for fonts? Rendering text or SVG at the user's resolution? (Yes,
that can be handled by the page, but will it? It's up to the site.)

Password managers break including the browser's built in one. There's no text
field to find so no way to know if this is the place to fill them in.

Spell checking breaks. Same as above.

Basically, your site will suck for users if you do this. Some will say
frameworks will come along that try to solve all of these issues, but that
will just mean every page is on a different version of the framework with
different bugs not yet resolved. Sounds like hell.

~~~
sktrdie
Very good roundup! Thanks!

But don't you think there are a whole lot of apps that could benefit from
being on a canvas rather than being slowed down by browser stuff? Editors come
to mind: CodeSandbox and VS Code.

~~~
jlokier
Editors in particular are worse for some people if they can't render Unicode
text properly, have poorer rendering of fonts, can't be analysed by assistive
tools such as screen readers and braille displays, and don't interact with the
OS and tools outside the browser the same way HTML and native elements do.

Sometimes a good compromise is to use canvas for rendering some things on a
page (the way Google Sheets does), but create HTML elements on top as needed
for particular behaviour.

------
polskibus
What can I use if I need more performance than 2d canvas can give me? WebGL ?

~~~
gnykka
WebGL is the fastest for now, but not all libraries and browsers support it.

~~~
huy-nguyen
WebGL 1 support in the wild is roughly 97% and WebGL 2 is 54% based on stats
collected from a number of participating sites.
[https://webglstats.com/](https://webglstats.com/)

------
SigmundA
Paper.js looks the best on my Mac on a 4k monitor at 200% scaling. The
rectangles in the others look like they are upscaled from a lower resolution.
Canvas vs WebGL?

~~~
mkalygin
AFAIK Paper.js doesn't support WebGL and uses canvas. WebGL also applies
anti-aliasing, which may be the reason for the blurring.

~~~
SigmundA
My understanding is that canvas also applies AA; the rectangle lines still
look anti-aliased, just at a higher DPI.

It looks to me like the rectangles done in WebGL are drawn at the scaled
resolution (1920 x 1080) vs the unscaled resolution (3840 x 2160), whereas the
canvas is DPI aware and drawing at the full 4k resolution.

I would need to dig further but basically in WebGL the rectangles are drawn to
a texture at the lower res then upscaled just like a straight image of a
rectangle prerendered would be at the lower dpi.
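
A quick sketch of the arithmetic behind a DPI-aware canvas (the helper name is made up; in a browser, `dpr` comes from `window.devicePixelRatio`, and you'd also keep the CSS size at the logical dimensions so the element doesn't grow on screen):

```javascript
// The backing store is the logical (CSS) size multiplied by the device
// pixel ratio; a non-DPI-aware renderer skips this step and gets
// upscaled by the compositor instead.
function backingStoreSize(cssWidth, cssHeight, dpr) {
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
  };
}

// A 1920x1080 logical canvas on a 2x display gets a 3840x2160 backing store:
const size = backingStoreSize(1920, 1080, 2);
```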

Edit:

Looks like paper.js is specifically HiDpi aware and it can be turned off for
better performance which would be more fair when compared to the other
implementations:

[http://paperjs.org/tutorials/getting-started/working-with-paper-js/](http://paperjs.org/tutorials/getting-started/working-with-paper-js/)

hidpi="off": By default, Paper.js renders into a hi-res Canvas on Hi-DPI
(Retina) screens to match their native resolution, and handles all the
additional transformations for you transparently. If this behavior is not
desired, e.g. for lower memory footprint, or higher rendering performance, you
can turn it off, by setting hidpi="off" in your canvas tag. For proper
validation, data-paper-hidpi="off" works just as well.

Also PixiJS seems to support HiDpi if resolution is set properly:

[https://pixijs.download/dev/docs/PIXI.settings.html](https://pixijs.download/dev/docs/PIXI.settings.html)

      // Use the native window resolution as the default resolution
      // will support high-density displays when rendering
      PIXI.settings.RESOLUTION = window.devicePixelRatio;

------
itsajoke
The performance on my phone makes me sad. PixiJS can barely hit 30 fps and the
others do worse than that. FWIW, I have a Pixel 2 phone and I'm using Firefox.

~~~
yokto
Don't worry - the benchmark is not optimized and only tests one very specific
thing. You can still do awesome, complex stuff running at 60fps with any of
these libraries :)

