

I benchmarked 8 different ways of representing a 2d grid in JavaScript - shazow
https://github.com/shazow/grid-benchmark.js

======
ilmarihei
Looked at the code a bit and tried to see what gives with the 1D example.
Conclusions: 2D arrays were fast in Chrome, slow in Minefield; 1D arrays were
fast in Minefield; Minefield had a tendency to occasionally get pummeled by
the GC in the write test; data layout matters even in JS; you should use
get(x,y) instead of get([x,y]); counting downwards in loops only helps in
making the code buggier.

Passing the pos coords as x,y instead of [x,y] halved the get/set runtimes.

The performance difference between data layouts was around 30-100%. That is,
accessing [x][y] by iterating over x in the inner loop is a good deal slower
than iterating over y in the inner loop, and vice versa for [y][x].
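
A minimal sketch of the two traversal orders (function names hypothetical,
not from the benchmark code):

```javascript
// Build a w x h grid stored as grid[x][y]: an outer array of w inner arrays.
function makeGrid2d(w, h) {
  var grid = [];
  for (var x = 0; x < w; x++) {
    grid[x] = [];
    for (var y = 0; y < h; y++) grid[x][y] = 0;
  }
  return grid;
}

// Fast for grid[x][y]: the inner loop walks one inner array sequentially.
function sumInnerY(grid, w, h) {
  var sum = 0;
  for (var x = 0; x < w; x++)
    for (var y = 0; y < h; y++) sum += grid[x][y];
  return sum;
}

// Slower for grid[x][y]: the inner loop hops to a different inner array on
// every iteration, touching all w inner arrays once per row swept.
function sumInnerX(grid, w, h) {
  var sum = 0;
  for (var y = 0; y < h; y++)
    for (var x = 0; x < w; x++) sum += grid[x][y];
  return sum;
}
```

Both compute the same result; only the memory access order differs.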

Arrays and typed arrays had roughly equal performance in this. On Minefield,
typed arrays were 10-20% faster than initialized arrays and initialized arrays
were faster to access than uninitialized arrays. Zeroing out an array was a
good deal slower than relying on the zero-initialized nature of typed arrays.

On Minefield, a 1D array was 5x faster than a 2D array. On Chrome, a 1D array
was 40% slower than a 2D array. The Minefield 1D array version was 60% faster
than the Chrome 2D array.

Accessing closure variables was slightly faster than property access.
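
The two access patterns, sketched (names hypothetical; both store a 1D
row-major grid):

```javascript
// Closure version: set/get capture `data` and `w` directly from scope.
function gridViaClosure(w, h) {
  var data = new Array(w * h);
  return {
    set: function(x, y, v) { data[y * w + x] = v; },
    get: function(x, y) { return data[y * w + x]; }
  };
}

// Property version: every call looks up `this.data` and `this.w` on the object.
function GridViaProperty(w, h) {
  this.w = w;
  this.data = new Array(w * h);
}
GridViaProperty.prototype.set = function(x, y, v) { this.data[y * this.w + x] = v; };
GridViaProperty.prototype.get = function(x, y) { return this.data[y * this.w + x]; };
```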

Typed arrays are easy to use: new Array(w*h) -> new Int8Array(w*h). if
(typeof Int8Array == 'undefined') Int8Array = Array; gets you backwards
compatibility for simple uses.
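
A rough sketch of that fallback (with the caveats that a plain Array, unlike
a typed array, is not zero-initialized, and that Int8Array wraps values
outside -128..127; function name hypothetical):

```javascript
// Use Int8Array where available, a plain Array otherwise.
var ByteArray = (typeof Int8Array === 'undefined') ? Array : Int8Array;

function makeGrid1d(w, h) {
  var grid = new ByteArray(w * h);
  if (ByteArray === Array) {
    // Typed arrays come zeroed for free; the fallback needs explicit zeroing.
    for (var i = 0; i < w * h; i++) grid[i] = 0;
  }
  return grid;
}
```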

------
apu
After a bit of experimenting myself, I think these numbers can be quite
misleading.

It appears that execution times change drastically depending on function names
and orderings! (This is in both firefox and chrome, on a macbook.) I tried
copying the 2d array function and renaming it different things, and found
different times, which often caused major differences in its relative order
against the original function.

Also, running the test multiple times gives different enough results that
orderings change.

Finally, I think the timing code needs some work. I remember John Resig had
some posts on this. This is one of them:

<http://ejohn.org/blog/javascript-benchmark-quality/>

But I can't find another one where he talks about the fact that with very
small times, timing functions in javascript give wildly varying values,
because they depend on some sort of interrupt (maybe?)...

~~~
shazow
You're right, I should be running each test at least a few times and taking
the average. I found that the results vary a few percent here and there, but
the drastic difference between the different data structures is always
consistent and visible.

------
jdavid
in your set function for canvas, you change the fill style for each set, and
you draw a full rectangle which causes an invalidation event for the browser.

i think you are taking the wrong approach.

this page shows a fast red-eye removal example:
<http://disruptive-innovations.com/zoo/demos/eyes.html>

instead of operating on the canvas data, you should operate on the imageData
object, which is like a buffer and will not cause an invalidation paint event.

original function

      structs.grid_canvas = function(size) {
        var canvas = document.createElement("canvas");
    
        canvas.width = size[0];
        canvas.height = size[1];
    
        var ctx = canvas.getContext("2d");
    
        return {
            set: function(pos, value) {
                ctx.fillStyle = 'rgb(' + value +',0,0)';
                ctx.fillRect(pos[0], pos[1], 2, 2);
            },
            get: function(pos) {
                return ctx.getImageData(pos[0], pos[1], 1, 1).data[0];
            }
        }
      }

new function

      structs.grid_imagedata = function(size) {
        var canvas = document.createElement("canvas");
    
        canvas.width = size[0];
        canvas.height = size[1];
    
        var ctx = canvas.getContext("2d");
        var imgd = ctx.getImageData(0, 0, size[0], size[1]);
    
        /**
        //use this to invalidate the canvas at a reasonable frame_rate
        var frame_rate = 24; //fps
        var paint_frame = setInterval( function(){
    
        ctx.putImageData(imgd, 0, 0);
    
        }, 1000/frame_rate );
    
        */
    
        return {
            set: function(pos, value) {
                var offset = (pos[1] * imgd.width + pos[0]) * 4;
                // + 0=red, 1=green,2=blue,3=alpha
                imgd.data[offset + 0]= value;
            },
            get: function(pos) {
                var offset = (pos[1] * imgd.width + pos[0]) * 4;
                // + 0=red, 1=green,2=blue,3=alpha
                return imgd.data[offset + 0];
            }
        }
      }

~~~
shazow
getImageData returns an array equivalent to grid_1d_array. The computation
required to translate the 2d position (in this case 3d position as there's 4
slots per index) is what killed the performance of grid_1d_array.

Also I don't see how invalidating the canvas 24 times per second is going to
help in this benchmark?

~~~
bgrins
Calling fillRect is massively slower than calculating an index in the
array. As a general rule, it is more efficient to modify native data
structures directly, while updating the DOM as infrequently as possible.

I understand that you may have been _wanting_ to benchmark the actual speed of
updating the canvas, but it really isn't a fair comparison with updating
arrays.

Invalidating the canvas 24 times a second doesn't help with the benchmark, but
it would make the example usable, in that it would actually update the canvas
element on your screen.

On my computer, the example from jdavid ran in 287ms, instead of the original
which ran in 7427ms.

~~~
shazow
I was trying to benchmark the actual speed of updating the canvas, in hopes
that it was doing something clever behind the scenes. I can see that it's not
a fair comparison, but a fair comparison is completely redundant with
grid_1d_array (except for a needlessly larger array).

~~~
bgrins
I see your point. Thanks for posting this code - I implemented the A*
pathfinding algorithm in JavaScript, so I looked into some of these options
also. I ended up using the basic 2d structure to store the grid, since it was
the most straightforward and seemed somewhat equivalent to 1d.

For a demo of it (<http://briangrinstead.com/files/astar/>), I used divs to
represent the grid on the page, but I think it would be awesome to use the
canvas grid representation since I suspect it will have much better
performance than thousands of divs on the page (switch it to 100x100 to see
what I mean).

------
_ques
If you make an interface that automatically runs this in visitors' browsers
and populates a form that users can submit (or alternatively, simply make an
AJAX call), you can get hundreds of different OS / browser version / hardware
combinations.

Aggregating these numbers should give you really nice insights into different
VMs / platforms etc.

~~~
shazow
I considered this but there weren't any existing services that did this and
allowed me to make the type of tests I needed to make (and I didn't want to
spend unnecessary time building one). I also ran the benchmarks on Firefox and
though the numbers varied, the overall trend was the same. I don't imagine it
would be much different for IE.

Given the motivation of the benchmark, Chrome was of main concern for me. :)

~~~
Detrus
<http://jsperf.com/>

That service can benchmark multiple browsers and compare the results.

Example <http://jsperf.com/instanceof-htmlelement-vs-tagname-equals>

~~~
shazow
That's one of the ones I tried, but I needed some tests to create new objects,
and others to reuse the same object which was already created previously, and
I wanted to compare the numbers together rather than in isolation.

To use jsperf, I would have had to come up with many permutations of each test
at the very least. Not ideal.

------
ohhnoodont
It's neat that the most trivial solution, nested arrays, performed the best
(marginally).

------
mathias
If you mean to run this benchmark across different browsers, you should know
there might be some problems with the time measurement technique you’re using.
Read <http://calendar.perfplanet.com/2010/bulletproof-javascript-benchmarks/>
for a detailed explanation.

Why not use Benchmark.js <https://github.com/mathiasbynens/benchmark.js> for
your test case? This library will take care of all the issues mentioned in the
article I linked to. Or just take the easy road and use <http://jsperf.com/>
which uses Benchmark.js behind the scenes :)

~~~
shazow
True story. Wasn't aware of Benchmark.js, looks good! Anyone want to fork n'
port? :)

------
Sephr
I'd recommend using {Ui,I}nt{8,16,32}Array and Float{32,64}Array (introduced
with WebGL) whenever possible in your code, falling back to JS natives.

~~~
mmastrac
In my testing that I did for our DotSpots client-side code (basically MD2 over
tokenized text), I found that the WebGL arrays performed at best the same as
native arrays. In some cases they did much worse. This was universal across
webkit and gecko as well.

It's still worth testing - I'm hoping that the JIT engines are finally able to
optimize them properly.

------
gindevine
When creating a 2d grid array with all values the same, slicing one prebuilt
row multiple times is way faster than building each row from scratch.

   var row = [], grid = [];
   for (var y = h - 1; y >= 0; y--) row.push(0);
   for (var x = w - 1; x >= 0; x--) grid[x] = row.slice(0);
    

It seems to me that this method should work without any problems, but I may
be wrong.
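
A self-contained sketch of the row-slice idea, including the check that
matters: slice(0) returns a fresh copy, so mutating one column doesn't leak
into the others (function name hypothetical):

```javascript
// Build a grid of w columns, each an independent copy of one prebuilt
// h-element row of zeros.
function makeGridBySlice(w, h) {
  var row = [];
  for (var y = 0; y < h; y++) row.push(0);
  var grid = [];
  for (var x = 0; x < w; x++) grid[x] = row.slice(0); // fresh copy each time
  return grid;
}
```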

------
liuliu
There is a problem in the code that causes the grid_1d implementation to
dynamically increase array size on demand.

In the test case, it looped over from w to 0, and then from h to 0, thus, the
scanline size is (w + 1). In the grid_1d impl, he allocated array of w x h
size, which in fact should be (w + 1) x (h + 1).

~~~
shazow
The loop is from w-1 to 0 and h-1 to 0.

<http://github.com/shazow/grid-benchmark.js/blob/master/index.html#L60>

~~~
liuliu
That's correct. But I noticed a bigger problem:

You passed in [x,y] where x runs from w to 0 and y runs from h to 0. But your
indexing code in grid_1d is pos[0] * w + pos[1], which translates to x * w +
y; since w is the row width, the correct code should be something like y * w
+ x. I am sorry if I pushed too hard on the correctness of this code, but I
am just not quite buying the fact that indexing a 1d array will be slower
than indexing an array of arrays.

~~~
gruseom
_I am just not quite buying the fact that indexing a 1d array will be slower
than indexing an array of arrays_

As it happens, I recently ran similar benchmarks (on sparse arrays represented
using JS hashes) and got similar results: a hash of hashes performed much
faster than a single hash with computed keys, even though the keys seemed
trivial to compute. It was a disappointing outcome insofar as the data
structure in question is performance-critical and I had hoped that flattening
it would be an optimization. Another almost clever idea backfires. I wonder
whether collision/balancing issues in the underlying hashtable implementation
are responsible.

Admittedly, our sparse grid is a different beast than the OP's dense grid,
except that these things are not so different in Javascript, whose arrays are
weird specialized hashes anyway, and thus not so different when dense vs.
sparse. (Though not the same, either:
<http://news.ycombinator.com/item?id=1999637>)

------
gcb
Breaking news: JS programmers discover the memory vs. CPU tradeoff.

End of jackass mode. ...I was expecting more of a browser benchmark with the
already-proven methods.

~~~
shazow
Almost all of those methods use approximately the same amount of memory,
though the performance varies drastically.

The key is that few people know what's actually going on underneath the
JavaScript interpreter engine. In some cases it could be doing something
really clever, and in other cases it could be doing something really stupid.
The only ways to find out are to read the source or to benchmark. :P

~~~
gcb
my point exactly.

grid data structures in JS are already beaten to death. there's only the
canvas approach that can arguably be called new in that article.

But the final word on JS performance is to understand/benchmark how every
browser type/version does things under the hood.

