A couple of things to be aware of from a V8 perspective:
- To make the "creation" test fairer for normal arrays, they should be preallocated with new Array(arraySize), as long as arraySize does not exceed 90000. This ensures you are not wasting time reallocating the backing store as it grows (see the sketch after this list).
- It was pointed out to the test author a year ago that having a single test_SMTH function and calling it with different array types causes it to become polymorphic, which affects the generated code. V8 has become much better at handling polymorphism of this sort, but if you create a test_SMTH_ARRAYTYPE function for each combination of test and array type, you will see results that are not distorted by polymorphism.
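To illustrate both points, here is a minimal sketch of what a preallocated, per-array-type benchmark might look like; the names (arraySize, create_Array, test_fill_*) are made up for the example and are not the original benchmark's identifiers:

  // Hypothetical sketch; arraySize, create_Array and the test_fill_* names are
  // made up for illustration, not taken from the original benchmark.
  var arraySize = 65536; // stays under the preallocation threshold mentioned above

  // Preallocate the backing store instead of growing it element by element.
  function create_Array() {
    var a = new Array(arraySize);
    for (var i = 0; i < arraySize; i++) a[i] = 0;
    return a;
  }

  // One test function per array type, so each a[i] site only ever sees one
  // kind of array and the generated code stays monomorphic.
  function test_fill_Array(a) {
    for (var i = 0; i < a.length; i++) a[i] = i;
  }
  function test_fill_Float64Array(a) {
    for (var i = 0; i < a.length; i++) a[i] = i;
  }

  test_fill_Array(create_Array());
  test_fill_Float64Array(new Float64Array(arraySize));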
I am not very familiar with the internals of other engines. AFAIK both SpiderMonkey and Safari have polymorphic inline caches for a.foo property access sites. I can't say whether they leverage PICs for a[i] access sites.
But from a quick glance over the JavaScriptCore sources, they do not seem to handle any kind of polymorphism for a[i] sites in their new optimizing compiler (aka DFG). Additionally, they seem to handle polymorphism for a.foo sites in the DFG only if foo always has the same offset in all structures the site has seen. I might be wrong about this though; it was just a quick flight over the source, without even checking it out to disk.
A quick look with the JIT inspector addon (https://addons.mozilla.org/en-US/firefox/addon/jit-inspector...) suggests that a lot of performance is gated on the array[i] index access, which you already mentioned. Indeed the polymorphic lookups are hurting us very badly, and I think we don't inline cache anything and always take a stub call. (Screenshot: http://i.imgur.com/UKL0t.png) All the red sections are the very hot stub calls.
We only inline cache properties of regular objects (even with different shapes and offsets) and indexes into some optimized array kinds, but _not_ typed arrays. We used to do this, but Type Inference is usually very good with "normal" usages of typed arrays.
Yes, other engines do not seem to handle polymorphism well. I just amended the code to use a separate test_something_arraytype function for each array type. It turns out that the performance of the other browsers has improved significantly, and they now look closer to Chrome. Check out the updated charts in the blog post.
I suggest you add a note to the text pointing out that you are no longer using the original tests. It would be even cooler if you published the results of both the original (polymorphic) and your (non-polymorphic) runs.
I find the results for "random read" particularly interesting. I need to take a look at the generated code. It might be that there is something in the way V8 compiles the integer modulo operation that makes it slower than SpiderMonkey.
[Though for this test to be comparable across browsers, you should really fix the init_randlist function not to use Math.random but a pseudo-random generator with a _fixed_ seed. Uncontrolled sources of randomness should be discouraged in microbenchmarks, as they lead to flaky, unreproducible results. Though I don't think that is the reason here.]
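As one possible way to do that, here is a small fixed-seed generator (a simple LCG; the seeded_random name, the constants, and the init_randlist signature shown here are assumptions, the original benchmark may differ):

  // A possible fixed-seed generator (a small LCG); any deterministic PRNG would do.
  var seed = 49734321; // fixed seed: every run and every browser sees the same sequence
  function seeded_random() {
    seed = (seed * 1664525 + 1013904223) >>> 0; // keep in unsigned 32-bit range
    return seed / 4294967296;                   // map to [0, 1)
  }

  function init_randlist(list, size) {
    for (var i = 0; i < size; i++) {
      list[i] = (seeded_random() * size) | 0;
    }
  }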
I have added an update note to the post. I have also added a table comparing the polymorphic and non-polymorphic runs.
I will fix the code to use a pseudo-random function to make the test results more meaningful. Also, I will try to send a pull request to the original test author.