JavaScript Loop Performance (jsperf.com)
19 points by blorenz on June 25, 2013 | 32 comments



What exactly is this trying to test? This headline seems to imply that it's testing the performance of the different sorts of loops in JavaScript, but it's not a valid test for that, because the for loop does considerably less work than the do and while loops.


He probably just wanted to test the difference between 'while' and 'do while' and took the fastest possible solution (for) as a reference. It's good to know, however, that 'do while' is that much slower compared to 'while'.


while is faster than for here.

  Testing in Chrome 27.0.1453.116 32-bit on Windows Server 2008 R2 / 7 64-bit


I disagree with you saying this isn't a valid test, but I added http://jsperf.com/while-vs-for-queue/4 for your convenience.


The for loop uses array access ("items[j]"), and the while loop uses "items.shift()". At the end of the for loop 'items' is unchanged; at the end of the while loop 'items' is empty.
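Roughly (a sketch, not the exact jsperf code), the two look like:

    // for: reads items[j]; 'items' is unchanged afterwards
    var total = 0;
    for (var j = 0; j < items.length; j++) {
      total += items[j];
    }

    // while: shift() removes the first element each time, so 'items' ends up empty
    var total = 0;
    while (items.length > 0) {
      total += items.shift();
    }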


Agreed. The context of this problem was a discussion with a colleague about using this in the order of a queue where the items array would represent transformations to apply to another object. Point taken, though!


The original test isn't really valid for comparing loops and your tests prove it. The for loop on my machine does 248MM ops/sec and the while (with no mutation) does 243MM, but the while loop with mutation is 0.44 ops/sec. So performance is basically the same when the workload of the loop is the same.


The do and while loops are unfair comparisons; they should have done something like this:

  var i = items.length; while(i--) { total += items[i]; }

and

  var i = items.length - 1; do { total += items[i]; } while(i--);

I think those would compare differently to the for loop.

Turns out I was wrong and I made the for loop faster: http://jsperf.com/optimal-loops


I believe that do-while will also miss summing the item at index 0.


Array shift is O(n). Using pop instead gives a better comparison: http://jsperf.com/while-vs-for-queue/18
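For what it's worth, a pop()-based version of the mutating loop would look something like this (a sketch, not the exact code in the linked revision):

    var total = 0;
    while (items.length > 0) {
      total += items.pop(); // pop() removes the last element: O(1), no reindexing
    }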


Noted. Great to see the performance increase. Obviously pop() takes the last element, so a FIFO would have to be constructed with this in mind.


If the JS engine implemented Array as a deque or circular buffer, shift() could be O(1).
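As a rough sketch of the idea (hypothetical; not how any particular engine actually stores arrays), a queue with a moving head index gets O(1) dequeues:

    // hypothetical FIFO with O(1) dequeue via a head pointer
    function Queue() {
      this.items = [];
      this.head = 0;
    }
    Queue.prototype.enqueue = function (x) {
      this.items.push(x);
    };
    Queue.prototype.dequeue = function () {
      var x = this.items[this.head];
      this.items[this.head++] = undefined; // drop the reference instead of reindexing
      return x;
    };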


Please don't just post a link like this. Apparently I'm expected to run it in Chrome and see that do is 10x slower than the other two options.

But I ran it in Firefox and got completely different results, namely the for loop handling hundreds of millions of items per second, and the loops with .shift() literally going at the speed of sludge (it reports 0.25 items per second).


This is stupid. The for loop doesn't mutate the list while the others do. What is the point of this?


I think the point is the while loop runs faster in Chrome. But the for loop is inefficiently written; it checks the array length every time. Change it to `for(var j = 0, len = items.length; j < len; ++j)` and it's faster again. tl;dr - badly written for loops are slow.
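Spelled out (variable names assumed, since the exact test code isn't quoted here), the cached-length version is:

    var total = 0;
    for (var j = 0, len = items.length; j < len; ++j) {
      total += items[j]; // items.length is read once, not on every iteration
    }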


Correct me if I'm wrong, but even if mutation is neglected, isn't shift() an O(n) operation because of array reindexing? Doesn't that completely defeat the purpose of the while/do while loops, since they'd be O(n^2)?


Just for fun, I added test cases for reduce and forEach, in order to show how slow they are. http://jsperf.com/while-vs-for-queue/3
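For reference, the functional versions presumably look something like this (a sketch; the actual test cases are in the linked revision):

    // reduce: accumulate the sum in one pass
    var total = items.reduce(function (sum, x) { return sum + x; }, 0);

    // forEach: same work with an explicit accumulator
    var total = 0;
    items.forEach(function (x) { total += x; });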


One thing to note, though, is that forEach and reduce can both /theoretically/ be faster: for and while loops both imply an order in processing the list, whilst forEach (and definitely reduce) don't. In theory, the JS interpreter can take the array, split it up into chunks (one per core / thread), and process the list in parallel, which could produce results faster.

But, atm, functional operations on lists are still executed single-threaded. Wouldn't mind seeing a WebWorker implementation, for example, although somehow I think the overhead for a relatively simple calculation like this would negate the benefit.
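For what it's worth, a minimal sketch of that idea, splitting the array into chunks and summing each in a worker built from a Blob (all names here are made up, and the message-passing overhead is exactly the problem mentioned above):

    // hypothetical sketch: chunked parallel sum with Web Workers
    var workerSrc =
      'onmessage = function (e) {' +
      '  var a = e.data, sum = 0;' +
      '  for (var i = 0; i < a.length; i++) sum += a[i];' +
      '  postMessage(sum);' +
      '};';
    var workerUrl = URL.createObjectURL(new Blob([workerSrc]));

    function parallelSum(items, chunks, done) {
      var size = Math.ceil(items.length / chunks);
      var total = 0, pending = chunks;
      for (var c = 0; c < chunks; c++) {
        var w = new Worker(workerUrl);
        w.onmessage = function (e) {
          total += e.data;
          if (--pending === 0) done(total); // every chunk has reported back
        };
        w.postMessage(items.slice(c * size, (c + 1) * size));
      }
    }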


Interesting that reduce and forEach have about the same performance, suggesting the overhead is in the function invocation, no matter whether it needs to close over variables or not. I would have thought not closing over variables would be substantially quicker.


Thanks for the additions. I particularly like the forEach addition because it 'reads' better but performs very poorly. I can foresee this being unknowingly abused.


Also, it is in fact indirectly testing the shift() function (and array access).


jsperf test cases like this don't produce valid results.

The amount of work done in the test case is very small (actually dwarfed by the setup part of the test), making it subject to huge variance due to factors like task switching, GC pauses, etc.

The result of the benchmark is also never used so it is possible for a valid JS runtime to partially or completely optimize it out.

Benchmark actual code doing actual work. Code reduced to this level of meaninglessness does not allow you to draw meaningful conclusions about actual JS performance.
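One common workaround (a general pattern rather than a fix for jsperf itself) is to force the result to be observable so the runtime can't prove it unused:

    var total = 0;
    for (var j = 0; j < items.length; j++) {
      total += items[j];
    }
    window.__benchmarkSink = total; // hypothetical global sink; keeps the loop from being optimized away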


What ways of testing performance do you recommend? I need to test different regexes and string manipulation. I was about to use jsperf, but I am very open to any suggestions.


It's not valid to see that do while is tenfold slower than while, across the board? Or the total lack of native forEach performance?

These are the sort of results that I'd think would be good to keep in the back of your mind, and little else.


'while' is actually faster than 'for'. But in this example the while loop does extra work (mutating the array). while loops are always faster than forEach, as they simply do less work.


There is an example that doesn't mutate the array. For still seems to be faster in certain browsers (Safari). I think the bigger takeaway here for me is that mutation has little to no performance hit. In fact, the while shift() example was faster than the non-mutating while version.


> jsperf test cases like this don't produce valid results.

I once went to a meetup presentation where the presenter was a supposed JS performance expert. His presentation was based on jsperf results, and Chrome was consistently showing 749.8M ops per second for complex, side-effect-free JS code.

The only thing I learned from the presentation was that Chrome is good at optimizing out code that does nothing, and that the presenter's laptop ran at 2.25 GHz.


Generally agree with you (benchmarking isn't what it used to be ;-).

However, the following examples all do nothing (except increment i) and get different results.

http://jsperf.com/codethatdoesnothing/2 http://jsperf.com/functionwithwhilethatdoesnothing


They don't increase i at all. It is impossible to count to 10,000 at a rate of 200M runs per second; that would imply a 2,000 GHz processor.

JS Perf is fundamentally broken. It doesn't force a side effect, and from the debugger, the first thing it does is something like:

    var iterations = 100000000; // i.e. 100,000,000 runs
    while (iterations--) {
      // your code here
    }
The Chrome compiler then recognizes that the loops can be swapped and eliminated; thus it runs your code once, yet jsperf thinks it ran it hundreds of millions of times.


Not a valid test.

Two of the test cases destructively modify the list, while the first does not. Of course they will be slower.


lol, none of the loops do any work after the first time. That's why they run 200,000,000 times (this would be 10 trillion additions/second if the loops were actually running). Epic fail.


It took me a while to understand what you meant by this. It was obvious to me that the loops under test do do work (the shift/increment) every time through. But we're talking about different loops.

For others trying to understand this comment: the TEST loop which runs the code being benchmarked is short-circuited in the 'do' and 'while' cases because the setup code does not get re-run, so on the first test loop, it sums up an array of random numbers, and on subsequent runs, it sums an empty array. This also gave me the 'aha' that explained the huge difference between 'while' and 'do': in the 'do' case, we execute the loop at least once in every one of the hundreds of millions of test runs, whereas in the 'while' case, the loop is executed 0 times in all but one run.
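To make that concrete, here is what the two cases look like once the array has already been drained by the first run (a sketch):

    var items = []; // already emptied; the setup that refills it is not re-run
    var total = 0;

    // 'while' tests the condition first, so the body never runs on an empty array
    while (items.length > 0) {
      total += items.shift();
    }

    // 'do' runs the body once before testing, even on an empty array
    do {
      total += items.shift(); // shift() on an empty array returns undefined
    } while (items.length > 0);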



