Jsperf for Fast.js, Lodash, Native (jsperf.com)
42 points by jrajav on June 24, 2014 | 25 comments



Tested in Mobile Safari on an iPhone 5S running iOS 7, and native Function binding is nearly 5x faster than fast.js.

Even in cases where this lib soundly beats native, I think I'd prefer the native variants for their fluency. Combinations of filter, map and reduce are extremely powerful ways to manipulate arrays, and I find that non-native array functions discourage this pattern due to lack of fluency. Maybe they could implement a chain function like Underscore's?


I don't agree — library functions can more easily be used as first-class functions, whereas instance methods cannot. You can partially apply with a library function:

``` var startWith123 = _.partial(_.concat, 1, 2, 3); ```

and you can more easily pass them around without having to create anonymous functions, which is a lot more fluid in my opinion.
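A dependency-free sketch of the same idea, partially applying a plain, library-style function with Function.prototype.bind (the `concat` helper here is illustrative, not lodash's):

```javascript
// A library-style function: takes all its data as arguments.
function concat() {
  return Array.prototype.concat.apply([], arguments);
}

// Fix the first arguments in place, leaving the rest open:
var startWith123 = concat.bind(null, 1, 2, 3);

startWith123([4, 5]); // → [1, 2, 3, 4, 5]
```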


If I'm understanding you correctly, that's still possible with instance methods.

    var mapOverNums = _.partial(_.map, [1,2,3,4,5]);
    mapOverNums(function(n) { return n * n; }); // Square nums
    mapOverNums(function(n) { return Math.abs(n); }); // Absolute values
...can be reproduced with...

    var mapOverNums = Array.prototype.map.bind([1,2,3,4,5]);
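For completeness, a small sketch of that bound version in use, reproducing the two calls from the snippet above:

```javascript
// `this` is fixed to the array; the callback is the remaining free argument.
var mapOverNums = Array.prototype.map.bind([1, 2, 3, 4, 5]);

var squares = mapOverNums(function (n) { return n * n; });        // → [1, 4, 9, 16, 25]
var absolutes = mapOverNums(function (n) { return Math.abs(n); }); // → [1, 2, 3, 4, 5]
```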
Without a thrush-esque operator (-> in Clojure, |> in F#, etc.), fully first-class functions make chains of computation unpleasant to read. E.g. "I want the sum of the square roots of all numbers greater than 100 in this list":

    nums.
        filter(function(n) { return n > 100; }).
        map(Math.sqrt).
        reduce(function(acc, val) { return acc + val; }, 0);


Guess that really depends.

  var nums = Array.apply(0, Array(300)).map(function(x, idx) {
    return idx;
  });

  var filter = ramda.filter;
  var map = ramda.map;
  var reduce = ramda.reduce;
  var compose = ramda.compose;

  var gt100pred = function(x) {
    return x > 100;
  };
  var add = function(a, b) {
    return a + b;
  };

  var result = nums
    .filter(gt100pred)
    .map(Math.sqrt)
    .reduce(add, 0);

  var gt100 = filter(gt100pred);
  var sqrt = map(Math.sqrt);
  var accum = reduce(add, 0);
  var result = compose(accum, sqrt, gt100)(nums);

With the above, your code does look shorter, but there is one big plus with the latter: not only can you pass around the predicate, the operations themselves can be passed around (gt100 is reusable, etc.). (Ramda actually works like a functional library in that the function/predicate is the first argument and the data structure is the last; this makes partial function application and currying work much better.)

Also, when we do away with the boilerplate, the final thing looks like this:

  // "I want the sum of the square roots 
  // of all numbers greater than 100 in this list".
  var result = nums
    .filter(gt100pred)
    .map(Math.sqrt)
    .reduce(add, 0);
vs

  // think of it as pipelining from right to left.
  // get any number > 100, sqrt it, then accumulate it.
  var result = compose(accum, sqrt, gt100)(nums);
and if this was haskell:

  foldl (+) 0 . map sqrt . filter (> 100) $ [0..300]


Well, there's one of the big problems: param ordering isn't ideal. Believe me, you are preaching to the choir. I had a long argument with the people who maintain our JS library about the merits of adding reverse and curry functions so that I could write something like the following:

    var pluck = curry(reverse(object.get));
    var names = people.map(pluck('name'));
...but they weren't having it. :-) As much as I like writing Haskell (and believe me, I do), JS can't do it without heavy library support. You may be better off writing LiveScript or Fay at that point.
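For illustration, the helpers the parent wishes for can be sketched in a few lines. `flip` is the usual name for the `reverse` above, and `get` is a hypothetical two-argument lookup standing in for `object.get`:

```javascript
// Hypothetical two-argument lookup, data first.
function get(obj, key) { return obj[key]; }

// Swap the first two arguments (commonly called flip, here "reverse").
function flip(fn) {
  return function (a, b) { return fn(b, a); };
}

// Curry a binary function.
function curry2(fn) {
  return function (a) { return function (b) { return fn(a, b); }; };
}

var pluck = curry2(flip(get)); // pluck('name') → function(obj)
var people = [{ name: 'Ada' }, { name: 'Grace' }];
var names = people.map(pluck('name')); // → ['Ada', 'Grace']
```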


Purescript[0] seems to hit a very good balance: not too far from JS, while providing the closest-to-Haskell experience and most of its benefits.

Check out the "first steps" tutorial[1] solving Euler #1

0: http://www.purescript.org/
1: http://www.purescript.org/posts/First-Steps/


Oh, for sure, I agree with your sentiments (since there are no intrinsic language features or primitives to make any of this easy in JS; even Ramda is just a better crutch, really). I'm actually in the same situation :P But you did say it was unpleasant to read (which I still think it isn't, though not as good as in languages with better support).

Haha, man, I don't know how many times I've written a reverse/flip function in our codebase and then erased it an hour later because I knew I'd be called out on it, lol.


Yes, this will be so much, much faster when your app calls these functions 1 million times!!! Awesome!!! You can probably save a whole millisecond or two!!!

A pity that, for the average call count of these functions, the browser will spend much, much more time loading and evaluating this code than executing the functions.


It's a good point. When I started doing JS full-time, I wanted to squeeze out every ms, writing for loops instead of $.each or _.each, for example. But in the end, it just adds more lines of code and makes the code harder to read. Also, the fact that each browser has significantly different results means you may need a different implementation for each. There are times to maximize performance, but 95% of the time you don't need the small boost. You're better off writing legible, maintainable code and using higher-level optimizations, like deferring heavy processing in a timeout.
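The last point (deferring heavy processing in a timeout) can be sketched like this; the function name and chunk size are illustrative, not from any particular library:

```javascript
// Process a large array in chunks, yielding between chunks via setTimeout
// so the UI thread is never blocked for long.
function processInChunks(items, worker, done, chunkSize) {
  chunkSize = chunkSize || 100;
  var i = 0;
  (function next() {
    var end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) worker(items[i]);
    if (i < items.length) setTimeout(next, 0); // yield, then continue
    else done();
  })();
}
```

Usage: `processInChunks(bigArray, renderRow, onFinished)` keeps the page responsive where a plain for loop over `bigArray` would freeze it.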


Nearly all of these are slower than native in Firefox 32.


I just ran the tests in FF 30, the latest stable release, and by and large Fast.js is faster than both lodash and native in FF 30.

Are you testing a nightly build?


Yeah, the UX branch.

indexOf and lastIndexOf are only a few percent faster.

forEach is 1/4 the speed of lodash and native.

bind is 1/6 the speed of native (though 2x faster than lodash).

map is 1/4 the speed of lodash and native.

reduce is 1/7 the speed of native, though faster than lodash.

concat is about half as fast as native.


I'm running FF nightly, and bind and reduce are massively faster than lodash or native, with many of the others similar. This is what I wanted to see, of course: if you can make a function in user code that does the same as the native one but faster, the native code should just be updated to use that, right?


Similar for Iron 30.0

They've changed something drastically in SpiderMonkey recently. When I tried my application for FOS on desktop Firefox Nightly, performance was much better than a few months ago.


Thanks again for this; this evening I'll investigate why lodash beats fast.js on some of the iteration functions (although they're close).


Thanks for putting this together. There are definitely some places where there is a marked speed improvement over lodash, but lodash seems to still win in other cases. I think I will stick with lodash for now and look into augmenting it with these faster methods if I have a performance problem.


Script error on Chrome Canary 37.

UPDATE: the revisions work - http://jsperf.com/fast-vs-lodash/3


More than anything, these kinds of performance tests point to ways in which the VM can be improved, not best practices for performance.


This reminds me of the Sizzle selector engine, which is used by most JS frameworks out there, including jQuery. There was a time when all the frameworks were competing on selector performance, and I believe it was Prototype.js that decided not to adopt Sizzle because they wanted to keep the competition alive and didn't want homogeneity. Sizzle ended up in most major frameworks anyway, and the competition in selector performance ultimately benefited everyone.

Even if end-developers don't take this up, it may still benefit framework developers (Sizzle, jQuery and others) who have strong test suites and can use this to benchmark their performance ceilings and push them further.


map, reduce and concat are all slower than the native.

Chrome / Win7

http://i.imgur.com/nksvDAE.png


Aren't other processes on your machine going to affect these scores?


No, JS is executed in a single thread. If you have at least two cores and aren't running a full antivirus scan or searching for prime numbers, background processes shouldn't affect the score.


JS is executed in a single thread, but optimized JIT compilation happens on background threads. So if you don't have free cores in addition to the one the JS is running on, you'll end up running in the interpreter or baseline JIT for longer before the optimized code is available. Whether that affects scores depends on how the benchmark is scored, how long the lag is before the optimized code becomes available, and so forth.


ok - ta.


Underscore is also just as fast or faster in nearly all cases.



