

ES6 Performance - g4k
http://www.incaseofstairs.com/2015/06/es6-feature-performance/

======
fenomas
The trouble with microbenchmarks like these is, JS engines nowadays are often
clever enough to simply eliminate the code being tested, or change its
character enough that the results are no longer meaningful. Vyacheslav Egorov
(a chrome v8 engineer) has written a bunch of very good blogs on this. E.g.

http://mrale.ph/blog/2014/02/23/the-black-cat-of-microbenchmarks.html

http://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.html

Checking the tests here, the "default parameters" section shows some tests
being 2000x faster than others, which sounds suspicious. Here's an es5 test
case:

        function fn(arg, other) {
          arg = arg === undefined ? 1 : arg;
          other = other === undefined ? 3 : other;
          return other;
        }
        
        test(function() {
          fn();
          fn(2);
          fn(2, 4);
        });
    

Sure enough, an arbitrarily smart VM could compile that code down to
`test();`. How much this and other optimizations affect each test is anyone's
guess, but I think it's likely that at least some of these results are
dominated by coincidental features of how the tests are written.
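One common mitigation is to make the benchmark's results observable, so dead-code elimination can't simply remove the calls. A minimal sketch (the `sink` accumulator is my own addition, not part of the benchmark suite):

```javascript
// Accumulating each return value into a variable that is later read
// makes the calls' results observable, so the engine cannot compile
// the whole body down to a no-op.
let sink = 0;

function fn(arg, other) {
  arg = arg === undefined ? 1 : arg;
  other = other === undefined ? 3 : other;
  return other;
}

function run() {
  sink += fn();      // other defaults to 3
  sink += fn(2);     // other defaults to 3
  sink += fn(2, 4);  // other is 4
}

run();
console.log(sink); // 10
```

This doesn't rule out other optimizations (the VM may still inline the defaults away entirely), but it at least keeps the calls themselves from vanishing.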

~~~
iamstef
Although everything you say is true, your comment doesn't really add much
value.

As you mentioned, micro-benchmarks are riddled with bias and confusing
results.

So putting micro-benchmarks aside and looking at ES5/ES6-induced performance
bottlenecks in actual apps, they are clearly present. (Note: this is of
course limited to features that are actually implemented.)

Unfortunately, a macro focused endeavor (only gauging full app performance)
isn't as nicely actionable as the micro.

So in practice, in an attempt to produce high value but actionable feedback a
hybrid approach, utilizing both micro/macro investigation yields the best
results.

In addition, the micro-benchmark vs. optimizing-compiler trap can be
mitigated by inspecting the intermediate and final outputs of the optimizing
compiler.

Anyways, /rant.

There exists an unfortunate number of JS performance traps that I wish were
taken more seriously. Although it would be more work, it would be quite
valuable for someone to perform a root-cause analysis of the potential
bottlenecks brought to light by this post.

~~~
fenomas
> So in practice, in an attempt to produce high value but actionable feedback
> a hybrid approach, utilizing both micro/macro investigation yields the best
> results.

I disagree. JS is obviously not fast by its nature; the only reason it's fast
sometimes is because modern JS engines do incredible optimizations behind the
scenes. As such, for real-world performance, writing code that the engine
understands how to optimize _entirely dwarfs_ trivia like how Babel transpiled
your default parameters. (Even the most microbenchmarked function will run
dog-slow if the v8 optimizer bails out.) And if real-world performance is
dominated by optimizability, then it follows that microbenchmarks are largely
useless unless they happen to get optimized/deoptimized in the same way that
your code does.

Incidentally for v8 at least, currently using _any_ ES6 feature (even a single
"const" that's never used) causes the optimizer to bail out. So any question
of "ES6-induced bottlenecks" is beside the point - it doesn't matter how fast
the ES6 feature is if its mere presence slows down everything else.

------
dgreensp
TLDR, use Babel in "loose" mode and you'll be fine for pretty much all the ES6
syntax features (by which I exclude Maps, Sets, and generators). Most of the
features listed are zero-overhead when transpiled this way. As usual, the
native implementations are much slower for some reason (probably because they
aren't optimized yet).
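For illustration, here is roughly what a loose-mode transpile of default parameters looks like — a sketch of the general pattern, not Babel's exact output:

```javascript
// ES6 source:
function es6Fn(arg = 1, other = 3) {
  return other;
}

// Loose-mode-style ES5 equivalent: plain undefined checks, no
// arguments-object access, which is why it carries essentially
// zero overhead compared to hand-written ES5.
function looseFn(arg, other) {
  if (arg === void 0) arg = 1;
  if (other === void 0) other = 3;
  return other;
}

console.log(es6Fn() === looseFn());         // true, both return 3
console.log(es6Fn(2, 4) === looseFn(2, 4)); // true, both return 4
```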

Microbenchmarks and relative speeds are not super useful. I'm a performance
nut and I love optimizing code -- back in the day I wrote a syntax-
highlighting editor that was snappy in IE6's crappy JScript engine -- but I'm
not going to worry that some syntax feature is 3x or even 10x slower than
assigning to a local variable (which is what, one CPU instruction?). If you're
that concerned, you should be avoiding object allocations ({}) like the
plague, and that is just madness except in really performance-critical
sections of code (and games).

------
joshstrange
There might be some great information here, but it's completely unreadable.
Add some borders to your table; it's impossible to read in its current state.

------
ahoge
Interesting data, poor presentation.

The tables need some formatting and colors would be nice, too. Instead of
"slower" and "faster" it should be just a factor. So, 2x would mean that it
takes twice as long and 0.5x would mean that it's twice as fast.
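The suggested presentation boils down to dividing each measurement by the baseline — a trivial sketch:

```javascript
// A single factor relative to the baseline: > 1 means the variant
// takes longer than the baseline, < 1 means it is faster.
function factor(timeMs, baselineMs) {
  return timeMs / baselineMs;
}

console.log(factor(20, 10)); // 2   -> takes twice as long
console.log(factor(5, 10));  // 0.5 -> twice as fast
```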

Also, what's the baseline? Where does that 1x come from?

------
inglor
Promises - the assumption here is that native promises are fast - this is
amusing. Userland implementations like Bluebird promises are significantly
faster than native promises. Not to mention the fact that converting an API
to use promises is slow with native promises; a native `promisify` will have
to be provided for Node, and it's being worked on.
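For context, a userland `promisify` is a small wrapper — this is a minimal sketch of the idea (my own illustration; Node's built-in version did not yet exist at the time of this thread). It wraps a Node-style API so the trailing `(err, result)` callback becomes a Promise:

```javascript
// Wraps fn(...args, callback) so it returns a Promise instead of
// taking a Node-style (err, result) callback.
function promisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      fn(...args, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
  };
}

// Usage with a callback-style function:
function addLater(a, b, cb) {
  setTimeout(() => cb(null, a + b), 0);
}

const addAsync = promisify(addLater);
addAsync(2, 3).then(sum => console.log(sum)); // logs 5
```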

~~~
skrebbel
How often do you do async stuff per millisecond that this starts to matter? At
least for browser stuff I really don't see the problem.

~~~
iamstef
The problem usually arises when the GC pressure introduced by the promises
(often the intermediate promises) created by many active/concurrent tasks
starts burning copious amounts of cycles. As concurrency increases, a poor
promise implementation wastes more and more valuable cycles, largely due to
that GC pressure.

In the abstract this doesn't sound that bad, but when comparing well-behaved
implementations such as Bluebird, RSVP, When, ES6-Promise, etc. with native
Promises in their current state (July 3, 2015), the difference is still
staggering.

As for the browser, a promise is a great way to handle async and often a great
abstraction around a single potentially remote entity. As more ambitious
applications are created, it isn't uncommon to have thousands or tens of
thousands of these remote entities. Wouldn't it be nice, if the overhead of
using the promise was negligible?
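The "intermediate promises" point can be made concrete: every `.then()` in a chain allocates a new promise, so many concurrent multi-step tasks multiply allocations and hence GC pressure. A sketch of my own that counts the allocations by subclassing `Promise`:

```javascript
// Count how many promises a short chain actually allocates.
let created = 0;

class CountingPromise extends Promise {
  constructor(executor) {
    super(executor);
    created += 1;
  }
}

CountingPromise.resolve(1) // one promise for the value itself
  .then(x => x + 1)        // plus one intermediate promise per .then()
  .then(x => x * 2);

console.log(created >= 3); // true: at least three allocations for one task
```

Multiply that by thousands of concurrent tasks and the allocation rate — and thus GC work — of a promise implementation starts to matter.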

------
skosch
Super interesting, and somewhat disappointing as well.

Also — off topic, but I really wish they'd provided graphs, or at least given
those tables some formatting love. Does anyone know of similar benchmarks that
have?

~~~
z3t4
I think you should complain about your browser's default table styling
instead. Looks good with w3m ;P

Seriously though, graphs can be manipulated to look exactly how you want the
user to interpret them.

------
sonnyp
See http://kpdecker.github.io/six-speed/ for an overview and better
readability.

------
haberman
Wow, this is all a bit disheartening to read the day after I spent all day
updating my app to ES6 + babel.

These performance hits are extreme. I would never have guessed that so many of
these features are taking 20-2000x speed hits.

I hope the browsers and V8 catch up soon so transpiling ES6 is no longer
necessary.

~~~
fenomas
Did you profile the before and after versions of your app? I think some of the
more extreme results (like 2000x) here may just be due to the benchmarks
getting optimized into empty functions.

------
vander_elst
tables not formatted!!!!!!!

------
marvel_boy
Add some borders to the tables; the presentation is horrible.

