
Performance of ES6 features relative to ES5 - shawndumas
https://kpdecker.github.io/six-speed/
======
stymaar
The results given by this benchmark are bothering me because they do not fit
with what I've seen in production. For example, the `for-of-array` is
transpiled by babel to something that has 2 nested try/catch blocks, and
unfortunately, V8 has a nasty deoptimisation when dealing with these, even if
no exception is ever thrown.

This deopt led to a slowdown of several orders of magnitude compared to a
simple `for` loop in Google Chrome, forcing us to abandon the `for of`
construct.
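For reference, here is a minimal sketch of the shape such transpiled code takes (simplified, not Babel's verbatim output) next to the plain `for` loop: the spec-faithful version has to drive the iterator protocol and needs nested try blocks so that `iterator.return()` runs on abrupt completion.

```javascript
// Plain ES5 loop: no try blocks, easy for the JIT to optimize.
function sumPlain(arr) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

// Rough shape of a spec-faithful `for-of` transpile (simplified): the
// iterator protocol plus the nested try blocks needed to run IteratorClose
// on abrupt completion, the part that triggered the V8 deopt.
function sumForOfShape(arr) {
  var total = 0;
  var done = true, threw = false, err, it, step;
  try {
    for (it = arr[Symbol.iterator](); !(done = (step = it.next()).done); done = true) {
      total += step.value;
    }
  } catch (e) {
    threw = true;
    err = e;
  } finally {
    try {
      // Only close the iterator if we exited the loop early.
      if (!done && it.return != null) it.return();
    } finally {
      if (threw) throw err;
    }
  }
  return total;
}
```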

I suspect that the benchmark shown on this page does not expose this kind of
behavior because it doesn't iterate enough for the JIT to kick in. If that's
the case, these values reflect only the behavior of the cold, interpreted
code, and not the hot code. (Which is quite sad for a performance benchmark,
because performance only really matters for the latter …)

~~~
spankalee
Chrome doesn't have an interpreter; it has a baseline compiler and an
optimizing compiler. Also, I think the try/catch optimization problem was fixed.

Still, your point could stand. Additionally, there could be many
optimizations that interfere with such simple micro-benchmarks, turning all or
parts of them into no-ops. If this happens in ES5 but not in ES6, the
results could be drastically skewed. I suspect this is what's happening with
some of the larger differences in native ES6.

~~~
rgrove
> Also, I think try/catch preventing optimization was fixed.

Do you have a source for this? Try/catch deopts still bubble up in my Chrome
profiling, so it seems like they're still a problem.

~~~
goldbrick
Yes, my understanding is that the problem is essentially undefined behavior:
exceptions can be caught at any point in the call stack, which makes
try/catch basically impossible to optimize.

~~~
bzbarsky
It's not impossible at all. try/catch in the non-throwing case is fast in both
Firefox (SpiderMonkey) and IE (Chakra); I'm not sure what the state of things
is in Safari (JavaScriptCore). In the throwing case you obviously have to do
some work, but as long as that case is rare it's not a problem. Contrast with
the V8 situation (which they are fixing), where simply having a try/catch in
your function at all will deoptimize the function, even if an exception is
never actually thrown.

Oh, and one optimization strategy for try/catch is even pretty simple to
describe in general terms: you need a cheap way to check whether an exception
was thrown, and then after every operation that can throw (e.g. a call into
the VM to a function that is allowed to throw) you check whether it did. If
not, you just move along. If it did, you jump to an out-of-line path that does
cleanup and call-stack unwinding. The devil, as usual, is in the details.
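Purely as illustration (a toy sketch, not any engine's actual machinery), the check-a-flag-after-each-call strategy can be mimicked in plain JavaScript: operations record a pending exception instead of unwinding eagerly, and the "compiled" code does one cheap check after each throwing call, branching to an out-of-line path only when the flag is set.

```javascript
// Toy "VM" state: a single pending-exception slot.
var vm = { pendingException: null };

// Every operation that may throw records the exception in the flag
// instead of unwinding the stack on the spot.
function vmCall(fn, arg) {
  try {
    return fn(arg);
  } catch (e) {
    vm.pendingException = e; // record, don't unwind here
    return undefined;
  }
}

// The "compiled" function: one cheap flag check after each throwing call;
// the fast path never touches unwinding machinery.
function runProgram(input) {
  var parsed = vmCall(JSON.parse, input);
  if (vm.pendingException !== null) return handleUnwind();
  var keys = vmCall(Object.keys, parsed);
  if (vm.pendingException !== null) return handleUnwind();
  return keys.length;
}

// Out-of-line path: cleanup and "unwinding" live here, off the fast path.
function handleUnwind() {
  var e = vm.pendingException;
  vm.pendingException = null;
  return 'caught: ' + e.name;
}
```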

------
white-flame
The thing about JIT languages is that there's no such thing as overall speed
of a particular operation; you can only sample current speed on current
implementations.

New features pursue correctness first; then, as their place in the JITs
matures, interesting new approaches will change their speed characteristics.
The various ways of iterating an array, for example, are all semantically
equivalent, and as the JITs learn those semantics they should all
theoretically approach similar speeds, even if the newer ones are slower now.
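For example, these three ways of summing a plain array are semantically interchangeable, so in principle a mature JIT could eventually compile them to much the same code, whatever their relative speeds today:

```javascript
var data = [1, 2, 3, 4, 5];

// Classic ES5 index loop.
var sum1 = 0;
for (var i = 0; i < data.length; i++) sum1 += data[i];

// ES5 higher-order iteration.
var sum2 = data.reduce(function (acc, n) { return acc + n; }, 0);

// ES6 for-of over the array's iterator.
var sum3 = 0;
for (const n of data) sum3 += n;
```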

~~~
paulddraper
> The thing about JIT languages is that there's no such thing as overall speed
> of a particular operation; you can only sample current speed on current
> implementations.

How in the world is that specific to JIT? Have you never heard of a
Sufficiently Smart Compiler?

~~~
acdha
The main difference is that you the developer can test with a known compiler
and know what the performance characteristics are before you release your
code: when Microsoft releases Visual Studio 2018, your existing application
doesn't change until you rebuild it. In contrast, for code running in a web
browser or something like the JVM, your existing code may run faster or slower
without you even knowing about the new version.

~~~
nostrademons
I'm not sure this is true anymore even in this case - with multicore
processors, cache dependency, and pervasive virtualization, the speed of your
program can be significantly affected by what else is running on the box.
Remember that the x86 itself has a JIT on the chip, converting x86 machine
code to whatever microcode the processor uses. When I was at Google, they
provided special dedicated machines for benchmarking, with custom run-times
that disabled a lot of the containerization/virtualization features, and there
was _still_ a lot of noise in benchmark times.

~~~
acdha
That's certainly true and I hope I didn't give the impression that I thought
this was a binary situation. It's just that code which runs in a JITed
environment, particularly one like the browser JavaScript runtimes which
aren't even versioned, has an even wider exposure to skew. Everything running
on a multiprocess/user operating system is exposed to resource contention but
e.g. your JavaScript code also has to worry about things like Chrome disabling
optimizations anywhere you use try/catch.

------
yoklov
More than just these, even innocuous features such as `let` and `const` over
`var` cause a substantial performance decrease in V8 (but not SpiderMonkey).

I do a lot of one-off demos that I mostly write in ES6 these days. I don't
have a demonstration of pure `let` vs `var`, but if you change Babel to
JavaScript in this one (and click run), you'll see a fairly substantial
performance decrease in Chrome for this demo (Firefox stays the same though):
[https://jsfiddle.net/oa1sckzu/](https://jsfiddle.net/oa1sckzu/) . I have
others, and the results are largely the same for them, but I'll spare you.
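A sketch of the kind of micro-benchmark in question, where the only difference between the two functions is the declaration keyword (no numbers claimed here; results vary by engine and version):

```javascript
// Two functionally identical loops; only `var` vs `let` differs.
// In some V8 versions of this era the `let` version ran measurably slower.
function sumVar(n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

function sumLet(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

console.time('var'); sumVar(1e7); console.timeEnd('var');
console.time('let'); sumLet(1e7); console.timeEnd('let');
```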

~~~
rraval
There is a recent v8-users mailing list thread on this exact topic. Here's one
possible explanation on perf differences between `let` and `var`:
[https://groups.google.com/d/msg/v8-users/hsUrt4I2D98/ELsfO1e...](https://groups.google.com/d/msg/v8-users/hsUrt4I2D98/ELsfO1e6AQAJ)

------
heydenberk
These measures of JS performance are important for VM implementors, for
people writing node.js applications to be used at large scale, and for people
doing computationally complex work in the browser (e.g. creating physics
engines).

It's worth noting that for most of us, most of the time, this kind of JS
performance is less important than user-perceptible optimizations, like
batching DOM reads and writes and decreasing asset size.
------
_greim_
I assumed the "es6" row header refers to the engine's native implementation of
the feature, in which case "destructuring" should be greyed out under Node
4.2.6. But it isn't; it's listed as "10x slower". What am I missing?

------
david-given
I would like to see the performance of Babel's ES6-to-ES5 output here too,
because that's how most real-world ES6 code is going to be run --- there are
too many non-ES6-compatible browsers out there.

~~~
masklinn
That's exactly the information the "babel" rows provide? Each row is a given
"ES6 implementation" and how it performs compared to the baseline "ES5 native"
implementation of a feature.

So the first row of each section is the babel-compiled (to ES5) ES6, the
second is traceur-compiled, the third is typescript-compiled, and the last is
the native ES6 runtime. If you look at the test files, most of them only have
an es5 and an es6 version.

~~~
david-given
Oh, FFS. I managed to spend some time looking at the table without actually
seeing what the entire point of it was.

Tea, you have failed me!

Sorry about that.

------
atonse
It's kind of awesome how good TypeScript looks in all this. A testament to the
compiler talent MS has on staff.

------
snorrah
This makes me think of Python 2 vs 3 benchmarks for some reason, although I'm
sure I'm not remembering very well.

I assume there's plenty of scope for ES6 to improve and (hopefully?) surpass
ES5 in performance in many, if not all, areas?

What would be causing the large slowdowns in ES6 here? Is this a case of
features that make JS nicer to code in at the cost of some performance? Or is
it more down to immature compilers that don't yet optimise ES6 as
aggressively?

------
dagurp
Why is "identical" in green and "faster" in a slightly darker green? Is es6 a
library or is it the browser's implementation of ES6?

~~~
mattashii
ES6 is the native implementation in the browser or JS-engine you are using,
indeed.

Green is probably chosen because the same performance is nice to have with
this (subjectively) better syntax. The darker shade is probably chosen because
more performance is nice, as it pushes you to the newer (subjectively better)
syntax.

------
jameslk
For-of is slow (in Babel) because from my understanding it gets transpiled to
use Regenerator. That was a bit of a nasty discovery for me.

~~~
eltaco
You can use loose mode or the loose option in that case.
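For context, loose mode assumes the thing being iterated is an ordinary array and skips the iterator protocol entirely, so the output is roughly a plain indexed loop (a sketch of the shape, not Babel's verbatim output):

```javascript
// Source:  for (var x of arr) total += x;
// Rough shape of loose-mode output: no iterator object, no try/catch.
var arr = [1, 2, 3];
var total = 0;
for (var _i = 0; _i < arr.length; _i++) {
  var x = arr[_i];
  total += x;
}
```

The trade-off is that this is no longer spec-compliant for arbitrary iterables (e.g. generators or Maps), which is why it isn't the default.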

------
alxlu
Why is arrow-declare so much slower on Firefox (16x to 325x slower)
compared to everything else (identical to 2.4x slower)?

------
balupton
There is also [https://github.com/bevry/esnext-benchmarks](https://github.com/bevry/esnext-benchmarks)

------
cromwellian
I'd like to see Closure Compiler thrown into the mix. It not only translates
ES6 to ES5, but applies optimizations.

------
venning
Note: when it says "1.6x faster", what it really means is "1.6x as fast" or
"60% faster", i.e. "operates at 1.6x the speed of the baseline", not "a 1.6x
increase in speed".

For example, when looking at the data for Chrome 48's "arrow" tests [1], the
_baseline_ number is 57,858,016 and the _traceur_ number is 91,556,806, which
is 158.2% of the _baseline_, but is reported as "1.6x faster".

I know this seems like semantics, but "1.0x faster" sounds a lot like "twice
as fast". (In this case, "1.0x faster" is reported as "identical" [2]).

[1] [https://github.com/kpdecker/six-speed/blob/master/data.json#...](https://github.com/kpdecker/six-speed/blob/master/data.json#L1933)

[2] [https://github.com/kpdecker/six-speed/blob/master/tasks%2Fre...](https://github.com/kpdecker/six-speed/blob/master/tasks%2Freport.js#L171)
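The arithmetic with the numbers above, spelled out:

```javascript
// Rates from data.json for Chrome 48's "arrow" test (see [1] above).
var baseline = 57858016;
var traceur = 91556806;
var ratio = traceur / baseline; // ~1.582

// What the site reports as "1.6x faster"; arguably clearer phrasings:
var asPercentOfBaseline = (ratio * 100).toFixed(1) + '%';        // "158.2%"
var percentFaster = ((ratio - 1) * 100).toFixed(0) + '% faster'; // "58% faster"
```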

~~~
aschampion
This is the convention when talking about speedup in software. 1x speedup
means identical time.

[1]
[https://en.wikipedia.org/wiki/Speedup](https://en.wikipedia.org/wiki/Speedup)

~~~
venning
If the phrase was "1.6x speedup" then I would agree and not have commented.
But the phrase is "1.6x faster", which is a bit ambiguous.

That there is disagreement among the commentariat both ways is indicative of
ambiguity. (Though, I am assuming no trolling here.)

While not exactly analogous, try replacing it with a percentage and
re-evaluate: what does "60% faster" mean and what does "160% faster" mean?

------
vegabook
The "commentariat" charge is legitimate, which is why there is a field known
as "mathematics" which is unambiguous. 1.6x by anybody's non-commentariat
definition is 60% faster, as anybody who has ever even scratched the surface
of any mathematical, engineering, scientific, statistical or indeed, computer
science discipline knows (though this is perhaps unknown to the Creative Suite
crowd). This comment is completely bogus, with apologies if you don't
understand. But you really should know what you don't know before posting such
garbage.

~~~
dang
Please don't post uncivil comments to Hacker News.

We detached this subthread from
[https://news.ycombinator.com/item?id=11204967](https://news.ycombinator.com/item?id=11204967)
and marked it off-topic.

