
Why the New V8 Is So Damn Fast - okket
https://nodesource.com/blog/why-the-new-v8-is-so-damn-fast
======
mwsherman
I speculated recently that TypeScript might _de facto_ help V8 optimizations,
and the post seems to confirm that. [https://clipperhouse.com/does-typescript-make-for-more-perfo...](https://clipperhouse.com/does-typescript-make-for-more-performant-javascript/)

~~~
readittwice
I don't think so: TypeScript/JavaScript types are too coarse-grained for
optimizations. For example, `number` is a double in JS; to generate good
machine code you really need to know whether a number is an int or a double.
The same goes for strings: depending on the JS engine, there are actually a
lot of different string types.

Even if we consider that good enough or somehow solved, understanding type
annotations only helps you reach peak performance faster. It doesn't raise
peak performance itself.

Also, the PyPy and Dart devs could in theory use type annotations, but
neither does. PyPy even has an entry in its FAQ:
[http://doc.pypy.org/en/latest/faq.html#would-type-annotation...](http://doc.pypy.org/en/latest/faq.html#would-type-annotations-help-pypy-s-performance)

~~~
masklinn
I think the type annotations themselves don't do anything, but the typing
strongly pushes developers towards monomorphic (or at least non-megamorphic)
call sites; and having proper-ish data types may also encourage a consistent
order of initialisation.

I don't really know about the details of PyPy, but from what I've gathered
from performance talks about V8, between the hidden classes and the inline
caching it would strongly benefit from these behavioural changes.

Basically, the runtime still has to do its analysis, but these behavioural
changes make the analysis & optimisations "hit" at higher rates than they
otherwise would.
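A minimal sketch of that effect (names are illustrative): objects built with the same property order share a hidden class in V8, so a property-access site that only ever sees one shape stays monomorphic and its inline cache keeps hitting:

```javascript
// Same property order on every object => same hidden class =>
// `getX` only ever sees one shape and its inline cache stays monomorphic.
function makePoint(x, y) {
  return { x, y }; // always x first, then y
}

function getX(p) {
  return p.x;
}

const total = getX(makePoint(1, 2)) + getX(makePoint(3, 4));
console.log(total); // 4

// By contrast, feeding both `{ x, y }` and `{ y, x }` objects to the
// same call site would make it polymorphic.
```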

------
jerhinesmith
As an internet user, I really enjoy reading about all these performance
improvements in V8 / javascript.

As a ruby developer, I'm insanely jealous.

~~~
sbjs
Just use Express.js or Sails.js, they're 1:1 to Sinatra or Rails. Join us!

~~~
out_of_protocol
Not true, unfortunately. Rails-level quality is out of reach for current JS
frameworks. Well, I hope the situation will change in the future.

~~~
sbjs
Does that mean there's still room in the JS ecosystem for a Rails-alike? Maybe
one that could become very popular? Because I am looking for a major open
source project to create and spearhead, something that could get hundreds of
thousands of active users and a thriving subcommunity, but I've been holding
off until I find just the right project.

~~~
vinceguidry
Personally, having developed in both Rails and Node, I now believe that trying
to recreate Rails in JS is a foolish effort. Rails is a huge project with a
ton of weight behind it and unless you can generate a massive amount of
corporate investment, you won't ever be able to really catch up.

I'm not saying that there's no room in the JS ecosystem for another web
framework, what I'm saying is that if your design goal is re-implementing
Rails, you're already setting yourself up for failure. Sequelize has tried to
be ActiveRecord for how long? The only thing it makes me do is want to go back
to Ruby and Rails.

No, what you have to figure out, if you want to do a JavaScript web framework,
is how to get me to actually want to use JavaScript to program a website. How
can prototypal inheritance actually contribute to a workflow, as opposed to a
more conventional inheritance scheme?

But honestly, quite frankly, I can't tell why anyone would want to use
JavaScript on the server at all.

~~~
tlrobinson
As for the second half of your comment, it sounds like you haven't used
JavaScript in a while. ES6 introduced syntax for classical inheritance, and a
bunch of other nice features.

~~~
vinceguidry
I use ES6 at my job, daily. There are some things about it that make me really
hate it. First, no, it doesn't introduce classical inheritance. It introduced
syntax that _looks_ like classical inheritance. It's still prototypal
inheritance under the hood. I'm not entirely sure what this means yet from the
standpoint of building things with it, but I'm not enthused at the prospect.

What I find happens fairly frequently with ES6, and never found with Vanilla
JS, is that the extra features built on top of JS 'break'. If you add syntax
to a language, that syntax needs to _work_; I need to be able to rely on it
working. I run into constant little issues that make me think I'm missing
something about scoping, when really it's some under-the-hood issue with a
library or something I just don't have the time to pin down.

One example is I tried using the spread operator to add keys to an object. But
the spread simply, well, failed. Passing in the object worked fine. Passing in
a spread version of the same object failed. I haven't resolved it yet; I got
pulled onto another feature. This sort of breakage is hard to Google, and when
I run into it again, and I'm sure I will, I may have to troubleshoot it all
the way down to Babel.

Error reporting in Node with ES6 is garbage. Worse than garbage, it's a
veritable dumpster fire. Ideally the error points to the problem; when it
doesn't, you have to rely on experience and intuition to lead you to the
issue. Many, many errors I come across in JS are of this sort.

I think most of these issues come down to the fact that ES6 is a transpiled
language. This makes me long for the days of good old CoffeeScript. At least
CS was close enough to Vanilla that it was easy to determine when you had an
issue with the transpiler, simply grab the snippet of code and paste it into
the online transpiler, look at the generated JS and work out your issue from
there. It wasn't the smoothest workflow but it was effective.

ES6 is stupid in ways that make building an effective workflow unreasonably
difficult. I can't wait to get back to Rails. Vanilla JS wasn't that bad. It
was well-understood and you could work with it effectively on the front-end.
It certainly wasn't as pleasant as Ruby, but it didn't feel like the pile of
hacks that ES6 does. Maybe once it stops being transpiled it'll get better.

~~~
vimslayer
Indeed, JS does not have classical inheritance. IMO introducing a class syntax
that looks so much like the classes from other languages and yet behaves
differently was a mistake.
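That the `class` syntax is still prototype-based is directly observable; a minimal sketch:

```javascript
class Animal {
  speak() {
    return 'generic';
  }
}

// `class` is syntax over the existing prototype machinery:
console.log(typeof Animal); // "function", not a distinct class kind
const a = new Animal();
console.log(Object.getPrototypeOf(a) === Animal.prototype); // true
console.log(a.speak()); // "generic", found via the prototype chain
```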

The rest of your comment sounds a bit misinformed to me. You seem to have
decided that prototypical inheritance is somehow inherently worse than
classical inheritance, but don't mention why that would be. I have found very
little practical difference between the classes in JS and other languages in
everyday use, and can't quite imagine what the problems with it might be.

Furthermore, ES6 (ES2015) is not a "transpiled language". I think it's obvious
that if you take a very new language, say, ES2018 and want to run it on your
toaster that doesn't have support for such new languages, you are going to
have to do some kind of precompilation step. That is true for ES2018 today and
it will be true for ES2019 next year and for ES2020 the year after that.

ES2015, however, has been around for several years. All the modern browsers
(that is, all major browsers except IE) support it already. Node 10 even
supports the new module loading syntax (behind a flag), or you can use a very
lightweight transformer like esm[1] for older Node versions.

And if you do end up using and having problems with, say, Babel, it would be
more constructive to give concrete examples of the issues you've had. I
personally have never faced a syntax problem where the issue would've been due
to a bug in Babel instead of just my incorrect understanding of the language
feature.

[1]: [https://github.com/standard-things/esm](https://github.com/standard-things/esm)

~~~
vinceguidry
I didn't intend to argue that prototypal inheritance was bad, just that I
didn't relish the prospect of building something in it, in the context of a
discussion about reinventing Rails in Javascript. The argument is that with
the amount of time and effort that went into Rails, the new framework has to
offer something a lot more unique than just "Rails in JS" if it wants to be
relevant, because you'll never even get remotely close to the maturity of
Rails.

The problem is that I can't give concrete examples, because we're under the
gun of a deadline and I can't afford to spend the time troubleshooting down to
root causes rather than just working around the problem and moving on to
another feature.

I'd love to be able to tell you why the spread operator didn't work in that
case. But it didn't, and I made sure to get the whole team around me to tell
me I wasn't being crazy. The syntax simply didn't create the needed semantics,
and that means that something got messed up in the design of the language. I'm
pointing to Babel because that's the only thing I can point to as a root
cause.

Rails is _nowhere even close_ to this level of broken. You can rely on the
syntax and semantics of Ruby. Sometimes gem authors play nasty games with
metaprogramming; I saw an example where someone monkey-patched Symbol to get a
more declarative method for describing SQL WHERE conditions, but at least that
crap wasn't in ActiveRecord.

Rails, as a stack, fits together and experience with the framework will allow
you to trust it.

Syntax is the foundation of a programming language. If it doesn't work, if it
doesn't produce precisely the behavior that's being described, your language
is broken. We're not talking standard library here; we're talking about
`{...object}` not being the same as `object`. I don't have time right now to
dive into why, but that's the kind of shit I run into when I deal with ES6.
When syntax breaks, you can't trust the language anymore. It's a pile of hacks
and I wish it had never been invented. CoffeeScript was better.

~~~
kwood
From what I remember, object spread came later; it was standardized separately
and only landed in ES2018, not ES6 (in case you need a starting point for
research)
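For what it's worth, `{...object}` is specified to build a _new_ object carrying the original's own enumerable properties, so it is deliberately not reference-equal to the original; a minimal sketch:

```javascript
const original = { a: 1, b: 2 };
const copy = { ...original }; // shallow copy into a fresh object

console.log(copy.a, copy.b);    // 1 2: same own properties
console.log(copy === original); // false: a new object by design
```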

------
zawerf
Why can't V8 deduce that the order of keys doesn't matter? It's pretty nuts
that just rearranging the keys from { x, y, z } to { y, x, z } would cause a
slowdown.

~~~
gsnedders
You can't deduce that; you'd need to show that the object is never passed to
for..in or Object.keys or similar to be able to avoid storing the insertion
order.

They could change the representation to not be insertion order dependent, and
store insertion order in a separate data structure, but that has its own
trade-offs.
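The order is observable from plain JS, which is why engines can't silently drop it; a minimal sketch:

```javascript
const a = { x: 1, y: 2, z: 3 };
const b = { y: 2, x: 1, z: 3 };

// Insertion order leaks through Object.keys / for..in, so two objects
// with reordered keys are distinguishable by ordinary code.
console.log(Object.keys(a).join(',')); // "x,y,z"
console.log(Object.keys(b).join(',')); // "y,x,z"
```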

~~~
Avshalom
Does for..in actually require that keys be returned in a specific order? Most
languages specifically call out that the order is not guaranteed (not that
that stops developers from relying on one)

~~~
gsnedders
The ES spec doesn't require it, but every implementation does insertion order
(and insertion order was a deliberate decision by Brendan in the original
implementation, IIRC), and the web very much relies on it.

The "new" generation of JS VMs (V8, Chakra, Carakan) all dropped insertion
order for array index properties (that is properties whose name is a uint32),
but kept it for everything else; that broke about as much as browsers are
willing to break, and breaking the general case would be far, far worse.
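That array-index exception is observable from plain JS; ES2015 later standardized this ordering for own-property enumeration (integer-like keys first, ascending, then string keys in insertion order):

```javascript
const obj = {};
obj.b = 1;
obj[2] = 2;
obj.a = 3;
obj[1] = 4;

// Integer indices come first, in ascending numeric order;
// string keys keep their insertion order.
console.log(Object.keys(obj).join(',')); // "1,2,b,a"
```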

~~~
drb91
Building a browser sometimes feels like building a language where the only
guarantee is the worst features will be used and maintained.

I mean this not as a slight on the ecmascript creators/maintainers but rather
as an observation of how difficult the problem is.

------
ShroudedNight
> In some extreme cases, developers had to write assembly code by hand for the
> four supported architectures.

Maybe I'm tainted from my experience building / debugging J9 / OpenJ9 but...
Why is this considered extreme? If one strays one iota from the standard
platform ABI, which is basically a necessity for producing a performant
runtime (PICs being an easy example), this is where one ends up.

------
barbegal
I used to believe that javascript was slow, C was fast and that there was no
way to change this because of the way that javascript is interpreted and not
compiled. But the more I've learnt about the benefits of "on the fly" / "just
in time" optimising compilers, the more I'm convinced this is the future of
computing. Being able to use multiple threads, and hence multiple cores,
simultaneously to optimise what is actually a single-threaded piece of code is
quite amazing. And then being able to write code that is insanely portable and
optimised on any platform is great. You can take advantage of SIMD,
hyperthreading, multicore, large caches... without knowing whether they're
available ahead of time.

Sure, you won't beat C for low-memory devices or hardware that you have
complete control over, but for 90% of use cases JavaScript actually makes
sense and can be the most performant option.

~~~
chrisseaton
In my group at Oracle we're experimenting with running C using the same just-
in-time compilation techniques that JavaScript uses, and sometimes we see it
running faster than ahead-of-time native compilation, due to the effects of
things like inline caching and profiling.

~~~
mpweiher
Reports of _some_ JITted Java code running faster than C (after warmup) are
old. I remember seeing those claims for some super-hot VM in the old IBM
Systems Journal from 2000, and yes, IIRC that was a VM written mostly in
Java.

However, those isolated examples rarely if ever translated to high performance
in real-world projects. The same issue has an experience report of the
travails of IBM's San Francisco project. Performance was a huge issue that to
the best of my knowledge they never fully resolved.

More in my article "Jitterdämmerung":
[http://blog.metaobject.com/2015/10/jitterdammerung.html](http://blog.metaobject.com/2015/10/jitterdammerung.html)

~~~
zmmmmm
Heh ... one of my employees took it into their head to code up some arithmetic
algorithms in C++ a month or so ago. We do not use C++ for anything, we are
all Python and JVM based. But he decided that he was going to achieve an
amazing win by optimising some numerical code to get order of magnitude
benefits, and without asking invested 4 hours into coding it up. I wrote a
naive implementation of the same thing in Groovy, of all languages. My
implementation was initially 20 times as fast and I coded it in 30 minutes.

So he debugged some more and figured out that he misunderstood some of the
inner workings of how vectors copy data and also that he did not understand
the threading library he was using properly. He then fixed those two things.
After this further exercise he reduced the difference to a factor of 4.
However, he was never able to work out why my code was still 4 times as fast
as his C++, and abandoned it.

I know for sure that with appropriate expertise the C++ could _probably_ be
made to go perhaps twice as fast as my Groovy code. But the point is, none of
the supposed benefits come automatically, regardless of what language you are
using. And unless you flip over to GPU- or FPGA-accelerated methods, the
final outcome is well and truly in the same ballpark anyway.

But all this is to say that "rarely translated" might be true for
applications that are entirely in the high-performance domain. But for all the
applications where the high-performance code is in niches at the edges, and
there simply aren't the resources or expertise to fully tune a native
implementation... I think it translates all the time.

~~~
physguy1123
In my experience, writing Java (or Groovy here) in C++ results in horribly
slow code that the JVM runs circles around, and it sounds like that's the
problem your employee ran into.

> But for all the applications where the high performance code is in niches at
> the edge and there simply aren't resources or expertise to fully tune the
> native implementation

It's interesting you say this, because in my experience it's the JVM which
requires absurd amounts of tuning and native programs which are much more
consistent. The natural, idiomatic way native programs are written lends
itself to fairly respectable performance, mostly because the object and stack
model of, say, C or C++ is so much friendlier to the CPU than that of most
dynamic languages.

In general, for all that I hear statements along this line, I've only twice
seen code to back it up, and the C was so de-optimized relative to the OCaml
version that I suspect it was intentional: the author (the same in each case)
was a consultant for functional languages, and in one case switched the C
inner loop to use indirect calls on every iteration, and in the other
switched the hash function between the C and functional versions.

~~~
weberc2
It’s not native vs VM, but rather “has stack semantics/value types” vs “no
stack semantics/value types”. In particular, OCaml’s standard implementation
is a native compiler, not a VM.

Also worth calling out Go, which is rather unique in that it has stack
semantics but it also has a garbage collector, so it’s kind of the best of
both worlds in terms of ease of writing correct, performant code.

~~~
pjmlp
Go is not rather unique in having GC and stack semantics; there are plenty of
languages that have both, all the way back to Mesa/Cedar and CLU.

~~~
weberc2
I should have been more clear I guess; I was comparing it to other popular
languages. Few have value types and many that do (like C#) regard them as
second-class citizens.

------
hatcherdogg
For some reason I was expecting an internal combustion engine.

~~~
mingabunga
same ha ha. Interested either way

------
classics2
You have social icons floating over the text.

------
pspeter3
I have often wondered if frameworks like React which offer polymorphic
methods (e.g. React.createElement) cause deoptimizations. The common
functions also seem to be on the critical path, since they will be called
100s-1000s of times per render.

~~~
hajile
I'm not sure about React, but a fairly common pattern is to dispatch to
optimize. The parent function takes several different data types, but it is a
very small pass-through that calls a variety of monomorphic functions to do
the heavy lifting. It's not technically as fast, but as the amount of
processing in the monomorphic functions increases, the cost of the small
polymorphic function fades into the noise.
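A minimal sketch of that dispatch pattern, with hypothetical `sum*` helpers (not React code):

```javascript
// Monomorphic workers: each loop body only ever sees one element type.
function sumNumbers(arr) {
  let total = 0;
  for (const n of arr) total += n;
  return total;
}

function sumLengths(arr) {
  let total = 0;
  for (const s of arr) total += s.length;
  return total;
}

// Thin polymorphic dispatcher: its cost is small and fixed, while
// the heavy lifting runs inside the monomorphic helpers.
function sum(arr) {
  return typeof arr[0] === 'string' ? sumLengths(arr) : sumNumbers(arr);
}

console.log(sum([1, 2, 3]));     // 6
console.log(sum(['ab', 'cde'])); // 5
```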

~~~
pspeter3
That's a good practical tip

------
antoaravinth
Has anyone used ChakraCore? How does it compare with V8?

------
safgasCVS
Catfished. I thought this was going to be about cars.

------
AzzieElbab
Why hasn't node replaced PHP yet?

~~~
lucb1e
I don't like JavaScript as a language. Everything from OOP to imports feels
hacked in rather than supported by the language itself. One can't even add
0.1 and 0.2 together and get what you'd expect. Variable definitions are
globally scoped by default, semicolons can seemingly randomly be omitted, etc.
There are just so many things about the language that feel hacky or
unpolished.

So it's fast these days... great? There's more I look for in a language than
just speed. PHP7 and PyPy also do a pretty good job, and one can always
offload the heavy lifting to some compiled code if it's really that important
to squeeze out the last few percent of performance.

The only thing unique to JS (afaik) that I really like is addressing
dictionary keys as object properties (`a={'test':512}; console.log(a.test);`
instead of having to do `console.log(a['test']);`). I'm not sure why that is
so oddly satisfying, but it is.

~~~
b2ccb2
> One can't even add 0.1 and 0.2 together and get what you'd expect.

That's floating-point math, and it persists in many languages [1].

> Variable definitions are global scoped by default, semicolons can randomly
> be omitted, etc.

Nothing random about it; automatic semicolon insertion is well
defined/documented [2]. Also, _let_ [3] is your scoping friend.

[1] [https://0.30000000000000004.com/](https://0.30000000000000004.com/)

[2] [https://www.ecma-international.org/ecma-262/7.0/index.html#s...](https://www.ecma-international.org/ecma-262/7.0/index.html#sec-rules-of-automatic-semicolon-insertion)

[3] [https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/let)

~~~
lucb1e
> Nothing random about it, automatic semicolon insertion is well
> defined/documented

Figured someone would say that. Yes, of course it's not _random_: it's a
computer; it follows the same instructions every time.

What I meant, of course, is that it isn't logical. You can't guess it without
having to check the spec. In bash you know that a newline is as good as a
semicolon, in C you know you always need a semicolon, and in Python you know
you never need it. In JS it's somewhere in between. Reading the spec, a
semicolon is inserted when the next line would be a syntax error as a
continuation... so you have to run a JS simulation in your head to check
whether your code could, perhaps, make sense as a line continuation. So I
guess you just always have to do it. I'm not saying it's a major issue or
that this is the one reason I don't use it, but similar to how people
criticize PHP for having inconsistent function names (which is also not a bug
or broken), it would have been nice if it weren't the case.

> let [3] is your scoping friend

It's not about it being possible; it's that global-scoped-by-default is just
asking for abuse of the global scope. Even PHP doesn't do that, and PHP is
really made for quick and dirty web development (or at least it used to be,
so it supports many things that make that possible).

~~~
cztomsik
You're totally safe unless you start a line with a parenthesis, bracket, or
operator, so it's not that hard:
[https://standardjs.com/rules.html#semicolons](https://standardjs.com/rules.html#semicolons)

const/let is _the_ way of doing JavaScript; setting a global will raise an
error (click "run with js":
[http://jsbin.com/hogegikipo/edit?html,console,output](http://jsbin.com/hogegikipo/edit?html,console,output)).
This is true for ES5 strict mode, ES2015 modules, and anything you transpile
from ES2015 modules.
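Two classic hazards behind those rules, as a minimal sketch (the names are just illustrative):

```javascript
// 1. ASI inserts a semicolon after a bare `return`, so the value is dead code.
function answer() {
  return
  42
}
console.log(answer()); // undefined, not 42

// 2. A line starting with `[` continues the previous statement.
const arr = [1, 2, 3]
const first = arr
[0] // parsed as `arr[0]`
console.log(first); // 1, not the array
```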

