
Chromium Blog: A New Crankshaft for V8
http://blog.chromium.org/2010/12/new-crankshaft-for-v8.html
======
greattypo
The Chrome javascript engine team is simply a beast.

------
kenjackson
I'm curious to see if they broke any code, like the IE engine did recently.
For example, with loop-invariant code motion, what is legal in a language like
C may not be in JavaScript (for the same reason the IE DCE optimization was
invalid).

I'd find it hard to believe that Goog would make the same mistake after all
the hullabaloo, but I'd love to see it validated.

~~~
VMG
I'd like to have a guide for writing code that is easily optimized by V8 and
similar engines. Having local variables is good, as far as I can tell, but it
would be nice to have a full overview with dos and don'ts.
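
For what it's worth, here is a sketch of two guidelines that are widely believed to help V8-style engines (`Point` and `sumX` are made-up names, and the actual wins vary by engine and version):

```javascript
// 1. Initialise all properties in the constructor, in the same order,
//    so every instance shares one "hidden class" (object shape).
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// 2. Prefer locals over repeated global/property lookups in hot loops,
//    and keep access sites monomorphic (every element here is a Point).
function sumX(points) {
  var total = 0;
  var n = points.length;        // hoist the length lookup into a local
  for (var i = 0; i < n; i++) {
    total += points[i].x;
  }
  return total;
}

console.log(sumX([new Point(1, 0), new Point(2, 0)])); // 3
```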

~~~
drivebyacct2
"Having local variables is good"

Reading sentences like this scares me, because it reminds me that some people
don't know what the 'var' keyword means, or think it's acceptable to shove
everything into window or global.

~~~
ams6110
Last I read, JavaScript has 2 scopes: function scope, and global scope. Unlike
many languages, there is no block scope (e.g. a counter in a for loop).

~~~
drivebyacct2
I feel dumb now because I fear I don't understand what you mean when you say
block scope...

    for (var i = 0; i < 3; i++) { console.log(i); }

appears to be valid... (or just in Chrome?)

~~~
aboodman
Valid, just doesn't do what you think.

    function foo() {
      var x = 1;
      if (true) {
        var x = 2;
        var y = 3;
      }
      console.log(x);
      console.log(y);
    }
    
    foo(); // 2, 3 (vs 1, undefined if there was block scope)

~~~
drivebyacct2
Thanks for the explanation!

------
natmaster
"...performance of JavaScript property accesses, arithmetic operations, tight
loops..."

Does this mean Crankshaft includes a tracing JIT like Firefox? This layman
speak confuses me.

~~~
scott_s
Yes, look at the list of the four main components.

~~~
pbiggar
I don't think that says it uses a tracing compiler (naturally the terms are
vague in this field, so I'm not certain). Their architecture looks much more
like HotSpot than TraceMonkey.

~~~
wmf
Especially considering that HotSpot and V8 were designed by the same person.

~~~
pbiggar
I don't think this is true. Do you have a reference for that?

~~~
codebaobab
This may not be 100% literally true, but it is definitely true in spirit. (The
previous poster is referring to Lars Bak, but both Hotspot and V8 are/were
team efforts.) The family tree here is:

    self->hotspot->V8

But, yes, Lars is the _man_.

~~~
igouy

    self->hotspot->Resilient Smalltalk Embedded Platform->V8

[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.84.7...](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.84.7354)

~~~
codebaobab
OK, you got me on that one. I've been a professional Smalltalk programmer for
almost 20 years and I've never heard of the Resilient Smalltalk Embedded
Platform.

~~~
igouy
aka OOVM. It took a novel and interesting approach: use Eclipse as the IDE to
edit text files (there was a syntax for class definitions), and sync bytecode
with an image on a remote device over TCP/IP.

------
swannodette
It will be interesting to see how this optimization affects V8's memory
profile (and how that in turn affects the currently slim memory profile of
Node.js).

~~~
panarky
Looks like this Node patch includes Crankshaft:
[https://github.com/ry/node/commit/c30f1137121315b0d3641af6dc...](https://github.com/ry/node/commit/c30f1137121315b0d3641af6dc61e3b047f940e1.patch)

------
thasmin
When Chrome was released two years ago, I noticed a significant difference in
speed. Nowadays I think the announcements of JavaScript performance
improvements are a bit overenthusiastic. The only real-world benchmark
mentioned in the article is that Gmail loads 12% faster. What JavaScript apps
are constrained by performance, and what are Crankshaft's effects on them?

~~~
johnthedebs
I think the point of these optimizations is that they make it possible to
develop JavaScript-intensive applications that would otherwise have been too
slow.

In other words, they're paving the way for the future more so than trying to
squeeze every last ounce of speed from current applications (which just
happens to be a great side effect of their work).

~~~
jamesaguilar
Yeah, you won't see a lot of applications that benefit from these
optimizations immediately, because if there were, it would mean they had been
written before there was any device capable of running them.

------
ashot
Unreal. How much theoretical headroom is left to optimize JS compiler
performance?

I had assumed we were reaching some theoretical upper bound, because all the
major engines were on a par in terms of performance.

~~~
jerf
To a first approximation, my answer would be something like
[http://shootout.alioth.debian.org/u32/benchmark.php?test=all...](http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=luajit&lang2=v8)
.

That's not necessarily the whole answer and I imagine JS can't ever quite go
that fast. But still....

~~~
SwellJoe
Would this imply that Lua has reached some pinnacle of speed and can't go any
faster? That seems to be a side effect of your statement.

I'm not familiar with Lua, beyond reading an article or two about it, but does
its simplicity imply some sort of maximal efficiency? Are the developers
behind Lua simply the best programmers in the world and already have
everything figured out with regard to optimizing a JIT? I'm not arguing with
you...it does seem like that's a reasonable goal for JavaScript JITs to strive
for in the near future. But, it doesn't really answer the question of how much
better performance can get (in JavaScript or Lua or any other language). Past
performance is not necessarily indicative of future performance when so many
people are working on the problem from so many angles.

~~~
jws
Browsing the alternatives in the shootout dataset, LuaJIT appears to be the
fastest of the dynamic languages and feature-wise matches Javascript well
enough to be a fair benchmark.

You can gain another factor of 2 or so in speed by going to a static language
like C or Ada, but that isn't really a fair comparison and you can see the
price paid in code size.

The good news for the web is that there may be another factor of 2 to 3
available for Javascript speedup.

~~~
eru
You can also go to a static language like OCaml; it doesn't blow up your code
size, but it's still fast.

------
Splines
So - how long does it take for features to make their way into production? Am
I reading the release calendar correctly in that it'll take 12 weeks from
start of development to beta, and another 12 weeks from beta to stable?

It looks like Chrome 8 went stable on 12/2. So we'll see Chrome 10 in 4
months?

~~~
aboodman
Chrome stable releases are every 6 weeks.

The releases are overlapped though, so we are testing v n+1 in beta while v n
is in stable, and we are starting new feature development for v n+2 while v n
is in stable.

Chrome 8 just went stable, and we've just started testing Chrome 9.

So Crankshaft will either be 6 weeks from today (if it's in 9), or 12 weeks
from now (if it's in 10).

I don't think it's been announced which it's targeted for.

HTH

~~~
Splines
Ah, ok. I guess I misread the release calendar. Thanks for the clarification
:).

> I don't think it's been announced which it's targeted for.

According to the perf comparison chart on the blog, it's in Chrome 10. Also,
it's mentioned that Crankshaft is available in the canary build, which is
currently at 10 too.

12 weeks then. I sometimes revert back to FF for the plugins, but I always
come back to Chrome for the perf :).

------
cosgroveb
Gmail really does seem to load in about half the time now in the Canary build.

~~~
ams6110
OTOH this does not bode well for the future potential bloat in Gmail.

------
tsta
I've benchmarked Google Chrome 9.0.597.10 and 10.0.603.3 (with Crankshaft) and
the latter is 30% faster. See the detailed results:
<http://dromaeo.com/?id=124912,124913>

------
CJefferson
Has anyone got any experience using javascript/V8 as a scripting language for
a C++ app? We currently use python with boost::python bindings, but are
finding we have to limit the amount of python code, as it is too slow.

~~~
kqr2
Have you looked into lua? It's a good embedded scripting language.

<http://www.lua.org/>

~~~
pygy_
Especially for x86 and x64, where you can use LuaJIT.

LuaJIT2 is still in beta, but it is already very stable. Performance-wise, it
is comparable to Haskell and Java.

<http://luajit.org/>

~~~
nitrogen
Are there any plans to port LuaJIT to ARM or LLVM? I see a couple of posts
mentioning slow FP performance on ARM, but that could be solved with a
technique like that used by LNUM.

~~~
joeyo
There is sponsorship for a PPC LuaJIT port (targeting embedded systems, I
believe) and Mike Pall has expressed interest in an ARM port in the past, but
I don't know what the status of that is.

~~~
nitrogen
PPC LuaJIT sounds like it would be useful in console games (Why did the big
three consoles switch to PPC just as Macs switched to Intel, anyway?).

------
jorangreef
Next up, remove the 1.9GB max memory limit of V8 processes:
<http://code.google.com/p/v8/issues/detail?id=847>

------
todd3834
Will this have any effect on NodeJS?

~~~
chrislloyd
I doubt it.

From what I understand, the significant improvements in speed come from
Crankshaft's tradeoff of compilation optimisation for startup speed. If your
app is a for loop with 2 iterations, that code path won't be heavily
optimised, as the compiler would potentially spend more time compiling the
code than executing the unoptimised version; it will therefore start up
faster. However, hotspots (loops with 1,000 iterations, say) will be heavily
optimised.

This is great for websites, as speed and responsiveness are perceived at
startup time. You'll certainly notice a difference when using Node as a
scripting tool. However, most Node applications are long-running servers
executing the same code paths over and over. It's unlikely that Crankshaft is
performing any extra optimisations; it's just changing _when_ it performs
them. However, if Crankshaft _is_ doing significantly more advanced
optimisations (I don't know), then yes, Node will benefit. Please correct me
if I am wrong; I would love to be.

~~~
cwp
Node-based servers will definitely benefit from this. The advantage of a two-
stage compilation scheme is that the "base" compiler generates non-optimized
code that is self-profiling: it collects type information as it runs. That
information is then used by the second-stage compiler to produce code that is
more optimized than a single-stage compiler (like pre-Crankshaft V8) can
produce.
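
A toy sketch of that self-profiling idea (NOT V8's actual machinery; `adaptive`, `twice`, and the threshold value are invented for illustration):

```javascript
// "Base tier": a wrapper that records observed argument types as it runs.
// Once a call site looks hot and monomorphic, swap in a specialised version.
function adaptive(baseFn, specialise, threshold) {
  var seenTypes = {};
  var calls = 0;
  var current = function (x) {
    seenTypes[typeof x] = true;                  // self-profiling: record types
    calls += 1;
    if (calls >= threshold && Object.keys(seenTypes).length === 1) {
      var fast = specialise(Object.keys(seenTypes)[0]);
      if (fast) current = fast;                  // "tier up" to optimised code
    }
    return baseFn(x);
  };
  return function (x) { return current(x); };
}

var twice = adaptive(
  function (x) { return x + x; },                // generic base version
  function (type) {
    return type === 'number'
      ? function (x) { return 2 * x; }           // specialised numeric version
      : null;                                    // can't specialise: stay generic
  },
  100
);

for (var i = 0; i < 1000; i++) twice(i);         // hot loop triggers tier-up
console.log(twice(21)); // 42
```

The real thing operates on machine code and hidden classes rather than function wrappers, but the control flow is the same: run generic code, record what you observe, and replace it with a specialised version once the evidence is in.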

------
StuffMaster
Sounds like they borrowed the tracing idea from mozilla.

~~~
scott_s
Not really. It's a well-known VM optimization technique.

~~~
cpr
First featured in Self many years ago, done by the same folks (Lars Bak and
crew) who brought you V8. Then Sun bought their Smalltalk/Self-based company,
and they built HotSpot for the JVM. Then Google hired them to do the same for
Javascript, and now we have V8.

It generally takes about 10-20 years to get truly new ideas from the labs to
consumer-level products.

~~~
effn
Neither Self nor HotSpot uses tracing. It's a relatively new compilation
technique; the implementation in the original tracing paper used Java
bytecode as the source language.

I think you are confusing tracing with adaptive compilation.

~~~
kingkilr
Actually, there's even older work using tracing for re-optimization of
assembly :)

------
erik_landerholm
I thought someone was actually developing a new kind of crankshaft for a real
V8 engine....

~~~
eru
Don't `Chromium Blog' and the URL give it away?

