
V8 Optimization Killers - diggan
https://github.com/petkaantonov/bluebird/wiki/Optimization-killers
======
_greim_
Which of these (if any) are purely incidental to the current state of the V8
project, rather than being inherently insurmountable problems for JS
optimization? In other words, if I memorize this particular list, what subset
of my knowledge may become obsolete in a few months or years?

~~~
azakai
> if I memorize this particular list

Please don't memorize this list. It's just for one JS engine among several,
all of which are changing rapidly. For example, Apple just shipped FTL, an
LLVM-based JIT for Safari (so any list for Safari would have just been largely
obsoleted).

~~~
_greim_
I actually don't want to memorize any list, unless it's a list of things that
are inherently expensive/impossible to optimize in JS, in which case it's
worthwhile. I just hate the idea of having to write all my code according to
some list-of-the-week. What I want is a list that separates the always-know
stuff from the engine-specific-tweaks or might-not-apply-next-year stuff.

~~~
_greim_
Actually re-reading the article, I see this:

    
    
        Currently not optimizable:
    
        Generator functions
        Functions that contain a for-of statement
        Functions that contain a try-catch statement
        Functions that contain a try-finally statement
        Functions that contain a compound let assignment
        Functions that contain a compound const assignment
        Functions that contain object literals that contain __proto__, or get or set declarations.
    
        Likely never optimizable:
    
        Functions that contain a debugger statement
        Functions that call literally eval()
        Functions that contain a with statement
    

Which I now realize is sort of what I'm looking for.
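For the try-catch entries on that list, the commonly suggested workaround (my illustration, not from the article's text) is to confine the try-catch to a tiny helper so the hot function itself contains no optimization killer. `tryParse` and `parseAll` are hypothetical names:

```javascript
// The try-catch lives in a small helper, so only this function is
// unoptimizable -- the hot loop below contains no try-catch.
function tryParse(json) {
  try {
    return JSON.parse(json);
  } catch (e) {
    return null;
  }
}

// This function contains none of the listed killers, so V8 can optimize it.
function parseAll(lines) {
  var out = [];
  for (var i = 0; i < lines.length; i++) {
    var parsed = tryParse(lines[i]);
    if (parsed !== null) out.push(parsed);
  }
  return out;
}
```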

------
binarymax
Really great! One note that popped out for me is to always cache the .length
property for any array or arguments:

    
    
         function doesntLeakArguments() {
             var args = new Array(arguments.length);
             for(var i = 0; i < args.length; ++i) {
                 args[i] = arguments[i];
             }
             return args;
         }
    

becomes:

    
    
         function doesntLeakArguments() {
             var len  = arguments.length;
             var args = new Array(len);
             for(var i = 0; i < len; ++i) {
                 args[i] = arguments[i];
             }
             return args;
         }
    

And also, if you've got a switch statement with more than 128 cases, you've
probably got bigger problems on your hands than v8 optimizations.

I see some things in here that I can start switching to immediately for some
of my node modules.

~~~
pdw
Huge switches are common in the inner loop of emulators and interpreters. They
tend to end up with a 256-case switch to handle the next byte in the
instruction stream.

~~~
binarymax
I understand that use case, but in my opinion it would be much cleaner (and,
I'd speculate, faster) to have a 256-slot array... routing with
_opcodes[byte.charCodeAt(0)]();_ or something similar.
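A minimal sketch of that idea (opcode numbers and handler names here are made up for illustration):

```javascript
// A 256-slot dispatch table indexed by opcode byte, replacing a large
// switch statement. Each slot holds a handler function.
var opcodes = new Array(256);
for (var i = 0; i < 256; i++) {
  opcodes[i] = function () { return 'nop'; }; // default handler
}
opcodes[0x01] = function () { return 'load'; };
opcodes[0x02] = function () { return 'store'; };

// Route one byte of the instruction stream to its handler.
function step(byte) {
  return opcodes[byte]();
}
```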

~~~
arcatek
Function calls are expensive if they are not optimized. However, the same is
true for switch/case trees, and even for JIT'd code (dynamically created
functions) ...

In the JavaScript world, "Your Mileage May Vary" truly is a motto.

------
netcraft
I wonder if there is any way to check the optimization status in the browser?
It would be awesome to have a chrome audit that parsed through all of the
functions in a js file and told you the optimization status of each and why.

~~~
wpears
Not the friendliest program, but
[http://mrale.ph/irhydra/1/](http://mrale.ph/irhydra/1/) can display all sorts
of intermediate representations spit out by V8.

------
d0ugie
My team's developers, who are in a Firefox-only setting (so no V8 to
consider), use every optimization/squishing trick they can find on GitHub -
grunt, browserify, uglifyjs, etc. - thinking that the smaller the number of
JavaScript resources and the smaller those files are, the faster the pages
will not just download but also render.

Is it probable that SpiderMonkey and other engines suffer from forms of this
deoptimization hell when rendering? And would it be a safe guess that whatever
mod_pagespeed does to a page's javascript resources is less likely to result
in this hell than these other tools? Thanks.

~~~
arcatek
The only size-related optimization that I can think of is that V8 doesn't
inline functions (but does optimize them) when they are more than X characters
long, comments included.

I'm not sure about SpiderMonkey, but they may very well use this kind of
optimization too. However, I would guess that Uglifying files should not
increase execution speed for any noticeable amount.

------
ttty
Which is better: `undefinedVar === void 0` or `typeof undefinedVar ===
'undefined'`?

This article says `void 0` is safe: [http://www.2ality.com/2013/04/check-
undefined.html](http://www.2ality.com/2013/04/check-undefined.html)
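One concrete difference worth noting (my own illustration, beyond what the linked article is quoted as saying): `typeof` is the only form that is safe when the identifier was never declared at all:

```javascript
// typeof never throws, even for an identifier that was never declared:
console.log(typeof neverDeclaredAnywhere === 'undefined'); // true

// A direct comparison against void 0 throws for undeclared identifiers:
var threw = false;
try {
  neverDeclaredAnywhere === void 0;
} catch (e) {
  threw = e instanceof ReferenceError;
}
console.log(threw); // true
```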

~~~
ekmartin
I think minifiers use the former because it's shorter, and thus requires less
space.

~~~
phpnode
the other benefit is that `void 0` cannot be overwritten in sloppy mode,
whereas you can do something crazy like this:

    
    
        undefined = 'lol';

~~~
evilpie
No you can't. You can do something like function(undefined) {}, but the global
'undefined' is non-writable and non-configurable.

~~~
ahoge
That's an ES5 change. With ES3, you can overwrite NaN, Infinity, and
undefined.

~~~
LunaSea
Only in strict mode, I thought, no?

~~~
ahoge
Generally. Technically, it's a "breaking change".

As you can imagine, this didn't actually affect anyone.

------
btown
An excellent overview! A bit worrying that using an object as a hash table and
iterating over its keys using ForIn would prevent optimization - I had always
thought that this was a common use case that would be well-supported by the
optimizer! I suppose in that case, if you need fast reads of all keys and can
afford slower writes, you could maintain an array of the keys at insertion
time and just loop through that?
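A minimal sketch of that keys-at-insertion idea (my own illustration; it assumes, for brevity, that keys are never deleted):

```javascript
// Maintain the key array at insertion time, so reads iterate a plain
// array instead of using for-in. Deletion handling is omitted.
function Table() {
  this.values = {};
  this.keys = [];
}

Table.prototype.set = function (key, value) {
  // Only record the key the first time it is inserted.
  if (!(key in this.values)) this.keys.push(key);
  this.values[key] = value;
};

Table.prototype.forEachKey = function (fn) {
  for (var i = 0; i < this.keys.length; i++) {
    fn(this.keys[i], this.values[this.keys[i]]);
  }
};
```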

~~~
phpnode
`for ... in` is a relatively slow construct anyway, the faster option (a lot
more verbose) is:

    
    
        var keys = Object.keys(obj),
            length = keys.length,
            key, i;
        for (i = 0; i < length; i++) {
          key = keys[i];
        }
    

While this is not _exactly_ the same as for..in, it usually behaves how you'd
expect and is significantly faster for a couple of reasons:

1. In a for..in loop the engine must keep track of the keys already iterated
over, whereas in the fast version we can simply increment a counter and do a
fast array lookup.

2. It's possible to add properties to the object that you're iterating within
the body of the for..in statement, and these will be iterated too. Doing such
a thing is obviously very rare, but it's the kind of edge case the JS engine
must keep track of.

~~~
kipple
Is a plain for loop with a counter significantly faster than a forEach? With
Object.keys I've been enjoying the simplicity of statements like

    
    
      Object.keys(obj).forEach(function (key) {
        console.log(obj[key]);
      }); 
    

but does that hamstring my performance?

~~~
bzbarsky
Your mileage will vary depending on the browser. If you really care (as in
profiling identified this as a hotspot) you should measure the alternatives
you're considering.

But note that in your specific case chances are the cost of a bunch of
console.log() calls completely swamps the cost of either a for loop or a
forEach call. console.log() has _incredibly_ complicated behavior.
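For what it's worth, here is a rough way to compare the two loop styles yourself (not a rigorous benchmark; results vary by engine, run, and warm-up):

```javascript
// Build a test object, sanity-check both loop styles agree, then do a
// crude timing comparison.
var obj = {};
for (var i = 0; i < 1000; i++) obj['k' + i] = i;

function sumForLoop(o) {
  var keys = Object.keys(o), total = 0;
  for (var j = 0; j < keys.length; j++) total += o[keys[j]];
  return total;
}

function sumForEach(o) {
  var total = 0;
  Object.keys(o).forEach(function (key) { total += o[key]; });
  return total;
}

console.log(sumForLoop(obj) === sumForEach(obj)); // true

console.time('for');
for (var r = 0; r < 1000; r++) sumForLoop(obj);
console.timeEnd('for');

console.time('forEach');
for (var s = 0; s < 1000; s++) sumForEach(obj);
console.timeEnd('forEach');
```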

------
PSeitz
I created a jsperf to measure a hashtable-like object with for-in/Object.keys.

The performance differs by only 50%; I actually expected more. Am I doing
something wrong?

[http://jsperf.com/for-in-with-hashtable-like-object](http://jsperf.com/for-
in-with-hashtable-like-object)

~~~
autnecare
I guess it depends on what your intent is:

[http://jsperf.com/for-in-with-hashtable-like-object/4](http://jsperf.com/for-
in-with-hashtable-like-object/4)

~~~
PSeitz
The article states: "Code compiled by the optimizing compiler can easily be,
say, 100x faster than the code generated by the generic compiler"

So I expected quite a bit more than a factor of 2...

------
kretor
Instead of avoiding all those optimization killers, a much higher-impact
approach is to fix this in the V8 project itself. You not only make your own
code faster, you make thousands if not millions of pieces of code faster.

~~~
_greim_
I was wondering this too. I expect/hope the answer would be "we're working on
it" in most cases, but I worry it would be "it simply isn't possible" in some
or most scenarios.

------
iMark
Useful stuff. Does anyone have similar information for other javascript
engines?

------
killercup
Title should probably be "V8 Optimization Killers".

~~~
Touche
I'm really sick of these recent JavaScript performance related posts that only
talk about V8.

~~~
Igglyboo
Just a cursory glance at browser stats seems to show Chrome as the most
popular, which would make sense. I highly doubt anyone using Internet Explorer
is coming to SO.

~~~
rockdoe
Chrome on iOS doesn't even use V8.

~~~
jonpacker
Chrome on iOS is Safari in a box.

