
JS MythBusters – An optimization handbook from a high level point of view - karliky
http://mythbusters.js.org/
======
Twirrim
One of the defining things about MythBusters was that they'd take a myth, build
an experiment with at least some notion of control, and then carry it out, so
you can actually see the results.

Instead this site seems to just regurgitate myths, e.g.
[http://mythbusters.js.org/workflow/boolean-conditions.html](http://mythbusters.js.org/workflow/boolean-conditions.html)

"Avoid using >= and <= unless necessary. It’s faster to use a simpler
comparison." There's a trivial code example, and then absolutely no evidence
to support the claim. No benchmarks, nothing.
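
A claim like that is cheap to at least sanity-check yourself. Here's a rough micro-benchmark sketch — everything in it (the array size, the pivot value, the helper names) is my own arbitrary choice, and micro-benchmarks like this are easily skewed by JIT warmup and dead-code elimination, so treat the timings as a starting point, not evidence:

```javascript
// Time a million comparisons through each operator and report both.
function bench(cmp) {
  const arr = Array.from({ length: 1e6 }, (_, i) => i);
  const start = Date.now();
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    if (cmp(arr[i], 500000)) count++;
  }
  return { ms: Date.now() - start, count };
}

const lt = bench((a, b) => a < b);   // the "simpler" comparison
const le = bench((a, b) => a <= b);  // the allegedly slower one
console.log("<  took", lt.ms, "ms;  <= took", le.ms, "ms");
```

On any engine I'd expect the two numbers to be within noise of each other, which is exactly why the site should have shown its measurements.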

~~~
mintplant
I'm extremely skeptical of that claim, too. As far as I can tell, SpiderMonkey
for example will emit practically the same code for >= and <= as for other
comparisons. In loop optimization SpiderMonkey will actually sometimes rewrite
e.g. `x < y` as `x + 1 <= y` to put linear inequalities into a standard form
for analysis [0].

They also recommend using `let` and `const` over `var` for performance reasons
in the "Scope" section, but I have to ask whether they benchmarked that at
all. Engines may grow to take advantage of these signals over time for further
optimization, but at the moment many ES6 features incur a performance penalty
just because the code behind them is less mature. I don't know specifically
about `let`/`const` but I'd be pleasantly surprised to learn that they _boost_
performance anywhere yet.

[0] [https://dxr.mozilla.org/mozilla-central/source/js/src/jit/Io...](https://dxr.mozilla.org/mozilla-central/source/js/src/jit/IonAnalysis.cpp#3061-3120)
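
For the `let`/`const` claim, a hypothetical sketch of how it could at least be measured — the function names and workload here are my own invention, and a single hot loop says little about real programs, but it's more than the handbook offers:

```javascript
// Same summation loop, differing only in the declaration keyword.
function sumVar(n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

function sumLet(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

// Tiny timing helper; Date.now() is coarse but portable.
function time(fn, n) {
  const start = Date.now();
  const result = fn(n);
  return { ms: Date.now() - start, result };
}

const a = time(sumVar, 1e7);
const b = time(sumLet, 1e7);
console.log("var:", a.ms, "ms  let:", b.ms, "ms");
```

If the two come out indistinguishable on your engine, that's one data point against "use `let`/`const` for performance" as blanket advice.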

------
bentlegen
I'm not sure how this can be called "mythbusters" when it doesn't provide
evidence for how some optimizations are better than others. Seems likely to
produce more myths rather than debunk them.

~~~
choward
Good point. It's the exact opposite of mythbusting.

------
hiteshk_msft
Some of these suggestions are good, but some caveats:

\- Lookup tables: while lookup tables are great, engines might sometimes
convert switch statements to lookup tables too. Chakra does this when it's
advantageous to do so.

\- Try-catch: this seems like v8 specific advice. Chakra definitely does
optimize functions with try-catch in it, and I think SpiderMonkey does too.

\- Freeing memory: setting the reference to null does not necessarily free the
memory; it just makes it likelier to get collected when the Garbage Collector
runs.

Disclaimer: MSFT employee, Engineer on Chakra

~~~
TAForObvReasons
> Try-catch- this seems like v8 specific advice. Chakra definitely does
> optimize functions with try-catch in it, and I think SpiderMonkey does too

[http://gs.statcounter.com/](http://gs.statcounter.com/) certainly suggests
that specifically optimizing for Chrome/V8 will benefit the majority of users
(58% not including mobile, 50% including mobile). As with all statistics, take
with a grain of salt.

~~~
mikeash
Also keep in mind that 50% of users as a whole may not equate to 50% of _your_
users.

------
glitcher
> from a high level point of view

Many of the examples I looked at were very granular and specific, but didn't
have much explanation. Does high level point of view really mean brief and
without elaboration?

From a high level on JS optimization I would like to see more emphasis on
which considerations are most likely to have the biggest impact on my code.
Throwing everything in together as if they were all equal seems like it may be
missing the point.

~~~
mcphage
That was my take, too. It says "from a high level point of view", but the
_very first optimization_ was about using the != operator vs the >= operator
in certain conditionals. I'm struggling to think of anything more _low_-level
than that, in JS.

------
colemannerd
I think this is not well researched enough. There doesn't appear to be any
thought put into optimization vs. readability, and when the optimization
improves enough speed to justify the poor readability.

~~~
jbigelow76
The domain name, js.org, seems to lend more credibility than it should.

------
TheAceOfHearts
How many of these optimizations are just limited to V8?

I swear I've heard somewhere that the try/catch limitation is specific to V8,
and that other engines don't have that problem. Am I mistaken?

EDIT: Also, this page is very difficult to read. I have a laptop with a great
screen and the lack of contrast is annoying.

~~~
extrapickles
The styling definitely needs more contrast. The comments in the code examples
are nearly impossible for me to read without highlighting them. Looking at the
styling, comments are set to rgb(51,51,51), which is nowhere near enough
contrast against the black background.

------
prayerslayer
This handbook assumes that JS execution really is your bottleneck and not the
network, the DOM, or whatnot. I doubt this is the case in the majority of
applications, so following these tips might even be counterproductive, as they
sometimes come at the cost of readability.

------
recursive
Some of these performance tips are just not believable without _some_ kind of
justification. I'm to believe that garbage collection can't work if you use
delete? That seems pretty silly.

~~~
mintplant
This one actually seems fair to me. They're not saying that GC won't kick in
on the value the deleted property used to point to, but rather that using
`delete` on a property can be more expensive than necessary if you're just
trying to let the GC know something can be cleaned up. This is because
optimized JIT code and caches guard on the structure of objects that pass
through them, and changing that structure (e.g. deleting a property) will lead
to deoptimizations and tossing away some of that code.

On SpiderMonkey, deleting a property that isn't the last one that was defined
may incur a "dictionary mode" conversion [0]. This means the information
describing each object property, which is usually immutable and shared between
similar objects, will be copied one-by-one into a unique chain owned by the
particular object instance. This can only happen once per object instance,
however, and subsequent usages of `delete` on its properties will be less
expensive -- but still more expensive than simply setting property values to
`null`. I believe something similar happens with V8's "hidden classes"
mechanism.

[0] [https://dxr.mozilla.org/mozilla-central/source/js/src/vm/Sha...](https://dxr.mozilla.org/mozilla-central/source/js/src/vm/Shape.cpp#961-970)
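
As a toy illustration of the two patterns (the objects and property names here are made up, and the performance difference itself would have to be profiled in a real engine — this only shows the observable behavior):

```javascript
// Objects created from the same literal share one shape / hidden class.
function makePoint(x, y) {
  return { x, y };
}

const p = makePoint(1, 2);
const q = makePoint(3, 4);

// Pattern 1: delete removes the property and changes the object's
// structure, which is what can trigger a dictionary-mode conversion or
// hidden-class transition and deoptimize code specialized on that shape.
delete p.x;

// Pattern 2: nulling the value keeps the structure stable; the old value
// is just as unreachable and collectable on the next GC pass.
q.x = null;

console.log("x" in p); // false — property gone, shape changed
console.log("x" in q); // true  — property kept, value is null
```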

~~~
recursive
I'm not saying it's not true. I don't know enough deep javascript magic to
make that claim. It just didn't _sound_ true, given that they've provided no
supporting information or even a plausible-sounding explanation of why one way
is better than another way.

If the entire site were rewritten using this level of detail plus benchmarks,
it would be 10x more useful.

------
k__
> high level point of view

> Avoid using >= and <= unless necessary

Is this an elaborate troll?

