
Firefox 9 uses type inference to improve JavaScript performance by 20-30%
http://www.extremetech.com/computing/94532-firefox-9-javascript-performance-improved-by-20-30-with-type-inference
======
sjs
> Other languages, like JavaScript, are weakly typed, which means that the
> programmer doesn’t have to worry about such piffling minutiae; you can just
> write some code and let the compiler do the heavy lifting.

JavaScript isn't weakly typed; you can't, for example, "cast" a number to a
string. There is automatic coercion, but that's a different thing. I think he
means dynamically typed in this case, or possibly duck typing.
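
To make the distinction concrete, here's a quick sketch of the automatic coercion being described, runnable in any JS engine:

```javascript
// Automatic coercion: the engine converts operands for you.
// '+' with a string operand coerces the number to a string:
var s = 1 + "2";        // "12" (number coerced to string)
// '-' only works on numbers, so the string is coerced the other way:
var n = "3" - 1;        // 2 (string coerced to number)
// There is no cast operator; explicit conversion uses functions instead:
var explicit = String(42) + Number("7");  // "42" + 7 -> "427"
console.log(s, n, explicit);
```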

> Type inference fills in the gap between strong and weak typing so that you
> can still write sloppy code, but reap some of the speed boost.

I don't even know what to say about this. Following this faulty logic might
lead one to believe that Haskell has type inference to make all that sloppy
Haskell code perform well. Type inference doesn't have anything to do with
strong and weak typing; it has to do with manifest versus implicit (or
inherent) typing.

~~~
m0th87
Automatic coercion _is_ a part of weak typing [1], and JavaScript certainly is
weakly typed ([2], first line). If you define the ability to cast as weak
typing, then even C++ would be weakly typed. Obviously that definition does
not hold.

I don't get how the type inference remark is faulty logic either. Haskell is
orthogonal because it is statically typed, but type inference _when applied to
dynamically typed languages_ certainly can provide a performance boost.

1: <http://en.wikipedia.org/wiki/Weak_typing>

2: <http://en.wikipedia.org/wiki/JavaScript>

~~~
randomdata
> If you define the ability to cast as weak typing, then even C++ would be so.

Which is funny because your first link states: "In C++, a weakly typed
language, you can cast memory to any type freely as long as you do so
explicitly."

I'm pretty sure the last time I read that article it said that there is no
generally accepted definition of weak typing. This discussion seems to echo
that.

~~~
nickknw
> I'm pretty sure the last time I read that article it said that there is no
> generally accepted definition of weak typing. This discussion seems to echo
> that.

Agreed. There ARE definitions, but they aren't terribly specific, useful, or
even very consistent. Funnily enough, the Wikipedia article on strong typing
includes C and C++ as examples of strong typing.

In this case anyway, I think it's clear that the author was really talking
about the static vs dynamic distinction, and should have used those terms
instead:

> Some languages are strongly typed, which means that the programmer must
> define the type of every class, function, and variable that he uses;
> tiresome, but it can have big pay-offs in terms of overall speed. Other
> languages, like JavaScript, are weakly typed, which means that the
> programmer doesn’t have to worry about such piffling minutiae; you can just
> write some code and let the compiler do the heavy lifting.

~~~
chc
Funnily enough, that definition (whether applied to static or strong typing)
would exclude Haskell, one of the most strongly and statically typed languages
known to man.

~~~
nickknw
Yeah it would, good catch.

So to be more accurate, he was comparing _explicit_ static typing with dynamic
typing.
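
For reference, the dynamic half of that distinction looks like this in JavaScript: bindings carry no declared type, so the same variable can hold values of different types over its lifetime (a minimal illustrative snippet):

```javascript
// Dynamic typing: no type annotations, and the same binding
// can hold values of different types at different times.
var x = 42;             // x currently holds a number
var t1 = typeof x;      // "number"
x = "forty-two";        // perfectly legal: rebind to a string
var t2 = typeof x;      // "string"
console.log(t1, t2);
```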

------
simon_kun
If you're interested in the relative performance of the current crop of JS
engines, you may like to check
[http://arewefastyet.com/?a=b&view=regress](http://arewefastyet.com/?a=b&view=regress)
for commit by commit benchmarking over time.

------
ori_b
I'm surprised that this wasn't already done. It seems like relatively
low-hanging fruit.

~~~
ootachi
It's not low-hanging fruit at all.

First, making it fast is extremely difficult. JavaScript compilers are under
extreme pressure to compile code quickly, unlike any other type-inferring
compiler, because JS compilation keeps the user from seeing the page.

Second, in order to be useful, type inference for JavaScript has to be
speculative. JS's semantics require that ints overflow into doubles, for
example, meaning that a conservative type inference engine would have to
assume every number can potentially be a double. This is too imprecise to be
useful, so the inference engine speculates. If the engine guesses wrong, the
compiler must not only recompile the function under the new assumptions but
also (since this is a global analysis) potentially update _every other
function_ that the function called.
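
The overflow semantics being described are observable at the language level (assuming an engine that internally specializes small integers to int32, as SpiderMonkey and V8 do):

```javascript
// JS numbers are semantically 64-bit doubles. Engines often represent
// small integers as int32 internally, but the *observable* result of
// overflowing int32 must still be the exact double value:
var maxInt32 = 2147483647;          // 2^31 - 1
var overflowed = maxInt32 + 1;      // 2147483648, not a wrapped -2147483648
console.log(overflowed);            // 2147483648
// Contrast with forcing int32 semantics via bitwise OR:
console.log((maxInt32 + 1) | 0);    // -2147483648 (wraps)
```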

Finally, the presence of eval() and the like means that almost no inference
can be totally sound, requiring even more dynamic checks.

There's a reason that (despite what some others are saying in this thread) V8
and Chakra haven't done this yet: it takes a long time to get right.

~~~
ori_b
_> JavaScript compilers are under extreme pressure to compile code quickly,
unlike any other type-inferring compiler, because JS compilation keeps the
user from seeing the page_

You only run type inference after you detect the hot loops. Same with other
expensive optimizations. This is something that all JITs out there do -- you
start with simple interpretation (or extremely cheap compilation, in the case
of V8), and then when you detect that you're spending a whole lot of time in
one section of code, you replace it with a highly optimized version.
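
A toy model of the tiering strategy described above, with all names invented for illustration (real engines do this per function or loop with compiled code; here both tiers are plain JS):

```javascript
// Toy model of tiered execution: run a cheap version until a call-count
// threshold trips, then swap in an "optimized" version.
function makeTiered(cheap, optimized, threshold) {
  var calls = 0;
  var current = cheap;
  return function (x) {
    if (++calls === threshold) {
      current = optimized;            // "recompile" once the code is hot
    }
    return current(x);
  };
}

var square = makeTiered(
  function (x) { return x * x; },     // baseline tier
  function (x) { return x * x; },     // stand-in for the optimized tier
  1000
);
for (var i = 0; i < 2000; i++) square(i);  // crosses the hotness threshold
```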

Type inference is not that expensive compared to several more interesting
optimizations done in JITs that are in production systems. In some
long-running JITs, the compilation of some hot loops takes far longer than
static compilation would, running in a low-priority thread in the background
(while the less optimized code keeps going in the foreground).

Finally, if anything, JITs allow _more_ time to run heavy optimizations
because they can leave those running in the background without affecting the
use of the program or hurting developer productivity.

 _> JS's semantics require that ints overflow into doubles, for example_

On recent x86 processors, you would just use doubles in most cases. It turns
out that they're on par with integers for speed, although I believe there were
some differences in latency. I'd have to look back at the Intel optimization
manual to confirm, though. However, if you really want to solve the issue
(say, on ARM, where integer/float performance actually matters -- especially
since lots of architectures are still on softfloat), then you'd use a
combination of guards, versioning, and value range propagation (VRP) to solve
it, not type inference.
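
A hypothetical sketch of the guard idea: a specialized fast path protected by a cheap check, falling back to a generic path when the guess is wrong. All names here are invented for illustration:

```javascript
// Generic path: handles doubles, strings, objects -- anything.
function addOneGeneric(x) {
  return x + 1;
}

// Guarded specialization: assume x is a small integer; verify with a
// cheap check, and deoptimize to the generic path if the guard fails.
function addOneSpecialized(x) {
  // Guard: x fits in int32 and the increment cannot overflow int32.
  if ((x | 0) === x && x < 2147483647) {
    return x + 1;               // fast path: plain integer add
  }
  return addOneGeneric(x);      // guard failed: take the slow path
}
```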

 _> Finally, the presence of eval() and the like mean that almost no inference
can be totally sound, requiring even more dynamic checks._

Yep, eval is a terrible language feature if you care about making things fast,
regardless of whether you have type inference or not.

~~~
bzbarsky
Actually, empirical data gathered in Tracemonkey shows that specializing to
integers where possible is a performance win even on x86.

This is especially the case in the common situation of code computing array
indices. Since those end up being treated as integers in the end, if you keep
them as doubles for the arithmetic step, you have to keep paying
double-to-integer conversion costs. The same applies to some DOM methods that
take an integer, and to cases where numbers need to be converted to strings
(e.g. when setting .style.top on an element): that conversion is a lot cheaper
for integers than for doubles.
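
The index-conversion point is visible at the language level: a double-valued index reaches the same element as the integer, so an engine holding the index as a double must normalize it on every access (a minimal sketch):

```javascript
var arr = [10, 20, 30];
// A double with an integral value indexes the same slot as the int --
// the engine must normalize it to the canonical property key "2":
console.log(arr[2.0] === arr[2]);   // true
// Computing indices with double arithmetic therefore pays a
// double-to-integer conversion on every access:
var i = 4 / 2;                      // 2, produced by floating-point division
console.log(arr[i]);                // 30
```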

~~~
ori_b
Right. Conversions. That's what I was missing.

------
natmaster
They are confusing weak typing with static typing.

~~~
tlrobinson
I almost wish I hadn't learned the distinction between strong/weak and
static/dynamic typing, because it drives me crazy when people use the
incorrect terminology.

------
51Cards
Interesting that I'm not seeing this on Are We Fast Yet? I thought that always
had the bleeding-edge JaegerMonkey numbers.

~~~
bzbarsky
You can see the TI+JM graph at
[http://arewefastyet.com/?a=b&view=regress](http://arewefastyet.com/?a=b&view=regress)

------
thurn
Excellent, I hope we can get this in Chrome pretty soon too.

~~~
DrJokepu
(edit: I have said something stupid, nothing to see here, move along)

~~~
ootachi
You're confusing Crankshaft with type inference. Crankshaft, which landed
around Chrome 10, is an SSA-based optimization infrastructure. Type inference
augments an optimization infrastructure with additional information. The two
are complementary. V8 does not use type inference.

~~~
magicalist
I don't think that's quite correct. IonMonkey also uses an SSA-based
intermediate representation, and Crankshaft does generate type specialized
code (as does TraceMonkey).

The big thing that IonMonkey is adding is type inference _through static
analysis_, which is a big deal for JavaScript.

~~~
rayiner
He's correct. This type inference is actually being added to JaegerMonkey
according to the article, though IonMonkey will have it too when it's ready.

Crankshaft uses type feedback, not type inference. I.e., it will monitor the
types of objects at runtime to generate type-specialized code, but it will not
do static type inference on the code to do so.
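
The contrast can be made concrete with an illustrative snippet (the comments describe what each approach would learn; the code itself just runs):

```javascript
// Illustrative contrast: what each approach learns about `n`.
function sumTo(limit) {
  var n = 0;                // inference: literal 0, so n starts as an int
  for (var i = 0; i < limit; i++) {
    n = n + i;              // inference: int + int -> int (barring overflow)
  }
  return n;
}
// Type feedback instead *runs* the function, records that `n` and `i`
// were always int32 on past calls, and specializes on that observation,
// keeping a guard in case a future call breaks the pattern.
console.log(sumTo(10));     // 45
```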

------
Myrth
> By the time Firefox 9 reaches the Aurora channel at the end of September,
> though, type inference should just make your web surfing 20-30% faster —
> period.

Mmm... I think they've forgotten to factor in how much of the overall web
surfing experience JavaScript actually accounts for.

~~~
masklinn
Since it's Firefox we're talking about, this will likely (positively) impact
the browser's own chrome and any extensions you've installed, as well as
in-page JavaScript.

------
Joeri
Mobile optimization fail. Their iPad-optimized version cuts off part of the
content, and the well-hidden link to the desktop version doesn't work. And
that's sidestepping the whole issue that the interaction model of their iPad
version is worse than that of a regular website.

But it's nice to see that the race is still on, with each browser leapfrogging
the others every once in a while.

~~~
comex
Right, the article is unreadable (it doesn't scroll) on iPhone.

