
“The 'use asm' pragma is not necessary to opt into optimizations in V8” - elisee
https://code.google.com/p/v8/issues/detail?id=2599#c16
======
kevingadd
Sadly V8's dev team has yet to address the other problem that "use asm" solves
- falling out of JIT sweet spots due to broken heuristics. If you maintain a
large machine-generated JS codebase (like I do by proxy, with my compiler), it
is a regular occurrence that new releases of Chrome (and Firefox, to be fair)
will knock parts of your code out of the JIT sweet spot and suddenly start
optimizing other parts. Sometimes code that was fast becomes slow for no
apparent reason; other times slow code becomes fast, and now you look at
profiles and realize you need to remove caching logic, or that your code would
be faster without an optimization.

The arms race never ends, and keeping up with it is a full-time job. asm.js
fixes this, by precisely specifying the 'sweet spot' and giving you a
guarantee that if you satisfy its requirements, _all_ your code will be
optimized, unless the VM is broken. This lets you build a compiler that
outputs valid asm.js code, verify it, and leave it alone.

These days I don't even have time to keep up with the constant performance
failures introduced by new releases, but JSIL is a nearly two-year-old project
now and they cropped up regularly the whole time. Ignoring the performance
failures isn't an option because customers don't want slow applications (and
neither do I).

~~~
ori_b
> The arms race never ends, and keeping up with it is a full-time job. asm.js
> fixes this, by precisely specifying the 'sweet spot' and giving you a
> guarantee that if you satisfy its requirements, all your code will be
> optimized, unless the VM is broken.

I'm not convinced that 'use asm' helps with that at all. Static compilation of
the sort that asm.js gives you is _also_ full of heuristics. Even in C, you've
got that sort of effect.

Tweak the inlining cost model, and suddenly your frequently called accessor
function now has function call overhead, slowing everything down. Or maybe it
decides that the cost of padding vectors is too high, and autovectorization
shouldn't be applied now. Or any of dozens of similar heuristics.

It doesn't matter if you have a 'use asm'; tweaking the compiler will change
the heuristics and boot you out of the sweet spot.

~~~
LukeShu
Yes, but that set of heuristics isn't changing continuously.

You get one build that is targeted at a specific compiler. Sure, upgrading the
compiler might change what you need to optimize, but that happens when you,
the developer, decide to do it, as opposed to when the user decides to
upgrade their browser.

You can say "the JS sitting on the server, being served to clients, is fast
with asm.js." That won't change until you re-compile it. With a JS JIT, you
can't know that.

~~~
ori_b
> "the JS sitting on the server, being served to clients, is fast with asm.js"

Which asm.js compiler? The one in Firefox today? The one tomorrow with
different inlining heuristics? The one that Opera decides to put in? The one
for ARM or for x86?

~~~
kevingadd
A broken inlining heuristic is not going to have the kind of performance
consequences for an asm.js app that you get when you get dropped into the
interpreter/baseline JIT in a JavaScript app today.

A function not getting inlined means the addition of function call overhead,
and that's it, in most cases. Dropping into an interpreter or baseline JIT can
literally produce a 100x slowdown, or worse.

Any conforming asm.js implementation that does AOT compilation will produce
predictable performance. Yes, individual implementations will differ, but the
same is true when you compile applications against MSVC, Clang, and GCC. You
don't see anyone arguing that Clang and MSVC should have standardized inlining
heuristics, do you?

It's important to remember, also, that when we say things like 'JIT sweet
spot' we're not just referring to what code you wrote. It also matters what
code you run, in which order, with which arguments. Something as simple as
changing the order in which you call some functions (without changing the
actual code in those functions) can cause them to deopt in modern runtimes
because the heuristics are so sensitive to minor changes in data. Those kinds
of variance can be caused by something as simple as an HTTP request taking a
bit longer to complete and changing the startup behavior of your app.
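A minimal sketch (not from the thread, and deliberately simplified) of the kind of data-dependent behavior being described: the function body never changes, but the shapes of the objects a call site sees can flip it from monomorphic to polymorphic, which in real engines can trigger deoptimization.

```javascript
// Hypothetical illustration: same code, different data shapes.
function getX(point) {
  return point.x;
}

// Monomorphic: every object here has the same hidden class/shape,
// so engines can specialize the property access.
const fastPath = [{ x: 1 }, { x: 2 }, { x: 3 }].map(getX);

// Polymorphic: the call site now sees several distinct shapes
// ({x}, {x, y}, {y, x}), which in real engines can deoptimize getX
// even though its source code is identical.
const mixed = [{ x: 1 }, { x: 2, y: 0 }, { y: 0, x: 3 }].map(getX);

console.log(fastPath, mixed); // both compute [ 1, 2, 3 ]
```

The functional result is identical either way; only the engine's internal specialization decisions differ, which is exactly why this class of slowdown is invisible in the source code.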

~~~
ori_b
Does that happen measurably often on hot paths in reality, or is it just a
theoretical worry? Because it sounds like you're describing a JIT with broken
heuristics.

~~~
kevingadd
Yes, it happens measurably often on hot paths in reality. I have test cases
that produce it against modern builds of v8 and spidermonkey. Naturally, they
are not all real-world examples, but they're based on problems from real apps.

------
Danieru
From what I understand, writing fast JavaScript is hard because the engines
are improving so fast that no one knows what is fast yet. Thus asm.js is a
promise to developers: "Stay within the subset and your JavaScript will be
fast."

Yet it never was "use asm" which made asm.js fast. It was the JS engine. "use
asm" carries semantic knowledge, so a browser can warn if you break out of
the subset. It also serves as a strong hint that should make the optimizer's
job easier.
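For readers unfamiliar with the subset, here is a minimal sketch of what an asm.js-style module looks like (real Emscripten output also takes stdlib/foreign/heap parameters; this toy omits them). The pragma and the `|0` coercions are the "semantic knowledge": they declare integer types to a validating engine, while remaining plain, valid JavaScript everywhere else.

```javascript
// A tiny asm.js-style module. "use asm" plus the |0 coercions mark
// integer types for an AOT-compiling engine; in any other engine this
// is just ordinary JavaScript and behaves identically.
function AddModule() {
  "use asm";
  function add(a, b) {
    a = a | 0;           // parameter a is an int
    b = b | 0;           // parameter b is an int
    return (a + b) | 0;  // return value is an int
  }
  return { add: add };
}

console.log(AddModule().add(2, 3)); // 5
```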

This is why I think asm.js is the future. Mozilla does not need buy-in from
Google, or Apple, or MS. Instead developers can compile to asm.js and their
code will run, and in time it will run faster.

So in effect Chrome supports asm.js, but they are not making a promise. I
think it would be better for the internet if they made this promise.

~~~
kybernetikos
I want "use asm" as a way of getting extra 'compile time' checking.

If I say "use asm" and do something that isn't optimizable by mistake, it
should give me an error rather than silently falling back to slow code.

~~~
aegiso
That's all very nice, but it ain't gonna happen because of backwards
compatibility. One of the main points of asm.js is that it runs everywhere the
same as plain JS, whether the platform knows about asm.js or not.

~~~
reubenmorais
What? It already happened. That's exactly what happens when you do something
that is not part of the asm.js subset in "use asm" code.
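A hypothetical illustration of that fallback behavior: the missing coercion below makes the module invalid asm.js, so a validating engine (e.g. Firefox's OdinMonkey) reports a validation warning and compiles it as ordinary JavaScript instead. Either way the code still runs and produces the same result, which is why no backwards compatibility is broken.

```javascript
// Falling out of the asm.js subset: the uncoerced return value fails
// asm.js validation, but the function is still valid plain JavaScript,
// so non-validating (or warning) engines run it normally.
function BrokenModule() {
  "use asm";
  function inc(a) {
    a = a | 0;
    return a + 1; // invalid asm.js: return value lacks the |0 coercion
  }
  return { inc: inc };
}

console.log(BrokenModule().inc(41)); // 42
```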

~~~
aegiso
I don't use Firefox, but if that's the case, then asm.js can no longer claim
to be "just JavaScript". I wasn't expecting that one.

Interestingly enough, this makes the V8 approach all the more welcome since it
doesn't break backwards compatibility.

~~~
yohanatan
It sounds like you missed the distinction between a) displays an error _and_
falls back to slower code and b) does not display an error and _silently_
falls back to slower code. Neither of these options breaks compatibility
with JavaScript proper (and yet one of them is far more desirable than the
other).

------
cliffbean
Getting the Citadel demo to run at 60fps is impressive. It means that V8 is
"fast enough" to keep up with the video card on that application.

However, the benchmarks at [0] clearly show that this is not the end of the
story. The "workload0" runs measure startup time. All the other workloads show
runtime performance, and V8 is still quite a ways behind.

[0]
[http://arewefastyet.com/#machine=11&view=breakdown&suite=asmjs-apps](http://arewefastyet.com/#machine=11&view=breakdown&suite=asmjs-apps)

~~~
zobzu
It's interesting to see how much faster Firefox is than Chrome in these
benchmarks - quite a turn of events.

------
TheZenPsycho
A point some people seem to be missing in this thread which I would like to
emphasize is the value of _predictable_ performance.

It is true that a JIT is in principle, capable of all the same things as an
AOT compiler. It is true, that improving the speed of a JIT is valuable.
However, that all glosses over the fact that a JIT is a black box. If I am
working on an application _today_ that has a mysterious slowdown after running
for about 80 seconds, the promise that it won't happen in _next year's_
browser release is of no use to me whatsoever.

In fact, I would prefer that the JIT run at the same constant slow speed
instead of starting out fast and giving a false sense of performance. I can
OPTIMISE for that. I can work with that. I can't work with a JIT that can't
decide how fast it's going to run, from release to release and from second to
second. It's fine for most applications, but if I absolutely positively need
to generate a frame at a constant fixed rate, an unpredictable JIT is a huge
liability no matter how theoretically fast it can go on benchmarks. Stability
trumps raw performance.

------
nonchalance
[http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html](http://mrale.ph/blog/2013/03/28/why-asmjs-bothers-me.html) suggested
that this would happen:

> When I sit down and think about performance gains that asm.js-implementation
> OdinMonkey-style brings to the table I don’t see anything that would not be
> possible to achieve within a normal JIT compilation framework and thus
> simultaneously make human written and compiler generated output faster.

~~~
devx
But would they have done this without Mozilla pushing performance with
asm.js, or done it as quickly? Competition is great, and it also makes me
wonder: if they keep this up, maybe NaCl won't be needed either?

~~~
aboodman
64-bit integers are one small example of something you can't do no matter how
fast you make JavaScript.

~~~
pcwalton
JavaScript needs 64-bit integers no matter what; the Node folks have been
clamoring for it for a long time. We need to add them. Same with SIMD.

JavaScript is not some immutable thing. It needs enhancing, not replacing.

------
s-macke
I had the same experience with my hand-optimized asm.js code.
[http://s-macke.github.io/jor1k/](http://s-macke.github.io/jor1k/) In Chrome
it runs as fast as the asm.js does in Firefox. Chrome is optimizing it really
well.

------
k_bx
I think this only shows that asm.js or its tests are far from perfect right
now.

