
Mozilla can produce near-native performance on the Web - BruceM
http://arstechnica.com/information-technology/2013/05/native-level-performance-on-the-web-a-brief-examination-of-asm-js/
======
npalli
BTW, the C/C++ code being compared has multithreading and SIMD disabled. On
the third page they benchmark asm.js against native code with multithreading
and SSE enabled, and it shows up to a 50x slowdown! Not exactly sure what the
point of a benchmark without multithreading is. Anyone writing
performance-sensitive apps will use MT, right? (Given that JS is
single-threaded, this seems like a dealbreaker.)

[http://cdn.arstechnica.net/wp-content/uploads/2013/05/classi...](http://cdn.arstechnica.net/wp-content/uploads/2013/05/classic-native-v-optimized-v-asmjs.png)

~~~
crabasa
Is it fair to describe JS as single-threaded when all modern browsers support
web workers?

Related: Why does it seem like web workers don't exist? I can hardly think of
any popular libraries or web apps that make use of them, despite the seemingly
obvious utility.

~~~
kybernetikos
After getting very excited and doing a bunch of work, we did a lot of
performance tests and discovered that for our use case all they did was
increase processor load for the same work - the penalty associated with
serialising and deserialising things to go across postMessage was greater than
the work we were getting done on the other thread.

Of course this was in the early days, and it'd be good to retest it to see if
it's better now.

I think that there are a lot of interesting use cases for WebWorkers, but you
also have to be aware that for some things they will actually slow down your
application.
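
Roughly the pattern we were measuring, as a minimal sketch (the worker script
name and payload are made up):

    // main.js -- spin up a worker and round-trip some data to it
    var worker = new Worker('physics-worker.js');

    worker.onmessage = function (e) {
        console.log('result from worker:', e.data);
    };

    // Every postMessage structured-clones its payload in both directions,
    // so for small work items the copy can cost more than the work itself.
    worker.postMessage({ positions: [0, 1, 2], velocities: [3, 4, 5] });

    // Transferable ArrayBuffers avoid the copy where the browser supports
    // them, at the cost of the buffer becoming unusable on the sending side:
    // worker.postMessage(buf.buffer, [buf.buffer]);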

~~~
i_s
It is still not good. I've started working on an asteroids game [0], and I
tried moving my physics calculations to a background thread, but it ended up
being slower.

0 <http://www.isaksky.com/asteroids/>

------
ux-app
For most applications JS doesn't need to get any quicker. What makes web apps
feel like swimming through molasses is the DOM.

Even something as simple as getting a Div to follow the mouse on anything but
the simplest of pages is impossible to do without incurring ridiculous lag.

I don't know enough about them, but it seems that Web Components may offer
some answers by encapsulating mini document trees which presumably exist in
isolation from other components, thereby reducing the penalties of dynamic
CSS.
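
To be concrete, I mean something like this (a minimal sketch; the element id
is made up):

    // An absolutely positioned element tracking the cursor.
    var box = document.getElementById('follower');

    document.addEventListener('mousemove', function (e) {
        box.style.left = e.clientX + 'px'; // style writes on every mouse move,
        box.style.top = e.clientY + 'px';  // each one invalidating layout
    });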

~~~
just2n
Like this? <http://jsfiddle.net/eymAS/2/>

The DOM is faster than almost everyone believes. It's not 2001 anymore, guys.

Web components solve a lot of major problems, but they have little to do with
DOM/CSS performance and more to do with encapsulation (i.e., so your
CSS/HTML/JS don't break a component). DocumentFragments and insertAdjacentHTML
solved more performance problems for the DOM than most other API changes have,
and smarter compositing/painting algorithms in browsers (along with hardware
acceleration) have made it possible to do some pretty ridiculous stuff.
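
For example, batching inserts through a fragment instead of appending nodes
one at a time (a minimal sketch; the element id is made up):

    var list = document.getElementById('list');
    var frag = document.createDocumentFragment();

    // Build 1000 rows off-DOM, then insert them in a single operation.
    for (var i = 0; i < 1000; i++) {
        var li = document.createElement('li');
        li.textContent = 'item ' + i;
        frag.appendChild(li);
    }
    list.appendChild(frag);

    // insertAdjacentHTML parses and inserts markup in one call:
    list.insertAdjacentHTML('beforeend', '<li>one more item</li>');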

~~~
cpleppert
The problem is that the DOM is a hard upper limit on performance compared to
what an unsafe language (asm.js) can achieve. It is massively complex because
it must handle every UI task in the browser, and simply moving around DIVs
doesn't give you a feel for its performance in complex scenarios.

If you think about how native applications perform, and then imagine every one
having to go through a DOM interface and every button and UI interaction being
composed of DOM elements and re-styling these elements on every interaction,
you get a sense of what the problem is. There is simply no way to get the DOM
to do something it wasn't designed to do: it is a document display interface,
not a general graphical user interface. You have to cobble together a lot of
interactions to get the desired behavior, and not only does this increase the
overhead for both your program and the DOM, you are also going to hit cases
where the DOM isn't optimized.

This doesn't mean that the DOM is inherently slow. But if you tried to offload
the UI work onto plain JavaScript, you wouldn't get the performance benefit of
doing it with asm.js or NaCl. You really need unsafe code to solve this
problem; asm.js and NaCl are only useful if you are going to do this, and
targeting the Canvas and WebGL gives you the ability to do that.
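
(By "targeting the Canvas" I mean drawing the UI yourself instead of building
it out of DOM elements -- a minimal 2D sketch; the canvas id and the
drawButton helper are made up:)

    var canvas = document.getElementById('app-canvas');
    var ctx = canvas.getContext('2d');

    // The app draws its own "widgets" rather than creating DOM elements.
    function drawButton(x, y, w, h, label) {
        ctx.fillStyle = '#ddd';
        ctx.fillRect(x, y, w, h);
        ctx.fillStyle = '#000';
        ctx.fillText(label, x + 8, y + h / 2);
    }

    drawButton(10, 10, 120, 32, 'Click me');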

~~~
vladmoz
What you describe -- "imagine every one having to go through a DOM interface
and every button and UI interaction being composed of DOM elements and re-
styling these elements on every interaction" -- is exactly how Firefox is
built. The desktop Firefox UI is all DOM elements (not HTML, but still the
same underlying DOM code).

Hovering over a toolbar button or a tab causes a CSS style rule for a :hover
effect to be applied. Opening a new tab uses CSS animation for the transition.
Moving tabs around manipulates the DOM and moves the tab elements (which are
themselves composed of images, labels, etc.).

And yet, the Firefox UI is plenty snappy -- yes, you can argue that there are
slowdowns and issues that need to be fixed, but no more or less than in many
"native" apps.

------
cliffbean
The fascinating thing about this is that they didn't write a new JIT for
asm.js. It's just their existing JIT with more information and a few new
tricks. Among other things, this means that it doesn't yet have a lot of the
fancy optimizations that C/C++ compilers typically have in their backends,
like clever instruction selection or register allocation. It's already
impressively fast, and it has the potential to get even faster.

~~~
ndesaulniers
> It's just their existing JIT with more information and a few new tricks.

SpiderMonkey is composed of the Baseline Compiler (JIT) and IonMonkey (JIT),
but a new AOT compiler was written for asm.js, called "OdinMonkey."

~~~
cliffbean
OdinMonkey uses the existing IonMonkey compiler to do the actual optimization
and code generation work.

This blog post has more information [0]; see especially the comments section
where the author responds to some questions and discusses the relationship
between OdinMonkey and regular JS compilation.

[0] [https://blog.mozilla.org/luke/2013/03/21/asm-js-in-firefox-n...](https://blog.mozilla.org/luke/2013/03/21/asm-js-in-firefox-nightly/)

~~~
ndesaulniers
oh, interesting! Good eye, thanks for pointing this out!

------
danbruc
I would much prefer having a sane new language replacing JavaScript instead of
this hack to improve performance. A new language could provide the same
performance advantage and make writing web applications much more pleasant.

Admittedly it is harder to introduce a new language across all (major)
browsers, but I think it would really be worth it.

~~~
callahad
I don't understand this stuff well enough, but doesn't Emscripten effectively
give you that? It converts LLVM bitcode into JavaScript, so if you can compile
it with LLVM, you can use it on the web.

~~~
axaxs
But that's like converting C into Python. Why not start with a new/existing
language that can be compiled and run at native speeds?

~~~
mortehu
Three such languages are called AMD64, IA32 and ARM machine code, and are
supported by Google Native Client (NaCl). I think they will be exceedingly
hard to beat when it comes to speed and loading time.

~~~
Yoric
They are fast, indeed. But by targeting NaCl, you are producing code that:

- will only run on Chrome;

- will only run on the hardware platforms for which you have developed.

This kind of kills all the fun in the web. By contrast, asm.js code will run
just about everywhere, today. And will get blazingly fast in ~12 weeks for
Firefox (a little later for Chrome and Opera).

~~~
ihnorton
> - will only run on the hardware platforms for which you have developed.

What fast browser JITs are actively developed for platforms other than ARM,
AMD64 and IA32? V8 does not support anything else, and while Firefox enables
SpiderMonkey for MIPS and SPARC [1], "unsupported" is not a very hearty
endorsement.

[https://developer.mozilla.org/en-US/docs/SpiderMonkey/1.8.8#...](https://developer.mozilla.org/en-US/docs/SpiderMonkey/1.8.8#Platform_support)

~~~
bzbarsky
SpiderMonkey works on PPC, for what it's worth; the TenFourFox project is
actively maintaining it there. They're usually a bit behind on porting the
JITs, but they do actively port them.

But the real issue with NaCl is captured by this thought experiment. Assume
NaCl had been developed in 1998, everyone had jumped on that bandwagon, and
the web in 2003 was full of NaCl blobs targeting the hardware architectures
that mattered in 2000-2001. Then ask yourself the following questions:

1) Would this have affected the choice of hardware for phones and tablets and
whatnot?

2) Would it be viable today to ship a web browser on an ARM system?

3) What makes us think that the currently-relevant set of hardware
architectures will still be the set we want to be using in 10-15 years? In 30
years?

The nice thing about asm.js or PNaCl compared to NaCl is that even if we ship
it right now and only run it right now on ARM/AMD64/IA32, if someone comes up
with a new hardware architecture they want to ship in consumer devices they
can simply implement a JS JIT for it (in the case of asm.js) or an LLVM
backend (for PNaCl), which is something they would need to do _anyway_ for
that "consumer devices" bit. On the other hand, if they have to deal with
legacy NaCl content they suddenly have to do hardware emulation or something
insane.

------
NinjaWarrior
> Emscripten builds taking between 10 and 50 times longer to compile than the
> native code ones.

Hey, this is fatal. With such massively long iteration times, you can't work
on actual big projects. Merely being able to execute 1,000,000 lines of C++
code is not enough. We care about the build time as well as the performance.

BTW, have you heard that the next Haswell processors will get only about a 5%
performance improvement? We should assume that there is no free lunch anymore.
One of the UNIX philosophies is already broken.

~~~
boyter
I thought the Haswell CPU was more about reducing power usage than adding
performance? I'd be curious to know how much effort was spent on power
consumption vs. performance.

------
mosqutip
It seems Mozilla and Google are moving more and more toward their own browser
specifications and mechanisms. Microsoft and Netscape did this in the 90s, and
it ended up causing nothing but pain.

Optimizing JS is interesting, but creating browser-specific applications out
of code meant for a web audience disturbs me. I don't want a repeat of the
first browser wars. Hopefully Mozilla will work toward W3C standards in this
effort, since Google clearly hasn't with Dart.

~~~
callahad
As far as I know, asm.js is a strict subset of JavaScript -- I don't believe
it's "browser-specific" in a meaningful way. Rather, it shows that it's
_possible_ to take a subset of JavaScript and make it run at near-native
speeds, which means that there's still a lot of room left for the VMs to
improve.

From glancing at the benchmarks in the Ars article, it looks like asm.js
generally runs around two to four times as fast as the same code without
asm.js-specific optimizations. This is a huge improvement, but it's not the
difference between something being blazing fast in one browser and unusably
slow in another.

~~~
ajuc
In some applications it doesn't matter, in some it's crucial. If your game
runs at 60 fps in Firefox and 15 fps in other browsers, it's effectively a
Firefox-only game.

Additionally, browser games can't have a blocking main game loop (because it
would hang the browser), so they typically use "requestAnimationFrame" to
trigger updates at the best sustained refresh rate available, as sketched
below. Even if only one frame out of every 10 in your game takes 20 ms instead
of 16 ms, your game won't run at 60 fps anymore, but at 45 fps or 30 fps,
because the browser is trying to choose a refresh rate that will work. And
that's a big difference in experience.
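
The usual shape of such a loop, for reference (a minimal sketch; update and
draw are hypothetical functions):

    // The browser calls "frame" once per display refresh (~16.7 ms at 60 Hz).
    // If update + draw ever exceed that budget, frames get dropped and the
    // effective rate falls to 30 fps or lower.
    var last = null;

    function frame(now) {
        if (last === null) { last = now; }
        var dt = now - last; // time since the previous frame, in ms
        last = now;

        update(dt); // hypothetical game-state update
        draw();     // hypothetical rendering call

        requestAnimationFrame(frame);
    }

    requestAnimationFrame(frame);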

------
ilaksh
I really disagree with the idea of ASM.js. I think that the whole asm.js thing
has got more people confused into thinking that they can't build fast
applications in regular JavaScript.

That is not true. JavaScript code is usually compiled to native code now and
is very fast -- generally much faster than Ruby and Python.

I wish Mozilla and other core browser teams would focus on better WebGL
support in more drivers and devices as well as better JavaScript engines --
for example Safari especially stands out with their deliberately crippled
performance, and Microsoft with their deliberately crippled featureset.

The great thing about the web is convenient APIs like WebGL and nice languages
like CoffeeScript. ASM.js negates that for no good reason. There are huge
opportunities for exciting WebGL games which don't require the most optimized
possible JavaScript, that just haven't been explored yet, and within a couple
of years ordinary hardware will go much faster anyway.

~~~
omaranto
What do you dislike about the idea of asm.js? (You mention that some people
think asm.js is evidence that you can't make things fast enough in regular
JavaScript, but that is something you dislike about people's reactions to
asm.js, not an issue with the idea of asm.js.)

~~~
ilaksh
It promotes that idea.

Also, asm.js negates the advantages of the nice web APIs and type-free
programming languages etc.

Read what I wrote.

~~~
prodigal_erik
Being able to use _any_ language is better than having to use _one_ language
littered with WTF pitfalls because one guy had to hack it together in a week.
And the debate about type checking and static analysis is far from settled.

------
hrktb
It feels like the most interesting use of asm.js would be to bring existing C
code into JavaScript and have it run at reasonable speeds. Things like small
(or not) database engines, authentication or encryption libraries, data
parsing utilities, etc...
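
With Emscripten, the JS side of that would look roughly like this (a sketch;
sha256_hex is a made-up exported C function, while cwrap and ccall are real
Emscripten helpers):

    // Wrap a compiled C function, e.g. char* sha256_hex(const char* input),
    // so it can be called like an ordinary JS function.
    var sha256 = Module.cwrap('sha256_hex', 'string', ['string']);

    console.log(sha256('hello world')); // runs the compiled C code in the JS VM

    // One-off calls can use ccall instead of keeping a wrapper around:
    // Module.ccall('sha256_hex', 'string', ['string'], ['hello world']);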

------
solomatov
Near-native performance is a very good thing, especially in fields such as
game development.

However, it's a pity that Mozilla doesn't address the main problem with the
web platform: the lack of a good platform and set of APIs, something like what
Java or .NET provide. There are a large number of good JavaScript applications
(Google Docs, Gmail, etc.), which were produced by heroic efforts by the teams
that created them. The same effort could have produced hundreds of other
pieces of useful software if JavaScript were replaced by something more
adequate.

~~~
BrendanEich
Look around you at the Web: all the top sites use JS heavily and without
"heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead
too).

If the old big-OO-frameworks really conferred such huge fitness advantages
over JS, you would not see them dead on the client. They would have been used,
and their plugins would have been supported better by the plugin vendors.

JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9
offers one) and not-too-OO-or-huge frameworks.

/be

~~~
solomatov
>Look around you at the Web: all the top sites use JS heavily and without
"heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead
too).

Java isn't used in the browser because of its awfully bad user experience and
bad security (personally, I disabled Java in the browser). JavaScript
applications just look much better and behave much more smoothly, which
matters more on the web than ease of development and performance. Web
applications are much more readily used than desktop applications, and that's
why we have to write them; it's just economic incentive. If we create a
desktop application, the user base we can sell to will be much smaller than if
the same application were provided on the web. I'd love to use technologies
which make me more productive than JavaScript, like JavaFX and Silverlight, or
at least Flash. However, there's a large percentage of users whose browsers
don't support them, and even if they do, they provide an inferior user
experience. I (and many other people) use JavaScript not because it's great
but because it's the only platform which is universally supported by browsers
and provides a superior user experience. That's the only reason to use it.

> without "heroic efforts" compared to Java (dead on the client) or .NET (WPF
> is dead too).

Most of the top sites are quite trivial in code complexity. They are more
complex design- and UI-experience-wise than code-wise. There are, of course,
complex web applications, like Google Docs, Cloud9, Gmail, Google Reader, but
they were created with decidedly heroic efforts, and they don't reach the
complexity of the top desktop applications. Where's the web-based Mathematica,
3DS Max, AutoCAD, IntelliJ? When will web-based office applications have the
performance of MS Office?

As an indicator of how complex these web applications are, you can look at how
many web frameworks have non-trivial collections, like HashSets, HashMaps,
TreeMaps, etc. Only the following frameworks support them: Closure Tools, GWT,
Dart. Most of the popular JS frameworks which are used by top sites don't use
them.

>JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9
offers one) and not-too-OO-or-huge frameworks.

I actually work at JetBrains (the creator of IntelliJ), and I can say that
Cloud9 provides an IDE experience from the 90s. The only meaningful features
it supports are code completion and error highlighting. It has no
refactorings, no find usages, and none of many other smart features. I think
JavaScript is to blame; such complex code is near impossible to write in a
language like JavaScript (because of its lack of a static type system).

~~~
BrendanEich
Quick reply (thanks for the well-formatted cited text!).

* Java didn't have the bad security rep until relatively recently. Java had nice-looking UX in the 90s (Netscape bought Netcode on this basis), much nicer than Web content. Didn't help.

* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.

* Large mutable-state collection libraries are an anti-pattern. Functional structures, when hashes and arrays do not suffice (and even there), are the future, for scaling and parallel hardware wins.

* Conway's Law still applies. Too often, bloated OO code is an artifact of the organization(s) that produced it. This applies even to open source (Mozilla's Gecko C++ code; we fight it all the time, including via JS). It definitely applies to Google (e.g., gmail, Dart at launch). Perhaps there's no other way to create such code, and we need such programs as constituted. I question both assumptions.

* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR. But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.

In sum, if the web ever becomes big-OO as Java and .NET fans might like, I
fear it will die the same death those platforms have on the client side.
Another example: AS3 in Flash, also moribund. These systems (even ignoring
single-vendor conflicts) were too static.

The Web is not the desktop. Client JS-based code can be fatter or thinner as
needed, but it is not as constrained as in static languages and their
runtimes. Distribution, mobility, full-stack/end-to-end (Node.js) options,
offline operation, multi-party and after-the-fact add-on and mash-up
architectures, social and commercial benefits of the Web (not just of the
Internet) -- all these change the game from the old desktop paradigm.

JS has co-evolved with the Web, while the big-OO systems have not. This might
still end up in a bad place, but so far I don't see it. JS can be evolved far
more easily than it can be replaced.

/be

~~~
solomatov
>* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You
won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will
see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons
behind these outcomes should be clear. They have little to do with JS lacking
Java's big-OO features.

I am actually not defending big-OO features (I think 90s-style big-OO is
obsolete). I like a mix of OO and functional programming and the results it
confers on code (see, for example, Reactive Extensions; it's very easy to
learn, expressive, and compact). The feature I miss in JavaScript, and which
platforms such as the JVM and .NET have, is ease of maintaining code, mainly
through a sound type system and languages created with tooling in mind.

>* Glad you brought up refactoring. It is doable in JS IDEs with modern,
aggressive static analysis. See not only TypeScript but also Marijn
Haverbeke's Tern and work by Ben Livshits, et al., at MSR.

The problem with algorithms similar to Tern's is that they work well only
until you use the reflective capabilities of the language. However, most
libraries do use them, and once that happens, algorithms such as Tern's infer
the useless type Object.

>But automated refactoring is not as much in demand among Web developers I
know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns"
approach that motivates auto-refactoring.

There are refactorings which can be useful in any language. My favorite one is
rename; I usually can't come up with a good name on the first attempt. Others
are extract/inline method (extract/inline variable is easy to implement in
JavaScript).

Other maintainability-related features are navigation to definition and find
usages. Unfortunately, language dynamism makes them imprecise, and code
maintenance becomes a nightmare, especially once you have > 30 KLOC of code.
You have to recheck everything manually, and it's very error prone. Tests can
help, but they also require substantial effort.

~~~
BrendanEich
Tern's static analysis is based loosely on SpiderMonkey's type inference,
which does well with most JS libraries.

Yes, some overloaded octopus methods fall back on Object. What helps the
SpiderMonkey type-inference-driven JIT is online live-data profiling, as
Marijn notes. This may be the crucial difference.

However, new algorithms such as CFA2 promise more precision even without
runtime feedback.

And I suggest you are missing the bigger picture: TypeScript, Dart, et al.,
require (unsound) type annotations, a tax on all programmers, in hope of
gaining better tooling of the kind you work on.

Is this a good trade? Users will vote with their fingers provided the tools
show up. In big orgs (Google, where Closure is still used to preprocess JS)
they may, but in general, no.

Renaming is just not high-enough frequency from what I hear to motivate JS
devs to swallow type annotation.

/be

~~~
solomatov
>And I suggest you are missing the bigger picture: TypeScript, Dart, et al.,
require (unsound) type annotations, a tax on all programmers, in hope of
gaining better tooling of the kind you work on.

In many cases types can be inferred. ML is able to infer almost all types in a
program (however, the algorithm requires that the language doesn't have
subtyping). Haskell has very good type inference which supports subtyping (you
declare very few types). Both have strong static type systems and don't tax
developers by making them declare every type. The algorithms used in Haskell
are complicated, but they can be implemented.

~~~
BrendanEich
I know about ML and Haskell but let's be realistic. Neither is anywhere near
ready to embed in a browser or mix into a future version of JS.

We worked in the context of ES4 on gradual typing -- not just inference (as
you imply, H-M is fragile) -- to cope with the dynamic code loading inherent
in the client side of the Web. Gradual typing is a research program, nowhere
near ready for prime time.

Unsound systems such as TypeScript and Dart are good for warnings but nothing
is guaranteed at runtime.

A more modular approach such as Typed Racket could work, but again: Research,
and TR requires modules and contracts of a Scheme-ish kind. JS is just getting
modules in ES6.

Anyway, your point of reference was more practical systems such as Java and
.NET but these do require too much annotation, even with 'var' in C#. Or so JS
developers tell me.

/be

------
eblade
It seems that the general idea of the web being slow is so polarizing that
none of us can even pinpoint where the problem is. I suspect the problem is a
multidimensional one.

------
zobzu
tl;dr: it's not as fast as native, but it's much better than AT expected, and
actually within 2x the speed of native.

Also, it's slightly slower (a few %) than regular JS on browsers without
asm.js support.

~~~
Arelius
> its slightly slower (a few %) than regular JS on browsers without asm.js
> support

You sure you meant what you said here?

~~~
kibibu
Check this out (from the asm.js spec):

    function add1(x) {
        x = x|0; // x : int
        return (x+1)|0;
    }

This declares x to be an int in asm.js, which allows a bunch of optimizations.
In a non-asm.js engine, it is an extra 2 operations.

It's not difficult to believe that a non-asm.js engine would be slower with
the type annotations than without, although I don't have any performance
numbers to check.
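
For context, in asm.js that function lives inside a module that opts in
explicitly. A minimal sketch along the lines of the spec's examples:

    function AddModule(stdlib, foreign, heap) {
        "use asm"; // opts the whole module into asm.js validation

        function add1(x) {
            x = x|0;          // parameter type annotation: int
            return (x + 1)|0; // result coerced to int, so a plain 32-bit add suffices
        }

        return { add1: add1 };
    }

    // In a non-asm.js engine this is still ordinary JS:
    var add1 = AddModule(window, {}, new ArrayBuffer(0x10000)).add1;
    add1(41); // 42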

~~~
azakai
It's not an extra two operations. An optimizing JIT like Crankshaft or
IonMonkey can remove the first |0 operation in code paths where it is not
needed (that is, where you call it with an integer), and can remove the second
|0 even more easily by simple inference - in fact, the second |0 allows the
JIT to emit a 32-bit addition, with no overflow checks. So it can make code
faster, with or without asm.js.

~~~
kibibu
Count the number of times you wrote "can" vs how many times you typed "does".
Similar stuff happens with C++ compilers all the time, where certain code
"can" help the compiler but, in practice, doesn't always.

The linked benchmarks _do_ show a slowdown with asm.js in IE10. Whether this
is due to type annotations or something else I don't know.

~~~
azakai
Are you saying it's as unpredictable as C++ performance then? I'll take that
;)

I wrote "can" because there are no guarantees, but in practice, this is
definitely done in Firefox and Chrome. See

http://www.arewefastyet.com/#machine=11&view=breakdown&#... - they don't get
that close to native speed without such optimizations.

Not sure what is going on in IE10, but I am more curious about IE11.

------
mekpro
It would be interesting to see an implementation of this on the ARM platform.
A Firefox OS-powered mobile device would be really promising with this
optimization.

~~~
fzzzy
asm.js is already enabled on Firefox for Android nightly builds. We hope to
have asm.js enabled in Firefox OS by the 1.2 timeframe (middle/end of 2013)

------
mratzloff
I think it's becoming clear that there is room for two classes of web
language: one for general use (JavaScript) and one for high performance
applications or games. Let's get a real language alternative, free of half
measures or compromises.

~~~
azakai
All languages and VMs are compromises: JavaScript, JVM, CLR, LLVM, Flash, etc.
Question is which compromise you prefer.

------
melling
What do developers want for a development environment these days? Is C/C++ the
next wave of web development? Tooling is a lot better than when I did it 15
years ago. Dart, however, seems like a friendlier environment, but it probably
won't be as fast as C++ (asm.js).

~~~
fhd2
> Is C/C++ the next wave of web development?

asm.js is certainly not targeted at writing web applications in C or C++. It's
more about supporting arbitrary languages and getting existing native code
bases (e.g. libraries and game engines) on the web.

There's some more background in this presentation:
<http://kripken.github.io/mloc_emscripten_talk/>

(That said, based on anecdotal evidence, C++ seems to work pretty well on
large projects where predictability and reliability matter. So for really
large, complex web apps, C++ might become a popular choice. Not really seeing
this happen though, as web applications generally handle the hard stuff in the
backend.)

> Dart, however, seems like a friendlier environment but it probably won't be
> as fast a C++(asm.js).

It probably doesn't have to be, most of the time. Hardly anything in the web
applications I've worked on was ever CPU-bound; the network is usually the big
bottleneck. We're just scripting the browser after all; most of the hard stuff
(rendering, storage engines, etc.) is already taken care of by native code.

------
grn
I wonder what the Internet would have looked like if it had developed the way
Alan Kay proposed (i.e., the browser as a mini operating system). It's quite
possible that we'd have had much better performance (among other things) from
the start.

------
ksk
Why don't they just insert regular C/C++ source code inside HTML and be done
with it? Eventually the browser is just going to be a download manager that
downloads executable code and runs it locally :P

~~~
iso8859-1
Emscripten can't compile LLVM yet, so you can't put the compiler in a web
page. And considering that a PDF reader is a couple of megabytes of
JavaScript, LLVM would probably be tens of megabytes.

------
p0nce
No, it can't. C is not the speed limit. Top native performance is achieved by
an ungodly combination of SIMD intrinsics, compiler-assisted optimization,
cache-aware data structures, and assembly.

------
jackmaney
Why don't they first produce near-Chrome performance for their browser? Every
time I use Firefox, it slows down to a snail's pace and eventually crashes if
there are more than 4--6 tabs open.

~~~
ricardobeat
Is your FF up to date? Since around v20 it feels way snappier than Chrome, at
least on OSX.

~~~
jackmaney
I switched to Chrome right around Firefox v3.6. I've tried it a few times
since, but I don't remember which version Firefox was on at the time.

------
namuol
Finally, I can write all my web apps in C/C++. Dream come true!

~~~
iso8859-1
Well, there still is no interface to the DOM. So it really depends on your
definition of "web app". It is quite comparable to applets IMHO. Except
applets are less of a hack. I wish Sun had invested more time in their
"sandbox".

------
leoc
> It's a limited, stripped down subset of JavaScript that the company claims
> will offer performance that's within a factor of two of native—good enough
> to use the browser for almost _any_ application.

Except for running webpages scripted with Dart. It's not fast enough for
_that_ application apparently. <https://news.ycombinator.com/item?id=3095519>

~~~
BrendanEich
Who knows how fast Emscripten+asm.js-compiled DartVM would be compared to
Dart2JS in 2013? I don't know yet, and apparently neither do you. JITs can be
tricky, but @azakai is having good results with Emscripten+asm.js'ed LuaJIT2.

What I wrote 593+ days ago was about the Dash memo's Microsoft-like strategy.
My point then was not that it couldn't ever be defeated by something like our
work on asm.js and Emscripten. Never say never. My point rather was that the
Dash strategy intentionally used Google's big resources to push a gratuitously
non-standards-based agenda at the expense of its Web-standards-based efforts.

Indeed, since then it has become clear that Google miscalculated. Dart even
gave up bignums (the int type) to support Dart2JS/DartVM equivalence, which I
think is a mistake. Bignums are actually on the JS standards-based agenda:

<http://wiki.ecmascript.org/doku.php?id=strawman:bignums>

[http://wiki.ecmascript.org/doku.php?id=strawman:value_object...](http://wiki.ecmascript.org/doku.php?id=strawman:value_objects)

See also

<https://bugzilla.mozilla.org/show_bug.cgi?id=749786>

Given all the time that has passed since 2010, Google champions in Ecma TC39
could easily have worked bignums into ES7 if not ES6.

At a higher level, by over-investing in Dart and under-investing in JS (the V8
team was moved from Aarhus to Munich and with the no-remoties rule had to be
rebuilt), Google has missed opportunities such as game-industry work we
announced at GDC with Epic. This is "ok", it's their choice, but I still say
that it is inherently much more fragmenting than any optimization of the
asm.js kind, and that it under-serves the standards-based Web.

Maybe in a few years we can evolve JS to incorporate whatever helps DartVM
beat Dart2JS, if there's any gap left. However, the idea that mutable global
objects in JS necessarily mean VM-snapshotting is impossible is simply false.
On the other hand, do we really need VM snapshots to speed up gmail startup?
LOL!

/be

~~~
mraleph
You can't simply asm.js'fy LuaJIT2, because it has an interpreter hand-written
in assembly and it generates native code to achieve peak performance. You can
only asm.js'fy a normal Lua interpreter, which is up to 64x slower than LuaJIT
on benchmarks when compared natively. So, with asm.js's roughly 2x penalty on
top, asm.js'fied Lua will be up to 128x slower. Does not sound that
impressive...

~~~
BrendanEich
Hi -- you make a good point, the one seemingly at issue in this tangent (but
not really the bone of contention).

As noted, I don't know which will ultimately prevail in pure performance:
DartVM or Dart2JS on evolved JS. In the near term, of course DartVM wins (and
that investment made "against cross-browser standards" was the strategic bone
of contention 594 days ago).

I do know that in the foreseeable term, we browser vendors don't all have the
ability to build two VMs (or three, to include Lua using LuaJIT2 or something
as fast, in addition to JS and Dart; or more VMs since everyone wants Blub
;-).

The cross-heap cycle collector required by two disjoint VMs sharing the DOM
already felled attempts to push Dart support into WebKit over a year ago.
Apple's Filip Pizlo said why here:

[https://lists.webkit.org/pipermail/webkit-dev/2011-December/...](https://lists.webkit.org/pipermail/webkit-dev/2011-December/018811.html)

Other browser vendors than Apple may have the resources to do more, but no
browser wants to take a performance hit "on spec". And Mozilla at least has
more than enough work to do with relatively few resources (compared to Apple,
Google, and Microsoft) on JS. As you've heard, asm.js was an easy addition,
built on our JIT framework.

So you're right, an optimizing JIT-compiling VM is not easily hosted via a
cross-compiler, or emulated competitively by compiling to JS. LuaJIT2 would
need a safe JIT API from the cross-compiler's target runtime, whether
NaCl/PNaCl's runtime or Emscripten/asm.js's equivalent.

Googling for "NaCl JIT" shows encouraging signs, although the first hit is
from May 2011. The general idea of a safe JIT API can be applied to asm.js
too. In any event, one would need to write a new back end for LuaJIT2.

Bottom line: we're looking into efficient multi-language VM hosting via asm.js
and future extensions, but this is obviously a longer road than C/C++ cross-
compiling where we've had good early wins (e.g., Unreal Engine).

