BTW, the C/C++ code being compared has multithreading and SIMD disabled. On the third page they benchmark asm.js against native code with multithreading and SSE enabled, and it shows up to a 50x slowdown! Not exactly sure what the point of a benchmark without multithreading is. I mean, anyone writing performance-sensitive apps will use MT, right? (Given that JS is single-threaded, this seems like a dealbreaker.)
Of course single-threaded code will lose to SIMD or threaded code. That's not saying anything surprising.
The important thing is to realize that until just recently - a few months or so - people did not believe that SINGLE-threaded JS code on the web could be close to native speed. The Ars article here even has "surprise" in the title when it concludes that in fact that is possible :)
So that is a crucial milestone. Is the job done? No, as you say, SIMD and threads are important. But we had to first get single-threaded performance to the right area.
Next, browser vendors need to work together to standardize stuff that will allow SIMD and more threading. This is definitely possible if everyone is interested in making the web faster, which I certainly believe is the case.
>The important thing is to realize that until just recently - a few months or so - people did not believe that SINGLE-threaded JS code on the web could be close to native speed.
Homer Simpson: Kids: there's three ways to do things; the right way, the wrong way and the Max Power way!
Bart: Isn't that the wrong way?
Homer Simpson: Yeah, but faster!
> Why is it hard? Because the founding idea of a single sandboxed standard for all platforms is unworkable in the real world. Not only does it mean all developers surrendering complete control to the people defining the standard (who also happen to be big players in many other related markets), it relies on the big players actually agreeing, which is often not in their interest.
The innovation in browsers and webapps is coming from the vendors each acting individually, and pulling standards bodies along with them.
If you, as a developer, are waiting around for them all to agree on any technology before leveraging it, then you are just paralyzed. Develop against WebGL and asm.js now to get early-mover advantages. If you wait for universal uptake, you'll never get anything done.
Demonstrably wrong? Remind us how many years it took to get rounded corners into CSS?
>The innovation in browsers and webapps is coming from the vendors
Yes, which does not contradict what I said at all: "the people defining the standard (who also happen to be big players in many other related markets)". The vendors are big industry players who have their own agendas that rarely coincide with my interests (yes even Mozilla). They also get to veto any proposal they don't like.
The Kinect has been available for years now, which is an age in the consumer tech space. Where is the open standard web API? Do you see how it might be difficult to get Mozilla, Google and Apple to agree to standardise, implement and support an API for proprietary Microsoft technology? Creating a specific API for every type of device (photo cameras, video cameras, accelerometers, touch screens, etc.) and not providing a low-level interface to hardware is the wrong route to take. High-level APIs are fine, but it is important to also provide the ability for people to extend platform support in new and unforeseeable ways, and that requires lower-level access.
WebRTC is another excellent example of conflicts of interest with Microsoft opposed to it because it challenges Skype (or because it is technically deficient, depending upon which side you are on). The difference between the web as a platform and real open operating systems is stark. If Windows had been like the web -- locked down with only a limited set of approved APIs -- then Skype could not have been created unless Microsoft decided to allow it. Substitute Microsoft for Apple/Google/Mozilla/Microsoft and you have the situation we have with the web.
Perhaps some people don't see this because of the tribal nature of technology discussions. People don't see the web as being restrictive because their 'team' (be that Apple or Google or Microsoft or Mozilla) has a say in it. So most arguments devolve into discussion about how X is blocking a proposal by Y, and the proposal is either good for the web or bad for the web depending on whether you favour X or Y. Few people seem to stop and wonder why we are creating a system where X can veto technologies, even for people who don't use a single product created by X.
>If you as a developer, are waiting around for them all to agree on any technology, before leveraging it, then you are just paralyzed. Develop against WebGL and asm.js now to get early mover advantages.
How does developing against asm.js and WebGL help me interface with new input devices? It doesn't help me one bit. All it does is let me create a faster, shinier version of current webapps (that only runs in certain browsers). This is a recurring argument though: the web will be a decent platform once we finally get $NEW_API_PROPOSAL that will fix everything 'once and for all'.
[EDIT: it looks like I may have misinterpreted the article; see replies]
To clarify, I believe they ran the benchmarks with SSE disabled. SSE is more than just SIMD; these days SSE is used for most (all?) floating-point calculations, even non-SIMD. If you disable SSE, you force floating-point ops to use the old x87 FPU, which has been discouraged by the CPU manufacturers for over 10 years.
In other words, disabling SSE is significantly crippling all floating-point performance.
Why would you assume they disabled SSE? Compilers generate SSE by default when optimizing, I would be very surprised if Peter Bright disabled it. And not only do normal C/C++ compilers generate SSE, but JS engines generate SSE as well. So SSE was being used on both sides of the comparisons here, I am pretty sure.
What was disabled was the C/C++ code that explicitly used SIMD intrinsics or SSE function calls.
Your comment has made me curious. I tried to think of some examples of performance sensitive apps that aren't amenable to multi-threading and I couldn't, but it seems like they could offer interesting areas for R&D. Can you share some examples?
Consider a single tab of a web browser, for instance. By specification, much of the code must run in the same thread. Keeping the semantics while getting code out of the thread is extremely difficult, often impossible.
Many things are hard to parallelize using current technology, and there indeed has been lots of research done on it. History goes back to the supercomputer era (Cray, Connection Machine, Transputer etc.)
The term of art is "inherently sequential" problems (vs "embarrassingly parallel").
In addition, the returns on the programming effort shrink quickly due to Amdahl's law as the amount of parallelism increases. E.g. if you have 100 cores, even if only 1% of the work in your program is sequential, your performance goes down the drain. Currently we're in the comfy phase of the curve with few cores...
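To make that concrete, here's a quick back-of-the-envelope with Amdahl's law (s = sequential fraction, N = core count; the numbers are just illustrative):

    speedup = 1 / (s + (1 - s)/N)
    s = 0.01, N = 100  ->  1 / (0.01 + 0.0099) ≈ 50x

So even a 1% sequential fraction caps you at roughly half of the ideal 100x.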
We've been making good progress with parallel-minded reimaginings of programming languages and computer architectures on one front: GPUs. On the CPU we are saddled with the inertia and backwards-compatibility lock-in of tools and culture (witness the lasting use of C++ as a tool to write parallel software).
If you have a multicore machine -- the only kind of machine that brings a benefit to multithreading -- and if your problem is embarrassingly parallel, you can usually get away with multiple processes (MapReduce etc). That way, your single-threaded process can run on many cores.
All modern web browsers (I'll try not to troll about IE) support web workers. However, the semantics of web workers is message-passing (by copy, most of the time), which is the semantics generally associated with multi-process rather than multi-threading. Most developers using threads expect some kind of memory sharing, which is impossible with web workers at the moment.
Shared workers (not implemented in any browser atm, I believe) will take care of some usage scenarios. Other alternatives are being explored (e.g. [Parallel JS](http://smallcultfollowing.com/babysteps/blog/2012/01/09/para...)). However, for the time being, don't expect any support from JS to do shared memory concurrency.
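Roughly, the message-passing style looks like this (worker.js is a made-up file name; the point is that data crossing postMessage is structured-cloned, not shared):

    // main.js: data sent to the worker is copied, not shared
    var worker = new Worker('worker.js');   // hypothetical worker file
    worker.onmessage = function (e) {
      console.log('result from worker:', e.data);
    };
    worker.postMessage({ numbers: [1, 2, 3, 4] });

    // worker.js: mutating e.data here has no effect on the caller's copy
    onmessage = function (e) {
      var sum = e.data.numbers.reduce(function (a, b) { return a + b; }, 0);
      postMessage(sum);
    };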
After getting very excited and doing a bunch of work, we did a lot of performance tests and discovered that for our use case all they did was increase processor load for the same work - the penalty associated with serialising and deserialising things to go across postMessage was greater than the work we were getting done on the other thread.
Of course this was in the early days, and it'd be good to retest it to see if it's better now.
I think that there are a lot of interesting use cases for WebWorkers, but you also have to be aware that for some things they will actually slow down your application.
1. It's hard to use them in libraries due to restrictions on how to load them; Opera, IE 10 and Safari 5 require a same-origin file, which makes it hard to drop a worker into a slow part of jQuery.
2. Message passing can be slow, and transferring only works for typed arrays, which take time to convert to (see the sketch after this list). Unless your calculations are much bigger than your data, or converting to typed arrays isn't a bottleneck, it may slow you down.
3. OO-based inheritance patterns make it very hard to put pieces of your code into the worker. If your slow thing involved a call to jQuery.ajax, you can't call jQuery from the worker because it references the DOM, but you can't pull out the ajax method either because it relies on $.extend and others, meaning you have to rewrite $.ajax from scratch (not too bad in this case) or do major surgery on the library.
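For reference, here's a rough sketch of copy vs transfer (worker.js is made up; transferring applies to ArrayBuffers, and the sender's buffer is neutered afterwards):

    var worker = new Worker('worker.js');      // hypothetical worker file
    var buf = new Float64Array(1e6).buffer;    // ~8 MB of data

    // Copy: the whole buffer is structured-cloned (can be slow for big data)
    worker.postMessage({ buf: buf });

    // Transfer: ownership moves to the worker, nearly zero-copy,
    // but buf.byteLength is 0 on this side afterwards
    worker.postMessage({ buf: buf }, [buf]);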
> anyone writing performance sensitive apps will use MT right?
No, multithreading is not that simple - usually it's the last trick you should try to pull, and even then only if the stars align just right.
You need to have expert programmers on hand (of the rare breed who can pull this off), the good luck to succeed, and the time budget to spend on the big code rearchitecting and experimentation.
If you're running on something other than modern consoles, you don't know how many cores your users will have; the average might be 2, so your average returns on the effort will suck (vs spending equivalent effort elsewhere).
Witness modern web browsers, for example, where Google, Mozilla, Apple and Microsoft employ some of the best C++ programmers in the world, routinely capable of pulling off heroic feats, and they haven't seen it worth the effort yet despite Core 2 and Athlon X2 hitting the mainstream desktop around 2005-2006, 8 years ago.
I think you are overstating the problems of writing multithreaded code a little here. Yes, it's a lot harder than single-threaded code; yes, you need to know what you are doing; yes, you need to carefully design your code to enable efficient and safe multithreading; and no, you don't get speed for free. All true, but you are making it sound as if writing multithreaded code is rocket science, some kind of mad skill only the top 0.5% of programmers can handle. It's not that bad, you know... If you're a half-way decent programmer and don't litter your single-threaded code with globals, shared memory and functions with side effects (which you shouldn't anyway), parallelizing it after the fact is usually pretty straightforward. Maybe not optimal, but safe, and faster than single-core on multi-core hardware.
Sure, you can write the code if you're a half-way decent programmer. But getting it to work correctly and bug-free in the presence of nondeterministic behaviour and races is in the domain of a very small minority, when you're talking about large code bases.
I'm betting the minority is less than 0.5%. Half-way decent C++ programmers aren't a large percentage of all programmers, and people who can make a reliably working parallelized do-over of a nontrivial code base are maybe 1% of those. I'll say 0.05% max.
The challenge is to design your code in a way that either makes it impossible to end up with race conditions or non-deterministic behavior, or at least limits the number of possible points of failure. This requires a good understanding of which coding styles and data structures to avoid or pay extra attention to, but if you're aware of the pitfalls, it's actually not all that difficult. Just make sure your workers don't share any state, collect their outputs in a single thread-safe queue, use off-the-shelf components for threads, thread pools and locking mechanisms, be aware of things like multiple-reader-single-writer locks, and if you really can't avoid writes to shared state, only then start thinking about mutexes or semaphores. If you're smart about how you set up the code, more often than not you can get away with just some thread-safe queues. Take a look at Objective-C's block syntax and GCD, for example: it makes maybe 90% of tasks suitable for multithreading dead easy.
Your estimate that only 0.05% of programmers could write reliable multithreaded code is extremely pessimistic, cynical even. I almost feel flattered, having written a fair amount of multithreaded code that works perfectly myself, but I most definitely wouldn't dare count myself among the top 0.05% of programmers or whatever. My estimate is that with proper preparation, at least 50% of all programmers could learn to write reliable multithreaded code, and the remaining 50% probably wouldn't have a use for it anyway.
I was talking about the percentage of existing programmers with the abilities to pull it off. I'm sure more than 0.05% have the potential to learn it eventually, given enough time, motivation and opportunity. Like musicians & theremin!
You're making writing multi-threaded code sound like a herculean task. Sure, doing MT with mutexes is a pain, but programming with actors (message passing) is simpler. You can easily scale this for variable number of cores.
It really should be a 100x slowdown! Their "optimized" Linpack numbers (41 GFLOPs) are too low for this system. The CPU they use can deliver 108 GFLOPs at base frequency and 120 GFLOPs at max turbo frequency, and good LU implementations (e.g. MKL) can achieve >90% of these peak numbers.
For most applications JS doesn't need to get any quicker. What makes web apps feel like swimming through molasses is the DOM.
Even something as simple as getting a Div to follow the mouse on anything but the simplest of pages is impossible to do without incurring ridiculous lag.
I don't know enough about them, but it seems that Web Components may offer some answers by encapsulating mini document trees which presumably exist in isolation from other components, thereby reducing the penalties of dynamic CSS.
The DOM is faster than almost everyone believes. It's not 2001 anymore, guys.
Web Components solve a lot of major problems, but they have little to do with DOM/CSS performance and more to do with encapsulation (i.e., so your CSS/HTML/JS don't break a component). DocumentFragments and insertAdjacentHTML solved more performance problems for the DOM than most other API changes have, and smarter compositing/painting algorithms in browsers (along with hardware acceleration) have made it possible to do some pretty ridiculous stuff.
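For example, the DocumentFragment pattern mentioned above looks roughly like this (the 'results' container is made up); the point is to build off-document and insert once, so the browser does a single layout instead of one per row:

    var list = document.getElementById('results');    // hypothetical container element
    var frag = document.createDocumentFragment();
    for (var i = 0; i < 1000; i++) {
      var li = document.createElement('li');
      li.textContent = 'item ' + i;
      frag.appendChild(li);                            // off-document: no reflow yet
    }
    list.appendChild(frag);                            // one insertion, one layout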
So now we're going to say that in 2013 it is impressive that 160 rectangles can follow a mouse cursor? I see the advantages of client/server applications (as they used to be called), and presumably js/css apps will continue to get faster and asymptotically approach their client-only functional equivalents, but there is a thing about a hammer and all problems looking like nails that springs to mind when I see people claim that moving 160 rectangles in 16 ms / frame is 'scary fast'...
The parent's comment was that it is essentially infeasible to have a single div follow the cursor due to performance issues with the DOM. The fiddle includes 160 divs and opacity to demonstrate that that claim is false.
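For what it's worth, the single-div case really is just a couple of lines; a rough sketch (assuming a made-up, absolutely positioned #box element, and using a transform so each move stays on the compositor rather than triggering layout):

    var box = document.getElementById('box');   // hypothetical absolutely-positioned div
    document.addEventListener('mousemove', function (e) {
      // older browsers may need a vendor prefix, e.g. webkitTransform
      box.style.transform = 'translate(' + e.clientX + 'px,' + e.clientY + 'px)';
    });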
There's an awful lot of red herring going on in your post. It sounds like you have some problems with the web, but they most certainly aren't relevant to this discussion.
Oh I have no problems with the web, it has been providing me with a good living for well over a decade. Maybe that's my problem - that I remember the times when things that are being described now as 'great' or 'fast' were solved already, and my frustration is that instead of moving forward, we now have to repeat the last decade except with js/css before we can really start to innovate again.
Unreal engine in the browser? Seriously? Because of what - because it's easier to update people's clients when they download it again every time they visit your website? We had web launchers and auto-updating that solved everything the web brings as an advantage over desktop applications (in the use cases that require fast graphics, i.e. not CRUD line of business applications) years ago.
The elephant in the room here is that people want to push web applications because it's easier to make money from them, and you can make more of it, over a longer time. Look, I do it too; I know that the web beats the pants off client-only software from a business point of view. But let's call a spade a spade and not tiptoe around it with bullshit pseudo-arguments like 'ease of deployability' and 'social sharing' and 'portability' and 'platform-agnosticity'.
If an app that had no sustainable future as a native app becomes possible as a web app, don't users also benefit? The fact is that many people are willing to pay for a service they continue to get value from month after month. And the recurring revenue also usually comes with recurring support because companies have to keep winning customer loyalty in a way they never have to with a one time sale. I think it is a net win for both customers and the companies selling the services.
And developers win because there are more viable businesses which need their skills over the long term.
Should it be possible to sell a native app as a service? Maybe, but supporting a native app professionally is much more difficult than you seem to be acknowledging. Have you ever supported a native app in the wild across a wide variety of generic PC hardware for novice users?
"If an app that had no sustainable future as a native app becomes possible as a web app, don't users also benefit?"
Of course they do, therefore charging for software (desktop & web alike) = good. Make no mistake on which side in that debate I'm on. But let's call it like that.
"And developers win because there are more viable businesses which need their skills over the long term."
Sure, and it's no more than fair, and the natural order to boot.
"Have you ever supported a native app in the wild across a wide variety of generic PC hardware for novice users?"
I'm glad you ask, because actually yes I do, and it's a pain in the ass for the most part. And for the parts where it makes sense, I have moved parts of the services we offer to web applications. Therefore I feel I'm experienced enough to compare the advantages and disadvantages of the two, and from that I criticize the developments of people wanting to shoehorn everything into the browser, either because of covered up ulterior motives, or because it's all they know and want to know. But again - then say so, and don't sugarcoat it under the fuzzy 'it's better for the user' rhetoric that is so prevalent.
I should add that another reason can be "because I can", which is a perfectly fine reason too, but then label it as such so that naive greenhorns don't start pushing production systems in this direction, requiring others to then deal with the fallout.
The problem is that the DOM is a hard upper limit on performance compared to what an unsafe language (asm.js) can reach. It is massively complex because it must perform every UI task in the browser, and simply moving around DIVs doesn't give you a feel for its performance in complex scenarios.
If you think about how native applications perform, and then imagine every one having to go through a DOM interface, with every button and UI interaction composed of DOM elements and those elements re-styled on every interaction, you get a sense of what the problem is. There is simply no way to get the DOM to do something it wasn't designed to do, which is to be a document display interface and not a general graphical user interface. You have to cobble together a lot of interactions to get the desired behavior, and not only does this increase the overhead for both your program and the DOM, you are also going to hit cases where the DOM isn't optimized.
What you describe -- "imagine every one having to go through a DOM interface and every button and UI interaction being composed of DOM elements and re-styling these elements on every interaction" -- is exactly how Firefox is built. The desktop Firefox UI is all DOM elements (not HTML, but still the same underlying DOM code).
Hovering over a toolbar button or a tab causes a CSS style rule for a :hover effect to be applied. Opening a new tab uses CSS animation for the transition. Moving tabs around manipulates the DOM and moves the tab elements (which are themselves composed of images, labels, etc.).
And yet, the Firefox UI is plenty snappy -- yes, you can argue that there are slowdowns and issues that need to be fixed, but no more or less than in many "native" apps.
> Would be great if Chrome had this option off by default.
Disabling Vsync causes page tearing artifacts (https://en.wikipedia.org/wiki/Page_tearing). It should always be enabled by default. Some new gpu drivers also have "adaptive vsync", where it's enabled by default but gets temporarily disabled when deadlines are not met.
I've been wondering why I keep hearing that the DOM is supposed to be slow. Don't people mean to say that rendering (i.e. mostly CSS) is slow? This fiddle is certainly not doing much with the DOM except setting style attributes.
The problem with the DOM is not that it's intrinsically slow, but that it's very easy to do slow things with it inadvertently. There are often N different ways to do things, due to the evolution of HTML. The newer approaches tend to be designed with things like GPU acceleration in mind (e.g. CSS transforms), whereas older methods often had lots of heavy interactions and computations because they weren't expected to be used dynamically.
Just like optimizing any software takes skill and knowledge, so too does writing HTML, whether dynamic or not. Tools for understanding what's going on with the DOM and with dynamic HTML are slowly becoming available as well, which will help with this.
People generally mean that their manipulation of the DOM is effectively slow. That's a perfectly accurate statement, but the problem is calling the DOM slow as a result. The real culprit is often that doing stupid things with the DOM causes style recalcs, reflows, layouts, compositing, painting, etc. Those are all comparatively expensive.
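A classic example of the kind of "stupid thing" meant here is interleaving style writes with layout reads, which forces a synchronous reflow on every loop iteration (a sketch; '.row' is a made-up selector):

    var items = document.querySelectorAll('.row');   // hypothetical elements

    // Slow: each iteration writes style.width, so the next read of
    // offsetWidth forces the browser to do a fresh layout
    for (var i = 0; i < items.length; i++) {
      items[i].style.width = (items[i].offsetWidth + 10) + 'px';
    }

    // Faster: batch all reads, then all writes (one layout total)
    var widths = [];
    for (var i = 0; i < items.length; i++) widths.push(items[i].offsetWidth);
    for (var i = 0; i < items.length; i++) items[i].style.width = (widths[i] + 10) + 'px';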
> What makes web apps feel like swimming through molasses is the DOM.
I would contend that network latency and the plethora of domains queried are a greater impediment to a speedy web. DOM manipulation may be absurdly slow, however it cannot hold a candle to latency hold-ups.
Most readers on HN live in a quiet and peaceful bubble of high-quality network connections, but the majority of the world is beginning to wake up to the web and they'll have to endure latency issues that we haven't experienced since "broadband" was a new concept.
| DOM manipulation may be absurdly slow however
| it cannot hold a candle to latency hold-ups.
Just an anecdote, but jQueryUI + IE8 took 8 seconds to style all of the buttons we had on a page during one call. Granted, we had a ton of buttons (most were hidden), but that's a lot of time. The same call on Firefox and Chrome were fractions of a second (can't remember the exact numbers, but Chrome was faster at the time), and this was a couple of years ago. DOM manipulation can be a real drag.
But we do know how to handle bad latency, don't we? Speculative loading, which got a bad name because it makes wallhacks on FPSes work but really has almost no downside once the client can't do anything malicious with a little information the human isn't using yet.
If the real problem isn't latency but bandwidth or bandwidth caps, that will require other solutions.
For most applications, the DOM is the weak point because manipulating the DOM is as ambitious as the apps get. The exciting part of asm.js is not that it can speed up existing apps; it's all the cool new things you can do on the client side now that were simply too slow to be possible before.
Not sure I agree with this. Very few web applications are CPU bound to the point of locking the UI. Web Workers are extremely useful, however very few current web apps benefit from them. Given they cannot interact with the DOM they are limited to pure number crunching. Great if you're ray tracing or encoding/decoding sound or video, not much use to the other 99% of web apps.
My understanding is that parsing CSS rules and updating the DOM are incredibly costly operations which none of the browser vendors have yet decided to optimise.
> The reason they're slow is that they're also extremely complex and slow operations.
In that case it seems that we need something new to replace CSS for layout.
Or maybe a 'strict mode' equivalent for CSS where styling is separated from layout. In this mode rules that affect style (bold, font etc) cannot affect container dimensions. This mode would include a subset of rules that affect only layout (position, width, height etc).
This would presumably greatly reduce the impact of the parse/dom traversal/paint cycle of CSS.
> In that case it seems that we need something new to replace CSS for layout
There's so much that CSS (and browsers in general) do that we take for granted, which is good in one sense but bad since you can't dip below those abstractions and grapple with the layout engine (or garbage collector, or HTML parser, or selector engine) at a low level. You get the whole ball of wax and all that stuff runs all the time, even if you don't actually need it. In my view this is essentially what separates HTML apps from native ones.
I totally think CSS layout could use some improvements, but isn't the separation you're talking about here already pretty strong?
I'm struggling to think of an example of a situation where text styles affect layout on containers where dimensions are anything other than 'auto'. And any situation at all where text styles affect positioning.
An example from my experience: auto-complete filtering. Loading a list of the user's Facebook friends (which can number into the thousands) and filtering as they type.
When that's on the main UI thread it's hideous: the page stops responding to typing, so people hit the key twice, and the whole page feels like it's hanging. It's not constant CPU usage, but it's CPU usage at the exact moment the user expects the site to react immediately. Same goes for calculations when a user is swiping, or something like that.
To add to ricardobeat, it's all repaints and recalculations. Web devs have become overzealous about smashing JS into every nook and cranny to manipulate the DOM; it seems to be a battle Google is trying to educate people about, and good on them.
Open dev tools in Chrome, then click Network/Frames, hit record and refresh the page. Do this on a sluggish site that uses tons of JS and you will see the problem.
> Very few web applications are CPU bound to the point of locking the UI.
It comes up regularly around here. We've had to resort to all kinds of ugly workarounds (such as chunking work and running small pieces off setTimeout() or requestAnimationFrame). A simple thing like changing fonts on the Canvas more than a couple of times can kill any hope of a responsive UI unless you very carefully manage your rendering, depending on the browser (Canvas font handling is OK in Chrome, and ridiculously slow in Firefox, for example).
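For anyone curious what "chunking work off setTimeout()" looks like, a rough sketch (processItem, items and chunkSize are all placeholders):

    // Process a long list without blocking the UI: do a small slice per tick
    function processInChunks(items, processItem, chunkSize) {
      var i = 0;
      function tick() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) processItem(items[i]);
        if (i < items.length) setTimeout(tick, 0);   // yield back to the event loop
      }
      tick();
    }
    // e.g. processInChunks(hugeArray, doExpensiveThing, 200);  // hypothetical call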
The fascinating thing about this is that they didn't write a new JIT for asm.js. It's just their existing JIT with more information and a few new tricks. Among other things, this means that it doesn't yet have a lot of the fancy optimizations that C/C++ compilers typically have in their backends, like clever instruction selection or register allocation. It's already impressively fast, and it has the potential to get even faster.
Admittedly it is harder to introduce a new language across all (major) browsers, but I think it would really be worth it.
1. "Sane" is highly subjective. There is no single optimal language for all problems and audiences.
2. Pushing a new language to all major browsers is hard, like you said. Maybe even impossible, several huge corporations with complex politics are involved.
Google is trying this approach with Dart, and I personally don't think it's the panacea, nor do I expect all major browsers to support it.
So the only alternative I see is to have a very flexible VM that can run pretty much any language. asm.js proves that JS is actually quite good at that. I'd still prefer something like native client, but asm.js is highly promising: It's downwards compatible and has some potential for optimisation at the same time.
That's what asm.js is for, meant as bytecode for compilers.
"from whatever language you want" - in practice, not really. If you have a language that relies on gc or some kind of vm then you are also going to have to deliver all the asm.js code for the vm. That's potentially a lot of code to deliver to the browser, could you use threads efficiently in your implementation etc. There's no story yet for anything beyond a statically compiled language and even if there is a story you would probably have to wait a long time to see whether it would deliver anything in practice.
I'm not entirely sure myself, but it seems to be a bit more involved than that. I recently came across a thread discussing how to add support for Objective-C, and it seems the approach is to compile the Obj-C to C++... I think only C and C++ are supported at this point. But I'd bet they eventually get there.
The bigger problem however, is that most of the commonly used languages don't just compile to native executables. They include a runtime handling stuff like garbage collection and just-in-time compilation. Since these runtimes are usually written in C or C++, we'll get there if C compiled to asm.js gets fast enough. That's one of the goals of Emscripten :)
There's a demo showing off various language runtimes compiled to asm.js. I'm not sure how they perform in comparison with JS, but the REPLs work pretty well.
Three such languages are called AMD64, IA32 and ARM machine code, and are supported by Google Native Client (NaCl). I think they will be exceedingly hard to beat when it comes to speed and loading time.
> - will only run on the hardware platforms for which you have developed.
What fast browser JITs are actively developed for platforms other than ARM, AMD64 and IA32? V8 does not support anything else, and while Firefox enables SpiderMonkey for MIPS and SPARC, "unsupported" is not a very hearty endorsement.
SpiderMonkey works on PPC, for what it's worth; the TenFourFox project is actively maintaining it there. They're usually a bit behind on porting the JITs, but they do actively port them.
But the real issue with NaCl is this thought experiment. Assume NaCl were developed in 1998 and everyone had jumped on that bandwagon and the web in 2003 were full of NaCl blobs targeting the hardware architectures that mattered in 2000-2001. Then ask yourself the following questions:
1) Would this have affected the choice of hardware for phones and tablets and whatnot?
2) Would it be viable today to ship a web browser on an ARM system?
3) What makes us think that the currently-relevant set of hardware architectures will still be the set we want to be using in 10-15 years? In 30 years?
The nice thing about asm.js or PNaCl compared to NaCl is that even if we ship it right now and only run it right now on ARM/AMD64/IA32, if someone comes up with a new hardware architecture they want to ship in consumer devices they can simply implement a JS JIT for it (in the case of asm.js) or an LLVM backend (for PNaCl), which is something they would need to do _anyway_ for that "consumer devices" bit. On the other hand, if they have to deal with legacy NaCl content they suddenly have to do hardware emulation or something insane.
> Emscripten builds taking between 10 and 50 times longer to compile than the native code ones.
Hey, this is fatal. With such massively longer iteration times, you can't work on actual big projects. Merely being able to execute 1,000,000 lines of C++ code is not enough. We care about the build time as well as the performance.
BTW, have you heard that the next Haswell processors bring only about a 5% performance improvement? We should assume that there is no free lunch anymore. One of the UNIX philosophies is already broken.
It seems Mozilla and Google are moving more and more toward their own browser specifications and mechanisms. Microsoft and Netscape did this in the 90s, and it ended up causing nothing but pain.
Optimizing JS is interesting, but creating browser-specific applications of code meant for a web audience disturbs me. I don't want a repeat of the first browser wars. Hopefully Mozilla will work toward w3 standards in this effort, since Google clearly hasn't with Dart.
From glancing at the benchmarks in the Ars article, it looks like asm.js generally runs around twice to four times as fast as the same code without asm.js-specific optimizations. This is a huge improvement, but it's not the difference between something being blazing fast in one browser and unusably slow in another.
In some applications it doesn't matter, in some it's crucial. If your game runs at 60 fps in firefox and 15 fps in other browsers, it's effectively firefox-only game.
Additionally, browser games can't have a classic main game loop (because it would hang the browser indefinitely), but often use requestAnimationFrame to trigger updates at the best sustained refresh rate available. Even if only one frame out of every 10 in your game takes 20 ms instead of 16 ms, your game won't run at 60 fps anymore, but at 45 fps or 30 fps, because the browser is trying to choose a refresh rate that will work. And that's a big difference in experience.
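A typical requestAnimationFrame loop looks something like this (update and render are placeholders for your own game logic); if the work inside ever blows the ~16.7 ms budget, you miss the next vsync and drop to 30/45 fps as described:

    function update(dt) { /* hypothetical game logic */ }
    function render()   { /* hypothetical drawing code */ }

    var last = performance.now();
    function frame(now) {
      var dt = now - last;           // ms since the previous frame; ~16.7 ms budget at 60 fps
      last = now;
      update(dt);
      render();
      requestAnimationFrame(frame);  // schedule the next frame at the display's refresh rate
    }
    requestAnimationFrame(frame);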
At Google I/O this year, Google indicated some interest in asm.js, and noted that since asm.js was released V8 has already gotten significantly faster in running asm.js code.
With regards to Dart, Google has said that they plan to get the language formally standardized after reaching the stable 1.0 release. It doesn't make sense to try to standardize something that's still in beta.
I think this is actually a very good thing. Before the next big thing can be standardized, it should be created. Google tried to create the next big thing by moving in several directions, among them both Dart and PNaCl (which, IMO, is the way to go: it provides good browser integration and 95% of native performance, with multithreading and other stuff). Mozilla, IMO, is more interested in keeping the status quo and resisting any significant innovation.
There are two kinds of innovation around technology platforms: you can innovate with the platform itself, by designing new operating systems, programming languages, browsers, libraries etc. Or you can innovate on top of the platform, by doing new things a level up. We benefit most from a balance of the two.
Near native performance is a very good thing especially in the fields such as game development.
Look around you at the Web: all the top sites use JS heavily and without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).
If the old big-OO-frameworks really conferred such huge fitness advantages over JS, you would not see them dead on the client. They would have been used, and their plugins would have been supported better by the plugin vendors.
JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9 offers one) and not-too-OO-or-huge frameworks.
>Look around you at the Web: all the top sites use JS heavily and without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).
> without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).
Most of the top sites are quite trivial in code complexity. They are more complex design- and UI-experience-wise than code-wise. There are, of course, complex web applications, like Google Docs, Cloud9, Gmail, Google Reader, but they were created with definitely heroic efforts, and they don't reach the complexity of the top desktop applications. Where's the web-based Mathematica, 3DS Max, AutoCAD, IntelliJ? When will web-based office applications have the performance of MS Office?
As an indicator of how complex these web applications are, you can take a look at how many web frameworks have non-trivial collections, like HashSets, HashMaps, TreeMaps etc. Only the following frameworks support them: Closure Tools, GWT, Dart. Most of the popular JS frameworks which are used by top sites don't use them.
>JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9 offers one) and not-too-OO-or-huge frameworks.
Quick reply (thanks for the well-formatted cited text!).
* Java didn't have the bad security rep until relatively recently. Java had nice-looking UX in the 90s (Netscape bought Netcode on this basis), much nicer than Web content. Didn't help.
* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.
* Large mutable-state collection libraries are an anti-pattern. Functional structures, when hashes and arrays do not suffice (and even there), are the future, for scaling and parallel hardware wins.
* Conway's Law still applies. Too often, bloated OO code is an artifact of the organization(s) that produced it. This applies even to open source (Mozilla's Gecko C++ code; we fight it all the time, including via JS). It definitely applies to Google (e.g., gmail, Dart at launch). Perhaps there's no other way to create such code, and we need such programs as constituted. I question both assumptions.
* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR. But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.
In sum, if the web ever becomes big-OO as Java and .NET fans might like, I fear it will die the same death those platforms have on the client side. Another example: AS3 in Flash, also moribund. These systems (even ignoring single-vendor conflicts) were too static.
The Web is not the desktop. Client JS-based code can be fatter or thinner as needed, but it is not as constrained as in static languages and their runtimes. Distribution, mobility, full-stack/end-to-end (Node.js) options, offline operation, multi-party and after-the-fact add-on and mash-up architectures, social and commercial benefits of the Web (not just of the Internet) -- all these change the game from the old desktop paradigm.
JS has co-evolved with the Web, while the big-OO systems have not. This might still end up in a bad place, but so far I don't see it. JS can be evolved far more easily than it can be replaced.
>* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.
>* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR.
The problem with algorithms similar to Tern's is that they work well until you use the reflective capabilities of the language. However, most libraries do use them, and as soon as that happens, algorithms such as Tern's infer the useless type Object.
>But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.
Another maintainability-related feature is navigate-to-definition and find-usages. Unfortunately, language dynamism makes them imprecise, and code maintenance becomes a nightmare, especially if you have > 30 KLOC of code. You have to recheck everything manually and it's very error prone. Tests can help, but they also require substantial effort.
>And I suggest you are missing the bigger picture: TypeScript, Dart, et al., require (unsound) type annotations, a tax on all programmers, in hope of gaining better tooling of the kind you work on.
In many cases types can be inferred. ML is able to infer almost all types in a program (however, the algorithm requires that the language doesn't have subtyping). Haskell has very good type inference which supports subtyping (you declare very few types). They both have strong static type systems and don't tax developers by making them declare every type. The algorithms used in Haskell are complicated, but they can be implemented.
I know about ML and Haskell but let's be realistic. Neither is anywhere near ready to embed in a browser or mix into a future version of JS.
We worked in the context of ES4 on gradual typing -- not just inference (as you imply, H-M is fragile) -- to cope with the dynamic code loading inherent in the client side of the Web. Gradual typing is a research program, nowhere near ready for prime time.
Unsound systems such as TypeScript and Dart are good for warnings but nothing is guaranteed at runtime.
A more modular approach such as Typed Racket could work, but again: Research, and TR requires modules and contracts of a Scheme-ish kind. JS is just getting modules in ES6.
Anyway, your point of reference was more practical systems such as Java and .NET but these do require too much annotation, even with 'var' in C#. Or so JS developers tell me.
With some optimizations unrealized: it's not like everything can use SIMD or that all problems which could theoretically use SIMD actually benefit. Comparing scalar code is still useful for the vast majority of programs executed.
Second, compilers also do auto vectorization (automatically vectorizing code that is not explicitly written to use vector types / SIMD intrinsics) and in addition SSE is used even for scalar floating point operations these days. It doesn't look like asm.js even lets you operate on single precision (32 bit) floating point values which can be a performance issue in some cases.
Last, web workers are less flexible than native multithreading with shared memory between threads and atomic instructions / mutexes etc.
These and other issues mean it will not be possible to squeeze as much performance (and as a result power savings) out of asm.js as from native code or PNaCl.
It's not an extra two operations. An optimizing JIT like Crankshaft or IonMonkey can remove the first |0 operation on code paths where it is not needed (that is, where you call it with an integer), and can remove the second |0 even more easily by simple inference - in fact, the second |0 allows the JIT to emit a 32-bit addition with no overflow checks. So it can make code faster, with or without asm.js.
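For context, the kind of code being discussed looks roughly like this (a hand-written sketch of asm.js-style integer coercions, not actual compiler output):

    function add(x, y) {
      x = x | 0;            // parameter coercion: tells the JIT x is an int32
      y = y | 0;
      return (x + y) | 0;   // result coercion: lets the JIT emit a plain 32-bit add
    }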
Count the number of times you wrote "can" vs how many times you typed "does". Similar stuff happens with C++ compilers all the time, where certain code "can" help the compiler but, in practice, doesn't always.
The linked benchmarks do show a slowdown with asm.js in IE10. Whether this is due to type annotations or something else I don't know.
Grandparent may be saying that, but that's not true.
The comparison that is close is code produced from a C compiler for normal JS, versus code produced from the same compiler and code for asm.js. BUT code that would be written by a human would be expected to be more compact and run much faster than either of those.
Furthermore, the code that they wrote would not run on all browsers, and if it did run, it often crashed the browser. Human-written code would not have those problems.
What do developers want in a development environment these days? Is C/C++ the next wave of web development? Tooling is a lot better than when I did it 15 years ago. Dart, however, seems like a friendlier environment, but it probably won't be as fast as C++ (asm.js).
asm.js is certainly not targeted at writing web applications in C or C++. It's more about supporting arbitrary languages and getting existing native code bases (e.g. libraries and game engines) on the web.
(That said, based on anecdotal evidence, C++ seems to work pretty well on large projects where predictability and reliability matter. So for really large, complex web apps, C++ might become a popular choice. Not really seeing this happen though, as web applications generally handle the hard stuff in the backend.)
> Dart, however, seems like a friendlier environment, but it probably won't be as fast as C++ (asm.js).
It probably doesn't have to most of the time. Hardly anything in the web applications I've worked on was ever CPU-bound, the network is usually the big bottleneck. We're just scripting the browser after all, most of the hard stuff (rendering, storage engines etc.), is already taken care of by native code.
I wonder what the Internet would have looked like if it had developed the way Alan Kay proposed (i.e. a browser as a mini operating system). It's quite possible that we'd have had much better performance (among other things) from the start.
I don't see how this is related, asm.js applications benefit from better WebGL support all the same. It's still JS running in the browser, it doesn't have direct access to any low level APIs.
I think you can build pretty neat web-based games even with plain JS, thanks to WebGL. Probably not with really high-end graphics or physics, but good enough. However, are you expecting game companies to rewrite their engines in JS just to get into the browser, not even knowing if it'll pay off for them? With asm.js, it's just another platform supporting C++ they can port to (C++ is ubiquitous in the industry). Mozilla and Epic are already working on porting Unreal Engine (Mass Effect and tons of others) to the web.
If browser games turn out to be the future, I'm sure we'll get engines focusing just on browser games, gradually rewriting large parts of their engines in JS. At the same time, we'll see better support for web-based games in consoles and on mobile devices.
Being able to use any language is better than having to use one language littered with WTF pitfalls because one guy had to hack it together in a week. And the debate about type checking and static analysis is far from settled.
Well, there still is no interface to the DOM. So it really depends on your definition of "web app". It is quite comparable to applets IMHO. Except applets are less of a hack. I wish Sun had invested more time in their "sandbox".
You seem to say "ran" which implies it was a some time in past.
When exactly was this? Also FF is moving very rapidly, I remember FF 19(possibly 20) with no addons being way slower on my laptop than Chrome while its nightly at the time - FF 22 (or 23-24) ran blazingly fast (with same addons as FF 19) on par with Chrome.
3.6 was a while ago, and I believe you didn't need add-ons to have the trouble you describe.
I use Firefox and Chrome every day. My results line up with recent HN comments about how Firefox is competitive, uses less memory at scale (lots of tabs), still janks worse, but may be actually more stable at the moment. Flash is a big source of instability still, for both browsers.
My bottom line: both Firefox and Chrome are competitive, leading-edge browsers. Firefox on desktops is pure open source, Chrome is not (if that matters to you; it does to me a bit, but I'm a realist). AwesomeBar wins for me over Omnibox.
You are far from the first Mozilla employee that has responded to my complaints about Firefox, but you are the only one thus far that has been civil. In fact, just a few days ago, I was trolled on twitter by a Mozilla employee over my responses here.
Sorry to hear about the trollery. I'm looking into it.
I wonder if you have a profile that goes back years and has some property that tickles a latent bug. Sorry if I missed it: have you tried running on a brand new profile? You'll have to use the -ProfileManager option on the command line at startup, I think.
My profile is apparently constantly being corrupted. If I have to wipe Firefox's data on me every other day because of profile corruption then what is the point? If addons are guaranteed to mess things up then why does Firefox have that feature?
Let's be honest, this isn't meant to be helpful, it's just meant to shift blame back to the user.
Who knows how fast Emscripten+asm.js-compiled DartVM would be compared to Dart2JS in 2013? I don't know yet, and apparently neither do you. JITs can be tricky, but @azakai is having good results with Emscripten+asm.js'ed LuaJIT2.
What I wrote 593+ days ago was about the Dash memo's Microsoft-like strategy. My point then was not that it couldn't ever be defeated by something like our work on asm.js and Emscripten. Never say never. My point rather was that the Dash strategy intentionally used Google's big resources to push a gratuitously non-standards-based agenda at the expense of its Web-standards-based efforts.
Indeed, since then it has become clear that Google miscalculated. Dart even gave up bignums (the int type) to support Dart2JS/DartVM equivalence, which I think is a mistake. Bignums are actually on the JS standards-based agenda:
Given all the time that has passed since 2010, Google champions in Ecma TC39 could easily have worked bignums into ES7 if not ES6.
At a higher level, by over-investing in Dart and under-investing in JS (the V8 team was moved from Aarhus to Munich and with the no-remoties rule had to be rebuilt), Google has missed opportunities such as game-industry work we announced at GDC with Epic. This is "ok", it's their choice, but I still say that it is inherently much more fragmenting than any optimization of the asm.js kind, and that it under-serves the standards-based Web.
Maybe in a few years we can evolve JS to incorporate whatever helps DartVM beat Dart2JS, if there's any gap left. However, the idea that mutable global objects in JS necessarily mean VM-snapshotting is impossible is simply false. On the other hand, do we really need VM snapshots to speed up gmail startup? LOL!
What about PNaCl? What's the reason you don't support it in Mozilla? It's based on open source tools, and can be easily integrated into any browser.
Your green-colored id says you are new here, and yet unwilling to do basic research on HN on the topic you ask about. I'll take this in the "Dear LazyWeb" spirit, assume that you are not trolling, and give some links. First, one of a few obvious searches:
Back to your comment: "It's based on open source tools" is true of Emscripten too.
The bit about "can be easily integrated into any browser" is false due to Pepper, the large new target runtime for *NaCl. Pepper is a non-standard and unspecified-except-by-C++-source plugin API abstracting over both the OS and WebKit -- now Blink -- internals.
To make such an airy assertion in an under-researched comment makes me suspect that you don't know that much about either PNaCl or "any browser". So why did you make that confident-sounding claim?
These days, large apps are written in JS, even by hand. GWT is not growing much from what I can tell, compared to its salad days. Closure is used the most within Google, and Dart has yet to replace Google's use of GWT + Closure.
Outside Google, hundreds of languages compile to JS (http://altjs.org/). CoffeeScript is doing well still. TypeScript is Microsoft's answer to Dart, and more intentionally aligned with the evolving JS standard.
"Does Mozilla has any plans to improve the situation here?"
Have you heard of ES4? Mozillans including yours truly poured years into it, based on the belief that programming-in-the-large required features going back to "JS2" in 1999 (designed by Waldemar Horwat), such as classes with fixed fields and bound methods, packages, etc.
Some of the particulars in ES4 didn't pan out (but could have been fixed with enough time and work). Others are very much like equivalent bits of Dart. One troublesome idea, namespaces (after Common Lisp symbol packages) could not be rescued.
But ES4 failed, in part due to objections from Microsofties (one now at Mozilla) and Googlers. In a Microsoft Channel 9 interview with Lars Bak and Anders Hejlsberg, Lars and Anders both professed to like the direction of ES4 and wondered why it failed. _Quel_ irony!
As always, Mozilla's plans to improve the situation involve building consensus on championed designs by one or two people, in the standards bodies, and prototyping as we specify. This is bearing fruit in ES6 and the rest of the "Harmony era" editions (ES7 is being strawman spec'ed too now; both versions have partial prototypes under way).
For programming in the large, ES6 offers modules, classes, let, const, maps, sets, weak-maps, and many smaller affordances. I hope this helps. Even more is possible, if only the browser vendors and others invested in JS choose to keep working on evolving the language.
>To make such an airy assertion in an under-researched comment makes me suspect that you don't know that much about either PNaCl or "any browser". So why did you make that confident-sounding claim?
Unfortunately, I can only evaluate technologies according to my experience and knowledge. According to my limited knowledge and experience, what PNaCl is evolving into seems like the way to go: I can use whatever language I like to write code; I can utilize the resources of computers efficiently; the code is executed in a sandbox; and it's based on non-proprietary standards (I mean LLVM, not Pepper).
>The bit about "can be easily integrated into any browser" is false due to Pepper, the large new target runtime for *NaCl. Pepper is a non-standard and unspecified-except-by-C++-source plugin API abstracting over both the OS and WebKit -- now Blink -- internals.
It's a new technology, how is it supposed to be standardized already? IMO, it's better to create some implementation first and standardize it afterwards.
>Have you heard of ES4? Mozillans including yours truly poured years into it, based on the belief that programming-in-the-large required features going back to "JS2" in 1999 (designed by Waldemar Horwat), such as classes with fixed fields and bound methods, packages, etc.
I closely monitored what was happening with Harmony, and it was obvious that this effort wouldn't pan out. There were just too many complex features to be implemented (for example, generics) in a single release.
>These days, large apps are written in JS, even by hand. GWT is not growing much from what I can tell, compared to its salad days. Closure is used the most within Google, and Dart has yet to replace Google's use of GWT + Closure.
Both Mozilla and Google agree on the goodness of LLVM, since both PNaCl and the asm.js toolchain are based on it. Google has solved the problem of efficiently representing LLVM in a platform-independent way and safely executing it in a sandboxed environment. Why not create a PNaCl- and LLVM-based standard to allow efficient execution of code? Is it so difficult?
Mozilla promotes asm.js as a "standards compliant" way to do the same thing as PNaCl. However, there are a lot of disadvantages to this approach: asm.js is text-based and very verbose, and a binary format would save a lot of traffic; asm.js doesn't provide access to SIMD and multi-threading, which are crucial for good performance; and asm.js won't work out of the box, it requires some effort to support on the side of browser vendors - for example, the only browser where the Unreal demo initially worked was the Firefox nightly build. Is it so hard to find a consensus among browser vendors (or at least Mozilla and Google) in this respect?
What a formatting mess -- I hope you can still edit your post and put two newlines between the >-cited lines from my post and the text where you start replying? Thanks.
If I can read through the mess, you seem to be saying you can't evaluate PNaCl, so you'll just make airy and overconfident assertions about it. Even on HN, that doesn't fly. -1!
When I challenge your further assertion that something is "easily integrated into any browser", you switch subjects to arguing for prototyping and developing a complex system before standardizing it. Good idea (but beware the difficulty of standardizing it late if simpler evolutionary hops beat it in the meanwhile; more below on this). However true, that is a dodge; it does not excuse your assertion about "easily integrated".
Doubling down by citing "the DOM" as if Pepper were only about OS and DOM APIs, or as if even just the DOM implementations in different engines had enough in common to make interfacing from native code (not from JS, which is hard enough even with a decade+ history of written standards and market-driven interop testing) easy, just repeats this bad pattern. So -2.
Then you seem to confuse ES4 with Harmony (which I made clear came about after ES4 failed). -3.
At this point on USENET, you'd hear a plonk. But I'll close with one more point:
Mozilla and Google using LLVM does not equate to standardizing PNaCl. Pepper is one problem, a big one, but not the only one. Consider also the folly of using a linker-oriented intermediate representation for a shelf-stable Web-scale object file format. See Dan Gohman's "LLVM IR is a compiler IR" post:
Shortest-path evolution usually wins on the Web. Trying to standardize PNaCl -- including Pepper and LLVM IR abused or patched into shape as a long-lived object file format -- is an impossibly long, almost exclusively Google-dependent, path.
Extending JS among competing engines whose owners cooperate in Ecma TC39, via a series of much shorter and independently vetted and well-justified (by all those engine owners) path-steps? That is happening in front of your eyes.
You may not like it. You may want to say "Go home, evolution, you are drunk":
>What a formatting mess -- I hope you can still edit your post and put two newlines between the >-cited lines from my post and the text where you start replying? Thanks.
Sorry for the bad formatting; by the time I realized it was broken, I was no longer able to edit it.
>At this point on USENET, you'd hear a plonk. But I'll close with one more point:
I didn't want to argue with you; I just described how it looked to me (and I mentioned that in previous comments). I simply have no expertise here, and I now see there are problems which aren't mentioned by Googlers in their presentations. Maybe you are right, maybe they are right. I am sorry if I offended you.
>Shortest-path evolution usually wins on the Web. Trying to standardize PNaCl -- including Pepper and LLVM IR abused or patched into shape as a long-lived object file format -- is an impossibly long, almost exclusively Google-dependent, path.
Not looking to argue, just to add some value: you say "too much time until it happens", and I've heard that before, many times -- most recently re: Dart (see that Channel 9 Lars & Anders interview).
Funny thing, it has been years since Dart started, ditto PNaCl. Who says JS is the slow path? I suspect it will get there faster (for tangible definitions of "there", e.g., a Rust2JS compiler for you) with well-focused work.
Pretty please! Excited to see pcwalton's zero.rs work, which I understand to be a prerequisite for a Rust2JS. But deeper JS integration than Emscripten currently provides (e.g., mapping Rust structs to Harmony binary data StructTypes?) would be even more awesome. When are you going to land 749786, anyway?
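For anyone unfamiliar with the binary data idea referenced above, the strawman looked roughly like this (illustrative only; the names and API never shipped in this form):

    // Typed-structure layout, as sketched in the ES binary data strawman
    var Point = new StructType({ x: uint32, y: uint32 });
    var Segment = new StructType({ start: Point, end: Point });

    var seg = new Segment({ start: { x: 0, y: 0 }, end: { x: 3, y: 4 } });
    // A Rust struct such as `struct Segment { start: Point, end: Point }`
    // could in principle map directly onto this fixed memory layout.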
Re: Dart and PNaCl -- they've been useful for providing political cover to people who want to evolve JS more aggressively.
Heh, I really do need to land int64/uint64 support. The patch needs tests and some cleanup first. This week may afford some hacking time (finally) instead of just rebasing time.
In theory it shouldn't be necessary to mal-invest in long-odds/single-vendor (and therefore hard to standardize without dominant market power) innovations at high cost, just to drive minimal investment in evolving the web in shorter-odds ways. Especially when the same company is collaborating in TC39 to evolve JS, and owns the V8 engine!
So I don't like asm.js either; I much prefer LLVM or PNaCl. But that is in an ideal world. In the real world I would have to agree that asm.js seems the way forward. Even with asm.js being a shortcut, it would still take many years for it to gain enough traction and improvement to be really useful as a universal compiler target -- much like what is currently happening with the JVM. Except almost everyone already has a JS engine installed, so there is no need to download a Java runtime.
That way everyone can really use whatever languages they love or want. With Everything2js.
May the day of a language-independent web come faster.
Some days, I hate JS too. But hate leads to the dark side. It is what it is. Might as well hate endosymbionts who power our cells for being ugly.
We're all over performance, including SIMD. Multiple approaches but definitely up for the Dart-like direct approach. Talking to John McCutchan of Google about lining up JS on this front so Dart2JS has a complete target.
You can't simply asm.js'fy LuaJIT2, because it has an interpreter hand-written in assembly and it generates native code to achieve peak performance. You can only asm.js'fy a normal Lua interpreter, which is up to 64x slower than LuaJIT on benchmarks when both run natively. So, with asm.js running at roughly half native speed, asm.js'fied Lua will be up to 128x slower. Does not sound that impressive...
Hi -- you make a good point, the one seemingly at issue in this tangent (but not really the bone of contention).
As noted, I don't know which will ultimately prevail in pure performance: DartVM or Dart2JS on evolved JS. In the near term, of course DartVM wins (and that investment made "against cross-browser standards" was the strategic bone of contention 594 days ago).
I do know that in the foreseeable term, we browser vendors don't all have the ability to build two VMs (or three, to include Lua using LuaJIT2 or something as fast, in addition to JS and Dart; or more VMs since everyone wants Blub ;-).
The cross-heap cycle collector required by two disjoint VMs sharing the DOM already felled attempts to push Dart support into WebKit over a year ago. Apple's Filip Pizlo said why here:
Other browser vendors than Apple may have the resources to do more, but no browser wants to take a performance hit "on spec". And Mozilla at least has more than enough work to do with relatively few resources (compared to Apple, Google, and Microsoft) on JS. As you've heard, asm.js was an easy addition, built on our JIT framework.
So you're right, an optimizing JIT-compiling VM is not easily hosted via a cross-compiler, or emulated competitively by compiling to JS. LuaJIT2 would need a safe JIT API from the cross-compiler's target runtime, whether NaCl/PNaCl's runtime or Emscripten/asm.js's equivalent.
Googling for "NaCl JIT" shows encouraging signs, although the first hit is from May 2011. The general idea of a safe JIT API can be applied to asm.js too. In any event, one would need to write a new back end for LuaJIT2.
Bottom line: we're looking into efficient multi-language VM hosting via asm.js and future extensions, but this is obviously a longer road than C/C++ cross-compiling where we've had good early wins (e.g., Unreal Engine).
Furthermore, these are microbenchmarks. Large codebases might paint a different picture.
I am also not sure why LuaJIT is the most interesting comparison. Yes, LuaJIT is a work of art, but even normal Lua is quite fast, beating Python and Ruby easily. So even a half-speed Lua-VM-in-JS would be competitive with other dynamic languages -- which means it is fast enough for many uses.
Finally, we can certainly compile VMs that JIT; we just need to create JIT backends for them that emit JS (preferably asm.js). But LuaJIT is more tricky, as you note, because it lacks a portable C interpreter.
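As a toy illustration of the "JIT backend that emits JS" idea (my own sketch, not how any existing backend works): instead of emitting machine code, the guest VM's compiler builds JS source text for a hot function and instantiates it with the Function constructor, so the host engine's JIT compiles it natively:

    function jitCompile(/* a real backend would take guest IR as input here */) {
      // Generate the body from the guest VM's IR, ideally in asm.js-style
      // coercions so the host engine can optimize it further.
      var body = "a = a | 0; b = b | 0; return (a + b) | 0;";
      return new Function("a", "b", body);
    }
    var compiledAdd = jitCompile();
    compiledAdd(2, 3);  // 5 -- runs as host-compiled JS, not interpreted guest bytecode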