Mozilla can produce near-native performance on the Web (arstechnica.com)
335 points by BruceM 1371 days ago | 200 comments



BTW, the C/C++ code being compared has multithreading and SIMD disabled. On the third page they benchmark asm.js against native with multithreading and SSE enabled, and it shows up to a 50x slowdown! Not exactly sure what the point of a benchmark without multithreading is. I mean, anyone writing performance-sensitive apps will use MT, right? (Given that JS is single-threaded, this seems like a dealbreaker.)

http://cdn.arstechnica.net/wp-content/uploads/2013/05/classi...


Of course single-threaded code will lose to SIMD or threaded code. That's not saying anything surprising.

The important thing is to realize that until just recently - a few months or so - people did not believe that SINGLE-threaded JS code on the web could be close to native speed. The Ars article here even has "surprise" in the title when it concludes that in fact that is possible :)

So that is a crucial milestone. Is the job done? No, as you say, SIMD and threads are important. But we had to first get single-threaded performance to the right area.

Next, browser vendors need to work together to standardize stuff that will allow SIMD and more threading. This is definitely possible if everyone is interested in making the web faster, which I certainly believe is the case.


>The important thing is to realize that until just recently - a few months or so - people did not believe that SINGLE-threaded JS code on the web could be close to native speed.

Single-threaded JavaScript was already 'fast enough' for the CRUD interfaces that make up most web apps. Web apps are horrible to use because the network is unreliable for many people, latency is terrible for just about everybody, browsers provide a very poor and limited set of APIs for many of the tasks people actually want to do (interacting with hardware, file systems, audio, inter-app interoperability etc), and apps are forced to run inside a sandbox that can't be escaped. Cloud services nearly always mean that users lose control of their data. Web apps are rarely open source (and if they are, you still can't change the version of the app that you actually use, because it runs on a server you don't control). A faster Facebook with 3D effects is still a terrible platform.

Homer Simpson: Kids: there's three ways to do things; the right way, the wrong way and the Max Power way!

Bart: Isn't that the wrong way?

Homer Simpson: Yeah, but faster!

I think a lot of this obsession over Javascript speed is a distraction (support for more languages is nice for developers of course). Making slightly faster VMs is just an engineering problem: throw more resources at it and things will improve. The hard problems are political and also related to the basic network infrastructure (constrained by the laws of physics). Trying to recreate the entire operating system inside the browser, for all possible uses (even performance sensitive things), is a huge overreach when we don't seem to have a clue how to make the basic web app experience good. Why is it hard? Because the founding idea of a single sandboxed standard for all platforms is unworkable in the real world. Not only does it mean all developers surrendering complete control to the people defining the standard (who also happen to be big players in many other related markets), it relies on the big players actually agreeing, which is often not in their interest. If the world had moved to Gopher based OSs in the early 90s the web as we know it would not have come into existence. What future possible technologies are we destroying by locking ourselves into the web sandbox?


Your premise is demonstrably wrong.

> Why is it hard? Because the founding idea of a single sandboxed standard for all platforms is unworkable in the real world. Not only does it mean all developers surrendering complete control to the people defining the standard (who also happen to be big players in many other related markets), it relies on the big players actually agreeing, which is often not in their interest.

The innovation in browsers and webapps is coming from the vendors each acting individually, and pulling standards bodies along with them.

If you, as a developer, are waiting around for them all to agree on a technology before leveraging it, then you are just paralyzed. Develop against WebGL and asm.js now to get early-mover advantages. If you wait for universal uptake, you'll never get anything done.


Demonstrably wrong? Remind us how many years it took to get rounded corners into CSS?

>The innovation in browsers and webapps is coming from the vendors

Yes, which does not contradict what I said at all: "the people defining the standard (who also happen to be big players in many other related markets)". The vendors are big industry players who have their own agendas that rarely coincide with my interests (yes even Mozilla). They also get to veto any proposal they don't like.

The Kinect has been available for years now, which is an age in the consumer tech space. Where is the open standard web API? Do you see how it might be difficult to get Mozilla, Google and Apple to agree to standardise, implement and support an API for proprietary Microsoft technology? Creating a specific API for every type of device (photo cameras, video cameras, accelerometer, touch screens etc) and not providing a low level interface to hardware is the wrong route to take. High level APIs are fine, but it is important to also provide the ability for people to extend platform support in new and unforeseeable ways, and that requires lower level access.

WebRTC is another excellent example of conflicts of interest with Microsoft opposed to it because it challenges Skype (or because it is technically deficient, depending upon which side you are on). The difference between the web as a platform and real open operating systems is stark. If Windows had been like the web -- locked down with only a limited set of approved APIs -- then Skype could not have been created unless Microsoft decided to allow it. Substitute Microsoft for Apple/Google/Mozilla/Microsoft and you have the situation we have with the web.

Perhaps some people don't see this because of the tribal nature of technology discussions. People don't see the web as being restrictive because their 'team' (be that Apple or Google or Microsoft or Mozilla) has a say in it. So most arguments devolve into discussion about how X is blocking a proposal by Y, and the proposal is either good for the web or bad for the web depending on whether you favour X or Y. Few people seem to stop and wonder why we are creating a system where X can veto technologies, even for people who don't use a single product created by X.

>If you, as a developer, are waiting around for them all to agree on a technology before leveraging it, then you are just paralyzed. Develop against WebGL and asm.js now to get early-mover advantages.

How does developing against asm.js and WebGL help me interface with new input devices? It doesn't help me one bit. All it does is let me create a faster, shinier version of current webapps (that only runs in certain browsers). This is a recurring argument though: the web will be a decent platform once we finally get $NEW_API_PROPOSAL that will fix everything 'once and for all'.


Ok, thanks. I guess I'm confused about the gains with asm.js. It looks like it's in the same ballpark performance-wise as regular JS (and slower in some cases). So why would anyone write in a crippled C-like subset of JavaScript and get normal-JavaScript-like performance? In the future, if threading and SIMD are standardized, won't regular JS gain performance as well?


No, look again at the numbers - asm.js gets much closer to native speed than normal JS, in many cases, especially large apps. In addition to the numbers in the article, you can see for example,

http://kripken.github.io/mloc_emscripten_talk/#/28

asm.js code makes it easier to get within reach of native performance.


> why would anyone write in a crippled C-like subset of javascript

Because asm.js is not meant for people, but for compilers.

For example the Unreal Engine has been recently compiled from C/C++ to asm.js and is running in the browser: http://www.unrealengine.com/html5/
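
For a sense of what a compiler emits, here's a hand-written sketch (not real Emscripten output; the names are made up) of what code inside an asm.js module looks like - the "use asm" pragma plus the type coercions (|0 for 32-bit ints, +x for doubles) are what let the engine validate and compile it ahead of time:

    function MyModule(stdlib, foreign, heap) {
      "use asm";                                // opt this module into asm.js validation
      var sqrt = stdlib.Math.sqrt;
      var HEAP32 = new stdlib.Int32Array(heap); // all memory is views onto one big heap

      function hypot(x, y) {
        x = +x;                                 // parameter declared as double
        y = +y;
        return +sqrt(x * x + y * y);            // return type declared as double
      }

      function load(ptr) {
        ptr = ptr | 0;                          // parameter declared as 32-bit int
        return HEAP32[ptr >> 2] | 0;            // byte offset shifted down to an Int32 index
      }

      return { hypot: hypot, load: load };
    }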


... best viewed in ... netscape ?


“Best viewed in Netscape” still wins over “Download for Windows (x86)”.


You are not expected to write asm.js by hand; it's meant to be a compiler target (e.g. for Emscripten).


> multithreading and SIMD capability disabled

[EDIT: it looks like I may have misinterpreted the article; see replies]

To clarify, I believe they ran the benchmarks with SSE disabled. SSE is more than just SIMD; these days SSE is used for most (all?) floating-point calculations, even non-SIMD. If you disable SSE, you force floating-point ops to use the old x87 FPU, which has been discouraged by the CPU manufacturers for over 10 years.

In other words, disabling SSE is significantly crippling all floating-point performance.

http://www.realworldtech.com/physx87/4/

http://stackoverflow.com/questions/3206101/extended-80-bit-d...

(nothing against asm.js, which I think is very cool)


Why would you assume they disabled SSE? Compilers generate SSE by default when optimizing, I would be very surprised if Peter Bright disabled it. And not only do normal C/C++ compilers generate SSE, but JS engines generate SSE as well. So SSE was being used on both sides of the comparisons here, I am pretty sure.

What was excluded was C/C++ code that explicitly used SIMD intrinsics or explicit SSE function calls.


I must have misinterpreted this remark:

> In general, I took the highest-ranked pure C++ routines for each test. Versions that used, for example, SSE functions were excluded, as were those with explicit multithreading.

For "functions" I interpreted "functionality," but I think now it must mean SSE intrinsics?

Sorry for the confusion.


Yes, sorry, I meant SSE intrinsics. As I think I note later in the article, the compiler I used emitted scalar SSE2 for floating point.


I'm glad you made the comment, others might read it the same way as you did originally. This clears the potential misunderstanding.


anyone writing performance sensitive apps will use multithreading

That assumes the performance sensitive app is amenable to multithreading. Many aren't.


Your comment has made me curious. I tried to think of some examples of performance sensitive apps that aren't amenable to multi-threading and I couldn't, but it seems like they could offer interesting areas for R&D. Can you share some examples?


Consider a single tab of a web browser, for instance. By specification, much of the code must run in the same thread. Keeping the semantics while getting code out of the thread is extremely difficult, often impossible.

I should know, that's my dayjob :)


So the reason you can't make it faster than JavaScript by multi-threading it is... because of JavaScript? :-)

I had a look at your profile and it looks like you're doing some very valuable work. I was rather hoping for some examples which didn't involve JavaScript, though.


It used to be that almost all PC games were single threaded. Many still are: Civilization 4, Dwarf Fortress, and Morrowind, for example.


Many things are hard to parallelize using current technology, and there indeed has been lots of research done on it. History goes back to the supercomputer era (Cray, Connection Machine, Transputer etc.)

The term of art is "inherently sequential" problems (vs "embarrassingly parallel").

In addition, the programming effort gets untenable quickly due to Amdahl's law when the amount of parallelism increases. Eg. if you have 100 cores, even if only 1% of work in your program is sequential, your performance goes down the drain. Currently we're in the comfy phase of the curve with few cores...
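
To put rough numbers on that (my own arithmetic, not from the article): Amdahl's law gives a maximum speedup of 1 / (s + (1 - s)/N) for a sequential fraction s on N cores, so a 1% sequential portion already caps 100 cores at about 50x:

    // Amdahl's law: best-case speedup for sequential fraction s on N cores
    function amdahl(s, N) {
      return 1 / (s + (1 - s) / N);
    }
    amdahl(0.01, 100);  // ~50.3 -- half the ideal 100x lost to just 1% sequential work
    amdahl(0.01, 4);    // ~3.9  -- barely noticeable on today's few-core machines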

We've been making good progress with parallel-minded reimaginings of programming languages and computer architectures on one front: GPUs. On the CPU we are saddled with the inertia and backwards-compatibility lock-in of tools and culture (witness the lasting use of C++ as a tool to write parallel software).


If you have a multicore machine -- the only kind of machine that brings a benefit to multithreading -- and your problem is embarrassingly parallel, you can usually get away with multiple processes (MapReduce etc). That way, your single-threaded code can run on many cores.


Is it fair to describe JS as single-threaded when all modern browsers support web workers?

Related: Why does it seem like web workers don't exist? I can hardly think of any popular libraries or web apps that make use of them, despite the seemingly obvious utility.


Let me detail tachyonbeam's answer.

All modern web browsers (I'll try not to troll about IE) support web workers. However, the semantics of web workers is message-passing (by copy, most of the time), which is the semantics generally associated with multi-processing rather than multi-threading. Most developers using threads expect some kind of memory sharing, which is impossible with web workers at the moment.
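
To make the copy semantics concrete, here's a minimal sketch (the file names are hypothetical):

    // main.js
    var worker = new Worker('worker.js');
    worker.onmessage = function (e) {
      console.log(e.data.sum);                  // the result arrives as another copy
    };
    var numbers = [1, 2, 3, 4];
    worker.postMessage({ numbers: numbers });   // structured-cloned (copied), not shared
    numbers.push(5);                            // the worker never sees this mutation

    // worker.js
    onmessage = function (e) {
      var sum = e.data.numbers.reduce(function (a, b) { return a + b; }, 0);
      postMessage({ sum: sum });
    };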

Shared workers (not implemented in any browser atm, I believe) will take care of some usage scenarios. Other alternatives are being explored (e.g. Parallel JS: http://smallcultfollowing.com/babysteps/blog/2012/01/09/para...). However, for the time being, don't expect any support from JS for shared-memory concurrency.

Note: I'm a Firefox dev.


After getting very excited and doing a bunch of work, we did a lot of performance tests and discovered that for our use case all they did was increase processor load for the same work - the penalty associated with serialising and deserialising things to go across postMessage was greater than the work we were getting done on the other thread.

Of course this was in the early days, and it'd be good to retest it to see if it's better now.

I think that there are a lot of interesting use cases for WebWorkers, but you also have to be aware that for some things they will actually slow down your application.


It is still not good. I've started working on an asteroids game [0], and I tried moving my physics calculations to a background thread, but it ended up being slower.

0 http://www.isaksky.com/asteroids/


Working on it :)


1. It's hard to use them in libraries due to restrictions on how to load them: Opera, IE 10 and Safari 5 require a same-origin file, which makes it hard to drop a worker into a slow part of jQuery.

2. Message passing can be slow, and transferring only works for typed arrays, which take time to convert to (see the sketch after this list). Unless your calculations are much bigger than your data, or converting to typed arrays isn't a bottleneck, it may slow you down.

3. OO-based inheritance patterns make it very hard to put pieces of your code into the worker. If your slow thing involved a call to jQuery.ajax, you can't call jQuery from the worker because it references the DOM, but you can't pull out the ajax method either because it relies on $.extend and others, meaning you have to rewrite $.ajax from scratch (not too bad in this case) or do major surgery on the library.
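
On point 2, a minimal sketch of the transferable path (the worker file name and sizes are made up) - the underlying buffer is moved rather than copied, which avoids the serialisation cost but leaves the sender without the data:

    // main.js
    var worker = new Worker('physics-worker.js');   // hypothetical worker script
    var positions = new Float64Array(1000000);      // ~8 MB of simulation state

    // Listing the ArrayBuffer in the second argument transfers ownership to the
    // worker instead of cloning it; afterwards positions.buffer is unusable here.
    worker.postMessage({ positions: positions }, [positions.buffer]);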


Web workers are more like multiple processes than multiple threads, but yes, there are ways to use more than one CPU core in JS too.


Our hashtagify.me uses them :)


There is work going on in enabling Javascript to get better at doing things in parallel - both on CPU and GPU.

Eg. check out the ParallelArray() from the River Trail project by Intel: http://wiki.ecmascript.org/doku.php?id=strawman:data_paralle... & http://en.m.wikipedia.org/wiki/River_Trail_(JavaScript_engin... At least Firefox has an initial implementation: https://developer.mozilla.org/en-US/docs/JavaScript/Referenc...

The benefit of asm.js being based on ordinary JavaScript is that it can use these improvements to JavaScript itself more or less right away.


> anyone writing performance sensitive apps will use MT right?

No, multithreading is not that simple - usually it's the last trick you should try to pull, and even then only if the stars align just right.

You need to have expert programmers on hand (of the rare breed who can pull this off), the good luck to succeed, and the time budget to spend on big code rearchitecting and experimentation.

If you're running on something other than modern consoles, you don't know how many cores your users will have; the average might be 2, so your average return on the effort will suck (vs spending equivalent effort elsewhere).

Witness modern web browsers, for example, where Google, Mozilla, Apple and Microsoft employ some of the best C++ programmers in the world, routinely capable of pulling off heroic feats, and they haven't seen it worth the effort yet, despite Core 2 and Athlon X2 hitting the mainstream desktop at about 2005-2006, 8 years ago.


I think you are overstating the problems of writing multithreaded code a little here. Yes, it's a lot harder than single-threaded code, yes you need to know what you are doing, yes you need to carefully design your code to enable efficient and safe multithreading, and no you don't get speed for free. All true, but you are making it sound as if writing multithreaded code is rocket science, some kind of mad skill only the top 0.5% of programmers can handle. It's not that bad, you know... If you're a half-way decent programmer and don't litter your single-threaded code with globals, shared memory and functions with side effects, like you shouldn't anyway, parallelizing it after the fact is usually pretty straightforward. Maybe not optimal, but safe, and faster than single-core on multi-core hardware.


Sure, you can write the code if you're a half-way decent programmer. But getting it to work correctly and bug-free in the presence of nondeterministic behaviour and races is in the domain of a very small minority, when you're talking about large code bases.

I'm betting the minority is less than 0.5%. Half way decent C++ programmers aren't many % of programmers, and people who can make a reliably working parallelized do-over for a nontrivial code base are maybe 1% of that. I'll say 0.05% max.


The challenge is to design your code in a way that either makes it impossible to end up with race conditions or non-deterministic behavior, or at least limits the number of possible points of failure. This requires a good understanding of which coding styles and data structures to avoid or pay extra attention to, but if you're aware of the pitfalls, it's actually not all that difficult. Just make sure your workers don't share any state, collect their outputs in a single thread-safe queue, use off-the-shelf components to represent threads, thread pools and locking mechanisms, be aware of things like multiple-read-single-write locks, and if you really can't avoid writes to shared state, only then start thinking about mutexes or semaphores. If you're smart about how you set up the code, more often than not you can get away with just some thread-safe queues. Take a look at Objective-C's block syntax and GCD for example; it makes maybe 90% of tasks suitable for multithreading dead easy.

Your estimate that only 0.05% of programmers could write reliable multithreaded code is extremely pessimistic, cynical even. I almost feel flattered, having written a fair amount of multithreaded code that works perfectly myself, but I most definitely wouldn't dare count myself among the top 0.05% of programmers or whatever. My estimate is that with proper preparation, at least 50% of all programmers could learn to write reliable multithreaded code, and the remaining 50% probably wouldn't have a use for it anyway.


I was talking about the percentage of existing programmers with the abilities to pull it off. I'm sure more than 0.05% have the potential to learn it eventually, given enough time, motivation and opportunity. Like musicians & theremin!


You're making writing multi-threaded code sound like a herculean task. Sure, doing MT with mutexes is a pain, but programming with actors (message passing) is simpler. You can easily scale this for a variable number of cores.


It really should be a 100x slowdown! Their "optimized" Linpack numbers (41 GFLOPs) are too low for this system. The CPU they use can deliver 108 GFLOPs at base frequency and 120 GFLOPs at max turbo frequency, and good LU implementations (e.g. MKL) can achieve >90% of these peak numbers.


It's good progress all the same, but doesn't do much for the end user that still relies on an often shitty Internet connection.


For most applications JS doesn't need to get any quicker. What makes web apps feel like swimming through molasses is the DOM.

Even something as simple as getting a Div to follow the mouse on anything but the simplest of pages is impossible to do without incurring ridiculous lag.

I don't know enough about them, but it seems that Web Components may offer some answers by encapsulating mini document trees which presumably exist in isolation from other components, thereby reducing the penalties of dynamic CSS.


Like this? http://jsfiddle.net/eymAS/2/

The DOM is faster than almost everyone believes. It's not 2001 anymore, guys.

Web Components solve a lot of major problems, but they have little to do with DOM/CSS performance and more to do with encapsulation (i.e., so your CSS/HTML/JS don't break a component). DocumentFragments and insertAdjacentHTML solved more performance problems for the DOM than most other API changes have, and smarter compositing/painting algorithms in browsers (along with hardware acceleration) have made it possible to do some pretty ridiculous stuff.


So now we're going to say that in 2013 it is impressive that 160 rectangles can follow a mouse cursor? I see the advantages of client/server applications (as they used to be called), and presumably js/css apps will continue to get faster and asymptotically approach their client-only functional equivalents, but there is a thing about a hammer and all problems looking like nails that springs to mind when I see people claim that moving 160 rectangles in 16 ms / frame is 'scary fast'...


The parent's comment was that it is essentially infeasible to have a single div follow the cursor due to performance issues with the DOM. The fiddle includes 160 divs and opacity to demonstrate that that claim is false.

There's an awful lot of red herring going on in your post. It sounds like you have some problems with the web, but they most certainly aren't relevant to this discussion.


Oh I have no problems with the web, it has been providing me with a good living for well over a decade. Maybe that's my problem - that I remember the times when things that are being described now as 'great' or 'fast' were solved already, and my frustration is that instead of moving forward, we now have to repeat the last decade except with js/css before we can really start to innovate again.

Unreal engine in the browser? Seriously? Because of what - because it's easier to update people's clients when they download it again every time they visit your website? We had web launchers and auto-updating that solved everything the web brings as an advantage over desktop applications (in the use cases that require fast graphics, i.e. not CRUD line of business applications) years ago.

The elephant in the room here is that people want to push web applications because it's easier to make money from them, and you can make more of it, over a longer time. Look, I do it too, I know that the web beats the pants off client-only software from a business point of view. But let's call a spade a spade and not tip toe around it with bullshit pseudo-arguments like 'ease of deployability' and 'social sharing' and 'portability' and 'platform-agnosticity'.


If an app that had no sustainable future as a native app becomes possible as a web app, don't users also benefit? The fact is that many people are willing to pay for a service they continue to get value from month after month. And the recurring revenue also usually comes with recurring support because companies have to keep winning customer loyalty in a way they never have to with a one time sale. I think it is a net win for both customers and the companies selling the services.

And developers win because there are more viable businesses which need their skills over the long term.

Should it be possible to sell a native app as a service? Maybe, but supporting a native app professionally is much more difficult than you seem to be acknowledging. Have you ever supported a native app in the wild across a wide variety of generic PC hardware for novice users?


"If an app that had no sustainable future as a native app becomes possible as a web app, don't users also benefit?"

Of course they do, therefore charging for software (desktop & web alike) = good. Make no mistake on which side in that debate I'm on. But let's call it like that.

"And developers win because there are more viable businesses which need their skills over the long term."

Sure, and it's no more than fair, and the natural order to boot.

"Have you ever supported a native app in the wild across a wide variety of generic PC hardware for novice users?"

I'm glad you ask, because actually yes I do, and it's a pain in the ass for the most part. And for the parts where it makes sense, I have moved parts of the services we offer to web applications. Therefore I feel I'm experienced enough to compare the advantages and disadvantages of the two, and from that I criticize the developments of people wanting to shoehorn everything into the browser, either because of covered up ulterior motives, or because it's all they know and want to know. But again - then say so, and don't sugarcoat it under the fuzzy 'it's better for the user' rhetoric that is so prevalent.


I should add that another reason can be "because I can", which is a perfectly fine reason too, but then label it as such so that naive greenhorns don't start pushing production systems in this direction, requiring others to then deal with the fallout.


For me the movement is smooth, but it still lags behind the mouse by 10-20 px. The Firefox version of the demo has a little less lag but still looks disconnected from the mouse when moved.


The problem is that the DOM puts a hard upper limit on performance compared to what an unsafe language (asm.js) can achieve. It is massively complex because it must perform every UI task in the browser, and simply moving around divs doesn't give you a feel for its performance in complex scenarios.

If you think about how native applications perform, and then imagine every one having to go through a DOM interface, every button and UI interaction being composed of DOM elements, and those elements being re-styled on every interaction, you get a sense of what the problem is. There is simply no way to get the DOM to do something it wasn't designed to do, which is to be a document display interface and not a general graphical user interface. You have to cobble together a lot of interactions to get the desired behavior, and not only does this increase the overhead for both your program and the DOM, you are going to hit cases where the DOM isn't optimized.

This doesn't mean that the DOM is inherently slow. If you tried to offload the UI work onto JavaScript you wouldn't get the performance benefit of doing that in asm.js or NaCl. You really need unsafe code to solve this problem. asm.js and NaCl are only useful if you are going to do this; targeting Canvas and WebGL gives you the ability to do that.


What you describe -- "imagine every one having to go through a DOM interface and every button and UI interaction being composed of DOM elements and re-styling these elements on every interaction" -- is exactly how Firefox is built. The desktop Firefox UI is all DOM elements (not HTML, but still the same underlying DOM code).

Hovering over a toolbar button or a tab causes a CSS style rule for a :hover effect to be applied. Opening a new tab uses CSS animation for the transition. Moving tabs around manipulates the DOM and moves the tab elements (which are themselves composed of images, labels, etc.).

And yet, the Firefox UI is plenty snappy -- yes, you can argue that there are slowdowns and issues that need to be fixed, but no more or less than in many "native" apps.


asm.js is not an "unsafe language".

/be


That is very cool. I got quite bad rubber banding initially (with a reported 60fps), however after turning off v-sync in chrome://flags I got super smooth 120fps performance.

Would be great if Chrome had this option off by default.


> Would be great if Chrome had this option off by default.

Disabling Vsync causes page tearing artifacts (https://en.wikipedia.org/wiki/Page_tearing). It should always be enabled by default. Some new gpu drivers also have "adaptive vsync", where it's enabled by default but gets temporarily disabled when deadlines are not met.


isn't tearing preferable to input lag? As I mentioned, with v-sync on there is a very noticeable rubber-banding effect, with it off the result is (perceptually) far smoother.

>It [vsync] should always be enabled by default

Not sure about this. Surely the goal is to maximize perceived performance not 'correctness' (where correctness in this context is defined as each frame being fully drawn before the next one starts).


> isn't tearing preferable to input lag?

In a properly written app, vsync should not cause input lag despite gamer superstitions. But there are lots of not-properly written apps where this is a problem.

WebGL / Chrome should solve the problem in some other way, not disabling vsync. Or using adaptive vsync if available as a workaround.

The culprit here might be the ANGLE OpenGLES2-on-D3D9 wrapper that Chrome uses, which is notoriously slow on synchronous operations like vsync, glGetError or glFinish.


That was just caused by me not processing input properly. I've set up a proper rendering loop here and now only render the most recent mouse position on each frame:

http://jsfiddle.net/eymAS/84/

It's better, but still limited to 60fps, and since I render to the state at 0s, the worst delay we can see is ~16.6ms + rendering time.
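
For anyone curious, the pattern boils down to something like this (the element id is made up): store the latest mouse position as events arrive, and only write to the DOM once per frame.

    var latestX = 0, latestY = 0;
    var follower = document.getElementById('follower');   // hypothetical element

    document.addEventListener('mousemove', function (e) {
      latestX = e.clientX;      // cheap: just remember the coordinates
      latestY = e.clientY;
    });

    (function render() {
      follower.style.transform =
        'translate3d(' + latestX + 'px,' + latestY + 'px,0)';
      requestAnimationFrame(render);   // one DOM write per displayed frame
    })();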


Here's a version that's not Chrome-only: http://jsfiddle.net/eymAS/21/

I've been wondering why I keep hearing that the DOM is supposed to be slow. Don't people mean to say that rendering (i.e. mostly CSS) is slow? This fiddle is certainly not doing much with the DOM except setting style attributes.


The problem with the DOM is not that it's intrinsically slow, but that it's very easy to do slow things with it inadvertently. There are often N different ways to do things, due to the evolution of HTML. The newer approaches tend to be designed with things like GPU acceleration in mind (e.g. CSS transforms), whereas older methods often had lots of heavy interactions and computations because they weren't expected to be used dynamically.

Just like optimizing any software takes skill and knowledge, so too does writing HTML, whether dynamic or not. Tools for understanding what's going on with the DOM and with dynamic HTML are slowly becoming available as well, which will help with this.


People generally mean that their manipulation of the DOM is effectively slow. That's a perfectly accurate statement, but the problem is calling the DOM slow as a result. The real culprit is often that doing stupid things with the DOM causes style recalcs, reflows, layouts, compositing, painting, etc. Those are all comparatively expensive.


Man, that is just scary fast :-)


It should be, it's hardware-accelerated (take a look at the translate3d hack)

Paul Irish did a good writeup here: http://www.paulirish.com/2012/why-moving-elements-with-trans...
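
Roughly the difference, as a sketch (the element lookup and coordinates are made up; older browsers want vendor prefixes like webkitTransform):

    var el = document.getElementById('box');   // hypothetical element
    var x = 100, y = 50;                       // wherever it should go

    // (a) top/left: changes layout geometry, so the browser may have to
    //     recalculate style and layout for affected elements on every update
    el.style.left = x + 'px';
    el.style.top = y + 'px';

    // (b) translate3d: the element gets its own compositor layer and is moved
    //     there (often on the GPU); layout is left untouched
    el.style.transform = 'translate3d(' + x + 'px,' + y + 'px,0)';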


Yep. Here's a version without using 3d (no layers generated for the followers): http://jsfiddle.net/eymAS/29/

The layout/composition/painting etc is taking roughly 5x longer now. Chrome is smart enough to do it all just once, though, so it's still relatively performant.


> What makes web apps feel like swimming through molasses is the DOM.

I would contend that network latency and the plethora of domains queried are a greater impediment to a speedy web. DOM manipulation may be absurdly slow, but it cannot hold a candle to latency hold-ups.

Most readers on HN live in a quiet and peaceful bubble of high-quality network connections, but the majority of the world is beginning to wake up to the web and they'll have to endure latency issues that we haven't experienced since "broadband" was a new concept.


Quite a few HNers have probably experienced lossy connections such as Amtrak or busy coffee shops. I would think that those high-loss situations would feel somewhat like a high latency experience.


Lossy mobile connections are very common in the world.


  | DOM manipulation may be absurdly slow however
  | it cannot hold a candle to latency hold-ups.
Just an anecdote, but jQueryUI + IE8 took 8 seconds to style all of the buttons[1] we had on a page during one call. Granted, we had a ton of buttons (most were hidden), but that's a lot of time. The same call on Firefox and Chrome took fractions of a second (can't remember the exact numbers, but Chrome was faster at the time), and this was a couple of years ago. DOM manipulation can be a real drag.

[1] http://jqueryui.com/button/


But we do know how to handle bad latency, don't we? Speculative loading, which got a bad name because it makes wallhacks on FPSes work but really has almost no downside once the client can't do anything malicious with a little information the human isn't using yet.

If the real problem isn't latency but bandwidth or bandwidth caps, that will require other solutions.


For most applications, the DOM is the weak point because manipulating the DOM is as ambitious as the apps get. The exciting part of asm.js is not that it can speed up existing apps; it's all the cool new things you can do on the client side now that were simply too slow to be possible before.


Yeah that's a fair point. What kinds of new apps do you think this will enable (besides games)?

It seems that the DOM will continue to be the major bottleneck since presumably the GUI for these new apps will still be DOM based.


I expect Virtual Machines for domain-specific languages. I also expect (de)compression, cryptography, number-crunching, graphics algorithms, etc.


I see porting over existing codebases, and rendering via WebGL rather than the DOM, as the path this takes.


What makes web apps feel like swimming through molasses is the DOM.

In my experience, it's the lack of threading. Or any way to perform an operation without affecting the UI. Web Workers go a long way to solving this, but they're limited in what they can do.


Not sure I agree with this. Very few web applications are CPU bound to the point of locking the UI. Web Workers are extremely useful, however very few current web apps benefit from them. Given they cannot interact with the DOM they are limited to pure number crunching. Great if you're ray tracing or encoding/decoding sound or video, not much use to the other 99% of web apps.

My understanding is that parsing CSS rules and updating the DOM are incredibly costly operations which none of the browser vendors have yet decided to optimise.


> My understanding is that parsing CSS rules and updating the DOM are incredibly costly operations which none of the browser vendors have yet decided to optimise.

These operations are extremely well-optimized. The reason they're slow is that they're also extremely complex and slow operations. It's not something you can hack around.


> The reason they're slow is that they're also extremely complex and slow operations.

In that case it seems that we need something new to replace CSS for layout.

Or maybe a 'strict mode' equivalent for CSS where styling is separated from layout. In this mode rules that affect style (bold, font etc) cannot affect container dimensions. This mode would include a subset of rules that affect only layout (position, width, height etc).

This would presumably greatly reduce the impact of the parse/dom traversal/paint cycle of CSS.


In this mode rules that affect style (bold, font etc)

But bold and font would affect the width of the text which would affect text flow and possibly affect whether the container ought to grow and shrink or whether it ought to provide scroll bars...

No change is simple, in that sense.


> In that case it seems that we need something new to replace CSS for layout

There's so much that CSS (and browsers in general) do that we take for granted, which is good in one sense but bad since you can't dip below those abstractions and grapple with the layout engine (or garbage collector, or HTML parser, or selector engine) at a low level. You get the whole ball of wax and all that stuff runs all the time, even if you don't actually need it. In my view this is essentially what separates HTML apps from native ones.


I totally think CSS layout could use some improvements, but isn't the separation you're talking about here already pretty strong?

I'm struggling to think of an example of a situation where text styles affect layout on containers where dimensions are anything other than 'auto'. And any situation at all where text styles affect positioning.


So we need asm.css? :)


An example from my experience - auto-complete filtering. Loading a list of the user's Facebook friends (which can number into the thousands) and filtering as they type.

When that's on the main UI thread it's hideous - the page stops responding to typing, so people hit the key twice, and the whole page feels like it's hanging. It's not constant CPU usage, but it's CPU usage at the exact moment the user expects the site to react immediately. Same goes for calculations when a user is swiping, or something like that.


The browser vendors definitely do micro-optimize these. DOM binding performance is measured in nanoseconds these days.


Then what is it?


Repaints/reflow. DOM+CSS+JS is a complex system. Watch this: http://www.paulirish.com/2011/dom-html5-css3-performance/


To add to ricardobeat: it's all repaints and recalculations. Web devs have become overzealous about smashing JS into every nook and cranny to manipulate the DOM; it seems to be a battle Google is trying to educate people about, and good on them.

Open dev tools in Chrome, then click Network/Frames, hit record and refresh the page. Do this on a sluggish site that uses tons of JS and you will see the problem.


The most commons causes of slowness I've seen are:

1) Interleaving changes to style and requests for layout information: this causes lots of relayouts instead of just one (see the sketch after this list).

2) Using libraries that under the hood helpfully end up doing item #1 for you.

3) Using complex graphical effects (blurred shadows are a common one, as are certain gradients) that are actually very expensive to implement and may not be amenable to being GPU-accelerated.
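
Item 1 in code, as a hypothetical sketch (the .box selector is made up) of read/write interleaving versus batching the reads first:

    var boxes = document.querySelectorAll('.box');   // hypothetical elements

    // Bad: each offsetWidth read forces a layout, and each width write dirties it again
    for (var i = 0; i < boxes.length; i++) {
      boxes[i].style.width = (boxes[i].offsetWidth + 10) + 'px';
    }

    // Better: all the reads, then all the writes, so the browser lays out once
    var widths = [];
    for (var j = 0; j < boxes.length; j++) widths.push(boxes[j].offsetWidth);
    for (var k = 0; k < boxes.length; k++) boxes[k].style.width = (widths[k] + 10) + 'px';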


> Very few web applications are CPU bound to the point of locking the UI.

It comes up regularly around here. We've had to resort to all kinds of ugly workarounds (such as chunking work and running small pieces off setTimeout() or requestAnimationFrame). A simple thing like changing fonts on the Canvas more than a couple of times can kill any hope of a responsive UI unless you very carefully manage your rendering, depending on the browser (Canvas font handling is OK in Chrome, and ridiculously slow in Firefox, for example).
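
A minimal sketch of the chunking workaround (the function and argument names are made up):

    // Process a big array in small slices so the UI thread can breathe in between.
    function processInChunks(items, processOne, chunkSize) {
      var i = 0;
      (function doChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
          processOne(items[i]);
        }
        if (i < items.length) {
          setTimeout(doChunk, 0);   // yield to the event loop, then continue
        }
      })();
    }

    // e.g. processInChunks(facebookFriends, renderFriendRow, 50);  (hypothetical names)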


The fascinating thing about this is that they didn't write a new JIT for asm.js. It's just their existing JIT with more information and a few new tricks. Among other things, this means that it doesn't yet have a lot of the fancy optimizations that C/C++ compilers typically have in their backends, like clever instruction selection or register allocation. It's already impressively fast, and it has the potential to get even faster.


> It's just their existing JIT with more information and a few new tricks.

Spidermonkey is composed of the Baseline Compiler (JIT) and Ion Monkey (JIT), but a new AOT compiler was written, called "Odin Monkey."


OdinMonkey uses the existing IonMonkey compiler to do the actual optimization and code generation work.

This blog post has more information [0]; see especially the comments section where the author responds to some questions and discusses the relationship between OdinMonkey and regular JS compilation.

[0] https://blog.mozilla.org/luke/2013/03/21/asm-js-in-firefox-n...


oh, interesting! Good eye, thanks for pointing this out!


Not really a new compiler.

OdinMonkey parses the asm.js type system, but then just passes the information it learned into IonMonkey, the existing SSA-optimizing JIT.



I would much prefer having a sane new language replacing JavaScript instead of this hack to improve performance. A new language could provide the same performance advantage and make writing web applications much more pleasant.

Admittedly it is harder to introduce a new language across all (major) browsers, but I think it would really be worth it.


I see two big issues here:

1. "Sane" is highly subjective. There is no single optimal language for all problems and audiences.

2. Pushing a new language to all major browsers is hard, like you said. Maybe even impossible, several huge corporations with complex politics are involved.

Google is trying this approach with Dart, and I personally don't think it's the panacea, nor do I expect all major browsers to support it.

So the only alternative I see is to have a very flexible VM that can run pretty much any language. asm.js proves that JS is actually quite good at that. I'd still prefer something like Native Client, but asm.js is highly promising: it's backwards compatible and has some potential for optimisation at the same time.


That's what asm.js is for, meant as bytecode for compilers.

Take asm.js (i.e. an efficient subset of JavaScript), in combination with source maps for debugging, and presto, you have everything needed to compile source code from whatever language you want, including C/C++.


"from whatever language you want" - in practice, not really. If you have a language that relies on gc or some kind of vm then you are also going to have to deliver all the asm.js code for the vm. That's potentially a lot of code to deliver to the browser, could you use threads efficiently in your implementation etc. There's no story yet for anything beyond a statically compiled language and even if there is a story you would probably have to wait a long time to see whether it would deliver anything in practice.


I don't understand this stuff well enough, but doesn't Emscripten effectively give you that? It converts LLVM bitcode into JavaScript, so if you can compile it with LLVM, you can use it on the web.


I'm not entirely sure myself, but it seems to be a bit more involved than that. I recently came across a thread [1] discussing how to add support for Objective-C, and it seems the approach is to compile the Obj-C to C++... I think only C and C++ are supported at this point. But I'd bet they'll eventually get there.

The bigger problem however, is that most of the commonly used languages don't just compile to native executables. They include a runtime handling stuff like garbage collection and just-in-time compilation. Since these runtimes are usually written in C or C++, we'll get there if C compiled to asm.js gets fast enough. That's one of the goals of Emscripten :)

There's a demo [2] showing off various language runtimes compiled to asm.js. I'm not sure how they perform in comparison with JS, but the REPLs work pretty well.

[1] https://groups.google.com/forum/?fromgroups#!topic/emscripte...

[2] http://repl.it/languages


That's correct, you can compile many things to JavaScript. But I think this is a suboptimal solution - besides paying a performance penalty because you have hard-to-optimize JavaScript in the stack, it makes development harder; for example, debugging generated code in the browser is unnecessarily hard.


> besides paying the performance penalty because you have the hard-to-optimize JavaScript in the stack

Well, the article proves you quite wrong.


But that's like converting C into Python. Why not start with a new/existing language that can be compiled and run at native speed?


Three such languages are called AMD64, IA32 and ARM machine code, and are supported by Google Native Client (NaCl). I think they will be exceedingly hard to beat when it comes to speed and loading time.


They are fast, indeed. But by targeting NaCl, you are producing code that:
- will only run on Chrome;
- will only run on the hardware platforms for which you have developed.

This kind of kills all the fun in the web. By contrast, asm.js code will run just about everywhere, today. And will get blazingly fast in ~12 weeks for Firefox (a little later for Chrome and Opera).


> - will only run on the hardware platforms for which you have developed.

What fast browser JITs are actively developed for platforms other than ARM, AMD64 and IA32? V8 does not support anything else, and while Firefox enables SpiderMonkey for MIPS and SPARC [1], "unsupported" is not a very hearty endorsement.

https://developer.mozilla.org/en-US/docs/SpiderMonkey/1.8.8#...


SpiderMonkey works on PPC, for what it's worth; the TenFourFox project is actively maintaining it there. They're usually a bit behind on porting the JITs, but they do actively port them.

But the real issue with NaCl is this thought experiment. Assume NaCl were developed in 1998 and everyone had jumped on that bandwagon and the web in 2003 were full of NaCl blobs targeting the hardware architectures that mattered in 2000-2001. Then ask yourself the following questions:

1) Would this have affected the choice of hardware for phones and tablets and whatnot?

2) Would it be viable today to ship a web browser on an ARM system?

3) What makes us think that the currently-relevant set of hardware architectures will still be the set we want to be using in 10-15 years? In 30 years?

The nice thing about asm.js or PNaCl compared to NaCl is that even if we ship it right now and only run it right now on ARM/AMD64/IA32, if someone comes up with a new hardware architecture they want to ship in consumer devices they can simply implement a JS JIT for it (in the case of asm.js) or an LLVM backend (for PNaCl), which is something they would need to do _anyway_ for that "consumer devices" bit. On the other hand, if they have to deal with legacy NaCl content they suddenly have to do hardware emulation or something insane.


Google already has NaCl


I'd vote for LuaJIT as a new, sane, replacement for JavaScript.


> Emscripten builds taking between 10 and 50 times longer to compile than the native code ones.

Hey, this is fatal. With this massively long iteration time, you can't run actual big projects. Merely being able to execute 1,000,000 lines of C++ code is not enough. We care about build time as well as performance.

BTW, have you heard that the next Haswell processors can deliver only a 5% performance improvement? We should assume that we have no free lunch anymore. One of the UNIX philosophies is already broken.


I thought the Haswell CPU was more about reducing power usage than adding performance? I'd be curious to know how much effort was spent on power consumption vs performance.


Intel is getting their lunch handed to them by ARM. People value power consumption over performance these days, so why would they continue pumping out faster chips?


> BTW, have you heard that the next Haswell processors can get only 5% performance improvement?

Bollocks - using the new gather AVX instructions, I've seen close to a 40% increase over IB on some floating-point code I've hand-written with intrinsics.

Existing C++ code is around 13-16% faster thanks to better cache bandwidth and a huge L4 cache. Turn on FMA (fused multiply–add) optimisation and that goes to ~20% faster.


It seems Mozilla and Google are moving more and more toward their own browser specifications and mechanisms. Microsoft and Netscape did this in the 90s, and it ended up causing nothing but pain.

Optimizing JS is interesting, but creating browser-specific applications of code meant for a web audience disturbs me. I don't want a repeat of the first browser wars. Hopefully Mozilla will work toward w3 standards in this effort, since Google clearly hasn't with Dart.


As far as I know, asm.js is a strict subset of JavaScript -- I don't believe it's "browser-specific" in a meaningful way. Rather, it shows that it's possible to take a subset of JavaScript and make it run at near-native speeds, which means that there's still a lot of room left for the VMs to improve.

From glancing at the benchmarks in the Ars article, it looks like asm.js generally runs around twice to four times as fast as the same code without asm.js-specific optimizations. This is a huge improvement, but it's not the difference between something being blazing fast in one browser and unusably slow in another.


In some applications it doesn't matter, in some it's crucial. If your game runs at 60 fps in Firefox and 15 fps in other browsers, it's effectively a Firefox-only game.

Additionally, browser games can't have a blocking main game loop (because it would hang the browser), so they often use requestAnimationFrame to trigger updates at the best sustained refresh rate available. Even if only one frame out of every 10 in your game takes 20 ms instead of 16 ms, your game won't run at 60 fps anymore, but at 45 fps or 30 fps, because the browser is trying to choose a refresh rate that will work. And that's a big difference in experience.
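
A sketch of how that shows up (update/render are stand-ins for the game's own logic): a requestAnimationFrame loop that flags frames which miss the ~16.7 ms budget.

    function update(dt) { /* game logic goes here */ }
    function render()   { /* drawing goes here */ }

    var last = performance.now();
    (function frame(now) {
      now = now || performance.now();
      var dt = now - last;
      last = now;
      if (dt > 17) {
        // one slow frame is enough for the browser to settle on 45 or 30 fps
        console.log('missed vsync: frame took ' + dt.toFixed(1) + ' ms');
      }
      update(dt);
      render();
      requestAnimationFrame(frame);
    })();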


At Google I/O this year, Google indicated some interest in asm.js, and noted that since asm.js was released V8 has already gotten significantly faster in running asm.js code.

With regards to Dart, Google has said that they plan to get the language formally standardized after reaching the stable 1.0 release. It doesn't make sense to try to standardize something that's still in beta.


I hope Dart's beta won't last for 10 years, like their other betas.


The team said at I/O that they're trying to hit 1.0 sometime this summer.


There's a bug filed against V8 about implementing asm.js: http://code.google.com/p/v8/issues/detail?id=2599


I think this is actually a very good thing. Before the next big thing can be standardized, it should be created. Google tried to create the next big thing by moving in several directions, among them Dart and PNaCl (which, IMO, is the way to go: it provides good browser integration and ~95% of native-code performance, with multithreading and other stuff). Mozilla, IMO, is more interested in keeping the status quo and resisting any significant innovation.


There are two kinds of innovation around technology platforms: you can innovate with the platform itself, by designing new operating systems, programming languages, browsers, libraries etc. Or you can innovate on top of the platform, by doing new things a level up. We benefit most from a balance of the two.

Mozilla aren't resisting innovation. They've chosen to keep Javascript (the platform) and innovate on top of it, e.g. with pdf.js. Google is aiming to develop alternative platforms. They're both valuable activities.


I really disagree with the idea of asm.js. I think that the whole asm.js thing has got more people confused into thinking that they can't build fast applications in regular JavaScript.

That is not true. JavaScript code is usually compiled to native code now and is very fast -- generally much faster than Ruby and Python.

I wish Mozilla and other core browser teams would focus on better WebGL support in more drivers and devices as well as better JavaScript engines -- for example Safari especially stands out with their deliberately crippled performance, and Microsoft with their deliberately crippled featureset.

The great thing about the web is convenient APIs like WebGL and nice languages like CoffeeScript. ASM.js negates that for no good reason. There are huge opportunities for exciting WebGL games which don't require the most optimized possible JavaScript, that just haven't been explored yet, and within a couple of years ordinary hardware will go much faster anyway.


> I think that the whole asm.js thing has got people more people confused into thinking that they can't build fast applications in regular JavaScript

I don't think you can blame asm.js for this. It's not targeted at web developers, and regular JS is by all means fast enough for web applications in general. But some high-performance applications like games and language runtimes are running into performance bottlenecks with plain JS, and there are large legacy code bases that are too expensive to port. It's not meant to hurt JavaScript at all, quite the contrary. It's an alternative to actually running native code in the browser, like NaCl. asm.js applications can in fact gradually migrate to JS.

> I wish Mozilla and other core browser teams would focus on better WebGL support in more drivers and devices as well as better JavaScript engines

I don't see how this is related, asm.js applications benefit from better WebGL support all the same. It's still JS running in the browser, it doesn't have direct access to any low level APIs.

> There are huge opportunities for exciting WebGL games which don't require the most optimized possible JavaScript, that just haven't been explored yet

I think you can build pretty neat web-based games even with plain JS, thanks to WebGL. Probably not with really high-end graphics or physics, but good enough. However, are you expecting game companies to rewrite their engines in JS just to get into the browser, not even knowing if it'll pay off for them? With asm.js, it's just another platform supporting C++ they can port to (C++ is ubiquitous in the industry). Mozilla and Epic are already working on porting Unreal Engine (Mass Effect and tons of others) to the web.

If browser games turn out to be the future, I'm sure we'll get engines focusing just on browser games, gradually rewriting large parts of their engines in JS. At the same time, we'll see better support for web-based games in consoles and on mobile devices.


What do you dislike about the idea of asm.js? (You mention that some people think asm.js is evidence that you can't make things fast enough in regular JavaScript, but that is something you dislike about people's reactions to asm.js, not an issue with the idea of asm.js.)


It promotes that idea.

Also, asm.js negates the advantages of the nice web APIs and type-free programming languages etc.

Read what I wrote.


Being able to use any language is better than having to use one language littered with WTF pitfalls because one guy had to hack it together in a week. And the debate about type checking and static analysis is far from settled.


It feels like the most interesting use of asm.js would be to bring existing C code into JavaScript and have it run at reasonable speeds. Things like small (or not) database engines, authentication or encryption libraries, data parsing utilities etc...


Near native performance is a very good thing especially in the fields such as game development.

However, it's a pity that Mozilla doesn't address the main problem with the web platform: the lack of a good platform and set of APIs, something like what Java or .NET provides. There are a large number of good JavaScript applications (Google Docs, GMail, etc.), which are produced by heroic efforts of the teams who created them. The same effort could have been spent on hundreds of more useful pieces of software if JavaScript were replaced by something more adequate.


Look around you at the Web: all the top sites use JS heavily and without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).

If the old big-OO-frameworks really conferred such huge fitness advantages over JS, you would not see them dead on the client. They would have been used, and their plugins would have been supported better by the plugin vendors.

JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9 offers one) and not-too-OO-or-huge frameworks.

/be


>Look around you at the Web: all the top sites use JS heavily and without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).

Java isn't used in the browser because of its awfully bad user experience and bad security (personally, I disabled Java in the browser). JavaScript applications just look much better and behave much more smoothly, which is more important on the web than ease of development and performance. Web applications are much more readily used than desktop applications, and that's why we have to use them. It's just economic incentive. If we create a desktop application, the user base we sell to will be much smaller than if the same application were provided on the web. I'd love to use technologies which make me more productive than JavaScript, like JavaFX and Silverlight, or at least Flash. However, there's a large percentage of users whose browsers don't support them, and even where they are supported they provide an inferior user experience. I (and many other people) use JavaScript not because it's great but because it's the only platform which is universally supported by browsers now and provides a superior user experience. It's the only reason to use it.

> without "heroic efforts" compared to Java (dead on the client) or .NET (WPF is dead too).

Most of the top sites are quite trivial in code complexity. They are more complex design- and UI-experience-wise than code-wise. There are, of course, complex web applications, like Google Docs, Cloud9, Gmail, and Google Reader, but they were created through decidedly heroic efforts, and they don't reach the complexity of the top desktop applications. Where is the web-based Mathematica, 3DS Max, AutoCAD, or IntelliJ? When will web-based office applications have the performance of MS Office?

As an indicator of how complex these web applications are, you can look at how many web frameworks have non-trivial collections, like HashSets, HashMaps, TreeMaps etc. Only the following frameworks support them: Closure Tools, GWT, and Dart. Most of the popular JS frameworks used by the top sites don't.

>JS these days has a lot going for it, including IntelliJ-style IDEs (Cloud 9 offers one) and not-too-OO-or-huge frameworks.

I actually work at JetBrains (the creator of IntelliJ), and I can say that Cloud9 provides an IDE experience from the 90s. The only meaningful features it supports are code completion and error highlighting. It has no refactorings, find-usages, or many other smart features. I think JavaScript is to blame; I feel that code of such complexity is near impossible to write in a language like JavaScript (because of its lack of a static type system).


Quick reply (thanks for the well-formatted cited text!).

* Java didn't have the bad security rep until relatively recently. Java had nice-looking UX in the 90s (Netscape bought Netcode on this basis), much nicer than Web content. Didn't help.

* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.

* Large mutable-state collection libraries are an anti-pattern. Functional structures, when hashes and arrays do not suffice (and even there), are the future, for scaling and parallel hardware wins.

* Conway's Law still applies. Too often, bloated OO code is an artifact of the organization(s) that produced it. This applies even to open source (Mozilla's Gecko C++ code; we fight it all the time, including via JS). It definitely applies to Google (e.g., gmail, Dart at launch). Perhaps there's no other way to create such code, and we need such programs as constituted. I question both assumptions.

* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR. But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.

In sum, if the web ever becomes big-OO as Java and .NET fans might like, I fear it will die the same death those platforms have on the client side. Another example: AS3 in Flash, also moribund. These systems (even ignoring single-vendor conflicts) were too static.

The Web is not the desktop. Client JS-based code can be fatter or thinner as needed, but it is not as constrained as in static languages and their runtimes. Distribution, mobility, full-stack/end-to-end (Node.js) options, offline operation, multi-party and after-the-fact add-on and mash-up architectures, social and commercial benefits of the Web (not just of the Internet) -- all these change the game from the old desktop paradigm.

JS has co-evolved with the Web, while the big-OO systems have not. This might still end up in a bad place, but so far I don't see it. JS can be evolved far more easily than it can be replaced.

/be


>* Web != Desktop. Large desktop apps are the wrong paradigm on the web. You won't see a Web-based Mathematica rewritten by hand in HTML/JS/etc. You will see Emscripten-compiled 3DS Max (see my blog on OTOY for more). The reasons behind these outcomes should be clear. They have little to do with JS lacking Java's big-OO features.

I am actually not defending big-OO features (I think 90s-style big OO is obsolete). I like a mix of OO and functional programming and the results it confers on code (see, for example, Reactive Extensions: it's very easy to learn, expressive, and compact). The feature I miss in JavaScript, and which platforms such as the JVM and .NET have, is ease of maintaining code, mainly through a sound type system and languages created with tooling in mind.

>* Glad you brought up refactoring. It is doable in JS IDEs with modern, aggressive static analysis. See not only TypeScript but also Marijn Haverbeke's Tern and work by Ben Livshits, et al., at MSR.

The problem with algorithms similar to Tern's is that they work well only until we use the reflective capabilities of the language. However, most libraries do use them, and as soon as that happens, algorithms such as Tern's infer the useless type Object.

>But automated refactoring is not as much in demand among Web developers I know, who do it by hand and who in general avoid the big-OO "Kingdom of Nouns" approach that motivates auto-refactoring.

There are refactorings which can be useful in any language. My favorite is rename; I usually can't come up with a good name on the first attempt. Others are extract/inline method (extract/inline variable is easy to implement in JavaScript).

Other maintainability-related features are navigate-to-definition and find-usages. Unfortunately, language dynamism makes them imprecise, and code maintenance becomes a nightmare, especially once you have > 30 KLOC of code. You have to recheck everything manually, which is very error-prone. Tests can help, but they also require substantial effort.


Tern's static analysis is based loosely on SpiderMonkey's type inference, which does well with most JS libraries.

Yes, some overloaded octopus methods fall back on Object. What helps the SpiderMonkey type-inference-driven JIT is online live-data profiling, as Marijn notes. This may be the crucial difference.

However, new algorithms such as CFA2 promise more precision even without runtime feedback.

And I suggest you are missing the bigger picture: TypeScript, Dart, et al., require (unsound) type annotations, a tax on all programmers, in hope of gaining better tooling of the kind you work on.

Is this a good trade? Users will vote with their fingers provided the tools show up. In big orgs (Google, where Closure is still used to preprocess JS) they may, but in general, no.

Renaming is just not high-enough frequency from what I hear to motivate JS devs to swallow type annotation.

/be


>And I suggest you are missing the bigger picture: TypeScript, Dart, et al., require (unsound) type annotations, a tax on all programmers, in hope of gaining better tooling of the kind you work on.

In many cases types can be inferred. ML is able to infer almost all types in a program (though the algorithm requires that the language not have subtyping). Haskell has very good type inference even with its richer type system (you declare very few types). Both have strong static type systems and don't tax developers by making them declare every type. The algorithms used in Haskell are complicated, but they can be implemented.


I know about ML and Haskell but let's be realistic. Neither is anywhere near ready to embed in a browser or mix into a future version of JS.

We worked in the context of ES4 on gradual typing -- not just inference (as you imply, H-M is fragile) -- to cope with the dynamic code loading inherent in the client side of the Web. Gradual typing is a research program, nowhere near ready for prime time.

Unsound systems such as TypeScript and Dart are good for warnings but nothing is guaranteed at runtime.

A more modular approach such as Typed Racket could work, but again: Research, and TR requires modules and contracts of a Scheme-ish kind. JS is just getting modules in ES6.

Anyway, your point of reference was more practical systems such as Java and .NET but these do require too much annotation, even with 'var' in C#. Or so JS developers tell me.

/be


"JS can be evolved far more easily than it can be replaced." - this sums up everything :)


> HashSets, HashMaps, TreeMaps etc.

As a note, the built-in JS object type is a hash map; although it has the annoying property of requiring keys to be strings, it still suffices for most uses of maps and sets.
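For instance, a minimal sketch using nothing beyond a plain object (the addWord/hasWord helper names are made up):

    var seen = {};                      // plain object doubling as a set of strings
    function addWord(word) {
        seen[word] = true;
    }
    function hasWord(word) {
        // hasOwnProperty guards against inherited keys like "toString"
        return Object.prototype.hasOwnProperty.call(seen, word);
    }
    addWord("foo");
    hasWord("foo");      // true
    hasWord("toString"); // false, even though it exists on Object.prototype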


Indeed. See also JSON.

ES6 brings Map, Set, WeakMap, and WeakSet. The first three are already prototyped in Firefox and (under a flag) in Chrome.
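A rough sketch of the ES6 API as currently drafted (details may still change); unlike plain objects, Map and Set accept arbitrary keys and compare them by identity:

    var key = { id: 1 };                // any value can be a key, not just strings
    var m = new Map();
    m.set(key, "payload");
    m.get(key);                         // "payload"
    m.has({ id: 1 });                   // false - a different object identity

    var s = new Set();
    s.add(key);
    s.has(key);                         // true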

/be


It seems that the general idea of the web being slow is so polarizing that none of us can even pinpoint where the problem is. I suspect the problem is a multidimensional one.


tl;dr: it's not as fast as native, but it's much better than AT expected, and actually within 2x the speed of native.

Also, it's slightly slower (a few %) than regular JS on browsers without asm.js support.


"within 2x the speed of native"

Of crippled native, with things like SIMD instructions disabled.


With some optimizations unrealized: it's not like everything can use SIMD or that all problems which could theoretically use SIMD actually benefit. Comparing scalar code is still useful for the vast majority of programs executed.


Furthermore, explicit SIMD code (for example, using SSE functions) is not portable. The comparison here was portable C/C++ to portable asm.js.


First of all clang and LLVM support the GCC vector extensions which means you can write portable SIMD code: http://clang.llvm.org/docs/LanguageExtensions.html#vectors-a...

Second, compilers also do auto vectorization (automatically vectorizing code that is not explicitly written to use vector types / SIMD intrinsics) and in addition SSE is used even for scalar floating point operations these days. It doesn't look like asm.js even lets you operate on single precision (32 bit) floating point values which can be a performance issue in some cases.

Last, web workers are less flexible than native multithreading, which offers shared memory between threads, atomic instructions, mutexes, etc.
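To illustrate the copy semantics (a minimal sketch; 'worker.js' and the payload are made up): workers only exchange messages, and posted data is structured-cloned rather than shared (transferable ArrayBuffers move ownership, they still don't share it).

    // main thread
    var worker = new Worker('worker.js');
    worker.onmessage = function (e) {
        console.log('sum from worker:', e.data);
    };
    worker.postMessage([1, 2, 3, 4]);   // the array is copied, not shared

    // worker.js
    onmessage = function (e) {
        var sum = 0;
        for (var i = 0; i < e.data.length; i++) sum += e.data[i];
        postMessage(sum);
    };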

These and other issues mean it will not be possible to squeeze as much performance (and as a result power savings) out of asm.js as from native code or PNaCL.


> First of all clang and LLVM support the GCC vector extensions which means you can write portable SIMD code

It's portable in one sense, but not another - it doesn't work in other compilers like MSVC, which was used in this article.

> Second, compilers also do auto vectorization (automatically vectorizing code that is not explicitly written to use vector types / SIMD intrinsics)

Yes, if the native compiler did auto vectorization here, it helped it. asm.js still did very well though, so I suspect auto vectorization could not be achieved.

> and in addition SSE is used even for scalar floating point operations these days.

Yes, in both native compilers and JS engines. Likely SSE was used in both sides of the comparison here.

> It doesn't look like asm.js even lets you operate on single precision (32 bit) floating point values which can be a performance issue in some cases.

True, we are investigating that. Note that you can in some cases optimize double precision operations into single precision ones, even without explicit syntax.

> These and other issues mean it will not be possible to squeeze as much performance (and as a result power savings) out of asm.js as from native code or PNaCL.

If we are talking about the current state now, then PNaCl does not support SIMD either. If we are talking about eventually, then eventually JS might gain those things too.


Ah, I didn't know PNaCl didn't have explicit SIMD through vector types yet. Sounds like it should be relatively easy to add though since LLVM already provides them.


> it's slightly slower (a few %) than regular JS on browsers without asm.js support

You sure you meant what you said here?


Check this out (from the asm.js spec)

    function add1(x) {
        x = x|0; // x : int
        return (x+1)|0;
    }
This declares x to be an int in asm.js, which allows a bunch of optimizations. In a non-asm.js engine, it is an extra 2 operations.
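For context, asm.js code like this lives inside a module marked with a "use asm" directive, roughly along these lines (a minimal sketch following the spec's module shape; AddModule and the heap size are made up):

    function AddModule(stdlib, foreign, heap) {
        "use asm";
        function add1(x) {
            x = x|0;                    // coerce the parameter to int
            return (x+1)|0;             // the result is also an int
        }
        return { add1: add1 };
    }
    // Runs in any JS engine; an asm.js-aware engine can validate and
    // compile the whole module ahead of time.
    var add1 = AddModule(window, {}, new ArrayBuffer(0x10000)).add1;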

It's not difficult to believe that a non-asm.js engine would be slower with the type annotations than without, although I don't have any performance numbers to check.


It's not an extra two operations. An optimizing JIT like CrankShaft or IonMonkey can remove the first |0 operation in code paths where it is not needed (that is, where you call it with an integer), and can remove the second |0 even more easily by simple inference - in fact, the second |0 allows the JIT to emit a 32-bit addition, with no overflow checks. So it can make code faster, with or without asm.js.


Count the number of times you wrote "can" vs how many times you typed "does". Similar stuff happens with C++ compilers all the time, where certain code "can" help the compiler but, in practice, doesn't always.

The linked benchmarks do show a slowdown with asm.js in IE10. Whether this is due to type annotations or something else I don't know.


Are you saying it's as unpredictable as C++ performance then? I'll take that ;)

I wrote "can" because there are no guarantees, but in practice, this is definitely done in Firefox and Chrome. See

http://www.arewefastyet.com/#machine=11&view=breakdown&#...

- they don't get that close to native speed without such optimizations.

Not sure what is going on in IE10, but I am more curious about IE11.


[deleted]


Grandparent may be saying that, but that's not true.

The comparison that is close is code produced from a C compiler for normal JS, versus code produced from the same compiler and code for asm.js. BUT code that would be written by a human would be expected to be more compact and run much faster than either of those.

Furthermore, the code that they wrote would not run on all browsers, and when it did run it often crashed the browser. Human-written code would not have those problems.


Thanks for pointing out my error.

(Hope you don't mind that I deleted my comment.)


Usually in a case like that I'd edit it to add an update admitting the correction.

The rest of what you said was quite useful for people who didn't want to read a long article.


It would be interesting to see an implementation of this on the ARM platform. A Firefox OS-powered mobile device would be really promising with this optimization.


asm.js is already enabled on Firefox for Android nightly builds. We hope to have asm.js enabled in Firefox OS by the 1.2 timeframe (middle/end of 2013)


I think it's becoming clear that there is room for two classes of web language: one for general use (JavaScript) and one for high performance applications or games. Let's get a real language alternative, free of half measures or compromises.


All languages and VMs are compromises: JavaScript, JVM, CLR, LLVM, Flash, etc. Question is which compromise you prefer.


What do developers want in a development environment these days? Is C/C++ the next wave of web development? Tooling is a lot better than when I did it 15 years ago. Dart, however, seems like a friendlier environment, but it probably won't be as fast as C++ (asm.js).


> Is C/C++ the next wave of web development?

asm.js is certainly not targeted at writing web applications in C or C++. It's more about supporting arbitrary languages and getting existing native code bases (e.g. libraries and game engines) on the web.

There's some more background in this presentation: http://kripken.github.io/mloc_emscripten_talk/

(That said, based on anecdotal evidence, C++ seems to work pretty well on large projects where predictability and reliability matter. So for really large, complex web apps, C++ might become a popular choice. Not really seeing this happen though, as web applications generally handle the hard stuff in the backend.)

> Dart, however, seems like a friendlier environment, but it probably won't be as fast as C++ (asm.js).

It probably doesn't have to be, most of the time. Hardly anything in the web applications I've worked on was ever CPU-bound; the network is usually the big bottleneck. We're just scripting the browser, after all; most of the hard stuff (rendering, storage engines, etc.) is already taken care of by native code.


I wonder what the Internet would have looked like if it had developed the way Alan Kay proposed (i.e., the browser as a mini operating system). It's quite possible that we'd have had much better performance (among other things) from the start.


Why don't they just insert regular C/C++ source code inside HTML and be done with it? Eventually the browser is just going to be a download manager that downloads executable code and runs it locally :P


Emscripten can't compile LLVM yet, so you can't put the compiler in a web page. And considering that a PDF reader is a couple of megabytes of JavaScript, LLVM would probably be tens of megabytes.


No, it can't. C is not the speed limit. Top native performance is achieved by an ungodly combination of SIMD intrinsics, compiler-assisted optimization, cache-aware data structures, and assembly.


Why don't they first produce near-Chrome performance for their browser? Every time I use Firefox, it slows down to a snail's pace and eventually crashes if there are more than 4-6 tabs open.


Is your FF up to date? Since around v20 it feels way snappier than Chrome, at least on OSX.


I switched to Chrome right around Firefox v3.6. I've tried Firefox a few times since, but I don't remember which version it was on at the time.


Your Firefox crashes with more than 4-6 tabs? That's definitely not normal. Check for misbehaving addons and malware and file a bug if it's reproducible.


I didn't use a single addon.


I have about 300 tabs open on my Firefox, and the Gecko Profiler is active. Things are running very smoothly, thank you very much.

So, my best guess is that you have an add-on killing your performance and stability.


Nope, I ran no addons.


You seem to say "ran", which implies this was some time in the past.

When exactly was this? Also, FF is moving very rapidly. I remember FF 19 (possibly 20) with no addons being way slower on my laptop than Chrome, while its nightly at the time - FF 22 (or 23-24) - ran blazingly fast (with the same addons as FF 19), on par with Chrome.


I switched to Chrome shortly after FF 3.6 and have only looked back 3 or 4 times since.


3.6 was a while ago, and I believe you didn't need add-ons to have the trouble you describe.

I use Firefox and Chrome every day. My results line up with recent HN comments about how Firefox is competitive, uses less memory at scale (lots of tabs), still janks worse, but may be actually more stable at the moment. Flash is a big source of instability still, for both browsers.

My bottom line: both Firefox and Chrome are competitive, leading-edge browsers. Firefox on desktops is pure open source, Chrome is not (if that matters to you; it does to me a bit, but I'm a realist). AwesomeBar wins for me over Omnibox.

I hope you'll give Firefox a try again.

/be


You are far from the first Mozilla employee who has responded to my complaints about Firefox, but you are the only one thus far who has been civil. In fact, just a few days ago, I was trolled on Twitter by a Mozilla employee over my responses here.

So, I just wanted to say thank you.


Sorry to hear about the trollery. I'm looking into it.

I wonder if you have a profile that goes back years and has some property that tickles a latent bug. Sorry if I missed it: have you tried running on a brand new profile? You'll have to use the -ProfileManager option on the command line at startup, I think.

/be


I'll give it a try on my home system again tonight. As for the trollery, the exchange is still visible via my Twitter profile (@jackmaney).


That might be true in your case, but I'm running Firefox 21 smoothly with over 11 tabs open as I type this. And I have a truckload of addons: approximately 12 (of 24 installed) change my addon bar.

What you are experiencing is definitely abnormal. The only time I have that kind of performance issue is on my old laptop. In that case, try switching to Aurora/Nightly; it might help.


Exactly. I don't know how I'm supposed to trust them "making the web faster" when I can't scroll up and down a page.


You may have an addon that is leaking memory or your profile may be corrupted. Consider restarting without any Addons or try the Reset feature to start with a new profile but with your key user data migrated. https://support.mozilla.org/en-US/kb/reset-firefox-easily-fi...


My profile is apparently constantly being corrupted. If I have to wipe Firefox's data every other day because of profile corruption, then what is the point? If addons are guaranteed to mess things up, then why does Firefox have that feature?

Let's be honest, this isn't meant to be helpful, it's just meant to shift blame back to the user.


s/(shift blame back to the user)/be passive aggressive and \1/


Finally, I can write all my web apps in C/C++. Dream come true!


Well, there still is no interface to the DOM. So it really depends on your definition of "web app". It is quite comparable to applets IMHO. Except applets are less of a hack. I wish Sun had invested more time in their "sandbox".


> It's a limited, stripped down subset of JavaScript that the company claims will offer performance that's within a factor of two of native—good enough to use the browser for almost any application.

Except for running webpages scripted with Dart. It's not fast enough for that application apparently. https://news.ycombinator.com/item?id=3095519


Who knows how fast Emscripten+asm.js-compiled DartVM would be compared to Dart2JS in 2013? I don't know yet, and apparently neither do you. JITs can be tricky, but @azakai is having good results with Emscripten+asm.js'ed LuaJIT2.

What I wrote 593+ days ago was about the Dash memo's Microsoft-like strategy. My point then was not that it couldn't ever be defeated by something like our work on asm.js and Emscripten. Never say never. My point rather was that the Dash strategy intentionally used Google's big resources to push a gratuitously non-standards-based agenda at the expense of its Web-standards-based efforts.

Indeed, since then it has become clear that Google miscalculated. Dart even gave up bignums (the int type) to support Dart2JS/DartVM equivalence, which I think is a mistake. Bignums are actually on the JS standards-based agenda:

http://wiki.ecmascript.org/doku.php?id=strawman:bignums

http://wiki.ecmascript.org/doku.php?id=strawman:value_object...

See also

https://bugzilla.mozilla.org/show_bug.cgi?id=749786

Given all the time that has passed since 2010, Google champions in Ecma TC39 could easily have worked bignums into ES7 if not ES6.

At a higher level, by over-investing in Dart and under-investing in JS (the V8 team was moved from Aarhus to Munich and with the no-remoties rule had to be rebuilt), Google has missed opportunities such as game-industry work we announced at GDC with Epic. This is "ok", it's their choice, but I still say that it is inherently much more fragmenting than any optimization of the asm.js kind, and that it under-serves the standards-based Web.

Maybe in a few years we can evolve JS to incorporate whatever helps DartVM beat Dart2JS, if there's any gap left. However, the idea that mutable global objects in JS necessarily mean VM-snapshotting is impossible is simply false. On the other hand, do we really need VM snapshots to speed up gmail startup? LOL!

/be


What about PNaCl? What's the reason you don't support it in Mozilla? It's based on open source tools, and can be easily integrated into any browser.

P.P.S. IMO, the biggest problem with JavaScript isn't its performance. Performance is OK, and enough for games, thanks to WebGL. The language, like many other dynamic languages, is just not very suitable for large-scale development. People who write such applications have to resort to tools like GWT, which add substantial overhead (both in compilation time and performance). Does Mozilla have any plans to improve the situation here?


Your green-colored id says you are new here, and yet you seem unwilling to do basic research on HN about the topic you ask about. I'll take this in the "Dear LazyWeb" spirit, assume that you are not trolling, and give some links. First, one of a few obvious searches:

https://www.google.com/search?q=site:news.ycombinator.com+pn...

From the results, an entry point into a deep thread, posted by Maciej Stachowiak of Apple:

https://news.ycombinator.com/item?id=4648045

Further in this sub-thread, I comment on the loaded language used to open-wash NaCl and portray critics of it as haters:

https://news.ycombinator.com/item?id=4650689

Back to your comment: "It's based on open source tools" is true of Emscripten too.

The bit about "can be easily integrated into any browser" is false due to Pepper, the large new target runtime for *NaCl. Pepper is a non-standard and unspecified-except-by-C++-source plugin API abstracting over both the OS and WebKit -- now Blink -- internals.

To make such an airy assertion in an under-researched comment makes me suspect that you don't know that much about either PNaCl or "any browser". So why did you make that confident-sounding claim?

These days, large apps are written in JS, even by hand. GWT is not growing much from what I can tell, compared to its salad days. Closure is used the most within Google, and Dart has yet to replace Google's use of GWT + Closure.

Outside Google, hundreds of languages compile to JS (http://altjs.org/). CoffeeScript is doing well still. TypeScript is Microsoft's answer to Dart, and more intentionally aligned with the evolving JS standard.

"Does Mozilla has any plans to improve the situation here?"

Have you heard of ES4? Mozillans including yours truly poured years into it, based on the belief that programming-in-the-large required features going back to "JS2" in 1999 (designed by Waldemar Horwat), such as classes with fixed fields and bound methods, packages, etc.

Some of the particulars in ES4 didn't pan out (but could have been fixed with enough time and work). Others are very much like equivalent bits of Dart. One troublesome idea, namespaces (after Common Lisp symbol packages) could not be rescued.

But ES4 failed, in part due to objections from Microsofties (one now at Mozilla) and Googlers. In a Microsoft Channel 9 interview with Lars Bak and Anders Hejlsberg, Lars and Anders both professed to like the direction of ES4 and wondered why it failed. _Quel_ irony!

As always, Mozilla's plans to improve the situation involve building consensus on championed designs by one or two people, in the standards bodies, and prototyping as we specify. This is bearing fruit in ES6 and the rest of the "Harmony era" editions (ES7 is being strawman spec'ed too now; both versions have partial prototypes under way).

Founding Harmony email (on the demise of ES4):

https://mail.mozilla.org/pipermail/es-discuss/2008-August/00...

A talk I gave last fall on ES6 and beyond.

http://brendaneich.github.io/Strange-Loop-2012/#/

For programming in the large, ES6 offers modules, classes, let, const, maps, sets, weak-maps, and many smaller affordances. I hope this helps. Even more is possible, if only the browser vendors and others invested in JS choose to keep working on evolving the language.


>To make such an airy assertion in an under-researched comment makes me suspect that you don't know that much about either PNaCl or "any browser". So why did you make that confident-sounding claim? Unfortunately, I can evaluate technologies according only to my experience and knowledge. According to my limited knowledge and experience, what PNaCl evolves to seems like the way to go: I can use whatever language I like to write code; I can utilize resources of computers efficiently; The code is performed in a sandbox; It's based on non proprietary standards (I mean LLVM, not Pepper).

>The bit about "can be easily integrated into any browser" is false due to Pepper, the large new target runtime for *NaCl. Pepper is a non-standard and unspecified-except-by-C++-source plugin API abstracting over both the OS and WebKit -- now Blink -- internals. It's a new technology, how is it supposed to be standardized? IMO, it's better to create some implementation, and then, to standardize it, not vice versa. AFAIU, how Pepper API works, the application is run as a separate process, operating system APIs are isolated with Pepper API and there's a way to post JavaScript code to be executed in browser's event dispatch thread to update and query DOM. This API seems to be narrow and generic enough to be supported in any browser. Code which provides OS services can be reused from code as is, the only variable part is DOM interaction, which is narrow enough to be supported in any browser.

>Have you heard of ES4? Mozillans including yours truly poured years into it, based on the belief that programming-in-the-large required features going back to "JS2" in 1999 (designed by Waldemar Horwat), such as classes with fixed fields and bound methods, packages, etc. I closely monitored what have been happening to Harmony, and it was obvious that this effort wouldn't have panned out. There were just too many complex features to be implemented (for example, generics) in a single release.

>These days, large apps are written in JS, even by hand. GWT is not growing much from what I can tell, compared to its salad days. Closure is used the most within Google, and Dart has yet to replace Google's use of GWT + Closure. I myself write large scale JS apps with GWT. It's really painful but if it was written in plain JavaScript it would have been much worse (I think it event would have been impossible to write and debug code which I write). GWT has its disadvantages: compilation time is very slow, I don't have access to many APIs (mainly reflection and dynamic code loading), devmode for all browsers but Firefox is awfully slow. But despite its disadvantages, it's the best tool for my job (I debug most of the code in JVM, by abstracting away access to DOM).

Both Mozilla and Google agree on the goodness of LLVM, since both PNaCl and the asm.js toolchain are based on it. Google has solved the problem of efficiently representing LLVM in a platform-independent way and safely executing it in a sandboxed environment. Why not create a PNaCl- and LLVM-based standard to allow efficient execution of code? Is it so difficult?

Mozilla promotes asm.js as a "standards-compliant" way to do the same thing as PNaCl. However, there are a lot of disadvantages to this approach: asm.js is text-based and very verbose (a binary format would save a lot of traffic); asm.js doesn't provide access to SIMD and multithreading, which are crucial for good performance; and asm.js doesn't work well out of the box - it requires some effort to support on the browser vendors' side (for example, the only browser where the Unreal demo initially worked was a Firefox nightly build). Is it so hard to find consensus among browser vendors (or at least Mozilla and Google) in this respect?


What a formatting mess -- I hope you can still edit your post and put two newlines between the >-cited lines from my post and the text where you start replying? Thanks.

If I can read through the mess, you seem to be saying you can't evaluate PNaCl, so you'll just make airy and overconfident assertions about it. Even on HN, that doesn't fly. -1!

When I challenge your further assertion that something is "easily integrated into any browser", you switch subjects to arguing for prototyping and developing a complex system before standardizing it. Good idea (but beware the difficulty standardizing it late if simpler evolutionary hops beat it in the meanwhile; more below on this). However true, that is a dodge, it does not excuse your assertion about "easily integrated".

Doubling down by citing "the DOM" as if Pepper were only about OS and DOM APIs, or as if even just the DOM implementations in different engines had enough in common to make interfacing from native code (not from JS, which is hard enough even with a decade+ history of written standards and market-driven interop testing) easy, just repeats this bad pattern. So -2.

Then you seem to confuse ES4 with Harmony (which I made clear came about after ES4 failed). -3.

At this point on USENET, you'd hear a plonk. But I'll close with one more point:

Mozilla and Google using LLVM does not equate to standardizing PNaCl. Pepper is one problem, a big one, but not the only one. Consider also the folly of using a linker-oriented intermediate representation for a shelf-stable Web-scale object file format. See Dan Gohman's "LLVM IR is a compiler IR" post:

http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/0437...

Shortest-path evolution usually wins on the Web. Trying to standardize PNaCl -- including Pepper and LLVM IR abused or patched into shape as a long-lived object file format -- is an impossibly long, almost exclusively Google-dependent, path.

Extending JS among competing engines whose owners cooperate in Ecma TC39, via a series of much shorter and independently vetted and well-justified (by all those engine owners) path-steps? That is happening in front of your eyes.

You may not like it. You may want to say "Go home, evolution, you are drunk":

http://wtfevolution.tumblr.com/post/40659237456/this-pelican...

But it would be foolish to bet against evolution.

/be


>What a formatting mess -- I hope you can still edit your post and put two newlines between the >-cited lines from my post and the text where you start replying? Thanks.

Sorry for the bad formatting; by the time I realized it was broken, I wasn't able to edit it.

>At this point on USENET, you'd hear a plonk. But I'll close with one more point:

I didn't want to argue with you; I just described how it looks to me (and I mentioned that in previous comments). I have no expertise here, and I now see that there are problems which aren't mentioned by Googlers in their presentations. Maybe you are right, maybe they are right. I am sorry if I offended you.

>Shortest-path evolution usually wins on the Web. Trying to standardize PNaCl -- including Pepper and LLVM IR abused or patched into shape as a long-lived object file format -- is an impossibly long, almost exclusively Google-dependent, path.

Yes, it is. I have no choice other than to use JavaScript in one form or another. However, I would love to be able to use a beautiful, modern, well-thought-out language on the client instead of rusty JavaScript (or Dart or GWT). I like Mozilla's Rust, but it is a systems programming language. I'd love to see something of a similar spirit but oriented towards the client. Unfortunately, it will take too much time until that happens.


Not looking to argue, just add some value: you say "too much time until it happens" and I've heard that before, many times -- most recently re: Dart (see that Channel 9 Lars&Anders interview).

Funny thing, it has been years since Dart started, ditto PNaCl. Who says JS is the slow path? I suspect it will get there faster (for tangible definitions of "there", e.g., a Rust2JS compiler for you) with well-focused work.

/be


> Rust2JS

Pretty please! Excited to see pcwalton's zero.rs work, which I understand to be a prerequisite for a Rust2JS. But deeper JS integration than emscripten currently provides (e.g., mapping rust structs to harmony binary data StructTypes?) would be even more awesome. When are you going to land 749786, anyway?

Re: Dart and PNaCl--they've been useful for providing political cover to people who want to evolve JS more aggressively.


Heh, I really do need to land int64/uint64 support. The patch needs tests and some cleanup first. This week may afford some hacking time (finally) instead of just rebasing time.

In theory it shouldn't be necessary to mal-invest in long-odds/single-vendor (and therefore hard to standardize without dominant market power) innovations at high cost, just to drive minimal investment in evolving the web in shorter-odds ways. Especially when the same company is collaborating in TC39 to evolve JS, and owns the V8 engine!

Competitive strategy or just a side effect? I suspect the latter, and even if it's strategy, it's not cheap. Anyway, we didn't need it as cover to do asm.js. That came from @azakai's work on Emscripten, plus Mozilla Research's "make the Web better" agenda and answering the "JavaScript -- is there anything it can't do?" question with results (negative or positive).

/be


I hate JavaScript. Sorry if this offends anyone. I think it was good for its limited use case during the early Web 1.0 - Web 2.0 era.

So I don't like asm.js either. I much prefer LLVM or PNaCl. But that is in an ideal world; in the real world I would have to agree that asm.js seems the way forward. Even though asm.js is a shortcut, it will still take many years for it to gain enough traction and improvement to be really useful as a universal compiler target, much like what is currently happening with the JVM - except that almost everyone already has a JS engine installed, so there's no need to download a Java runtime.

That way everyone can really use whatever languages they love or want. With Everything2js.

I hope the future evolution of JavaScript will take performance into consideration - SIMD, etc.

May the day of the language-independent web come faster.


Some days, I hate JS too. But hate leads to the dark side. It is what it is. Might as well hate endosymbionts who power our cells for being ugly.

We're all over performance, including SIMD. Multiple approaches but definitely up for the Dart-like direct approach. Talking to John McCutchan of Google about lining up JS on this front so Dart2JS has a complete target.

/be


You can't simply asm.js'fy LuaJIT2 because it has an interpreter hand written in assembly and it generates native code to achieve peak performance. You can only asm.js'fy a normal Lua interpreter which is up to 64x slower than LuaJIT on benchmarks, when compared natively. So asm.js'fied Lua will be up to 128x slower. Does not sound that impressive...


Hi -- you make a good point, the one seemingly at issue in this tangent (but not really the bone of contention).

As noted, I don't know which will ultimately prevail in pure performance: DartVM or Dart2JS on evolved JS. In the near term, of course DartVM wins (and that investment made "against cross-browser standards" was the strategic bone of contention 594 days ago).

I do know that in the foreseeable term, we browser vendors don't all have the ability to build two VMs (or three, to include Lua using LuaJIT2 or something as fast, in addition to JS and Dart; or more VMs since everyone wants Blub ;-).

The cross-heap cycle collector required by two disjoint VMs sharing the DOM already felled attempts to push Dart support into WebKit over a year ago. Apple's Filip Pizlo said why here:

https://lists.webkit.org/pipermail/webkit-dev/2011-December/...

Other browser vendors than Apple may have the resources to do more, but no browser wants to take a performance hit "on spec". And Mozilla at least has more than enough work to do with relatively few resources (compared to Apple, Google, and Microsoft) on JS. As you've heard, asm.js was an easy addition, built on our JIT framework.

So you're right, an optimizing JIT-compiling VM is not easily hosted via a cross-compiler, or emulated competitively by compiling to JS. LuaJIT2 would need a safe JIT API from the cross-compiler's target runtime, whether NaCl/PNaCl's runtime or Emscripten/asm.js's equivalent.

Googling for "NaCl JIT" shows encouraging signs, although the first hit is from May 2011. The general idea of a safe JIT API can be applied to asm.js too. In any event, one would need to write a new back end for LuaJIT2.

Bottom line: we're looking into efficient multi-language VM hosting via asm.js and future extensions, but this is obviously a longer road than C/C++ cross-compiling where we've had good early wins (e.g., Unreal Engine).


> You can only asm.js'fy a normal Lua interpreter which is up to 64x slower than LuaJIT on benchmarks

Yes, LuaJIT is fast, but even their own numbers are nowhere near 64x being the mean or median,

http://luajit.org/performance_x86.html

Furthermore, these are microbenchmarks. Large codebases might paint a different picture.

I am also not sure why LuaJIT is the most interesting comparison. Yes, LuaJIT is a work of art, but even normal Lua is quite fast, beating Python and Ruby easily. So even a half-speed Lua-VM-in-JS would be competitive with other dynamic languages - which means it is fast enough for many uses.

Finally, we can certainly compile VMs that JIT, we just need to create JIT backends for them that emit JS (preferably asm.js). But LuaJIT is more tricky, as you note, because it lacks a portable C interpreter.


It is SOOO odd to see folks not "get" the JavaScript Everywhere revolution. Node, JS/CoffeeScript, JSON: Server, Client, Network Data. Complete stack, period.

C++?? Gack, so yesterday. Python, Ruby, etc .. so without dynamic windows everywhere.

Google?? So confused. What's their next project jerked out of our workflow? When is Google coming out of beta?

Keep up the good work /be

   -- Owen



