Firefox Nightly Gets New Baseline JIT Compiler (blog.mozilla.org)
248 points by barkingcat on Apr 5, 2013 | 102 comments

The amount of innovation coming from Mozilla has been staggering the last year. IonMonkey, asm.js, impressive memory usage improvements (Firefox is now by far the least memory hungry browser among the modern ones), new baseline jit...Rock on, Mozilla.

And Rust, and the Servo project linked to it, seem very interesting.

> impressive memory usage improvements (Firefox is now by far the least memory hungry browser among the modern ones)

What evidence do you base this on -- your own experience, or that of others, or measurements such as those done by Tom's Hardware, or something else?

I'm not disputing your claim, I'd just like to know :)

Thanks for the links. Unfortunately the tests that tech sites do for memory consumption are quite rudimentary -- usually they don't do much more than start the browser, open a few sites, and measure.

This kind of testing produces results that favour Firefox, because Firefox does very well when you measure memory consumption in a browser session that hasn't been alive for long, but less well in longer-running sessions. This is partly due to its mostly-single-process architecture, which makes for smaller per-tab/site overhead, but fragmentation and leaks hurt more over time.

As a result, at this point in time I find reports from actual users more convincing than measurements done by tech sites. Firefox's memory consumption has improved a lot in the last 18 months, and I've heard lots of users say it uses less memory for them than it used to. I was hoping that antonios might have something similar to say.

There's also Mozilla's own "Are we slim yet?" interface to track their progress in memory usage.


You can even see the bugs that lead to memory regressions there.

AWSY's graphs suggest that Firefox's memory consumption was best around Firefox 12, in January 2012. So if that's your evidence, it's not particularly convincing.

Fortunately, although AWSY is useful, it's also a single benchmark with a number of flaws and limitations, and should be viewed accordingly. In particular, it can miss big improvements such as https://bugzilla.mozilla.org/show_bug.cgi?id=695480 and https://bugzilla.mozilla.org/show_bug.cgi?id=689623.

MAJOR EDIT: I accidentally ran the first tests against Aurora (22a) and not the Nightly (23a). Running against the Nightly, the numbers are much closer, though sometimes better on either 21 or 23a for some simple things.

But running the typed array/pixel manipulation test, 23a seems twice as fast as the beta!


Please feel free to confirm using http://jsperf.com/canvas-pixel-manipulation/6

Original post below, with erroneous benches removed:


Hmm, I have a lot of HTML canvas-based performance tests, so I plucked two of them to give a small comparison a go. Comparing the results of these simple exercises doesn't seem too promising.

The first test I used was a simple one that merely sets every canvas property (some can be time consuming): http://jsperf.com/can-attribs

And the second one tests different ways of filling single pixels: http://jsperf.com/filling-pixels

From a practical standpoint canvas performance matters very much to me, but my tests probably aren't the best metric. Are there better tests that could be used to see the difference between the old and new compilers here?

edit: on a test comparing typed arrays to plain, performance seemed nearly (merely?) identical: http://jsperf.com/canvas-pixel-manipulation/6
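For reference, a stripped-down sketch of that kind of typed-vs-plain comparison (buffer size and fill values are arbitrary; no canvas is involved, so this only exercises the arrays themselves):

```javascript
// RGBA buffer for a hypothetical 100x100 canvas: 4 bytes per pixel.
var size = 4 * 100 * 100;
var plain = new Array(size);
var typed = new Uint8ClampedArray(size);

for (var i = 0; i < size; i += 4) {
  // Uint8ClampedArray clamps out-of-range writes the way ImageData does;
  // a plain array just stores whatever value you give it.
  plain[i] = 300;
  typed[i] = 300;          // clamped to 255
  plain[i + 3] = 255;      // alpha channel
  typed[i + 3] = 255;
}
console.log(plain[0], typed[0]); // 300 255
```

The clamping difference is one reason the two paths aren't strictly interchangeable even when their throughput looks similar on jsperf.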

I'm not sure which builds you compared. Firefox 22 is currently in Aurora phase, and Firefox 21 is Beta. It does seem like there's been a regression in the aggregate scores.

Thanks for bringing it up, though, that should be looked into. Have you submitted this regression to bugzilla.mozilla.org? If not, I'll create one for it to be looked into.

Just for kicks I ran can-attribs on a pre-Baseline and post-Baseline build of Firefox Nightly on my MacBook, and the scores received a modest (about 2-3% average) bump from the landing.

Firefox Nightly is version 23, not 22.

Oh dear I opened Aurora and not Nightly. I feel like a dunce!

I made edits to my comment, including (thankfully) the conclusion.

That's understandable. It sounds like you have a whole lot of Firefoxes. :) Plus Firefox Nightly switched from 22 to 23 just a few days ago.

I got 5,159 and 7,703 with Firefox Nightly (v23). I got 2,566 and 4,895 with Chrome Canary (v28).

Nice work, I hope the Mozilla guys see this and close the gap in the respective performance numbers :)

I also have a lot of canvas-heavy code and use typed arrays pretty extensively. I think that Mozilla's canvas implementation is actually the bottleneck here, not the JS. Chrome's canvas proves to be much faster, though.

I reported the issue about a year ago, FYI.


That issue is basically fixed in most sane code, actually; I just resolved it as such. For example, on http://jsperf.com/pixel-pre-rendering/5 I see identical numbers for the two versions of the test.

Nice, I guess it's some other issue then :(

If there's a testcase showing the problem, please file bugs! Doesn't need to be minimal or anything, just a link to the page showing the problem.

Also, to remark on recent happenings… given the recent flurry of news surrounding asm.js and OdinMonkey, there have been concerns raised (by important voices) about high-level JavaScript becoming a lesser citizen of the optimization landscape. I hope that in some small way, this landing and ongoing work will serve as a convincing reminder that the JS team cares and will continue to care about making high-level, highly-dynamic JavaScript as fast as we can.

Great answer.

If someone could answer me this it would be much appreciated: The blogpost describes the current flow of code as such: Interpreter -> JaegerMonkey -> IonMonkey

Is the process repeated every time I open a website, or do browsers cache the generated bytecode? What about popular libraries? Will jQuery get JITed differently for every website that has it?

There will be separate jitcode generated for each page. It's hard to share this kind of code because during generation we often bake immediate pointers to various things (like global objects, etc.) into the generated jitcode.

The jitcode itself isn't that big of a memory issue, though. The collected type info is more heavyweight, but that's really hard to share across different pages. There are security concerns, concurrency concerns, and just complexity.

Also, the generated JIT code isn't static and does not simply map source to native like in statically compiled languages. A lot of JS JITs, including Firefox's, dynamically rewrite the emitted code as it executes to add support for newly-seen types. So not only will the code be different from environment to environment, it will also be different within the same script execution.
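A toy illustration of that type-driven specialization from the JS side (this shows only the observable behavior that forces respecialization, not engine internals):

```javascript
// The same function body can get different specialized jitcode depending
// on the argument types the engine observes at each call site.
function add(a, b) { return a + b; }

// Monomorphic use: the engine only ever sees numbers here, so it can emit
// straight numeric adds guarded by a cheap type check.
var sum = 0;
for (var i = 0; i < 1000; i++) sum = add(sum, 0.5);

// Polymorphic use: strings now flow through the same function, so the
// engine must patch the generated code or fall back to a generic path.
console.log(sum);           // 500
console.log(add(1, 2));     // 3
console.log(add("a", "b")); // "ab"
```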

Would the same library generate the same jitcode on different websites? Does how the library is used account for this? At a high level, if I never used jQuery animation code on my site, but another site does, with the same jQuery version, would the jitcode for my site have removed the unnecessary animation stuff?

> Would the same library generate the same jitcode on different websites?

No. The jitcode is specialized based on actual observed types, so depending on how you call the methods you get different jitcode.

And do you cache jitcode of the same page between different visits?

No. However, we have had several ideas over the years to cache bytecode together with the JavaScript code, but that never really went anywhere. Caching JIT code is difficult, because addresses (like jumps) change unless you malloc at the exact same location or manually fix up everything.

It sounds like you need two separate jitcode phases: first, generate position-independent jitcode (and cache that); then for each script that requires it, load it and "link" it.
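A toy model of that two-phase idea (entirely hypothetical; real jitcode relocation involves machine addresses and patch tables, not array indices):

```javascript
// "Jitcode" here is just a list of ops where jump targets are stored as
// offsets relative to a load base. The cacheable template stays
// position-independent; a cheap "link" pass resolves the offsets.
function link(template, base) {
  return template.map(function (op) {
    return "jump" in op ? { jump: base + op.jump } : op;
  });
}

var cached = [{ op: "cmp" }, { jump: 2 }, { op: "ret" }]; // cacheable form
var linkedA = link(cached, 0x1000); // loaded at one address...
var linkedB = link(cached, 0x2000); // ...or another
console.log(linkedA[1].jump); // 4098
console.log(linkedB[1].jump); // 8194
```

The template can be reused across loads; only the link step repeats, which is the trade the parent comment is gesturing at.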

I see. I was thinking of something more bytecode-like; I didn't know that jitcode has direct addresses in it.

How much worse performance-wise would generating PIC be to address this?

If a page goes into the back/forward cache, then its js code generally gets kept around. But we don't persistently store jitcode for a page, and I don't know of any JS engine that does.

More sharing would be great for memory consumption, but it's really hard. See https://bugzilla.mozilla.org/show_bug.cgi?id=846173 for an interesting example.

Also, different sites use different versions of jQuery.

Damn this browser business is getting crazy these days.

I wonder if we'll look at how the browser performs in say 5 years from now with the sense that 2013 was basically the dark ages. Exciting times...

Dark Ages would imply that progress was set back significantly by some catastrophic event (ie: the fall of Netscape). Renaissance would be a more apt term, given the rapid advances we're going through.

> Dark Ages would imply that progress was set back significantly by some catastrophic event (ie: the fall of Netscape).

Some of us would argue that JavaScript itself was that catastrophic event. No, really -- I'm not trolling. It's no longer reasonable for people to deny that JavaScript doesn't lend itself well to optimizing run-time performance.

These Herculean efforts are impressive, but perhaps it's time to fix the language, even if that breaks backwards compatibility.

They're working on it; it's called ES6. But in terms of your claim that "It's no longer reasonable for people to deny that JavaScript doesn't lend itself well to optimizing run-time performance", I'm not sure what to say to that. JavaScript code is some of the fastest interpreted code around, and projects like asm.js take that even farther. It's a phenomenally beautiful and expressive language once you get around the fact that it has some minor warts.

I might have agreed with you 10 years ago, but I don't think the claim that JS simply doesn't lend itself to being performant is true at all -- all the evidence I see, both as a web JavaScript and Node.js developer, and following the recent relative news, has pointed me to quite the opposite: JavaScript is doing great right now.

> JavaScript code is some of the fastest interpreted code around, and projects like asm.js take that even farther.

That says something about market forces, but nothing about the language. JavaScript is the fastest dynamic language because it was the language that was most profitable to optimize.

We don't have any real-world comparisons to other languages where an equal amount of brainpower was spent on optimization so that we could see how the language itself affects things.

> We don't have any real-world comparisons to other languages where an equal amount of brainpower was spent on optimization so that we could see how the language itself affects things.

Yes we do: Lua--optimized by lots of big brains for use in gaming. And it turns out that LuaJIT is much faster than any current Javascript JIT--with the reason frequently given that it is a much simpler language.

By "lots of big brains," I think you mean Mike Pall :). But yeah, LuaJIT is really fast.

Julia (http://julialang.org/) is another dynamic language designed from the start for high performance, although the objectives are slightly different from those of Lua. While LuaJIT is fast at everything, Julia is really optimized so that running the same code many times is fast. Running a function the first time can be much slower than in other dynamic languages, I think because there is no baseline JIT or interpreter.

My impression would be that

* LuaJIT is comparable to the best javascript engines in performance

* LuaJIT is entirely the work of one person.

Lua is well designed and values simplicity, which has lent itself well to Pall's success.

And Mike Pall is a superhero when it comes to writing JIT compilers.

Also PyPy, which is generally about as fast as v8 on benchmarks.

And also because it isn't a shitty language, like js.

If that were really true, Mozilla would be writing Servo in JS, not Rust. The reality is, Rust is designed for optimum performance, JS is not, and so extracting performance is significantly more complex and also more fragile in the sense that it's easier to write code that screws up the JIT.

I really wish that the effort Mozilla is putting into Rust could be combined with the effort Google is putting into Dart, to design a new, from-the-ground-up language for the Web that combines a lot of the benefits of JS but with more predictable performance.

It's awesome that the JITs are getting better, but they don't really solve all the warts of JS.

A "bit slower" is being generous. And asm.js doesn't really represent how a general purpose web application will perform, like a Gmail, it represents mostly how a WebGL targeted cross-compiled game can perform.

The same overselling that happened with Java is happening again: don't worry, a magical JIT is right around the corner that will come within spitting distance of native C. This time it'll be different. When TraceMonkey was announced, there was a lot of excitement about how tracing JITs were going to approach C-level performance.

Are we really thinking that Javascript's VM semantics are going to last for decades? That 30 years from now we'll still be stuck with the same JS, just better VMs?

IMHO, sooner or later we're going to have to admit that a language designed in 10 days is stretched to its limit, and as amazing as the JITs and hacks like asm.js are, eventually we'll need to get together and design the next generation.

> The same overselling that happened with Java is happening again, don't worry, a magical JIT is right around the corner that will come within spitting distance of native C

For many purposes the JVM achieved the goals you mention. Humans have an infinite capacity for hand-made optimizations, which is why it's pretty hard for a virtual machine or a compiler to beat hand-written assembly by a developer that knows his shit, but it takes a herculean effort to build big, long running, highly concurrent apps in C/C++ and it takes companies with the resources of Google or Mozilla to do it.

For instance, Mozilla is still struggling with memory leaks in Firefox. Why is that? Because memory gets fragmented due to improper allocation patterns, not to mention hard-to-prevent memory leaks due to cyclic references. And to get around that without a generational garbage collector, you have to use object pools and manage allocations to a really fine level of detail. Or you have to make your app use multiple processes and simply not care much about it, like Google did in Chrome with their one-process-per-tab model, which is why Chrome chokes when many long-running tabs are open.

With a precise generational garbage collector for instance, problems of fragmentation and memory leaks due to cyclic references simply go away. People complain about the latency of JVM's CMS garbage collector, but after seeing it in action in a web app that is able to serve 10000 requests per second per server in under 10ms per request, I'm actually quite impressed. It gets problematic when you've got a huge heap though, because from time to time CMS still has to do stop-the-world sweeps and with big heap sizes the process can block for entire seconds. However CMS is actually old generation and there's also the new G1 from JDK7 that should be fully non-blocking when it matures and if you need a solution that works right now you can shell out the cash for Azul's pauseless garbage collector.

It's really hard to build a precise generational garbage collector on top of a language that allows manual memory allocation. Mono's new garbage collector for instance is not precise for stack-allocated values. Go's garbage collector is a simple parallel mark-and-sweep that's conservative, non-precise and non-generational. The most common complaint you'll hear about Go from people that actually used it is about its garbage collector, a problem that will simply not go away because Go is too low-level.

And I'm really happy about Mozilla improving Firefox. Firefox is my browser, but how many years did it take for them to solve the memory issues that Firefox had?

Yes, you probably couldn't build a reasonably efficient browser on top of the JVM right now, especially since browsers also have to run on top of devices that are less efficient, but most developers can't build browsers anyway. And in a couple of years from now, mark my words, security will be considered much more important than performance and suddenly the usage of languages in which buffer overflows are a fact of life will be unacceptable.

Also in regards to big iron, I chose Cassandra (a Java app) instead of MongoDB (a C++ app that's the darling of the NoSQL crowd). I did that because Cassandra scales better horizontally and because performance degrades less on massive inserts. Apparently low-level optimizations can't beat architectures more tuned to the problems you're having, go figure.

Rust is not intended to be a web scripting language at all AFAIK. The fact that JavaScript is a bit slower than a systems language is hardly an indictment of JavaScript.

I'm not too keen on the fact that for ALL programming tasks in ANY system you can pick amongst dozens of languages, but the web is still limited to just one. That is not modern at all.

People need and demand choice; that web programming is still a monopoly to this day is bad, even though people do achieve tremendous things with JavaScript every day.

> It's no longer reasonable for people to deny that JavaScript doesn't lend itself well to optimizing run-time performance.

For a language that "doesn't lend itself well to optimizing run-time performance", it did far better than: Python, Ruby, Perl and most other dynamic languages...

I think that is not a really fair comparison; Python with PyPy is pretty fast. Actually I would like to see a comparison of Python on PyPy vs JS in IonMonkey/V8. There was just a higher incentive to optimize JavaScript, as it was pretty slow to begin with and was exposed to a lot more users (in the sense that it runs on clients rather than servers) than Python, Ruby and Perl.

Edit: First hit on Google shows that JITed Python (PyPy) is very close to JITed JavaScript: http://blog.kgriffs.com/2012/10/23/python-vs-node-vs-pypy.ht...

I think that PyPy is pretty impressive.. I also thought that IronPython's performance was impressive.

If you look at the link to the followup post, it does show that in certain use cases NodeJS does a lot better... though without any code to review/reproduce it's hard to say.

I happen to like JS.. Python's probably next on my list of languages to learn, but right now, I'm so deep in getting more proficient with NodeJS + Grunt + RequireJS, it isn't funny... our next-gen stack is much more NodeJS and MongoDB, as a few tests and backend processes have shown them to work very well together...

We have a newer site on ASP.Net MVC 4 (started as 3, with EF), and an aging site (built on layers of .Net cruft since 2006) that's nearly unmaintainable... So I'm trying to structure things moving forward so that they will be well maintainable for the future as much as possible. Which means some new, and some bleeding edge stuff.

It also means some things I just don't care as much for... I actually like how OneJS/Browserify take CommonJS/NodeJS patterns, more than AMD (RequireJS), but AMD seems better for the client side... I also don't care for Jade so much, but it was a group decision, and going that direction to share templates for email/client/server usage.

Still working out sharing Backbone models, etc... it's all work. Sorry for blathering on.

If I were doing desktop development, I'd be far more inclined towards Python today. As it stands, imho JS is a better fit for web development.

I've pretty much ignored javascript until this week.

What exactly is wrong with it? Perhaps that is too broad. Can you give an example of something in javascript that impedes runtime performance optimization?

The browser will eat everything.

How feasible would it be to have webservers compile JavaScript and serve it to the browser already preprocessed (as far as possible)? The same URL could be used, but a different mime type acceptance signaled by the browser using the Accept header. The webservers, instead of returning text/javascript, would return application/javascript+moz22.

Then, most of this race towards hotspot optimization would be less important.
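The negotiation step this proposal describes might be sketched like so (the `application/javascript+moz22` mime type is the hypothetical one from the comment above, not a real registered type):

```javascript
// Hypothetical content negotiation: pick a representation based on the
// Accept header, falling back to plain source for browsers that don't
// advertise support for the (made-up) preprocessed format.
function pickContentType(acceptHeader) {
  var accept = acceptHeader || "";
  if (accept.indexOf("application/javascript+moz22") !== -1) {
    return "application/javascript+moz22"; // serve the preprocessed form
  }
  return "text/javascript"; // plain source fallback
}

console.log(pickContentType("application/javascript+moz22"));
console.log(pickContentType("text/html,application/xhtml+xml"));
```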

I do not trust 99 servers out of 100, including not only those I intend to communicate with but others sitting in the middle of my connection, to deliver the same binary my browser would generate in a few milliseconds.

The web is a wild place, let's not increase our attack surface.

Is there a way to signal to the browser that I'd like all of my JavaScript to be compiled as well as possible and that I don't mind my users waiting while this happens? Could this be useful for complex games where start-up speed may not be deemed as important as runtime performance?

If you compiled immediately you would probably find that your games performed much worse, because the compiler would have very little information about the functions and the kind of arguments that will be passed to them.

Once a function has run many times, the compiler can use the gathered statistics (such as knowing that the arguments were always floats) to produce better optimised machine code (such as using float-specific ops, and eliminating code paths that expect arguments other than floats).

This is a common tradeoff for all JITs: should I wait longer to produce better code, or compile sooner to run the faster code earlier? You can play with this number in most Java JVMs (OpenJDK or Sun/Oracle) using the argument "-XX:CompileThreshold". This sets how many times a function should run before it is compiled. It defaults to about 1500 for a client VM and 10000 for a server VM.

As mentioned in a sibling comment, asm.js is a partial alternative to this tradeoff, as your asm.js code specifies all data types ahead of time, and hence they can be compiled up-front fairly well.
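For the curious, a hand-written toy in the asm.js style (not validated Emscripten output) showing those up-front type annotations; the `|0` and `+` coercions pin down int and double types so an asm.js-aware engine can compile ahead of time, and the code still runs as ordinary JavaScript in engines without asm.js support:

```javascript
// Minimal asm.js-style module (illustrative sketch; may not pass a strict
// asm.js validator). The coercions declare types instead of letting the
// engine gather type feedback at runtime.
function MiniModule(stdlib) {
  "use asm";
  function intAdd(a, b) {
    a = a | 0;              // parameter a is an int
    b = b | 0;              // parameter b is an int
    return (a + b) | 0;     // result is an int
  }
  function dblHalf(x) {
    x = +x;                 // parameter x is a double
    return +(x / 2.0);      // result is a double
  }
  return { intAdd: intAdd, dblHalf: dblHalf };
}

var m = MiniModule({});
console.log(m.intAdd(3, 4));  // 7
console.log(m.dblHalf(5.0));  // 2.5
```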

In a sense that's what asm.js is.

Obviously not quite what you're asking for, though.

I'm not a JavaScript or browser pro, but I wonder why the compilation is done client-side. Can't the servers do it and send the JIT code to the client instead?

Too many combinations of operating system, cpu, browser, browser version -- the server would need to have a huge number of versions built and cached. And then you need to ensure your server continually has support for all the latest browsers.

But even that pales in comparison to the biggest reason, which is security. In order to run fast, the compiled version needs to be "trusted" to do the right thing; by allowing the server to send pre-compiled code, you're opening up a huge number of attack vectors.

While the great SunSpider battles will be good for the future, does this practically matter? Does it really have any impact at all for the average user, for whom, I suspect, the up-front costs of JIT never actually pay off?

This isn't intended to be Luddism, and these improvements will benefit future apps, but so much is made about JavaScript performance (see the hoopla about JIT being disabled for embedded browsers on iOS), yet I seriously doubt it makes an ounce of difference in the real world, where the overwhelming majority of the performance limitations exist in the DOM.

Faster performance enables new types of applications - games, sound decoding/encoding, video rendering/encoding, phone apps, office apps, graphics apps, editors, etc.

Also, the new JIT can be used in backend systems similar to Node, where any performance bump helps.

People have been saying this for well over a decade now. The performance impediment remains the DOM, just as it has always been. And where real benefits come to JavaScript engines, it is through the exposure of native code to the duct-tape of JavaScript: WebGL, SVG, the Media Capture APIs, CSS hardware transforms, and on and on. Where JavaScript is the thinnest veneer over native functionality.

For example, the game logic loop cannot be offloaded to the plug-ins. Simple path-finding is expensive; updating the physical properties of thousands or tens of thousands of objects at 60 fps is expensive. These kinds of games don't exist because JavaScript is not fast enough.

I'll give you another example. There was one game I had in mind that simply involved pixels. A 100x100 grid is 10K objects; a 400x300 grid is 120K objects. 40fps x 120K is about 5M objects/second. Each object involves some amount of processing. Doubling to 800x600 gives 20M objects/second. It adds up fast.
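The arithmetic, written out with the frame rate and grid sizes from the comment:

```javascript
// Objects touched per second = frames per second * grid width * grid height.
var fps = 40;
console.log(100 * 100);        // 10000 objects in a 100x100 grid
console.log(400 * 300);        // 120000 objects in a 400x300 grid
console.log(fps * 400 * 300);  // 4800000, roughly 5M updates/second
console.log(fps * 800 * 600);  // 19200000, roughly 20M updates/second
```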

The Baseline JIT will have more of an impact on common web code than highly optimized JITs like Ion. Most web code executes for a few iterations, and runs for short periods of time. Baseline takes over from the interpreter far sooner than either Jaeger or Ion, and compiles far faster than either of them. Much more common web code will end up getting executed under the Baseline JIT than with the existing JITs.

I disagree with your sentiment that it does not make a difference in the real world. The real world is changing. The web now runs on devices which span an order of magnitude in processing power (from low end phones, to high-end workstations). Fast javascript has only been around for a few years or so. Ubiquitous connectivity (everybody connected to the internet all the time) has only just started happening. And full-fledged APIs to make the web a robust development platform (e.g. WebAudio, WebRTC, WebGL, etc.) have only recently come on the scene, or are still being solidified. I think we have yet to see the full potential of all of these technologies tapped.

In a previous life I worked in bioinformatics, and worked on a web-application to allow scientists to invoke and visualize the results of analyses on their tissue samples. Web 2.0 was all the rage back in those days, and V8's optimized JS engine was announced AFTER we developed most of our app. If we had been able to assume that kind of performance at the beginning, our app would have pushed much more of the UI to the frontend. Instead of clunky static server-generated images, we would have built beautiful, dynamic client-side visualization engines.

The last few years of progress in JS execution has opened the web up as a platform target for a large range of apps. And every inch we push that forward opens the door for a few more potential applications to reasonably be moved to the web. That's what it means to me, anyway. Push the line forward a few inches, and it means a few more features that somebody wasn't able to implement before, but are able to now. A few more apps that can target the web.

That means more to some people and less to others. But I can say with certainty that it would have meant a lot to my prior bioinformatician self. And I'm sure there are developers like me, out there, who look at what JS can do, increasing day by day, and ask themselves "now what can I build with that?".

I think it's a good thing to make the answer to that question as open ended as possible :)

Is there some particular reason to want every single application to run in a web browser?

Why do you doubt that it makes a difference in the real world?

One of the things you need to remember is that in general, web app developers need to target yesterday's browser on yesterday's hardware. So they tend to be fairly conservative about the performance requirements. But as browsers and hardware improve, the benefits trickle down until suddenly, it's possible to do something that you never could do before. So sure, the average web app, which is designed to run on old versions of IE on outdated hardware, probably won't see much of an improvement from this. But cutting edge demos absolutely will. And when this version is the "old browser running on old hardware", general purpose web apps will absolutely be able to take advantage of these improvements and do things that would simply be unheard of beforehand.

> One of the things you need to remember

I have been building and architecting very rich web applications before most caught onto this web thing. I was an original tester of XmlHttpRequest back when it was a safe-for-scripting ActiveX "abomination". I recently made a ridiculously full featured stop-motion recording studio -- in the browser -- for my children after they were enamored by something similar at the Science Centre.

And JavaScript performance has not been -- remotely -- an issue for a long, long time. The facets that make these apps powerful are technologies like WebGL, Canvas, SVG, CSS, and so on. Things that I wouldn't imagine doing more than simple orchestrations in code.

People use JavaScript performance on a completely artificial benchmark like SunSpider as a proxy for real-world browser performance. Yet it is at most a minor correlation, and has incredibly little to do with causation of a good experience.

If you think I'm joking or overstating, turn off the JIT in your browser and use it for a day. Is your experience devastated? Can you even tell the difference?

There's a big difference between "my use case" and "all use cases".

Sure, JavaScript is fast enough for a lot of things.

But it's also too slow for many things that reasonable people would like to do on the web.

See Mozilla's pdf.js: it makes perfect sense to have a PDF viewer written in JavaScript (and not a blob of insecure C++ code in the form of a plugin).

pdf.js would not be possible at all "long, long time ago", even though JS was adequate for your use cases at that time.

pdf.js is barely possible today - it works for simple documents, but its performance on more complex documents is still so much worse than what the C++ plugin gives that many people go back to plugins.

Performance is one of the most important competitive aspects of software. Given 2 products with similar features, people go for the faster, which is a big reason why Chrome was so successful.

We're still far away from making JS speed covering all the important use cases.

Until JavaScript can decode mp4 (at speeds not much slower than optimized C++), it is not fast enough.

> There's a big difference between "my use case" and "all use cases".

Yes. This is an appropriate response to a significant fraction of the complaints on this site and all other tech sites. I usually think of it as "your experience is not universal".

Given the direction of gaming and streaming, your mp4 example seems apt... I think being able to do a 1080p stream at 30-60fps in JS is an appropriate goal for both hardware and software. Canvas+JS is likely to be the "thin" client structure of the future... I think if it were possible to do this in JS, we wouldn't need ignorant plugin points for DRM.. encrypted streaming could "just work" on any platform, not that I want more DRM.

I like that we are finally at a point where legacy emulators in JS are a reality. I do think worker interaction deserves some attention, and perhaps locking, say, a canvas or audio channel to a worker should be possible.

> I think being able to do a 1080p stream at 30-60fps in JS is an appropriate goal

Why is that an appropriate goal? How about a video element -- which the browser implements in the ideal manner (which is usually a thin layer over hardware decoding) -- that handles h264? Why in the world would you want to do that in JavaScript, besides as a "because I can" challenge?

Further, your notion that DRM would "just work" is really unclear. I don't think you understand the concerns of the DRM folks if you think that you'll just do it in decrypt.js and all will be good.

I'd like to have a meet-up and invite all the dudes like you, and all of the dudes who complain that asm.js performance is still so slow that they don't see the point and will just keep writing "native".

You're wading into a discussion that is above you. Generally that isn't a good idea.

JavaScript proper cannot be a high-performance language because of fundamental design choices. Asm.js, which I've referenced multiple times, undoes many of those choices, effectively becoming a completely different language (which is why you target it with C/C++ and LLVM, not by writing it directly). The odds that there will be legitimate video stream processing or h264 decoders in JavaScript -- beyond impractical "if you ignore normal considerations" demos -- are zero. It makes no sense, and it is using the wrong tool, making everything into a nail because someone "knows JavaScript". This is doubly true on mobile, where you want to make every instruction count.

All of the examples in here are a riot. Further, gaming in HTML5 thus far has been "welcome to the 80s". It has gone nowhere. Nor has any other "pure JavaScript" high-computing adventure. nodejs, which I love, is wonderful asynchronous glue, but everyone knows that you dare not do any actual computation in it because...JavaScript.

> You're wading into a discussion that is above you. Generally that isn't a good idea.

Making assumptions about the knowledge of people you're responding to is cool though.

> Further gaming in HTML5 thus far has been "welcome to the 80s".

Oh, see I guess I was thinking Unreal 3 and Sauerbraten were released much more recently than that. Oops, silly me.

You're right, the browser should just only ever be for web pages, everything should just stay how it is now and all these damn kids should get off your lawn.

Oh, see I guess I was thinking Unreal 3

The Unreal 3 demo was an entirely different thing. Asm.js is a dramatic rethinking of JavaScript that tosses much of the language, and adds hacked typing metadata, to essentially act as a proxy for C. The single benefit that it brings over just going with C is that existing browsers can limpingly run it in crippled mode.

You're right

And then you boringly completely misrepresent my argument. You should tell the kids to get off your lawn, as you're busy trying to pretend everything is a nail because all you know is your hammer.

It's interesting you truncated my example which was pre-Asm.js but still managed to be playable.

They took a shortcut, sure. Historically JS performance hasn't supported that kind of application, and thus there are few modern or impressive 3d game engines written in it.

You were wondering why we want JS to be faster, there's your answer. I'm not sure what's left to explain.

You seem to be saying "Browsers shouldn't do X because they are too slow." "Barring X, browsers are plenty fast and don't need to be any faster." I'm not sure why you don't see that those cancel out. Also X seems to be a fairly vague set of applications, some of which have working demos.

Regarding "my" nails and hammers, I'm actually not a huge fan of much of the tech involved in modern browsers. I'm just not able to ignore what Mozilla and Google are accomplishing here nor the interest that it garners from developers as a cross-platform VM. Can we stop trying to insult me out of having a valid opinion?

If we want to make this personal - personally, I don't really care about any platform besides the ones I use, and those (deliberately) tend to be easy targets for portable C/C++.

That's apparently too much work though; people want (to at least believe they have) a single target. The browser is coming to be an option for that, whether anyone likes it or not.

JIT compilation doesn't kick in until a part of the code has already become hot, as determined by watching it run. That event handler that only gets run once in a while won't get JITted; it'll continue to be interpreted, because it's not a hot spot.
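To make the hot/cold distinction concrete, here's a minimal sketch. The function names and the idea of a fixed "threshold" are illustrative only; real engines use their own per-function heuristics:

```javascript
// Sketch: why per-function hotness matters. The busy numeric loop below
// would cross a typical JIT warm-up threshold and get compiled; the
// rarely-called handler would stay interpreted.
function sumSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i * i; // hot: runs n times per call
  return total;
}

function onRareClick(state) {
  return state.count + 1; // cold: runs a handful of times, never JITted
}

console.log(sumSquares(1000)); // 332833500
console.log(onRareClick({ count: 0 })); // 1
```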

When it comes to complex web apps, it makes a huge difference. Big data is coming to browsers in the form of massive typed arrays from canvas and, maybe soon, audio and video. We're talking about hot spots that get executed 10M+ times. These aren't your grandma's onclick handlers.

when it comes to complex web apps, it makes a huge difference.

Such as what? Give some examples. Further -- as a developer -- I cannot fathom "big data coming to browsers".

This is racing tires on street cars: Theoretically useful, but of absolutely zero relevance for the overwhelming majority of users.

Never has there been an actually practical benchmark of JavaScript engines on legitimate, real-world websites. Instead it's all nonsense like SunSpider. And don't misunderstand -- this actually can be a net negative for users, because what benefits a 10M-iteration loop can be of negative consequence to an occasionally run event handler.

I'm doing interactive image analysis in canvas. I have pixel iteration/algorithm loops that easily hit 10M iterations. My canvases are typically 1550x2006.

Mozilla also demoed JS decoding H.264 video in realtime, the utility of which is of course questionable...

The point is, though, that typed arrays are already a reality, and they are enormous. You can argue about how good an idea that was, but now that they're here, JITs are absolutely necessary.
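For concreteness, the kind of per-pixel loop being described has this shape. The canvas size matches the one mentioned above; the grayscale pass and luma weights are just an illustrative choice, not the GP's actual algorithm:

```javascript
// Sketch of a per-pixel hot loop over RGBA data, as you'd get from
// canvas getImageData(). At 1550x2006, the loop body runs ~3.1M times
// per pass -- exactly the kind of hot spot a JIT targets.
const width = 1550, height = 2006;
const pixels = new Uint8ClampedArray(width * height * 4); // RGBA

function toGrayscale(data) {
  for (let i = 0; i < data.length; i += 4) {
    // Standard Rec. 601 luma weights; alpha (i+3) is left untouched.
    const y = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    data[i] = data[i + 1] = data[i + 2] = y;
  }
}

toGrayscale(pixels);
```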

btw, here's the source code for Broadway.js, the JavaScript H.264 decoder:


Fun project, but how many users of your app are there? One, including you?

Such "high intensity" apps see incredibly limited marketplace success because the browser generally isn't the ideal platform for them, and the marginal gains from one JavaScript engine to the next (without a major reboot like asm.js) make them completely non-competitive with alternative platforms.

Talk about goalpost-moving. First you ask for apps that benefit from these kinds of optimizations. Then someone gives you an example and you say it isn't important because it probably doesn't have many users?

That's a catch-22. Apps like that can't have many users because the improvements they depend on aren't widely deployed. However, those improvements will never be widely deployed if people persuasively argue against their importance using evidence like a small user base...

Goalpost-moving? You mean in my OP where I asked whether it benefits the average user? Where I said that the majority of performance issues remain in the DOM?

I moved no goalpost. I am saying exactly what I always said. What the GP said that they are doing is almost certainly an ill-conceived project because JavaScript -- without a major change like asm.js -- simply cannot offer competitive performance as a facet of the language. Which is why we don't build a renderer in JavaScript, but instead layer it over WebGL, for instance.

Yes, it's an internal tool for several people at the moment. Your characterization of "real-world websites" is somewhat misguided, I think. JITs, by design, are targeted at compute-heavy applications built for the web platform, not "typical" websites. So to say they will have little effect on websites is probably accurate, but misplaced. Mozilla is building Firefox OS, where these JITs will play a critical role in smoothness of interaction, and spending fewer CPU cycles on mobile devices will almost certainly save battery life.

Photoshop, GIMP, or photo touch-up kinds of apps are for the masses. The GP's app is the first stage of such an app. Photo effects and photo editing involve processing all the pixels; 10M per pass is the norm. Make that interactive, updating in real time with multiple passes.

The next wave will be audio and video processing and editing. Applying effects like cleaning up the colors in every frame is very expensive.

I actually ran into an example not long ago. There's a long-running backend job whose status log I wanted to display, beyond just the progress bar. The log records are polled in via Ajax. When the log got pretty big, a couple hundred KB, Chrome crawled to its knees, freezing hard. I thought it was the DOM updates, but it was the JavaScript code. Firefox had no problem. I ended up having to restructure the feature to not show the whole log.
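For illustration, the "don't show the whole log" workaround can be as simple as capping the in-memory buffer. This is a sketch; `MAX_LINES`, the helper name, and the record format are all made up:

```javascript
// Sketch of the workaround described above: keep only the tail of the
// log instead of accumulating every polled record. The cap is arbitrary.
const MAX_LINES = 500;

function appendLog(buffer, newRecords) {
  buffer.push(...newRecords);
  // Drop the oldest entries once the cap is exceeded.
  if (buffer.length > MAX_LINES) {
    buffer.splice(0, buffer.length - MAX_LINES);
  }
  return buffer;
}
```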

I experienced the long-log problem in Chrome as well; I didn't know it was JS-related, though. I also assumed the DOM was culpable!

I commented out the DOM update portion of the code and left just the JavaScript code running. It still hung Chrome, so it's JavaScript-related, at least in Chrome's JavaScript engine.

Aren't people starting to use it for writing games now?

>practically does this really matter?

JavaScript is also a compiler target. Even smaller less heavy applications can benefit from these optimizations.

>up-front costs of JIT

You have to reach some threshold. The code is optimized as-needed.

... which finally adds support for a common bytecode format, usable as a compilation target for various languages.

Oh wait, that would actually be useful.

Better to just design yet another JIT compiler to speed up that language-design train wreck that is JavaScript.


Wrong thread; I think you meant to post a happy comment in one of the threads about asm.js, which actually could be a halfway decent common bytecode format. (It also happens to be valid JavaScript, and an ugly hack, but oh well.)

I had high hopes, but no, it isn't.

It would only let you build your own asm.js-based VM on top of the existing JavaScript VM, which could implement some bytecode format and interpret it.

I have seen something like that already. Let me tell you, you don't want to download 20 MB for some simple web-based Tetris-clone.

asm.js is useless for pretty much any modern language which relies on garbage collection.

I want less indirections, not more.

> I have seen something like that already. Let me tell you, you don't want to download 20 MB for some simple web Tetris-clone.


> asm.js is useless for pretty much any modern language which relies on garbage collection.

That is being worked on with the Binary Data spec: http://asmjs.org/faq.html

Again, these are C/C++ use-cases. That's exactly what asm.js was made for.

Apart from the potential of making some of Mozilla's insane ideas -- like implementing a PDF reader in JavaScript -- easier/faster, its applicability is pretty limited.

No one out there who is waiting for a sane language for the web will rejoice that they can now use C/C++ instead of JavaScript.
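For reference, this is roughly the shape of asm.js code (a hand-written sketch; real compiler output uses typed heap views and is far larger). It's valid plain JavaScript, so it runs in any engine; engines that recognize the directive can validate and compile it ahead of time:

```javascript
// Minimal asm.js-style module: integer-only arithmetic with explicit
// coercions (|0). Because it's ordinary JavaScript, it degrades
// gracefully in engines that don't special-case "use asm".
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;           // parameter type annotation: int
    b = b | 0;
    return (a + b) | 0;  // result coerced back to int
  }
  return { add: add };
}

const mod = MiniModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 40)); // 42
```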

> That is being worked on with the Binary Data spec: http://asmjs.org/faq.html

I'll believe it when I see it working. Most likely they will still find a way to make it next to useless, forcing people to keep writing/targeting JavaScript.

I'll believe it when I see it working. Most likely they will still find a way to make it next to useless, forcing people to keep writing/targeting JavaScript.

FYI, you're talking to them right now. Patrick Walton is a Moz employee.

Of course. I know his nick from the work on Rust.
