What evidence do you base this on -- your own experience, or that of others, or measurements such as those done by Tom's Hardware, or something else?
I'm not disputing your claim, I'd just like to know :)
Googling is not rocket science.
And these three links are only the newest ones; there are more from 2011.
This kind of testing produces results that favour Firefox, because Firefox does very well when you measure memory consumption in a browser session that hasn't been alive for long, but less well in longer-running sessions. This is partly due to its mostly-single-process architecture, which makes for smaller per-tab/site overhead, but fragmentation and leaks hurt more over time.
As a result, at this point in time I find reports from actual users more convincing than measurements done by tech sites. Firefox's memory consumption has improved a lot in the last 18 months, and I've heard lots of users say it uses less memory for them than it used to. I was hoping that antonios might have something similar to say.
You can even see the bugs that lead to memory regressions there.
Although AWSY is useful, it's also a single benchmark with a number of flaws and limitations, and should be viewed accordingly. In particular, it can miss big improvements such as https://bugzilla.mozilla.org/show_bug.cgi?id=695480 and https://bugzilla.mozilla.org/show_bug.cgi?id=689623.
But running the typed array/pixel manipulation test, 23a seems twice as fast as the beta!
Please feel free to confirm using http://jsperf.com/canvas-pixel-manipulation/6
Original post below, with erroneous benches removed:
Hmm, I have a lot of HTML canvas-based performance tests, so I plucked two of them to give a small comparison a go. Comparing the results of these simple exercises doesn't seem too promising.
The first test I used was a simple one that merely sets every canvas property (some can be time consuming): http://jsperf.com/can-attribs
And the second one tests different ways of filling single pixels: http://jsperf.com/filling-pixels
From a practical standpoint canvas performance matters very much to me, but my tests probably aren't the best metric. Are there better tests that could be used to see the difference between the old and new compilers here?
edit: on a test comparing typed arrays to plain, performance seemed nearly (merely?) identical: http://jsperf.com/canvas-pixel-manipulation/6
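For anyone curious what that comparison is measuring, here's a rough sketch of the two approaches (my own illustration, not the actual jsperf code; the canvas size and loop shape are made up):

    // Sketch: filling pixels via the canvas's Uint8ClampedArray (typed)
    // versus staging them in a plain JS array first.
    var canvas = document.createElement('canvas');
    canvas.width = 320;
    canvas.height = 240;
    var ctx = canvas.getContext('2d');
    var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

    // Typed-array variant: write straight into ImageData's backing store.
    function fillTyped(data) {
      for (var i = 0; i < data.length; i += 4) {
        data[i] = 255;      // R
        data[i + 1] = 0;    // G
        data[i + 2] = 0;    // B
        data[i + 3] = 255;  // A
      }
    }

    // Plain-array variant: build an ordinary Array, then copy it over.
    function fillPlain(data) {
      var plain = new Array(data.length);
      for (var i = 0; i < plain.length; i += 4) {
        plain[i] = 255;
        plain[i + 1] = 0;
        plain[i + 2] = 0;
        plain[i + 3] = 255;
      }
      for (var j = 0; j < data.length; j++) {
        data[j] = plain[j];
      }
    }

    fillTyped(imageData.data);
    fillPlain(imageData.data);
    ctx.putImageData(imageData, 0, 0);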
Thanks for bringing it up, though; that should be looked into. Have you submitted this regression to bugzilla.mozilla.org? If not, I'll file one so it gets investigated.
Just for kicks I ran can-attribs on a pre-Baseline and post-Baseline build of firefox nightly on my macbook, and the scores received a modest (about 2-3% average) bump from the landing.
I made edits to my comment, including (thankfully) the conclusion.
Is the process repeated every time I open a website, or do browsers cache the generated bytecode? What about popular libraries? Will jQuery get JITed differently for every website that has it?
The jitcode itself isn't that big of a memory issue, though. The collected type info is more heavyweight, but that's really hard to share across different pages. There are security concerns, concurrency concerns, and just complexity.
No. The jitcode is specialized based on actual observed types, so depending on how you call the methods you get different jitcode.
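A contrived sketch of what that means in practice (illustrative only, not actual engine internals):

    // The engine records the argument types a function actually sees and
    // specializes the generated jitcode accordingly.
    function add(a, b) {
      return a + b;
    }

    // A page that only ever passes doubles gets jitcode that assumes
    // double addition...
    for (var i = 0; i < 100000; i++) {
      add(1.5, 2.5);
    }

    // ...while a call with strings needs different code, so the specialized
    // version has to bail out and a different path gets compiled.
    add("foo", "bar");

So even if the jQuery source is byte-for-byte identical on every site, the type information gathered on each site can differ, and with it the compiled code.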
I wonder if we'll look at how the browser performs in say 5 years from now with the sense that 2013 was basically the dark ages. Exciting times...
These Herculean efforts are impressive, but perhaps it's time to fix the language, even if that breaks backwards compatibility.
We don't have any real-world comparisons to other languages where an equal amount of brainpower was spent on optimization so that we could see how the language itself affects things.
Julia (http://julialang.org/) is another dynamic language designed from the start for high performance, although the objectives are slightly different from those of Lua. While LuaJIT is fast at everything, Julia is really optimized so that running the same code many times is fast. Running a function the first time can be much slower than in other dynamic languages, I think because there is no baseline JIT or interpreter.
* LuaJIT is entirely the work of one person.
Lua is well designed and values simplicity, which has lent itself well to Pall's success.
I really wish that the effort Mozilla is putting into Rust could be combined with the effort Google is putting into Dart, to design a new, from-the-ground-up language for the Web that combines a lot of the benefits of JS but with more predictable performance.
It's awesome that the JITs are getting better, but they don't really solve all the warts of JS.
The same overselling that happened with Java is happening again: don't worry, a magical JIT is right around the corner that will come within spitting distance of native C. This time it'll be different. When TraceMonkey was announced, there was a lot of excitement about how tracing JITs were going to rock C-level performance.
IMHO, sooner or later we're going to have to admit that a language designed in 10 days is stretched to its limit, and as amazing as the JITs and hacks like asm.js are, eventually we'll need to get together and design the next generation.
For many purposes the JVM achieved the goals you mention. Humans have an infinite capacity for hand-made optimizations, which is why it's pretty hard for a virtual machine or a compiler to beat hand-written assembly by a developer who knows his shit, but it takes a herculean effort to build big, long-running, highly concurrent apps in C/C++, and it takes companies with the resources of Google or Mozilla to do it.
For instance, Mozilla is still struggling with memory leaks in Firefox. Why is that? Because memory gets fragmented due to improper allocation patterns, not to mention hard-to-prevent memory leaks due to cyclic references. And to get around that without a generational garbage collector, you have to use object pools and manage allocations to a really fine level of detail. Or you have to make your app use multiple processes and simply not care much about it, like Google did in Chrome with its one-process-per-tab model, which is why Chrome chokes when multiple long-running tabs are open.
With a precise generational garbage collector, for instance, problems of fragmentation and memory leaks due to cyclic references simply go away. People complain about the latency of the JVM's CMS garbage collector, but after seeing it in action in a web app that is able to serve 10,000 requests per second per server in under 10 ms per request, I'm actually quite impressed. It gets problematic when you've got a huge heap, though, because from time to time CMS still has to do stop-the-world sweeps, and with big heap sizes the process can block for entire seconds. However, CMS is the older generation of collectors, and there's also the new G1 from JDK 7 that should be fully non-blocking when it matures; if you need a solution that works right now, you can shell out the cash for Azul's pauseless garbage collector.
It's really hard to build a precise generational garbage collector on top of a language that allows manual memory allocation. Mono's new garbage collector, for instance, is not precise for stack-allocated values. Go's garbage collector is a simple parallel mark-and-sweep that's conservative, non-precise and non-generational. The most common complaint you'll hear about Go from people who have actually used it is about its garbage collector, a problem that simply will not go away because Go is too low-level.
And I'm really happy about Mozilla improving Firefox. Firefox is my browser, but how many years did it take for them to solve the memory issues that Firefox had?
Yes, you probably couldn't build a reasonably efficient browser on top of the JVM right now, especially since browsers also have to run on top of devices that are less efficient, but most developers can't build browsers anyway. And in a couple of years from now, mark my words, security will be considered much more important than performance and suddenly the usage of languages in which buffer overflows are a fact of life will be unacceptable.
Also, in regard to big iron, I chose Cassandra (a Java app) instead of MongoDB (a C++ app that's the darling of the NoSQL crowd). I did that because Cassandra scales better horizontally and because performance degrades less on massive inserts. Apparently low-level optimizations can't beat architectures better tuned to the problems you're having; go figure.
For a language that "doesn't lend itself well to optimizing run-time performance", it did far better than Python, Ruby, Perl, and most other dynamic languages...
If you look at the link to the followup post, it does show that in certain use cases NodeJS does a lot better... though without any code to review/reproduce it's hard to say.
I happen to like JS... Python's probably next on my list of languages to learn, but right now I'm so deep in getting more proficient with NodeJS + Grunt + RequireJS, it isn't funny... Our next-gen stack is much more NodeJS and MongoDB, as a few tests and backend processes have shown them to work very well together...
We have a newer site on ASP.Net MVC 4 (started as 3, with EF), and an aging site that's nearly unmaintainable, built on layers of .Net cruft since 2006... So I'm trying to structure things moving forward so that they will be as maintainable as possible for the future. Which means some new, and some bleeding-edge, stuff.
It also means some things I just don't care as much for... I actually like how OneJS/Browserify follow CommonJS/NodeJS patterns more than I like AMD (RequireJS), but AMD seems better for the client side... I also don't care much for Jade, but it was a group decision, and we're going that direction to share templates for email/client/server usage.
Still working out sharing Backbone models, etc... it's all work. Sorry for blathering on.
If I were doing desktop development, I'd be far more inclined towards Python today. As it stands, imho JS is a better fit for web development.
Then, most of this race towards hotspot optimization would be less important.
The web is a wild place, let's not increase our attack surface.
Once a function has run many times, the compiler can use the gathered statistics (such as knowing that the arguments were always floats) to produce better optimised machine code (such as using float-specific ops, and eliminating code paths that expect arguments other than floats).
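A made-up illustration of the idea (not engine code; the function and the iteration count are invented):

    // After many calls in which both arguments were doubles, the optimizing
    // JIT can assume doubles, use float ops directly, and drop the branch
    // below (keeping only a cheap guard that falls back if the assumption
    // ever breaks).
    function lerp(a, b, t) {
      if (typeof a !== "number" || typeof b !== "number") {
        throw new TypeError("expected numbers");
      }
      return a + (b - a) * t;
    }

    var x = 0;
    for (var i = 0; i < 100000; i++) {  // enough runs to cross the warm-up threshold
      x = lerp(x, 1.0, 0.25);
    }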
This is a common tradeoff for all JITs: should I wait longer to produce better code, or compile sooner so the faster code starts running earlier? You can play with this number in most Java JVMs (OpenJDK or Sun/Oracle) using the argument "-XX:CompileThreshold". This sets how many times a function should run before it is compiled. It defaults to about 1500 for a client VM and 10000 for a server VM.
As mentioned in a sibling comment, asm.js is a partial alternative to this tradeoff, as your asm.js code specifies all data types ahead of time, and hence they can be compiled up-front fairly well.
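For illustration, a tiny hand-written asm.js module looks roughly like this (my own toy example, not from the spec); the |0 and unary + coercions are the "types specified ahead of time":

    function AsmModule(stdlib) {
      "use asm";
      var imul = stdlib.Math.imul;  // integer multiply goes through imul

      function square(x) {
        x = x | 0;              // parameter declared as a 32-bit int
        return imul(x, x) | 0;  // int return
      }

      function scale(y) {
        y = +y;                 // parameter declared as a double
        return +(y * 2.5);      // double return
      }

      return { square: square, scale: scale };
    }

    var m = AsmModule(window);
    m.square(7);   // 49
    m.scale(2.0);  // 5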
Obviously not quite what you're asking for, though.
But even that pales in comparison to the biggest reason, which is security. In order to run fast, the compiled version needs to be "trusted" to do the right thing; by allowing the server to send pre-compiled code, you're opening up a huge number of attack vectors.
Also, the new JIT can be used in backend systems similar to Node, where any performance bump helps.
In a previous life I worked in bioinformatics, and worked on a web-application to allow scientists to invoke and visualize the results of analyses on their tissue samples. Web 2.0 was all the rage back in those days, and V8's optimized JS engine was announced AFTER we developed most of our app. If we had been able to assume that kind of performance at the beginning, our app would have pushed much more of the UI to the frontend. Instead of clunky static server-generated images, we would have built beautiful, dynamic client-side visualization engines.
The last few years of progress in JS execution have opened the web up as a platform target for a large range of apps. And every inch we push that forward opens the door for a few more potential applications to reasonably be moved to the web. That's what it means to me, anyway. Push the line forward a few inches, and it means a few more features that somebody wasn't able to implement before but can now. A few more apps that can target the web.
That means more to some people and less to others. But I can say with certainty that it would have meant a lot to my prior bioinformatician self. And I'm sure there are developers like me, out there, who look at what JS can do, increasing day by day, and ask themselves "now what can I build with that?".
I think it's a good thing to make the answer to that question as open ended as possible :)
One of the things you need to remember is that in general, web app developers need to target yesterday's browser on yesterday's hardware. So they tend to be fairly conservative about the performance requirements. But as browsers and hardware improve, the benefits trickle down until suddenly it's possible to do something that you never could do before. So sure, the average web app, which is designed to run on old versions of IE on outdated hardware, probably won't see much of an improvement from this. But cutting-edge demos absolutely will. And when this version is the "old browser running on old hardware", general-purpose web apps will absolutely be able to take advantage of these improvements and do things that would simply have been unheard of beforehand.
I have been building and architecting very rich web applications before most caught onto this web thing. I was an original tester of XmlHttpRequest back when it was a safe-for-scripting ActiveX "abomination". I recently made a ridiculously full featured stop-motion recording studio -- in the browser -- for my children after they were enamored by something similar at the Science Centre.
If you think I'm joking or overstating, turn off the JIT in your browser and use it for a day. Is your experience devastated? Can you even tell the difference?
But it's also too slow for many things that reasonable people would like to do on the web.
pdf.js would not have been possible at all "a long, long time ago", even though JS was adequate for your use cases at that time.
pdf.js is barely possible today: it works for simple documents, but its performance on more complex documents is still so much worse than what the C++ plugin gives that many people go back to plugins.
Performance is one of the most important competitive aspects of software. Given two products with similar features, people go for the faster one, which is a big reason why Chrome was so successful.
We're still a long way from JS being fast enough to cover all the important use cases.
Yes. This is an appropriate response to a significant fraction of the complaints on this site and all other tech sites. I usually think of it as "your experience is not universal".
I like that we are finally at a point where legacy emulators in JS are a reality. I do think worker interaction deserves some attention, and perhaps locking, say, a canvas or an audio channel to a worker should be something you're able to do.
Further your notion that DRM would "just work" is really unclear. I don't think you understand the concerns of the DRM folks if you think that you'll just do it in decrypt.js and all will be good.
Making assumptions about the knowledge of people you're responding to is cool though.
> Further gaming in HTML5 thus far has been "welcome to the 80s".
Oh, see I guess I was thinking Unreal 3 and Sauerbraten were released much more recently than that. Oops, silly me.
You're right, the browser should just only ever be for web pages, everything should just stay how it is now and all these damn kids should get off your lawn.
And then, boringly, you completely misrepresent my argument. You should tell the kids to get off your lawn, as you're busy trying to pretend everything is a nail because all you know is your hammer.
They took a shortcut, sure. Historically JS performance hasn't supported that kind of application, and thus there are few modern or impressive 3d game engines written in it.
You were wondering why we want JS to be faster, there's your answer. I'm not sure what's left to explain.
You seem to be saying "Browsers shouldn't do X because they are too slow." "Barring X, browsers are plenty fast and don't need to be any faster." I'm not sure why you don't see that those cancel out. Also X seems to be a fairly vague set of applications, some of which have working demos.
Regarding "my" nails and hammers, I'm actually not a huge fan of much of the tech involved in modern browsers. I'm just not able to ignore what Mozilla and Google are accomplishing here nor the interest that it garners from developers as a cross-platform VM. Can we stop trying to insult me out of having a valid opinion?
If we want to make this personal: personally, I don't really care about any platform besides the ones I use, and those (deliberately) tend to be easy targets for portable C/C++.
That's apparently too much work though; people want (to at least believe they have) a single target. The browser is coming to be an option for that, whether anyone likes it or not.
Such as what? Give some examples. Further -- as a developer -- I cannot fathom "big data coming to browsers".
This is racing tires on street cars: Theoretically useful, but of absolutely zero relevance for the overwhelming majority of users.
Mozilla also demoed JS decoding H.264 video in real time, the utility of which is of course questionable...
The point is, though, that typed arrays are already a reality, and they are enormous. You can argue about how good an idea that was, but now that they're here, JITs are absolutely necessary.
That's a catch-22. Apps like that can't have many users because the improvements they depend on aren't widely deployed. However, those improvements will never be widely deployed if people persuasively argue against their importance using evidence like a small user base...
The next wave will be audio and video processing and editing. Applying effects like cleaning up the colors in every frame is very expensive.
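To give a sense of the cost (a made-up example, not any particular editor's code):

    // Hypothetical per-frame touch-up: a simple brightness/contrast
    // adjustment over every pixel of a video frame drawn to a canvas.
    function adjustFrame(imageData, brightness, contrast) {
      var data = imageData.data;  // Uint8ClampedArray, RGBA, 4 bytes per pixel
      for (var i = 0; i < data.length; i += 4) {
        for (var c = 0; c < 3; c++) {  // leave alpha (data[i + 3]) alone
          data[i + c] = (data[i + c] - 128) * contrast + 128 + brightness;
        }
      }
      return imageData;
    }

A 1080p frame is about 2 million pixels, so at 30 fps that inner loop body runs on the order of 180 million times per second, which is why JIT quality matters so much for this kind of work.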
>up-front costs of JIT
You have to reach some threshold. The code is optimized as-needed.
Oh wait, that would actually be useful.
Better just design yet another JIT compiler to speed up that language-design train wreck that is JavaScript.
I have seen something like that already. Let me tell you, you don't want to download 20 MB for some simple web-based Tetris-clone.
asm.js is useless for pretty much any modern language which relies on garbage collection.
I want fewer indirections, not more.
> asm.js is useless for pretty much any modern language which relies on garbage collection.
That is being worked on with the Binary Data spec: http://asmjs.org/faq.html
> That is being worked on with the Binary Data spec: http://asmjs.org/faq.html
FYI, you're talking to them right now. Patrick Walton is a Moz employee.