In most web pages, the majority of processing time is consumed by issues unrelated (though not disconnected) to JavaScript performance.
This is my bête noire. The app I work on, though very JS-intensive, is nevertheless performance-bound by these other constraints, and it has actually gotten slower under some recent browser releases (Firefox 3). Not just a little slower, either: enough slower to be borderline unusable. That's ironic when everybody is slapping each other on the back about their enormous performance advancements. "Of course it's faster. Look at these benchmarks!" It reminds me of my old professor's anecdote. She was visiting her sister and said, "Oh, I'm cold." The sister went to look at the thermostat and said, "No you're not, it's 20° in here!" (70°F)
That being said, WebKit kicks Gecko's ass right now (at least for this particular app). It renders and scrolls many times faster and in fact is just shockingly good in Safari 3 (not quite as great in Chrome).
Edit: This comment sounds a bit grumpier than I intended. The main thing is that browser innovation is back with a vengeance. Competition should sort all this out in the long run.
Indeed, DOM performance is often the real bottleneck.
In Cappuccino we go to great lengths to avoid touching the DOM wherever possible. Views (which wrap DOM elements) keep track of their own position and size, so we never have to ask the DOM. We defer DOM manipulations to the end of each "run loop", where we coalesce them and remove redundant operations.
The overhead of this bookkeeping (which shrinks as JS engines get faster) is less than the overhead of the extra DOM operations it avoids, so it's worth it.
We're already seeing other frameworks adopt some of these practices.
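Roughly, the idea looks like this (a simplified sketch with made-up names, not our actual implementation):

    // Buffer style writes keyed by element + property, so a later write
    // replaces an earlier one instead of touching the DOM twice.
    var pendingWrites = {};
    var flushScheduled = false;
    var nextId = 0;

    function queueWrite(el, prop, value) {
        if (!el.__writeId) el.__writeId = ++nextId;
        pendingWrites[el.__writeId + ":" + prop] = { el: el, prop: prop, value: value };
        if (!flushScheduled) {
            flushScheduled = true;
            setTimeout(flushWrites, 0); // flush at the end of the current "run loop"
        }
    }

    function flushWrites() {
        for (var key in pendingWrites) {
            var w = pendingWrites[key];
            w.el.style[w.prop] = w.value; // one DOM touch per element/property
        }
        pendingWrites = {};
        flushScheduled = false;
    }

    // Only the second write actually reaches the DOM:
    var someDiv = document.getElementById("someDiv"); // hypothetical absolutely-positioned element
    queueWrite(someDiv, "left", "10px");
    queueWrite(someDiv, "left", "200px");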
I'm not the expert on this among us, but from what I understand there are situations where changing one thing causes something else to change (e.g. one style change invalidating layout elsewhere), and so forth. The specific operations we coalesce are appending nodes, removing nodes, setting the position, and setting the size.
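For example (a sketch, not framework code; "cells" is just a stand-in collection of elements):

    // Slow: each layout read after a style write can force a synchronous reflow.
    var cells = document.getElementsByTagName("div"); // stand-in for the real elements
    for (var i = 0; i < cells.length; i++) {
        cells[i].style.width = "80px";
        var h = cells[i].offsetHeight;   // read forces layout
        cells[i].style.height = h + "px";
    }

    // Faster: batch all the reads, then all the writes, so layout is computed once
    // (assumes the width change doesn't alter the heights being read).
    var heights = [];
    for (var j = 0; j < cells.length; j++)
        heights[j] = cells[j].offsetHeight;
    for (var k = 0; k < cells.length; k++) {
        cells[k].style.width = "80px";
        cells[k].style.height = heights[k] + "px";
    }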
Are you changing the position of the divs multiple times per "event" (i.e. mouse/keyboard event, etc) or just very frequently on separate events?
No idea how well this would work, but maybe you could have a timeout fire N times per second (5, 10, 15, 30?) and update the DOM positions then? Though with thousands of divs I don't know if that will help.
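Something like this, maybe (rough sketch; setPosition and the 15-per-second rate are purely illustrative):

    // Accumulate desired positions in JS; push them to the DOM on a timer.
    var desired = {};
    var nextKey = 0;

    function setPosition(el, left, top) {
        if (!el.__posKey) el.__posKey = ++nextKey;
        desired[el.__posKey] = { el: el, left: left, top: top }; // later calls overwrite earlier ones
    }

    setInterval(function () {
        for (var key in desired) {
            var d = desired[key];
            d.el.style.left = d.left + "px";
            d.el.style.top = d.top + "px";
        }
        desired = {};
    }, 1000 / 15); // roughly 15 DOM updates per second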
What are all these divs? Are they simple enough they could be drawn with Canvas?
"... Are you changing the position of the divs multiple times per "event" (i.e. mouse/keyboard event, etc) or just very frequently on separate events? ..."
Very frequently on separate events.
"... What are all these divs? ..."
The cells of a spreadsheet. (The pos/size resets happen as the user scrolls or does something resizey. Oh, and the content of the divs changes under these conditions too.)
"... Are they simple enough they could be drawn with Canvas? ..."
They're about as simple as you could get (at least before you add in formatting and all that). I never considered Canvas as an option - do you recommend trying it?
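For reference, it looks like a toy version would be only a few lines (everything below, from the element id to the cell sizes, is hypothetical):

    // Paint cell borders and text directly instead of positioning a div per cell.
    var canvas = document.getElementById("grid"); // assumed <canvas> element on the page
    var ctx = canvas.getContext("2d");
    var CELL_W = 80, CELL_H = 20;

    function drawCells(rows) { // rows: array of arrays of strings
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        ctx.font = "12px sans-serif";
        for (var r = 0; r < rows.length; r++) {
            for (var c = 0; c < rows[r].length; c++) {
                ctx.strokeRect(c * CELL_W, r * CELL_H, CELL_W, CELL_H);
                ctx.fillText(rows[r][c], c * CELL_W + 4, r * CELL_H + 14);
            }
        }
    }

    drawCells([["A1", "B1"], ["A2", "B2"]]);

Scrolling would then mean redrawing just the visible window, though you'd give up native text selection and the usual DOM conveniences.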
We might not need to do anything, since the FF3 deterioration is already improving with the next release of FF (Shiretoko), though it's still not back to FF2 levels. On the other hand, performance of this particular UI is critical to its user acceptance, and if there's something we can do to get a significant improvement I'd probably go for it.
"... That being said, WebKit kicks Gecko's ass right now (at least for this particular app). It renders and scrolls many times faster and in fact is just shockingly good in Safari 3 (not quite as great in Chrome). ..."
Where is the slowdown? Is it the DOM? Is this with custom code or libraries? I'm just curious, as I've been re-listening to/watching Resig's "Best Practices in JavaScript Library Design": http://www.youtube.com/watch?v=0LKDImgRfrg
The slowdown is in the DOM. It's pretty simple, as far as I can tell: setting the size and position of (absolute) divs, and setting their text content.
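A quick-and-dirty way to see the cost in any given browser (illustrative only; a harness like Dromaeo gives more reliable numbers):

    var div = document.createElement("div");
    div.style.position = "absolute";
    document.body.appendChild(div);

    var start = new Date().getTime();
    for (var i = 0; i < 10000; i++) {
        div.style.left = (i % 500) + "px";
        div.style.top = (i % 300) + "px";
        var force = div.offsetWidth; // reading a layout property forces the work to happen now
    }
    alert("10000 position sets: " + (new Date().getTime() - start) + "ms");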
I couldn't agree with you more. Competition will sort all this ugliness out in the medium to long term. Today's web applications are so complex, with so many moving pieces, that bounding performance to any reasonable range is becoming more and more difficult.
I'm pretty sure Amazon's Dynamo paper has been mentioned here on more than one occasion. It doesn't speak to JavaScript performance specifically, but it does attack the problem of bounded performance in their space, and the approach can be generalized. The paper talks at length about specific issues in their problem space but takes an overarching approach to isolating performance bottlenecks and ensuring performance at a finely granular level.
I think the way the Dynamo paper attacks the larger problem, breaking it down into its constituent parts and then ensuring bounds on those subsections, has value in this discussion and beyond. No doubt a great way to ensure overall performance is to strongly decouple the constituent parts and enforce tight performance bounds on the parts you have the greatest control over. In total, you will be able to predict with more confidence a performance range you can feel comfortable with.
I'm surprised to hear that. My apps all use relatively simple JS, so I have no experience one way or the other personally. But I'm curious: what are these "other constraints"?
Rendering and DOM. For example, setting the position and size of a div.
The difference is dramatic: WebKit is probably an order of magnitude faster than Gecko right now. And as far as I can tell, JS has nothing to do with it. (In fact, the most intensive JS code we have runs slower in V8 than in IE7, which is bizarre, but it doesn't matter because JS isn't the bottleneck.)
Well, the real difference between browsers that I care about is how well they run my app! However, I'm not sure I understand your point. How would those test suites help?
"... The slowdown is in the DOM ... setting the size and position of (absolute) divs, and setting their text content ..."
Run the tests (setting size and position of divs, setting text content) in the testing app and check the numbers returned to see how each DOM operation affects different browsers. The test app covers DOM selection, modification, and attributes, for example. You can run the tests against various browsers (Firefox 2+, Safari 3+, Opera 9+, and Internet Explorer 6+). You could use this to see which browsers are slow for particular DOM operations ~ https://wiki.mozilla.org/Dromaeo#Running_the_Tests and possibly isolate where.
Of course, I didn't ask whether you're doing this in raw JavaScript or using a library. Are you using jQuery, Prototype, or Dojo?