Although it is necessary in Java, it is entirely pointless to
specify the length of an array ahead of time in JavaScript. [...]
Rather, you can just set up an empty array and allow it to grow as
you fill it in. Not only is the code shorter, but it runs faster too.
Faster? That ought to raise suspicion. JS's dynamic hash-arrays are neat, but now they're supposed to be immune from the laws that govern memory allocation in any other language?
As it happens, I had occasion to test this a few months ago.
function preallocate(len) {
  var arr = new Array(len);
  for (var n = 0; n < len; n += 1) {
    arr[n] = n;
  }
  return arr;
}

function noPreallocate(len) {
  var arr = [];
  for (var n = 0; n < len; n += 1) {
    arr[n] = n;
  }
  return arr;
}
On my machine, noPreallocate is 4% faster in FF, but it's 15% slower in IE8 and a whopping 70% slower in Chrome.
I'm pretty sure that in modern browsers new Array(length) doesn't allocate anything; it just sets the length property. The results I'm seeing actually agree with Google here.
Perhaps you were seeing GC events slowing down the test?
The other thing about for (var i = 0; i < arr.length; i++) is that it can end up as an infinite loop if you're modifying the length inside the for loop:
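For instance (a contrived sketch of my own, not code from any library):

var arr = [1, 2, 3];
for (var i = 0; i < arr.length; i++) {
  // arr.length is re-read on every pass, and push() keeps growing it,
  // so i < arr.length never becomes false and the loop never terminates.
  arr.push(arr[i] * 2);
}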
Agreed. The author is detailing a lot of arcane specifics of JS that are probably valid only in a very small subset of browsers. Let's see what he writes:
"Perhaps the most important thing these libraries do is make sophisticated vector graphics possible in Internet Explorer, where JavaScript performance is relatively poor."
Well, yes. Sure, suit yourself, but most modern browsers come with something called a JS profiler/optimizer and a JIT compiler (FF's new JS engine and Chrome certainly do; others probably too).
Not to mention that Closure comes with its own code optimizer that handles this for the edge cases.
For sanity's sake, please don't start preaching that we should do the compiler's work and bend the code in that direction. Humans are bad at that, and it makes the code unmaintainable.
There are some valid points in the article though, I admit: mostly pitfalls that can screw things up when multiple JS frameworks are running on the same page (namespace issues).
My feeling is that even the compilers written in CS101 will optimize this. I'm guessing that Google tested their code with V8, performance was fine, and they thought nothing of it.
I just did a benchmark with node.js. I made a 50,000,000-element array and timed how long each way took.
Trial one:
for( var i = 0; i < array.length; i++ ) { array[i]++ }
That took, on average, 0.93001866 seconds.
Trial two (with the length cached in len before the loop):
for( var i = 0; i < len; i++ ) { array[i]++ }
That took, on average, 0.809920 seconds.
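Roughly, the harness looked like this (a simplified sketch rather than the exact script):

// Rough sketch of the node.js benchmark harness.
var SIZE = 50000000;
var array = new Array(SIZE);
for (var i = 0; i < SIZE; i++) { array[i] = 0; }

function time(label, fn) {
  var start = Date.now();
  fn();
  console.log(label + ': ' + (Date.now() - start) / 1000 + ' seconds');
}

time('trial one (array.length in the condition)', function () {
  for (var i = 0; i < array.length; i++) { array[i]++ }
});

time('trial two (length hoisted into len)', function () {
  var len = array.length;
  for (var i = 0; i < len; i++) { array[i]++ }
});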
A lot of stressing-out over what ends up being a rounding error.
> My feeling is that even the compilers written in CS101 will optimize this.
It might be harder than you think. A compiler can't know that array.length won't change during the course of the loop. Even in your simple example, I would say it takes a fair bit of analysis (for a compiler, that is; a human can see it at a glance) to be certain that "array[i]++" won't add elements to the array.
> My feeling is that even the compilers written in CS101 will optimize this
Closure comes with a compiler/optimizer/minifier that will do all sorts of optimizations to your code. I am not sure if this is one of them, but I would not be surprised if the library code is optimized for readability and they let the compiler do its tricks to optimize it for speed.
My guess is that this makes no difference in real life. Should you write clean code that performs well? Yes. But should you be fixated on a tiny bug in Google's library? Nope. Send a patch, get 0.0000000001 seconds per element back, and move on.
It's not about a single 50-million-element array. It's about sub-optimal code running in a bunch of places, and it adds up. But in any case this probably won't be the bottleneck.
Still, I am a fan of running the most optimal code possible on the server. There's absolutely no reason not to.
Client-side JS is different. Oftentimes algorithmic optimizations have no impact (unless we are talking about animation).
I would not trust people who do not respect optimizations like these to run code on my server.
Time in web applications is not spent looking up array lengths; it's spent in IO, layout, and DOM manipulation. If iterating through arrays were ever found to be a noticeable issue in practice, the Closure compiler could just be modified to emit more efficient code. That's one of the advantages of having the compiler: you don't have to make a convenience/readability trade-off.
Closure was not thrown together by novices new to the language. It was started by Erik Arvidsson and Dan Pupius, two JS hackers that have been doing this kind of work longer than just about anyone else. Its differences from other libraries aren't the result of ignorance, they're mostly the result of conscious tradeoffs to make compilation more effective.
> Time in web applications is not used looking up array lengths - it's used in IO, layout, and DOM manipulation.
Unless you're implementing a cross-browser stable sorting algorithm for manipulating tables 50,000 rows long or longer (that was exactly my most recent project). Don't say "it should be done on the server", because that statement is true only as long as everybody keeps writing bad JavaScript.
> "...was that people would switch from truly excellent JavaScript libraries like jQuery to Closure on the strength of the Google name."
This is ridiculous. Doesn't the mere fact that jQuery keeps announcing 4000% speedups with every new release tell you something about the efficiency of jQuery?
Unbelievably biased. If you looked at the jQuery code you'd find the same sort of things, and some far worse.
From jquery release notes:
... coming in almost 30x faster than our previous solution
... coming in about 49% faster than our previous engine
... much, much faster (about 6x faster overall)
... Seeing an almost 3x jump in performance
... improved the performance of jQuery about 2x compared
to jQuery 1.4.1 and about 3x compared to jQuery 1.3.2
... Event Handling is 103% Faster
... jQuery.map() method is now 866% faster
... .css() is 25% faster
Maybe it's just me, but when someone says they've sped up their code so it runs 30 times as fast, you have to really wonder just how badly it was written to start with, and how badly it's still written.
These improvements have occurred over time, as browsers gain new features and new techniques are discovered. They (the jQuery contributors) focus on the features and optimize what can be optimized when there is a need.
The optimized solution is often much uglier than the simple but less efficient one.
If you're building a large JavaScript application, Closure might be your best option, given that the Closure Compiler (in ADVANCED mode) produces small obfuscated output files that contain only the functions your program uses. ADVANCED mode restricts how you write your JavaScript (but not onerously), but that's where the Closure Library comes in: a 1-million-LOC "standard library" already annotated for the Compiler.
I've found working with Closure Library/Compiler enjoyable, typically more than Python, because the Compiler's type system finds plenty of bugs as I work. It has even caught bugs in my Python code (after I ported it to JavaScript, of course).
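For anyone who hasn't seen it, the annotations are just JSDoc comments that the Compiler type-checks. A tiny illustrative sketch (my own toy example, not library code):

goog.provide('myapp.math');

/**
 * Returns the average of a list of numbers.
 * @param {!Array.<number>} values The numbers to average.
 * @return {number}
 */
myapp.math.average = function(values) {
  var sum = 0;
  for (var i = 0; i < values.length; i++) {
    sum += values[i];
  }
  return sum / values.length;
};

// The Compiler would flag a call like myapp.math.average('oops')
// as a type error instead of letting it run.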
Closure is one of the most intuitive libraries I have used, ever.
I use Closure for everything that is too big for jQuery.
Compared to its next best competitor, YUI, it's a joy (e.g. the first really good cross-browser rich-text editor).
I have not found many features that aren't already included in the library.
Code scales easily and is fast enough, especially on the production system, where, thanks to the Closure compiler, you can ship a compiled version (I also prefer its compiler over YUI's).
Have I told you about the excellent testing framework...
Have I told you about the excellent documentation...
Have I told you about its very readable code...
When it was released and I had read some of its code, I knew I wanted to use it at work as soon as possible. But this very blog post had a super high Google rank for the query "Google Closure".
If you, too, run into the problem of your co-workers reading that post, just link to the HN comments. Worked for me. Here is the older HN link: http://news.ycombinator.com/item?id=937175
Reasons you might consider using Closure instead of something like jQuery, plain-old-js:
1. Your javascript file is getting huge and you want to break things out into manageable pieces.
2. You find yourself needing namespaces that are easy to implement.
3. You want to learn how to build structured JavaScript (Closure is great at encouraging well documented, "object-oriented" coding; see the sketch after this list)
4. You've got too many js files (2+) and you want to only have one in production for faster page loading (use the Closure Compiler)
5. You're building an application with a team of developers; closure helps create modular, well documented code
6. You want to build a snappy, client-side heavy application
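To make points 1 through 3 concrete, a file in a Closure-style project looks roughly like this (my own toy example, not library code):

// widgets/greeter.js
goog.provide('myapp.widgets.Greeter');

goog.require('goog.dom');

/**
 * A trivial widget, just to show how files declare and pull in namespaces.
 * @param {string} name
 * @constructor
 */
myapp.widgets.Greeter = function(name) {
  this.name_ = name;
};

/**
 * Renders a greeting into the given element.
 * @param {!Element} parent
 */
myapp.widgets.Greeter.prototype.render = function(parent) {
  goog.dom.appendChild(parent, goog.dom.createDom('p', null, 'Hello, ' + this.name_));
};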
Before I ever used Closure, I used JavaScript more like frosting on a cake. JavaScript can be frosting, but it can also do some amazing things. My biggest complaint with JavaScript in the past has been its unwieldy nature in medium to large projects. I stuck to using JavaScript/jQuery to decorate HTML pages and had the page generation, business logic, templating, etc., on the server side (Python). Then I wrote a medium-sized application in Closure, and it worked, and it's maintainable, and it didn't require a lot of server-side code, and it was fast.
I couldn't be happier.
My only complaint is that Closure development doesn't seem to have the velocity that other projects like GWT have. Google, it seems, is putting its money more on GWT than on something like Closure; or so it seems based on the number of announcements for GWT, the quality of the tools and libraries being produced, and the number of updates to Closure compared to GWT. While GWT is a powerful tool, it's more complex (thanks to Java), harder to set up, and harder to get started with. In some ways I wish they would take the tools and frameworks they have for GWT and build them for Closure.
for (var i = fromIndex, ii = arr.length; i < ii; i++) {
Speed aside, this introduces a bug if the length of the array changes in the body of the loop, but ignoring this booby trap, I ran benchmarks on the original clear version and the slightly more complicated fragile version.
                    clear    fragile
empty loop body     5ms      1ms
single number add   7ms      6ms
single DOM lookup   82ms     81ms
That is for an array of a million elements on an iMac running Safari. (Apparently Safari is particularly good at doing nothing, but otherwise this "optimization" is lost in the loop body's time.)
Edit: I checked Chrome on Linux as well. It was also unimpressive.
You know, while raw speed is an important piece of a library, it is not the only thing; there are other factors that carry just as much weight. Third-party library ecosystem, community support, integration with other technologies, ease of use, and a host of others are all just as important factors when I evaluate a library.
As well, IIRC Closure was an internal project that was built to build apps like Gmail. If that is the case, then it is reasonable to think that it has some cruft in there, given that the state of the art in JavaScript libraries came after Gmail, Outlook on the web, and other browser-based apps showed what was possible.
It was programmers transitioning from other languages to JavaScript that built these first toolkits, and they brought over a good deal of the language constructs they were familiar with. As time went on, programmers from other disciplines joined in and some of the frameworks started to morph.
I remember when Dojo threw away their entire toolkit because of this, and I commend them for doing so. They came to realize that there was a better way than just reimplementing Java or C# in the browser.
Closure, on the other hand, remained an internal project outside that learning process. That being said, I do think there are much better frameworks available than Closure, Dojo and jQuery being two prime examples, but I do cut them some slack based on the fact that Closure would possibly qualify as one of the oldest frameworks, and that it did not benefit from the learning the communities went through as the state of the art evolved.
There is a TechTalk about Closure where the speaker makes a big deal out of the whole project being done by many different developers in their twenty percent time, so yeah, they might get cut some slack and hopefully they'll be good about accepting patches to get some of these things fixed.
Gmail works pretty well, so the library can't be too horrible. I'm glad I read this, I was thinking of doing a project using the Closure library, but I guess I'll stick with jQuery.
I was thinking of doing a project using the Closure library, but I guess I'll stick with jQuery
Might I suggest taking a look at Dojo as well? Depending on your requirements, Dojo is really good at its target, which is large browser-based apps. I use jQuery extensively for the smaller problem sets, but there is no substitute for Dojo when you start getting into large browser apps. Dojo provides the full stack for browser app developers and would be similar to Closure in the segment that it targets. It's worth a look to see if you like it; further, it can be used in conjunction with jQuery, they are not incompatible and play very well together.
Note, this was written over a year ago, so stuff may have changed since then. It would probably be worth taking a look to see how things have improved.
Reading the article, it seems hard to find anything that does not suck (or at least, is not counter-intuitive) in JS.
So many trivialities have to be taken into account.
- You have to store the array length in advance before a loop? wtf?
- "for in" loops are inherently dangerous. wtf?
Every language has pitfalls, but JavaScript seems to be the king of them, ahead of even C++...
No. "for-in" loops that iterate through the properties of an object, and do something with them, without checking to ensure that the property is specific to that object, as opposed to something another library added to the Object.prototype, are dangerous.
The article did a poor job in that section of pointing out _what_ exactly is dangerous. It's not _every_ for-in loop. It's that one in particular, really.
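In other words, the usual guard looks something like this (a generic sketch, not a quote from any particular library):

var settings = { theme: 'dark', fontSize: 12 };

for (var key in settings) {
  // Skip anything inherited via the prototype chain, e.g. properties
  // that some other script tacked onto Object.prototype.
  if (Object.prototype.hasOwnProperty.call(settings, key)) {
    console.log(key + ' = ' + settings[key]);
  }
}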
"I’m not sure what this pattern is called in Java, but in JavaScript it’s called a ‘memory leak’."
The comment is in regard to goog.memoize but is terribly backwards. The complaint about goog.memoize is that it will grow uncontrollably because it does not cap the size of the caching object. A memory leak is the inability of a program to free memory it has allocated.
Since JS is garbage collected, causing a memory leak involves creating a circular reference, fooling the garbage collector into thinking that an object is still in use.
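For context, the pattern in question is roughly this (a generic memoization sketch, not goog.memoize itself):

// A naive memoizer: every distinct argument adds a cache entry, and
// nothing ever evicts entries, so the cache grows without bound.
function memoize(fn) {
  var cache = {};
  return function (arg) {
    var key = String(arg);
    if (!(key in cache)) {
      cache[key] = fn(arg);
    }
    return cache[key];
  };
}

var square = memoize(function (n) { return n * n; });
square(42); // the result stays cached forever, needed or not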
> A memory leak is the inability of a program to free memory it has allocated.
Unexpected memoization/caching also counts as a memory leak. There are (unfortunately) a few places in Closure Library where unexpected memoization might cause a memory leak.
> Since js is garbage collected causing a memory leak involves creating a circular reference fooling the garbage collector into thinking that an object is still in use.
Browser environments are expected to handle circular references. They don't fool garbage collectors, except in old versions of IE when a circular reference crosses the JScript/DOM boundary.
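The classic example of that IE-specific case looked roughly like this (an illustrative sketch of the old JScript/DOM leak pattern, assuming an element with id "button" exists):

function attachLeakyHandler() {
  var el = document.getElementById('button'); // DOM object (COM side in old IE)
  el.onclick = function () {
    // The handler closes over el, so we have el -> handler -> el.
    // Old IE's reference-counted DOM couldn't collect that cycle;
    // modern engines handle it without trouble.
    el.style.color = 'red';
  };
}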
Are you saying that the memory allocated by the memoizer is not recoverable, i.e. won't be released until the browser is killed? If not, then it is not a memory leak.