

Google Closure: How not to write JavaScript - rams
http://blogs.sitepoint.com/2009/11/12/google-closure-how-not-to-write-javascript/

======
gruseom
The article says:

    
    
      Although it is necessary in Java, it is entirely pointless to
      specify the length of an array ahead of time in JavaScript. [...]
      Rather, you can just set up an empty array and allow it to grow as
      you fill it in. Not only is the code shorter, but it runs faster too.
    

Faster? That ought to raise suspicion. JS's dynamic hash-arrays are neat, but
now they're supposed to be immune from the laws that govern memory allocation
in any other language?

As it happens, I had occasion to test this a few months ago.

    
    
      function preallocate(len) {
          var arr = new Array(len);
          for (var n = 0; n < len; n += 1) {
              arr[n] = n;
          }
          return arr;
      }
    
      function noPreallocate(len) {
          var arr = [];
          for (var n = 0; n < len; n += 1) {
              arr[n] = n;
          }
          return arr;
      }
    

On my machine, noPreallocate is 4% faster in FF, but it's 15% slower in IE8
and a whopping 70% slower in Chrome.
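For anyone who wants to reproduce this, a minimal harness in the spirit of that test (the timing code and iteration counts are my sketch, not the original benchmark):

```javascript
// The two allocation strategies under test, repeated here for completeness.
function preallocate(len) {
    var arr = new Array(len);
    for (var n = 0; n < len; n += 1) {
        arr[n] = n;
    }
    return arr;
}

function noPreallocate(len) {
    var arr = [];
    for (var n = 0; n < len; n += 1) {
        arr[n] = n;
    }
    return arr;
}

// Date.now() has coarse granularity, so run each variant many times
// and report total elapsed milliseconds.
function time(fn, len, iterations) {
    var start = Date.now();
    for (var i = 0; i < iterations; i += 1) {
        fn(len);
    }
    return Date.now() - start;
}
```

Usage would be something like `time(preallocate, 100000, 100)` versus `time(noPreallocate, 100000, 100)`, repeated across browsers.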

~~~
axod
<http://axod.net/arraytest.html>

    
    
      After 20 iterations:
    
      Browser                  Pre-alloc    No pre-alloc
      Firefox 3.6.13 OSX           824ms           829ms
      Safari 5.0.3 OSX             812ms           948ms
      Chrome 9.0.597.16 OSX       1317ms           992ms
    

I'm pretty sure that in modern browsers new Array(length) doesn't allocate
anything; it just sets the length property. The results I'm seeing would
agree with Google, really.

Perhaps you were seeing GC events slowing down the test?

The other thing about for (var i=0; i<arr.length; i++) is that it can end up
as an infinite loop if you're modifying the array's length inside the loop:

    
    
      for (var i=0;i<arr.length;i++) {
        arr[10 + i*2] = "foo";
      }
    
      // Infinite loop.
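The usual guard is to snapshot the length before the loop. A sketch (the function name is invented) showing that the loop now terminates even though it appends past the original end:

```javascript
// Caching the length bounds the iteration count, so writes past the
// original end cannot extend the loop.
function padTail(arr) {
    for (var i = 0, len = arr.length; i < len; i += 1) {
        arr[10 + i * 2] = "foo";
    }
    return arr;
}
```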

~~~
gruseom
_Perhaps you were seeing GC events slowing down the test?_

Perhaps. Or perhaps it varies by array size?

------
jrockway
My feeling is that even a compiler written in CS101 would optimize this. I'm
guessing that Google tested their code with V8, performance was fine, and they
thought nothing of it.

I just did a benchmark with node.js. I made a 50000000 element array, and
timed how long each way took.

Trial one:

    
    
        for( var i = 0; i < array.length; i++ ) { array[i]++ }
    

That took, on average, 0.93001866 seconds.

Trial two:

    
    
        for( var i = 0; i < len; i++ ) { array[i]++ }
    

That took, on average, 0.809920 seconds.

A lot of stressing-out over what ends up being a rounding error.
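A rough self-contained version of that comparison (the harness is my sketch, not the original script, and the array is scaled down from 50,000,000 elements so it finishes quickly):

```javascript
// A scaled-down, self-contained version of the two trials above.
var SIZE = 500000;
var array = new Array(SIZE);
for (var j = 0; j < SIZE; j++) { array[j] = 0; }

// Trial one: re-read array.length on every iteration.
var t0 = Date.now();
for (var i = 0; i < array.length; i++) { array[i]++; }
var lookupMs = Date.now() - t0;

// Trial two: cache the length in a local variable first.
var len = array.length;
var t1 = Date.now();
for (var i = 0; i < len; i++) { array[i]++; }
var cachedMs = Date.now() - t1;
```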

~~~
kwamenum86
For JS running in a browser this does not matter, but on a server this will
make a huge difference.

~~~
jrockway
How many 50 million element arrays do you have?

My guess is that this makes no difference in real life. Should you write clean
code that performs well? Yes. But should you be fixated on a tiny bug in
Google's library? Nope. Send patch, get .0000000001 seconds per element back,
and move on.

~~~
kwamenum86
It's not about a single 50mil element array. It's about sub-optimal code
running in a bunch of places and it adds up. But in any case this probably
won't be the bottleneck.

Still I am a fan of running the most optimal code possible on the server.
Absolutely no reason not to.

Client-side JS is different. Oftentimes algorithmic optimizations have no
impact (unless we are talking about animation).

I would not trust people who do not respect optimizations like these to run
code on my server.

------
aboodman
Time in web applications is not spent looking up array lengths - it's spent
in IO, layout, and DOM manipulation. If iterating through arrays were ever
found to be a noticeable issue in practice, the Closure compiler could just
be modified to emit more efficient code. That's one of the advantages of
having the compiler - you don't have to make a convenience/readability
tradeoff.

Closure was not thrown together by novices new to the language. It was started
by Erik Arvidsson and Dan Pupius, two JS hackers that have been doing this
kind of work longer than just about anyone else. Its differences from other
libraries aren't the result of ignorance, they're mostly the result of
conscious tradeoffs to make compilation more effective.

 _Edit:_ Oh, and the string thing... If you ever do

    
    
      new String("foo")
    

in JavaScript, you're doing it wrong.
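A quick illustration of why: the String constructor produces a wrapper object, not a string primitive, which breaks typeof checks and strict comparisons:

```javascript
// new String() yields a wrapper object, not a string primitive.
var primitive = "foo";
var wrapper = new String("foo");

typeof primitive;       // "string"
typeof wrapper;         // "object"
primitive === wrapper;  // false: strict comparison sees an object
primitive == wrapper;   // true: loose comparison coerces the wrapper
```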

~~~
aboodman
Here is an example of a real-world performance bottleneck that was discovered
by the closure team:

<http://pupius.co.uk/blog/2007/03/garbage-collection-in-ie6/>

------
axod
> "...was that people would switch from truly excellent JavaScript libraries
> like jQuery to Closure on the strength of the Google name."

This is ridiculous. Doesn't the mere fact that jQuery keeps announcing 4000%
speedups with every new release tell you something about the efficiency of
jQuery?

Unbelievably biased. If you looked at the jquery code you'd find the same sort
of things, and some far worse.

From jquery release notes:

    
    
      ... coming in almost 30x faster than our previous solution
      ... coming in about 49% faster than our previous engine
      ... much, much faster (about 6x faster overall)
      ... Seeing an almost 3x jump in performance
      ... improved the performance of jQuery about 2x compared
         to jQuery 1.4.1 and about 3x compared to jQuery 1.3.2
      ... Event Handling is 103% Faster
      ... jQuery.map() method is now 866% faster
      ... .css() is 25% faster
    

Maybe it's just me, but when someone says they've sped up their code so it
runs 30 times as fast, you have to really wonder just how badly it was written
to start with, and how badly it's still written.

~~~
brunoc
These improvements have occurred over time, as browsers gained new features
and new techniques were discovered. They (the jQuery contributors) focus on
features and optimize what can be optimized when there is a need.

The optimized solution is often much uglier than the simple but less efficient
one.

------
ivank
Previously <http://news.ycombinator.com/item?id=937175>

If you're building a large JavaScript application, Closure might be your best
option, given that the Closure Compiler (in ADVANCED mode) produces small
obfuscated output files that contain only the functions your program uses.
ADVANCED mode restricts how you write your JavaScript (but not onerously), and
that's where the Closure Library comes in: a 1-million-LOC "standard library"
already annotated for the Compiler.

I've found working with Closure Library/Compiler enjoyable, typically more so
than Python, because the Compiler's type system finds plenty of bugs as I
work. It has even caught bugs in my Python code (after I ported it to
JavaScript, of course).
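For flavor, here is the kind of JSDoc type annotation the Compiler checks. The function is an invented example, not part of the Library, and runs as plain JavaScript with or without compilation:

```javascript
/**
 * Clamps a value to the inclusive range [min, max].
 * The @param/@return annotations are what the Compiler type-checks;
 * passing a string here would be flagged at compile time.
 * @param {number} value
 * @param {number} min
 * @param {number} max
 * @return {number}
 */
function clamp(value, min, max) {
    return Math.min(Math.max(value, min), max);
}
```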

There's also a good book out there for Closure:
<http://www.amazon.com/dp/1449381871/>

------
julius
Closure is one of the most intuitive libraries I have ever used.

I use Closure for everything that is too big for jQuery. Compared to its next
best competitor, YUI, it's a joy (e.g. the first really good cross-browser
rich-text editor).

I have not found many features that aren't already included in the library.

Code scales easily and is fast enough, especially on the production system,
where, thanks to the Closure Compiler, you can serve a compiled version (I
also prefer the Compiler over YUI's).

Have I told you about the excellent testing framework...

Have I told you about the excellent documentation...

Have I told you about its very readable code...

When it was released and I had read some of its code, I knew I wanted to use
it at work as soon as possible. But this exact blog post had a very high
Google rank for the query "Google Closure".

If you, too, run into the problem of your co-workers reading that post, just
link to the HN comments. Worked for me. Here is the older HN link:
<http://news.ycombinator.com/item?id=937175>

~~~
nswanberg
At what point do you decide something is too big for jQuery? Lines of code?
Number of developers? Certain features needed?

Does it make sense to begin with jQuery and switch at a certain time?

~~~
RyanDScott
Reasons you might consider using Closure instead of something like jQuery or
plain-old-JS:

1. Your JavaScript file is getting huge and you want to break things out into
manageable pieces.

2. You find yourself needing namespaces that are easy to implement.

3. You want to learn how to build structured JavaScript (Closure is great at
encouraging well-documented, "object-oriented" coding).

4. You've got too many JS files (2+) and you want to have only one in
production for faster page loading (use the Closure Compiler).

5. You're building an application with a team of developers; Closure helps
create modular, well-documented code.

6. You want to build a snappy, client-side-heavy application.
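On point 2 above: Closure's goog.provide/goog.require handle namespacing for you. A hand-rolled sketch of the same idea in plain JavaScript (myapp, util, and capitalize are invented names for illustration):

```javascript
// A hand-rolled namespace, similar in spirit to goog.provide('myapp.util').
// The || {} pattern lets multiple files safely extend the same namespace.
var myapp = myapp || {};
myapp.util = myapp.util || {};

myapp.util.capitalize = function (s) {
    return s.charAt(0).toUpperCase() + s.slice(1);
};
```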

Before I ever used Closure, I used JavaScript more like frosting on a cake.
JavaScript can be frosting, but it can also do some amazing things. My biggest
complaint with JavaScript in the past has been its unwieldy nature in medium
to large projects. I stuck to using JavaScript/jQuery to decorate HTML pages
and kept the page generation, business logic, templating, etc., on the server
side (Python). Then I wrote a medium-sized application in Closure, and it
worked, and it's maintainable, and it didn't require a lot of server-side
code, and it was fast.

I couldn't be happier.

My only complaint is that Closure development doesn't seem to have the
velocity of other projects like GWT. Google, it seems, is putting its money
more on GWT than on Closure, judging by the number of announcements for GWT,
the quality of the tools and libraries being produced, and the number of
updates to Closure compared to GWT. While GWT is a powerful tool, it's more
complex (thanks to Java), harder to set up, and harder to get started with.
In some ways I wish they would take the tools and frameworks they have for
GWT and build them for Closure.

------
jws
Example 1: Slow Loop

The author claims writing:

    
    
      for (var i = fromIndex; i < arr.length; i++) {
    

…is slow and can be much faster as…

    
    
      for (var i = fromIndex, ii = arr.length; i < ii; i++) {
    

Speed aside, this introduces a bug if the length of the array changes in the
body of the loop. But ignoring this booby trap, I ran benchmarks on the
original clear version and the slightly more complicated fragile version.

    
    
                            clear     fragile
      empty loop body         5ms         1ms
      single number add       7ms         6ms
      single DOM lookup      82ms        81ms
    

That is for an array of a _million_ elements on an iMac running Safari.
(Apparently Safari is particularly good at doing _nothing_, but otherwise
this "optimization" is lost in the loop body's time.)

Edit: I checked Chrome on Linux as well. It was also unimpressive.

------
kls
You know, while raw speed is an important piece of a library, it is not the
only thing; other factors carry just as much weight. Third-party library
ecosystem, community support, integration with other technologies, ease of
use, and a host of others are all just as important factors when I evaluate a
library.

As well, IIRC Closure was an internal project built to build apps like Gmail.
If that is the case, it is reasonable to think it has some cruft in there,
given that the state of the art in JavaScript libraries came after Gmail,
Outlook on the web, and other browser-based apps showed what was possible.

It was programmers transitioning from other languages to JavaScript who built
these first toolkits, and they brought over a good deal of the language
constructs they were familiar with. As time went on, programmers from other
disciplines joined in and some of the frameworks started to morph.

I remember when Dojo threw away their entire toolkit because of this, and I
commend them for doing so. They came to realize that there was a better way
than just reimplementing Java or C# in the browser.

Closure, on the other hand, remained an internal project outside that
learning. That being said, I do think there are much better frameworks
available than Closure, Dojo and jQuery being two prime examples, but I do
cut it some slack given that it would probably qualify as one of the oldest
frameworks and did not benefit from the learning the communities went through
as the state of the art evolved.

~~~
pinchyfingers
There is a TechTalk about Closure where the speaker makes a big deal out of
the whole project being done by many different developers in their
twenty-percent time, so yeah, they might get cut some slack, and hopefully
they'll be good about accepting patches to get some of these things fixed.

Gmail works pretty well, so the library can't be too horrible. I'm glad I
read this; I was thinking of doing a project using the Closure library, but I
guess I'll stick with jQuery.

~~~
nickik
Can you post the link to that TechTalk? I can't find it.

~~~
amattie
<http://closuretools.blogspot.com/2010/06/closure-library-tech-talk-at-google-io.html>

------
oomkiller
Note, this was written over a year ago, so stuff may have changed since then.
It would probably be worth taking a look to see how things have improved.

------
mfukar
Why are we (and by we, I mean the article author) getting worked up about
what should be a single bug report, or maybe a few?

It'd be a lot more interesting if you could use those conclusions to find out
who wrote those parts of the code.

------
_ques
This article is over a year old.

~~~
araneae
True, but as someone who has a Java background and is working in JS, it's
nice to know that switches suck in JS :)
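For reference, the alternative the article recommends over switch is an object lookup table. A sketch with invented shape functions (whether it is actually faster varies by engine, so measure before relying on it):

```javascript
// Dispatch via switch...
function areaSwitch(shape, n) {
    switch (shape) {
        case "square": return n * n;
        case "circle": return Math.PI * n * n;
        default: return 0;
    }
}

// ...and the equivalent object-lookup dispatch often suggested instead.
var areaFns = {
    square: function (n) { return n * n; },
    circle: function (n) { return Math.PI * n * n; }
};
function areaLookup(shape, n) {
    var fn = areaFns[shape];
    return fn ? fn(n) : 0;
}
```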

~~~
gruseom
I would be very careful (i.e. run my own tests, in multiple browsers) before
believing that.

~~~
rbanffy
Or, like my college teachers told me, "measure, don't guess".

I am a bit ashamed to confess I do a lot of guessing in my work...

------
abraham
I wish the code snippets were linked to the loc.
<http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/array/array.js?r=2#63>

------
kwamenum86
"I’m not sure what this pattern is called in Java, but in JavaScript it’s
called a ‘memory leak’."

The comment refers to goog.memoize but is terribly backwards. The complaint
about goog.memoize is that it will grow uncontrollably because it does not
cap the size of the caching object. A memory leak is the inability of a
program to free memory it has allocated.

Since JS is garbage collected, causing a memory leak involves creating a
circular reference, fooling the garbage collector into thinking that an
object is still in use.
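To make the complaint concrete, here is a minimal memoizer in the style being discussed (my sketch, not Closure's goog.memoize): every distinct argument adds a cache entry, and nothing ever evicts one.

```javascript
// Minimal memoizer for single-argument functions. The cache grows by
// one entry per distinct argument and entries are never removed.
function memoize(fn) {
    var cache = {};
    var memoized = function (key) {
        if (!(key in cache)) {
            cache[key] = fn(key);
        }
        return cache[key];
    };
    // Expose the cache size so the unbounded growth is observable.
    memoized.cacheSize = function () {
        return Object.keys(cache).length;
    };
    return memoized;
}
```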

~~~
ivank
> A memory leak is the inability of a program to free memory it has allocated.

Unexpected memoization/caching also counts as a memory leak. There are
(unfortunately) a few places in Closure Library where unexpected memoization
might cause a memory leak.

> Since js is garbage collected causing a memory leak involves creating a
> circular reference fooling the garbage collector into thinking that an
> object is still in use.

Browser environments are expected to handle circular references. They don't
fool garbage collectors, except in old versions of IE when a circular
reference crosses the JScript/DOM boundary.

~~~
kwamenum86
Are you saying that the memory allocated by the memoizer is not recoverable,
e.g. that it won't be released until the browser is killed? If not, then it
is not a memory leak.

~~~
ivank
It's potentially recoverable, but stuck in some "private" object your
JavaScript application will never bother to look at. It's still a memory leak.

