
The Story of Google's Closure: Advanced JavaScript Tools - mariorz
http://blog.louisgray.com/2009/11/story-and-impact-of-closure-googles.html
======
andrewljohnson
What he doesn't tell you is how large the Reader code would be if it were just
gzipped. That's the real comparison to make.

If gzipping alone would get the code down to 800KB or something, then the
improvement made by Closure doesn't seem so awesome.

Nonetheless, it's obviously good to get the file size down to the absolute
minimum when you are going to serve millions or billions of copies.
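
Measuring that baseline is straightforward -- a quick sketch, assuming a
recent Node.js with its built-in zlib ('reader.js' is a made-up stand-in
for the unshipped source):

    // Compare raw vs. gzipped size of a JS bundle (hypothetical file).
    var fs = require('fs');
    var zlib = require('zlib');

    var raw = fs.readFileSync('reader.js');
    var gz = zlib.gzipSync(raw);
    console.log('raw:     ' + raw.length + ' bytes');
    console.log('gzipped: ' + gz.length + ' bytes');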

~~~
nostrademons
I've seen (and used) benchmarks with minifiers other than Closure. The
minified + GZipped size is usually about half that of the just-GZipped file.
Closure may do a bit better because it performs pretty aggressive dead-code
elimination. 800K-1MB sounds reasonable as a rough estimate.
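
To make the dead-code point concrete, a contrived sketch (not from any real
benchmark):

    // Contrived input: formatDate is never called from anywhere.
    function formatDate(d) {
      return d.getFullYear() + '-' + (d.getMonth() + 1);
    }
    function init() {
      document.title = 'ready';
    }
    init();

    // With ADVANCED_OPTIMIZATIONS the compiler proves formatDate is
    // unreachable, deletes it, inlines init, and emits roughly:
    //   document.title="ready";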

I'd disagree on this not being awesome, though. 800K -> 143K is _huge_. You're
talking about shaving several seconds off your user-visible load times. You
get less cache thrash from smaller payloads. And at Google scale, this
could be petabytes' worth of bandwidth per month.

I've had code reviews held up while I shaved _3_ bytes off the compiled
GZipped size, and proposed changes vetoed because they'd add 19K to the
GZipped JS size. The entire JS for websearch is something like 13K. Saving
600K+ is absolutely enormous.

~~~
patio11
_You're talking about shaving several seconds off your user-visible load
times._

This is the sort of statement that SHOULD set off a veritable explosion in
your head of "If what he is saying is true, that is awesome. Seventeen
exclamation points elided here!", because shaving seconds off user-perceptible
load times results in huge, automatic, reproducible impacts to the bottom
line, across a wide range of different web sites. You can reproduce it for
yourself by A/B testing adding unnecessary delay into your pages versus not
adding unnecessary delay, and watching what it does to conversions. Google has
done this: they've found that less load time means more searches, which means
more money. Amazon has done this: they found that 100 milliseconds more means
1% less -- revenue, that is.
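
The experiment itself is almost trivially cheap to run. A client-side sketch
(the bucketing and the 500ms figure are made up; a real test would bucket per
visitor, e.g. via a cookie):

    // Put half of pageviews behind an artificial delay and tag each
    // conversion event with the bucket so the two rates can be compared.
    var delayed = Math.random() < 0.5;

    function showPage() {
      // ... render as normal; report `delayed` with every conversion.
    }

    if (delayed) {
      setTimeout(showPage, 500);  // treatment: extra half-second
    } else {
      showPage();                 // control: no added delay
    }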

Although I don't routinely A/B test bad practices against the default just to
make a point, I did pick up a handful of percentage points' worth of free
conversions back when I first saw one of the YSlow presentations and got
religion on this.
Seriously -- if you don't already pay attention to page load speed, you have a
money tree in your back yard. Go put a basket under it and shake a bit.

~~~
JeffJenkins
Did Amazon (or Google) publish this result?

~~~
patio11
Yes.

http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html

------
barrkel
I find this whole Closure story amusing, because what it amounts to is
applying static typing techniques to a dynamic language - and guess what,
static typing makes it "easier to catch errors much earlier". Where are the
dynamic typing extremists when you need them?
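
For those who haven't seen it: the "static typing" here is JSDoc annotations
that the compiler actually checks. A minimal example:

    /**
     * With type checking enabled, the compiler reads these annotations
     * and warns at compile time when the types don't line up.
     * @param {string} name
     * @param {number} count
     * @return {string}
     */
    function describe(name, count) {
      return name + ': ' + count;
    }

    describe('feeds', 'twelve');  // flagged at compile time:
                                  // string where a number is required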

~~~
yuan
I don't remember seeing any "dynamic typing extremists" who would argue that
static typing is totally worthless -- just that it has costs in addition to
the benefits it provides (e.g., it's an additional concern for the
programmer), and that there are often cases where the cost dwarfs the
benefits. I do see static typing advocates who argue that static typing
should be everywhere, at all times, and non-optional.

To the extent that a single case can prove anything, what is being done here
in fact strengthens the case for dynamic typing, for it shows that a dynamic
language can reap the benefits of static analysis too, WHEN it is beneficial.

~~~
jganetsk
_To the extent that a single case can prove anything, what is being done here
in fact strengthens the case for dynamic typing, for it shows that a dynamic
language can reap the benefits of static analysis too, WHEN it is beneficial._

You don't know that dynamic typing + static analysis is better than just
static typing. Closure definitely does not catch everything. Hell, Closure
breaks down on something as simple as using the [] operator on a non-array
object.
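
A contrived sketch of that blind spot:

    /** @param {{width: number}} box */
    function area(box) {
      var key = 'wid' + 'th';
      // Computed [] access is effectively untyped, so the checker has
      // no idea what box[key] is and this slips through unchecked.
      return box[key] * box[key];
    }

    // In ADVANCED mode it gets worse: box.width can be renamed while
    // the string-built key is not, so the two quietly stop matching.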

------
neilc
_Mihai estimates that without Closure, Reader's JavaScript code would be a
massive 2 megabytes_

Wow. That just seems wrong to me.

~~~
louismg
The direct quote from Mihai from my e-mail reads: "The benefits of the
compiler system are tremendous. The most obvious are the size ones, without it
Reader's JavaScript would be 2 megaytes, with it it goes down to 513K, and
184K with gzip (the compiler's output is actually optimized for gzip, since
nearly all browsers support it)."

(I left the typo in for this excerpt.) -- I'm the author of the piece. I
can't say why Reader's JS would be so large. I assume RSS can be heavy, and
rendering many feeds at once can be browser-intensive.
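
(For scale: 513K is about 25% of the 2MB figure, roughly a 4x reduction from
compilation alone, and 184K on the wire is about 9%, an 11x reduction
overall.)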

~~~
barrkel
There is no necessary relationship between the size of the code (the JS) and
the size of the data ("many feeds at once"), and since the data is almost
always much larger than the code, reducing code size frees up proportionally
very little room for data.

------
cloudhead
I'd be interested to know how it compares to the current version of Dojo.

