Comparing the Closure compiler to Uglify (syntaxsuccess.com)



An enormous amount of the shaving and inlining that the Closure compiler is capable of assumes that you write your code according to rules that reduce ambiguity and dynamic features. Variable names get mangled, so using string names for properties is out the window, which means invoking methods by string name is out as well. It uses objects for namespaces, so things like `Shape.Quad.Square` get flattened to `Shape$Quad$Square` before being minified to some one-letter variable; as a consequence, don't use `this` outside of constructors or prototype methods.
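
For illustration, a minimal sketch (with hypothetical names) of the kind of code that breaks under advanced optimizations, because unquoted property names get renamed while string literals do not:

    function Shape() {}
    Shape.prototype.area = function () { return 0; };

    var s = new Shape();
    s.area();      // fine: definition and call site get renamed consistently
    s['area']();   // breaks: the string 'area' is never renamed, but the
                   // property itself may have become s.a

    var method = 'area';
    s[method]();   // breaks for the same reason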

There's a larger list of restrictions at https://developers.google.com/closure/compiler/docs/limitati...

If you do write code obeying these rules (check out the closure library), or you use something like ClojureScript that generates straightforward, monomorphic, closure-compiler-compatible js, the payoff in both file size and parse/execution performance is pretty great.


Closure Compiler was indeed written to target Google's very specific rules on how to structure code and name things. However, Closure still does a better job than Uglify and other minifiers even if you don't follow them. Its basic set of optimizations is generic and won't break code (unless you enable the advanced mode, which has historically been iffy).
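
To make the distinction concrete, a rough sketch of what the two levels do to the same input (illustrative, not actual compiler output):

    // Input:
    function greet(name) {
      var message = 'Hello, ' + name;
      return message;
    }

    // SIMPLE_OPTIMIZATIONS: whitespace stripped and locals renamed;
    // function names and property names are left alone.
    function greet(a){return"Hello, "+a}

    // ADVANCED_OPTIMIZATIONS additionally renames properties and
    // whole-program symbols, inlines, and removes code it can prove
    // is unused; that is where both the wins and the breakage live.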

The only downside is that it's slow. Like really slow. It takes about a minute to minify our app.

Edit: Looks like Uglify has gotten better lately. I'll submit a separate top-level comment.


Seems like a lot of gotchas... you could end up shaving a couple K off your bundle in exchange for bugs that are very hard to track down. I mean, sure, if you write your code ground-up to follow a set of rules it's probably great.


ClojureScript owes a lot of its success to its use of the Google Closure compiler, which really shines when you transpile, because you can ensure that the resulting js is suitable for it.


Scala.js also uses GCC, of course. However, while it's definitely great at shaving off code size, we found that it can have a (sometimes significant) negative impact on runtime performance.

Also, we seem to run into correctness bugs more often than we would like. I guess we are really stressing it.


Yes. Angular is working on a similar approach. Great results so far.


I read there was a TypeScript compiler wrapper that would output CC-compatible type annotation comments.


Tsickle by Angular does that: https://github.com/angular/tsickle

It wraps the TypeScript Compiler and outputs CC compatible type annotation comments.
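
Roughly, it translates TypeScript types into Closure-style JSDoc. An illustrative sketch of the idea (not actual tsickle output):

    // TypeScript input:
    //   function add(a: number, b: number): number { return a + b; }

    // Emitted JavaScript with Closure-compatible annotations:
    /**
     * @param {number} a
     * @param {number} b
     * @return {number}
     */
    function add(a, b) { return a + b; }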


- It's being built by a handful of people (I remember seeing the original author complain he doesn't really have the time for it)

- It's a hack on top of the TypeScript compiler

- It gets broken by TypeScript upgrades and usually lags several weeks behind new TS releases.

I'd be really wary of using it in production.


[author here]

- Pretty much everything you use (e.g. Angular or React or Babel or Webpack or equivalents in other languages) is being built by a handful of people

- It interacts with TypeScript in a way that tends to be more sensitive to changes, but being some weeks behind the latest release of a compiler is not the end of the world, and we do continuously upgrade

I agree you should be wary of using it in production, though, because it's not really user-friendly yet. Still, if you can figure it out, it does work, and some people are successfully shipping apps with it.


I sounded curt, but I think tsickle is awesome.

I can't justify bringing it in in our current setup though :(


I would love to see it used with real-world examples. There are two main situations IMO where this would be useful:

- For developers creating websites. The setup cost would be high because you'd need concatenation, minification, etc. in place, so I'd love to see some examples showing whether or not it's worth it. From what I know, this could be similar to tree-shaking, so there could be huge gains here.

- For library creators. However, the top-level variables would have to be preserved, since other developers need to use them (which seems possible[1]; see the sketch below), so I'll be trying it out.

I am also wondering about the performance boost for JS (besides file size) and whether this would be useful for something like Node.js or not (free performance). If anyone has more information please share it.

[1] http://stackoverflow.com/q/3025827/938236
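
Along the lines of [1], here is a minimal sketch of preserving a library's public names under advanced optimizations (the library and function names are hypothetical). Quoted property names are never renamed, so the public surface can be exported through string keys while internals remain free to be mangled:

    // Internal names: the compiler is free to rename these aggressively.
    function internalAdd(a, b) { return a + b; }

    // Public API: quoted keys and bracket notation survive minification,
    // so consumers can keep calling myLib.add().
    window['myLib'] = {
      'add': internalAdd
    };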


At Lucid Software we use the Closure Compiler with advanced optimizations to build both Lucidchart and Lucidpress. You can see some work we did with it with respect to Angular here: https://www.lucidchart.com/techblog/2016/09/26/improving-ang...

I don't have a direct comparison to Uglify, but as of last fall, advanced optimizations saved us about 2.5 MB over simple optimizations.

We're also using Tsickle (https://github.com/angular/tsickle) and Clutz (https://github.com/angular/clutz) to be able to use TypeScript with our Closure-Compiler-compatible codebase. Closure code can be depended upon in TypeScript code and vice versa. It's been awesome to use the type system of TypeScript in combination with the minification power of the Closure Compiler. The build process is definitely a bit crazy at the moment, though.

Disclaimer: I work at Lucid Software.


I have a comparison using an Angular app here: http://www.syntaxsuccess.com/viewarticle/angular-application...



Thanks! Apparently I've already seen that, since one of the comments there is mine.


The biggest real-world example is arguably Angular's codebase, referenced in answers to your comment.

What these comments don't mention though:

- Angular's build system is an unholy mess; good luck understanding how it works

- They had to build their own tools, including hacks into the TypeScript compiler, to make sure Google Closure Compiler understands their code

Other "answers" reference things like "a custom snapshot build of Angular".


Uglify has actually gotten better over the last few years, but Closure is still faster on multi-core machines. Here's a comparison on one of our apps:

    $ wc -c index.js | grep -v total
    9320511 index.js

    $ cat index.js | time closure-compiler \
      --warning_level quiet \
      --third_party \
      --jscomp_off es3 \
      --compilation_level SIMPLE_OPTIMIZATIONS \
      --language_in ECMASCRIPT5 > index.js.closure 2>/dev/null
    33.54s user 0.93s system 369% cpu 9.336 total

    $ cat index.js | time closure-compiler \
      --warning_level quiet \
      --third_party \
      --jscomp_off es3 \
      --compilation_level ADVANCED_OPTIMIZATIONS \
      --language_in ECMASCRIPT5 > index.js.closure-advanced 2>/dev/null
    33.54s user 0.93s system 369% cpu 9.336 total

    $ cat index.js | time node_modules/.bin/uglifyjs \
      --screw-ie8 --mangle --compress - > index.js.uglify 2>/dev/null
    18.88s user 0.24s system 103% cpu 18.475 total

    $ wc -c index.js.* | grep -v total
    1750478 index.js.closure
    1515331 index.js.closure-advanced
    1763350 index.js.uglify

    $ gzip -9 index.js.*

    $ wc -c index.js*.gz | grep -v total
     369749 index.js.closure-advanced.gz
     397468 index.js.closure.gz
     387896 index.js.uglify.gz
(Closure 20170218 on Oracle Java 8, uglify-js 2.4.19 on Node 6.8.1, running on a 2015 MacBook Pro.)

It's worth noting that Closure is heavily parallelized: it finishes in about half the time it otherwise would by keeping all 4 cores at almost 100%. If you only have one core, though, Closure will be much slower.

Our apps don't use any of Google's conventions, but Closure is still able to minify as well as Uglify. I don't know whether there are specific Closure or Uglify options that could make a bigger difference.

Also worth noting that we only minify for staging/production, not during development, which would incur too much of a performance hit.

We've always used Closure, and been very happy with it.


Exact inverse of my situation: CC is 3-4x slower on our codebase.


I wrote about combining Closure and Uglify, for ClojureScript here: https://blog.jeaye.com/2016/02/16/clojurescript/

The result was about 20% savings ungzipped; 10% savings gzipped.


I struggled for quite some time with Closure Compiler, in particular with CC's broken/unofficial support for CommonJS modules. I succeeded by pasting all module code into one file, removing `require()` calls, and presenting only the single concatenated file to CC (and the minification and linting results were very good).

Granted, this is only possible if you reference modules/classes uniformly by the same name across your source, rather than using CommonJS "import as", and it only applies if you have no or few external dependencies, which isn't realistic in the general case. Supposedly, using ES6 modules fixes this, but I haven't checked yet.
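
To make the workaround concrete, the manual transformation looked roughly like this (a sketch with hypothetical module names):

    // Before: two CommonJS files.
    //
    // math.js
    module.exports = { add: function (a, b) { return a + b; } };
    // main.js
    var math = require('./math');
    math.add(1, 2);

    // After pasting everything into one file and removing the
    // require() calls, relying on the same name being used everywhere:
    var math = { add: function (a, b) { return a + b; } };
    math.add(1, 2);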


You should use Browserify or Webpack to bundle code together like that. They will handle the import chains for you and give you a single file that can be passed to Closure without any issues. Browserify is particularly easy to get going with; Webpack requires a bit more config.

Edit: See this comment: https://news.ycombinator.com/item?id=13910621.


There is a flag to tell it to process CommonJS modules. I played with that here: http://www.syntaxsuccess.com/viewarticle/combining-es2015-mo...

The example combines ES6 and CommonJS modules in the same project.


In my tests on my company's code base, Closure Compiler wasn't worth it.

It did, however, save a few kilobytes after gzip, but it took around 3-4x longer to run (much worse if you use their JS implementation for Webpack).

If build time is not an issue for you, and you're OK with the Java dependency, then it might be worthwhile.


Exactly. When comparing sizes after gzip compression, minification itself is often not worth the hassle. To get any serious benefit out of Closure Compiler you have to annotate your source code, avoid certain conventions, and generally worry about things breaking.

That's probably just fine when you are transpiling (e.g. Google Web Toolkit), but it's a PITA for everyone else.


What about parse time? This is arguably an equally important metric. Make sure to throttle CPU and network to simulate a budget smartphone when you test.


Again, based only on my codebase: parse time was better, but in the same way post-gzip size was better, which is to say not by a lot.

I'm sure in other specific cases it may be a lot better. Just not in mine.


If you have customers on ancient smartphones and the engineering budget to manage the complexity ... go for it! :D


We're not talking ancient here. The mobile test on Webpagetest (https://www.webpagetest.org/easy) uses a Moto G4, which is less than a year old. But this test is punishingly brutal on most JavaScript-heavy websites. If you can make wins on parse time, you absolutely need to be pursuing them.


Most JavaScript-heavy sites in the real world are that way due to third-party ad code (a sad fact of the modern web), and Closure Compiler won't affect those.


Can TypeScript or Flow do similar optimizations? Or is there a TypeScript-to-Closure-Compiler compiler?

It baffled me a bit that someone in the comments on the OP's site abbreviated Closure Compiler as GCC. Imagine my surprise reading that GCC is written in Java.


Here is a TypeScript discussion on the issue: https://github.com/Microsoft/TypeScript/issues/8



UglifyJS takes about 5 minutes to set up, and it works.

It's impossible to reliably set up Closure compiler if you are outside of Google's infrastructure and/or outside of Google's Closure Library.

End of comparison.


Luckily, that's not correct.

Just pipe your JavaScript into Closure and it will minify it as easily as Uglify. We're migrating to Webpack, but we've been using this for about 4 years:

    node_modules/.bin/browserify index.js | closure-compiler \
      --warning_level quiet \
      --third_party \
      --jscomp_off es3 \
      --compilation_level SIMPLE_OPTIMIZATIONS \
      --language_in ECMASCRIPT5 \
      --create_source_map "index.min.js.map" \
      --output_wrapper "%output%//# sourceMappingURL=/index.min.js.map" \
      > index.min.js
This generates a minified file plus a source map.

Part of the magic happens in Browserify [1], which generates a single file from all your inputs, taking care to follow the import chains in the correct order. If you have ES6 code, you'll just install Babel and add "-r babel-register" to the Browserify command line.

I read the rant in your other comment, and I don't understand what it was that you struggled with. You certainly don't need to follow Google's Closure conventions in order for it to optimize and minify your code effectively. We don't; we use React and Babel with all the ES6 bells and whistles, but none of the Google conventions.

[1] http://browserify.org


We moved away from browserify and the need to concatenate everything about two years ago, and we're definitely never going back :)

So maybe concatenating everything works with Closure Compiler. I don't know. We couldn't make it work with their advertised support for node_modules.


Why do you need to minify if you're not bundling?


We are bundling; that's the last step after all the code has been babelified etc. So Closure Compiler would do nothing for us, really. And it fails spectacularly on Webpack's helper functions that get bundled in (especially if those functions are extracted into the HTML file).


While it's still in an experimental stage, I'm hoping the pure-JS version of the Closure compiler will solve the problem to a large extent.

https://github.com/google/closure-compiler-js


As it's a port of the Java version with the same capabilities, it'll probably still have the same problems.

In my personal experience, I couldn't get the compiler (any of the versions) to work with a simple-enough project with node_modules in all the correct places.

UglifyJS just works.


Closure definitely has a steeper learning curve. I also agree that it's not always realistic to use it.


I'd say the learning curve is insurmountable.

- The official docs are basically non-existent.

- If you're lucky enough to realize that the GitHub wiki has better docs, good luck figuring them out.

- Even though support for node_modules is kinda finally there, it's still impossible to figure out how to reliably set up the compiler in a way that recognizes them.

My old-ish rant about this: https://gist.github.com/dmitriid/7bd6f2c10d263bae40e0addc7ed...


Is a 5-line example really a great basis for a comparison?


I kept it simple to try to illustrate the main difference between the two approaches.


Babili actually has a pretty complete comparison of compression rates across major libraries.

https://github.com/babel/babili



