Wow, esbuild comes away as the clear winner overall. It's always the fastest – often at least 10x as fast as many of the others – and its compression ratio was within 2% or so of the best in all cases.
If you're mostly worried about how quickly you can minify when you do minify, then you're worrying about the wrong thing. You want the smallest possible output that executes the fastest; how long minification takes doesn't really matter, since you only minify on delivery to end users, not every time you make a change locally.
So instead it seems Google Closure is the best, in the cases where the author got it to work. Otherwise it's UglifyJS/Terser, depending on your needs.
It might be a bit of a red herring, but build (and thus deploy) times absolutely do matter.
Going from a 30-second deploy to a 2-minute deploy to a 30-minute deploy has a severe impact on your workflow, on how atomic your changes can be, and on how immediate your feedback loop is.
I think (without data, but from my own experience) it's one of the biggest overlooked productivity sinks, especially in web development.
True, and I agree with you: slow deploys are one of the biggest overlooked productivity sinks. Minification tends to be the slowest step in a frontend pipeline (at least in my experience), but with Google Closure Compiler on a small codebase (20K+ LoC) it's closer to a two-minute step than a 30-minute one, and the size difference is more important (for us) than a difference of seconds when deploying.
I disagree only because a large e2e test suite can take very long. I often see 20-30 minute e2e suites for large applications, applications which compile in a few minutes.
Totally agree re: deploy being very important to optimize.
You would also develop and deploy against Closure using SIMPLE optimizations for your dev workflow and only run ADVANCED less frequently. ADVANCED also gives deeper type checking. I have not worked with Closure Compiler in years, but it is a different and incredibly powerful beast. It's easily extensible with your own compile passes as well.
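As a rough sketch of what switching levels could look like, assuming the google-closure-compiler npm wrapper (the file paths and env check are placeholders of mine; the option names mirror the CLI flags):

```js
// Sketch only: pick Closure's compilation level per environment.
const ClosureCompiler = require('google-closure-compiler').compiler;

const level = process.env.NODE_ENV === 'production' ? 'ADVANCED' : 'SIMPLE';

new ClosureCompiler({
  js: 'src/app.js',                  // hypothetical entry file
  js_output_file: 'dist/app.min.js', // hypothetical output path
  compilation_level: level,          // SIMPLE for day-to-day builds, ADVANCED for release
}).run((exitCode, stdOut, stdErr) => {
  if (exitCode !== 0) {
    console.error(stdErr);
  }
});
```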
The point is that the two are competing priorities. You still care about minimizing end user download size, but you also care about your own developer experience (and concomitant velocity).
Different people may weight those things differently, but it is unlikely that someone would assign a weight of zero to one or the other (so people are unlikely to just throw up their hands and say "no minification").
> If you're mostly worried about how quickly you can minify when you do minify, then you're worrying about the wrong thing.
You don't need to be 'mostly' worried about how quickly you can minify for esbuild to be the top choice: if you're only, say, 10% worried about build times and still 90% ('mostly') worried about size + execution speed, esbuild still comes out on top in these benchmarks.
Or, to put it another way: if you're 10% worried about getting to do production deploys 10% faster and 90% worried about how fast you can deliver the code to your users when they load your site (and/or bandwidth costs that increase with each user), then esbuild might make more sense for you.
For the rest of us who are 100% focused on the best experience for our users: we stick with the tools that do the best minification while being a bit slower, and throw more hardware at the deploys if needed.
If you can do 10x the builds in a day, you can end up catching more issues before your users do. Meanwhile, the difference for them is in the single-digit percents. An interesting test to do next would be how performant the compiled versions are.
> Or, to put it another way: if you're 10% worried about getting to do production deploys 10% faster
Correction: 10x faster, not 10%. The 60-minute deploy is shortened to 6 minutes, not 54 minutes. This is a significant difference in impact on a deployment workflow.
> we stick with the tools that do the best minification while being a bit slower
Correction: not 'a bit slower', _10 times_ slower, which is significant.
Closure Compiler can produce truly impressive output, but it comes at a cost. There are additional rules for how one's code must be written – primarily that you must not use reflection/metaprogramming techniques. If any code, including in dependencies, violates these rules, there's often no warning or error; instead the output JS will just be wrong. Usually it's immediately and unambiguously wrong, with a runtime exception as the application starts, but every once in a while it's wrong in a very subtle way that requires careful debugging.
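To make that concrete, here's an invented example of the classic failure mode: under ADVANCED optimizations, Closure renames dot-accessed properties but leaves string literals alone, so code that mixes the two styles compiles cleanly and then misbehaves at runtime.

```js
// Invented example of code that ADVANCED optimizations silently break.
const config = { retryCount: 3 };

function getOption(name) {
  return config[name];          // string-keyed access: 'retryCount' is not renamed
}

console.log(config.retryCount);        // dot access: renamed together with the definition (e.g. config.a)
console.log(getOption('retryCount'));  // undefined in the compiled output; no warning, no error
```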
For various reasons, I end up debugging a disproportionate number of these cases, and I gotta say that the combination of unsafe optimizations plus long slow compile times can make for some unfun times.
IMO the best option is to try very hard not to have a Closure-Compiler-shaped problem. Keep your client-side code small. If you can't do that, compose it out of small, largely independent, lazily loadable modules. And if you can't do _that_, come to terms with it as early as possible and start using Closure Compiler from the beginning.
Because as much trouble as Closure Compiler can be, there's a scale of application where nothing else comes close. It has top tier dead code elimination (not just tree shaking, but proper DCE), can split an application into lazily loadable chunks, will move code between modules, and will perform pervasive type-aware optimizations that AFAIK no other minifier comes close to.
Of course developers prefer faster build times. But what do the end-users who you write code for prefer? (Especially in a typical CI/CD environment where developers often don't need to monitor and wait for builds to finish)
2% sounds small. And it is, if your traffic is small. It's not small when you have millions of users.
It's tiny, and usually negligible in context with all the other data that needs to be transferred, even with millions of users. Millions of users might be the situation where I'd start thinking of integrating slower builds as an alternative, provided they can seamlessly live side-by-side with the fast build tools.
Why would end users care if there are a million other people getting a slightly larger download? Accounts payable might care, but they aren't end users.
A 2% file size reduction is probably an over-optimization for a new startup searching for their first users.
But for an established product with substantial traffic, swapping out a JS minifier for one that achieves even a single-digit % compression improvement seems worthwhile to me – if the only downside is adding a few extra seconds to build time.
End users prefer that we ship the features and squash the bugs they care about, which can be done faster with shorter build cycles. Our webpack prod build, which we have to run because minifying sometimes breaks things, takes 6 minutes. It's the longest build step we have.
Working on codebases where compilation times can often be 30-60s, a 10x improvement makes a massive difference to development flow and is definitely worth a 2% regression in size. It's the difference between having to multi-task while compiling and not.
Of course you could use esbuild for dev and terser for prod, but maintaining two toolchains may be a support headache.
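For what it's worth, the split doesn't have to mean two separate pipelines. A minimal sketch, assuming esbuild does the bundling either way and terser only post-processes the production output (the entry point, output path, and env check are my own placeholders):

```js
// Sketch only: bundle with esbuild always; let esbuild minify dev builds,
// and hand the unminified prod bundle to terser for the final squeeze.
const fs = require('fs');
const esbuild = require('esbuild');
const { minify } = require('terser');

const prod = process.env.NODE_ENV === 'production';

async function build() {
  await esbuild.build({
    entryPoints: ['src/index.js'],  // placeholder entry point
    bundle: true,
    minify: !prod,                  // quick esbuild minify outside of prod
    outfile: 'dist/bundle.js',      // placeholder output path
  });

  if (prod) {
    const code = fs.readFileSync('dist/bundle.js', 'utf8');
    const result = await minify(code, { compress: true, mangle: true });
    fs.writeFileSync('dist/bundle.js', result.code);
  }
}

build().catch((err) => {
  console.error(err);
  process.exit(1);
});
```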
A 2% difference is pretty small, though. Small enough that the top four or so minifiers appeared to have very similar performance; which came out ahead just depended on which codebase you were using. If you're picking a minifier, I'd definitely go with esbuild given that it falls in that 'top of the pack' group and is vastly faster.
If someone were only interested in the smallest possible minified size, the appropriate solution would be to use all of these minifiers every time and choose the smallest result.
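A toy sketch of that idea, shown with just esbuild and terser for brevity (the other minifiers would be added the same way, and the winner is picked purely by output length):

```js
// Toy "run them all, keep the smallest" helper; an illustration of the
// brute-force approach, not a recommendation.
const esbuild = require('esbuild');
const { minify } = require('terser');

async function smallest(code) {
  const candidates = await Promise.all([
    esbuild.transform(code, { minify: true }).then((r) => r.code),
    minify(code).then((r) => r.code),
    // ...other minifiers would slot in here
  ]);
  return candidates.reduce((best, c) => (c.length < best.length ? c : best));
}
```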
I'm very bullish on esbuild. But in testing with my own app, it produces output 8-12% bigger than the same files processed with terser. I would bet that it's a small number of missing optimizations that would benefit codebases like mine (JSX-heavy).
My biggest frustration is that I am not nearly as skilled in Go as I am in other languages, and I don't have the confidence jumping into a large existing Go codebase that I would have with JS/TS/etc. I have the same issue with Flow, which is written in OCaml.
I'll warn that the two big perf claimants (esbuild and SWC) have absolutely barfed on perfectly legal syntax in a project I contribute to. SWC didn't even say what the problem was; esbuild complained about spreading props into a JSX expression even where other props were being spread into other JSX expressions in the same module. I'm not disparaging either effort, but they have some rough edges.