Often overlooked things when discussing esbuild here:
1. It's not just a faster replacement for a single %tool_name% in your build chain: for the vast majority of cases, it's the whole "chain" in a single cli command if you're doing it right.
That is, you don't just stick it inside, say, webpack as a faster replacement for babel (although you can). No, you look carefully through your webpack configs and their myriad plugins, ask yourself whether you really need to inline CSS in JSX in PNG while simultaneously optimizing it for IE 4.0, realize you don't, throw out the whole thing, and use esbuild instead.
I have two 50K+ LOC projects using esbuild, and I would use it even if it was slower than webpack/babel/tsc simply not to worry about the build chain breaking due to an update to some obscure dependency or plugin.
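To make the "whole chain in a single CLI command" point concrete, here is a sketch of what that one command can look like; the entry point, target, and output path are illustrative, not taken from the parent's projects:

```shell
# Transpile TS/JSX, resolve and bundle imports, minify, and emit a source
# map in a single invocation: no babel, no loaders, no plugin config.
esbuild src/app.tsx --bundle --minify --sourcemap \
  --target=es2017 --outfile=dist/app.js
```

All of these are documented esbuild flags; for most projects that line replaces what used to be a webpack config plus a handful of loaders.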
2. It is fast because it's written from scratch to do a particular set of things and do them fast, not just because it's Go and parallelized.
If you look at the commit log you will notice a lot of performance tweaks. If you look into the issues, you will find a lot of requests for additional features rejected, often due to possible negative performance impact.
3. The most impressive part about esbuild development is not just that it's one guy writing it: it is the level of support and documentation he manages to provide alongside.
The release notes alone are a good course in the nitty-gritty details of the web ecosystem: all the addressed edge cases are explained in detail. To top it all off, all opened issues, no matter how uninformed they seem, get a courteous response.
Remember when all of these points applied to webpack, when it was the “single simple fast tool” to replace everyone’s grunt scripts?
It seems there’s a feature treadmill at work here where projects inexorably bloat as they get popular. But we tried “compose tools in the Unix way” with grunt too, and that led to spaghetti scripts, unique to each project, that were hard to reason about. I wonder if there is a middle way that can prevent the tool from giving in to the pressure to add features.
Something a lot of people don’t appreciate is that the past ten years have been an anomaly for JavaScript. They’ve been very tumultuous because there was a ton of evolving that suddenly needed to happen. And I think we’re nearing the end.
Babel was necessary because core syntaxes were changing so fast. Webpack’s sprawling nature was needed because there were so many alternate dialects, module systems to support, etc. Esbuild is only possible because we’ve generally converged on TypeScript, JSX, ES modules, etc. It knows what you probably were going to configure webpack to do anyway, so it can just do it.
So I wouldn’t call it a “treadmill”, I’d call it growing pains
This is a great point. A "Cambrian explosion" the likes of which the JS ecosystem experienced over the last 10-15 years will slow down eventually. Curious to hear whether other folks agree or not.
To me, it feels like the same effect as Ruby 10-15 years ago. Lots of chaos as good patterns were discovered (and subsequently winnowed down to the preferred ones), and then things stabilized. It isn't sexy anymore, but to this day I've yet to discover a better tool for blasting out features and making changes at a rapid pace than Ruby (and Rails on the web side).
Yeah, it’s a great point. I always assumed that there was so much tool churning just because of the nature of the front-end itself as a target for building software.
However I think the “Cambrian explosion” point here makes a lot of sense.
I take your point, but I think you could fairly say that JS has been going through growing pains since its inception 25 years ago. It's always been a fast-moving, inconsistently-implemented language. (E.g. Netscape / Mozilla vs. IE support for browser features).
Maybe things are going to calm down a bit? I could believe it. But I just don't see the churn stopping. The browser-as-OS is going to keep getting new features, which JS must bind to. And some users are going to use old browsers that don't support them. So the runtime is inexorably fragmented, vs. say a server-side environment where you mostly write code for a well-known runtime that you get to define.
And what about when everybody starts using wasm to bring other languages to the web? Another explosion of tooling and changes to how we do web development is just around the corner.
Regardless of whether we're coming to the end of it, I think it's more specific than just "growing pains" though - it's not just that we're fixing issues, it's that we're repeatedly throwing away old tools in favor of smaller, more-focused new tools, that then in turn grow in scope over time.
I'm not even mad at all this; I think it's a fundamental part of how software languages and communities make progress; there's no real path for a language/tool/framework to get _smaller_, so they either increase in scope or stay the same, with the latter being quite rare, and both options giving a path for some other thing to supersede them.
I just think it's most pronounced in the JS ecosystem, and find it amusing that we've come full circle on so many of these points, again - although I believe with genuine improvements on the previous iterations. (So more like a spiral; the same location in some dimensions, but with a higher elevation.)
I would say this is a far-off possibility. The reason I say that is that the need to write WASM code rarely surfaces for the average SPA, and it really seems more applicable to games or sites with heavy processing needs.
For better or worse, it's going to be JS/TS for a long while yet.
Perhaps I used the wrong phrase with “just around the corner”. My prediction would be that we’ll see major SPAs in wasm in ~3-5 yrs.
This is not imminent from a developer’s perspective, but I think it is close on the “language evolution” timescale. As in, soon enough that I’m not sure we will have a long period where JS stabilizes before wasm revolutionizes it again.
Not a high-confidence prediction either way though.
I avoided this fact for years, running from JS/TS toward "better languages". After a stint with Haskell I've decided the grass isn't that much greener, so I'm just accepting that JS is what it is: one of the world's most popular languages, which can run in most places and is therefore worth learning.
ESBuild’s author and docs[1] are quite clear about its future scope:
> [… a list of features that are already done…]
> After that point, I will consider esbuild to be relatively complete. I'm planning for esbuild to reach a mostly stable state and then stop accumulating more features. This will involve saying "no" to requests for adding major features to esbuild itself. I don't think esbuild should become an all-in-one solution for all frontend needs. In particular, I want to avoid the pain and problems of the "webpack config" model where the underlying tool is too flexible and usability suffers.
That said, now quoting you…
> But we tried “compose tools in the Unix way” with grunt too, and that led to spaghetti scripts, unique to each project, that were hard to reason about.
In this respect, ESBuild’s firm stance has a major strength, and a major weakness:
- Strength: the Unix philosophy is easy to achieve with esbuild-plugin-pipe[2]. There's just one simple plugin API, and everything follows that same format
- Weakness: since ESBuild doesn't expose its AST, plugins are often slow, which can undermine the benefits of the tool
"Weakness: since ESBuild doesn’t expose its AST, plugins are often slow which can undermine the benefits of the tool"
I think by the time we even care about the performance of a plugin being "golang" fast, we will have rome built in rust.
Rome will be the full kit and caboodle at native speeds (bundler, tree shaking compiler, linter, type checker, etc...), whereas esbuild will be the rome-lite for when you just want to bundle some code and have it done.
> I think by the time we even care about the performance of a plugin being "golang" fast
I certainly care about it being faster than estree-*/babel. There are a lot of static analysis and optimization things I’d like to do that are slow enough in JS to negate a lot of ESBuild’s advantage.
> we will have rome built in rust.
> Rome will be the full kit and caboodle at native speeds (bundler, tree shaking compiler, linter, type checker, etc...), whereas esbuild will be the rome-lite for when you just want to bundle some code and have it done.
Is Rome-Rust doing something special to optimize arbitrary AST-based plugins written in JS? I hadn’t heard that but I’d be interested if it is.
> Remember when all of these points applied to webpack, when it was the “single simple fast tool” to replace everyone’s grunt scripts?
Don't recall perf being the win with Webpack. Webpack was "Web-Pack" because it allowed you to use CommonJS to co-locate external assets (SVGs, CSS, etc.) with code, and then produce distinct bundles from that dependency graph. Grunt had no clue what your source dep graph looked like; you had to build your own pipeline (specifying dependent tasks for each task). Of course, now everybody has their own Webpack config that alters some input or output, but it's a considerably more powerful tool than Grunt ever was.
I really don't, webpack config was a cluster&€@ from day 1.
Also, webpack's goal was always to do everything and the kitchen sink, much different from esbuild.
> 3. The most impressive part about esbuild development is not just that it's one guy writing it: it is the level of support and documentation he manages to provide alongside.
And the one guy writing it is Evan Wallace, co-founder and CTO of Figma. I don't know how he has the time!
> Instead of attempting to get one of [HTML/SVG/JS Canvas] to work, we implemented everything from scratch using WebGL. Our renderer is a highly-optimized tile-based engine with support for masking, blurring, dithered gradients, blend modes, nested layer opacity, and more. All rendering is done on the GPU and is fully anti-aliased. Internally our code looks a lot like a browser inside a browser; we have our own DOM, our own compositor, our own text layout engine, and we’re thinking about adding a render tree just like the one browsers use to render HTML.
To most people, esbuild would be a full-time job. Based on the above, it seems that to Evan it's a fraction of the work he did in Figma's early days all at once!
This seems like a pet project. The reason I say that is that if it were built for work, it would likely be released by Figma. Instead, the project is from Evan himself.
I'm not as sure about that as you are. Not all companies are like Google, "stealing" credit for work done by employees, even work done on company time but unrelated to the core business. Plenty of companies let employees work on open source and still remain the owners of the software they produce.
> I have two 50K+ LOC projects using esbuild, and I would use it even if it was slower than webpack/babel/tsc simply not to worry about the build chain breaking due to an update to some obscure dependency or plugin.
This was the reason that Phoenix 1.6 switched away from webpack to esbuild, apparently half the reported issues were webpack related!
This is exactly why the Elixir Phoenix team switched from Webpack to esbuild as the new default. They were spending more time responding to Webpack issues than Phoenix issues and it was an endless time sink.
Number 2 is a common pattern. At first developers are exploring things and changing approaches from time to time, so the most flexible solution wins (express, redux, webpack). Then they understand exactly what they need, so they can make a new tool with focus on particular set of use cases and features from the start.
Yeah, we actually went through this with Redux itself.
When Redux was first released in 2015, it was deliberately designed to be minimal and extensible. Other Flux libraries at the time had various forms of async handling built in (support for dispatching actions via promises, etc). Dan and Andrew wanted to avoid locking users in to any single form of async handling [0], so the middleware API was designed to let users pick their preferred async approach and syntax.
Similarly, the store setup process was entirely left up to users to add whatever middleware, enhancers, and other configuration users felt was appropriate. The docs were also always unopinionated about preferred file structures, how to organize logic, etc.
Over time, it became very clear that users _wanted_ more specific guidance about how to structure their apps, and wanted Redux itself to build in default setup and configuration.
As a result, we wrote a "Style Guide" docs page [1] that lists our recommended best practices, and created our official Redux Toolkit package [2] as the standard way to write Redux logic. RTK was designed to solve the most common problems and use cases we saw in the ecosystem [3], including store setup, defining reducers, immutable update logic, and even creating entire "slices" of state at once.
RTK has been extremely successful - we routinely get users telling us how much they enjoy using RTK [4], even if they disliked "vanilla Redux" previously.
We also recently released a new "RTK Query" API [5] [6] in RTK 1.6, which is a built-in data fetching and caching API inspired by libraries like Apollo and React Query. Again, similar theme - we looked at what users were doing and what pain points they were running into, and built an official API to help address those use cases.
How is the integration with things like a dev server and tools present in create-react-app like react-fast-refresh?
Also, in case of working on an Electron project: How well does it handle main/render/preload compile targets and handling of native modules and linking?
Electron-forge is, for instance, the recommended toolchain for building Electron apps and the Webpack stuff is a particular pain in the ass.
You can use vite for that, which is an enhanced wrapper for esbuild. I use it on all of my projects except the ones I'm forced to use webpack on for legacy reasons.
I guess it depends on your project and workflow preferences. Personally, I'm not a big fan of HMR, and the bundling times are negligible even without "pre-bundling" npm dependencies (as Vite puts it), so I see little reason for Vite.
Has anyone given you crap for your username? I know you had the username long before MAGA became a popular word in the american lexicon, but I'm still curious!
Yeah, the funny part is that I have never even been to the US of A, let alone having anything in common with said movement. But no, to be honest, I don't recall anyone mentioning it here on HN or GitHub. I don't use other social media, though; I guess it would be more of an issue there. However, I was wrongly suspended on Twitter once, despite not posting (or even "liking") anything in the whole decade of being there. I just used the account for OAuth subscription; one day it stopped working, and I had to write a letter and wait for a manual "unsuspension".
I chose my now "nom de plume" way back in the zeros for its seeming originality and simplicity compared to my rather common and unwieldy "real" Polish name.
Work on esbuild started at the start of 2020. It is primarily authored and maintained by Evan Wallace, who, in addition to making this tremendous contribution to the JavaScript ecosystem, is the CTO and co-founder of Figma. Incredible output.
I came here to say this. The man authored in the neighborhood of 100K LOC in a year, just on this. There's a living 10x dev; it's not a myth. What is ridiculous is to think someone can 10x any normal developer; it's more like the difference between the top few percent and the bottom 10-20%. Evan Wallace is a beast, no doubt.
While he is certainly an excellent developer with great productivity to boot, LOC is an obtuse metric. For example there is a package-lock.json commit which is almost 20K lines. Otherwise I totally agree.
Agree, if anything it should be the reverse: fewer LOC produced means a more efficient developer. But that's also a shit metric, as then people start cramming as much complexity into every line as possible.
It would be best if we could actually measure "complexity" in an objective manner.
I really wish people would stop idolising so called '10x' developers.
Anyone that is comfortable and familiar with their toolset (e.g. go, .NET, Java, C++) and has a deep understanding of a problem (and has likely solved it once already), can churn out code far, far, faster than an onlooker.
I’ve found that in any tech company, while there are many people that write good code and do a great job, there are always a handful (even at a place like Apple) that truly push the industry forward and in certain directions, partly because of how they see years ahead, partly because they are supremely talented, and partly because they attract other really good talent just to work with them.
And we know many of their names. Folks like Brian Cantrill, Yehuda Katz, Fabrice Bellard, John Carmack, Bret Taylor.
They aren’t just good programmers. They’re constantly dwelling in uncharted territory.
I’m not advocating worshipping them, just stating that their talent and output is hell of a lot more than even 10x.
I understand what you're saying, but I'd still argue that they're supremely familiar with their toolchain, and have a deep understanding of the problems they're trying to solve. My comment said they've usually solved the problem at least once already, but I don't discount people solving new problems.
Perhaps that is what makes them '10x developers': a great familiarity with their toolset and a deep understanding of the problems they are solving. In this sense I don't think it's bad to idolize them, as familiarity and understanding are achievable by many people, and they could all become a '10x developer'.
Baloney. I don't think it's "black magic", but as a software engineer with some deep familiarity with my tool chain I'm not going to pretend that I could ever be as productive as some of these folks. It's like when I see another post about a Fabrice Bellard project, and I think "he gets done in a year what it would take me about a career".
I don't understand this fetishization of "everyone has nearly equal ability" in the face of tons of overwhelming evidence to the contrary.
I don't believe everyone has the same ability. What I do believe is that performance is explainable and there's a practical upper limit to raw output.
I made a side project that I spent 1-2 years building on and off, and I had people saying "wow, I could never build that". That's bullshit. If they were of similar intelligence and put in a similar amount of time and effort, I'm sure they'd be able to replicate what I did.
The luxury of being able to spend ample amounts of time on a project or idea is also massively overlooked.
That's not all it is. Some people are just better than others. No matter how hard everyone tries to be the best football or basketball player, only an extremely small set of people even have the chance to make it. There are physical traits they'll never have, and the mental fortitude to use those traits. You can't grow yourself to 7 ft tall and you never will be able to. What makes you think the brain is any different?
And I really wish people would stop denying that there can be a massive difference in productivity from one individual to the next. It doesn't really matter what the theoretical limits are. What matters is the people you have and are hiring, and whether or not they are moving the needle.
Maybe we can put aside whether or not there is No True 10x Developer. But there are certainly 0.1x developers, and even -1x developers.
> has a deep understanding of a problem (and has likely solved it once already), can churn out code far, far, faster
Is this a bad thing to discourage? Perhaps one way to increase your output as a developer is to narrow your focus (rather than jumping on the latest framework or build system which require constantly re-learning new solutions to the same problem)
No. It's what I encourage people in my team to do all the time. In fact it's something I was also encouraged to do.
The full quote was:
"Become an expert in at least one thing in your job or preferably career, and do it while you have the time" (e.g. when you're junior and expectations are low, or not a manager).
I don't think the parent's point is just about LOC, but the quality of those LOC.
There are people producing JS bundlers who are comfortable and familiar with their toolsets, along with a deep understanding of the problem. And they don't even come close to what is being described here.
Being proficient in your tech stack helps your output, sure, but it's only one part (and a small one, IMO) of the equation for creating something like esbuild. The author of esbuild surely has strong knowledge of computer architecture, data structures and algorithms, operating systems, and some math.
That's a bit of a strawman. My full comment is that being familiar with your language of choice and having a deep understanding of a problem means you can produce a solution faster than someone who doesn't. I used the phrase 'churn out'; I could have simply said 'produce'.
I've worked in the industry for almost 30 years, and I've worked with a lot of people in that time. Those that you might qualify as '10x', all have had both of the qualities I mention.
I would not expect any of those people to switch languages and fields and still be a '10x' developer.
I would. Languages aren’t as important as you would think. Certain people can pick up the new idioms quickly and run with it.
Look at someone like Yehuda Katz. Substantial contributions in ruby (merb, bundler), JS (jQuery core team, created Ember.js), and rust (created cargo).
Not trying to elevate him or make him uncomfortable (he's also a really humble dude), just saying there are examples of polyglots who make substantial contributions.
But isn't what you said a bit of a straw man too? Going back to the music comparison: no one would expect an excellent guitarist to take up playing piano and still be a virtuoso.
And yet… we do recognize that some people are impressively better than others at playing piano, running yards, fighting, and all sorts of other narrowly specific tasks.
It's just that the more creative the task, the harder it is to measure how much better someone is.
> As a good CTO you shouldn't have anything to do.
Is this for real? I mean, yeah, I don't think a CTO should be debugging build scripts, but hiring a great team, mentoring, aligning teams with a common technical vision, meeting with other company leaders to ensure the technical direction meets the needs of the business is an immense amount of work.
I don't understand this perspective. Everyone only has so many hours in a day, and there's only so fast you can work.
If he's writing esbuild that is taking time away from being the CTO of Figma. Either he's working a shit ton, one of the things (Esbuild or Figma) is being somewhat neglected, or his output is actually not as high as it looks.
As someone that ran a company while working on side open source projects, don't underestimate the therapeutic value of writing code. (As a CEO, my job had almost no coding, and working on my opensource projects made me happy, and restored a lot of my energy.)
CTO and lead developer on a focused project are very different jobs - my guess would be he relaxes from the very strategy and soft-skill heavy day job by diving into a challenging problem that keeps his dev chops up and lets him focus on a finite problem.
No idea on Evan Wallace's perspective, but there is a possibility that it isn't seen as work. It would be like me solving sudoku, or some of my friends building LEGO or solving extremely hard jigsaw puzzles.
To echo what others have said here, my role as CTO and now CEO has gone from 95% coding to about 5% these days. So some nights I code on ideas and things that have been swirling in my head, just because it’s nice to just quietly write code and solve a finite (but possibly difficult) problem without interruptions. It actually IS therapeutic.
Bun [1] is a JS bundler based on esbuild’s source, but written in Zig. And it is about 3x faster than esbuild. I think its author Jarred is on HN as well.
Probably worth a submission on its own but I am just waiting till it is fully open source.
Edit: (I deleted those stats, since it may not be a fair comparison and it was probably not meant to be a fair benchmark in the first place. The details are still in the linked tweets. I do not know the author, nor am I in any way affiliated with Bun.)
I am also wondering how many of those optimisations could be applied to esbuild, since Rails 7 and Phoenix 1.6 will be using esbuild and not Webpack.
I’m pretty sure ESBuild’s creator has agreed that Bun’s performance claims are probably correct, and that there’s still more room for optimization of/beyond both.
I've been trying to figure out how to build JS projects with the evolving tools (grunt => gulp => webpack => parcel => back to webpack) for years. I stumbled on esbuild and thought, why not. Within about 15 minutes, I had solved pretty much all our build issues. Admittedly, our use case was simple: we needed to transpile React-flavored TS to an npm package. In about 6 lines of code, I had a working bundle. There were no .esbuildrc or esbuild.config.js files, no babel dependencies, and no order of build operations to consider. The tool just worked, and it was screaming fast. My first impression was that it _didn't_ work, because the process closed in my terminal so quickly.
After my first experiment with it, I rewrote our hundreds-of-lines Cloud Functions deploy script in about 15 lines (most of which is configuration options on the `build()` method).
I'm curious to explore the tool more. Kudos and thanks to the author for an unbelievably useful contribution.
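For a sense of scale, a working library bundle really can fit in a handful of lines with esbuild's `build()` API. This sketch assumes a React-flavored TS entry point and is not the poster's actual script; entry, externals, and output path are all made up:

```javascript
// build.mjs — illustrative, not the poster's actual script
import { build } from 'esbuild'

await build({
  entryPoints: ['src/index.tsx'],
  bundle: true,
  format: 'esm',                     // ship an ES module to npm
  external: ['react', 'react-dom'],  // let consumers provide React
  sourcemap: true,
  outfile: 'dist/index.js',
})
```

Every option above is part of esbuild's documented build API; there's no separate config file, and the script runs with plain `node build.mjs`.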
ESbuild is getting fantastic traction. It’s the default in Phoenix from 1.6 and comes as a default option in the current alpha of Rails 7, which you can get with a simple
rails new your_app -j esbuild
The only sort of issue I've had with it so far is that you can't use it with Inertia.js[1], as it does not support dynamic imports out of the box. Although I'm hesitant to call it an issue if it's not in the scope of the project. Perhaps there are plugins I can use.
Esbuild w/rails 7 is nice, but if you’re using rails, check out vite_ruby [1]. I used it in a side project and it comes with plugins for views HMR + all the good stuff that comes built into vite.
Yes 100% - I’m actually using Vite Ruby in a project as I really wanted to use Inertia + React and that was by far the easiest way to get everything up and running.
I’d go so far as to say I wish -j vite was an option in js-bundling :)
Does it not support dynamic imports at all, or does it just not support “dynamic dynamic imports” i.e. dynamic imports where the module path is not constant?
If it’s the latter, you could have your Inertia page resolver be a giant switch statement of every possible page, where each case is a dynamic import call with a constant module name.
Kind of a pain but I think I’d prefer that if it meant I never had to write a webpack config again.
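The "giant switch" workaround could look roughly like this; the page names and paths are made up. Because every `import()` receives a constant string literal, esbuild can see each module at build time and code-split accordingly:

```typescript
// Hypothetical Inertia page resolver using only constant-path dynamic imports
function resolvePage(name: string): Promise<unknown> {
  switch (name) {
    case 'Dashboard':
      return import('./Pages/Dashboard')
    case 'Users/Index':
      return import('./Pages/Users/Index')
    case 'Users/Show':
      return import('./Pages/Users/Show')
    default:
      return Promise.reject(new Error(`Unknown page: ${name}`))
  }
}
```

It is boilerplate-heavy, but it could presumably be generated by a small build-time script that globs the Pages directory.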
- It's written in Go and compiles to native code. [...] While esbuild is busy parsing your JavaScript, node is busy parsing your bundler's JavaScript. By the time node has finished parsing your bundler's code, esbuild might have already exited and your bundler hasn't even started bundling yet. [...] Go is designed from the core for parallelism while JavaScript is not.
Even in single-threaded mode it's fast. I think the main idea is that it creates an AST only a couple of times and then caches it so the AST can be reused. Webpack, on the other hand, gets engulfed by its plugins, which often re-create the AST multiple times.
We recently switched a few of our projects from Webpack, and the difference is incredible. Running a watch with this is practically instantaneous compared to our previous setup. I've been recommending it to all my colleagues, and we're replacing Webpack slowly but surely.
The main draw for me is the simplicity of the config, too. Webpack config (even using things like Symfony's Encore) is pretty convoluted and confusing to track. This, at least in my experience, has greater readability and is simpler to understand.
I cannot count the days I've spent on webpack config breakage over the last few years at work. It's never really been good at all, either. The gulp setup we had before was faster, worked better, and didn't break once a month. Webpack really is just a pet project of the idiocratic React community. Facebook not only screws over your personal data, but your dev workflow too!
esbuild is fast, but it has a lot of places where you have to figure things out yourself, and it gets in the way of how you're used to doing things.
1. dev server: you have to write a bit of code to get a server acting like webpack-dev-server
2. scss: you need to install a plugin, and the plugin can't be used from the command line, so you have to write a bit of JavaScript
3. globals: if you use `process.env`, you now need to inject it in your build script
4. node packages: if a package uses node built-ins, you have to define things like `fs`/`stream` in package.json
Very quickly, it gets in the way of how you do things.
However, once you get past that baseline, the cost is constant; the complexity just stops right there and doesn't keep adding up.
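For reference, points 1 and 3 each come down to a few lines against esbuild's JS API (as it looked around v0.12, when `serve()` was the dev-server entry point); the paths, port, and env value here are assumptions:

```javascript
// dev.mjs — illustrative dev server plus process.env injection
import { serve } from 'esbuild'

const server = await serve(
  { servedir: 'public', port: 8000 },  // point 1: a minimal static dev server
  {
    entryPoints: ['src/app.ts'],
    bundle: true,
    outdir: 'public/js',
    define: {
      // point 3: define values are raw JS expressions, hence the nested quotes
      'process.env.NODE_ENV': '"development"',
    },
  },
)
console.log(`serving on http://localhost:${server.port}`)
```

Not much code, but as the parent says, it is code you have to write and own, rather than a config preset you can copy.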
This is exactly what I found when I tried esbuild a few months ago. I gave up and went with Parcel 2 at the time, as I found it easier to get going (although there were teething problems with Parcel being in beta at the time).
How is the SCSS performance? I've tried just about every trick in the book in an attempt to get our bootstrap-based SCSS projects to compile faster, and I'm at my wits end with it.
See also SWC, something similar to esbuild but written in Rust. Next.js uses SWC, as does Deno.
Rome is also being rewritten in Rust; it's more of a complete set of packages that subsumes Webpack, Babel, and all the other parts of the JS/TS development experience.
The announcements from Next.js have been really confusing. I don't think they are using SWC yet; they are just working on it. The source of the confusion is that they write about the progress in the release notes, making it look like they are already using it.
I'm using this to compile TypeScript Lambda functions for AWS with great success.
Combined with CDK and its NodejsFunction, you can even deploy/compile without local Docker.
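Without CDK, the equivalent standalone command might look like this; the handler path and Node target are assumptions:

```shell
# Bundle a TypeScript Lambda handler into a single Node-compatible file.
# aws-sdk is marked external because the Lambda runtime already provides it.
esbuild src/handler.ts --bundle --platform=node --target=node14 \
  --external:aws-sdk --outfile=dist/handler.js
```

The `dist/handler.js` output can then be zipped and uploaded as-is, with no node_modules directory to package.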
I looked at using esbuild, and I use TypeScript. It was all looking good until I read the docs, which said esbuild doesn't typecheck TypeScript and that you should rely on your IDE to flag errors.
Is that correct, and if so, how is that working out for you in practice? The whole point of TypeScript for me is to have a compiler typecheck my code and block errors at compile time. esbuild not typechecking seemed like a major contradiction to using TypeScript, so I set up a Webpack build using the standard TS compiler.
I've been out of the loop on client-side stuff for a couple of years, so getting started was a bit of a rabbit hole. Grunt/Gulp have gone, and now Webpack seems common, with a growing fanbase for esbuild because of its speed.
Consider TypeScript like a linter. ESBuild doesn't run ESLint for you either - you can run it separately, or in parallel.
This means that for example, during development, you can see your running code quickly, while your editor runs tsc to highlight type checking errors. And in your build system, you can produce a production bundle to test while in parallel checking for type errors.
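In practice, that split is often just two independent npm scripts you can run side by side (the script names and paths here are made up):

```json
{
  "scripts": {
    "build": "esbuild src/index.ts --bundle --outfile=dist/index.js",
    "typecheck": "tsc --noEmit"
  }
}
```

A watcher or CI job can run both in parallel: only `build` produces what ships to the browser, while `typecheck` reports type errors without ever blocking the bundle.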
First, forgive my ignorance of the JS/TS and bundler ecosystems.
> while in parallel checking for type errors.
Why are you suggesting this not be done during development? Is it because, while esbuild is fast and runs in parallel, it's still only as fast as the slowest parallelized task, and in this case the slowest task is checking for type errors? And I assume checking for type errors is the slowest because it has to invoke an external tool, tsc?
Would it not make sense to have two development bundlers then? One for getting your code up and running quickly, and a second that outputs a dummy build artifact, but allows a more thorough production-like build that includes type checking (or other long running activities)? That way you get all the verification you would like, but don't pay a price on waiting for your code to deploy?
You can absolutely run it during development! In fact, many people will use an editor that automatically does this and highlights any potential errors - which means you don't need your compiler to display them (or even block compiling) as well.
And yes, type checking is relatively slow, and also strictly extra work, since it's not required to make your code runnable.
You'll often see that people do still do a more thorough production build in their CI systems, but not necessarily by using a bundler that includes type checking; rather, they just run type checking and bundling as separate tasks. That way, you do indeed get all the verification you would like without having to delay deployment.
(To make it somewhat more confusing, a project like Vite does use a fast bundler (ESBuild) during development and a more thorough one for production (Rollup), but that's independent of type checking. It's more about the latter doing more optimisations, and the former merely making the code runnable by the browser.)
Not blocking on typecheck failures is one of my favorite features of TypeScript -- you can rapidly test/debug/iterate on code with "broken" or incomplete types while in the prototyping stages of feature development.
This is pretty common if you're looking for build performance. E.g. typecheck as a lint step with tsc, build with babel removing your typescript (it doesn't typecheck either), etc. It's true that tsc on its own can replace a build step for some folks, but you'll quickly find it limiting both in the options to rollup or combine files and in its ability to be extended with plugins (unlike Babel), etc. On the other hand, tsc and its language server is a first-class type-checker when run with no-emit. ;-) I haven't played as much with esbuild yet but it's on my todo list.
Only if a project is pretty small.
A change in one place, especially in reusable code, might break code in multiple other places, and the IDE will not recompile the whole project on every change: it only watches your currently opened files and, maybe, some files in opened folders.
By "supporting TypeScript" it means esbuild parses TypeScript code and won't fail on it. That is all a bundler promises; a bundler doesn't have to do the typechecking. It is actually conventional to set things up that way for any bundler. At my job we have webpack bundling TypeScript with babel in the "build" step. We could've used ts-loader, but we want hot reload to be fast. Then in the "check" step we have tsc, the linter, and unit tests. Those run on CI.
You're likely literally doing the same thing with Webpack. There is only one complete implementation of the TS compiler in existence, so you have to use that to typecheck; it doesn't matter what bundler tool is used. If you need the type definitions as part of the output (e.g. you are distributing a library), then you have to involve the compiler to construct the output definition files, but for the code itself it doesn't really matter, because you're just generating JS. The TS compiler is very slow (and in any case is not designed to produce code bundles - it just dumbly translates all the TS to JS), so the standard way to speed this up is to use a module bundling tool that ignores the types and compiles the code as JS, and have the TS compiler set not to emit any files itself.
Nothing in the above precludes using the compiler to typecheck the code; that's the primary use case, and what the sibling comment means by thinking of it as a linter: if typechecking fails, don't build.
`eslint && tsc --noEmit && esbuild`, or something like that - just simple process chaining. It also means you can build something you know is going to be illegal to the TypeScript compiler when you want to very quickly test an assumption, for example.
I don't think rejecting a comparison with an incomplete nightly build is unfair. It looks like the feature was added a little after this issue was closed, because the blocker was fixed.
Parcel 2 was released as stable yesterday, though, and Evan has already updated the comparison.
You know you are getting old when you watch the arrival of the fourth JavaScript build tool of your career. I still remember when everyone was waving goodbye to Gulp in favour of Webpack. Webpack was going to save us all from the hell of massive convoluted gulp.js files. Fast forward five years and it's the same mess it was supposed to avoid. Slow, bloated and confusing.
I just switched to esbuild on our main project and the build time went from 7 minutes on CI to 1 second. Kinda stupid really. Anyway, here's to the future, let's hope it works out this time!
I always found it amusing going from Grunt (huge, unwieldy object-based config) to Gulp (config is code, pipe transformations together in a simple unix-y way) to being told webpack is the future (huge, unwieldy config objects again). It definitely felt like a step backwards, although I appreciate the power webpack gives to people building boilerplates like Create React App. I rarely use Gulp anymore, but I still appreciate its UX/DX.
As someone who liked Gulp (SOOO much more than Grunt/Browserify) the transition from v3 to v4 felt really, really bad. Lots of unknowns around release timing, poor community support for the new version, "beta" tags sticking around forever long after it should have been released, maintainer turnover, etc...
It was enough at the time to get me to jump to Webpack, since it seemed like gulp was dying, and if I had to pick a configuration based tool it wasn't going to be Grunt.
Now I'm actually swinging back towards a combination of Gulp and ESBuild on my personal projects. I honestly debated trying a mix of Make and ESBuild (Since Gulp still feels pretty dead, and hasn't had a real release in 2+ years), but Make has enough subtle gotchas that I stuck with something familiar.
You're right about the community being mostly dead.
There are a lot of bugs and performance issues in the underlying libraries, and Gulp's developers have not been able to get the upstream updates integrated.
I just redid our Grunt stuff in Gulp. It was painful - difficult to find up-to-date information, and the result was not that great. I expected a much greater SCSS compilation performance increase compared to FS-based tooling. Maybe I'm doing something wrong.
I'd switch to something like ESBuild and leave SCSS for dead, but it's not an option for me yet :(
I remember when we just served the JavaScript we wrote. Of course we are back to that. You can just write ES6 in as many files as you would like and serve it over HTTP/2 without any webpack/esbuild/babel/etc.
Indeed, the best build chain is no build chain at all. I've been ridiculed at work for not using node, npm, webpack, etc. - but I'm not spending 20% of my time on tooling issues.
I'm interested to know who you know spends 20% of their time on tooling issues... I use node, npm and webpack pretty regularly (although parcel has mostly replaced webpack for me) and, other than setting up some npm scripts and a tsconfig to output the right JS for my node version at the start of a project, I barely interact with them.
I think it's more like 80% of time at the start of any project, and then trickles down to no time, then up to 80% again when there's a new feature/config/incompatibility with tooling.
In larger projects and in orgs, you often have legacy choices that you have to deal with, that you can't remove or spend time replacing.
I would say that's within a believable range of improvement (although on the high end).
We're playing with ESBuild at work, on a TS/React build that takes 45+ seconds to run with webpack cold and 8 seconds for a rebuild. With ESBuild/Gulp, the full gulp watch task refreshes in about 1.2 seconds, of which ESBuild runs for about 0.4 seconds.
So the builds are ~100 times faster with ESBuild, and we're just running it cold every time because it's so fast.
---
It's also really exciting for run-time based compilation. I've been playing around with a server-side React rendering project, and I literally just run esbuild in the controller action in development (some prebuilding for releases) and it's wonderful. Live updates in roughly .6 seconds on average, even for relatively heavy components.
plus, if you're careful with your react code, you can build a react codebase that will actually run if client-side JS is disabled (you can render it all serverside)
Probably not an exaggeration. My build times aren't that long, but I've seen similar speedups in switching to esbuild (our smaller codebases currently build in less than 0.1 seconds with esbuild).
Bundling, and even barreling, have pretty much been solved problems for a while. Right now I feel that unit test frameworks, linters, and type checkers are by far the main bottlenecks in the development workflow.
Full build with Rollup on all of my bundles for my project took around 10 minutes. IDK what my KLoC count is, but it's probably in the 25 to 50K range, with very few dependencies. I had a lot of complexity in my build scripts to try to subdivide related bundles into individual build commands to get the day-to-day rebuilds down to the 1 to 2 minute range. I had to run TypeScript in watch mode separately to emit individual JS files from my TS code for each module, and then only let Rollup bundle the JS code (the available TS plugins were just too slow), so I had tons of garbage files all over everywhere and occasionally they would get out of sync. It was a mess, and it was extremely difficult to explain everything to newcomers.
With ESBuild, everything, all the things, build in 0.25 seconds. Build script has massively reduced complexity, as there's no point in running any command other than "build all". There's just the TS code and the output. I'm still running TypeScript in watch mode separately to get compilation errors on the fly (ESBuild doesn't run the TS compiler itself, it has a custom-built translator that optimistically throws away type information), but I no longer configure it to emit translated code. And did I mention the build script is massively simpler?
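For reference, the "check, but don't emit" arrangement described above is a small tsconfig tweak. A minimal sketch (a real config will have more options):

```json
{
  "compilerOptions": {
    "noEmit": true,
    "strict": true
  }
}
```

With this, `tsc --watch` reports type errors on the fly while esbuild remains the only thing writing output files.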
Esbuild actually made writing frontend bearable for the first time in years. It alone reduced our iterative build times from 45 seconds to something like .5 seconds.
Are there any tools that transform HTML and other files? For example, let's say I have an <img> tag with a src attribute that points to a local image. Can I automatically replace that with a <picture> tag with various formats (jpg, webp, avif) and sizes?
SGML can transform HTML and all other forms of markup, though it's neither limited to that use, nor does it readily bundle image converters/compressors even though that sure has always been a plausible use case for SGML-based pipelines.
I switched to it on all of my personal (TypeScript) projects after upgrading to Phoenix 1.6 (where it is required). I normally have a separate lint step during production builds so its lack of type checking isn't an issue during development as VSCode catches and highlights type errors anyway.
It is crazy fast. It feels like the Turbo Pascal 3 of web development.
I had ignored this space for a while as I didn't see any need to fiddle with it. But there does seem to be quite a lot of movement now on Webpack alternatives. The speed improvements do look very interesting, though I'm not so sure how much of the delays I'm seeing with Webpack/Create React App are from the Typescript checking. I mean Typescript is awesome, but it's also not all that fast in building and type checking.
Vite seems to be one of the more interesting CRA alternatives. Though it uses esbuild only for development, and Rollup for production builds. It'll be interesting to see how this develops, and if the fast bundlers keep catching up in terms of features without getting slower.
Esbuild is amazing, but it’s worth mentioning SWC, which is written in Rust, even faster than Esbuild, and integrated within Deno, NextJS and other leading tools. Overall I am pretty bullish on the Rust/Js/Wasm/Typescript ecosystem.
For complex projects moving to esbuild from say Webpack is not necessarily easy.
Why would things be easy in a complex project? If you've built complexity into your project it seems a little unreasonable to expect someone else's tool to fix that for you. At some point you need to accept that building things that are simple is your own responsibility. But building things that are simple is incredibly hard so don't be too down on yourself if your project is complex. Software complexity is like entropy; it's very easy to add complexity to a system, and very hard to remove it.
It's also worth noting that you probably don't really want a tool to make your complex project appear more simple. That would only hide the complexity from you, but it would still be there. If your project is complex then your best approach to solving that is a lot of documentation, a lot of testing, and a lot of refactoring.
While useful, it should be noted TypeScript ≠ JavaScript and this build tool is for TypeScript. Many of us are using different compile-to-JS languages, or no language at all.
I started with webpack using create react app, I tried parcel, I tried esbuild. I tried parcel 2. I use esbuild. As a sole developer it takes no thought, and it still lets me do odd things pretty easily - like I have a particular process for dealing with odd cases in mdx files. Not a ringing endorsement or an exhaustive analysis. I'm just happy I don't have to spend time thinking about it. :-)
I am currently using Parcel 2, I like it because I don't have to bother with any configuration. From what I've seen, with esbuild you still have to fiddle with configurations and loaders for specific filetypes? Can I just give it an entry point and let it figure out everything (update production HTML file with bundle links, import external images, target last X browser versions, etc) ?
I've been actively moving all my projects from rollup to esbuild where possible. You have to do a bit more plumbing to get a lot of the niceties provided by the rollup/webpack ecosystem, but the resulting simplicity, speed, and size of esbuild make it worth it.
> How many scripts does a site need to make it feel faster when bundled?
It's because people are using these huge frameworks with a lot of bloated code. It's too big, so now devs are forced to do tree shaking and whatnot to trim the fat... front-end JS development has become a madhouse of unnecessary complexity, partly because of node.js as well...
I hope that with the help of Deno, which doesn't suffer from all that cargo culting, front-end development can become a healthier ecosystem...
Yes, some apps are complex, but 99% of front-end UIs aren't.
I just saw an (internal) presentation where the presenter predicted that in the coming months we will see a lot more RxJS being used in ReactJS projects. I cannot wait to see it included on webpages which would be just fine with zero JS.
Unless your site is made of 4 files/modules, you simply must bundle. ESM will download the files in series, as each dependency is discovered. Then compression doesn't work as well, since it's applied per file. Then of course you can't tree-shake dependencies from npm, so good luck downloading the whole of lodash on your client.
In short, you lose the advantages of downloading only what you need pretty quickly.
> good luck downloading the whole of lodash on your client
I never felt the need to use lodash. But even if a site did that, it does not seem like a big issue. It is 71k. A Twitter profile page is over 5MB - 70 times the size of the lodash library. An apartment page on Airbnb is over 9MB. An Instagram profile page is over 9MB too.
I like Snowpack (https://www.snowpack.dev) because it doesn't actually do any bundling, it just makes sure that the files are in the right place to be loaded (it does compile Typescript files). Because the only thing I actually care about during development is that all the inter-package dependencies are resolved. I don't actually need a fat heap of JS that needs to be rebuilt every time I change something.
> How many scripts does a site need to make it feel faster when bundled?
Depends on what you're building. If you have many nested dependencies, you need to bundle them, and not rely on the browser to resolve them at runtime and do dozens of roundtrips to the server to fetch them.
If your dozens of scripts are top-level (on the html itself) then you can fetch all of them in parallel — so you wait your connection speed roughly twice (roundtripped).
If they’re nested dependencies (where you don’t identify the next necessary js until you have the first one in hand), the dependency won’t start getting fetched until the predecessor is retrieved. So you wait your connection speed multiplied by depth (x2; roundtripped)
The goal of the bundler is to take the second scenario and turn it into the first scenario.
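The two scenarios can be put into a back-of-the-envelope model. This is only a sketch of the latency component: it counts round trips and ignores bandwidth, caching, and preloading.

```javascript
// "rtt" is one round trip to the server; "depth" is the length of the
// import chain (how many modules deep the nesting goes).
function fetchLatency(rtt, { depth = 1, parallel = false } = {}) {
  // Top-level scripts listed in the HTML are discovered together, so
  // they can be fetched in parallel: one round trip for the HTML plus
  // one for all the scripts.
  if (parallel) return 2 * rtt;
  // Nested imports: each module is only discovered after its importer
  // arrives, so round trips stack up with the depth of the chain.
  return (1 + depth) * rtt;
}

console.log(fetchLatency(100, { parallel: true })); // 200 ms
console.log(fetchLatency(100, { depth: 5 }));       // 600 ms
```

A bundler collapses the second case into the first by discovering the whole dependency graph at build time instead of at run time.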
There will definitely be an inflection point, and it's very dependent on many factors: number and size of scripts, how much of the initial content is SSRed, network bandwidth and latency etc.
You can use the Network tab of your preferred browser to see the waterfall. HTTP2 did improve a lot, but it can’t magically resolve N-deep transitive imports without additional information. It was originally designed to have that information provided at the server level, but HTTP Push has been dead for a while. There are physical limitations at work, optimizing requests on the wire is still important.
> When I visit websites that are rendered serverside, they usually feel instant to me. Even when they load a dozen scripts or so
Then they'll be faster doing only one request; this is true regardless of SSR or SPA.
Bundling also gives you static analysis (typescript and linting), and frees developers from developing in the dark, like keeping track in their heads of what component in what script exposes what global.
> Bundling also gives you static analysis (typescript and linting),
No, it doesn't do that: the TypeScript compiler does the static analysis, and whatever linter you use does the linting. You don't need a bundler for that; a bundler just takes many source files and bundles them into one.
Although esbuild has been out for a while now, I think it's relevant today because the benchmarks have been updated to include Parcel 2, which was just released earlier today.
> Why is esbuild fast?
> - It's written in Go and compiles to native code.
> - Parallelism is used heavily.
> - Everything in esbuild is written from scratch.
> - Memory is used efficiently.
> Production readiness
> - Used by other projects
> - API stability
> - Only one main developer
> - Not always open to scope expansion
It helps to get 80% of the way there, on a logarithmic scale. The remaining 20% is evanw saying no to adding every kind of option, feature, and the kitchen sink to this project, which is one of the reasons Webpack/Babel & co. are some of the heaviest objects in the universe.
It doesn't always work in situations where 'require' can't be statically evaluated because its argument is only known at run time (webpack would work in this situation) - for example, I filed a bug recently[0].
Even though the bug was closed I was impressed with the thorough and timely response from the esbuild developers which explained the reasoning behind it. For me it was worth changing my project just so I could continue using esbuild, it really is that good!
The catch is that they focus on modern builds (i.e. not IE). It also doesn't do type definitions, so if you want type defs you'll need to compile with TypeScript anyway.
esbuild is a pleasure to use. Because of its speed you no longer need separate test/release builds for simple libraries. I am using esbuild for transpiling my new TypeScript projects.
However, I still have to use tsc to generate declaration files (.d.ts). Is anyone aware of an esbuild-like tool to do that job?
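One common workaround is to keep esbuild for the JS output and run tsc in declaration-only mode just for the .d.ts files. A sketch (the outDir is a placeholder):

```json
{
  "compilerOptions": {
    "declaration": true,
    "emitDeclarationOnly": true,
    "outDir": "dist/types"
  }
}
```

That way tsc never touches the JS at all; it only writes the type declarations next to esbuild's bundle.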
I am using ParcelJS v2 (nightly), and my builds are really fast for a medium-sized project. It takes about 5s from scratch (no-cache) and <500ms to update after a change.
Do I recommend Parcel? Not sure, it feels like every time I update the package something breaks (in their defense, it is still in Beta).
Well, I spent a few days moving a large Vue.js codebase from webpack 4 to Vite (esbuild). You know what? It isn't as fast as those benchmarks show, and it was buggy and laggy besides.
So I upgraded to webpack 5 with caching to the filesystem (I bet hardly anyone knows about it) and it became even faster than Vite! I'm happy.
> Both Go and JavaScript have parallel garbage collectors, but Go's heap is shared between all threads while JavaScript has a separate heap per JavaScript thread.
It actually implies that JS GC can be faster because it doesn’t require global locks and can collect garbage in parallel.
> It also doesn't include type checking for typescript.
If you use `@aws/sdk`, you are going to have a bad time with TypeScript. Resolving its types can easily take >50% of compilation time for small projects.
Also, kudos to the esbuild team for having an official mention of Deno usage and support. It works great, and it turned out to be much simpler, more flexible, and more reliable than the current Deno.emit.
I'm using it right now, it's fantastic. Also been playing around with Deno, importmaps are also used there, it's so much nicer than the bundle and compile everything workflow. It's the direction everyone should be going since it actually uses the browser and browser standards to their potential.
Also if people don't want to use CDNs both Rails and Deno do allow you to serve ES modules from your server.
I think it’s a fantastic default. I’d argue most server-rendered applications can get away with it. And if you ever need esbuild, you can install the jsbundling-rails gem or pass “-j esbuild” for new Rails 7 apps.
“What is” is a vague question with no clear details in the answer. For example, parcel may have the middleware mode while esbuild may not. But then it’s parcel 1, not 2, where it was removed to streamline plugin development, so I’d better stick with 1 for a while. That is the context. By not providing it, the root comment has zero value, because an interested reader cannot learn at least some of the whys by simply reading the front page. Almost everyone knows this and thus will not follow your “open-mindedness” advice, because it’s pointless in this case.
I'm sorry, but the node.js community needs to stop producing this stuff... it doesn't do anything better than all of webpack vs gulp vs parcel vs snowpack vs rollup vs - how many bundlers again?
I recently switched to esbuild and I am delighted. My build time dropped from 30s to 2s and I was able to remove a lot of dependencies. So IMHO they need to continue producing this stuff to push innovation forward.