Turbopack, the successor to Webpack (vercel.com)
626 points by nutlope on Oct 25, 2022 | 312 comments



After years of configuring webpack for various projects, I have finally decided to migrate all my projects to Parcel[1], mainly because I got tired of creating and maintaining hundreds of lines of webpack.config.js files; I wanted something that just works. I am really happy with that decision.

Parcel is plenty fast and is pretty much zero config, I can use it to build libraries[2] as well as applications[3] for the browser and node.

I am mainly building client-side React applications, so your mileage may vary.

[1] https://parceljs.org/

[2] https://parceljs.org/getting-started/library/

[3] https://parceljs.org/getting-started/webapp/


Parcel seemed to fail when I had multiple entrypoints into my app that shared code. Is this such an unusual scenario?

https://github.com/parcel-bundler/parcel/issues/6310


I think this is the exact thing that caused me to leave parcel for webpack


If you want configurability plus sanity, then Rollup is a good bet. It generally does what you expect it to, and has far fewer footguns.
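For anyone curious, a minimal rollup.config.js really is small; the entry and output names below are placeholders, not from any particular project:

```js
// rollup.config.js -- a minimal sketch; file names are placeholders
export default {
  input: 'src/index.js',
  output: {
    file: 'dist/bundle.js',
    format: 'esm',       // or 'cjs' / 'iife' depending on target
    sourcemap: true,
  },
};
```

Plugins (e.g. for node-style resolution or TypeScript) go in a `plugins` array, but the zero-plugin case above already bundles plain ES modules.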


This has been a huge frustration of mine with webpack: you end up with relatively complicated configs, and it sometimes feels like even if you copy and paste your config over, you still have to spend a ton of time fighting with webpack to actually get it to work.


And when you try to upgrade just one plugin everything breaks and you have to upgrade everything.


This should be the universal byline for all JS projects.

Make a T-Shirt with this and get rich.


Parcel felt really fast, zero-conf and “opinionated” in a good way. But I’m still searching for a zero-conf full-stack pipeline which could manage both frontend and backend in a single project under a single watch command, with modules shared across sides. This frontend/backend separation is too alien to my idea of development (coming from desktop programming), and it feels like some artificial border.

Thankfully, Node and browsers slowly converge, so isomorphic code is less and less of an issue.

Btw, how does HN develop solo-dev apps? Do you switch between two projects? Run two separate webpack configs simultaneously? How do you share code/typedefs/dtos between them? Do you use or want to use backend HMR? Backend webpack-like import extensions?

> got tired of creating and maintaining hundreds of lines of webpack.config.js files, I wanted something that just works

When I finally built my perfect set of webpack configs, I deleted them after a couple of months. This complexity is “fun” to set up, but maintaining it is not something I want when I'm not working as a “devops” full time. Achievement unlocked, so to the trash it goes.


> I’m still searching for a zero-conf full-stack pipeline which could manage both frontend and backend in a single project under a single watch command

I did create a proof of concept of how this could be done in Parcel: https://github.com/mochicode/parcel-fullstack-poc

The /src folder contains:

1. /api => backend

2. /app => react frontend

3. /shared => shared code

Once everything is installed, just run "npm start" and it will:

1. build and watch your src folder

2. react hmr/fast-refresh works

3. nodemon will reload your node server when things change

4. serve your react app on localhost:8000/frontend, currently hard coded, but I could also load this from the package.json file.

Not zero-config, but it almost gets you there.
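(Not the POC's actual config, but one way a single "npm start" like this can be wired is with the `concurrently` package, running Parcel's watcher and nodemon side by side; the paths below are illustrative:)

```json
{
  "scripts": {
    "start": "concurrently \"npm:watch\" \"npm:serve\"",
    "watch": "parcel watch src/app/index.html --dist-dir dist",
    "serve": "nodemon --watch dist dist/api/index.js"
  }
}
```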


You can cut down the duplicate config with webpack-merge package. I start with a common config object that defines loaders, aliases, etc. and it gets imported by my frontend/backend/cronjob/worker targets that simply describe the entry/outputs

I also recommend installing ts-node and writing your configs in Typescript to avoid typos

Here's an old side project of mine that has 3 targets [1]. I personally don't think it's that complex but it does have hundreds of lines altogether.

[1] https://github.com/Trinovantes/MAL-Cover-CSS/tree/master/bui...
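A rough sketch of the webpack-merge pattern (file names and loaders here are illustrative, not taken from the linked project):

```js
// webpack.common.js -- shared loaders, aliases, etc.
const common = {
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [{ test: /\.ts$/, use: 'ts-loader', exclude: /node_modules/ }],
  },
};

// webpack.frontend.js -- each target then only describes its entry/output
const { merge } = require('webpack-merge');
module.exports = merge(common, {
  target: 'web',
  entry: './src/frontend/main.ts',
  output: { filename: 'frontend.js' },
});
```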


I use Next.js, which does both frontend and backend in one command. It may not be powerful enough if you need truly large APIs, but for anything I build it's more than enough.


Rust :)

https://m.youtube.com/watch?v=oCiGjrpGk4A

but more importantly, I recommend embracing the frontend-backend separation. it's important in desktop contexts too. after all, you don't want to block the UI thread waiting for I/O, right?

of course the last ~2 decades were about coming up with various hacks, workarounds, solutions to make the whole threshold easier to pass (from Java applets to Ajax/XHR, comet/long-poll, websockets, server sent events, localstorage, WebSQL, WASM, service workers, and so on), but the basic architectural separation of concerns remains.

...

regarding sharing things between frontend and backend: OpenAPI + openapi-generator; monorepo works okay in VSCode, etc.

many people opt for RoR-like frameworks where they don't have to write frontend JS if they can avoid it (see also htmx)


I think your comment is mixing two kinds of separation together: process/thread separation and project separation. To me, a project is a single concern, monorepo or not, and dividing it along a particular api border seems unnatural.

Anyway, I’m looking for a tool that could make this part of development simpler by joining all these efforts in a “project and process manager” way instead of maintaining multiple toolkits/configs or generators. The latter I know how to do. Been there, done that, fed up.


> [...], and dividing it along a particular api border seems unnatural.

this boundary is inherent in client-server software/systems/applications, there's just no avoiding it. every other API is basically unimportant and arbitrary.

the whole Web paradigm, the fact of "you need to download the site and run it" and the consequences and trade offs that it implies are basically unavoidable.

> Anyway, I’m looking for a tool that could make this part of development simpler by joining all these efforts in a “project and process manager” way instead of maintaining multiple toolkits/configs or generators. The latter I know how to do. Been there, done that, fed up.

completely understandable, and we're in total agreement. I also think that this is an underserved problem.

It's ridiculous that we still don't have near-perfect abstractions for these.


> every other API is basically unimportant and arbitrary

Well, frontend-to-backend api is also unimportant and arbitrary, unless it’s designed for public use. Since for a user the page is always in sync with its server, this api is an implementation detail as much as any other internal api (e.g. between service processes or between async-threads)

Why do you think that the project separation is unavoidable in this case? E.g. some of my sites are simple express apps serving scripts and resources. I don’t have integrated build pipelines there because… well, that is exactly my concern. There seems to be no reason that we couldn’t have an ~isomorphic project with multiple entry points on multiple “ends” and a single build system for all of them.


I don't think it's unavoidable, but in practice it is two separate components with markedly different concerns and trade offs.

For example if we don't use any client side JS, use only a templating system, it becomes easy to keep it one project.

Eg. there's NextJS, which is a bundle of React and a NodeJS backend in one project. (And while it provides getStaticProps and other gimmicks, it doesn't really do much to make client-server state sync seamless. I still have to manually write the /api stuff.)



I kind of don't understand pure bundlers for the individual dev level; I feel that they are made for other ecosystem authors to package into a "meta-framework", such as NextJS (which used to use webpack). The devs at Vite said something similar, that although they provide a great developer experience with no/low configuration, they're really meant for other ecosystem authors.


I just spent an entire evening a few days ago trying to make Parcel work for a new project and I just couldn't get it to spit out working js and css.

The project is just a small server rendered web app (using Crystal) that I wanted to add papercss, trix, hotwire/stimulus and hotwire/turbo to (using yarn). Anyway, I never got it to output the css for papercss or the js for trix.

Webpack on the other hand, I had working in about 20min. Yeah, the config is verbose and tedious, but at least there are a lot of great docs/tutorials for just about everything for it.


IMO where Parcel shines is when you are building client side single page apps or JS/TS libraries.

I could see that it won't be the right tool for a multi page server rendered app.


Same here. I got dead tired and fatigued of all the JS-ecosystem bundlers and over-engineered solutions. Today I mainly use a Makefile and esbuild. No config files at all! It has been a blessing!
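For anyone wondering what that looks like, here's a sketch (entry points and flags are assumptions, not the commenter's actual Makefile):

```make
# Minimal esbuild-driven build; no bundler config file needed.
build:
	esbuild src/index.ts --bundle --minify --outfile=dist/app.js

watch:
	esbuild src/index.ts --bundle --sourcemap --outfile=dist/app.js --watch
```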


From my experience working with esbuild since its early days, all I can say is that there has been a very high bar set by esbuild's speed and ease of use. I've used it for all my web apps and services since, and I've finally begun to like my front-end build chain, unbelievably.


Yeah, as someone who focuses primarily on back-end but occasionally finds the need to dive into front-end, esbuild has virtually eliminated the pain I had with front-end bundling/packaging when I was using Webpack. At least it has in my situation, which involves mostly standard needs and very little customization.


esbuild shines even brighter when you do need to customize. The API is outstanding. Docs are excellent. I've been able to do most things myself, without hunting for plugins and examples.


Just migrated one of our projects from webpack+babel+tsc to just webpack with esbuild-loader, the difference is astounding. Just need to remove webpack itself now so we can use their transforms.


Doesn't esbuild skip Typescript type checking? https://esbuild.github.io/content-types/#typescript

You'll still need to keep tsc around for that, though perhaps you're doing that with another step in Webpack...?


You don’t need webpack to run tsc. You can treat TypeScript as a linter and run it separately (it also has a fast watch mode).


Parent comment said they went from "webpack+babel+tsc" to "webpack with esbuild-loader"

The lack of tsc in the new process made me wonder if it just got added to Webpack


It does, but esbuild helps with hot-reloading speed.

You just have your tsc compile as a pre-commit git hook and CI pipeline step.


During hour to hour development, I just let Visual Studio tell me about the errors as they come up. On the rare occasions I don't already have the editor open before deploying, I have a shell script to kick off the right tsc invocation. Otherwise, bundling plus uploading to the server is just another shell script. Yes, a full type check from scratch is slow, but it doesn't come up that often.


To anyone who wants to do this:

Let's say you have a tsconfig.json. Create a tsconfig-test.json like this:

    {
        "compilerOptions": {
            "noEmit": true,
            "skipLibCheck": true
        },
        "extends": "./tsconfig.json",
        "include": ["./**/*.ts", "./**/*.d.ts"]
    }
Add a script to your package.json (I use "test-types")

    ...
        "scripts": {
            "test-types": "tsc --project ./tsconfig-test.json",
    ...

Then you can "yarn test-types" quickly. I use husky[1] to run that command as a git pre-commit hook. My team cannot commit TypeScript errors. Additionally, I have a strict tsconfig and a strict eslint config (with these plugins: sonarjs, @typescript-eslint, simple-import-sort) which prevents "any" types and bad TypeScript hygiene. Results in faster code reviews.

[1] https://www.npmjs.com/package/husky


This. esbuild has been amazing. Can never work without it. The only thing missing is a "vendor" kind of bundle. I wish I could specify dependencies and my own code in two separate bundles, as the former won't change much and could be cached.


esbuild is great, but our only blocker for using it was that there was no way to control the number of chunks. On a moderately sized modular app, this can slow down things significantly[0]

[0] https://github.com/evanw/esbuild/issues/2317


Have you tried Vite? It's been a similar experience for me.


Same here!


These bundlers do not do the same work as webpack. Turbopack uses swc for TypeScript, which means it will not do type checking during compilation, which effectively makes it fairly useless as a compiler. Now, if you only want bundling and minification then sure, but why would anyone want only bundling? Good luck having a sensible workflow when your type checking has to run separately and perfectly match whatever TypeScript version is hidden inside your bundler; good luck dealing with constant mismatches when there are errors on either side.

EDIT: The responses pointed out that the author is the maintainer of webpack, I removed that part of the comment calling it deceptive. Maybe the entire intent is to deprecate webpack. I still think that it should be renamed to "Turbopack: our webpack successor" because calling it 'the' successor is a bit presumptuous and the article doesn't even directly say that.


Running the typechecker independently of the bundler is really common in TypeScript. Typechecking takes time, and blocking on it to transpile and bundle your JS doesn't really add any benefit. Most build systems run the checker in parallel because it cuts down on build time. Plus, Webpack doesn't really do transpiling or typechecking for you — your loaders do. Webpack's loaders can't really check your types because, without doing a lot of extra magic, they operate on one file at a time. If you use Babel with Webpack, for example, Babel just strips out your TypeScript annotations and calls it a day.
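As a concrete sketch of that parallel setup (script names are illustrative; `run-p` is the parallel-run helper from the `npm-run-all` package):

```json
{
  "scripts": {
    "dev": "run-p bundle:watch typecheck:watch",
    "bundle:watch": "webpack serve",
    "typecheck:watch": "tsc --noEmit --watch",
    "build": "tsc --noEmit && webpack --mode production"
  }
}
```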

All that aside, it seems like Turbopack was literally developed with Webpack's maintainer, who claims it to be a "rust-powered successor to webpack" (his signature is quite literally on the homepage[0]).

[0]:https://turbo.build/pack


> Webpack's loaders can't really check your types because, without doing a lot of extra magic, they operate on one file at a time.

This is the point folks really must understand when setting up a new tooling pipeline to deal with TypeScript. Certainly all of the module bundlers I'm aware of operate in this way.

To explain further for anyone curious: for TSC to work effectively, it must construct and understand the entire "compilation". To do this it starts by performing a glob match (according to your include/exclude rules) to find every TypeScript file within the project. Resolving and type checking the entire compilation every time the bundler calls for a transform on a file is very slow due to lots of repeated and unnecessary work, so most TS bundler plugins have to work around this. Unfortunately, they're still relatively slow, so type checking and bundling code separately is often the best way to go.


But in theory, type checks could be cached too. This isn't currently done probably because it's more work and requires very deep knowledge of the TypeScript language and parsing, whereas the bundling itself is a simple affair of combining text files together. So these tools instead just fork the type checker for some minor gain, but in practice you could absolutely cache type information; that's what every IDE does with its intellisense database.

I think in this thread we are essentially talking back and forth over the current state of bundlers, and most here rationalize why it's okay (if not better!) for type checking to be separated from the compilation, but the reality is that it's a fairly arbitrary state of affairs. It could all be efficient, so that type checking is just as parallelized and incremental as bundling and therefore requires no separation into a different process. After all, a binary produced by C++, Rust, or C# is also a "bundle".

Actually, this painfully reminds me why I found it so weird that tsc doesn't simply offer bundling itself. Why wouldn't it? It should be very easy for the compiler to do this, as it has all the information; on top of that, tsc also has an incremental mode already, which definitely means incremental type information.
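For reference, tsc's incremental mode is a single config flag; it persists type information to a `.tsbuildinfo` cache file between runs:

```json
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": "./.tsbuildinfo"
  }
}
```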


tsc already has incremental mode, and there's also the LSP; the bundler in watch mode could keep a persistent TS language server running. if the typecheck succeeds, the bundler emits the new bundle, easy peasy.

If I remember correctly gulp(js) was perfectly able to do this.


>easy peasy

but how do you do it? this is not as easy as it may seem. Of course it's possible, but the value here is that I don't have to do this for every IDE and/or the LSP when I use webpack, where waiting on the typecheck is an integrated feature.


I don't think that's an integrated feature of Webpack, though. It is a feature of TypeScript's own compiler (`tsc`), but that's about it. Nothing about Webpack supports typechecking (or TypeScript) natively.


>Running the typechecker independently of the bundler is really common in TypeScript

maybe it is common, I can't speak to that, but in my opinion a large part of the success of webpack was probably because they bundled the type checking. Because that's the only workflow that makes sense: imagine a C# compiler that quickly outputs executable IL code, but half the time it's broken because it didn't do any type checking and you have to wait for the IDE-based type checker anyway. On every build. It just doesn't make sense to work this way; you never want a fast, silently broken build, which is what you get with non-typechecked fast builds.


No, webpack became popular because it was the only bundler that supported commonjs and handled code splitting reasonably well, so we picked it for React.

It's critical to run type checking outside of webpack. You should be able to execute code that does not type check correctly. It maximizes iteration speed and is one of TypeScript's key superpowers.


ts-loader maintainer here. webpack never did typechecking. It's possible to run ts-loader in two modes; with type checking active and with it disabled. For years I've been advising running type checking as a separate process to improve the development workflow:

https://blog.johnnyreilly.com/2017/09/07/typescript-webpack-...

Exciting news about turbopack - the world is about to get much faster!


Thank you for ts-loader!


> imagine a c# compiler that quickly outputs executable IL code

Awesome!

> but half the time it's broken because it didn't do any type checking and you have to wait for the IDE based type checker anyway. On every build.

Huh? The IDE checks your types as you type. Not on every build.

This works great and de-duplicates effort. I have an extra "check" script that runs all the linters including the Typescript checker that you can run before making a PR or production build (tells you the same thing as the IDE). I'm glad it doesn't block my development build because it takes many seconds (10-20) while without it you can get updates in far below one. That's a night and day difference.

> you never want a fast, silently broken build which is what you get with non-typechecked fast builds.

You're misinformed about what I want. I want fast builds. Decoupling linting from building is a great way to achieve that. I have yet to experience any problems with it.

If you want to type check on every build you can put that into your pipeline, but it will unnecessarily slow things down. I for one am very happy with fast builds with no downside.


Webpack does not bundle the type checking by default - iirc it cannot handle typescript at all by default, it needs to be configured.

The typescript plugin that you used may have included type checking, but it very much depends on how you set it up. Many common setups use Babel to compile typescript code, which doesn't do any type checking at all, just mechanically strips away the type signatures. I would be very surprised if much of webpack's success has come from the type checking functionality you describe.

As for whether it's "the only workflow that makes sense", I find the most comfortable flow for me is using the compiler as a linter rather than a "true" compiler. It runs in my IDE, potentially as a pre-commit script, and in CI (which means checked-in code cannot fail to type check), but I can still build failing code. This would be, as you say, strange if I were using C#, but syntactically valid TypeScript can always be converted to syntactically valid JavaScript, so the output won't be "broken" in the same way that a C# program with invalid types would be. Most of the time, a program that isn't accepted by TSC is still a valid JavaScript program, just with poorly specified input and output types.

That said, while it can be useful occasionally if I'm still trying to figure out the types, the advantage here is less that I can now compile invalid code, and more that my bundler doesn't need to do unnecessary type checking. Any errors I can catch myself based on the red squiggles in my editor, so I know not to look at the browser until I've fixed all the issues, and at that point I don't need my bundler to also confirm that the types are accurate. Each tool does one task well. (Similarly, I don't want my bundler to also run all the unit tests before it compiles, even though I would normally expect the tests to be passing before I check the output of my code. 90% of the time, if I run the bundler, my tests are passing, and in the last 10%, I probably want to quickly try something out without changing the tests. The same logic usually applies to my types.)

As a result, I strongly disagree with the claim that your process is the only one that makes sense. It may be the most logical for you and your team, but I tend to find that it makes sense to approach typescript from a different angle to most typical compiled languages, and I've found a lot of success in my process.


I think maybe the underlying problem here is that some people somehow manage to write JavaScript inside .ts files, and I haven't figured out how they do that. I doubt it though; I think there is no way. It always complains about lack of types; it wants me to add annotations. The promise of mixing JS and TS is not real for me. It defies belief for me that people actually want to write squiggly-line, broken-autocomplete code and have it compile a tiny bit faster, compared to just having a single unified workflow where typing, build output, and build information always match and you don't have to guess and look up whether the build is already done, or maybe there was a build error, or maybe this time one of the type checks that I ignored actually mattered for JS too, not just for TS, or maybe the error was actually not just a typecheck error but also a DOM API error, and on and on it goes. To actually want to live in this world of uncertain, mismatched information and expect to be faster iteratively is not believable.

How is it not a problem for you that you make a change in a .ts file and then you are immediately in a world of uncertainty? Is the typecheck already complete? There's no clear progress on this. The JS output is probably already there, but you're not sure unless you constantly check yet another terminal window. So can I reload the page yet? Oh no, wait, a few more type errors popped up after all, so I can't yet. This all happens in under 2 seconds, but those 2 seconds of uncertainty slow you down far more than any wait for a normal-sized TypeScript project where typechecking is integrated with the build.


> I think maybe the underlying problem here is that some people somehow manage to write javascript inside .ts files[...]

I don't think that's it. I write pretty much exclusively typed code, except in places where I've not managed to get everything set up properly (mainly old Vue2 code). And even then, imports are unidirectional: JS can import TS, but TS can only import other TS files.

As to the "world of uncertainty", this lasts surely less than 100ms for me, certainly little enough time that it feels instantaneous. Usually I get feedback as I'm typing, sometimes if I'm doing things in new files I need to save the file first, but either way, my IDE is consistently fast enough that I know immediately if my code type checks or not. There is no downtime here at all.

As for seeing the code change, I usually have the page open on a second monitor, so I press "save", and glance over to my second monitor, and with live reload and a good configuration (that's harder to achieve on some projects, I grant you), I'm usually already looking at the results of my changes.

In practice, I very rarely have any uncertainty about what typescript thinks my types are - this information is always immediately at my fingertips. Where I do have uncertainty sometimes is whether the javascript types match the expected typescript ones - this often happens when I'm dealing with third-party libraries, particularly when `any` types start popping up. In that case, it's occasionally useful to ignore the type checker altogether, play around with some different cases, and then come back to ensuring that the types match later. That's just not really possible if you're treating the type checker and the compiler as an all-in-one unit.


>with live reload and a good configuration (that's harder to achieve on some projects, I grant you),

right, but livereload has the same problem: the build is already done so it reloads, but let's say the typecheck will error out, because surely you save a file more often than you want to recompile. So your page just reloaded, which you didn't intend; in fact, it might actively have destroyed a debugging session which you didn't want to reload yet. How do you configure that away to make it work sensibly? And here is the kicker: I know you can configure your livereload build so it only reloads after the build and the typecheck are done. But this is precisely how we ended up discussing this... now you always depend on both the typecheck and the build to finish before you can test your change anyway.


I don't think I really understand your objection. Live reload, at least as I've got it set up, doesn't really work that way: it won't interrupt an in-progress debugging session, and the application state is retained (at least if one sets everything up correctly). Besides, type checking in my editor doesn't normally require me to save the file, so I usually know whether the code is valid or not before the build starts. So I don't have the problem you describe in the first place. Moreover, as I pointed out, I find it useful to be able to occasionally run code that doesn't type check, so there's a very real downside to waiting for the type checker to finish before building.

I suspect we're just used to building code in very different ways. If you've not tried approaching typescript as a linter rather than "just" a compiler, I really recommend giving it a proper go - it feels very freeing, and I think is one of the biggest advantages of typescript as a tool/concept. (It obviously has its own disadvantages compared to how, say, a traditional AOT compiler would work, namely that tsc can never really take advantage of types at all during the compilation, but it gives you a lot of flexibility during development - essentially the best of both worlds for dynamic and static type systems.)

That said, if you've already tried it and know it's not for you, then fair enough - ultimately all of these tools are just ways for us to write code, so if you write code best in this way, then I don't want to try and force you to do it some other way!


> How is it not a problem for you that you make a change in a .ts file and then you are immediately in a world of uncertainty: Is the typecheck already complete? No clear progress on this.

You're right in saying that having a clean and understandable development environment has a lot of complicated moving parts! Dropping the ball on any one of them means having a bad developer experience.

Since I've said this on other comments, I'm not going to dwell on the fact that this behavior isn't something Webpack itself solves. However, if you make a change in a .ts file, two things typically happen in a dev server:

1. Your code is retranspiled and rebundled, usually using partial recompilation.

2. Your IDE sends these updates to the TS language server that powers your IDE's TypeScript support. That server uses the same typechecking logic as the TypeScript compiler, so it's a good source of truth for surfacing type errors in the files open in your IDE.

Depending on your dev environment, one of a few things might happen next:

1. Your dev server might hot reload the changes in your browser once the build is finished.

2. Your dev server might also block reloading if a type error is found.

3. Your dev server might even show you that error in your browser so that you know what's going on.

Next.js currently does all three of those things, which makes for a really nice developer experience. You don't really have to keep an eye on another terminal — everything shows up in your browser. They even render a little loading icon to let you know when a rebuild is taking place.

Next.js uses Webpack internally, but Webpack isn't the reason for this nice developer experience; it's just a core part of it, and is only responsible for bundling and rebuilding.

I love talking about this stuff, for what it's worth, so feel free to ask more questions. I helped Etsy adopt both Webpack and TypeScript when I worked there, so I have a good chunk of experience in the area.


One thing I haven't seen mentioned yet in the replies is that often, people will already have type checking in their editor, with inline feedback. The iteration speed when it comes to fixing type errors is much faster that way, and then running type checking again to serve it up in the browser is unnecessarily slowing you down (and the type checker isn't really fast).


Another thing worth pointing out is that pretty much every editor that has a TypeScript plugin uses TypeScript's language server to power it. The language server and the compiler are contained within the same binary (or at least they were) and use the same typechecking logic. So if you see an error in your IDE, you can be pretty confident that it'll be there when you build your code, and vice versa.


JavaScript is a dynamically typed language. Parallelizing the type checking with bundling gives you the best of both worlds: the speed of writing in a dynamic language and the type safety that comes with a statically typed language. For me, I like to write my code and not concern myself too much with types until I'm close to committing; that's when I fix type errors.


> I like to write my code and not too concern myself with types until I’m close to committing

I'm genuinely curious here. How do you ensure contract correctness? The whole point of static typing (or rather explicit typing) is to declare and enforce certain contract constraints prior to writing [client] code, which prevents certain bugs.

What's the point of using type-safe language if you deliberately circumvent type safety? If you "fix type errors" by declaring things to be strings you do not get much more type safety from TS than plain JS.

Or am I hugely missing something here?


This is a good question. I had a hard time drawing a line between perfect type contracts and dealing with JavaScript's idiosyncrasies. There is a lot of behavior in JS that is very hard to produce strong, reliable types for. If someone overwrites a prototype somewhere, your types are probably incorrect no matter what you do. Add in the fact that TS types can't really be accessed at runtime (mostly) and it makes type guarantees even harder to enforce with 100% accuracy.

It helps to think of TS types as a guarantee of behavior and an encoding of intention, and of TS itself as a really smart linter. As long as you use my function according to these types, it'll behave as expected. If it doesn't, that's my problem and I'll fix it. If I say my function returns a string, you can use my function's return types as a string with the confidence that that's how I expected it to be used. TS will make sure that everything agrees with my type assertion, which removes the need for checking the types of parameters in tests, for example.
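To make that concrete, here's a minimal sketch (the function and its name are hypothetical): the signature is the contract, and tsc checks both the implementation and every caller against it.

```typescript
// The return type annotation is a promise to callers: you will get a string.
// tsc verifies that the body keeps the promise and that callers rely on it correctly.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Callers can treat the result as a string with confidence:
const label: string = formatPrice(1999);

// Whereas this would be rejected at build time, long before it ships:
// const n: number = formatPrice(1999); // Type 'string' is not assignable to 'number'
```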


You ensure contract correctness by encoding it in the types. However, while developing, you might still be figuring out the contract.


TypeScript is different from C# in that it is a superset of JavaScript. That is, even wrongly typed TypeScript is valid JS code. IMO it makes sense to allow bundling if that enables you to get things like fast refresh. Sometimes you just want to make a change and see the result in the browser without worrying about types.


That's not how one usually works. What's the point of running tsc (the official transpiler) on each save? Plus, you are using an LSP client, right? Right??

I'm a user of esbuild, and it's so fast it can recompile on each save (see esbuild's `serve` option) and I won't notice anything locally. There's also a watch mode, and a production build with minify.

So I work locally without a typecheck for my builds, and let my LSP client do the type checking. When I bundle for prod I run an additional tsc for type checks before the bundle is produced. It takes a few seconds more (as tsc is slow) but this is a non-issue.
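A minimal `package.json` sketch of this workflow (script names, entry point, and port are illustrative):

```json
{
  "scripts": {
    "dev": "esbuild src/index.ts --bundle --outdir=dist --serve=8000",
    "typecheck": "tsc --noEmit",
    "build": "npm run typecheck && esbuild src/index.ts --bundle --minify --outdir=dist"
  }
}
```

The dev loop never waits on tsc; only the prod build gates on the typecheck.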


I recompile on every save too, all with full typechecking in under a second. As I've suspected, this is probably because most people here work on bigcorp projects with so many thousands of files that you can't work like this anymore. It's not that it's actually better workflow-wise; it's only better for you now because otherwise the compilation is too slow. Same as the situation that has developed with Rust.


I don't, and I think maybe you should reconsider your assumptions about literally every single person replying.

Your assumption (from your first comment) is that people only run bundling. That's not the case. People run bundling in parallel to type checking, rather than in series. You still get the type checking benefits without blocking your build on it and thus slowing down your dev process.


>Your assumption (from your first comment) is that people only run bundling

that was never my assumption


Did sound like it:

> why would anyone want only bundling


> most people here work in bigcorp projects where there are so many thousands of files that you cant work like this anymore

This is not a bigcorp thing. The Typescript compiler starts becoming annoyingly slow much earlier. I have a project that's one 2000-line TS file that already takes 1.5 seconds to compile. Not yet slow enough to be frustrating, but ridiculous for the amount of code.


According to the article, the project is “led by the creator of Webpack” (but I'm not sure this means they can name it as the “blessed” successor).


Yes. Webpack is essentially going to get deprecated/unmaintained over time from what I understand, so I think it is fair the author names this the successor.


Why would you want your tooling to be coupled when it doesn't have to be with zero downside?


I outlined some of the downsides in the comment, more in another comment of mine.

I thought about it even more and I think I can finally see the possibly true reason why people are developing these types of bundlers: the TypeScript projects they have to work on are so huge that typechecking is always going to be too slow, so you'd rather typecheck only the file you're currently working on and not care about the 15000 other files; you didn't touch them since checkout, and the reasonable assumption is that they probably still work, as tested by whichever team worked on them and committed them.

However, there is still one large implication: your production build on the CI server must then at least be configured to run both the typechecking process and the bundling process. And I still wonder: if typechecking is truly too slow, wouldn't it also be too slow for your iterative development? I guess if those other sections of the code are never referenced then maybe it will do okay and you won't get squiggly lines on all your imports, because it won't recheck them? I have some doubts that the TypeScript type checker actually caches this kind of work.

It is also strange because every other language in the world manages without separating type checking from compilation; it's only in the JS world that we ended up with such large amounts of code that people want back the speed from the days when JS could simply be loaded into the browser instantly, with no compilation.

And on top of that: If Turbopack does such supposedly excellent caching now, why would this not help with a 15000 file project and let me also do typechecking in a cached way? Yes I really don't think that omitting typechecking is the way to go for development.


For typical workflows, I’m not looking for type errors on the CLI. I’m just running webpack in watch mode in the background, while I edit and get typescript errors in my editor.

This is how the majority of people write typescript, and there is only upside from splitting out bundling and type-checking for these people.


Typescript has an open bug to use multiple threads for compilation.

That's why it's slow. It's also not a big deal because of parallel type checkers.

> Yes I really don't think that omitting typechecking is the way to go for development.

You've not really given a reason why. Just a rant about how other languages are different.


It is you who is ranting at me: you dismissed me in two lines complaining about a lack of reasons, while I've given many reasons across multiple comments.


use webpack for bundling and typescript for typechecking. not that far off from the typescript + babel workflow and way less overhead. just use tsc as a linter.
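"tsc as a linter" boils down to a tsconfig that never emits JS, something like this sketch (adapt the options to your project):

```json
{
  "compilerOptions": {
    "noEmit": true,
    "strict": true,
    "target": "es2020",
    "moduleResolution": "node"
  }
}
```

With `noEmit`, `tsc` only reports type errors; the bundler owns the actual JS output.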

https://iamturns.com/typescript-babel/

and someone else pointed out that esbuild drops typechecking.

https://news.ycombinator.com/item?id=33336803


>use webpack for bundling and typescript for typechecking

This is my point; that's what I'm doing. Webpack does the typechecking and the bundling together, so you don't get broken bundles when the typecheck didn't succeed (unless you really want to; you can configure that too).

>and someone else pointed out that esbuild drops typechecking.

yes I know, these "faster" bundlers all drop typechecking, which results in these unfair comparisons. Then I always end up spending a somewhat large amount of time checking these tools out, only to realize that they are not doing a fair comparison.


> you don't get broken bundles when the typecheck didn't succeed

ah, I do TDD. easier to let tests give me runtime and behavioral feedback instead of refreshing/setting up hot reloads. I could see how it's frustrating without that. super frustrating to do a quick experiment/demo and satisfy ts if the compiler breaks all the time tho.


>super frustrating to do a quick experiment/demo and satisfy ts if the compiler breaks all the time tho.

the ts compiler doesn't break, though. If I understand you correctly, this might be a relic of JavaScript-tier thinking/workflow. You're saying that you don't want to deal with the typechecks sometimes; that's not how any other language works. You're essentially saying that you want a build that silently works anyway because you know better than TypeScript: you know this will still result in correct JS even though the compiler is shouting errors at you.

I think this is not the right way to look at it. If this is really a very common use case, then TypeScript has failed to make truly useful optional typing. Which is actually something I've believed from the start: I was always perplexed when they claimed to be JavaScript compatible, because that doesn't seem to be true at all. You cannot actually mix pure untyped JS and TS code and have it all work seamlessly together; it always complains about `any` types and so on.


> You're essentially saying that you want a build that silently works anyway because you know better than typescript, you know this will still result in correct js even though the compiler is shouting errors at you.

negative. I use tsc as a safety net, I don't release code that doesn't pass ci/cd which includes a linting/typechecking step. I just know that sometimes it's faster/easier to do an exploration without strict typechecking.


As others have said, the convention is to use tsc for typechecking and declaration file generation, and Babel (i.e. by way of webpack) for bundling. You should not have TypeScript mismatch issues if you follow good practices: mismatches come from Babel using one TypeScript version and your tsc command using another. Do not use a global tsc command, for instance; TypeScript belongs as a dev dependency in your project.


Wow remote caching like Bazel is wild to see in a bundler. It's nice to see someone realize that JS bundling is more or less like compiling and linking in the native code world and we just need to accept it and optimize the hell out of it.


speaking of which, is it reasonable to use Bazel instead of webpack?


With rules_nodejs you can transform your source with whatever node-based packages you’d like. But be prepared to be more or less entirely on your own to figure out how to make it all work.

Oh and if you create many smaller packages (as is best practice in bazel to get good cache efficiency and parallel builds) be prepared for nonexistent editor/IDE support.


I have used bazel (with remote caching) to cache the output of webpack before.

Webpack was driving the TypeScript compilation and had some plugins/extra codegen steps to run too. I tried to move as much as possible to Bazel, but I quickly found out the JS ecosystem likes to produce massive balls of mud. Very hard to break down and separate the various steps.


Bazel is just the infrastructure to run webpack. You'd need to do some work to make webpack's state be cacheable (I dunno what options and such it has for this, maybe it's already there as an option). But if you're looking at Bazel for JS work you probably just want to use the existing and maintained rules for it: https://github.com/bazelbuild/rules_nodejs It's been a while since I last looked at it but I don't think it has any caching for webpack.
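As a rough sketch (labels, paths, and layout here are purely illustrative; in practice you'd reach for rules_nodejs/rules_js), wrapping webpack in a plain genrule is enough to get Bazel's caching, since the output is rebuilt only when the declared inputs change:

```starlark
# BUILD.bazel: a hypothetical genrule wrapping a webpack build so Bazel
# caches the bundle and invalidates it only when the declared inputs change.
genrule(
    name = "bundle",
    srcs = glob(["src/**"]) + ["webpack.config.js"],
    outs = ["dist/main.js"],
    cmd = "$(location //tools:webpack_bin) --config $(location webpack.config.js) --output-path $(RULEDIR)/dist",
    tools = ["//tools:webpack_bin"],
)
```

The hard part, as the parent says, is making sure all of webpack's inputs are actually declared; any undeclared file read breaks cache correctness.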


Does it make sense to cache something that takes 1.7 seconds though?


Yes. The nice thing about Bazel is that it'll also be able to do the same caching and cache invalidation across your build graph, so for a decently sized project the savings can be much bigger. I think [0] gives a decent overview.

[0]: https://blog.aspect.dev/rules-ts-benchmarks


Really hope this gets a competent library mode. Webpack's library mode never really evolved to meet the needs of library authors. Vite has a good library mode but it has its own limitations.

Pure rollup is still the best for building libraries, by and large (maybe esbuild now? but I think rollup is more efficient in its output still).

If this has a good, solid library mode that works like rollup with the power of the webpack-like asset graph, it'd actually be amazing for the community as a whole.


Yes. I mostly build libraries and rollup still beats everything in compatibility (esm, umd, cjs, etc) and output quality. It's slow, though, and I would love a faster alternative. Esbuild doesn't do tree shaking as well as rollup.
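For comparison, the multi-format output that keeps library authors on Rollup is only a few lines of config (file names and the UMD global are illustrative):

```javascript
// rollup.config.js: one input, three output formats from a single build
export default {
  input: "src/index.js",
  output: [
    { file: "dist/index.mjs", format: "es" },
    { file: "dist/index.cjs", format: "cjs", exports: "auto" },
    { file: "dist/index.umd.js", format: "umd", name: "MyLib" },
  ],
};
```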


Vite uses it under the hood and has it running crazy fast.


I think Vite only uses it for the production build, no? As far as I know Vite doesn't do bundling (yet) in dev mode.


Vite uses both esbuild and rollup for the prod build. When doing React, it also adds Babel. This chaining of tools is the reason why it's so slow.


"so slow." JS-perf world is quite insufferable. We're talking 0.09s vs 0.01s https://mobile.twitter.com/youyuxi/status/158506616237077709...


Can esbuild support esm or umd etc via plugins? I've looked at using it but these formats are an unfortunate requirement for the builds I need to do.

I noted that esbuild doesn't fully support es5 transpilation and this will hold it back from some usage.


Surprised not to see much mention of plugins and how they will work (unless I missed something?).

Plugins are the big differentiator between Webpack and existing "webpack but faster" tools, and presumably the reason most people still use webpack. What's the plan here?


https://turbo.build/pack/docs/migrating-from-webpack

It sounds like plugins will be a thing but existing webpack plugins will need to be ported.


Ah, nice find. They seem to imply plugins can be written in JS, with an API that _may_ be similar to Webpack, but definitely not drop-in compatible:

> We're planning on making Turbopack very flexible and extensible, but we're not planning 1:1 compatibility with Webpack … most Webpack plugins won't work out of the box with Turbopack. However, we're working on porting several of the most popular Webpack plugins to Turbopack.


They say they plan to support Vue and Svelte via plugins so I guess it is a thing or will be.

"In future versions, we'll be supporting Vue and Svelte via plugins."

https://turbo.build/pack/docs/features/frameworks#vue-and-sv...


Bummer that we have to wait for Svelte support, but seems worth it. Hope to see it bundled with Svelte Kit soon!

https://turbo.build/pack/docs/features/frameworks#vue-and-sv...


Vite's momentum is too much to stop right now imho


A bit of a bummer that "get started" requires Next.js, and there's no documentation for a frameworkless build (I understand it's labeled as alpha, but if you're going for such a grandiose announcement, a "how to bundle a vanilla/frameworkless app" guide would have been nice)


Our initial development cycles were dedicated to getting Next working. But we absolutely plan on expanding it to support generic bundling and will include that in the docs.


Same... I went to build from source and can't even find it in their source files. I didn't realize alpha releases meant build from branch tip.


Literally 1 day after my team migrates our build to Vite :')


Vite isn’t tied to its internal tools that heavily, in theory they could switch to using turbopack internally if they wished.


While Turbopack may be faster, you should take such benchmarks with a huge grain of salt, at least until they are peer verified


At least that is stable now, be happy


Stable API perhaps, but vite has issues with memory leaks, which can break builds in CI.

https://github.com/vitejs/vite/issues/2433


Squeezing out unneeded performance for no reason is just stupidity and time-wasting at best.

When I saw Turbopack, I wondered why the hell anyone would need this. Vite is fast enough for everybody. Quite fast indeed. Then there is Parcel, which is also quite good and reliable.

I never felt performance was an issue with Vite at all. Everything is so dang fast on my company's website. We are unnecessarily optimizing stuff!


esbuild is good. you made the right choice.


You don’t have to migrate to the “fastest and the greatest” right away. Do it when you feel the need for it or starting something new.


Lucky, that could have been an uphill battle


you made the right choice.


> Currently, migrating to Turbopack from Webpack is not yet possible

Bummer! Do I need a turbopack.config.js?

I've still got my package.json, tsconfig.json, .env, and postcss.config.js apparently.

> [Turbopack cannot currently be configured with plugins. We plan to make Turbopack extensible, likely with an altered API]

Will I be able to use my current plugins? Where does config for these plugins live?

> [SCSS and LESS] don't currently work out-of-the-box with Turbopack

Keep it this way! LESS has been dead for years and SCSS is dying. Focus on CSS Modules please!


What's wrong with SCSS modules? It has a lot of really useful functionality that frontend developers like me love.


`node-sass`, the fastest solution (Node bindings to a C++ library), was deprecated a few years back.

`sass` (dart-sass) is the current iteration but it’s an order of magnitude slower for larger projects with many small scss module files. I’ve seen it add +10 to 30 seconds.

`sass-embedded` will be the next iteration for dart-sass but in its current form still suffers from similar issues.

I believe using postcss for nesting support + css variables is a better alternative, considering that css will likely get native nesting support in a few years.

https://www.w3.org/TR/css-nesting-1/
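For illustration, the nesting draft covers the most common SCSS idiom directly in CSS (selectors here are made up; browser support was still pending at the time of writing, which is where postcss comes in):

```css
/* Native CSS nesting per the css-nesting-1 draft: no preprocessor needed */
.card {
  padding: 1rem;

  & .title {
    font-weight: bold;
  }

  &:hover {
    box-shadow: 0 0 4px rgba(0, 0, 0, 0.2);
  }
}
```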


They still require a preprocessor. Let's push to get that stuff into CSS! Worst case run `npm install sass`.


Is there a place I can find a "current state of the art" on this stuff? Once upon a time there would have been a quick-start option. For instance, I may have done Rails+React+Babel+Webpack to get a quick backend JSON API and a front-end web app that consumes it.

Not for something that "scales to 100 teams", just something for me to spin an app up in quickly. i.e. 0 to 1 is the only thing that matters.


Next.js is pretty much that. Not much configuration needed, and it wires up all the tools for you behind the scenes. If a new tool comes out that's worth it (like potentially Turbopack, although that's by the same company), they'll do the migration for you.


Rails also has a whole model/database thing going on.

Next.js is just front end focused and has always left it up to the user to decide about data persistence.

So to answer the original question, an equivalent would be Next+Database, and there is no obvious stand-out answer for what Database should be. I often get stuck deciding what that Something should be when trying to go from 0 to 1.


Oh sorry yes, must have skipped over Rails in the long list of tools. Yeah, you'd commonly pair Next.js with a more "traditional" back-end I think, with the frontend calling out to the Next.js back-end, the Next.js back-end calling out to your regular back-end, and your regular back-end calling out to your database.


Okay, great, I'll just use Next.js with PostgreSQL.


If you know and like Rails and ActiveRecord especially, then one possibility, which I intend to try on my next thing, is to build an API-only Rails app (JSON only; no HTML views) and a separate Next.js app that uses the Rails app as its backend. Although it’s two apps, it seems like it would be a really nice, clean setup to work with. You can easily deploy the Next app on something cheap and serverless like Vercel, and deploy the Rails app separately on something like EC2 or Fly.io.

I have written Next apps that just connect directly to PostgreSQL, but I always end up adding other tools in an attempt to recreate something like ActiveModel and other aspects of Rails in Next.js, and it ends up turning into a mess. I think Rails+Next would avoid this problem. It would be two apps, but each would very clearly focused and would be built following the golden path of their respective frameworks. (One thing I’m not sure about yet is how I would DRYly share endpoint schemas from Rails in order to have strong typings in the Next app based on the API endpoints. But I don’t think it should be too hard to find a nice way to do this.)


I see. That's the structure I use but Rails and React. Okay so I'll just replace the React with Next.


Getting Next.js up to Rails feature-wise means using something that already wires up the niceties like auth, an ORM, Tailwind styles, and TypeScript. Create-t3-app is a huge contender here: https://init.tips/ and https://github.com/t3-oss/create-t3-app

A DB is as simple as pressing a button on Planetscale or Railway, copying the secret strings to your .env file, and 'npx prisma db push'.


Appreciate the advice. Going to try that later this week.


Supabase


All this lazy graph recomputation reminds me of the 90s DAG-based "procedural" modelers that achieved near-real-time re-rendering of extremely complex data through exactly that.

I assume the mainstream is ready to swallow the idea.


Very excited for this if the configuration and behavior is 1:1 with Webpack. The primary reason I haven't moved to other "more efficient" build tools is having to learn how to do various optimizations and getting the artifacts I expect.


> Webpack has a huge API. It's extremely flexible and extensible, which is a big reason why it's so popular.

> We're planning on making Turbopack very flexible and extensible, but we're not planning 1:1 compatibility with Webpack. This lets us make choices which improve on Webpack's API, and let us optimize for speed and efficiency.

https://turbo.build/pack/docs/migrating-from-webpack


God, I hope not. If there's one part of webpack that needs a complete overhaul it's (1) how you set up complex workflow configurations and (2) proper documentation for all aspects of that. And yes, that's one thing. Code without docs is not production-ready code, but code with bad docs isn't even PoC-ready code.


I think webpack has overhauled 1 and 2 every major release.

Accordingly the docs and API and blogs are littered with a mish mash of all previous APIs.

That's the whole problem with webpack: it's always changing.


Looks impressive, but proof will be in the actual day to day dev experience and configuration, not the perf. Vite and esbuild are fast enough, and I feel the winner will be more about usability, docs, easy config, etc.

That aside, it is just so frustrating and sad that this just continues the fragmentation in the JS build space. It is beyond exhausting at this point. I don't care about vite vs (or with) esbuild vs turbo. I just wanna `script/build` or `yarn dev` and not think about it anymore.

It seems like collaboration and consolidation around common tooling is just impossible and not even considered a possibility at this point. We are forever stuck in this world of special snowflake build toolchains for damn near every app that wants to use modern JS.


An explicit goal I would personally like to see from build tools in the JS world is long term stability. I don't want my build tools to have a new major version every year. Semantic configuration improvements simply aren't worth all the churn. Adding new functionality is fine and great, but keep it backward compatible for as long as you possibly can.

This is an area where we could learn something from the Golang ecosystem. You're always going to end up with some warts in your API. Tools with warts that are consistent, documented, predictable, and long-lasting are so much easier to manage than tools that are constantly applying cosmetic revamps.


Agreed. Every time NextJS changes out their build system for speed, its users lose out on all kinds of functionality that they were depending on before.

Moving away from Babel to SWC meant we could no longer use SCSS within JSX styled components. We first switched everything to plain CSS, which was a nightmare IMHO. Now slowly switching things to SCSS modules.

Now with Turbopack, we lose that too: https://turbo.build/pack/docs/features/css#scss-and-less

"These are likely to be available via plugins in the future." Fantastic


As a performance nightmare you can't wake up from, SCSS can't die soon enough. As someone who depends on it currently though I def understand the frustration with losing support.


The new version (written in Dart, of all things) seems pretty fast. The old Ruby implementation was insanely slow, and the node-sass rewrite was insanely broken.

Whatever they got now though is just about perfect for me


Embedded Sass promises to be faster anyway, though still slow IMO.

There are some issues with realizing the potential though. Namely, a standard-ish CRA React app will have thousands of SCSS compilation entrypoints due to importing SCSS in individual component modules.

Lack of process reuse is causing many to see slightly SLOWER compile times. Next you have:

* Lack of tooling support: Vite?

* Need to migrate SCSS syntax

* ??

As soon as it's a free-ish upgrade with high ROI on swapping it in, I'll take it! I think SCSS is a dead-end though. Modern CSS + postcss for some CSS-Next polyfills is the way forward IMO.


Yeah, it really pains me that Sitecore is doubling down on them as the main FE framework.


> Looks impressive, but proof will be in the actual day to day dev experience and configuration, not the perf.

I think it really depends on the use case. I use Webpack, but it's all configured for me by create-react-app and I don't have to mess with it. If my configuration could automatically be ported from Webpack to Turbopack and my builds got faster, great :)

Of course, that's not the only use case and I agree that speed alone won't decide the winner.


    // @TODO implement and publish
    import { webpackConfigTranslator as translate } from "turbopack-webpack-compat";
    import webpackConfig from "./webpack.config.js";

    const turbopackConfig = translate(webpackConfig);

    export default turbopackConfig;
Any takers?


I don’t think any of the current options are good enough that I would want the community to settle on one of them at this point at the expense of any innovation or experimentation.


We’ve been doing JavaScript development for how many years now? And still no bundling options that are good enough? Or at least one that can continue to evolve vs. creating a successor, etc.


By my count it's 23 years.

The biggest issue in FOSS is folk don't wanna join a "good enough" project and move it. Sometimes the project is contributor hostile (rare?)

And we end up with basically: "I'll build my own moonbase with blackjack and hookers!"

All that starting from scratch costs loads of impossible-to-recover time.


That’s really not what’s going on here.

All the previous gen bundlers are written in JS and support a giant ecosystem of JS plugins. There’s no incremental way to migrate those projects to Rust. The benefit of these new bundlers is that they are literally full rewrites in a faster language without the cruft of a now mostly unnecessary plugin API.

And the “cost” of this? Some of these new bundlers are written by just 1 or 2 core contributors. Turns out with the benefit of hindsight you can make a much better implementation from scratch with far less resources than before.


Well, that's what is really frustrating here. Turbopack is built by the creator of Webpack. So, instead of fixing the bloated megalith that webpack has become, they are just moving on to greener pastures. But this time it'll be different™


Webpack 5 was that push to fix the monolith. I would guess that after that herculean upgrade effort that the creator of Webpack has a pretty good idea of what’s fixable and what’s an inherent limitation of the existing tool.


Microsoft has been able to evolve Windows and Office and SQL Server for decades, with huge customer bases…


I mean, there’s really only so much you can do. If major changes are required, you can either make a significant new version with massive compatibility issues, splitting the community (ala Python 2 & 3). And even then, you’re still building off of the old code.

Or start from scratch and make something better.

Either way, you split the community, but at least with the new tool you can (hopefully) manage to make it a lot better than a refactor would have been. (In some areas, anyways.)

Plus, this allows the old project to continue on without massive breaking changes, something its users probably appreciate. And this old project can still create new major versions if needed, which is something you don’t get if you have a major refactor because everything meaningful gets shipped to the refactored version.

So I think spinning off a new project is a net good here. It doesn’t impact webpack very much unless people ditch it and stop maintaining it (unlikely). It lets them iterate on new ideas without maintaining compatibility. (Good, when the big issue with webpack is its complexity.)

So if the idea turns out to be bad, we haven’t really lost anything.


> instead of fixing the bloated megalith that webpack has become

Megalith? Isn't it super modular and configurable?


Yeah. I’m consistently surprised and disappointed by how much resistance people have to getting their hands dirty digging through other people’s codebases. It’s fantastically useful, and a remarkable way to learn new approaches and techniques.

The world doesn’t need yet another JS bundler. It needs the one js bundler that works well with the wide variety of niche use cases.


I dig through other people’s codebases all day long. I really don’t want to do that in my free time as well. Especially to fix the garbage that they built in the first place.

It’s just not fun. And that’s one of the most important reasons I do this job.


Webpack was flexible and “good enough.” But it turns out rust-based bundlers are 10-100x faster and so the definition of “good enough” has changed, for good reason.

It’s hard to overstate how game changing this new wave of zero-config, ultrafast rust bundlers are for the day-to-day dev experience.


For me the age of JavaScript isn't particularly important. I just don't think any one of the well-known options are good enough that I would want the community to throw a big portion of support behind it.


That seems to be the chicken-and-egg problem here: the current problem is endless repetitions of "none of the existing options are good enough, let us build a new option". The call above is "just pick one and everyone work together to make it better", but we're back to "none of the existing options are good enough".


Specific to build tools, a number of projects are VC-funded. Rome raised millions of dollars in VC money (https://rome.tools/blog/announcing-rome-tools-inc/). This offering is now funded by Vercel.

The same problem plays out in the JS engine space (Deno raised $21M and Bun raised $7M) and in the framework space (e.g. Remix raised $3M). As long as there's money to be made and investors to fund projects, there won't be consolidation.


Fully agreed.

I would add that esbuild has set the bar very high for documentation and configuration. I come away very impressed every time I need to touch esbuild, which is not usually my expectation when it comes to the JS build-and-bundle ecosystem.

And while vite is still young, it does a good job integrating itself with many different JS stacks. The docs are also great, and the project has incredible momentum.

Between esbuild and vite, I feel pretty set already. Turbopack will need to be that much better. Right now, it doesn’t look like much more than an attempt to expand the turborepo business model. Let’s see where they take it.


> special snowflake build toolchains

That reminds me, wasn't there a build tool called Snowflake?

Oh, it was called Snowpack [1]. And it's no longer being actively maintained. Yeesh.

[1]: https://www.snowpack.dev/


My disappointment related to this is that I still think Snowpack's philosophy was the right one for JS present and especially for JS future: don't bundle for development at all because ESM support in today's browsers is great and you can't get better HMR than "nothing bundled"; don't bundle for Production unless you have to (and have performance data to back it up).

I know Vite inherited the first part, but I still mostly disagree with Vite on switching the default of that last part: it always bundles Production builds and switching that off is tough. Bundling is already starting to feel like YAGNI for small-to-medium sized websites and web applications between modern browser ESM support, modern browser "ESM preload scanners", and ancient browser caching behaviors, even without HTTP/2 and HTTP/3 further reducing connection overhead to almost none.
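The unbundled setup described above is just native modules plus preload hints; a sketch (paths are illustrative):

```html
<!-- Native ESM: the browser's preload scanner can discover the module graph
     early, and each module is cached independently between deploys. -->
<link rel="modulepreload" href="/js/app.js">
<link rel="modulepreload" href="/js/store.js">
<script type="module" src="/js/app.js"></script>
```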

But a lot of projects haven't noticed yet because the webpack default, the Create-React-App default, the Vite default, etc. is still "always bundle production builds", and it isn't yet as obvious that you may not need it and that we could maybe move away from bundlers again on the web.


Vite is driven by pragmatism. They are bundling for prod because it is pragmatic.

If they did not bundle for prod, the initial load would trigger a flood of network requests and feel slow as a result.


I think a lot of people assume this is the case for every website/web application without actually watching their network requests and having hard data on it. Like I said, small to medium sites and applications in some cases start to be more efficient with ESM modules than ever before (because it is the ultimate code-splitting), even before you get into newer protocols like HTTP/2 and HTTP/3, which largely eliminate per-request connection overhead and make a lot of old bundling advice outdated at best.

Plus, ESM modules always load asynchronously without DOM blocking so in any and all "progressive enhanced" sites and apps, it may feel slow to hydrate parts of it, but you generally aren't going to find a better performing experience while it hydrates. The page itself won't feel "slow" to the user because they can start reading/scrolling faster.

Some of it comes down to your framework, of course, and if you can afford SSR or are using a "progressive enhanced" style of site/application. For one specific counter-instance I'm aware of, Angular's NgModules dependency injection configuration mess routes around ESM's module loaders and creates giant spaghetti balls the browser thinks it needs to load all at once. I can't see recommending unbundling Angular apps any time soon as long as they continue to break the useful parts of ESM style loading with such spaghetti balls. Other frameworks, especially ones with good SSR support/progressive enhancement approaches are increasingly fine to start using unbundled ESM on the modern web.


I thought HTTP/2 made the number of network requests a non-issue (at least for 100 files or so). The only advantage of the larger bundle is through compression.


Even compression is maybe somewhat a non-issue with Brotli compression options (now supported by every major browser and often "required" for strong HTTP2+ support) and its hand-tuned dictionary for things like common JS delimiters/sequences. With Gzip you do sometimes need large bundles before its automatic dictionary management algorithms pick up things like that. In theory, Brotli always starts from a place of more "domain knowledge" and needs less "bulk" for good compression.


Have you ever tried to load a development build of a project with 3000 modules in Vite? It’s like a webpack build all over again every time you load the page.


I didn't have much of any trouble in Snowpack with even the largest projects I worked on but at least with Snowpack it was easy enough to target "freeze" a sub-tree of modules into Snowpack's "web_modules" with esbuild. I don't know about Vite in that case as I haven't been convinced to actually try Vite despite Snowpack's shutdown endorsement. Recent stuff for me has either been projects still using webpack for platform reasons or ESM with hand-targeted esbuild bundles for specific module trees (libraries) without using Vite or Snowpack above that. About the only thing I feel I'm missing in those esbuild workflows is a good HMR dev server but the simplicity of using fewer, more targeted tools feels handy to me.


The folks working on Snowpack didn't just give up. They transitioned over to working on Astro. It was a very positive move. I see great things coming out of that project as we move into a more "server rendered" future.


JS has 10x the number of developers of all other languages combined; that translates to a lot of ideas on how to progress the different avenues of JS land.


It is also the primary language taught to bootcamp developers looking to get started, and so a lot of suggestions and ideas come from people without any real experience.


very much doubt that it's the bootcamp-devs-without-real-experience developing new-gen bundlers and transpilers


Indeed they do left pad instead.


Build tooling stability is one of the great undersold benefits of the ClojureScript ecosystem IMO.

The build process is defined by the compiler (using Google Closure tooling under the hood) and has not significantly changed since the introduction of ClojureScript in 2011.

Since all CLJS projects use the same compiler, the build process works the same everywhere, regardless of which actual build program is being used (Leiningen, shadow-cljs, etc).


And that's one of the reasons why people don't use clojurescript. Arbitrary npm package import and interop was not possible until late 2018 with shadow-cljs. Build tooling "stability" is only a thing if you believe in reinventing every wheel, not using third party dependencies, and pretending that the half-baked "clj" tool doesn't exist.


> Vite

On large applications Vite is fast to build, but loading your application in browser issues thousands of http requests, which is tragically slow.

Esbuild is basically instant to build, and instant to load on the same application. It’s a shame it doesn’t do hot reload.

If Turbopack can give us the best of both worlds, then that’s absolutely an improvement I want.


Yeah, working on an app with >8k modules, seeing >4s page loads on refresh. It's also weird how it wants to force you not to use deep imports on package deps, but then doesn't want you using index files in your own code.

I believe there are areas for improvement here that Vite can make though. They need to implement finer-grained browser-side module caching, dependency sub-graph invalidation (Merkle tree?), and figure out how to push manifests and sub-manifests to the browser so it can proactively load everything without needing to parse every module in import order, making network requests as it goes.

Lots to do lol.


> I just wanna `script/build` or `yarn dev` and not think about it anymore.

Parcel might be a good fit for you: https://parceljs.org/


We've already migrated our old webpack rails front end to esbuild as of a couple months ago, and couldn't be happier.


Is rollup still a thing?


What do you mean by "still"? I just learned that it exists a month ago! I wish I was joking...

JS is not my main language, true, but damn it's hard to keep up. I think it took me less time to be productive with Typescript than with webpack.


Yeah I just learned about Vite a few days ago, and Turbopack today.

Apparently rollup is used under the hood by Vite, and the plugins and config syntax are compatible, but I couldn't get my rollup config to work with Vite.


Yes, e.g. Vite uses Rollup for production builds. Vite uses esbuild for development with plans to make it usable for production builds.


Rollup is used by some of these tools. For instance vite uses rollup for production builds.


I will be completely honest: Next.js is amazing, and with each version it is getting better. This Turbopack thing, I think, was a mistake and is a distraction. This is not a problem anyone had, at least not that I know of. Vite is amazing, and they should've built on the work of others.

This is one of those Rust things: we just have to build everything from scratch :).


Feels like the rails community is going to be confused by the naming.


The Rails community has been confused for the last X years by having to switch JS toolchains several times without anything getting easier or faster.


The webpacker rails 6 approach was a mistake that has been fixed with rails 7. Rails 7 offers a very good solution.


If it combines the flexibility of Webpack with the performance of esbuild this is going to be great.


Parcel is another one of these that started promising but the longer we use it the more issues crop up (caches need to be pruned, performance issues).


Curious if this will be supported by react-scripts. Or is it Next.js-specific?


It's _currently_ Next specific, but we plan on making it generic and usable in any setup.


that's a crucial detail that I did not read in the article. I've seen Next.js mentioned a lot, but haven't gotten to trying it yet. Reading the article, I wasn't sure if this meant I would have to adopt Next.js. I definitely suffer from build tool configuration fatigue, so if it had to be through Next.js (a new system for me) it's a lot less interesting for me, in this moment.


I read this while taking a break from trying to find the problem with our pnpm/turborepo monorepo. Works in a clean devcontainer, fails in CI, fails on multiple dev machines.

Currently, I’m skeptical.


Are the benchmarks used for generating the results on the website available for us to poke around in? https://turbo.build/pack

I'm curious about the "Cold Start" time for 30k modules taking 20s, which still doesn't feel like the best possible experience, even though it's a significant improvement over the other contenders listed.

Is there a separate, faster "Warm Start" time after everything is cached? Or is this it?


Been loving vite these days and can’t think of a compelling reason to leave that ecosystem - but I’ll take this for a spin for sure.


You either die a JavaScript fanboy, or you webdev long enough to see yourself walk away from the unending parade of needlessness.


Vercel is on a streak. I am curious whether Turbopack eclipses Vite like Vite eclipsed Webpack over the past 12 months.


If the configuration is a nightmare like webpack then not likely.


Webpack has a reputation that was deserved in early versions. More recently it's not so bad unless you have very specific needs. If all you're doing is building a standard app from things like JS or TS, a CSS processor, and maybe a framework you probably won't need more than half a dozen lines of config that you can paste from Webpack or your framework's docs.
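As an illustration of that "half a dozen lines" case, here is a hedged sketch of a small modern webpack config for a TS + CSS app, mostly pasteable from loader docs (the loader choices and paths are assumptions, adjust for your stack):

```javascript
// A minimal modern webpack config sketch: TypeScript + CSS.
// Loader names are the commonly used ones; swap in your own stack's.
module.exports = {
  mode: 'production',
  entry: './src/index.tsx',
  module: {
    rules: [
      { test: /\.tsx?$/, use: 'ts-loader', exclude: /node_modules/ },
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },
    ],
  },
  resolve: { extensions: ['.tsx', '.ts', '.js'] },
};
```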


Currently looking at a 1000 line webpack config..........


You're either doing something complicated or you're not using Webpack very well. Most Webpack config I've worked with isn't like that.


It's justifiable for the level of complexity it is handling, but it does mean that trying to switch to a different build process is gonna be painful.


What does your 1000-line webpack config do?? I think part of the problem with webpack is it gives users the freedom to create 1000-line configs even though they’d be fine with a smaller, simpler config


The problem with webpack is that freeform JS code is not "configuration". Configuration presumes a fixed set of features driven by declarative statements. Most of what I've seen done "in Webpack" is in fact shitty scripting in an awful language. I'm very happy to see movement away from this criminal waste of everyone's time.


Not gonna get into the details, as it's company proprietary code, but we aren't talking about a weekend project. This is a large, complex product that has a significant user-base, and it also has to handle a bunch of legacy stuff.


Sure, I did not mean you are doing something bad. I just happen to hate JS and Webpack with passion ;)


No worries! I like js, I'm just feeling the pain of trying to move off webpack.


It is still in alpha; I am guessing a lot of backward compatibility might still be added to it before the stable release.

And the article testing it exclusively against Next.js builds could be an indicator of how optimized it is for that particular meta-framework.


according to the article, Turbopack is 10-20x faster than Vite. I wonder how they were able to edge it out so significantly!


I would guess they're factoring this in:

> Turbopack is built on Turbo: an open-source, incremental memoization framework for Rust. Turbo can cache the result of any function in the program. When the program is run again, functions won't re-run unless their inputs have changed. This granular architecture enables your program to skip large amounts of work, at the level of the function.

Which would make that multiple an extremely rough estimate, highly-dependent on the situation. But it's still an exciting development
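The quoted idea can be sketched in a few lines. This is NOT Turbo's actual API (Turbo is a Rust framework), just a rough illustration of input-keyed memoization, where a call with unchanged inputs skips the work entirely:

```javascript
// Rough sketch of function-level memoization: results are cached by the
// function's inputs, so unchanged inputs skip re-running the function.
const cache = new Map();

function memoize(fn) {
  return (...args) => {
    const key = fn.name + ':' + JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args)); // only runs when this input is new
    }
    return cache.get(key);
  };
}

// Stand-in for an expensive build step such as transpilation:
let calls = 0;
const transpile = memoize(function transpile(source) {
  calls += 1;
  return source.toUpperCase(); // placeholder for real work
});

transpile('const x = 1;');
transpile('const x = 1;'); // cache hit; `calls` is still 1
```

The hard parts Turbo has to solve beyond this toy version are tracking which inputs a function actually read and invalidating dependents when those change.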


I wonder how they make that robust, given that Rust isn't pure and there's no way to force a function to be pure either.


It's not very hard to stick to a pure-ish style even in languages that don't statically enforce it; JavaScript's MobX framework has the same constraint, as do React functions, and computed properties in Vue, etc. You don't really get any static help with functional purity in those cases, but thousands of programmers do it successfully every day

And while Rust doesn't strictly enforce purity, it does make it a lot easier to stick to in practice with its explicit `mut` keyword for local variables, references, and function arguments (including for the `self` argument on methods). There are various ways like RefCell to get around this if you're really trying to, but you have to go out of your way.


The answer seems to be caching (https://turbo.build/pack/docs/core-concepts), lots and lots of caching. Which enables it to reuse a lot of already computed stuff for incremental rebuilds.


Which makes me wonder if the speed comparison graphics are solely about repeat builds.


Vite uses esbuild for transpiling and rollup for bundling. esbuild (written in go) is already pretty fast for transpiling but there might be a lot of possibilities to optimize in rollup as rollup is written in JavaScript.


Coincidentally, the primary maintainer of Rollup mentioned plans for a rust rewrite in his recent talk at Vite Conf.


Incremental compilation plus not having each dependency be its own network request (I work on Turbo, but not Turbopack).


Rust is a hell of a drug.


My guess is some combination of dropping support for older browsers, supporting fewer features, and optimizing for the common case at the expense of edge cases. In a few releases, once all those are added back in, it will be no faster than Vite.


Ah, so this is why Vercel hired Tobias. Makes sense, and it could only gain traction with a company like Vercel behind it. Webpack is legacy now, supplanted by Rollup/Vite, ESBuild, et al. The only folks I know still using webpack are stuck with it (e.g. Next users)


Remember jikes - the java compiler written in c (or something)?

I think writing build tools for JavaScript in anything other than JavaScript (or language that compiles to js) is a dead end.

How would you write a plug-in to this? Or a programmatic configuration. So much gained from staying on js.


On the flip side, it’s very hard (sometimes impossible) to make highly performant abstractions in JavaScript, which makes it difficult to make fast tools which do complex work. Most JS-based programs choose to either be fast and difficult to program, or easy to program but slow.

The advantage of having an incredibly fast build tool can already be seen with Vite and esbuild, which hot-reload modules at speed. Fast tools can change the way work is done.


I doubt it; most speed improvements come from taking a different approach (i.e. caching, or skipping time-consuming typechecks), not a difference in the speed of the language or platform.


My point is not that it’s impossible to write fast JavaScript, it is that creating performant abstractions is very hard in JavaScript. So performant code ends up not using many abstractions, and becomes difficult to write and maintain. Or well-abstracted code which is easy to write and maintain is slow.


A rather niche question: the blog says Turbopack is fast thanks to using Turbo, a library for incremental memoization. I wonder how Turbo compares to Salsa, which was developed in tandem with rust-analyzer.


How does it compare to Bun? That is what got me excited after a very long time.


The Turbo memoization framework sounds interesting, but I don't see any code samples for what it looks like for Rust users, or how it compares to other Rust memoization/caching libraries…


Documentation here is one of the areas we need to work on. There's a bit in https://github.com/vercel/turbo/blob/main/architecture.md#tu...

A small example of code that uses this is https://github.com/vercel/turbo/blob/main/crates/turbo-tasks..., which defines how we can load a `.env` file using FS read caching, cached `read` and `read_all` functions, etc.


Ah, very helpful/interesting – thank you! Seems neat.


In several places it is mentioned as a Rust library, but I don't see any links to it. Using google search I'm not finding a Rust library named "Turbo". Is the library open source, or available on crates.io / in a public repo somewhere?


Right, they link to this docs page when referring to the memoization library: https://turbo.build/pack/docs/core-concepts

And the github link at the top of the page links here: https://github.com/vercel/turbo but, despite being called "turbo", that seems to actually be the repo for Turbopack (the webpack alternative) not "Turbo" the library.

Even digging a bit into the crates, I'm not sure where this supposed library lives: https://github.com/vercel/turbo/tree/main/crates


The engine here is the crate called "turbo-tasks"; the "turbo-tasks-*" crates extend it with more features. See a bit in https://github.com/vercel/turbo/blob/main/architecture.md#tu...


gotcha, cool – thank you!


The content of that "Core Concepts" page sounds a lot like https://github.com/salsa-rs/salsa


What sets webpack apart from other "bundlers" is its plugin ecosystem. I use a couple of plugins in my workflow at the moment, and that's the main reason I won't migrate to Turbopack for some time. At least not until they get ported. https://turbo.build/pack/docs/migrating-from-webpack#will-we...
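For context on what that ecosystem looks like, a minimal webpack plugin is just a class with an `apply` method that taps compiler hooks. This one is hypothetical (the name and what it logs are made up), but the `compiler.hooks.done.tap` shape is webpack's real plugin API:

```javascript
// Minimal hypothetical webpack plugin: logs how long the build took.
class BuildTimePlugin {
  apply(compiler) {
    // `done` fires after a compilation finishes and receives a Stats object.
    compiler.hooks.done.tap('BuildTimePlugin', (stats) => {
      console.log(`Build finished in ${stats.endTime - stats.startTime}ms`);
    });
  }
}

module.exports = BuildTimePlugin;
```

The years of plugins written against this hook system are exactly what a successor has to port or replace.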


I'm using Rollup now over webpack. The landscape of JS is starting to diverge again, so Rollup isn't exactly solid ground for me; I'm waiting for the next big migration to get swept up with. Maybe Turbopack is it, maybe not. From what I'm gathering, Turbopack is like webpack, only faster. That's OK, but I'm not exactly a fan of the whole webpack system with its loaders, plugins, etc., and arcane bugs and fixes.


If I want to write a plugin does it have to be in rust? I like rust but we're not going to add rust tooling to our dev env just for one plugin.


We plan on supporting both. We're still in the alpha phase, and have not designed the plugin API yet.


It’s surprising to me that new bundlers mainly advertise their speed. There are other things that are so much more important for a great DX.


Well, crap. I literally just moved to Vite.


At this point, Vite is mature and production ready. Turbopack is still just in alpha. Doesn't even have plugins yet.


Thank you!


Is there any actual proof that Turbopack is 10x faster than Vite at anything? I can't find any benchmark showing it.


I'm a little skeptical as well. Vite is fast

I just checked a code base that's at ~900 modules, and vite is ready to serve dev requests in well under a second.


That is misleading. Open the browser and request the app, and you'll see the first cold load take many, many seconds, since Vite hasn't loaded and compiled the app yet.

If you reload, the second time will be much faster since it will serve from its memory cache.


That does take a few seconds, for me. But I could see how that could get painful on larger apps.

I'm not trying to mislead... but it would be good to know what they are actually measuring.


They are measuring the time until Vite is ready to serve requests, but then they compare it to the time of the webpack dev server, not mentioning that webpack will have everything compiled at that point, unlike Vite.

Vite is indeed much faster than webpack, but they use this highly misleading "Vite ready to serve in 100 ms" stat.


Just note it's not Vite but esbuild that's doing the dev bundling.


Here is the benchmark section of the site, although I don't see a definitive source for what is being run.

* https://turbo.build/pack/docs/benchmarks


If they can come close to the beauty that is esbuild documentation, changelogs and focus on details it'd be interesting, but do we really need another tool that only touts performance and ignores the rest?


Is nextjs the modern rails?


Kinda, but not really. There are still lots of parts you'll have to implement yourself if you want to create a CRUD app. They mainly focus on the frontend part, with help from the backend.

Combine Next.js with https://blitzjs.com/ and you'll have something that looks like Rails.


To throw another in the ring, I recently stumbled on https://github.com/t3-oss/create-t3-app. Lots of buzzword tech in there, but the setup looks very interesting.

Looks like blitz is tRPC + next auth?


> Looks like blitz is tRPC + next auth?

From what I understand Blitz is their own implementation of tRPC, their own Auth system, Prisma and Typescript support. Blitz is also more of a toolkit that wraps around NextJS currently, but later on Remix, Svelte and Vue.

In future (probably in '23 and post 1.0) some things that may be coming are:

- backend functionality ala NestJS

- form component library

- permissions (rbac) library

- file upload library

- and a bunch of other stuff I'm forgetting right now.


Blitz actually came first, so trpc is an alternative implementation of RPC.

Blitz auth was developed at the same time as next auth, and takes a more imperative approach which allows you to build more custom auth flows.


No. It's been a while since I played around with it, but it's a front-end framework that depends on React and is really good at generating static pages.


Yes, it really is. Startups these days aren't using traditional Rails as much as either full-stack TypeScript with React (a la Next.js), or Next to build an SPA hitting a more full-featured REST API.

There is currently a lot of cutting edge engineering focused on the Typescript ecosystem (Vercel, Cloudflare, Deno, etc) vastly outpacing things being done around Rails, Django, PHP, etc.


oh yay more fragmentation


Excited to play around with this. Congrats Tobias, Jared, and team!


This now has really started to feel like the last days of Rome.


For context: https://rome.tools/



Thought I was on Reddit and not HN for a second


Nah, for extra context you are here: https://news.ycombinator.com/item?id=33334857


I think this might have just made their lives easier, from the projects website:

> Rome is designed to replace Babel, ESLint, webpack, Prettier, Jest, and others.

Since both projects are written in Rust, the Rome team could use Turbopack to build/bundle projects and focus on the other features they are planning: https://rome.tools/#supported-features


Bread and Javascript bundlers.


I am wondering if Rails moved to esbuild a little too early.


You know, this thing is created by the creator of webpack. The site looks very polished. His name, photo, and fancy autograph are there. Seems there's too much ego involved in this project.

I tend not to trust the architecture/quality/performance of people who made my programming life worse. :)

I get a lot of deno feels here. I'll just be happy with esbuild.


you weren't kidding; that autograph is pretty tacky. the ego and marketing that goes into JS tooling is nauseating.


I'm a huge fan of esbuild. Whether it's 'the best' at everything or not, our build process is now so simple I can understand it with barely any effort. Webpacker never gave me that confidence.

Not mentioning speed at all here, it was never my biggest concern.


This is using SWC under the hood, which as a transpiler is usually slower than esbuild. What makes Turbopack faster is the caching, not the transpiler. Rails (or Vite, or whoever) can implement similar caching, speeding things up as well.

Why I wouldn't choose esbuild is that it doesn't support the automatic React runtime and doesn't seem to have plans to (or at least that was the case last time I checked). SWC does… So as long as you're okay with that limitation, I imagine you're probably fine.

You could also potentially use Bazel for remote caching in your Rails app, though I haven't used it myself so I don't know how well it would work.


Automatic JSX runtime was added to esbuild in 0.14.51: https://github.com/evanw/esbuild/pull/2349
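Opting in looks roughly like this via esbuild's build API (the entry/output paths here are made up; the CLI equivalent is `--jsx=automatic`):

```javascript
// Hedged sketch: enabling the automatic JSX runtime in esbuild 0.14.51+.
// Paths are illustrative placeholders.
require('esbuild').build({
  entryPoints: ['src/app.jsx'],
  bundle: true,
  jsx: 'automatic', // emits react/jsx-runtime imports instead of React.createElement
  outfile: 'dist/app.js',
}).catch(() => process.exit(1));
```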


Cool, thanks for saying!


esbuild remains a fantastic, stable, well documented choice. It does one thing, and does it very well, and is well beyond "version 1.0" quality software.

Comparing an alpha just open-sourced today to an established tool like esbuild isn't even a fair comparison, for either tool.


The Rails https://github.com/rails/jsbundling-rails gem lets you pick between esbuild, rollup and Webpack. If Turbopack ends up being popular then jsbundling should be able to support it.

The nice thing about Rails now is there's no massive direct integration like Webpacker once was. Now we can basically use the JS tool straight up and Rails will just look at assets in a specific directory, it doesn't matter what tool generated it.


esbuild is just fine. It's so much easier to use than webpack. Speed is not only reason why people like esbuild over webpack.


Was Vercel the original creator of webpack?


The Turbopack project is led by Tobias Koppers, creator of Webpack


It is highly recommended to try hel-micro (https://github.com/tnfe/hel) with Turbo, because hel-micro is a runtime module federation SDK that is independent of any toolchain, so any build system can adopt module federation technology in seconds.


Does it have module federation?


You can try hel-micro (https://github.com/tnfe/hel), a runtime module federation SDK that is independent of any toolchain. That means any build system can adopt module federation technology in seconds; users do not need to be bound to the webpack ecosystem.



I'm trying it out right now and currently figuring out how to add tailwindcss as a sidecar process since it's not supported yet. I'm so excited about this! I can already see myself pitching this to our project management in a few months.


I wonder if I can use this to replace Rollup for my Svelte SPA...


Not yet. They don't have plugin support yet. And they are prioritizing Nextjs for obvious reasons


How does it handle native libraries in non-main entry points?


Stuff like this is why I obsessively read HN. Love it. More nice work from Vercel!


LESS GOO


This looks great, but I feel like calling this a successor to Webpack in its current state is disingenuous, considering the roadmap literally says: “Currently, migrating to Turbopack from Webpack is not yet possible.”

At least the marketing website is snazzy ¯\_(ツ)_/¯


It's led by the creator of webpack, and they are working on an incremental migration path. What more would you expect? If you want to keep the exact same API, stay with webpack...


Indeed. Calling it a "spiritual" successor would have been more honest.


Heir apparent maybe?


Not to mention explicit language about _not planning to support the Webpack API 1:1_...


As Lee Robinson mentioned and as I had said before [0], Rust and other compiled languages are the future of tooling. We should optimize for speed for tooling, not only whether people can contribute to it or not. We can always have a JS layer for configuration while the core runs in Rust, much like for many Python libraries, the interface is in Python while the computation happens in C++.

I was also looking forward to Rome (a linter, formatter and bundler also built in Rust but still under development) [1] but looks like Vercel beat them to the punch.

[0] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[1] https://rome.tools


How often does bundling happen, relative to loading the bundle, that 5 seconds of savings is pertinent?

There's this fairly recent notion that speed rules everything. Perhaps maintainability and releasing non-alpha-quality software could have a day or two in the sun as well?


On a large team, it happens quite often in the CI infrastructure, as well as testing on local machines.

> There's this fairly recent notion that speed rules everything

Recent? I'd say it's very old, then got outdated as everyone moved to dynamic slow languages like Ruby (for Rails), Python (Django) and JS because of their perceived productivity improvements over typed languages like Java around 15 years ago. The pendulum is simply swinging in the other direction now.


> We can always have a JS layer for configuration while the core runs in Rust, much like for many Python libraries, the interface is in Python while the computation happens in C++.

And this behind-the-scenes-native-code-but-pretending-to-be-Python approach is the reason why most Python stuff I try fails to run on the first tries.

Half of them I give up because I don't get them running after going down the weirdest error rabbit holes.

No matter whether Debian, Ubuntu, Windows 7/10, Cygwin, WSL2, Raspberry Pi, x64, ARM ...


What are you running? I've done multiple large deployments for machine learning work and they all work fine: stuff like numpy, scipy, pandas, pytorch, etc., which are the primary technologies I've seen written in C++. I wouldn't expect normal libraries to be so, however, since they're likely not speed-critical; Django, for example, I'm pretty sure is just Python.


Turbopack is just a bundler, while Rome is the whole package. I think only Deno strives for the same, but still can’t replace things like eslint


Sure, but Rome is being eaten by the other things Vercel has, such as SWC replacing Terser and Babel. Pretty soon all of Rome's niches will be already met by other tools, I'd wager. I think Rome simply does not have a large and well-funded enough team compared to Vercel and others like Vite.


I think this could benefit Rome. Both projects are written in Rust and so far they have a linter and a formatter and now they can integrate Turbopack, package it behind "rome build" and don't have to build their own bundler and transpiler.


IMO, the whole thing behind Rome is that it will have one tool to do everything (lint, prettify, minify, build, etc.), so every part of the pipeline remains in sync. Using other tools for parts of it (build, etc.) would be against the philosophy.


Is Rome being used anywhere? I haven't heard about it for a while. Kinda figured it was dead tbh


Too much marketing speak I think.

> We're planning Turbopack as the successor to Webpack. In the future, we plan to give Turbopack all the tools needed to support your Webpack app.

> Currently, migrating to Turbopack from Webpack is not yet possible. In the future, we're planning to offer a smooth migration path for all Webpack users to join the Turbopack future.


JavaScript is basically a Game of Thrones spin off by now.


I think the name isn't a great choice, given the already established Turbo ecosystem for Rails/Hotwire. Perhaps Rails can create a gem that makes migrating from one version to the next easier, a Version Compatibility Enhancement Library, calling it VerCEL for short.


When I built the Q Platform originally, I put so much work into it, but showing it to people I mostly got one piece of feedback without them even looking: "it's called Q... and there is a library for promises called Q that is way more popular. Rename yours."

I tried to explain that it was a completely different field, and that this was a web development framework. And that we had started ours way before this library for promises. But people told me I was foolish, and refused to look at it. I think it was here on HN somewhere around 2012 or so.

Well, promises are now native in browsers and no one remembers the Q library. But I did rename our library to "Qbix". I wish though that I hadn't listened to them... things come and go, just name it what you like!



The "original" Turbo is from Turbo Pascal. There still is the ancient Delphi library/component collection of Turbo Pack around: https://github.com/turbopack


Naming is hard; it's hard to expect everyone to be aware of every ecosystem's names and trends and to choose something non-conflicting


Sure, though I think I'd at least Google "javascript {my cool name}" and see what comes up (in this case, the top result is Hotwire)


My prayers for another JavaScript bundler have finally been answered!


There's always such a mixed response when new web tooling gets posted here that I really can't tell if this is sarcastic or not.

Regardless, I like that this is fast and it is probably going to get a lot of adoption since it will be included with Next.js.


I'm sure it will be good, it's just so JavaScript for there to be another one of these.


Very nice



