Snowpack 2.0 (snowpack.dev)
322 points by pspeter3 44 days ago | 141 comments



Some of this wording confuses me and should probably be reworked:

> Snowpack is a O(1) build system… Every file goes through a linear input -> build -> output build pipeline

Seems like O(n) to me?

> Snowpack starts up in less than 50ms. That’s no typo: 50 milliseconds or less. On your very first page load, Snowpack builds your first requested files and then caches them for future use

So you can open a socket in 50ms? Seems disingenuous to imply anything takes 50ms when really you’re just waiting until the first request to do anything.

Looks like an interesting project though.


I watched the linked talk by Ives, the guy behind CodeSandbox.

So the exact quote in that talk is:

"... and if you take the existing module map (file), it essentially replaces (function) #1 with this (new) one. And this is how bundlers like Metro work, for example -- They regenerate the file, and over a websocket connection, they send the new file and say "replace function #1 with this one, and execute it."

He then touches on how bundlers do a lot more than just transformation in this step: other processes like chunking, tree-shaking, code-splitting, etc. And this makes it incredibly difficult to properly cache a file, because you can't really get a good heuristic for diffing/patching.

And then the quote of the hour:

"But what if bundling had an O complexity of 1? That would mean that if one file changes, we only transform THAT file and send it to the browser." (no others in dependency tree)

Above quotes start here, and go for about 2 minutes:

https://youtu.be/Yu9zcJJ4Uz0?t=1018

----

In this context, what I believe Ives is aiming for is the idea that's pervasive throughout the talk, which is this "50ms or sub-second HMR time."

If you overlook the slight variance in update time between files that contain more or less content, and you average it out to "50ms, every file, any file", I think that would qualify as O(1) bundling/hot-reloading, right?


It’s certainly not O(1).

However, I think the intuition and excitement exist around the ideal bundler, and I think this project provides hard evidence that we can strive for such things with incremental success.


Maybe I'm misunderstanding, but it seems clear to me: Other bundlers = change one file, `n` files are rebuilt/bundled. Snowpack = change one file, only that one file is rebuilt. Building "from scratch" will necessarily be O(n), but incremental rebuilds can be O(1), no?


An incremental rebuild will still be O(n), but in this case n=1. This isn't the same as O(1) where, regardless of the input size, the number of operations will remain the same. This isn't just being pedantic: O(1) implies they have a separate algorithm that doesn't grow as input size grows, which is totally separate from intelligently running an algorithm whose runtime increases at a constant rate. While in this case the resulting runtime will be the same, the nuances implied when they say something is O(1) will not be true.


Surely that depends on what you call `n`; they are using `n` to refer to the number of files in the project.

I would agree that the term is a bit of "shorthand" that doesn't perfectly map to the idea of big-O complexity - for exactly the reason you mention, build time still depends on the size of the file. But for me, it was shorthand that helped me understand the idea. They had two other options for how to write this - either leave out the big-O stuff entirely, or explain it more deeply, e.g. "most bundlers are O(lmn) where l is the number of files, m is the number of files changed, n is the number of JS tokens per file" or something. Both of these options may have been more "technically correct" but would've taken me longer to grok the idea than the way they have it written. Maybe they should just have a footnote for the pedants explaining that O(1) is more of an idiom than a technical claim in this case :P


Yes, exactly. O(n) means that as n (the number of files in the project) grows, so does the runtime complexity. O(1) means that as n grows, the runtime complexity remains constant. In this case, they're always using an input of size n=1, but this doesn't make the algorithm itself O(1). By calling this an O(1) operation, they imply that you could rebuild your entire project at the same rate you can rebuild a project with just one file changed. This is misleading and untrue, which is why it's not peedantic.

It wouldn't be a problem if it weren't on a technical page like this, where this distinction can have large implications (such as the one above). When I read that it was O(1) I drew conclusions that were both very favorable to Snowpack and also completely untrue. It'd be like if you said you drove a truck but you actually drove an Accord. It's probably fine to say that 99% of the time, but it could cause issues if you say it to your mechanic, because they'll conclude things from what you said that might not be accurate.


If you have a project with 1000 files in it, how often are you editing all 1000 of them at the same time? Virtually never from my experience.

The way they've structured "rebuilds" is to only "build" (or really, probably do very little if anything) just that one file you edited and saved.

Yes if you edit all 1000 files it's going to take longer.

"In theory there's no difference between theory and practice. In practice, there is." I think this is one of those cases where in practice it's awfully close to O(1) so while they're technically incorrect practically it's difficult to tell the difference.

This is especially true compared with something like Angular, where you're looking at many, many seconds of build time to get started. I think it's laudable.


> If you have a project with 1000 files in it, how often are you editing all 1000 of them at the same time? Virtually never from my experience.

For the purposes of Big-O notation, this does not matter. If they didn't need its semantics, they should not have used the notation. Simple as that.

It's still O(n), regardless of the fact that they have optimized constants and in the best case it may be less than that. Irrelevant. Big O establishes an upper bound. Bubble sort is still bubble sort; we don't care if it is quick when you are ordering two elements.

Maybe they meant to advertise Ω(1) on the best case, and compared to other build systems?

EDIT: another poster says that "n" refers to the number of files in the project. Still misleading. Usually we are interested in how the complexity grows as the number of inputs grows. Big O purposely discards constants.

They could say that other build systems are O(n) where n is the total number of files, while this one is O(n) where n is the number of modified files. It's immediately clear then how this is better for the build use-case, while still making it clear how efficient it is as the input size grows.


> They could say that other build systems are O(n) where n is the total number of files, while this one is O(n) where n is the number of modified files. It's immediately clear then how this is better for the build use-case, while still making it clear how efficient it is as the input size grows.

That's a great, concise and clear articulation. The project would do well to quote you on that!

If anyone's still struggling with the difference between O(1) and O(n), there's a common example with lists that might help:

1. Getting the head of a list is usually O(1). It doesn't matter how long the list is, getting the head element always takes the same amount of time.

2. Getting the length of a list is usually O(n). The time taken to count the list entries grows in proportion with the length of the list.

As an aside, note also that a list with a single entry doesn't make the length function O(1).
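A tiny TypeScript sketch of the same distinction, using a linked list rather than a JS array (arrays store their length), purely for illustration:

    // A minimal cons-list, just to make the point concrete
    type List<T> = null | { head: T; tail: List<T> };

    // O(1): a single field access, no matter how long the list is
    const head = <T>(xs: List<T>) => (xs === null ? undefined : xs.head);

    // O(n): must walk every node, so the work grows with the list's length
    const length = <T>(xs: List<T>): number =>
      xs === null ? 0 : 1 + length(xs.tail);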


It's only misleading if you take it out of context and examine that one sentence without reading the rest of the article. It's perfectly clear what they mean in the post (to me, at least).


And since we are talking pedantry: pedantic is, er, spelt "pedantic", not "peedantic". Sorry, I couldn't resist!


> An incremental rebuild will still be O(n), but in this case n=1.

TFA explains pretty clearly what claim is being made. They're talking about the cost of rebuilding one file, given the "input" of a project with n files. They claim that for other build systems that cost (the cost of one file) scales linearly with total project size, but for theirs it doesn't.

Your objection here redefines n to be the number of files changed, but they don't claim anything about that.


That is misleading, because if I have n project files and I change n files, then it runs n rebuilds, not one rebuild.


They didn't claim anything about the cost of changing multiple files. "Rebuild one file, in a project of size n" is the operation they're examining the performance of.


There is such a thing as best-case, worst-case, and average-case computational complexity.

Also, in this case the complexity we seem to care about is a function of two factors: the number of files and the number of changed files. Call the first one n and the second one p.

The complexity of webpack would be O(n) + O(p) [+] and the complexity of snowpack would be O(p) in development. Worst case p = n, but best case p = 1, and usually in development p is on average very close to 1. Also p is parallelisable, making it appear like p = 1 WRT wall clock time for small p values even though CPU time would be O(p).

[+]: which one of the p or n constant factors dominates is unclear, but some hardly parallelizable n-bound processing is always present for webpack.


Isn't this the same as the most famous O(1) example: hashmap complexity is assumed to be O(1) when it's actually O(n)?

https://stackoverflow.com/a/4553642


Yeah, I thought it was easy to understand. Single file builds are O(1) in the number of files in the project. This is probably better described as "incremental compilation" or some variant, though.


I think other bundlers/build systems can also do incremental builds (although this may be implemented as modules in some systems, and not be part of the default system).

For example, lastRun in gulp (which is part of the default):

https://gulpjs.com/docs/en/api/lastrun/


Other bundlers would be O(n^2).


>> Snowpack is a O(1) build system… Every file goes through a linear input -> build -> output build pipeline

> Seems like O(n) to me?

Big O notation describes how, for a given operation (sorting a list), a given measurement (e.g. items compared, memory used, CPU cycles) grows in relation to a set of variables describing the input to the operation (number of elements).

Here, their operation is 'a single-file incremental compilation'; they're measuring how many files need recompiling in total, and the only variable n is 'the number of files in the project'.

In that case, Snowpack is O(1), and most other bundlers are O(n). They're not wrong.
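To make that framing concrete, here's a toy cost model; these functions are purely illustrative, not anything from Snowpack's code:

    // Files recompiled after editing ONE file, as a function of
    // total project size n
    const bundlerFilesRebuilt = (n: number) => n;   // re-bundles the graph: O(n)
    const snowpackFilesRebuilt = (_n: number) => 1; // only the edited file: O(1)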

You can argue that that's not an interesting point, or that there are other measurements that matter more, ofc. It's not wrong though, and imo incremental single-file changes are a thing you care about here, and only ever processing one file in doing so is an interesting distinction between Snowpack and other bundlers.

Big O notation never defines every detail of the operation in terms of every possible conflating variable. All Big O comparisons include implicit definitions: the exact same sorting algorithm could be described both as O(n log n) for item comparisons required per n items in the list and as O(kn^2) for memory copies required per k bytes of the largest item & n items, and both are accurate & useful descriptions.

(Sorry to be pedantic, but there's a _lot_ of replies here that have misunderstood big O entirely)


If we're being pedantic,

> Some bundlers may even have O(n^2) complexity: as your project grows, your dev environment gets exponentially slower

that's clearly not exponential!


Yeah, but no one really says "quadratically slower" in the real world, while "exponentially slower" is used by regular non-CS/math/stats people. It's a forgivable inaccuracy.


If they're referencing Big-O notation, in my opinion it is fair to assume that the people reading it are CS/math/stats people.


"Exponentially slower" is flat-out false in this case, though.


Maybe we are mean and they are right, who knows: they said "exponentially slower", which might be true if somehow executing the instructions of the O(N^2) code runs in O(2^N) time due to CPU/RAM/disk limitations.


Huh? n-squared is an exponent


Sure, but for something to be considered exponential running time it must be <some-constant>^n.

So 2^n would be exponential. However, n^2 is instead quadratic.

Quadratic time complexity is better than exponential.
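(For a concrete sense of the gap: at n = 30, n^2 is 900, while 2^n is already over a billion.)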

Here is a useful comparison of the rate of growth for various time complexities[0]

[0] https://stackoverflow.com/a/34517541/3574076


Quadratic in this specific case, or more generally: polynomial.


Exponential refers to c^N. N^c is quadratic. Small visualization: https://www.onlinemathlearning.com/image-files/exponential-q...


We call that polynomial, not quadratic.


Both are right, quadratic being a subset of polynomial.


It doesn't help that most people in journalism use the phrase exponential growth to describe quadratic growth.


We call that "quadratic". Exponential is when the exponent is variable. (Unless I'm being whooshed.)


No, you're 100% correct.

O(n^[number]) = polynomial, with O(n^2) being quadratic.

O([a number]^n) = exponential.

I guess the authors aren't big on CS fundamentals?


Indeed. It's been a while since I've had to so much as think about big O notation, and I forgot that quirk. To most folks, "exponential" means n^2 or greater.


Most folks are using the word wrong then.


“Superlinear” is the word you’re looking for.


Superlinear can mean O(n log n) or anything like that, which is smaller than O(n^2).


n^0 is also an exponent. Do you consider constant time to be exponential growth?


When family talks to me about covid spreading exponentially, I am pretty sure they mean n times n, or n^2.

Yes, we all know the technical definitions with respect to software and algorithmic complexity. But the lack of sympathy/empathy on display for how a common definition of a word can be misconstrued by someone without an academic background in tech is kind of surprising (OK, well, maybe not so surprising, but c'mon folks, we are all human).


Disease spread, when modeled, is actually exponential.

https://en.wikipedia.org/wiki/Basic_reproduction_number#Esti...


The JS community doesn't give a fuck about using correct technical terms. It's like listening to a 5yo child trying to use adult words; you have to know to interpret them differently.

"Isomorphic Javascript" ~ Same code running in browser and server, has nothing to do with isomorphism in the mathematical sense.

"O(1) build system" ~ It's faster than others, has nothing to do with big-O notation.


Cool, I’ve been writing "Isomorphic Javascript" and I didn’t even realise it.


I just have no idea why someone would think the killer feature of a new new new new new build system for web dev is micro-optimising for speed.

The issue with build systems is complexity, poor defaults, and poor config systems that often have you reading across the docs for the multiple build frameworks they mashed together, trying to figure out how to merge config overrides and handle breaking changes between versions.

Literally the last thing I care about is whether my browser has refreshed fractionally quicker as I switch over from my IDE.


I think they might be claiming to be O(number of files changed) rather than O(total number of files) for incremental compiles.


Sort of. What they're talking about is the time it takes to rebuild during your dev cycle, when you will typically be editing 1 or 2 TypeScript files and then refreshing your browser. In this case n is the total number of files in the project. Other bundlers will be O(n) because they need to bundle all n files together (higher values of n result in longer build times), while Snowpack will still be O(1) because the time it takes to rebuild the 1 or 2 files you edited does not increase with n.


Probably, but saying “each O(n) change takes O(1)” is a pretty confusing way of putting it.


Do they mean O(1) with respect to a change? If you make a change, only one file is rebuilt. Not all of them.


Isn't this how most bundlers work? AFAIK, ParcelJS for example caches results and only updates what changed.


That's a good point. The webpack-dev-server also has incremental builds, so I don't understand what this does differently.


I think the difference is that they don't create a bundle but import the files directly in the browser.


I believe they are saying O(1) since it only builds the single file that you just changed and will never build all your files at once. They also mention that it only builds the files as requested by the browser, so in terms of the compiler it would be O(1); however, it would effectively become O(n) the first time you use it if you count all the individual requests the browser makes as a single unit.


Indeed, there's nothing O(1) about this; it is precisely still O(n) ¯\_(ツ)_/¯


> Seems like O(n) to me?

Yep, it's very much O(n). Even if you only consider incremental compilation it's O(n), where n is the number of lines in the file being compiled.

That being said, I understand the feeling of it being sort of O(1) from an incremental compilation perspective. For most of the JS build ecosystem, changing a file can result in many, many files getting recompiled due to bundling. So if you ignore that the size of the file isn't constant, it seems kind-of-O(1) for incremental compilation: for any file change in your codebase, only a single file gets recompiled, regardless of the size or dependency structure of the codebase. And as a result it should be much faster than the rest of the JS ecosystem for incremental compilation, since typically individual files don't get to be that large, and other incremental build systems may have to compile many files for a line change.

But yeah, from a CompSci perspective, it's O(n), even for incremental builds: as the number of lines of the file grows, the amount of work grows. And for non-incremental builds it's of course O(n).

> So you can open a socket in 50ms? Seems disingenuous to imply anything takes 50ms when really you’re just waiting until the first request to do anything.

This makes a lot more sense in the context of the rest of the JS ecosystem. Of course, what Snowpack is doing is opening a socket, and opening a socket in 50ms isn't particularly impressive (mostly it's just measuring the overhead of starting a Node process and importing various dependencies). But other JS ecosystem build tools are very slow to start, because they're architected differently than Snowpack: they do full builds of the entire codebase (due to bundling) — or at least typically builds of large swaths of the codebase — and so on startup typically they'll immediately start building because doing it just-in-time is slow, which makes them slow to start. And if they don't start building immediately, the first request they service is typically quite slow. Since Snowpack doesn't bundle files, it's able to only build the files a specific page uses, which is typically much faster than building the entire codebase; as a result, they can do on-demand builds when a specific page is requested instead of relying on precompilation.

The 50ms isn't impressive in terms of "look how fast we opened a socket." It's impressive in terms of "look how quickly you can start loading pages to see results as compared to other systems," because their build system is so fast that they don't need to precompile.


> Seems like O(n) to me? As the number of files grows, so does the speed of your builds?

O(n) effectively approximates O(1) with sufficiently low values of n


You have a misunderstanding of O notation.

When we say O(n), we say that as n goes towards infinity, the runtime grows as k * n, where k is some constant.
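Spelled out, the formal definition being used here is:

    f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \le c \cdot g(n) \text{ for all } n \ge n_0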

Your sentence effectively says:

> (k * n runtime as n approaches infinity) effectively approximates (j runtime as n approaches infinity) for sufficiently low values of n.

This makes no sense, because you can't have n approach infinity and also be a "sufficiently low value" at the same time.

I think what you were trying to say is that O(n) algorithms run in constant time (or "fast enough") given a constant input size. But in that case, O notation is meaningless because when we use it, we specifically care about what happens at infinity.

Also, O(1) isn't synonymous with "fast enough"; constant factors can matter a lot. For example, the "fastest" (Big O wise) matrix multiply algorithm has such a large constant factor that using it is not feasible for matrices that fit on today's computers. https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_a...


I think you confused some definitions. What you are saying sometimes applies for O(constant): for example, you can consider O(57) to be O(1) if the number "57" never changes even though the input size changes.

O(n) by definition means that execution time grows linearly with input size.


If we're just going to make up new definitions for existing terminology, why even bother discussing anything?


I started using Snowpack just last week, but I'm not even using the dev server or the bundler part. All I really needed was its ability to convert npm packages into single-file ES modules. Once everything is an ES module, you can just let the browser load them all; no bundler or dev server is needed at all in your dev cycle. The only dev-time conversion needed is the compilation from TypeScript to JS, which my IDE already does instantly whenever I save. Previously this worked fine for all our own code but not for dependencies, so I'm pretty happy Snowpack was able to solve that problem.


Snowpack's web_modules build step produces a single-file ESM bundle for each NPM lib?

I wasn't aware of this, that's actually a pretty cool feature and incredibly useful.

I'm a bit unlearned on ES modules; how are they different from the isomorphic browser/Node single-file bundles produced by Webpack/Rollup?


Yeah, that's basically what it does.

An ES module is just a js file that can be imported by other js files with the "import X from 'module.js'" syntax. This is different from CommonJS modules which use the "var x = require("module")" syntax. Modern browsers (Chrome and Firefox) know how to load ES modules, so if all your js code is written as ES modules you can just load your main.js file and then the browser will go fetch all the other modules it depends on. It used to be that you needed a loader like require.js to handle loading all the module dependencies, but that's no longer the case.

But you run into an issue when you want to use 3rd party libraries that you've installed using NPM. Most libraries still use the CommonJS syntax, because that's what Node uses. Since they're not ES modules, the browser can't fetch them natively and you need either a loader like requirejs or a bundler like Webpack. Snowpack will convert each NPM library into an ES module for you. So you just run "snowpack install" once, then add "import X from '/web_modules/module.js'" to your code, and you're set.

You still need to bundle before you ship your code, because not all browsers speak ES modules and there are still performance benefits from bundling/minifying/etc. But having everything just load in the browser natively when you're developing is quite nice.
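A minimal sketch of what that looks like in practice; the package name "some-lib" and the helper module are made up for illustration:

    // main.js, loaded in index.html via <script type="module" src="/main.js">

    // After running `snowpack install`, the CommonJS package "some-lib"
    // (hypothetical) is available as a single-file ES module under /web_modules/:
    import { something } from '/web_modules/some-lib.js';

    // Your own code stays plain ES modules that the browser fetches natively
    // (render.js is a hypothetical module of your own):
    import { render } from './render.js';

    render(something);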


> "Since they're not ES modules the browser can't fetch them natively and you need to have either a loader like requirejs or a bundler like Webpack. Snowpack will convert each NPM library into an ES module for you. So you just run "Snowpack install" once, and then add "import X from '/web_modules/module.js'" to your code, and you're set."

Ahh understood!


Safari and Edge also support ES modules.

https://caniuse.com/#feat=es6-module


They're actual ES modules (so with the `export` syntax on their end, allowing `import` with `<script type="module">` in compliant browsers).

Most NPM modules aren't ESM and aren't usable in that way. This allows it, and it's a really good thing! Hopefully it pushes things towards much wider usage of modules in the browser, which would make life a bit more sane in the FE world.


How does it work with shared dependencies? Does each direct npm dependency get bundled with its own copy of each shared dependency, or do those get re-wired to point to a shared module?


Under the hood it's using rollup. So it's benefiting from rollup's code splitting for those cases.


I haven't run into it yet so I'm not sure about this, but I believe it does have a way to combine shared dependencies into a single shared js file.


Okay, this is really cool, but I don't want to "create a snowpack app". I just want a "If you're using webpack + babel and want more speed, do this" thing. With the webpack dev server, builds aren't too bad for the size of thing I'm working on.


That’s basically what the rest of the docs are for. I’ve been playing with it recently, and there is a learning curve but probably less than learning webpack from scratch.

I also found it useful to look through the code in create-snowpack-app; it's not very dynamic or complex. The config files are written in a simple way, and they get copied over or extended by the app that the tool creates for you.


For everyone who was as confused as me:

It's basically a tool that allows you to develop without bundling, but it still bundles for production via Parcel.

So it's not a Webpack/Parcel/Rollup killer.


FYI, Snowpack uses rollup for its production bundling


Can you elaborate on that or provide a link to where you learned this? According to their site, they only maintain two official plugins for production builds (Webpack & Parcel). I'm coming from using Rollup, so I would prefer to use Rollup instead.

https://www.snowpack.dev/#snowpack-build


Ah okay.


And by that, I take it that it requires ES module support from the browser, so it really is for development purposes and cannot just be used as-is in production where wider browser support is required.


It makes production builds for legacy browsers (such as IE11). Only during development is a modern browser required.

https://www.snowpack.dev/#legacy-browser-support


Maybe I'm overlooking something, but that sounds like it's not possible to both debug legacy browsers and reap the benefits of this approach at the same time, defeating the point.

(I presume that people supporting legacy browsers generally have to deal more with their website breaking on those browsers than with things breaking on modern ones)


I just tried it in a @microsoft/rush project of mine.

Added a new project with 1 dependency (which contains a single one-liner function to return a test string). No other dependencies.

Takes about 30s to start. Not sure whether the fact that my dependency is a link with many siblings due to rush and pnpm is an issue, but it is a far cry from 50ms.

Also, I did not get it to reliably pick up when the dependency changed (cache invalidation most likely has a strategy incompatible with `npm link`/`pnpm`).

Snowpack in principle looks nice, but I think I need something else.


I really don't get a sense of what Snowpack exactly does from their website, but I found this blog post useful: https://blog.logrocket.com/snowpack-vs-webpack/


Webpack/Parcel: save file changes, and the app bundle gets regenerated to hot-reload; this takes a bit of time.

Snowpack: run a module build script once on the project (or again when adding new NPM libraries) to generate some assets, with no re-bundling time between changes.


> takes a bit of time.

Parcel's incremental build is < 100ms, so I'm not sure how Snowpack feels any better for me.


From what I can tell it's a wholesale replacement for Webpack + Babel + whichever other loaders you're using. If you're not from the JS ecosystem and that description doesn't make sense to you, I can see why it might be unclear.


Not quite; it's intended only to replace webpack for development. Snowpack explicitly recommends you continue using bundlers for production.

"Snowpack treats bundling as a final, production-only build optimization. By bundling as the final step, you avoid mixing build logic and bundle logic in the same huge configuration file. Instead, your bundler gets already-built files and can focus solely on what it does best: bundling."


How does Snowpack compare to Rollup? I use Rollup because it's light-weight and dependency-free.


AFAICT it allows you to develop without bundling, but for production it still bundles with Parcel.


It uses rollup with a bunch of plugins internally.


Shameless plug for those of you who prefer video tutorials to written ones: https://youtu.be/nbwt3A9RzNw It's an intro to Snowpack v1, but it'll still give you a good idea of what Snowpack does and how it differs from Webpack. I would agree that Snowpack isn't quite there for production projects, mostly due to the fact that many projects still don't ship their modules as ES modules.


I find this interesting. As a mainly desktop developer now doing web frontend work, the JS ecosystem has been so frustrating.

Bundlers struck me as unnecessary given JS now has native module support, and that is the premise of this project.

Some out-of-memory issues when bundling certain dependencies, and slow "npm start" times with React, have only strengthened my initial impressions. So again, this could be a welcome improvement.


You'll likely still want to bundle for production, though, for the optimizations like minification, dead code elimination, module splitting, etc.

But yeah, JS is Crazy Town. It can be very frustrating.

(Be wary of dependencies.)


I don't think they understand what O(1) even means.


Not just that:

>Some bundlers may even have O(n^2) complexity: as your project grows, your dev environment gets exponentially slower

They seem to not understand the difference between exponential and quadratic either. This is appalling.


Eh, so what, they're not using the correct mathematical term for complexity. Do most developers care if it is quadratic or exponential? I don't care, because they're both varying degrees of bad. It's just that one gets worse faster than the other.


One gets bad at 10, the other at 10000. There's quite a difference.


It seems like they're spamming a lot of O(buzzwords).


Sometimes I wonder why people cannot simply replace technical expressions with plain English words.


> This is appalling.

Your /s is (hopefully) missing.


It seems like the author is implying there is a flat constant time for compilation, which can't be true because it depends on the number of changed files.


I don’t think their target audience knows either.


From other comments I understand Snowpack as:

Development: creates many ESM files. Firefox/Chrome can load them.

Production: bundles & minifies these ESM files.

One question: there is a JS error occurring only in IE11, "t._x is undefined". How do I debug that?


I would assume the flow is the same as for debugging post-bundle production output from most tooling: use source maps and hope that the bundler produced accurate ones :)


Sounds interesting. It's a bit unclear to me what the "runs in 15ms" means. I think in my projects, the TypeScript compilation is what takes the longest, so although I use parcel and it's pretty fast, I still have to wait 1-2 seconds for TypeScript to compile changes. If it does not bundle, and still uses all the external transformers (TypeScript, Babel, etc.), what exactly does it do? Does it somehow optimize the execution of those transformers/transpilers?


> I think in my projects, the TypeScript compilation is what takes the longest, so although I use parcel and it's pretty fast, I still have to wait 1-2 seconds for TypeScript to compile changes.

The build result doesn’t need to wait on the results of the type checking. TypeScript or Babel transpiling can happen even if there is a type error.

> If it does not bundle, and still uses all the external transformers (TypeScript, Babel, etc.), what exactly does it do? Does it somehow optimize the execution of those transformers/transpilers?

It skips the bundling step, and does aggressive caching.


> The build result doesn’t need to wait on the results of the type checking. TypeScript or Babel transpiling can happen even if there is a type error.

I do run TypeScript async with Parcel, but I still wait for it to finish before I start working on a different task, as I do want to know if I have any TS errors before proceeding.

> It doesn’t optimize the transformers/transpilers; but it does only run them against the modules that have changed.

But isn't this how other bundlers work too? They cache results and only run transformers on the changed files?


I tweaked my comment a little. I'm not sure exactly how webpack is doing its work, but I think you're right.

I think the big optimization is skipping the bundling. If you want to wait on type checking results, and that’s the slowest part, then I don’t see how this could speed up your builds.


Having the browser make one request per npm package sounds awful. It's great if the client has fast internet and the server is close by, or mostly localhost, but latency will play a far bigger role than the 50ms startup time. That's not a good metric to look at.

The metrics that correspond to user experience are cold compile + page reload time and incremental compile + page reload time, i.e. how long after I press enter on a command before I see something usable in a browser to dev-loop on.

If you let the browser load the first file, parse it, and figure out the next file to load, a large project could have 100s of roundtrips. That's why JS bundlers were created in the first place: to avoid the cost of a long critical chain.

Using a device from Africa (Uganda) to connect to US servers, one feels how bad an experience latency can make. More and more development is done on cloud machines or remote hosts, so this isn't a rare use case.

What I do hope for is that if there is a new bundler, it can use the webpack plugin ecosystem. It's massive, and anything new has to foster a similar ecosystem of tooling.

Or please just make webpack fast with incremental disk compiles. I would pay money for that.


HTTP/2 and HTTP/3 both go a long way toward mitigating the costs of multiple requests versus single large bundled requests. It's still early days in HTTP/2 and HTTP/3 adoption, of course, but we're almost to the point where HTTP itself takes care of many of the reasons bundling used to be needed. (Especially as you get into more advanced features like Server Push.)

Also, several of the restrictions baked into the ESM module format are specifically designed so that browsers don't need the full file to load, and can use an optimized import parser that doesn't need to wait for the full JS parser run to find the next modules to load. (I've seen benchmarks where modern browsers have discovered/loaded the entire module graph before the HTML parser has even finished building the DOM and signaled DOM Ready.)

That said, reading the site, Snowpack's focus on one ESM file per npm package is primarily just for the dev experience, where you are on localhost and latency isn't an issue. It takes several approaches to further bundling for production-intended builds, including directly supporting webpack as an option (and thus webpack's plugin ecosystem).


Congrats on V2 to everyone involved!



Is anyone using this in combination with a plain ole' server rendered app? All the examples seem to build on a SPA example where you have a single index.js entrypoint for your entire app. What about a Rails/Django project where each page loads a few scripts it needs?

That use case has been stuck with the "global jQuery plugins" approach for ages, and it feels like <script type="module"> + something like Snowpack would really improve it.


I actually tried this, and failed. Gave up after like three hours of trying to wrap my own non-React scripts with that Snowpack stuff. It seems the tool is not capable of handling those simple use cases, which is quite sad.


Is it just me, or is build time pretty much never an issue? Usually when I develop, stuff builds/recompiles faster than I can switch to my browser to try it out.

How is this such a big problem for people that they need to write yet another build tool, instead of improving the one everyone already uses?


I promise you this is a very real problem. Every company I've worked at with even a moderately sized codebase has had to battle webpack at various points and try to hack in various types of only semi-functional 3rd party caching tools and such to make development more manageable.

If you're a solo dev working on mostly new codebases, I imagine it's not a problem for you though.


I guess I'm different to most JS developers, because I prefer to work with HMR off about 95% of the time. It's good for UI prototyping (which I don't do much, tbf), but it tends to get in my way when doing anything else. Maybe in total it makes me lose a minute or two, but that's not an issue.


This is interesting; what's the upside of working without HMR? There are changes where HMR fails to figure things out and you have to hard reload, but other than those it has served me very well. Interested in hearing the other side of the story, if there is one.


Hard reload takes marginally longer, but has no potential to fail - it's peace of mind. I have had cases where small changes to dev tooling or maybe something in state management cause a failure during HMR reload. And if you don't realise this quickly enough, you might waste a lot more time than HMR saves you. As I said, it's still quite useful for working just on UI, so it's not all bad, but ideally I prefer to work on UI components separately from the app anyway, e.g. using something like Storybook.


In my experience using webpack, once you’ve configured incremental builds, the only slow part is TypeScript type checking. That’s solved by doing it async and having the dev build be compile only. Even a huge project builds after a single file change faster than you can notice.
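Roughly, that setup can look like the sketch below, using ts-loader's transpileOnly option plus fork-ts-checker-webpack-plugin to run type checking in a separate process; this is one common approach and details will vary by setup:

    // webpack.config.js (sketch)
    const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

    module.exports = {
      module: {
        rules: [
          {
            test: /\.tsx?$/,
            loader: 'ts-loader',
            // Emit JS without type checking so incremental rebuilds stay fast
            options: { transpileOnly: true },
          },
        ],
      },
      plugins: [
        // Type checking runs asynchronously in a separate process
        new ForkTsCheckerWebpackPlugin(),
      ],
    };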


Could you share a link or keyword about async type checking and 'compile only'?


See the sibling comment.


Can you elaborate on how to configure TS to run type check asynchronously?



Tell me what it is exactly, before starting with a list of features, 50ms start, etc.


I had been avoiding bundling due to its effect on development, but this looks well worth a shot.

I do wonder, though, if it would be enough to turn on Cloudflare's minification for prod.


Once Create React App uses it by default, it will be fun.



Thank you for putting, nice and prominently at the top, what Snowpack actually is! (“Snowpack 2.0: A build system for the modern web.“)


Is Svelte really now a tier 1 library compelling enough to put in advertisements like this?


I'd say it's S Tier, but the casuals aren't on board with the pro meta.


If you're a bit confused by what this is (as I was), here's a simple TL;DR conversation I had with them on Twitter [1]:

> Me: Would you say Snowpack is mainly about generating ESM files (and their common code) for each import statement? Curious how that is different from webpack's code splitting strategy perhaps together with an ESM plugin

> Snowpack: Snowpack's dependency installation is a form of bundling + code-spliting: your entire dependency tree is bundled together and then split into one-file-per top-level package.

In other words: they're a code-splitting strategy where they "don't touch your code"; they only look at it to find the dependencies and then generate files (ESM modules) from that dependency information. Then they serve those and let the (modern) browser do the rest.

Really simple idea, but effective.

1. https://twitter.com/lmatteis/status/1262126825427415044


What if a file depends on another file? So I think it is O(n).


If one file is changed, one file is reprocessed, no matter how many other files depend on it.


Your argument is not valid because a file can expose many functions in a JavaScript context. Other files depending on these functions need to be rebuilt as well.


They do not.


Which is... ?


So Vite is already dead? Geez!


I know you're being facetious, but as someone who used to constantly follow the "new hotness" JS libraries coming out, I'd advise caution with this way of thinking. There is a tendency for developers to see every new library/framework/tool as "the new way" which obsoletes the old - often because the new tool advertises itself based on distinctions/improvements made over existing libraries.

Browserify, Webpack, Parcel, Rollup, Vite, Snowpack: people use all of these for different reasons & they all have their own advantages & drawbacks. It often doesn't make sense to abandon a stable solution for one which promises speed/magical features but may be full of bugs/untested (not saying this is true of Snowpack, but you can't just trust the claimed feature list as an accurate representation of the tool).

I mention this simply because I've been in so many situations where a dev casually denigrates someone else's work because it was built using an "old" solution, without engaging with the core functionality of the code. Like "oh, they're using Webpack, they must not have heard of Rollup." It's good to re-evaluate the tools you use from time to time, but don't let the "new hotness" make you think the old tried-and-true ways are any less valid - many yaks have been shaved and bikes shedded this way.


Vite and Snowpack are a little different, though definitely similar in many regards.

There's some good info here: https://github.com/vitejs/vite#how-is-this-different-from-sn...

The salient points seem to be:

1. "Vite is more opinionated and supports more opt-in features by default - for example, features listed above like TypeScript transpilation, CSS import, CSS modules and PostCSS support all work out of the box without the need for configuration."

2. "Both solutions can also bundle the app for production, but Vite uses Rollup while Snowpack delegates it to Parcel/webpack. This isn't a significant difference, but worth being aware of if you intend to customize the build."


Thanks, that's useful!


Would it have killed them to actually say what it is in the title of their self-promotion?


I came here hoping this was related to figuring out avalanche conditions when backcountry skiing.


Reddit is that way --------------------->


Snow science is fascinating and absolutely par for the course for HN. Your comment is far more HN worthy than mine.



