> Snowpack is a O(1) build system… Every file goes through a linear input -> build -> output build pipeline
Seems like O(n) to me?
> Snowpack starts up in less than 50ms. That’s no typo: 50 milliseconds or less. On your very first page load, Snowpack builds your first requested files and then caches them for future use
So you can open a socket in 50ms? Seems disingenuous to imply anything takes 50ms when really you’re just waiting until the first request to do anything.
Looks like an interesting project though.
So the exact quote in that talk is:
"... and if you take the existing module map (file), it essentially replaces (function) #1 with this (new) one. And this is how bundlers like Metro work, for example -- They regenerate the file, and over a websocket connection, they send the new file and say "replace function #1 with this one, and execute it."
He then touches on how bundlers do a lot more than just transformation in this step, other processes like chunking, tree-shaking, code-splitting, etc. And this makes it incredibly difficult to properly cache a file, because you can't really get a good heuristic for diffing/patching.
And then the quote of the hour:
"But what if bundling had an O complexity of 1? That would mean that if one file changes, we only transform THAT file and send it to the browser." (no others in dependency tree)
Above quotes start here, and go for about 2 minutes:
In this context, what I believe Ives is aiming for is the idea that's pervasive throughout the talk, which is this "50ms or second HMR time."
If you overlook the slight variance in the time to update files that contain more or less content with this system, and you average it out to "50ms, every file, any file", I think that would qualify as O(1) bundling/hot-reloading, right?
However, I think the intuition and excitement exist around the idea of an ideal bundler, and I think this project provides hard evidence that we can strive for such things with incremental success.
I would agree that the term is a bit of "shorthand" that doesn't perfectly map to the idea of big-O complexity - for exactly the reason you mention, build time still depends on the size of the file. But for me, it was shorthand that helped me understand the idea. They had two other options for how to write this - either leave out the big-O stuff entirely, or explain it more deeply, e.g. "most bundlers are O(lmn) where l is the number of files, m is the number of files changed, n is the number of JS tokens per file" or something. Both of these options may have been more "technically correct" but would've taken me longer to grok the idea than the way they have it written. Maybe they should just have a footnote for the pedants explaining that O(1) is more of an idiom than a technical claim in this case :P
It wouldn't be a problem if it wasn't on a technical page like this where this distinction can have large implications (such as the one above). When I read that it was O(1) I drew conclusions that were both very favorable to Snowpack and also completely untrue. It'd be like if you said you drove a truck but you actually drove an Accord. It's probably fine to say that 99% of the time, but it could cause issues if you say it to your mechanic, because they'll conclude things from what you said that might not be accurate.
The way they've structured "rebuilds" is to only "build" (or really, probably do very little if anything to) just that one file you edited and saved.
Yes if you edit all 1000 files it's going to take longer.
"In theory there's no difference between theory and practice. In practice, there is." I think this is one of those cases where in practice it's awfully close to O(1) so while they're technically incorrect practically it's difficult to tell the difference.
This is especially true compared with something like Angular, where you're looking at many, many seconds of build time to get started. I think it's laudable.
For the purposes of Big-O notation, this does not matter. If they didn't need its semantics, they should not have used the notation. Simple as that.
It's still O(n), regardless of the fact that they have optimized constants and in the best case it may be less than that. Irrelevant. Big O establishes an upper bound. Bubble sort is still bubble sort; we don't care if it is quick when you are ordering two elements.
Maybe they meant to advertise Ω(1) on the best case, and compared to other build systems?
EDIT: another poster says that "n" refers to the number of files in the project. Still misleading. Usually we are interested in how the complexity grows as the number of inputs grows. Big O purposely discards constants.
They could say that other build systems are O(n) where n is the total number of files, while this one is O(n) where n is the number of modified files. It's immediately clear then how this is better for the build use-case, while still making it clear how efficient it is as the input size grows.
That's a great, concise and clear articulation. The project would do well to quote you on that!
If anyone's still struggling with the difference between O(1) and O(n), there's a common example with lists that might help:
1. Getting the head of a list is usually O(1). It doesn't matter how long the list is, getting the head element always takes the same amount of time.
2. Getting the length of a list is usually O(n). The time taken to count the list entries grows in proportion with the length of the list.
As an aside, note also that a list with a single entry doesn't make the length function O(1).
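To make that concrete, here's a minimal sketch in TypeScript using a hand-rolled singly linked list (a plain JS array's .length is already cached, so a linked list shows the contrast better):

    // A minimal singly linked list, just to illustrate the point above.
    type ListNode<T> = { value: T; next: ListNode<T> | null };

    // O(1): reading the head never depends on how long the list is.
    function head<T>(list: ListNode<T> | null): T | undefined {
      return list ? list.value : undefined;
    }

    // O(n): counting entries means walking the whole list, so the time
    // grows in proportion to its length -- even a one-element list doesn't
    // make this O(1), it just makes n small.
    function length<T>(list: ListNode<T> | null): number {
      let count = 0;
      for (let node = list; node !== null; node = node.next) count++;
      return count;
    }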
TFA explains pretty clearly what claim is being made. They're talking about the cost of rebuilding one file, given the "input" of a project with n files. They claim that for other build systems that cost (the cost of one file) scales linearly with total project size, but for theirs it doesn't.
Your objection here redefines n to be the number of files changed, but they don't claim anything about that.
Also, in this case the complexity we seem to care about is a function of two factors: the number of files and the number of changed files. Call the first one n and the second one p.
The complexity of webpack would be O(n) + O(p) [+] and the complexity of snowpack would be O(p) in development. Worst case p = n, but best case p = 1, and usually in development p is on average very close to 1. Also p is parallelisable, making it appear like p = 1 WRT wall clock time for small p values even though CPU time would be O(p).
[+]: which one of the p or n constant factors dominates is unclear, but some hardly parallelizable n-bound processing is always present for webpack.
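A toy model of that difference, just to put numbers on it (the file counts and unit costs here are made up, not measurements of either tool):

    // Pretend every file processed during a rebuild costs one unit of work.
    const n = 1000; // total files in the project
    const p = 1;    // files changed in a typical edit

    const webpackStyleWork = n + p; // O(n) + O(p): some whole-project work plus the changed files
    const snowpackStyleWork = p;    // O(p): only the changed files get rebuilt

    console.log(webpackStyleWork, snowpackStyleWork); // 1001 1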
for example, lastrun in gulp (which is part of the default)
> Seems like O(n) to me?
Big O notation describes how for a given operation (sorting a list), a given measurement (e.g. items compared, memory used, CPU cycles) grows in relation to a set of variables describing the input to the operation (number of elements).
Here their operation is 'a single-file incremental compilation', they're measuring how many files need recompiling in total, and the only variable n is 'the number of files in the project'.
In that case, Snowpack is O(1), and most other bundlers are O(n). They're not wrong.
You can argue that that's not an interesting point, or that there's other measurements that matter more ofc. It's not wrong though, and imo incremental single-file changes are a thing you care about here, and only ever processing one file in doing so is an interesting distinction between Snowpack and other bundlers.
Big O notation is not (ever) defining any detail of the operation in terms of every possible conflating variable. All Big O comparisons include implicit definitions: the exact same sorting algorithm could be described as both O(n log n) for item comparisons required per n items in the list, or O(kn^2) for memory copies required per k bytes of largest item & n items, and both are accurate & useful descriptions.
(Sorry to be pedantic, but there's a _lot_ of replies here that have misunderstood big O entirely)
> Some bundlers may even have O(n^2) complexity: as your project grows, your dev environment gets exponentially slower
that's clearly not exponential!
So 2^n would be exponential. However, n^2 is instead quadratic.
Quadratic time complexity is better than exponential.
Here is a useful comparison of the rate of growth for various time complexities
O(n^[number]) = polynomial, with O(n^2) being quadratic.
O([a number]^n) = exponential.
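To put numbers on the difference, here's what n^2 vs. 2^n looks like for a few input sizes:

    // Quadratic vs. exponential growth for a few input sizes.
    for (const n of [10, 20, 30]) {
      console.log(`n=${n}  n^2=${n ** 2}  2^n=${2 ** n}`);
    }
    // n=10  n^2=100  2^n=1024
    // n=20  n^2=400  2^n=1048576
    // n=30  n^2=900  2^n=1073741824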
I guess the authors aren't big on CS fundamentals?
Yes, we all know the technical definitions with respect to software and algorithmic complexity. But the lack of sympathy/empathy on display, given how a common definition of a word can be misconstrued by someone without an academic background in tech, is kind of surprising (ok, well, maybe not so surprising, but c'mon folks, we are all human).
"O(1) build system" ~ It's faster than others, has nothing to do with big-O notation.
The issue with build systems is complexity, poor defaults, poor config systems that often have you reading across the docs for multiple build frameworks they mashed together trying to figure out how to merge config overrides and breaking changes between versions.
Literally the last thing I care about is whether my browser has refreshed fractionally quicker as I switch from my IDE
Yep, it's very much O(n). Even if you only consider incremental compilation it's O(n), where n is the number of lines in the file being compiled.
That being said, I understand the feeling of it being sort of O(1) from an incremental compilation perspective. For most of the JS build ecosystem, changing a file can result in many, many files getting recompiled due to bundling. So if you ignore that the size of the file isn't constant, it seems kind-of-O(1) for incremental compilation: for any file change in your codebase, only a single file gets recompiled, regardless of the size or dependency structure of the codebase. And as a result it should be much faster than the rest of the JS ecosystem for incremental compilation, since typically individual files don't get to be that large, and other incremental build systems may have to compile many files for a line change.
But yeah, from a CompSci perspective, it's O(n), even for incremental builds: as the number of lines of the file grows, the amount of work grows. And for non-incremental builds it's of course O(n).
> So you can open a socket in 50ms? Seems disingenuous to imply anything takes 50ms when really you’re just waiting until the first request to do anything.
This makes a lot more sense in the context of the rest of the JS ecosystem. Of course, what Snowpack is doing is opening a socket, and opening a socket in 50ms isn't particularly impressive (mostly it's just measuring the overhead of starting a Node process and importing various dependencies). But other JS ecosystem build tools are very slow to start, because they're architected differently than Snowpack: they do full builds of the entire codebase (due to bundling) — or at least typically builds of large swaths of the codebase — and so on startup typically they'll immediately start building because doing it just-in-time is slow, which makes them slow to start. And if they don't start building immediately, the first request they service is typically quite slow. Since Snowpack doesn't bundle files, it's able to only build the files a specific page uses, which is typically much faster than building the entire codebase; as a result, they can do on-demand builds when a specific page is requested instead of relying on precompilation.
The 50ms isn't impressive in terms of "look how fast we opened a socket." It's impressive in terms of "look how quickly you can start loading pages to see results as compared to other systems," because their build system is so fast that they don't need to precompile.
O(n) effectively approximates O(1) with sufficiently low values of n
When we say O(n), we say that as n goes towards infinity, the runtime grows as k * n, where k is some constant.
Your sentence effectively says:
> (k * n runtime as n approaches infinity) effectively approximates (j runtime as n approaches infinity) for sufficiently low values of n.
This makes no sense, because you can't have n approach infinity and also be a "sufficiently low value" at the same time.
I think what you were trying to say is that O(n) algorithms run in constant time (or "fast enough") given a constant input size. But in that case, O notation is meaningless because when we use it, we specifically care about what happens at infinity.
Also, O(1) isn't synonymous with "fast enough"; constant factors can matter a lot. For example, the "fastest" (Big O wise) matrix multiply algorithm has such a large constant factor that using it is not feasible for matrices that fit on today's computers. https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_a...
O(n) by definition means that execution time grows linearly with input size.
I wasn't aware of this, that's actually a pretty cool feature and incredibly useful.
A bit unlearned on ESM modules: how are they different from the isomorphic browser/Node single-file bundles produced by Webpack/Rollup?
An ES module is just a js file that can be imported by other js files with the "import X from 'module.js'" syntax. This is different from CommonJS modules which use the "var x = require("module")" syntax. Modern browsers (Chrome and Firefox) know how to load ES modules, so if all your js code is written as ES modules you can just load your main.js file and then the browser will go fetch all the other modules it depends on. It used to be that you needed a loader like require.js to handle loading all the module dependencies, but that's no longer the case.
But you run into an issue when you want to use 3rd party libraries that you've installed using NPM. Most libraries still use the CommonJS syntax, because that's what Node uses. Since they're not ES modules the browser can't fetch them natively, and you need either a loader like require.js or a bundler like Webpack. Snowpack will convert each NPM library into an ES module for you. So you just run "snowpack install" once, then add "import X from '/web_modules/module.js'" to your code, and you're set.
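For anyone who hasn't seen the two module syntaxes side by side, a rough sketch (the package and file names are just placeholders, not anything Snowpack ships):

    // CommonJS, what Node and most npm packages use -- browsers can't load
    // this natively:
    // const { helper } = require('some-package');

    // Native ES module syntax, which modern browsers understand directly:
    import { helper } from './utils.js';

    // After an npm dependency has been converted to a single pre-built ESM
    // file, you import it by path:
    import { something } from '/web_modules/some-package.js';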
You still need to bundle before you ship your code, because not all browsers speak ES modules and there are still performance benefits from bundling/minifying/etc. But having everything just load in the browser natively when you're developing is quite nice.
Most NPM modules aren't ESM and aren't usable in that way. This allows it, and it's a really good thing! Hopefully it pushes things towards much wider usage of modules in the browser, which would make life a bit more sane in the FE world.
I also found it useful to look through the code in create-snowpack-app. It's not very dynamic or complex: the config files are written in a simple way, and they get copied over or extended by the app that the tool creates for you.
It's basically a tool that allows you to develop without bundling, but it still bundles for production via Parcel.
So it's not a Webpack/Parcel/Rollup killer.
(I presume that people supporting legacy browsers generally have to deal more with their website breaking on those browsers than with things breaking on modern ones)
Added a new project with 1 dependency (which contains a single one-liner function to return a test string). No other dependencies.
Takes about 30s to start. Not sure whether the fact that my dependency is a link with many siblings due to rush and pnpm is an issue, but it is a far cry from 50ms.
Also, I did not get it to reliably pick up when the dependency had changed (cache invalidation most likely uses a strategy incompatible with `npm link`/`pnpm`).
Snowpack in principle looks nice, but I think I need something else.
Snowpack: Run a module build script one time (or again when adding new NPM libraries) on the project to generate some assets, but no re-bundling time between changes.
Parcel's incremental build is < 100ms, so I'm not sure how Snowpack feels any better for me.
"Snowpack treats bundling as a final, production-only build optimization. By bundling as the final step, you avoid mixing build logic and bundle logic in the same huge configuration file. Instead, your bundler gets already-built files and can focus solely on what it does best: bundling."
Bundlers struck me as unnecessary given JS now has native module support, and that is the premise of this project.
Some out-of-memory issues when bundling certain dependencies, and slow "npm start" times with React, have only strengthened my initial impressions. So again, this could be a welcome improvement.
But yeah, JS is Crazy Town. It can be very frustrating.
( Be wary of dependencies. )
>Some bundlers may even have O(n^2) complexity: as your project grows, your dev environment gets exponentially slower
They seem to not understand the difference between exponential and quadratic either. This is appalling.
Your /s is (hopefully) missing.
Development: creates many ESM files. Firefox/Chrome can load them.
Production: bundles & minifies these ESM files.
One question: there is a JS error occurring only in IE11, "t._x is undefined". How do I debug that?
The build result doesn’t need to wait on the results of the type checking. TypeScript or Babel transpiling can happen even if there is a type error.
> If it does not bundle, and still uses all the external transformers (TypeScript, Babel, etc.), what exactly does it do? Does it somehow optimize the execution of those transformers/transpilers?
It skips the bundling step, and does aggressive caching.
I do run TypeScript async with Parcel, but I still wait for it to finish before I start working on a different task as I do want to know if I have any TS errors before proceeding.
> It doesn’t optimize the transformers/transpilers; but it does only run them against the modules that have changed.
But isn't this how other bundlers work too? They cache results and only run transformers on the changed files?
I think the big optimization is skipping the bundling. If you want to wait on type checking results, and that’s the slowest part, then I don’t see how this could speed up your builds.
The metrics that correspond to user experience are cold compile + page reload time and incremental compile + page reload time, i.e. how long after I press enter on a command before I see something usable in a browser to dev-loop on.
If you let the browser load the first file, parse it, and figure out the next file to load, a large project could have 100s of roundtrips. That's why JS bundlers were created in the first place: to avoid the cost of a long critical chain.
Using a device from Africa (Uganda) to connect to US servers, one feels how bad an experience latency can make. More and more development is done on cloud machines or remote hosts, so this isn't a rare use case.
What I do hope for is that if there is a new bundler, it can use the webpack plugin ecosystem. It's massive, and anything new has to foster a similar ecosystem of tooling.
Or please just make webpack fast with incremental disk compiles. I would pay money for that.
Also, several of the restrictions baked into the ESM module format are specifically designed so that browsers don't need the full file to load, and can use an optimized import parser that doesn't need to wait for the full JS parser run to find the next modules to load. (I've seen benchmarks where modern browsers have discovered/loaded the entire module graph before the HTML parser has even finished building the DOM and signaled DOM Ready.)
That said, reading the site, Snowpack's focus on one ESM file per npm package is primarily just for the dev experience, where you are on localhost and latency isn't an issue. It takes several approaches to further bundling for production-intended builds, including directly supporting webpack as an option (and thus webpack's plugin ecosystem).
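A quick illustration of why that pre-scan works (my own sketch, not from the talk or the Snowpack docs): static imports sit at the top level with string-literal specifiers, so the browser can find them without executing anything.

    // Statically analyzable: literal specifier, top-level position, so the
    // module graph can be discovered before any code runs.
    import { render } from './render.js';

    // CommonJS require() is just a function call, so the target can depend
    // on runtime values and can't be discovered the same way:
    // const impl = require(someFlag ? './fast.js' : './slow.js');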
That use case has been stuck with the "global jQuery plugins" approach for ages, and it feels like <script type="module"> + something like Snowpack would really improve it.
How is this such a big problem for people that they need to write yet another build tool, instead of improving the one everyone already uses?
If you’re a solo dev working on mostly new codebases I imagine it’s not a problem for you though.
I do wonder though if it would be enough to turn on CloudFlare's minification for prod.
> Me: Would you say Snowpack is mainly about generating ESM files (and their common code) for each import statement? Curious how that is different from webpack's code splitting strategy perhaps together with an ESM plugin
> Snowpack: Snowpack's dependency installation is a form of bundling + code-spliting: your entire dependency tree is bundled together and then split into one-file-per top-level package.
In other words: they're a code-splitting strategy where they "don't touch your code", they only look at it to find the dependencies and then they generate files (ESM modules) from the dependencies information. Then they serve that and let the (modern) browser do the rest.
Really simple idea, but effective.
Browserify, Webpack, Parcel, Rollup, Vite, Snowpack: people use all of these for different reasons & they all have their own advantages & drawbacks. It often doesn't make sense to abandon a stable solution for one which promises speed/magical features but may be full of bugs/untested (not saying this is true of Snowpack, but you can't just trust the claimed feature list as an accurate representation of the tool).
I mention this simply because I've been in so many situations where a dev casually denigrates someone else's work because it was built using an "old" solution, without engaging with the core functionality of the code. Like "oh, they're using Webpack, they must not have heard of Rollup." It's good to re-evaluate the tools you use from time to time, but don't let the "new hotness" make you think the old tried-and-true ways are any less valid - many yaks have been shaved and bikes shedded this way.
There's some good info here:
The salient points seem to be:
1. "Vite is more opinionated and supports more opt-in features by default - for example, features listed above like TypeScript transpilation, CSS import, CSS modules and PostCSS support all work out of the box without the need for configuration."
2. "Both solutions can also bundle the app for production, but Vite uses Rollup while Snowpack delegates it to Parcel/webpack. This isn't a significant difference, but worth being aware of if you intend to customize the build."