JS has an extremely large and robust standard library in comparison.
At this point, optimizing runtimes and bundles to include only the functions actually used is pretty much a solved problem. For example, azer has a random-color library and an rng library. Shipping both of those as azer-random or something means that someone automatically gets all the dependencies without having to make many requests to the server. This makes build times shorter and builds a lot simpler.
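To make that concrete, here's a minimal sketch of how a tree-shaking bundler handles a combined package. The azer-random package name and its randomColor export are hypothetical, borrowed from the example above:

```ts
// Hypothetical combined package with named exports for each utility.
// A tree-shaking bundler (Rollup, esbuild, webpack in production
// mode) statically analyzes these imports.
import { randomColor } from "azer-random";

// Only randomColor is referenced, so the rng half of the package is
// dead code and gets dropped from the final bundle: one package on
// the registry, but no unused bytes shipped to the client.
console.log(randomColor());
```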
Sometimes, to optimize for smallness overall, some parts have to be big. Libraries tend to be one of those things where a few good, big libraries lead to a smaller total footprint than many tiny ones.
> makes security auditing more difficult
What? If you go all the way, you just review all dependencies too. And if they have a good API, it's actually much easier. For example, if your only source of filesystem access is libfilesystem, you can quickly list all modules that have any permanent local state.
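As a rough sketch of that kind of review, assuming the hypothetical libfilesystem choke point from above and a flat node_modules layout, a naive scan might look like this (it checks only top-level .js files and does a plain string match, so it's an illustration, not an audit tool):

```ts
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Naive scan: flag any installed module whose source mentions the
// filesystem module. With a single choke point ("libfilesystem" is
// the hypothetical name from the comment above), one grep-like pass
// surfaces every dependency that can touch permanent local state.
function modulesTouchingDisk(root: string): string[] {
  const flagged: string[] = [];
  for (const pkg of readdirSync(root)) {
    const dir = join(root, pkg);
    if (!statSync(dir).isDirectory()) continue;
    for (const file of readdirSync(dir).filter(f => f.endsWith(".js"))) {
      const src = readFileSync(join(dir, file), "utf8");
      if (src.includes('require("libfilesystem")')) {
        flagged.push(pkg);
        break;
      }
    }
  }
  return flagged;
}

console.log(modulesTouchingDisk("node_modules"));
```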
Splitting huge libraries into well designed categories would make a lot of reviews easier.
> Shipping both of those as azer-random or something means that someone automatically gets all the dependencies without having to make many requests to the server.
Also disagree. One-off builds shouldn't make a real difference, and continuous builds should have both a local mirror and local caches.
A bit of code duplication would go a long way towards bringing sanity to JS land.
It's interesting to think about the distinction between promiscuous dependencies (as pioneered by Gemfile) and the Unix way. I like the latter and loathe the former, but maybe I'm wrong. Can someone think of a better example? Is there a modern dpkg/rpm/yum package with <50 LoC of payload?
Edit: Incidentally, I just checked, and echo.c in coreutils 8.25 weighs in at 194 LoC (excluding comments, empty lines, and lines that are just curly braces). And that doesn't even include the other files needed to build echo. Back in 2013 I did a similar analysis for cat and found that it required 36 thousand lines of code (just .c and .h files). It's my favorite example of the rot at Linux's core. There are many complaints you can make about Unix package management, but 'too many tiny dependencies' seems unlikely to be one of them.
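For what it's worth, the counting rule above (drop comments, blank lines, and brace-only lines) can be approximated in a few lines. This sketch uses naive regex stripping, so comment markers inside string literals will throw it off, and the path is just the file named above:

```ts
import { readFileSync } from "node:fs";

// Approximate the methodology above: strip /* */ and // comments,
// then ignore blank lines and lines that are only braces. This is a
// rough filter, not a real C parser.
function countLoc(path: string): number {
  const noComments = readFileSync(path, "utf8")
    .replace(/\/\*[\s\S]*?\*\//g, "")
    .replace(/\/\/.*$/gm, "");
  return noComments
    .split("\n")
    .map(line => line.trim())
    .filter(line => line !== "" && !/^[{}]+;?$/.test(line))
    .length;
}

console.log(countLoc("coreutils-8.25/src/echo.c"));
```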
I'm starting to see where http://suckless.org/philosophy and http://landley.net/aboriginal/ are coming from (watch Rob's talks, they're very opinionated but very enjoyable).
I didn't dig too deeply in 2013, but I did notice that about two-thirds of the LoC were in headers.
…which are needed for things like memory allocation, filesystem access, I/O, and so on.
One can imagine alternative implementations of cat that are full of #ifdefs to handle all the glorious varieties of Unix that have ever been out there.
I thought how commands are bundled into packages was the entirety of what we were discussing. That was my interpretation of larkinrichards's comment (way) up above. Packages are the unit of installation, not use. Packages are all we're arguing about.
My position: don't put words in God's mouth :) The Unix way is commands that do one thing and do it well. But the Unix way is silent on the right way to disseminate those commands; there you're on your own.
I support packages of utility functions. Distributing them individually is a waste of resources when you have tree shaking.
I trust a dependency on lodash. I don't trust a dependency on a single 17 line function.
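For example, with an ES-module build like lodash-es (a real package), a named import lets the bundler keep just the one function, so depending on the big, well-maintained library costs roughly the same shipped bytes as a tiny one-function package would:

```ts
// One big, audited dependency instead of dozens of one-liners.
// lodash-es ships ES modules, so a tree-shaking bundler keeps only
// the functions actually imported.
import { padStart } from "lodash-es";

// The kind of one-liner job a 17-line micro-package might provide.
console.log(padStart("5", 3, "0")); // "005"
```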
In OpenBSD, it's 3 LoC IIRC.