Worrying about the NPM Ecosystem (sambleckley.com)
174 points by diiq 15 days ago | 86 comments



The npm ecosystem needs something like the distinction between Ubuntu's "main" and "universe" repositories, so that you have a smaller subset of known-good packages with harmonized transitive dependencies, stronger centralized curation, a lot more direct scrutiny, tighter control over versioning, and some party that is responsible for addressing vulnerabilities and ensuring appropriate maintainership.

As a venture-backed startup, the company behind npm needed the number of packages and the number of downloads to constantly trend upward in order to justify their valuation. This led to extremely poor policy and prevented them from taking strong steps to remedy the deterioration of the ecosystem. Now that it's owned by Microsoft, there's an opportunity to fix this.

Linux distributions have decades of experience solving these problems; there's no excuse for the JavaScript community to continue ignoring the longstanding precedents and best practices that have emerged from that experience.


This is a really good idea, and a fitting analogy. NPM already supports private registries, so it would be a simple configuration change to point to "main". On top of that, there is a lot of good work in the node community around static analysis, CVE detection in transitive dependencies, and more finely grained security perimeters, which could be used to detect possible backdoors or malicious code.


It would be completely straightforward to do the work you describe. Presumably numerous private actors have already done it for themselves. Until someone does this work and shares it with the public, we'll all have to wonder how valuable it would really be... anyway, it's unreasonable to expect the npm people to do this work on top of everything else they do.


Another example where this has worked really well is Rails... I think originally there was opposition to this type of Big Framework approach in the node community, but personally I don't want to pick and compose a bunch of packages from NPM and would rather spend time on higher-order customer problems than simple technical ones, e.g. getting a next.js app to send emails.


Love this idea, a lot! The quality, security, support, code review, etc. bar for “main” could indeed help. Maybe there’s an additional measure - what’s the size and complexity of the deps tree. Higher “score” for more shallow/narrow trees.


This post describes several potential "problems" with NPM packages, but not _why_ they are problems in the first place. The author doesn't say why they consider circular dependencies bad, or why dependency depth should be low, or even why their categorization of packages is the way it is (and leaves out large categories like tools and CLIs).

I don't think that a high number of dependencies is necessarily bad, or a low number necessarily good. A dependency often represents common functionality factored out into a shared, higher-quality project, rather than the same bespoke implementation re-implemented in several packages.

I think we'd need some more complicated metrics, possibly looking a bit like a web of trust analysis, to determine quality and health of packages. Depending on a well-maintained, popular, and highly-trusted package like lodash or express shouldn't be an indicator of low-quality or low-trust. We also have deprecation and security notices provided by NPM.

So I'd be much more interested in questions like "how many unique, low-trust packages are included in this package graph?" or "how out of date are the dependencies?" to try to estimate package health. From there I'd be curious to know how healthy the ecosystem is in aggregate, weighted for usage.
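A toy version of what such a metric could look like, purely as a sketch (all names and scores are hypothetical; a real web-of-trust analysis would need signed attestations and much better inputs):

    // Hypothetical trust propagation: a package's score is its own base
    // trust (audits, maintainer reputation -- invented inputs here),
    // discounted by the weakest link among its dependencies.
    function trust(pkg, graph, base, seen = new Set()) {
      if (seen.has(pkg)) return 1; // break dependency cycles conservatively
      seen.add(pkg);
      const deps = graph[pkg] || [];
      const weakestDep = deps.length
        ? Math.min(...deps.map((d) => trust(d, graph, base, seen)))
        : 1;
      return (base[pkg] || 0.5) * weakestDep; // unknown packages default to 0.5
    }

    // trust("my-app", { "my-app": ["lodash"] }, { "my-app": 1, "lodash": 0.9 })
    // => 0.9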


You're absolutely right that I am doing a certain amount of "searching under the spotlight", here. The metrics I measured were the ones I had the information and computational power to measure. You want to see how many "low trust" packages there are -- me too! But what does that mean? How do I find them? That is, in essence, the problem I'm describing. If I could solve it, believe me, the post would have been about that, instead :)

I do propose, towards the end, a network-of-trust score -- but I don't think that's something a single person can implement today; it requires social work.

(I am sorry you feel I didn't explain why deep dependency trees are a problem -- my premise is based on the desire to assess packages for if they are appropriate to include in a project, and I do mention why deep and broad trees make that harder. Circular dependencies I specifically say aren't a problem; they're just disconcerting to me.)


+1 for "searching under the spotlight", such a great (and IMHO underused) turn of phrase


This whole comment is spot on. To take the lodash example, you can easily hand-roll a bunch of methods to do things like group-by, index-by, etc. If you're not careful, though, it's very easy to introduce a prototype pollution vulnerability in this, as lodash themselves have done. The difference between your code and lodash is that this vulnerability is much more likely to be caught in lodash. And when lodash fixes the vulnerability, all you have to do is update the package, rather than writing a bunch of code (hopefully correctly this time).
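To make that concrete, here's a minimal sketch of the kind of hand-rolled deep-merge helper where prototype pollution sneaks in (hypothetical code, not lodash's actual implementation):

    // Naive recursive merge -- looks harmless, but walks into __proto__.
    function merge(target, source) {
      for (const key of Object.keys(source)) {
        if (typeof source[key] === "object" && source[key] !== null) {
          if (!target[key]) target[key] = {};
          // When key is "__proto__", target[key] is Object.prototype,
          // so the next call writes onto every object in the process.
          merge(target[key], source[key]);
        } else {
          target[key] = source[key];
        }
      }
      return target;
    }

    merge({}, JSON.parse('{"__proto__": {"isAdmin": true}}'));
    console.log({}.isAdmin); // true -- Object.prototype was polluted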

One of the real issues with the JS ecosystem is that we don't have a way of capturing that information about degree of trust. People tend to use number of downloads as a metric, which is better than nothing, I guess. But what if there were a way to capture whether the library has a bug bounty, or information about governance practices?


One of the author's points is the thing that makes me crazy about the JS ecosystem: “... find a package ... it might depend on hundreds of other packages, with dependency trees stretching ten or more levels deep...”

I cannot count the number of times when I’ve installed a package and got >500 or >1000 packages as dependencies.

Just today I installed ‘alptail’, a simple set of components using AlpineJS and TailwindCSS (I love both!), but npm installed 1314 packages! For a simple set of maybe a dozen components.

This is madness.

How do we make this better?

Maybe we need more support from tooling to understand just what is being used down the tree of trees of trees?
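Some tooling for this does exist, for what it's worth (flags vary by npm version, and `npm explain` only shipped in newer releases):

    npm ls --all              # print the full resolved dependency tree
                              # (older npm: npm ls --depth=Infinity)
    npm explain tailwindcss   # show every path a package was pulled in by (npm 7+)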

Maybe as lib writers we should seek to implement our own supporting functions (within limits, of course) rather than having a first reaction “let’s go grab XYZ for that” and inherit XYZ’s entire dep tree.

Not sure what the answer is but what we have today is a beautiful mess (love/hate it).

Thanks for letting me vent a bit. I feel better. But this really bothers me.


I believe you are looking in the wrong place for the source of the problem. The true source is the people who established and maintain the norms within this ecosystem. They just don't care that all manner of things are either broken or insecure or both. They have other concerns. I'm not sure what those concerns are, but I'm very sure that we will never ever persuade them that their culture is a bad one and so the problems outlined above will never be fixed in place. Hence some sort of B-Arc or nuke from orbit approach will be necessary.


I just wonder who cares? At all? What is "better"? Reinventing wheels instead of relying on open source, battle tested code?

Also alptail isn't an npm package? https://github.com/danieljpalmer/alptail


This "reinventing the wheel" concept has to die.

Writing your own code is not "reinventing the wheel". It's what we do. A lot of times writing a new implementation is faster in the long run (more focused, less obscure, easier to maintain).

Open source code is often not "battle tested" in your environment for your use case. And if there is a problem, getting it fixed can be a nightmare.

It's like I'm making a go-cart, and making some nice appropriately-sized matching wheels for it, and someone comes along insisting I don't "re-invent the wheel" and use these four mismatched truck wheels instead. Yes, I'll save time making the wheels. But the rest of the project is screwed.


> Writing your own code is not "reinventing the wheel". It's what we do. A lot of times writing a new implementation is faster in the long run (more focused, less obscure, easier to maintain).

So much agreement. We often rewrite smaller sets of functions or other code for the express purposes of trimming dependencies, understanding what the code does, and having the opportunity to adopt it into our codebase with a fresh re-look at implementation, adding comments, etc.

> Open source code is often not "battle tested" in your environment for your use case. And if there is a problem, getting it fixed can be a nightmare.

Again, agree with you strongly. Just because something has been available on npm does not automatically mean battle tested. There's a ton of broken/poor code out there.


I care. I don't want an exponential growth in dependencies each time I use a top level library. 1314 packages is crazy for a small set of web components.

How could one ever read/review 1314 libs to have even a passing familiarity with the deps one just took on? Oh, you just npm install and DON'T look through what you got? That, friend, is scary. If that's the case, I'd invest in some packet capture training if I were you :)

Right, alptail is not an npm package. Clone the repo then npm install to see the tree of pain grow itself.

Not picking on alptail here. Just my most recent WTF experience with getting way more deps than seems reasonable, by a mile.


> How do we make this better?

I do believe that a great library (package) is either self-contained (e.g. lodash, moment, etc.) or has a minimal number of dependencies on other such libraries.

I think we can think of dependencies as adding to the code size and code complexity of the library which uses them, and then derive the quality of said library from those metrics.

JavaScript is great for writing code which can do a lot, and writing it fast, and I think this power shouldn't be misused to manufacture a lot of small libraries with a lot of dependencies. Perhaps we should use something like Github Gists or Stack Overflow answers for that.

This is why I am personally gravitating towards Deno (https://deno.land/) now -- it has a nice set of base APIs and modules, and it allows me to pull a self-contained library straight from the CDN.
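For illustration, a dependency in Deno is just a versioned URL import -- a sketch using the std layout as of mid-2020 (exact paths change between releases):

    // Fetched over HTTPS once, then cached locally; no node_modules.
    import { serve } from "https://deno.land/std@0.50.0/http/server.ts";

    const server = serve({ port: 8000 });
    console.log("listening on http://localhost:8000/");
    for await (const req of server) {
      req.respond({ body: "hello\n" });
    }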


The chance to fix npm is dwindling quickly. With each year, a culture sets in that will get harder and harder to fix with time.

Personally I suspect that the leftpad debacle might have been the point of no return for npm; the fact that such a simple package managed to break everything should have been a wake-up call, but it does not appear that the community fixed much.


What could potentially happen with a malicious dependency, added knowingly by the original author or unknowingly because their account is compromised, will one day make leftpad look like a harmless warning, a narrow escape we ignored.

Sure, leftpad broke productivity for one day. But the continued willful ignorance we show towards "yet another dependency" will kill us one day.

There is work happening on package signing and notary-style infrastructure. Github/Microsoft is in a very good position to solve this. I hope they take the opportunity.


The sad thing is that it already happened 2 years ago [1]. And nothing changed apart from some general apathy on the interwebs.

The JavaScript ecosystem is past the point of no return right now. Any improvements will only be band-aids.

[1] - https://www.trendmicro.com/vinfo/dk/security/news/cybercrime...


My non-technical thoughts on the npm ecosystem:

1. Split package-lock into a human-readable file and a machine-readable one. Right now it's just pain.

2. TRUSTED PACKAGES. Somebody has to assign trust. Companies or highly involved developers should become curators.

3. LIMITED packages with reduced security risk: file access, network access, package access, OS access, etc. (see the sketch after this list).

4. Transition npm package best practices into a single module format. Reduce clutter and the number of files in node_modules.

5. Better tagging. Allow votes on tags.

6. Add monetization links for package authors to npm (blunt) and to the CLI (subtle, e.g. 'npm pay leftpad' opens the monetization page on npm).

Additional ideas:

A: Buyable certificates/insurance for packages.

B: Better clone, modify & re-publish workflow

C: Integrate FAQ/Forum directly in NPM directory website. Simple downvotes/upvotes for visibility. Allow and ask for readme pull requests from npm package page.
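On point 3 above, a purely hypothetical sketch of what a per-package capability declaration could look like in package.json -- nothing like this exists in npm today, and the "capabilities" field is invented:

    {
      "name": "some-package",
      "version": "1.0.0",
      "capabilities": {
        "fs": false,
        "net": false,
        "child_process": false,
        "env": ["NODE_ENV"]
      }
    }

The runtime or registry could then refuse to load packages that reach beyond what they declare, Deno-style.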


If packages are "trusted", how would NPM ensure that they remain trusted as new versions come out? I've read those stories about packages whose developers hand them over for maintenance, and then a new update comes out with malicious code.


M of N signatures on new version tags, or on ranges of commits?


Hate to break it to the author, but npm has been a raging dumpster-fire for over 4 years now. My personal advice on how to live with it:

- Assume you will have to rewrite your app in a year or less from now and organize it as such. This won't be easy when dealing with release deadlines that result in all-nighters full of spaghetti code. Whether you're doing react, vue or angular, just keep your components and logic small, and if you need a package (say, for handling CSVs), then look for one that promotes itself as having no dependencies. Keep in mind that while rewrites can be a taboo subject to many managers, they can actually be the fastest way to solve deep dependency issues.

- Unless your framework requires it, try not to lock your package versions; this is a very common bad practice in my opinion. ```npm audit``` is your new best friend; make updating to latest part of every sprint (see the commands after this list). If you're just 3-4 months out of date these days, you're likely to have over 40 security vulnerabilities. If you're 6 months out of date on a public-facing application, you're not gonna pass a security audit. Compliance requirements, like SOC2 or healthcare ones, are getting a lot more thorough as well. It's a lot less work to update continuously.

- Secure your devops. Any npm package that runs has full access to your entire disk and network I/O. The cheapest way to harden security is to move your devops to aws or gcp. They can afford to audit their networks and OSes a lot more often than you can. If you're doing it alone or on a very small team, I'd recommend trying out aws-amplify or firebase hosting on gcp.

- If you're doing node on the backend, go full typescript. I think it's easier to adopt a "strongly typed" mentality when your logic is clearly defined. If you don't feel confident in your use of typescript, then don't use it in the front end; you're gonna have a bad time, especially with state management in React. However, it is getting more mature, and package maintainers are learning to avoid type issues when updating their methods.

- And last, when picking a new framework or boilerplate or package, run npm audit on it first then make sure to read up on their github issues as much as you possibly can. It's very important to get not just a technical feel but a social one too on how their devs react to their issues.
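The commands behind the second point, for reference (exact output and fix behavior vary across npm versions):

    npm audit        # list known vulnerabilities across the whole dependency tree
    npm audit fix    # apply updates that stay within your declared semver ranges
    npm outdated     # show which dependencies have fallen behind their latest releases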


Aside from the "raging dumpster-fire" - I personally have benefitted greatly from NPM, and many of its problems are likely common in other package ecosystems/managers - I do find myself nodding in agreement with all your points.

"Assume churn as inevitable and be organized for it" is a good mindset to stay sane on the leading edge, especially in the world wild web. The Jenga tower of dependencies will move around, break things every so often - changing interface for sometimes seemingly no reason other than preference. Major shifts in foundation, architecture, patterns, standard libraries, or organizational concepts are a regular occurence.


Javascript babies everywhere


A lot of the results of this analysis are a consequence of actually good UX in dependency management.

In NPM, publishing a library is really easy. To compare with Java, I recently had to follow a [three-part medium article](https://proandroiddev.com/publishing-a-maven-artifact-3-3-st...) to publish. NPM: one login, one CLI command, and you're done. That lowers the bar to create a library; there are a lot of low-quality libraries, but there's also much more diversity and some really cool libraries. In Java I've seen a lot of copy-pasting. In Node you just create a library. You can worry a lot about 1k dependencies, but a lot of those will be one-liners.

In Node, it's basically impossible to have conflicts between dependencies. Again, compare with Java, where namespaces are global: if two libraries share the same package name, they will collide. That makes deep dependency trees an actual liability. I've seen recommendations not to use guava if you are writing a library, because they change the API so much that it will almost certainly cause problems if your user wants to use it as well; so much for a library. In Node, in contrast, if two libraries require different versions, each gets its own copy of the dependency. That has a cost in space, but it saves hundreds of hours of conflict resolution. You can even have two different versions of the same library under different names, in case you need to partially migrate a project.
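Roughly, npm sidesteps conflicts by nesting, so both versions coexist on disk (simplified sketch of the layout):

    node_modules/
      lib-a/
        node_modules/
          shared-dep/    <- v1.x, as required by lib-a
      lib-b/
        node_modules/
          shared-dep/    <- v2.x, as required by lib-b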

Making it easy to create and consume libraries has of course a clear consequence, more deep and big dependency trees.


The solution mentioned at the start of the article is what distribution package maintainers have been doing for decades. I would take this account as a giant flashing warning sign of what could happen to the flatpak/snap ecosystem, and also a flashing warning sign for languages that are developing their own package manager, especially if it is difficult for packages from the language package manager to be integrated into actual system level package managers. The curation effort required to get NPM into manageable shape is almost certainly beyond the scope of any traditional community effort.


Not having a package manager for your language means you're back to CMake and the "FindBoost", "FindCuda", "FindMKL" scripts that always fail for one reason or another.

It encourages reinventing the wheel instead of standing on the shoulders of giants. For smaller languages, having everyone implement a JSON parsing library because there is no dependency management would be a significant hurdle that might kill off the language before it's even started.

System package managers are a pain to deal with as a developer. When adding CI to my projects, I'd rather deal with the language package manager that works on Windows, macOS, and Linux than with brew + msys2/pacman + apt-get, and some C/C++ dependencies are just unavailable (looking at you, recent BLAS/Lapack on Windows).

It also significantly streamlines the documentation. Every step you add beyond `git clone` or `foo install` is an extra hurdle for package adoption.

Lastly, the language community's needs are often at odds with the system administrator's needs. You can't expect the distro package manager to deal with every single package in the R community, for example, each of which analyzes data in a slightly different, niche way. This was exemplified by the recent rant about Haskell package management in Arch Linux (link dead, but it was here: https://dixonary.co.uk/blog/haskell/cabal-2020)

Now, I'm not saying that system package managers are bad; I go out of my way to integrate all the Data Science libraries I use into my system: https://github.com/mratsim/arch-Data-Science. But that's because there I wear my sysadmin/end-user/package-consumer hat.


> Not having a package manager for your language ... encourages reinventing the wheel instead of standing on the shoulder of giants.

Isn't writing a new package manager from scratch a pretty clear example of reinventing the wheel, especially when these package managers often miss vital features that system package managers have had for 15 years?

https://wiki.debian.org/SecureApt


You link a package management solution that works for a single OS and wonder why programming language libraries don't just use that?

So, I have a useful library that I want to share with the world. I do what, exactly, so that other people across platforms can use it?


Your comment is a beautiful demonstration of the problem -- a lack of understanding of the existing tool drives a desire for a new, different tool.

There is absolutely nothing about apt or dpkg that is OS-specific, other than requiring a POSIX-like environment. The same goes for rpm and yum. And pacman.

At the end of the day, these package managers all do the following: 1) describe the dependency relationships of packaged software, 2) describe where to extract files, and 3) describe any actions that need to be taken prior to or after extracting files.

There's nothing stopping you from installing .debs with dpkg on a Red Hat system, nor .rpms with rpm on a Debian system, aside from conflicts when both package managers try to put files in the same place.

This same conflict occurs when language package managers come into the picture, because, newsflash: someone else has been working on the same fucking problem for decades, and language package managers fall somewhere between just ignoring their work and actively hampering it.


Can you give an example of a project, maybe a programming language ecosystem, that unifies some sort of publish-once, access-anywhere package system that uses those tools and works across Linux, Windows, and macOS?

I don't understand what that would look like in practice. Tools like Homebrew and MacPorts don't even do what you are asserting is the obvious, trivial way to do it. And I have a feeling it isn't just because my microbrain doesn't get it.

For example, I don't see how your post pitches something different than what I used to have to do with C/C++ deps and CMake back in the day, and those were not the heyday of dep management.

Also, enough with the animosity. When you find yourself registering a new account because you want to sneak in some venom when talking about software dep management on an internet message board, what are you doing?


Gentoo prefix [0] can run on a wide variety of operating systems and distributions. It needs a few updates to work around the recent changes in macOS permissioning, but if it can find a familiar shell it will more or less work as expected.

I think part of the above comment is expressing frustration that part of the reason for the lack of wide portability of these programs is precisely that people say "oh, they can't do x" or "this is operating-system specific", taking it as axiomatic rather than realizing that, with a bit of engineering time and effort, they can be extended to work on other systems. This especially since there is pretty much no language that is used entirely by itself, without dependencies on another at some point. If your language has an FFI, then you need a real package manager to accompany it.

[0] https://wiki.gentoo.org/wiki/Project:Prefix


Nice to see some actual numbers and research on the topic. Most complaints about NPM seem to fall into the category of fretful, emotional hand-wringing (although there's some of that here too).

If we measure success by the number (not the percentage) of high-quality packages available, NPM is easily the best package management ecosystem yet. But there's definitely room for improvement.

One major improvement I'd like to see is some solution for unwanted indirect dependencies. If I depend on package A, and A provides features X and Y, and I only need it to do X, and the code for doing Y depends on package B, then it shouldn't download B. The recommended solution is to split up A into a family of packages (like lodash's per-method packages), but that just creates a mess of versioning that's annoying for both the authors and the consumers. And those massively-multipackage libraries (like Babel) just seem to create dependency hell situations that semantic versioning is not good enough at solving. So maybe instead of package splitting, the next-gen package manager could have first-class understanding of ES6 modules, with tree-shaking logic to prevent unused indirect dependencies.
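A sketch of the idea with hypothetical file and package names -- a module-aware package manager could resolve the import graph before deciding what to download:

    // package-a/index.js
    export { x } from "./x.js"; // self-contained
    export { y } from "./y.js"; // internally imports package-b

    // consumer.js
    import { x } from "package-a";
    // Static analysis shows y (and therefore package-b) is unreachable
    // from the consumer, so package-b never needs to be fetched.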


I think that a lot of ideas here are really good, but they shouldn't be implemented in npm. Npm is not meant to be curated. NPM is meant to be a canonical way to reference a dependency, and that's about it.

I don't think it's NPM's role to assign "quality" to a package (even though, ironically, they have a quality metric) -- that's something a third party should do on top of npm.

That's the unix philosophy in me speaking.


For most other languages the package ecosystem is a competitive advantage; Perl had CPAN; Python famously comes with batteries included, but is used where its batteries don’t matter (numpy and friends, django); rubyists will go on about their gems for as long as you care to listen. JS is the only ecosystem I know of where the libraries are something that scares people away from the language. My own opinion is that npm is not state-of-the-art, and there is a culture problem (starting with wrapping single statements in a library — leftpad is most famous, but wrapping a single regex in its own, incompatible wrapper seems to be an accepted way to do things, too). The culture problem may be unsolvable.


Some other salient differences with Perl and Python:

- Perl and CPAN are tightly integrated, whereas NPM and JavaScript are not. Perl without CPAN is unthinkable; JavaScript without NPM is rather pleasant. A huge amount of JavaScript culture, best practices, and idioms existed before NPM and therefore there’s much less cultural cohesiveness on “standard” third party libraries, etc. It’s a different story with Node specifically but the horse already left the barn by then.

- many of the “batteries included” in Python are from a rich standard library that JavaScript/Node lacks. So there isn’t a need for a third party library to implement extremely basic functions like you see in Lodash or leftpad, and for pip packages there’s a much deeper base of code/API standards to draw standards and idioms from.


I'm of the opinion that most of lodash should just be incorporated into vanilla javascript, as part of its standard library. It could be done similarly to Python (or any language, really), and just group each collection of functions by usage (like itertools in Python groups tools for iteration).

Once people stop feeling the need to reach for third-party packages to accomplish day-to-day tasks, maybe there'll be more freedom to move away from the way npm works today.


> lodash should just be incorporated into vanilla javascript, as part of its standard library

That's already happening. More and more of lodash/underscore/etc's functions are being introduced to Array/Object/String APIs.

https://github.com/you-dont-need/You-Dont-Need-Lodash-Unders...


> Perl and CPAN are tightly integrated

I've been a CPAN author for over 20 years. They're actually not integrated. I guess I'd use the word cooperative.

The "cpan" utility is used to install them locally, but you don't have to run that command.

Having said that, PHP took over web development because ISPs didn't want to run cpan for end-users on shared hosting, and the PHP standard library had 3000 APIs built-in.

> Perl without CPAN is unthinkable

CPAN has 194,000 modules, but you can live without them just fine. (CGI.pm was taken out of core for some shortsighted reason, so you will likely need CPAN for that now.)

But this thread is about JS and npm, so what can we learn from CPAN?

- an effort was made back in the day to limit the size of CPAN to a single CD, but that obviously failed

- most CPAN modules have few or no dependencies, so most programmers are comfortable installing and using them

- the Catalyst web framework has an insane number of dependencies, like a 1-hour download at 100 Mbps, and for that reason I don't recommend it

- I don't think it's allowed to delete a CPAN module, so we don't have rage-induced outages like leftpad


> They're actually not integrated. I guess I'd use the word cooperative

I should clarify that I was speaking informally in terms of the culture, not in terms of technology or formal infrastructure. Likewise with "Perl without CPAN is unthinkable" - I didn't mean it was impossible to be productive in Perl without CPAN so much as it's hard to envision Perl as a technology without it.

My overall point is that CPAN is nearly as old as Perl itself, and almost everyone who has ever written in Perl has used it extensively. This is very different from JavaScript, which was one of the world's most popular languages years before Node.js. I think this (along with general 21st century trends) is a major reason why NPM is so much of a disaster compared to CPAN.


I think the problems articulated in this article will make Deno attractive to many JavaScript developers. The standard Deno library has everything needed for building server-side dynamic websites.

But I think what will speed adoption of Deno is something that has been little-commented on: the runtime is a single-executable. This massively simplifies deployment.

Contrast that with Node, Python, Ruby and the way they spew out dozens, often hundreds, of tiny script files all over the place. Simple, uncomplicated deployment solutions (particularly for self-hosting) simply don't exist for many interpreted languages. Whatever you think of Deno, their decision to create a single-executable runtime will make it attractive to many developers.


Deno makes no sense to me outside of hobby projects, lambda functions, and other small components. Large projects with thousands of lines of code will be difficult to support with this paradigm. The security model is also not great. I applaud the effort, I think it is great, but command-line flags for security only work for small projects. For any large project you need something that is granular and per module/function. If you start going down that rabbit hole, you will end up with some kind of configuration file no different from package.json.
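For context, the flags in question look like this (real Deno flags; the hostname, path, and script name are examples):

    deno run --allow-net=api.example.com --allow-read=./data server.ts

As the comment says, that granularity is per-process, not per-module.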


Technically speaking, you could have a single "deno" npm package with all those APIs and you'd be in pretty much the same place, dependency-wise, no?


I have decided to use javascript strictly for frontend only, no more node.js or express or whatever else for backend. I will just use non-js backends these days, which are plenty.

It's sad that I still have to use npm even for frontend development nowadays, along with babel, webpack, and a matrix of configuration just to get started.

I now stick with jquery (yes, I know, but its concise syntax matters to me) and bootstrap, and they seem to work for 99% of my use cases. Super easy to get started with and maintain, and good enough for what I need.


The only problem I see is the lack of a comprehensive standard library for Node.js.


Ruby: x.is_a? Integer

PHP: is_int($x) (alias of is_integer)

Python: isinstance(x, int)

Node.js[1]:

    typeof val === "number" &&
    !(typeof val !== "number" || val !== val || val === Infinity || val === -Infinity) &&
    Math.floor(val) === val
I think all supported Node.js versions now have Number.isInteger (via es262), but the is-integer package still gets 200,000 downloads a week[2].
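For comparison, the built-in covers the same edge cases:

    Number.isInteger(42);   // true
    Number.isInteger(4.2);  // false
    Number.isInteger(NaN);  // false (the val !== val check above)
    Number.isInteger("42"); // false -- no type coercion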

[1]: https://github.com/parshap/js-is-integer/blob/master/index.j...

[2]: https://www.npmjs.com/package/is-integer


This actually taps into something I really like about npm. For things that "should be easy/obvious", there are usually edge cases, _especially_ around how `Object` is handled.

The real answer is for Node to have built-ins that handle this (which it now does for `Number.isInteger`), but absent that, I love that there are these specific packages that have unit tests and a simple API. People scoff at things like `isPromise`[1], but it's reliably filling a super important niche that the language doesn't handle.

[1]: https://github.com/then/is-promise/blob/1d547300e780a4eca391...


Maybe we should adopt these packages into es262 and make the shims available by default directly with the language?


You could have found a better example. The package you referenced literally just calls `Number.isInteger` unless you are on decade-old versions of nodejs.


It still gets downloaded and the code still runs, 200,000 times a week, irrespective of your node version.

Even when the language migrates to a standard (Number.isInteger), the rest of the ecosystem keeps depending on this package.


The Node.js core packages and e.g. express.js + middlewares on npmjs.com are the "standard library" for Node.js, developed cooperatively as part of the CommonJS effort back in 2010-12. Modular JSGI (connect/express) middlewares can target both Node.js's core http and express. Almost all modern asset-management pipelines run on npm (webpack et al.) anyway. What do you miss, and where should this mythical "standard library" come from, in your opinion?


This is why Deno is so promising.


The node.js standard library is quite extensive, including networking, cryptography, OS access, etc. What more do you want from a standard lib?


Well from https://www.npmjs.com/browse/depended we can see a few candidates:

- request, indicates that the built-in networking isn't working for everybody

- moment, since neither JS nor Node has very comprehensive Date manipulation

- express, again indicates that the built in networking library isn't working for everyone

- fs-extra, literally filling the gaps in the built-in fs package

- async, bluebird, these could easily be considered "standard library" material
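Some of those gaps have since closed in core; for example, recursive directory creation (one of fs-extra's main draws) has been built into fs since Node 10.12:

    const fs = require("fs").promises;

    (async () => {
      // Equivalent of fs-extra's ensureDir / mkdirp:
      await fs.mkdir("a/b/c", { recursive: true });
    })();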


I don't know. While I commonly use several of these it feels like this easily bleeds into questions of subjective "taste" and domain-specific libraries.

If you're not changing anything else about the management of these packages, what's the advantage of having request or express as part of the "standard" library?

And if you are changing the management of these packages, then maybe that's the solution with or without expanding the core library?

I think that Java for example has similar defacto-but-not-actual "standard libraries" that correspond to many of these domains.

I'm not convinced that a larger "core" is addressing the root problem.


All standard libraries get conveniences such as these built upon them. That is not the role of a standard lib, which is to provide a basic level of OS compatibility and some commonly useful data structures and utilities. The cost of standardization is rigidity, so they are typically kept thin.

I happen to not like the design choices of many of those libraries you mentioned and choose not to use them.

Express is an entire web framework. What other ecosystems have you used which would ship with these sorts of libraries?


Go's built-in http package is roughly equivalent to Express


I think a lot of people honestly don't know about them. I've met a lot of people (JS devs) who didn't know about fs, child_process, etc.

Which is a shame. child_process + promises (and a couple of listeners) is really handy for orchestrating things in scripts.
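A minimal example of that pattern (assumes a git binary on PATH):

    const { execFile } = require("child_process");
    const { promisify } = require("util");
    const run = promisify(execFile);

    (async () => {
      // Resolves with { stdout, stderr }; rejects on a non-zero exit code.
      const { stdout } = await run("git", ["rev-parse", "HEAD"]);
      console.log("current commit:", stdout.trim());
    })();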

I don't really do much web dev these days, but JS is really convenient for these sorts of things. I still use it for a lot of scripts.


A package depending on itself is just an accident (and probably something npm ought to detect on publish and reject, but alas...)

However, I believe a 1-package circular dependency is often the result of a developer mistakenly adding a package as a dependency when they meant to add it to `peerDependencies`:

e.g. in TFA: "babel-core depends on babel-register, which depends on babel-core... yeoman-generator depends on yeoman-environment which depends on yeoman-generator."

Peer dependencies, of course, no longer exist because it got deprecated because raisins.

Which leaves us with zero options to define sideways dependencies.


Where did you read peerDependencies was deprecated? It's still valid and recommended, AFAIK:

https://docs.npmjs.com/files/package.json#peerdependencies


I think they meant the behavior of not installing peerDependencies was "deprecated".


Right, I misspoke. However what's the point of calling it a "dependency" when you don't install it alongside?

"is this package 'healthy'?" is a good line of thought, but the proposed metrics are all less useful than just opening the source on Github or unpkg.com and giving the source code a quick skim read.

If it looks too scary for you to maintain, you shouldn't take it on as a dependency.

> It’s not really a technical problem, but mostly a social one.

This is absolutely correct.


You're right! That works great for the package itself, and it's what I do. But will you also chase down and open on github all that package's dependencies, and their dependencies, etc? It is that, more recursive problem that I'm trying to point at. If you are content to trust that the package maintainer chose their dependencies wisely, then this probably won't seem like a big deal -- and that's ok! Different projects have different requirements for how they assess third-party code.


I do check out dependencies, yeah – Octolinker makes it easy to do due diligence on reasonable libraries.

If it has a ton of dependencies, then I don't want to take it on/maintain it anyway.


unpkg may work, but I doubt many people know of that.

Most people are going to reach for Github, and let me warn you right now: the packages on NPM are not necessarily from the source code you may find on Github. I've run into this a number of times, where a package exists on NPM that has an entirely different Github repo than you may expect. In addition, people can freely create NPM packages that look official but are not. It's a huge security hole.

If a package contains only minified JS, then you're not going to be able to audit it. And it goes without saying, if you didn't minify the code yourself, how well do you really trust it in the first place? It's a huge tower of trust, and NPM solves none of it.


The problem with the javascript ecosystem is that the javascript ecosystem rejects the problems with the ecosystem.


Why why why is the default behavior of npm install to choose a range of versions rather than a single version.

I don’t want “a version compatible with lib 0.1.0” I want lib 0.1.0 ... package-lock is a strange bandaid over this.

If I want the code my app depends on to change, I’ll change it.


> I don’t want “a version compatible with lib 0.1.0” I want lib 0.1.0 ... package-lock is a strange bandaid over this.

Doesn't plain semver in your package.json dependencies already give you this capability?

    {
      "dependencies": {
        "foo": "2.0.1"
      }
    }

Should yield version 2.0.1 _exactly_. It's the use of modifiers like ~ and ^ that yield the fuzzy matching.

https://docs.npmjs.com/files/package.json#dependencies
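For reference, what the common modifiers match:

    "foo": "2.0.1"    // exactly 2.0.1
    "foo": "~2.0.1"   // >=2.0.1 <2.1.0  (patch updates only)
    "foo": "^2.0.1"   // >=2.0.1 <3.0.0  (minor and patch updates)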


I turn that off whenever I set up a new computer; I too want my packages to be exact. Ranges have only ever caused me pain, mainly around a package author introducing a breaking change in a minor/patch version (which I don't actually fault them for).
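Assuming "that" means npm's default caret prefix on install, the setting is:

    npm config set save-exact true   # "npm install foo" now records "2.0.1", not "^2.0.1"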

I see ranges as a "wobbly" foundation, and the more dependencies-on-dependencies you have, the "wobblier" your foundation becomes. I've spent hours tracking down some mid-depth dep that was being installed with a range, which had deps with ranges, which had deps with ranges, such that a fresh "npm install" or update caused a massive version jump in some very low-level library that DID have a breaking change that propagated up. It's all a house of cards.

Any package manager I use for software development, I lock down versions as tight as I can. If I have a problem, I'll go read the changelog, commit log, release notes, etc. for the library in question to see if they have a fix for the issue I'm having, but blindly hoping everyone follows semver to the letter and doesn't just miss something they didn't test for is laughable.


why bother with the package system at all then? Why not just copy/paste the code into your project?


Npm provides much more value than copy-pasting code! I think we all know that.

That said if I happen upon a possible dependency that's just one JS file and has a permissive license, I often do choose the copy-paste route.


Btw, nobody is forcing anyone to use any packages from npm. You can easily use whatever repository you want. You can store your dependencies in your source repository together with your application, or in another, entirely separate repo.

JavaScript is a huge, perhaps the biggest, software ecosystem in the world. There is a vast amount of code out there, and you are going to get some bad stuff if you are not careful. This does not mean it is bad at the core. It means that benefiting from the crowd-sourced experience of the masses comes at the cost of doing security work upfront, if you care about this.


While I agree that NPM can be improved, most people who are complaining about the number of dependencies have no idea what they are doing.

If you install a package without the --production flag (which is the default), you get everything, including build packages like babel, linters, and whatnot. These dev dependencies tend to be huge. That is how it works. Similarly, if you install the dev version of a library in some other language, you get not just the source but perhaps hundreds of megs more of other source. What is the problem?


This analysis does not include devDependencies. The counts and statistics in it relate to strict dependencies only.


What if npm packages were re-coded by popularity? In the sense that any package that has achieved high connectivity and popularity must eliminate as many dependencies as possible in favor of pure JS code. But then again, that would cause code bloat. Dan Abramov had a talk about abandoning Clean Code. It is kind of strange that a simple React app nowadays has a node_modules folder with 1GB of files, but I guess we got used to it.


If I install React it's because I trust React. If React installs 10000 packages, I see no problem because I assume that React knows what they're doing, otherwise I wouldn't trust them and I wouldn't have installed React in the first place.

If I don't like deep dependency trees, then I install packages that I trust not to use deep dependency trees (or rather graphs since they can have cycles).


I refuse to use npm at all. Too risky.

I only use libraries if they can be added as a first-order dependency (i.e. in a script tag on the page). This is still risky, but I can mitigate it somewhat. And at least I'm not giving a bunch of random maintainers commit-to-prod access in my core code.


Been looking for an mDNS browser package and am still not sure how to filter out Node-only packages on NPM.

Cyclical dependencies are a feature, not a bug. Many languages don't allow such a feature to exist at all with their compilation/interpreter models.


What is a good usecase for circular dependencies?


Compilers/transpilers seem like a prime example. For example, imagine Typescript vX using utility-library-a, which itself is written in Typescript and has Typescript vY as a dev dependency.


You're right, that is a perfectly valid occurrence! However, I did not count dev- or peer-dependencies in cycles; only true-blue dependencies. So all the cycles I counted were either not of that type, or were failing to use devDependencies -- which would also be unfortunate.


That's fair. Does npm have a way to declare build vs runtime dependencies? That would at least let you qualify that kind of thing (although only somewhat). (Sorry if this is a noob question; I'm interested in package managers, but I've never meaningfully worked with NPM specifically, so I don't know some of the local context.)

EDIT: Looks like that's devDependencies? So we can at least sometimes prune the runtime dependencies, which is nice.

EDIT2: Actually, reading more, I'm not sure devDependencies is quite the same, although it's similar.


Problem domains where you have tricky algorithms with weird edge cases, like complicated graph searches/traversals, computational geometry, linear algebra. You'll often compose your algorithms out of other algorithms, pulled in as dependencies. Then your algorithm can be used by an upstream dependency if it resolves some weird edge case.

Things like first class functions or programming with combinators naturally leads to those kinds of dependency graphs, it's basically a form of recursion.



