
I think this is a serious set of reasonable thoughts about the incident, and don't want to demean the article.

That said, I wish more people would talk both sides. Yes, every dependency has a cost. BUT the alternatives aren't cost free either. For all the ranting against micropackages, I'm not seeing a good pro/con discussion.

I think there are several lessons to be learned here (nixing "unpublish" is a good one, and I've not been impressed with npm's reaction there), the most important of which is probably that we should change our build process: dev should pull in updates freely to maintain the easy apply-fixes-often environment that has clearly been popular, those versions should then be pinned once they go past dev (to ensure later stages are consistent), and we should have some means of locally saving the dependencies to reduce our build-time dependence on package repos.

Sadly, though, I've not seen a lot of discussion on a reasonable way to apply those lessons. I've seen a lot of smugness ("Any engineer that accepts random dependencies should be fired on the spot", to paraphrase an HN comment), a lot of mockery ("haha, look at how terrible JS is!"), and a lot of rants against npm as a private entity that can clearly make mistakes, but not much in the way of constructive reflection.

Clearly JS and NPM have done a lot RIGHT, judging by success and programmer satisfaction. How do we keep that right and fix the wrong?




I suspect you aren't seeing much discussion because those who have a reasonable process in place, and do not consider this situation to be as bad as everyone would have you believe, tend not to comment on it as much.

For the sake of discussion, here is my set of best practices.

I review libraries before adding them to my project. This involves skimming the code or reading it in its entirety if short, skimming the list of its dependencies, and making some quality judgements on liveliness, reliability, and maintainability in case I need to fix things myself. Note that length isn't a factor on its own, but may figure into some of these other estimates. I have on occasion pasted short modules directly into my code because I didn't think their recursive dependencies were justified.

I then pin the library version and all of its dependencies with npm-shrinkwrap.
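
For reference, the pinning step is just something like this (a minimal sketch assuming the npm 2/3-era CLI; "some-package" is a placeholder):

  # add the dependency after reviewing it, then freeze the whole tree
  npm install some-package --save
  npm shrinkwrap        # writes npm-shrinkwrap.json with exact resolved versions
  git add package.json npm-shrinkwrap.json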

Periodically, or when I need specific changes, I use npm-check to review updates. Here, I actually do look at all the changes since my pinned version, through a combination of change and commit logs. I make the call on whether the fixes and improvements outweigh the risk of updating; usually the changes are trivial and the answer is yes, so I update, shrinkwrap, skim the diff, done.
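
The update review pass might look roughly like this (a sketch; npm-check is the third-party tool of that name, and flags may differ by version):

  npm install -g npm-check
  npm-check             # lists outdated, unused, and missing dependencies
  npm-check -u          # interactive prompt to choose which updates to apply
  npm shrinkwrap        # re-freeze after deciding, then review the diff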

I prefer not to pull in dependencies at deploy time, since I don't need the headache of GitHub or npm being down when I need to deploy, and production machines may not have external internet access, let alone toolchains for compiling binary modules. npm pack followed by npm install of the tarball is your friend here, and gets you pretty close to 100% reproducible deploys and rollbacks.
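
Roughly, the pack-and-deploy flow might look like the sketch below; names and hosts are placeholders, and it assumes dependencies are listed under bundledDependencies so they end up inside the tarball:

  # build machine: produce a self-contained artifact
  npm pack                                    # emits something like myapp-1.2.3.tgz
  scp myapp-1.2.3.tgz deploy-host:
  # production machine: no registry or internet access required
  npm install ./myapp-1.2.3.tgz --production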

This list intentionally has lots of judgement calls and few absolute rules. I don't follow all of them for all of my projects, but it is what I would consider a reasonable process for things that matter.

[edit: I should add that this only applies to end products which are actually deployed. For my modules, I try to keep dependency version ranges at defaults, and recommend others do the same. All this pinning and packing is really the responsibility of the last user in the chain, and from experience, you will make their life significantly more difficult if you pin your own module dependencies.]


These practices may not be as widespread as I assumed, but this is how I've been doing npm dependencies for the last few years.

Originally we used to simply check in the node_modules folder.

Now I check in the npm-shrinkwrap.json (sanitised via https://www.npmjs.com/package/shonkwrap), and then use a caching proxy between the CI server and the real npm.

There's a bunch of choices available for this proxy, I've used one called nopar, but sinopia is also popular. Both Artifactory and Nexus can also be configured to do this, as well as act as caching proxies for a number of other package systems too.
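
As a concrete example, pointing a box at a local sinopia instance is roughly this (a sketch; 4873 is sinopia's default port, adjust for your setup):

  npm install -g sinopia
  sinopia &                                  # local registry that caches/proxies the public npm
  npm set registry http://localhost:4873/    # or put it in a project-level .npmrc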


If you aren't using something like nopar or sinopia, you're really doing it wrong - I mean, if you're taking the micro-dependency route, surely you're not building your application as one monolithic chunk of code, right? So you need somewhere to publish your private modules to, anyway.
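
In practice that just means publishing to the internal registry instead of the public one, e.g. (a sketch; the URL is a placeholder):

  # from the private module's directory
  npm publish --registry http://npm.internal.example.com/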


Essentially we're trying to figure out when it's appropriate for "my" code to become "everyone's" code, and if there are steps in between. ("Standard library", for example.)

One thing that would be useful to this debate is an analysis of a language ecosystem where there are only "macropackages", to see if the same function shows up over and over again across packages.


> One thing that would be useful to this debate is an analysis of a language ecosystem where there are only "macropackages", to see if the same function shows up over and over again across packages.

Look no further than C++, where nearly every major software suite has its own strings, vectors, etc. implemented, frequently duplicating functionality already implemented in (1) STL, and (2) Boost. I seem to recall that the original Android Browser, for example, had no fewer than 5 kinds of strings on the C++ side of the code base, because it interfaced with several different systems and each had its own notion of what a string should be.

The advantage (or disadvantage) of including common functionality in macro packages is that you can optimize for your particular use case. In theory, this may result in more well-tailored abstractions and performance speedups, but it also results in code duplication, bugs, and potentially even poor abstraction and missed optimization opportunities (just because you think you are generalizing/optimizing your code doesn't mean you actually are).

Clearly, we need some sort of balance, and having official or de facto standard libraries is probably a win. Half the reason we're even in this situation is because both JS and C++ lack certain features in their standard libraries which implicitly encourage people to roll their own.


> Look no further than C++, where nearly every major software suite has its own strings, vectors, etc. implemented, frequently duplicating functionality already implemented in (1) STL, and (2) Boost.

In many ways this problem exists because there used to be different ideas about how strings should be designed. Today we have mostly settled on UTF-8 and only convert to legacy APIs when needed. This is a bad comparison because it's literally caused by legacy code; C++ projects cannot avoid having different string classes because of that.


As the author of Groups on iOS and the Qbix Platform (http://qbix.com/platform) I think I have a pretty deep perspective on this issue of when "my thing" becomes "everyone's thing".

When a company goes public, there are lots of regulations. A lot of people rely on you. You can't just close up shop tomorrow.

When your software package is released as open source, it becomes an issue of governance. If you can't stick around to maintain it, the community has a right to appoint some other people. There can be professional maintainers taking over basic responsibilities for various projects.

Please read this article I wrote a couple years ago called the Politics of Groups:

http://magarshak.com/blog/?p=135

This is a general problem. Here is an excerpt:

If the individual - the risk is that the individual may have too much power over others who come to rely on the stream. They may suddenly stop publishing it, or cut off access to everyone, which would hurt many people. (I define hurt in terms of needs or strong expectations of people that form over time.)


For workaday engineers (e.g., not people attempting to build distributed libraries), the appropriate question about dependency management is, "under what conditions is it appropriate for a person or group outside of your org to publish code into your project?"

Note that this isn't a cost-benefit approach. It simply asks what needs to be the case for you to accept third-party code. Every project is going to answer this slightly differently. I would hope that running a private repo, pinning versions, learning a bit about the author and some sort of code review would be the case for at least most, but apparently many folks feel OK about the unrestricted right of net.authors, once accepted into your project, to publish code at seemingly random times which they happily ingest on updates.

A lot of coders seem to see only the "neat, I don't have to write a state machine to talk to [thing]," or "thank god someone learned how to [X] so I don't have to." But that, combined with folks who don't manage their own repos or even pin things, leads to people whose names the coders probably don't even know essentially having commit privileges on their code.


> the appropriate question about dependency management is, "under what conditions is it appropriate for a person or group outside of your org to publish code into your project?"

>Note that this isn't a cost-benefit approach.

Eh, it is a cost-benefit approach if the appropriate circumstances are "where the benefit outweighs the cost/risk." And I don't know any good way to answer it without doing that.

What are your own examples of specification of such circumstances that don't involve cost-benefit?

Unless you just decide, when asking the question like that, "Geez, there's no circumstances where it's appropriate" and go to total NIH, no external dependencies at all (no Node, no Rails, no nothing).


Well, in that everything is eventually a cost-benefit decision, sure.

What I was getting at is that "under what conditions" is a baseline gating requirement that needs to reflect the nature of what you're building. If I'm building something that is intended to replace OpenSSL, my gating conditions for including third-party code are going to be a lot different than if I'm building what I hope is the next Flappy Bird high-score implementation.

People have all sorts of gating functions. I don't write code on Windows, because I never have and see no reason to start. Baseline competency of developers is another one. You can view baseline requirements as part of the cost-benefit if you like, but generally, if the requirements need to change to make a proposed course of action "worth it", you probably need to redo the entire proposal because the first one just failed. (If your requirements are mutable enough that ditching some to make the numbers work seems acceptable, I have grave doubts about the process and/or the decision making.)

I basically included my own examples already: a private repo to freeze versions in use and host our own forks when appropriate, code review by at least one experienced developer of any included modules, and a look at the developer[1].

[1] This is pretty fuzzy (the process, not necessarily the developer). I generally google-stalk them to see what else they've worked on, if there have been public spats that lead me to believe the future upgrade path might be trouble for us, whether they have publicly visible expertise in what they're coding, etc. Basic reputation stuff, plus evidence of domain expertise, if appropriate.


So when you say "code review by at least one experienced developer" as a condition, you really mean "code review AND the reviewing developer thinks it's 'good enough'", right? Presumably the same goes for "a look at the developer": the condition isn't just that someone is looking at the developer, it's that they are looking and satisfied.

Without being clear about that, it can sound like your conditions are just about having a certain internal _process_ in place, and not about any evaluation at all -- if the process is in place, the condition is met. Rather you're saying you won't use any dependencies without reviewing them more than people typically do and deciding they are okay.

The problem of HOW you decide if something is okay still remains; although the time it takes to review already means you are definitely going to be using fewer dependencies than many people do, even before you get to the evaluation, just by having only finite time to evaluate. And you'll be especially unlikely to use large dependencies, since they will take so much time to review. (Do you review every new version too?)

And you still need to decide if something is even worth spending the time to code review. I'd be surprised if expected 'benefit' doesn't play a role in that decision. And, really, I suspect expected benefit plays a role in your code review standards too -- you probably demand a higher level of quality for something you could write yourself in 8 hours than for something that might take you months to write yourself.

What you do is different from what most people do, in that you code review all dependencies and make sure you have a private repo mirror. I'm not sure it's really an issue of "conditions instead of cost-benefit analysis", though. You just do more analysis than most people, mainly, and perhaps have higher standards for code quality than most people. (Anyone doing "cost benefit analysis" probably pays _some_ attention to code quality via some kind of code review, just not as extensive as yours and without as high requirements. If you're not paying any attention to the code quality of your dependencies, you probably aren't doing any kind of cost-benefit analysis either; you're just adding dependencies with no consideration at all, which is another issue entirely.)


I'm actually having difficulty seeing if there's a real disagreement here or if we're arguing semantics. And for the record, you are correct that, for instance, I don't ignore what was learned in code reviews; I assumed I didn't need to spell everything out, but you're correct that I'm not endorsing cargo-cult code review or whatever.

Try this. Say you want to build a house. You plan it out, and while doing so write requirements into your plan. One is that you want zombie-green enameled floors, and a second is that it needs to comply with local construction ordinances.

Neither of those are costless. It is possible that your green floor related urges are something you'll compromise on - if it is too expensive, you'll suffer through with normal hardwood. This, to me, looks like a classic cost-benefit tradeoff - you want to walk on green enamel, but will take second-choice if it means you can also have a vacation instead of a bigger mortgage.

The building code requirement looks different. While you can certainly build your home while ignoring codes, that is usually not a very good strategy for someone who has the usual goals of home ownership in mind. Put a different way, unless you have rather unusual reasons for building a house, complying with building codes is only subject to cost-benefit insofar as it controls whether the home is built at all.[1]

Does this better illustrate where I'm coming from?

When one begins a coding project, hopefully one has a handle on the context in which it is to be used. That knowledge should inform, among many other things, expectations of code quality. Code quality in a given project is not just the code you write, but also the code you import from third parties. If you don't subject imported code to the same standards you have for internally produced code, you have a problem, in that at the least you don't know whether it is meeting the standards you set.[2]

So… I agree that, from one perspective, every decision is cost-benefit, including eating and putting on clothes when you leave the house. I think it is useful, however, to distinguish factors for which tradeoffs exist within the scope of succeeding in your goals from factors that amount to baseline conditions for success.

As an aside, if you wanted to jump up and down on this, I'm surprised you didn't take the route of asking how far down the stack I validate. What's _really_ running on that disk controller?[3]

[1] I do hope we can avoid digressing into building code discussions.

[2] "I don't care" is a perfectly fine standard, too, depending. Not everything is written to manage large sums of money, or runs in a life-or-death context, or handles environment control for expensive delicate things, or... I absolutely write code that I'm not careful about.

[3] http://www.wired.com/2015/02/nsa-firmware-hacking/


PHP's Composer, for example, writes a "composer.lock" file tracking the exact versions/commit SHAs that were recursively resolved at "install" or "update" time, so you can commit that after testing a "composer update" and be sure you stay on tested versions; others can then do a "composer install" based on the lock file instead of the composer.json "spec" file (which may contain version wildcards).
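
For anyone unfamiliar with the Composer flow, it's roughly this (a sketch):

  composer update           # resolves constraints in composer.json, writes composer.lock
  git add composer.json composer.lock
  # later, on CI or another machine:
  composer install          # installs the exact versions recorded in composer.lock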


npm has a similar feature; it's called a shrinkwrap.


Maybe more encouragement of caching, where npm downloads into places that can be checked into source control. Then you can have the beginner-friendliness of npm, but you are only relying on it being up when you want to add or update a package.

Pretty much the only thing needed to accomplish this is to discourage the use of the global flag, plus a few strong notes in the documentation and tutorials to check in your node_modules folder, reminding people that if you don't check it in, you have established a build-time dependency on npm being up and correct.
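
Concretely, that workflow is just something like this (a sketch; "some-package" is a placeholder):

  npm install some-package --save      # local install, no -g, lands in ./node_modules
  git add package.json node_modules    # vendor the resolved code into source control
  git commit -m "Add some-package"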

edit: corrected spelling


> I think there are several lessons to be learned here.

Indeed. Unfortunately, they've mostly been learned by others before this, and ignored or forgotten. If you want to see how these issues have been dealt with in the past, look to a project that's been doing the same thing for much longer, such as CPAN and the Perl module ecosystem. It's been around for quite a while, and is very mature at this point. Here's a few features that I would hope other module ecosystems would strive for:

- Mirrored heavily, easy to become a mirror. [1]

- Immutable by default; older packages are migrated to a special archive version of the system, which does not delete older packages. [2]

- Indexed in multiple ways (author, name, category) and searchable. [3]

- All modules tested regularly on a matrix of multiple operating systems, language versions, and module releases. [4][5] There are also often a few different architectures tested to boot (IBM/z, SPARC), but that may be quite a bit less regular.

- Static analysis of modules, offered as a service to authors to help them identify and fix potential problems, with the results reported publicly. [6]

- Module installation defaults to running all tests and failing to install if tests fail.

- The ability to set up a local private CPAN mirror for yourself or your company (you can peg your cpan client to a specific mirror). [7]
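
For example, a minimal local mirror with CPAN::Mini looks something like this (a sketch; paths and the mirror URL are placeholders):

  cpanm CPAN::Mini                      # or: cpan CPAN::Mini
  minicpan -l ~/mirrors/minicpan -r http://www.cpan.org/
  # then point the cpan client at it, from within the cpan shell:
  #   o conf urllist file:///home/me/mirrors/minicpan
  #   o conf commit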

Undoubtedly there are features of other module ecosystems that CPAN doesn't do, or doesn't do as well, and could look to for a better way to accomplish some things, but that's the whole point. None of this exists in a vacuum, and there are plenty of examples to look to. Nothing makes any one system inherently all that different than another, so best practices are easy to find if you look. You just need to ask.

1: http://mirrors.cpan.org/

2: http://backpan.cpantesters.org/

3: http://www.cpan.org/modules/index.html

4: http://cpantesters.org/

5: http://cpantesters.org/distro/P/Path-Tiny.html

6: http://cpants.cpanauthors.org/ (CPANTS is distinct from CPAN Testers)

7: http://blogs.perl.org/users/marc_sebastian_jakobs/2009/11/ho...



