Backdoor in event-stream library dependency (github.com/dominictarr)
1034 points by cnorthwood on Nov 26, 2018 | 491 comments

I really have a hard time putting as much blame on the author as the people in that Github thread are doing. Maybe they could have handled this specific issue a little better, but the underlying problem is just one of the flaws in the open source community that everyone has to accept. Maintaining a project is a lot of work (even just having your name attached to a dead project involves work) and the benefit from doing that work can be non-existent. If the original author has no use for the project anymore and someone offers to take it over from them, why should the author be expected to refuse? Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

Especially given that Dominic maintains hundreds of packages[1].

He's a good guy and a great developer, and he's always been supportive of others who want to help out. When working on Scuttlebutt[2] I got some flak for a change I made and he said this[3]:

> I argued against this before, but @christianbundy wanted to do it, so I figured okay do it then. Maybe it is annoying for everyone (in which case christian probably learns for experience) or maybe it turns out to be a good idea! in which case we all do. Either way, shooting down* something someone wants to do (that doesn't have a strong measurable reason) leads to a permission based / authority driven culture. And that isn't what I want. This may make ssb less stable... but I'd rather have less stability but more people who feel like they can work on it / are committed to it.

>* "shooting down" is not the same as negotiating a better solution that addresses everyone's needs. I think that's a good thing, plus being good at negotiating is a very good skill.

[1]: https://www.npmjs.com/~dominictarr [2]: https://scuttlebutt.nz [3]: http://viewer.scuttlebot.io/%25%2B2WIaJ%2BoRVURoTAke9YWuJzp%...

Dominic is wrong. If there's no authority, then there's nobody taking responsibility. This is a perfect example of how lack of organizational structure simply does not work in the real world. Dominic's other projects like scuttlebutt are likely doomed to fail as well because of his wrongheaded views about organization.

For a successful counter-example, one can look at the well-structured, hierarchical organization behind Linux. Lieutenants gain authority based on the merit of their contributions, and are responsible for reviewing the work of other developers. Authority works, has worked for thousands of years, and will continue to work for thousands more.

Linux has a giant user base, a giant installation base, and a giant pool of talented devs willing to take on unpaid work.

If this is an indictment of anything, it's an indictment of the entire NPM ecosystem -- it's been the wild wild west for years; haphazardly using whatever NPM install gives you is baked into the culture.

Sure, Dominic is an active participant in that culture, but it seems to me that it is impossible to have a largely unmoderated volunteer system with as many packages as are actively used without things like this happening.

Keep in mind, this is a case where the system worked, more or less -- an observant user caught the issue, and made a public issue of it. Who knows how many packages have slipped by like this?

> Linux has a giant user base, a giant installation base, and a giant pool of talented devs willing to take on unpaid work.

And also important: it even has a giant number of paid maintainers, for whom this is their main job.

For those, the incentive to continuously maintain things is different than for someone who gets nothing except more work out of it.

> Linux has a giant user base, a giant installation base, and a giant pool of talented devs willing to take on unpaid work.

Linux didn't always have a giant user base, and it wouldn't have gotten there without strong leadership having a sense of pride and responsibility.

"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things)."

I'm curious if anyone knows how it really has developed. Has anyone documented the history? Do I understand correctly that right now it's a hierarchy with Linus at the top and levels of "Lieutenants" managing increasingly more detailed levels of subsystems. How was development organized before? Are the developers that are paid mostly on the top or the bottom of the hierarchy? Are the proportions of paid developers very distributed among different companies or does a major portion of them belong to one company?

> other projects like scuttlebutt are likely doomed to fail as well because of his wrongheaded views about organization.

Define failure. I don’t know Dominic and I haven’t looked into the Scuttlebutt project beyond being aware of its existence and what it is, but...

He talks about creating a community where anyone is welcome to contribute.

It is perfectly fine for an open source project to have the development process and the community as its raison d'être.

Just because a project doesn’t outcompete every single alternative doesn’t mean it’s failed.

Just because a project isn’t even used by more than a handful of people doesn’t mean it’s failed.

It all depends on what the goal of the project was in the first place, and what the goal of the contributors are.

A security hole or backdoor is always a failure in a software product. Having someone else control your system is just about the worst you can get. It has nothing to do with relative definitions, because it always degrades every other objective the project could or does have. If a model of leadership leads naturally and often to security holes, it is time to reconsider the model. If it is a common library, then it is even worse. Open source that is used widely is _even_ more important to protect because the impact can be so much greater.

This sounds like "if it's not perfect, it's a failure."

I'm actually not aware of a single piece of software that didn't at some point have a production security vulnerability.

Facebook failed.

Google failed.

Amazon failed.

I don’t know, if I write open source software, the only definition of success I use is whether or not I had any fun writing it.

Anything else is gravy.

Lots of projects, including Windows and Linux, have had security holes that allow remote control of your system. A security flaw does not mean failure; it's always about weighing the cost of security flaws against the utility the software provides. Even a completely compromised system can provide utility to many users.

> He talks about creating a community where anyone is welcome to contribute.

Yes and in this case, that was exactly the problem.

> It is perfectly fine for an open source project to have the development process and the community as its raison d'être.

Conway's law is not an instruction manual.

The problem was that too few people contributed. A contribution to an open source project doesn't have to be a pull request. Glancing over the code you're pulling down rather than assuming the maintainer is infallible counts just fine. Very, very few people do that in the JS ecosystem though.

When a package gets 2m downloads a day and it still takes 2 and a half months to find a problem, a huge number of developers have failed to do their part.

> Dominic is wrong. If there's no authority, then there's nobody taking responsibility. This is a perfect example of how lack of organizational structure simply does not work in the real world.

Maybe, but the sort of people who are not jerks (i.e. who don't get too much pleasure out of having power) who are willing to lead for free are... very rare.

Fundamentally, non-commercial open-source needs to evolve organizational methods that require less leadership, just because there aren't a lot of good leaders willing to work for free.

But in the absence of a trusted, well-structured organization, authority rests on the shoulders of those using the library as a dependency.

I would rather see it as if contributors gain _trust_ based on their contributions, which is not necessarily the same as authority.

If a contributor's end goal is to publish a backdoor, then making them wait 0 or 100 commits to the project before trusting them doesn't change the end result.

In fact, if you had the energy to do the attack at all here (which took some work), having to fake trustworthiness doesn't require much more effort. Just look like a super enthusiastic contributor, put work into the readme, bike-shed over some issues every month, and bam.

What happened is that Dominic gave ownership to the only person who wanted it: "he emailed me and said he wanted to maintain the module, so I gave it to him. I don't get any thing from maintaining this module, and I don't even use it anymore, and havn't for years." At every single point, there was authority and responsibility. It is just that the authority turned out to be a bad actor.

It was a situation where the former authority was no longer interested in the role, since he gained nothing from it and had other work.

> that doesn't have a strong measurable reason

I think that's the key here - organization/hierarchy is important as a project scales up, but he didn't want to stifle a new contribution without good reason.

Scuttlebutt already works, people use it and are building on it.

> Lieutenants gain authority based on the merit of their contributions

Meritocracy is an outdated discriminatory practice.



Is there a showcase of open-source projects managed/developed according to this post-meritocratic system?

> Meritocracy is an outdated discriminatory practice.

Of course merit is judged subjectively (like anything else), but what exactly is the alternative?

In particular, I don't find anything actionable in that manifesto in regards to decision making.

To be clear, you're arguing that people should be judged on their identity and background rather than the merit of their contributions?

Skimming those links, I think the position is that people are already being judged on their identities before they're judged on the merit of their contributions, not that they should be.

The extent and congruence of the findings regarding, e.g., blinded vs non-blinded resumes alone should put the lie to the idea that a true "meritocracy" is something humans can actually meaningfully do at this point.

That's a contrived "no true Scotsman" right there. Also quite a few people argued that democracy should be replaced by dictatorship on similar grounds: "look at America, it's clear that democracy is shit".

Imperfect realization doesn't mean the ideal is invalid (except when that imperfection is inherent to the ideal). You can argue for more equal, more meritocratic community; tearing it down because it's not already perfect is disingenuous and destructive.

You are feeding a troll. Please don't.

> maintains hundreds of packages

Might be worth looking over some of these to see if he has also given access to them to some dodgy people, as most of them appear to be as old as event-stream and are probably just as unused by him. And if we are concerned, we should, he would advise, simply make a fork and write an email to npm to warn them. But he's a good guy, so don't blame him if you find that he's done this before in the other 422 packages.

Edits: I mean I do understand having software that's old and unmaintained, and it's fine and easy to hand off control to others - we developers are generally trusting of other developers. I wonder if he has also trusted others in this way before and that he has been exploited before without anyone knowing.

> And if we are concerned, we should, he would advise, simply make a fork and write an email to npm to warn them.

Well... yeah. If you don't trust him, don't trust him.

If you don't trust NPM's vetting, don't trust NPM's vetting.

npm does vetting?

NPM increased their efforts with regards to auditing. They realized it was a prominent attack vector and without them taking responsibility for some level of the problem they would be throwing their reputation down the drain. It isn't perfect, but it's a step to improving the situation.

Sweet! There are so many brilliant & creative people in the Node community. I'm positive there are some innovative ways to approach this problem.

> Especially given that Dominic maintains hundreds of packages

> maintains hundreds of packages

> hundreds

How can a single human reasonably and responsibly do this? This number alone demonstrates how sloppy and inept the Node.js community's practices are.

NPM packages are frequently tiny, single-purpose functions. The infamous left-pad module is only 47 lines of code. https://github.com/stevemao/left-pad/blob/master/index.js
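For a sense of just how tiny such packages are, the core of left-pad can be sketched in a few lines (a simplified illustration, not the actual module):

```javascript
// Simplified sketch of what left-pad does: pad a value on the left
// with a fill character (default: space) until it reaches `len`.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch !== undefined ? String(ch) : ' ';
  let pad = '';
  let remaining = len - str.length;
  while (remaining > 0) {
    pad += ch;
    remaining--;
  }
  return pad + str;
}

console.log(leftPad('5', 3, '0')); // "005"
console.log(leftPad('abc', 2));    // "abc" (already long enough)
```

That a function of this size gets its own package, maintainer, and release cycle is exactly the phenomenon being debated here.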

Node people closely follow the single-responsibility principle that is prevalent in Linux. By your logic, Linux is sloppy and inept too.

It's not.

Not to that degree.

Unix frequently espouses "do one thing", but how much counts as "one thing" is always open to interpretation.

And it's never been quite as small as e.g. left-pad. It's more something like `printf` or `cat`, which do "print a formatted string" and "concatenate files" as their "one thing", respectively.

Let me reiterate: The problem isn't "single responsibility", the problem is "single responsibility for half a thing".

The pieces are just too tiny, which leads to an explosion of packages, which leads to packages being badly maintained.

`cat` without any options support is possibly simpler than left-pad.

I think the big difference is that very few people set their systems to trust and receive updates of `cat` directly from the developers of `cat` (and same for all the other standard tiny utilities). Instead, they rely on a Linux distribution to vet the developers' code and to regularly pull in updates. Maybe there's room for a similar model of distributions vetting code in the npm ecosystem.

Do you really think cat is as simple as left-pad?


Also, cat is kind of an important primitive for building a functional system that uses files as one of its core abstractions. It makes total sense that it would (a) exist and (b) be well maintained by an authoritative and reliable source.

Left pad it is not.

> `cat` without any options

Interestingly, the `raw_cat()` function is 37 lines long.


UNIX `cat`, unchanged from V2 through V6 until it was rewritten in V7, was 44 instructions.

The core utilities are provided as a suite. The cat utility is one of a hundred others, all packaged together. Even without distributions, you'd only need to vet one organization for those basic utilities.

Add to that the fact that while the Unix way works well for connecting certain types of programs through pipelines and/or shell scripts, those programs themselves are usually written in C, Perl, Python, &c, which are languages with big standard libraries. It might be reasonable to conclude that while programs that conform to the Unix philosophy constitute useful building blocks for end-user or sysadmin tasks, more complex programs, and even those "building blocks" themselves, are better off in languages that provide a rather large, well-thought-out toolset built as an integrated whole, i.e. a standard library.

I don't follow your comparison. What parts of the Linux ecosystem have a single person maintaining a large multitude of packages?

Looking at the kernel itself you see a hierarchy where people are only responsible for small segments of the kernel.

Looking at a standard distro like Debian, which makes this easily available at https://www.debian.org/devel/people you'll see that most individuals are only responsible for a handful of packages, with the bulk having a team responsible for them.

I think with "single responsibility" he means the packages. So instead of complex packages that are hard to maintain they have lots of smaller packages that are easier to maintain. So a single person can maintain several packages.

Not saying that I think maintaining hundreds sounds like a good idea, just trying to point out the misunderstanding :) .

Handing over a popular software project to a random stranger you've never spoken to who asks to take it over is just irresponsible. Nothing is preventing that stranger from adding malicious code and potentially compromising millions of devices, which is exactly what happened.

Dominic is the same kind of random stranger to most of the people whose codebases he's in.

Sure, but he earned the trust of the community by building everything he did. It took time and effort, unlike that other stranger, which does give more credence to what he does.

Sadly, he didn't deserve that trust.

Have you ever created a package that people started to use, which you then had to maintain for purposes that didn't do anything for you and then no longer even need the package? Pretty much any rando offering to take over maintenance is going to be welcome to it compared to the dozens of requests and insults offered as bug reports.

Yes, I am in this situation right now, actually. I have a project with around 700 stars on GitHub with over a million downloads per month (according to PyPI) which I no longer have the time, interest, or willpower to continue maintaining.

I placed an open call to find a new maintainer months ago, and have received many requests. Every single request I've received has been roughly as shady as the request sent to event-stream. I've declined them all due to their shadiness. I know exactly what could happen (this situation right here) if I give it to the wrong person.

This is good maintainership right here!

For outsiders, it looks like the maintainer is greedy to keep the project to themselves. The bug reports pile up, and dozens of people are offering to maintain. And then the insults start, because it looks like you don't want to hand it over.

If you want to take over a project, you earn the trust of the current maintainer over time, be it through patches, reviews of existing PRs, finding security vulnerabilities, etc. Linux has gotten this right.

Thank you. Exactly.

Explain how you would evaluate the trustworthiness of the person you're transferring ownership to.

Actually, it's not as hard as you make it sound:

1. Don't transfer. Mark your repo as abandoned, and tell people to use that other person's fork if you have to.

2. Does that person have any other online presence? Long-running?

3. Does he also put his face in front of the crowd? Talk at conferences?

This creates accountability - someone like that wouldn't pull out the same kind of injection, because he could be caught for that and his "brand" would be destroyed.

This is a reasonable response. I'm not sure why you're downvoted, but I gave you an upvote.

At the very least, a few years of regularly contributing to Node projects on GitHub. The account he handed it over to has essentially zero history: https://github.com/right9ctrl. The one repo they have is just a copy of https://github.com/barrysteyn/node-scrypt.

I'm not against handing over projects, but some vetting has to be done.

If you can't evaluate the trustworthiness who you're handing the package over to, just don't hand it over. Mark it as deprecated and call it a day. This is Open Source 101.

GitHub has a read-only (archive) option, and the package managers of most languages have a way to convey that a project is abandoned or receives security fixes only.

You don't have to feel guilty about abandoning projects. Most of us do it, every day.

exactly, transferring a package to an unknown person is not only giving them access to the code but also to all the people who already trust that code

To be honest, that's kind of your fault for trusting that code. Heck, the commits were not even signed as far as I know; had they been signed, the change of "ownership" would have been clear.

Makes no difference if they were signed or not. He handed over the NPM package, not the GitHub repo. NPM doesn't show commit history or authors and has no connection to GitHub; you can easily have a GitHub repo for your NPM package with benign code in the GH repo and completely different, malicious code in the package.
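Since nothing ties a published tarball to the repo, the only defense is comparing them yourself. A hedged sketch (the file names here are hypothetical, and real audits would also diff file contents): given the file listings of an unpacked `npm pack` tarball and a checkout of the matching git tag, flag anything that exists only in the published package:

```javascript
// Report files present in the published npm tarball but absent from
// the repository checkout. This is roughly how the event-stream
// payload hid: malicious code shipped in the registry artifact
// without a matching commit in the visible source history.
function publishedOnly(tarballFiles, repoFiles) {
  const repo = new Set(repoFiles);
  return tarballFiles.filter((f) => !repo.has(f));
}

console.log(publishedOnly(
  ['package.json', 'index.js', 'extra-payload.js'],
  ['package.json', 'index.js']
)); // ["extra-payload.js"]
```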

> not the GitHub repo

He did that too actually.

> you can easily have a GitHub repo for your NPM package with benign code in the GH repo and completely different, malicious code in the package

This is one of the biggest issues that NPM has, along with not enforcing packages to be signed. If the package was signed this would not be an issue as people would see that the signer changed.

No. Transferring a project wholesale to an unknown maintainer is effectively a fork; I'd much rather have my dependencies die than start silently pulling in a fork. If I want to swap in right9ctrl/event-stream, I'll do it myself.

Why would someone update to a new version of a dependency if they don’t trust the new maintainer? Can’t you pin dependencies to a particular version with npm? In my world, you should have a really great reason to update a third party dependency—“there’s a new version” is not a sufficiently good reason.
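For reference, npm does support exact pins: a bare version string with no `^` or `~` prefix means that exact version and nothing else. A minimal package.json sketch (the versions and the second package name are illustrative):

```json
{
  "dependencies": {
    "event-stream": "3.3.4",
    "left-pad": "1.3.0"
  }
}
```

Note that this only pins direct dependencies; pinning the transitive tree as well requires a lockfile (package-lock.json or yarn.lock).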

> Why would someone update to a new version of a dependency if they don’t trust the new maintainer?

They didn't know there was a new maintainer, let alone that they didn't trust them.

> In my world, you should have a really great reason to update a third party dependency—“there’s a new version” is not a sufficiently good reason.

Because new versions come out with bugfixes _and_ security patches all the time. In my world (which I'll be clear is not npm), I update all my dependencies to the latest backwards-compatible version, to take advantage of any bugfixes or security patches without having to track every applicable one individually. If I spend a day hunting down a bug that had already been fixed in an upstream dependency, and I wouldn't even have been subject to it had I updated... that was a wasted day.

Also if I regularly update, I get security patches without having to have spent time being aware of every one.

This all applies to indirect dependencies (dependency of a dependency) along with direct ones too.

Some of these issues are general to any kind of dependency system. But the particular balance of trade-offs changes in different ecosystems. One thing that seems to be somewhat special in npm is how _small_ and _numerous_ the dependencies are. Npm ecosystem is built upon a large number of very small and discrete dependencies. This has advantages, but the big disadvantage that it becomes very difficult to 'manually' keep track of them, and easy for a bad actor to sneak something bad in.

Without locking in to a specific version you are trusting the maintainer in perpetuity. That trust extends to things like reviewing their own dependencies, reviewing pull requests, adding other maintainers, or just flat writing bad code. I don't blame the author any more in this instance than if any of those other possible examples occurred.

Security patches are part of my original point regarding why it is bad for a project to die. I have no data on this, but I have a feeling that there are a lot more unpatched security bugs in widely used dead projects than there are popular projects that are taken over by a malicious maintainer.

The simple fact of the matter is that you are responsible for all the open source code you run. If you allow the code to be updated automatically you are abdicating your responsibility to review that code as a trade for being relieved of the responsibility of keeping up to date with patches. You are the one liable when something goes wrong because you are the one who made that decision and misplaced that trust. That is an inherent agreement you make when you use open source code and is often spelled out explicitly in the license.

And yet, producing the software that nearly any employer or client in 2018 wants, at the price they want, requires using open source dependencies without manually reviewing every diff of every version released.

I don't really understand what you are trying to suggest.

Yes, whether we realize it or not, we are trusting the maintainers of our dependencies. Sometimes that trust is misplaced. That might lead us to try and avoid a particular dependency or maintainer in the future when it happens. And then it'll happen again.

We've built an ecosystem around using open source dependencies. Isn't that what the open source dream was? You might not like it, but not liking it doesn't make it realistic to develop software for money without using all sorts of open source dependencies for which you can't personally vouch for every line of code. I'm seriously not sure what it is you are trying to advocate for.

I am generally advocating for being an informed user of open source software. As for something more specific, I would echo the two suggestions from the original author in one of his Github comments:

>1. Pay the maintainers!! Only depend on modules that you know are definitely maintained!

>2. When you depend on something, you should take part in maintaining it.

You can't expect someone to maintain your code for you without you contributing anything in return and if something does go wrong in that situation you have no one to blame but yourself.

I suppose this is hard in NPM.

For example, if I have a project with 5 dependencies, this explodes to hundreds of tiny projects. These individual projects are almost always just a few lines of code, and there's no way anyone's going to become a donor for them.

Many major projects are well funded (webpack, etc.), but all it takes is one of those tiny packages to ship malware.

That isn't a problem. You pay the maintainers of the packages you use, and they pass on some of that money to the maintainers of the packages they use. If a problem like this one happens they'll lose a lot of customers, so they're incentivized to audit the code they're pulling down.

You only need to worry about the level immediately below yours.

A lot of users had it locked to a semver major version or major.minor version, which isn't necessarily trusting the maintainer "in perpetuity", just the current development track.

Is a change between maintainers a semver-major breaking change? Several people have suggested that that is a baseline that npm could easily automate/enforce. That would have at least sent a community signal to re-review/re-audit the package in light of a new maintainer.

Does npm currently control versioning to any extent? A quick Google search seemed to show that they only make versioning recommendations with no real rules. If you are operating from the premise that the maintainer is potentially compromised, why would you trust them to stick to the semantic versioning spec?

Admittedly, npm can't control the "semantic" in semver as that will likely always be a human judgment call, but npm has a lot of general control over package versioning. Their semver recommendation [1] is pretty strong in their docs and embedded in the default version-range specifiers: ^ (the current default) allows changes within the same semver major, and ~ (a previous default) allows changes within the same major.minor.

Other small ways that npm controls versioning is that it does not allow you to publish the same version number for a package [2], and `npm version` [3] is often the most common tool for incrementing version numbers, which itself provides a number of semver-focused shortcuts.

So yes, it's certainly a common expectation in npm that packages follow semver, largely due directly to npm's documentation and tools support.
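Ignoring npm's special-casing of 0.x majors, the two shorthands above can be sketched as follows (a simplified illustration, not the real node-semver library):

```javascript
function parse(v) { return v.split('.').map(Number); }

// satisfies(version, range) for npm's two common shorthands:
//   ^1.2.3  ->  >=1.2.3 <2.0.0  (same major)
//   ~1.2.3  ->  >=1.2.3 <1.3.0  (same major.minor)
// Anything else is treated as an exact pin. Real npm semver also
// special-cases 0.x majors, which this sketch ignores.
function satisfies(version, range) {
  const op = range[0];
  if (op !== '^' && op !== '~') return version === range;
  const [bMaj, bMin, bPat] = parse(range.slice(1));
  const [maj, min, pat] = parse(version);
  const atLeast =
    maj > bMaj ||
    (maj === bMaj && (min > bMin || (min === bMin && pat >= bPat)));
  if (op === '^') return atLeast && maj === bMaj;
  return atLeast && maj === bMaj && min === bMin; // '~'
}

console.log(satisfies('1.3.0', '^1.2.3')); // true  (same major)
console.log(satisfies('1.3.0', '~1.2.3')); // false (minor moved)
console.log(satisfies('2.0.0', '^1.2.3')); // false (major moved)
```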

> If you are operating from the premise that the maintainer is potentially compromised, why would you trust them to stick to the semantic versioning spec?

The suggestion was that in the current case the maintainer changed with no warning. One warning system that npm provides is semver major breaking changes. npm does have enough version control (for instance, the part not allowing previously submitted version numbers to be reused) that they could theoretically force all new versions submitted after a maintainer change to jump a semver major version. That would at least send a signal to the large number of developers that don't pay attention to the changelogs of minor and patch versions (and may naturally have a ^ or ~ scope in their package.json) to at least check the changelog for breaking changes. That's possibly the easiest "sufficient" fix for this problem of an otherwise unannounced maintainer change.

[1] https://docs.npmjs.com/about-semantic-versioning

[2] https://docs.npmjs.com/cli/publish.html

[3] https://docs.npmjs.com/cli/version
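The registry-side rule floated above (force a semver-major bump whenever the maintainer set changes) is easy to sketch. This is a hypothetical helper with made-up maintainer names, not an actual npm feature:

```javascript
// Hypothetical registry-side rule: if the maintainer set changed
// between releases, require the new release to bump the semver major
// version so consumers get a loud signal to re-audit.
function maintainersChanged(prev, next) {
  if (prev.length !== next.length) return true;
  const prevSet = new Set(prev);
  return next.some((m) => !prevSet.has(m));
}

function allowPublish(prevVersion, nextVersion, prevMaintainers, nextMaintainers) {
  if (!maintainersChanged(prevMaintainers, nextMaintainers)) return true;
  const prevMajor = Number(prevVersion.split('.')[0]);
  const nextMajor = Number(nextVersion.split('.')[0]);
  return nextMajor > prevMajor;
}

console.log(allowPublish('3.3.5', '3.3.6', ['alice'], ['mallory'])); // false
console.log(allowPublish('3.3.5', '4.0.0', ['alice'], ['mallory'])); // true
```

With a rule like this, users on a `^` or `~` range would never silently pick up a new maintainer's first release.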

You publish the packages from a directory. While npm supports VCSs, it's not a requirement.

Interesting, thanks for the explanation. I was not familiar with the npm ecosystem. If the ecosystem has a cultural norm of "blindly" updating dependencies regularly, that could be the root problem. If you're going to live on the bleeding edge, rolling the dice over and over, you're going to have a bad roll once in a while.

Your world seems more sane, and I (also not in node, JavaScript, npm etc.) generally follow that too. Updating third party dependencies should be a rare thing, something you do only when you critically need to. If I were working on any sort of serious or commercial project, I'd expect to do my due diligence when considering updating a dependency, including examining the dependency's downstream dependencies. Do I really require the additional functionality or fixes from the new version? How mature/tested is the new version? Have there been any changes in the API? What do the release notes say? Are the trade-offs of updating worth it?

Just saying "YOLO, update my dependencies and go!" would give me severe anxiety.

You misunderstood me.

I don't work much with npm. I work mostly with ruby, using bundler for dependency management.

And I update my dependencies to new patch releases _all the time_ without reviewing the releases individually. I think most other Ruby devs do too. My understanding of the intention of "semantic versioning" is to _allow_ you to do that.

My projects have dozens if not hundreds of dependencies -- don't forget indirect dependencies. (I think npm-based projects usually have hundreds+.) How could I possibly review every new patch release of any of these dependencies every time they come out? I'd never get any software written. If I never updated my dependencies anyway, I'd be subjecting my users to bugs that had been fixed in patch releases, and wasting my time debugging things that had already been fixed.

This is how contemporary software development using open source dependencies works, for many many devs.

Not just that, but the longer you wait the more you diverge from mainline. The more you diverge, the more painful re-integration will be.

By updating all the time you can address bugs and changes as simple fixes here and there. Wait too long and you run the risk of having a really painful merge/update cycle, most likely happening under duress because now you have to update because of some bug fix or security issue.

The more dependencies you have, the more critical it is to stay up-to-date, and thus this directly leads to all the craziness with NPM based projects whenever something goes wrong.

Security vulnerabilities due to not updating are more likely than those from a library being intentionally hostile.

And therefore you are ultimately responsible if any of those dependencies leaks your users data or compromises them. Just because that's how you choose to work doesn't absolve you of those responsibilities.

You can, but this package is very low-level so chances are that it's not a direct dependency for many end-users. A lot of popular modules that may be using this module also don't pin dependencies or don't use lock files, so that's one problem. A second problem is that when an npm package is transferred to someone else and then a new version is published, there is nothing notifying you that the package is under new ownership. A third problem is that sometimes users might delete their lock files and re-install dependencies when they encounter some versioning issue and are looking for a quick fix, without realizing the implications of doing that.

You can, but it's opt-in. At least once with every new project I go "gah, dammit!" because I forget to check in a lockfile, but I sleep better at night.

I do not know if NPM lockfiles are transitive, though. Guess I'd better go looking...

As far as I know, you can pin your versions (something I always recommend), but those packages can always pull in other dependencies with version bumps. And NPM in particular encourages a massive plethora of micropackages, so the dependency tree gets extremely large.
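To illustrate the distinction (the `dependencies` object below is hypothetical, not from any real package.json, and `isExactPin` is my own helper, not an npm API): an exact pin only constrains your direct dependency; anything that package itself declares with a range can still float unless a lockfile freezes the entire tree.

```javascript
// Hypothetical direct dependencies: one exact pin, one caret range.
// Even the pinned package's own dependencies can still float unless a
// lockfile (package-lock.json / yarn.lock) freezes the full tree.
const dependencies = {
  "event-stream": "3.3.4", // exact pin: only 3.3.4 installs
  "some-lib": "^2.1.0",    // caret range: any 2.x >= 2.1.0 may install
};

// A spec is an exact pin only if it is a bare x.y.z version string.
const isExactPin = (spec) => /^\d+\.\d+\.\d+$/.test(spec);

for (const [name, spec] of Object.entries(dependencies)) {
  console.log(`${name}: ${isExactPin(spec) ? "pinned" : "floats within range"}`);
}
```

This is why pinning your own package.json is necessary but not sufficient: the transitive tree is where the "massive plethora of micropackages" lives.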

Do they? I'm not 100 percent sure about the npm package lock (because I don't typically use it), but I'm fairly sure that yarn's lockfile will lock the full dependency tree, including the dependencies of your dependencies. I believe npm's package lock will as well, but I'd have to double check that.

npm's package lock will lock the whole dependency tree, but there are easily overlooked differences between `npm install` and `npm ci` in how they respect locked dependencies for things like semver-patch level changes.

`npm ci` is so new a lot of projects aren't using it and are still using `npm install` everywhere. (To be fair, evaluating if most of my projects should switch to `npm ci` in more places is still on my own personal devops backlog. This thread has reminded me to bump priority on that.)

that assumes you know that the maintainer changed

A reasonable position -- and obviously better in this case -- but in general I'm not sure most people agree. The overwhelming majority of the time maintainers are not malicious.

the overwhelming majority of people are not thieves either, but we still lock our cars when we leave them in the parking lot

Sure, but we don't typically walk around the car and check that no one has slashed the back tires.

> I'd much rather have my dependencies die than start silently pulling in a fork.

Maybe, but Node had that issue too, no? Remember leftpad?

So, lots of people in the Node ecosystem don't agree with you.

No, that was a different issue. Left-pad was removed from npm, not just left abandoned and unmaintained. The people using it were happy to still use the (unmaintained) version of it.

well then lots of people in the Node ecosystem are wrong. If many people stopped hashing passwords, would you say that's a good thing to follow?

then we might want to question why the community sees silently pulling in updates as "best practice". Is it? Seems like a double-edged sword.

> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

No. If it's someone involved in the project for years whom you feel you can trust, then sure, go ahead, but if someone comes along with "sup, commit rights plz" then something's just wrong with you if you agree. If that person were serious about it, they could still fork it and maybe msg users of the old version about it.

Sure, those people could still mess up by just switching over without checking the new fork, or without good reasons to switch like bug fixes or new features, but at least you as the author of the original lib did your part to prevent any mess-ups.

Open-source projects greatly depend on reputation.

E.g. I can trust the people behind git: some of them may be poor UX designers, but they won't let out a version that eats my files, or is backwards-incompatible. If a completely different team forked git, I would be quite wary to use the fork outright for my daily work.

When all long-term maintainers stop maintaining a project, the last person leaving should turn off the light. The project should be officially stopped, because no one remains behind it.

Whoever wants to pick up the development should fork, and use a different name. This way, the forked project will accumulate its own reputation: initially propped up, of course, in major ways by the idea of "this is a fork of a well-known project".

At the very least, it would be a heads-up for everyone who depends on that project that a major change in it is happening.

Definitely. One of his coworkers has a great thread about this, including how he has hundreds of libraries in use:


Many, many people treat open-source projects like consumer products. Except the paying-money part, of course. That they're happy to leave out.

This is a systemic problem, and blaming one guy won't solve anything. Especially since so many are blaming him for not doing enough free work to help them with their paid jobs. If open-source libraries are truly valuable, we need to find a way to jointly pay maintainers for their labor. I can think of many options (a non-profit build with hefty donations from tech companies, governments giving out tax-funded grants, places like NPM and GitHub making crowdfunding 10x easier). But the ideas aren't the problem. It's getting one or more of them done.

> Many, many people treat open-source projects like consumer products.

To me, this line of argument completely misses the point.

It's entirely immaterial how a FLOSS project is treated. This problem is essentially an identity-hijacking problem. The community trusted the old maintainer, but then he screwed up by enabling an attacker to essentially take over his identity and thus create and exploit a major vulnerability on his behalf.

It is not immaterial. Projects are run by human beings, and economics does not cease to exist once a computer is involved. People's expectations were out of line for what they were paying.

I also think the words "community" and "trust" are being badly abused here. A community is a relationship of mutuality. Most of the people affected here never did a thing. They were leeches, not participants. A healthy community would have looked at the guy laboring overtime for free and said, "Hey, buddy, let's split that load up." And most of the people who benefited from his work didn't even know his name. They didn't trust him; they trusted the system.

He owed those people nothing. He gave them more than that: a good-faith effort to find a new maintainer.

This, and then make sure the rewards trickle down to everyone involved: https://github.com/ktorn/vdp

The author's stance is that because it was a volunteer effort, he bears no responsibility for transferring ownership to an unknown third party who wants to commandeer the code used by millions of people. I believe this is false.

He also feels that there's nothing anyone can do about it except scramble to control the damage. I believe this is currently true.

You may not like it but from a license standpoint, I think you may be incorrect

As quoted elsewhere in this thread


End-users are harmed and are not licensees, so even if the license disclaimers are effective (boilerplate disclaimers are often broader than the law of some jurisdictions will give effect to), the claims they would have for negligence would not be covered (and would not be transferred to downstream maintainers, who likely have concurrent liability, absent an indemnity clause as well as a disclaimer in the license.)

So, while I don't think the upthread claim was about the maintainer having legal responsibility, I don't think it would be entirely wrong, even with the license text, if it did.

The people with legal liability to the end-users, I expect, are the developers that made use of this library without properly vetting all changes to it.

Liability is very often not exclusive in law, and, in particular, tends to flow the whole way up supply chains. With nothing being sold you probably aren't dealing with strict product liability upstream, but exposure of end users is reasonably foreseeable so there's no immediately obvious blanket reason for ruling out upstream negligence liability to end users.

The idea that the party most proximate to end users is exposed to potential liability is true, but the idea that this must mean no one else in the chain is exposed is not.

You honestly think that the author of software released as open source is going to be liable for vulnerabilities in that software ... really?

If that were the case, you'd pretty much wipe out the software industry as it stands today :)

I'd be very interested in case law where you can see the users of a service (which might not even disclose what software they use) are able to sue the author of a package used as part of that service.

I'm pretty sure they'll be legally liable for intentionally inserted malicious code, no matter what the license text says.

I'm not talking about the original maintainer (who didn't introduce malicious code intentionally), but the person he turned it over to (who seems to have).

Legal liability and ethical responsibility are not always the same thing, although it's generally only the first that matters in court.

oh sure the criminal who put the backdoor in place, no-one's arguing his/her liability.

But the point that I was referring to is any suggestion that the repo. owner who handed it over could bear any liability for doing so, I'd suggest that's not probable/practicable.

I don't see how the text of the MIT license can be construed to indemnify a negligent developer but not a malicious one.

I am not a lawyer, but my understanding is that the text of the MIT license is a potential defense against a suit; it does not prevent one. As in, I try to sue you; you say, "But did you read the license?" I do, talk to my lawyer, and still decide to sue you. You tell the judge, "But look at the license!" And then it is up to the judge to decide whether it matters.

Therefore the issue isn't the license, it is the rules of the law in question under which the author is being sued.

Therefore your intent can matter. Whether a valid contract exists matters. Whether I can be expected to have read it matters. THAT THE INDEMNITY IS WRITTEN IN ALL CAPS MATTERS. (I'm not making that up - see https://law.stackexchange.com/questions/18207/in-contracts-w... to see that it does matter.)

The result? The indemnity in the contract can say whatever it wants and still only provides partial protection. The real rules are complicated and elsewhere in the legal system.

Well, let me put it to you this way. If a malware author installs a piece of software on your machine and it steals bitcoins from your machine, do you think they'd be able to argue to a judge that it was all OK 'cause of the software license...

Put it this way, I would not suggest relying on that defence in court.

Oh, no, sir. I didn't insert the backdoor. I gave the keys to this anonymous person on the Internet, and he inserted the backdoor.

That clearly absolves me of any responsibility, does it not?

> You honestly think that the author of software released as open source is going to be liable for vulnerabilities in that software

Where the type of harm that results is reasonably foreseeable and could have been prevented by reasonable care by the developer (or maintainer; different though often co-occurring roles), I don't see how the general law of negligence doesn't fit. AFAIK, negligence has no open source software escape hatch.

Do you think anyone would publish open source software if it was possible that they might be held liable by people who used services or software which included that code at any future date when they had no say in how their code was used??

Really you think that's realistic, given the astonishingly heavy presence of open source software?

IANAL but I see two issues here. First, you still have to show that he had the duty to act, which is quite problematic given that there was no relationship between the parties beyond an open source license which expressly disclaims any liability. There's no relationship between the end users and the library maintainer and for any specific instance of the harm, it's difficult to argue that the end user, whose connection to the library is merely that whoever wrote the software happened to use the library, is owed some duty by the library maintainer. Likewise, the idea that the library maintainer should have foreseen this harm, given that the library maintainer likely has no idea how the library is being used, seems far-fetched.

Second, since software engineering is not a licensed profession, for any related conduct to be seen as negligent, it has to be something that a reasonable person should be able to avoid and foresee that could cause specific harm. Even a relatively gross act of incompetence by any reasonable engineering standards likely does not meet this bar, given that there's no license required for someone to be in this situation and that it takes a lot of expertise to understand how specific bad practices could cause harm.

There's a difference between legal liability and moral liability.

I hope everyone talking about moral responsibility is donating to all the open source projects they depend on. If not, I find the moral preaching a bit one-sided.

You and I both know they aren't. It astounds me how many popular OSS projects are terribly funded despite how many people use them.

> There's a difference between legal liability and moral liability

Making business decisions on the hope that someone else's moral codes will perfectly align with your own is unscalable. That's why we have written laws, codes and contracts.

When that happens, pay people.

Seriously. Having a moral discussion in tandem with scale seems poorly aligned with the interests, needs, and risks of pretty much everyone involved.

I don't even see any moral issues here. Is there any reason to believe the original author acted in bad faith? If you sell your used car and it gets used to rob a bank, did you act immorally?

If the car is sold, but still uses the same number plate, and is still attached to the name of the original owner, there is a problem.

Selling a car = shutting down the maintenance of current project, pointing to a fork done by someone.

What happened is just handing the car's keys to someone, without much notice.

When you sell your car there is generally a title transfer. A process which lets everyone know that the car is no longer yours.

I think the largest gripe here is that the original maintainer let the new, unknown maintainer commit to his repo and publish under the already established package name instead of making him fork it and publish as a new package.

There is nothing wrong with publishing a rewritten package with the same name under the full supervision of the original developer. Transferring control generally implies full trust, and Dominic hadn't established any trust with the new developer. He didn't even ask them for their real name!

If someone's determined, they can always create a legit-looking GitHub account, submit a few PRs (they did, in this case), gain trust, and _then_ deliver the malicious code. It just takes time.

But this trust part seems to work pretty well. You need to be trusted to become a Debian package maintainer, and I volunteer as a Drupal code review admin where we require all contributors to have a real name, and there is a back-and-forth discussion for a few days until we mark the user as vetted.

If you run a business where you have convinced people to give you access to their house to do some chore and you sell your business and your copy of their keys to a criminal it could be morally problematic.

A car is merely a fungible vehicle; the customer would have been no better or worse off had the robber been driving a different car.

This would be an apt analogy for just giving / selling a code base.

Had it been distributed under a new account/name users could have decided to trust or not trust a new maintainer.

The dev allowed a new person to trade under his name and rep; worse, he allowed that person to delegate further to unknown others.

He is morally liable and ought to have known better.

He was doing this for free and releasing it under an open source license.

In your analogy the business would be performing the chores for free and telling the users that the business is not responsible for any damage related to the access granted by the key. I don't think most people would sign up for that without a business relationship.

> Had it been distributed under a new account/name users could have decided to trust or not trust a new maintainer.

This is the myth we keep telling each other but I don't seriously believe this is how open source works in reality.

> sell your business and your copy of their keys to a criminal

This implies the seller _knows_ the one they are handing over the keys to is indeed a criminal. In that case it is certainly morally problematic.

I see your point but I still think calling the maintainer's behaviour immoral is going a bit too far. Perhaps careless. Or maybe naive. But not more than that.

> He is morally liable

Google says "Definition of Liable: responsible by law; legally answerable".

If you claim he's not legally responsible but is "morally liable", where "liable" itself means "legally responsible", what in your world does the term "morally liable" mean, specifically? What does it mean you can do to him, or what does it mean you should do in future in response to this?

Google isn't the ultimate source of truth for the meaning of words. When someone says that another party is morally liable, they mean morally responsible: that he ought to feel responsible, act accordingly, and consider others in his actions in the future, lest he feel he has morally failed people again. Ultimately we are, and are expected to be, our own harshest critics, and ought not to limit our duty to others to the minimum the law requires.

The meaning of words isn't what I want to focus on, but the irrelevance of any response to this event along the lines of "well he SHOULD feel bad".

If "he is morally responsible" leads only to "he should feel bad" and nothing more, then what does it matter if he is/isn't morally responsible?

Ok he (does/doesn't) feel (justifiably/unjustifiably) bad .. now what?


> If you sell your used car and it gets used to rob a bank, did you act immorally?

If he transfers it knowing other people are going to use it, and doesn't tell them, and then the new owner cuts the brakes, that's a problem. It's not just that it was sold, but that people continued to use it and weren't told. That's a different situation.

There's a moral liability to do your due diligence when using upstream software, too. Otherwise it's just whining about "the untrustworthy system I built upon is untrustworthy".

yep, and that's why I said "from a license standpoint". Software licenses aren't moral constructs, they are legal constructs.

Putting it on paper does not make it so. A single lawsuit and you could easily be out thousands even if you win.

The legal outcome could vary widely state to state and nation to nation.

If putting a blurb in a text file makes you feel safe from being sued for negligence, you haven't considered all possible venues.

Here is an article about liability waivers


So you reckon that people who publish open source software should be liable for flaws/vulnerabilities in their code?

It's a bold assertion, I'd be interested to see if you can provide any case law where it's been tested.

It depends. Laziness can actually be used against you, even in European countries.

But on this topic specifically, I've never heard of Open source software authors being held liable in any fashion for software they've released in any country.

I'd be interested to hear if such precedent existed.

This license is invalid in quite a few jurisdictions around the world.

> I believe this is false.

It's literally not, though. That's what the license says. It expressly disclaims any such thing.

If you want somebody to incur responsibility, you have to get them to take it on. You can't just demand it of them.

Fortunately, there is a good way to do exactly this. Did you bring your wallet?

If I thought I might be on the hook for damages caused by a mistake in how I run an open source project, I would never open source anything.

Yeah, no kidding.

That's why the idea of commercial support exists. You need to depend on it? Pay for it.

The other person set out to subvert node modules, the author was just a target interchangeable with 100,000s of other module maintainers. There are already documented cases of very popular node modules having their passwords compromised so this is probably a form of attack we will see grow significantly more prevalent since these modules can see database credentials, encryption keys etc.

This is also very similar to bad entities obtaining or acquiring browser extensions to discreetly poison with spyware and advertising, which has happened many times.

What was he compensated with in exchange for taking on responsibility? Without compensation the contract by which he takes on responsibility is not valid.

Node could implement something like an "ethical transfer of responsibility for packaging" clause for their code of conduct. I'd like to see that.

I'd also like to see something like StavrosK mentioned in his comment[1] about https://codeshelter.co made a part of this. When a maintainer gets an email for a long-dormant project of theirs, the maintainer needs options. One of those should probably be to yield the package back to the community. A "code shelter" is one way of doing that.

Then the question of, "How do we vet maintainers at scale?" comes up. All I really know is it'd take a financial & human capital investment in Node community infrastructure to make it happen.

I think it's in Node's best interest to do so. These highly prolific maintainers like dominictarr are prime targets for black hats. Overworked, underpaid, huge product portfolio they manage. Who among us wouldn't be grateful for the interest & help?

So Node should invest in fixing this.

1 https://news.ycombinator.com/item?id=18534741

> If the original author has no use for the project anymore and someone offers to take it over from them, why should the author be expected to refuse?

People who use your project place their trust in you. If you pass that trust on to another developer without your users' involvement, then that developer's actions reflect on your reputation. Why? Because you've not given me the option of only trusting you. Your policy made it a package deal.

It's kinda like how I can't tell Bob about anyone's Christmas gift. He sometimes tells Fred the Loudmouth, who tells everyone and ruins the surprise. Bob never ruins the surprise directly, and he always asks Fred to keep it a secret, but it doesn't matter. I still can't trust Bob, even if Bob's only mistake is that he trusts in the wrong people.

This thread is an amusing rediscovery of the auditing process major companies use before they choose to include open source in their projects. How is it maintained? Who maintains it? What's the risk if it goes rogue? How are updates reviewed? If you think Google simply ingests any random update to their Node dependencies you're crazy.

> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

I don't disagree with your overall premise: if you aren't actively maintaining a project for what-ever reason (you don't currently have the interest in it compared to other project or can't justify the time that you could spend elsewhere instead) and someone else does have the time and the interest, the project should be allowed to live on through new maintainers.

But: just handing over admin access of a project that many rely on to an unknown entity is not a safe move, as this case proves. I understand the point of not wanting a permissions-based community and so forth, but (and call me cynical if you will) that is rather naive. The world is just too full of arseholes for that sort of idealism to be at all safe in practice.

Instead, let the new person/people create a fork and update your documentation to state that you are no longer actively maintaining the project and people should consider moving to the new one instead. This way no one unknowingly ends up using the new fork. Of course people might blindly switch over without verifying the new maintainers, which puts those people at the same risk, but at least they take deliberate action to move over rather than not knowing at all that the change has happened, and people who are more sensibly cautious will hopefully monitor the changes more carefully than they would under the previous stewardship, so this sort of backdoor is less likely to go unnoticed.

I don’t think any blame is due to the author at all.

Volunteer project ownership is voluntary, and transferring to another volunteer is the only choice other than abandonment for a lot of volunteers. Package repositories don’t support monetization and there’s no pool of volunteers associated with the repository itself to take up maintenance of what otherwise would have been abandoned.

Would we have preferred if the original creator had simply deleted the repository altogether? The last time someone did that on NPM, it generated deafening howls of rage — but here we are today in the non-abandonment scenario, listening to renewed howls of outrage.

> Package repositories don’t support monetization and there’s no pool of volunteers associated with the repository itself to take up maintenance of what otherwise would have been abandoned.

Your statement contradicts itself: there is a pool of volunteers, namely the people maintaining NPM packages to begin with. In fact, the author of event-stream gave control to another person because he believed the guy was an ordinary volunteer like himself. Unfortunately, the reality is a bit more complicated: people will voluntarily help you maintain your projects, but only if they share your goals. Ideally, one should ensure that the would-be maintainer has invested a lot of effort in the project they are trying to take over (reported issues, fixed bugs, added features, etc.) over a considerable amount of time; in other words, that they can contribute to the project in a significant way and move it forward. Otherwise, what is the point of transferring maintenance?

We’re colliding on English imprecisions. I distinguish growth (all unnecessary work) from maintenance (only necessary work: fix serious bugs, no new features, minimize rewrites). My use is the latter, not the former.

There should be well known best practices to signal that a repository is not maintained. Example: archiving the repo (it becomes read only). Then somebody forks it and updates the NPM registry with the new repo.

If you no longer want to maintain a package, and you do not have a trusted source to hand it off to, the right thing to do is let it wither.

If the package remains relevant, someone will eventually fork it. The burden of trust is no longer on your shoulders.

The problem is that everyone is trying to push everything to one side. That's just not stable. Yes, the maintainer SHOULD (but doesn't have to) make a hand-off explicit at a minimum. Yes, people should take responsibility for ensuring their packages are (and remain) legit. But both of these can break down.

It's like litter. It is not realistic to have anyone clean up everyone's trash. It's also not realistic to expect that things remain clean if everyone only picks up their own trash. Everyone needs to clean up their own trash and a little bit more, to compensate for the burps in the system.

> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

No, not if the project becomes malicious. I'd rather it died and I switched to an alternative I can trust.

Maybe a compromise would be some sort of obvious notification (via the website and also via the npm cmdline software) if a maintainer changed.
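A minimal sketch of what such a notification could be built on (the function and snapshots below are hypothetical, not an existing npm feature; the snapshots could, for example, be fed by recording `npm view <pkg> maintainers` at audit time):

```javascript
// Sketch: flag packages whose maintainer list changed between two
// recorded snapshots. Hypothetical helper -- npm itself emits no
// such warning today, which is the gap this comment is pointing at.
function maintainerChanges(before, after) {
  const changes = [];
  for (const pkg of Object.keys(before)) {
    const old = new Set(before[pkg]);
    const now = new Set(after[pkg] || []);
    const added = [...now].filter((m) => !old.has(m));
    const removed = [...old].filter((m) => !now.has(m));
    if (added.length || removed.length) {
      changes.push({ pkg, added, removed });
    }
  }
  return changes;
}

// In the event-stream case, a diff like this would have surfaced
// the ownership transfer to the new maintainer's account.
const snapshotA = { "event-stream": ["dominictarr"] };
const snapshotB = { "event-stream": ["right9ctrl"] };
console.log(maintainerChanges(snapshotA, snapshotB));
```

Running the diff in CI before every dependency update would at least turn a silent ownership change into an explicit, reviewable event.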

> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

No. What does "a project dying" even mean? The code still runs, and will for the foreseeable future. If it were half complete, that would be bad, but this was clearly a complete solution. Let it remain complete. This was actually a quick way on Dominic's part to kill the project.

The NPM repository is not "the open source community". NPM is controlled by a commercial organization ("npm, Inc."), which is fully capable of establishing rules preventing package authors from selling to black hats (or even gifting to black hats for free). The author of the package could have formally given his GitHub repository to the new maintainer without transferring package control. He didn't. Why? Presumably because there is nothing in NPM's ruleset preventing him from doing so. It does not matter whether he was bribed or not: there is an obvious glaring hole in the notion that widely-used digital assets may be covertly "gifted" to third parties.

This isn't the first time this has happened: the story of Google Chrome extensions being sold to hackers should have taught NPM, Composer, etc. a lesson. Maybe someone should finally sue them to drive the point home?

It's better if the new maintainer's intentions are altruistic. From @dominictarr, the maintainer:

> he emailed me and said he wanted to maintain the module, so I gave it to him. I don't get any thing from maintaining this module, and I don't even use it anymore, and havn't for years.

That's just plain irresponsible. He must have known how popular the library was. Taking ownership of browser extensions and common software libraries is becoming a very popular attack vector. Owners of these need to be a little more diligent when transferring ownership.

Perhaps there should be some sort of escrow jail for new maintainers where all changes are reviewed. Certainly better vetting needs to take place.

This is entitlement speaking, and it's clearly a solution that doesn't scale. Downstream must be responsible for only depending on software from reputable sources, there simply is no alternative.

I hate to have to do this, but the requirement runs right to the core of how this development model functions at all:

I'd suggest one problem here is a user interface / packaging model issue: end users / downstreams may override the version choices made by their upstream dependencies. In this case, the reputability of that upstream varied over time. Permitting version locking allows a chain of reputability to exist, allowing a limited amount of trustworthiness to be imparted by upstream's selection of dependencies ("I trust this guy's package, so I trust all their dependencies too").

The real problem is that so far, open source has been mostly more responsible than closed source. Mostly. So many companies and people just go 'eh' and accept the defaults rather than really digging in. Which, in a real way, means they get what they deserve, but on the other hand, what is the alternative?

I mean, personally, the alternative I like is a sort of hybrid model, where you pay someone else to do the vetting, like RHEL, but nobody I support is willing to stay within the RHEL package world, or to otherwise pay to really vet the packages they use, so it's defaults all the way down, and it usually works just fine! until it doesn't.

It always takes time for parasites to evolve in a new system.

> Downstream must be responsible for only depending on software from reputable sources, there simply is no alternative.

The alternative is to use distributions that do vetting and staging.

No, there is no alternative. You are responsible for what you ship. Even if you pay for the software, you should vet it as best you can as you're responsible.

How are you going to have the time and skill to vet Oracle anything?

"... as best you can..."

Monitor the network, make sure you know where the data is going. Things like that.

It is irresponsible if you believe--and I don't say this pejoratively or to caricature the argument--that somebody takes upon themselves a responsibility when they open-source software. I used to think this was the case, but less and less often I find this to be true. In a practical sense it becomes a free-rider problem and people with money aren't stepping up.

In the absence of accepted responsibility from (monied) consumers, the way responsibility is ultimately taken on has to be either to affirmatively take it on (as public service to the community) or to pay for it. Which is a sticky problem because it's hard to pay for it and it's invariably not presented effectively in terms of building a business case. Existing solutions, not to put too fine a point on it, suck at building that case. OpenCollective, the remnants of Gittip/Gratipay, etc. - incentives don't align to put money where it needs to go, and J. Random Consumer often suffers the most for it.

I have a pretty strong idea about how to solve it for-reals. At the moment, though, it's a time/money problem for me; I'm not in a place to chase the kind of (relatively small) funding that the problem probably requires. If anyone else is interested in this problem space, please feel free to email me directly; I'd be happy to chat with you and I'm very interested in seeing this done well.

So if I understand you correctly, he was supposed to have run some sort of background check with all the state agencies to figure out whether he was a bad actor or someone who really relied on the module and wanted to maintain it. You can’t protect against all possible outcomes from a situation like this. Seems like we’re just looking for people to blame whenever there’s an issue these days.

He should just not allow any party not known and trusted to distribute under the original product name.

Let it be known that it's discontinued and let the new maintainer trade on his own reputation.

Free lunch tasted bad? And let's not blame the free lunch people who are always on the hunt for people and ingredients needed to put together free lunches.

Paying for a better, more consistent lunch makes sense at some point, yes?

People are motivated to work, ingredients are vetted, better prepared, etc...

Not blaming anyone, just pointing out an obvious problem by analogy.

> That's just plain irresponsible. He must have known how popular the library was. Taking ownership of browser extensions and common software libraries is becoming a very popular attack vector. Owners of these need to be a little more diligent when transferring ownership.

I'm sure he would do it for $150k/year. He won't do it for $150k/year? Increase the price until he says "Yeah, sure I will maintain it". Or maybe he will find someone else who would maintain it for that money.

How about just not transferring ownership of the original name? Let foo become foo-new, or go from john/foo to bob/foo, whichever is appropriate, thus ensuring that only those who affirmatively added bob/foo or foo-new are ever affected.

Vetting new maintainers sounds like hard work that you might not feel like doing for free? No problem: just don't do it, and don't give them the damn name.

That's fundamentally not how identification on the Internet works. You are whoever you say you are. It's absurd to tie identity to a physical person.

You are incorrect: you are who you can prove you are.

I can prove any of a number of identities scattered around the internet, most of which are in fact under my real name. Pretending that this is trustless is just not realistic.

For example: I know from a wide variety of sources that certain projects are trustworthy, even if I can only verify a pseudonym; if that pseudonym itself is trustworthy, I can infer that the author's other projects are trustworthy too.

Accounts, emails, and domains are all useful tools, even if not perfect.

People don't normally put years into developing trust in order to distribute malware. It's normally a low effort affair.

Not giving maintainership to random people who send you an email or selling projects to skeevy companies seems like a good way to avoid 80% of issues kind of like washing your hands can prevent a lot of colds.

> Perhaps there should be some sort of escrow jail for new maintainers where all changes are reviewed

That wouldn't really solve the problem. Attackers would just have to wait a bit longer before they push malicious code.

Maybe it should be easier to give monetary rewards so that popular module maintainers get more motivation to care.

Claims here are that dominictarr maintains "hundreds of packages" even though that's too much work for one person.

If maintaining modules earned money, that would be much more incentive to "maintain" thousands of random things you never look at, and to hand over control while keeping it in your name (which he's also being blamed for).

Why refuse? Because you have no idea who the person is, they don't have a history, and it is possible they want to take over the project to insert exploits into it. An ignored project (un)maintained by an ethical person is better than a project just handed off to whomever. At the very least a slight background check should have been done to see if the user has a history of contributing and maintaining open source projects.

Allowing the first person who expresses interest to "adopt" your project seems to me to have all the same potential bad outcomes as allowing the first person who expresses interest to adopt a child from an orphanage.

For children, that's much of the reason for Child Protective Services to exist: to regulate orphanages and adoption agencies such that they will thoroughly vet prospective adoptive parents; and to establish and regulate a fostering system—pool of known-good temporary guardians (foster parents) that can take care of children temporarily when no vetted-as-good outsider can be found to take them more permanently.

Now imagine a software org that acts on abandonware software projects the way a CPS-regulated orphanage acts on abandoned children. Start by picturing a regular software foundation like the ASF, but then 1. since all the software is currently abandonware, the foundation itself steps in and installs temporary maintainers; and 2. since the foundation's presence in the project is temporary, they seek to find (and vet!) a new long-term maintainership to replace the existing one.

Of course, that level of effort is only necessary if you care about project continuity. If it's fine for the project to die and then be reborn later under new maintainership, you can just let it die.

>If the original author has no use for the project anymore and someone offers to take it over from them, why should the author be expected to refuse? Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

Depends on the project, I guess. Open source can always be reanimated.

This takes "charitable interpretation" to an extreme.

No problem with someone forking it based on the license... who needs original authors?

There is a very strong sense of entitlement in that thread.

I had this exact same problem from both sides (not working on a project any more and wanting to find someone to maintain it/wanting to maintain a project someone wasn't working on because I found it interesting). It's not always easy to find people who are interested, and, while giving maintainer access to someone you know very little is usually fine and works out great, sometimes you get results like these.

In the end, I built something to "solve" this, a project called Code Shelter[1]. It's a community of vetted maintainers that want to help maintain abandoned FOSS projects, so if you want your project to live on but don't want to do it yourself, you just add it there and you can be reasonably sure that nothing like this will happen to it.

Of course, you have to get a large enough pool of trusted maintainers that someone will be interested in each project, but it's still better than blindly adding people. I can't fault the maintainer of that project, since trusting people is usually fine, but it's too bad this happened to him. The commenters are acting a bit entitled too, but I guess it's the tradeoff of activity vs security.

[1] https://www.codeshelter.co/

Why would the average joe trust something like this? Your FAQ says each maintainer is vetted and handpicked, but nothing about criteria or how they're picked.

Do you mind explaining this vetting process a little more? How can we be sure that something like this flatmap thing doesn't happen on codeshelter?

Sure! They're either people I know personally and trust (and hopefully people will trust me to do this transitively) or they are people who are already authors of popular libraries and have shown they are experienced in maintaining OSS projects and trustworthy (since they're already pushing code to people's machines).

Trust is definitely an issue here, and trust is something you build, so I hope we'll be able to build enough trust to let people not think twice about adding their projects.

To quote Linus: if you don't do security with a network of people you trust, you're doing it wrong.

There's a similar effort in the Python / Django world called Jazzband (https://jazzband.co/). This model will probably become more and more necessary as maintainers need to move on from projects for whatever reason. Having a safe place to transfer a project to with a formal process (announcement of the change, code review before acceptance, etc.) would certainly help combat this issue.

Yes, I was inspired by Jazzband, but Jazzband has two things that led me to develop Code Shelter: It's pretty specific to Django, whereas I wanted something general, and people have to move their projects to the Jazzband org, which many people don't like doing (because they understandably want to keep their attribution).

With Code Shelter you don't have to move the project anywhere, you just give repo admin access to the app and the app can add/remove maintainers as required.

There's obviously a corrective component as well, where maintainers who don't do a good job are removed, but this hasn't happened yet so it's not clear how it will be handled.

If you are a maintainer of a project that you want to move on, what's the problem of adding this to README: "This project is abandoned/no longer maintained.", and optionally "Here's a known fork but I haven't vetted the code so if you use the fork you are AT YOUR OWN RISK: <url-to-the-actively-maintained-fork>", and when someone asks you to transfer ownership, you just tell them that they can fork it? Is it because of the "namespace" issue in some package management systems (e.g. NPM) that the forks can't get the nicer name?

It's half the namespace issue (the release package name sometimes needs to be added) and half that maybe you haven't agreed with some fork that you will make it the official one beforehand. Maybe there isn't even a fork like that.

Besides, projects don't usually go from active to completely unmaintained. Adding it to the Code Shelter is a nice way to solve this when you see development slow down, because you basically have nothing to lose.

I really like this idea, thanks for sharing.

Thank you! I really hope it takes off, it's an effort from the community for the community.

NPM is just a mess, and the whole culture and ecosystem that surrounds it. Micropackaging does make a little bit of sense at least in theory for the web, where code size ought to matter, but in practice it's a complete shitshow, and leads to these insane dependency paths and duplication and complete impossibility to actually keep abreast with updates to dependencies, much less seriously vet them.

The insidious threat is that these bad practices will start leaking out of Javascript world and infect other communities. Fortunately most of them are not as broken by default as JS, and have blessed standard libraries that aren't so anemic, so there is less for this type of activity to glom on to.

I don't see how this problem is limited to just NPM. This is a problem with any package manager, or any software distribution system in general, really. Look at all the malware found on the Play Store and the App Store.

Unless you're willing to meticulously scrape through every bit of code you work with you're at risk. Even if you can, what about the OS software? How about the software running on the chipset? This is exactly why no one in their right mind should be advocating for electronic voting. There's simply no way to mitigate this problem completely.

NPM is quite a different software distribution mechanism than a typical app store.

In an app store, apps are self-contained packages with low to no dependency on other apps in the store, meaning that a single compromised or malicious app can only really affect that app's users. The OS may also isolate apps from one another at runtime, further limiting the amount of damage such an app can do (barring any OS bugs).

On the other hand, NPM packages depend on a veritable forest of dependencies, motivated by the "micropackaging" approach. Look at any sizable software package with `npm list`. For example, `npm` itself on my computer has over 700 direct and indirect dependencies, 380 of which are unique. That's bonkers - it means that in order to use npm safely, I have to trust that not a single one of those 380 unique dependencies has been hijacked in some way. When I update npm, I'm trusting that the npm maintainers, or the maintainers of their dependencies, haven't blindly pulled in a compromised "security update". And `npm` is in no way unique here in terms of dependency count.

So this problem is limited to NPM, as far as the potential impact of a single compromised package goes.
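To put numbers like "700 direct and indirect, 380 unique" in mechanical terms, here's a small sketch. The tree shape below is a simplified stand-in, not the real package-lock.json schema: every occurrence in the tree counts toward the total, while packages deduplicate by name@version.

```javascript
// Count total vs. unique transitive dependencies in a nested,
// lock-file-like tree (simplified shape, not the real npm schema).
function countDeps(tree) {
  let total = 0;
  const unique = new Set();
  const walk = (deps) => {
    for (const [name, info] of Object.entries(deps || {})) {
      total += 1;                             // every occurrence counts
      unique.add(`${name}@${info.version}`);  // dedup by name@version
      walk(info.dependencies);                // recurse into nested deps
    }
  };
  walk(tree.dependencies);
  return { total, unique: unique.size };
}
```

For a toy tree in which the same package appears under two different parents, this reports more total occurrences than unique packages; the trust burden scales with the unique count.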

Most Linux distribution package managers also pull in dependencies automatically (though not nearly as many as npm). You have to trust that all of them are vetted correctly.

The difference is that JavaScript lacks so much basic stdlib type functionality that it all has to be replaced with libraries. This dramatically increases your risks, since simple libraries will be created which become near standards, which in turn become dependencies of a huge swath of more complex libraries, meaning that any one of the dozens of crazy-common-but-should-be-in-the-stdlib libraries can be a target for hacking, social or otherwise. Also, it means that any web app no matter how trivial is likely going to itself depend on dozens or hundreds of libraries. Which means that even though the theoretical risks of depending on remotely sourced libraries are the same, the practical risks of establishing trust is exponentially harder for JavaScript than for nearly any other popular language out there.

I think part of the difference at issue is the number of distinct entities (users/organizations) you need to trust due to the acceptance of micropackaging.

With most programming languages, a small-to-medium project might pull dependencies from tens of entities, but with npm, even a small project can easily rely on hundreds or even thousands of entities.

It is easier to weed out unreputable entities when you are depending on fewer entities.

This; on a long enough timescale, any open-source package manager that maintains popularity will have this problem. npm has this problem now because JavaScript leaves a lot to be desired in terms of base convenience functionality (left-pad), and because of JavaScript's massive popularity that won't be going away anytime soon. This whole thing is hugely educational for people designing new languages with integrated package managers. However, while the lessons are pretty easy to grok, the solutions are going to be harder to come up with. I'm excited to see what kinds of stuff people come up with in response to this.

Unpinned dependencies are harmful.

If you aren’t reviewing the diffs of your dependencies when you update them, you’re trusting random strangers on the Internet to run code on your systems.

Espionage often spans multi-year timelines of preparation and trust building. No lesser solution will ever be sufficient to protect you. Either read the diffs, or pay someone like RedHat to do so and hope that you can trust them.

Code review can't catch determined malicious actors. It just isn't a viable protection against that kind of attack.

Take a look at the Underhanded C Contest for plenty of proof: even very senior developers, told up front that there is a backdoor in the code, often can't find it! And they can't all be blamed on C being C; many of them would work the same in any language.

I don't know the solution, but shaming users and developers for not reviewing code enough sure as hell isn't it.

All that being said, reviewing changes in dependencies is still a good idea, as it can catch many other things.

There’s no shame in refusing to study dependency diffs. If you consciously evaluate the risk and deem it acceptable, I agree with your judgement call.

What I find shameful is the lack of advisory warnings about this risk — by the repository, by the language’s community, by the teaching material.

This should have been a clearly-known risk. Instead, it was a surprise. The shame here falls on the NPM community as a whole failing to educate its users on the risks inherent in NPM, not the individual authors and users of NPM modules.

Most malicious actors aren't determined, they're lazy and will take the path of least resistance. Code review will catch out those.

> you’re trusting random strangers on the Internet to run code on your systems

I mean that's pretty much how the world works. Even running Linux is trusting random strangers on the internet. Most of the time it works pretty well, but obviously it's not perfect. Even the largest companies in the world get caught with security issues from open source packages (remember Heartbleed?).

When I visit a random website, it is very hard for that website to compromise my computer or my private data. The only really viable way is a zero-day in my browser, or deception (e.g. phishing, malicious download).

When I install an app on my iPhone, it is very very hard for that app to compromise my phone or my private data.

In both of these cases, I can download and run almost any code, and be fairly confident that it won't significantly harm me. Why? Because they're extremely locked down and sandboxed with a high degree of isolation. On the other hand, if I install random software on my desktop computer, or random packages off NPM, I don't have such safety any more.

The prevalence of app stores and the Web itself speaks to the fact that it _is_ possible to trust random strangers without opening yourself up to a big security risk.

I think what you just described are platforms where you don't have to trust strangers.

Yes, and I think they meant to. Node isn't one of those.

Edit: partially because of design decisions around package permissions.

What does that have to do with development? My point was nearly everyone uses libraries that are written by strangers from the internet. It mostly works.

Lack of code signing is what's harmful here.

My maven projects' dependencies are technically all pinned, but updates are just a "mvn versions:use-latest-releases" away. But, crucially, I have a file that lists which GPG key IDs I trust to publish which artifacts. If the maintainer changes, the new maintainer will sign with their key instead, my builds will (configurably) fail, and I can review and decide whether I want to trust the new maintainer or not.

Of course, NPM maintainers steadfastly refuse to implement any kind of code signing support...
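As a sketch of the trust file the parent describes, translated to a package-manager-agnostic form (everything here is hypothetical: the file shape, the function name, the fingerprints; npm has no such feature built in):

```javascript
// Sketch: pin which signing-key fingerprint is allowed to publish each
// package, and fail the build when the key changes hands.
const trustedKeys = {
  "event-stream": "0xAAAA1111BBBB2222", // fingerprint reviewed at adoption time
};

function checkPublisherKey(pkg, fingerprint) {
  const expected = trustedKeys[pkg];
  if (!expected) {
    throw new Error(`no trusted key on file for ${pkg}: review before installing`);
  }
  if (expected !== fingerprint) {
    // A changed key is exactly the maintainer-handoff event worth reviewing.
    throw new Error(`${pkg}: signing key changed, review the new maintainer`);
  }
  return true;
}
```

A failed check is not necessarily an attack, but it is exactly the "review and decide whether I want to trust the new maintainer" moment described above.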

What if the maintainer had also given away the key used to sign the previous releases?

I know, it doesn't make much sense; why would anyone do that? But then again, I think that "why would you do that?!" feeling is part of what is triggering the negative reactions here. We just don't expect people to do things we wouldn't.

Even in package managers where signing support nominally exists, take-up is often poor.

IIRC RubyGems supports package signing, but almost no-one uses it, so it's effectively useless.

We're seeing the same pattern again with Docker. They added support for signing (content trust), but unfortunately it's not at all designed for the use case of packages downloaded from Docker Hub, so its adoption has been poor.

I think browsers show how to migrate away from insecure defaults successfully. The client software should start showing big obvious warnings. Later stages should add little inconveniences such as pop-ups and user acknowledgement prompts, eg. 'I understand that what I'm doing is dangerous, wait 5 seconds to continue'. The final stage should disable access to unsigned packages without major modifications to the default settings.

Browser security has heavily benefited from the fact that there are a small number of companies with control over the market and an incentive to improve security.

Unfortunately the development world doesn't really have the same opportunities.

If, for example, npm started to get strict about managing, curating, and securing libs, people could just move to a new package manager.

Security features (e.g. package signing, package curation) have not been prioritised by developers, so they aren't widely provided.

Actually, when publishing to the biggest Java package repository (Sonatype's Maven Central) you NEED to sign your packages.

Also, you can't transfer ownership without giving away your domain or GitHub account. You can add others who can also upload under your name, but if an accident occurs you're liable, too.

Would you have distrusted this maintainer though? If someone takes it over and publishes what appear to be real bug-fixes, I'd imagine most people would trust them. The same goes for trusting forks, or trusting the original developer not to hand over access.

> Would you have distrusted this maintainer though? If someone takes it over and publishes what appear to be real bug-fixes, I'd imagine most people would trust them.

Quite possibly. But I'd make a conscious decision to do it, and certainly wouldn't be in any position to blame the original maintainer.

The sale of any business that makes use of cryptography will generally include the private keys and passwords necessary to ensure business continuity. Code signing would not necessarily protect you against a human-approved transfer of assets as occurred here, whether as part of a whole-business sale or as a simple open-source project handoff.

If you have tons of dependencies then it's not feasible to check every diff. You may be able to do it, or pay someone, if you are a bigger organization, but a small shop or solo developer can't do this.

"If you have tons of depedencies then it's not feasible to check every diff."

Part of bringing in a dependency is bringing in the responsibility for verifying it's not obviously being used badly. One of the things I've come to respect the Go community for is its belief that dependencies are more expensive than most developers currently realize, and so generally library authors try to minimize dependencies. Our build systems make it very easy to technically bring in lots of dependencies, but are not currently assisting us in maintaining them properly very often. (In their defense, it is not entirely clear to me what the latter would even mean at an implementation level. I have some vague ideas, but nothing solid enough to complain about not having when even I don't know what it is I want exactly.)

I've definitely pulled some things in that pulled in ~10 other dependencies, but after inspection, they were generally all very reasonable. I've never pulled a Go library and gotten 250 dependencies pulled in transitively, which seems to be perfectly normal in the JS world.

I won't deny I haven't audited every single line of every single dependency... but I do look at every incoming patch when I update. It's part of the job. (And I have actually looked at the innards of a fairly significant number of the dependencies.) It's actually not that hard... malicious code tends to stick out like a sore thumb. Not always [1], but the vast majority of the time. In static languages, you see things like network activity happening where it shouldn't, and in things like JS, the obfuscation attempts themselves have a pretty obvious pattern to them (big honking random-looking string, fed to a variety of strange decoding functions and ultimately evaluated, very stereotypical look to it).

And let me underline the point that there's a lot of tooling right now that actively assists you into getting into trouble on this front, but doesn't do much to help you hold the line. I'm not blaming end developers 100%. Communities have some work here to be done too.

[1]: http://underhanded-c.org/
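That "big honking random-looking string" pattern can even be flagged mechanically, to a first approximation. A toy heuristic (illustrative only; real payloads, including this one, were better hidden and can evade it): flag files that pair an eval-like sink with a long, high-entropy string literal.

```javascript
// Toy detector for the pattern described above: a big opaque string
// plus an eval-like sink in the same file. Heuristic only.
function shannonEntropy(s) {
  const freq = new Map();
  for (const ch of s) freq.set(ch, (freq.get(ch) || 0) + 1);
  let h = 0;
  for (const n of freq.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h; // bits per character
}

function looksSuspicious(source) {
  const hasEvalSink = /\beval\s*\(|new\s+Function\s*\(/.test(source);
  const longLiterals = source.match(/["'][A-Za-z0-9+/=]{64,}["']/g) || [];
  // Random-looking base64/hex blobs sit well above ~4 bits/char;
  // ordinary prose and identifiers sit well below.
  return hasEvalSink && longLiterals.some((lit) => shannonEntropy(lit) > 4);
}
```

Obviously trivial to defeat by a determined attacker, but it catches the lazy, stereotypical obfuscation the parent describes.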

I'm not convinced that this incident argues in favor of Go's "a little copying is better than a little dependency", which I continue to strongly disagree with. Rather, it indicates that you shouldn't blindly upgrade. Dependency pinning exists for a reason, and copying code introduces more problems than it solves.

I don't think you have to get all the way to the Go community's opinions to be in a reasonable place; I think the JS community is at a far extreme in the other direction and suffers this problem particularly badly, but that doesn't mean the other extreme is the ideal point. I don't personally know of any other community where it's considered perfectly hunky-dory to have one-line libraries... which then depend on other one-line libraries. My Python experiences are closer to the Go side than the JS side... yeah, I expect Python to pull in a few more things than maybe Go would, but still be sane. The node_modules directories I've seen have been insane... and the ones I've seen are for tiny little projects, relatively speaking. There isn't even an excuse that we need LDAP and a DB interface and some complicated ML library or something... it was just a REST shell and not even all that large of one. This one tiny project yanked in more dependencies than the sum total of several Perl projects I'm in charge of that ran over the course of over a decade, and Perl's a bit to the "dependency-happy" side itself!

I suggest a different narrative. That node.js achieved the decades-old aspiration of fine-grain software reuse... and has some technical debt around building the social and technical infrastructure to support that.

Fine-grain sharing gracefully at scale is a hard technical and social challenge. A harder challenge than CPAN faced, and addressed so imperfectly. But whereas the Perl community was forced to struggle over years to build its own infrastructure - purpose-built infrastructure - node.js was able to take a different path. A story goes that node.js almost didn't get an npm, but for someone's suggestion "don't be python" (which struggled for years). It built a minimum-viable database, and leaned heavily on GitHub. The community didn't develop the same focus on, and control over, its own communal infrastructure tooling. And now faces the completely unsurprising costs of that tradeoff. Arguably behind time, due to community structure and governance challenges.

Let's imagine you were creating a powerful new language. Having paragraph- and line-granularity community sharing could well be a worthwhile goal. Features like multiple dispatch and dependent types and DSLs and collaborative compilation... could permit far finer-grain sharing than even the node.js ecosystem manages. But you would never think npm plus github sufficient infrastructure to support it. Except perhaps in some early community-bootstrap phase.

Unfortunately, one of the things that makes JS dev pull in so many deps is that it lacks a decent standard library. Meanwhile, the Go standard library is amazing!

Most users of dependencies will never consider whether to trust those dependencies at all.

If you’ve considered the problem and decided to trust them as a compromise to reduce staffing and costs, that’s a fine outcome in my book.

How many health/medical startups are blindly trusting dependencies today, having never thought through the harm I describe?

That's why you lock your dependencies.

Hope you don’t start your project with the compromised version — and then lock it in.

Dependencies need bugfixes and you may even want to use new features, so locking is not a permanent solution.

If you're on point enough to know which features/bugfixes you're getting then you're probably doing enough to be safe already. Just don't go around running npm -u for no reason and you should be fine.

The only way to be truly safe from this attack vector is to own all of your dependencies, and nobody is willing to do that so we're all assuming some amount of risk.

That will work assuming you have audited the code already, but you will also have to audit every changed dependency, and every change in their dependencies, every time you bump a dependency!

I would argue that having tons of dependencies is a problem in and of itself. This has become normal in software development because it’s been made easy to include dependencies, but a lot of the costs and risks are hidden.

Maybe not, but you can avoid updating unless necessary. Assuming you only make necessary updates (and at least do a cursory check at the time) and vet any newly added dependencies as you go, you can greatly reduce your own attack surface. You're still probably vulnerable to dependency changes up the chain, but then at least you're depending on a community that is ostensibly trustworthy (i.e. if every maintainer at least feels good about their top-level dependencies then the whole tree should be trustworthy).

I would caution one to not have tons of dependencies. More surface area in terms of the amount of 3rd party libraries/developers means more chances that one of them is not a responsible maintainer, as in this case. That increases the application's security risk.

Then let's hope you're not using Webpack, which alone has several hundred of them, and not small ones, mind you... super complex libraries. Trying to "own" enough of them to be able to securely review code diffs is completely infeasible.

The problem here is that the diff is not source, but "compiled" code. We ultimately come back to "Reflections on Trusting Trust" [1]

[1]: https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html

I'd love to do this, but how can you review thousands of lines of code that changed?

You can use smaller, more targeted libraries so that the changes you need to review are actually relevant to your project.

In the Node/NPM world, this is pretty difficult. There are many (many) small libraries and everything you depend on brings in many more.

That's a big and totally objective reason to abandon the Node.js/NPM ecosystem, like its original author did.

A language that doesn't have a decent standard library means that you'll have to use huge amounts of code that random strangers used, and the chain of dependencies will grow larger and larger.

In languages like Ruby and Python, you have a decent standard library, and then big libraries and frameworks that are maintained by the community, like Rails, Django, SQLAlchemy, NumPy, etc. That's healthy because it minimises or eliminates the number of small libraries maintained by a single person, thus maximising the amount of code that you can trust (because you can trust the devs and development process of a popular library backed by a foundation or with many contributors).

With Node, almost every function comes from a different package. And there's no bar to entry, and no checks.
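As a small illustration of the standard-library point: behavior that once came from a micro-package (left-pad being the famous example) is now built into the platform itself, with no third-party maintainer to trust.

```javascript
// Sketch: many micro-packages duplicate what the platform now provides.
// Left-pad-style behavior via the built-in String.prototype.padStart:
const padded = "5".padStart(3, "0");
console.log(padded); // "005" -- no dependency, no maintainer to vet
```

Every function the runtime provides natively is one fewer stranger in your dependency tree.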

If Node.js is going to stay, someone needs to take on the responsibility of forming a project, belonging to an NGO or something, where the more popular libraries are merged and fused into a standard library like Python's. Personally, I'm not touching it until then.

You can't, you're forced to trust to some degree depending on factors specific to your project. If you're writing missile navigation code then you better check every last diff, but if you're writing a recipe sharing site then you don't have the same burden really.

Ideally you don’t pull in thousands of lines of code in unknown dependencies as a starting point

How does this work practically when the vuln exists only in the minified version of the code?

Most projects are already bundling and minifying code themselves. Any cookiecutter-type tool will set that up.

Unfortunately this isn't really doable in today's world of JavaScript development. If you want to use any of the popular frameworks you are installing a metric ton of dependency code. So not only do you have to somehow review that initial set of code, but you need to know how to spot these types of things. Then, once you complete that task, you now have to look at the diffs for each update. And there will be a lot of updates.

What you're suggesting is a great idea from a security perspective. But for typical workflows for JS development it just isn't practical.

Now, maybe this means we need different workflows and fewer dependencies. But it's so ingrained that I don't know that it's easy to fix or change.

But you also have to review the diffs of your dependency's dependencies. And their dependencies. And so on and so on.

Your direct dependencies are not the only problem.
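A toy sketch of why that matters (the graph below borrows event-stream's real dependency names, but the structure is simplified): one direct dependency can silently multiply into many packages you never chose.

```javascript
// Sketch: reviewing only direct dependencies misses the tree underneath.
// Simplified dependency graph; a real install fans out far wider.
const deps = {
  app: ["event-stream"],
  "event-stream": ["through", "map-stream", "flatmap-stream"],
  "through": [], "map-stream": [], "flatmap-stream": []
};

// Walk the graph and collect every transitive dependency of `name`.
function transitive(name, seen = new Set()) {
  for (const d of deps[name] || []) {
    if (!seen.has(d)) { seen.add(d); transitive(d, seen); }
  }
  return seen;
}

console.log(transitive("app").size); // 4 packages to review, not 1
```

In the real incident, the malicious code arrived exactly this way: flatmap-stream was added as a dependency of event-stream, two levels removed from most of its victims.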

That thread is a huge argument for paid software. It's mind blowing how folks expect people to maintain things for nothing and get mad when it doesn't work perfectly. Some silly choices were made by the original maintainer but give the dude a break. He doesn't owe you a damn thing.

Paid software most often contains open source components that are just as vulnerable as this one, and if you expect paid software to be better maintained than free software... you are in for a surprise (hint: the corporate release cycle is a grinder).

I agree. The author released this library under MIT license which explicitly states that there is no warranty. Yet some people in the GitHub issue thread are acting as though the author owed something to them personally.

Do you know how much of your paid software contains a similar disclaimer absolving the vendor of responsibility for damages caused by the software? Only you'd never have a chance to look at that code and find this injection...

You're not wrong; quite right, in fact.

My point here is less about payment guaranteeing a lack of bugs or having someone to point a finger at and more at incentivizing the team building it to fix things quickly. Of course, there's always the <Insert Negligent, Well-Compensated Fortune 500 Company Here> problem, but that's a case-by-case issue.

Yes but then you have someone culpable for their mistake.

I agree.

If anything maybe those who depend on unchecked code so willingly have the burden of responsibility?
