It came with the actual title, although arguably not the correct one.
>"we received a report to our security bug bounty program of a vulnerability that would allow an attacker to publish new versions of any npm package using an account without proper authorization."
>"This vulnerability existed in the npm registry beyond the timeframe for which we have telemetry to determine whether it has ever been exploited maliciously. However, we can say with high confidence that this vulnerability has not been exploited maliciously during the timeframe for which we have available telemetry, which goes back to September 2020."
>Any version of any npm package published before September 2020 could have been tampered with by anyone aware of that exploit and no-one would be any the wiser. That is pretty bad news.
> Second, on November 2 we received a report to our security bug bounty program of a vulnerability that would allow an attacker to publish new versions of any npm package using an account without proper authorization.
They correctly authenticated the attacker and checked they were authorised to upload a new version of their own package, but a malicious payload allowed the attacker to then upload a new version of a completely unrelated package that they weren't authorised for. Ouch!
Yeah, this is what's going to keep me up tonight. Yikes.
I can't help but wonder if the root cause was HTTP request smuggling, or if changing package.json was enough.
How do we even mitigate against these types of supply-chain attacks, aside from disabling run-scripts, using lockfiles and carefully auditing the entire dependency tree on every module update?
I'm seriously considering moving to a workflow of installing dependencies in containers or VMs, auditing them there, and then perhaps committing known safe snapshots of node_modules into my repos (YUCK). Horrible developer experience, but at least it'll help me sleep at night.
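For what it's worth, a rough sketch of the container flavour of that workflow (image tag and paths are just placeholders):

```sh
# Resolve and install dependencies inside a throwaway container, with
# lifecycle scripts disabled, so nothing gets to run on the host during install.
docker run --rm -v "$PWD":/app -w /app node:16 npm ci --ignore-scripts

# Audit the resulting node_modules there (or diff it against a previously
# vetted snapshot) before any of that code is executed outside the sandbox.
```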
Don’t import thousands of modules from third parties just to write a simple web app. If you have 10 stable dependencies it’s no problem to vendor them and vet changes. If you have 10k you’ve entirely given up on any pretence of security.
The NodeJS team refuses to discuss NPM because it's a separate 3rd party. And yet.... this NodeJS Core module comes pre-installed as a global NPM package.
We're just getting started.
This module installs or even reinstalls any supported package manager when you execute a script whose name matches one it recognises. It's opt-in for only a short period, and the stated intent is to expand beyond package manager installations.
Amidst all that's been going on, NPM (Nonstop Published Moments) is working on a feature that silently hijacks user commands and installs foreign software. The code found in those compromised packages operated in a similar manner and was labeled a critical severity vulnerability.
The following might actually make you cry.
Of these third-party remote distributions it's downloading, the number of checksums, keys, or even build configurations being verified is zero.
The game that Microsoft is playing with their recent acquisitions here is quite clear, but there's too much collateral damage.
corepack (or package manager manager) was transferred to be a Node.js Foundation project, voted to be included in the release by the Node.js Technical Steering Committee. The one member I'm aware of who is affiliated with GitHub/npm abstained from the vote. The specific utility of corepack is being championed by the package managers not distributed with Node so that (Microsoft's) `npm` is not the single default choice.
I'm interested to hear what parts of this you see as coming from Microsoft/NPM as I didn't get that vibe? In my view this was more likely reactionary to the Microsoft acquisitions (npm previously being a benign tumour, doctors are now suggesting it may grow :)
NPM's security issues prime the ecosystem for privacy and security topic marketing (ongoing, check their blog), which is leveraged to increase demand for Github's new cloud-based services.
In the meantime they will just carry on moving parts of NPM to Github until there's so little of the former left, that it'll be hard to justify sticking with it rather than just moving to Github's registry like everyone else.
Eventually NPM gets snuffed-out and people will either be glad it's finally gone, or perhaps not even notice.
Additionally, unlike other approaches, Corepack ensures that package manager versions are pinned per project and you don't need to blindly install newest ones via `npm i -g npm` (which could potentially be hijacked via the type of vulnerability discussed here). It intends to make your projects more secure, not less.
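Concretely, the per-project pinning works through the `packageManager` field that Corepack reads from package.json (the version below is just an example):

```json
{
  "name": "my-app",
  "packageManager": "yarn@3.1.0"
}
```

With `corepack enable` run once, invoking `yarn` inside that project transparently fetches and uses exactly that pinned version instead of whatever happens to be installed globally.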
- No security checks are present in the package manager download and installation process so there are still no guarantees.
- Existing installations of package managers are automatically overwritten when the user calls their binary. What if this was a custom compilation or other customisations were made?
- This solution does a lot more behind the scenes than just run that yarn command that the user asked for but hadn't installed.
- Why not simply notify the user when their package manager isn't installed or only allow it with a forced flag? (As has been suggested uncountable times by numerous people anywhere this topic came up over the years.)
Disrespecting user autonomy, capacity to self-regulate, and ownership over their machine and code is not the way.
The problem isn't only the ridiculous amounts of untrusted code, but the thousands of new developers of the last 10 years who think this is the way to write reliable code. They've never acknowledged the risks of having everyone else write your code for you, and they overestimate how unique and interesting their apps are.
If you must participate in this madness, static analysis tools exist to scan your 10,000 dependencies; taking security seriously is the real issue.
And what's the alternative? Do you write your own libraries to store and check password hashes complete with hash and salt functions? Roll your own google oauth flow? Your own user session management library?
It's madness on either side; the difference is that `npm install` and pray lets you actually get things done.
More tangentially, use persistent lockfiles and do periodic upgrades when warranted (e.g. relevant advisories are out) and check new versions getting installed.
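In npm terms, that workflow is roughly:

```sh
npm ci         # install exactly what package-lock.json says, never re-resolve ranges
npm outdated   # list direct deps where newer versions exist, before deciding to upgrade
npm audit      # check the locked tree against published advisories
```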
Yes you can write your own things like session management, yes that is better than the entire web depending on a module for session management which depends on a module which depends on a module maintained by a bored teenager in Russia.
Please do check out other ecosystems, there is another way.
Using a small number of libraries, where each library provides a large amount of functionality. When I install Django, for instance, four packages are installed, and each package does a substantial amount of work. I don't have to install 1000 packages where each package is three lines of code.
Without accounting for that, your comparison makes no sense! Not to mention that you're comparing two languages at very different levels. A low-level language like C would never behave like a higher-level language.
Maybe this is a dumb question but could you please suggest some of these tools that can scan dependencies?
We can talk about alert fatigue, but to be honest, I feel more secure with my node_modules folder than I do with my operating system and plethora of DLLs it loads.
I don't wanna turn this into a whataboutism argument, but at some point you gotta get to work, write some code and depend on some libraries other people have written.
If a dependency has been compromised it doesn't matter if its code is actually used, since it can include a lifecycle script that's executed at install-time, which was apparently the mechanism for the recent ua-parser-js exploit.
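For anyone who hasn't seen the mechanism: any command registered under the install hooks in a package's package.json runs automatically during `npm install` unless scripts are disabled. A fragment like this (names made up) is all it takes:

```json
{
  "name": "some-compromised-package",
  "version": "1.2.3",
  "scripts": {
    "preinstall": "node ./steal-credentials.js"
  }
}
```

The payload fires on every machine that merely installs the package, whether or not the package's code is ever imported.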
Wait, I’m not safe using “npm audit”?
Many languages have a decent standard library which covers most of the bases, so it’s possible to have a very restricted set of dependencies.
Hopefully Deno helps with this pain point.
I disagree. Any application developer who seriously thinks that they only have 10 dependencies if they're only importing directly 10 dependencies should not be an application developer in the first place.
At this point if you're not actively auditing your dependencies, and reducing all of them where you can, then you're on the wrong side of history and going down with the Titanic.
The frank truth is that including a dependency is, and always has been, giving a random person from the internet commit privileges to prod. The fact that "everyone else did it" doesn't make it less stupid.
I mean, no. This is hyperbole at best and just wrong at median. A system of relative trust has worked very well for a very long time - Linus doesn’t have root access to all our systems, even if we don’t have to read every line of code.
Npm on the other hand is much, much worse. Anyone can publish anything they want, and they can point to any random source code repository claiming that this is the source. If we look at how often vulnerable packages are discovered in eg. npm, I'd argue that the current level of trust and quality aren't sustainable, partly due to the potentially huge number of direct and transitive dependencies a project may have.
Unless you start to review the actual component you have no way to verify this, and unlike the Linux kernel there is no promise that anyone has ever reviewed the package you download. You can of course add free tools such as the OWASP Dependency Check, but these will typically lag a bit behind as they rely on published vulnerabilities. Other tools such as the Sonatype Nexus platform are more proactive, but can be expensive.
For npm, trust isn't really a concept. The repository is just a channel used to publish packages, they don't accept any responsibility for anything published, which is fair considering they allow anyone to publish for free. There are no mechanisms in npm that can help you verify the origin of a package and point to a publicly available source code repository or that ensures that the account owner was actually the person who published the package.
Security and trust are very hard, but my point here is that npm does nothing to facilitate either, making it very difficult for the average developer to be aware of any issues. The one tool you get with npm is...not really working the way it was supposed to.
1 - https://reproducible-builds.org/
2 - https://news.ycombinator.com/item?id=27761334
Maven, on the other hand, defines several requirements, such as all files in a package being signed and more metadata, and they also provide free tools developers can use to improve the quality of a package.
If <random maintainer> commits code to their repo, pushes it to npm, and you pull that in to your project (possibly as an indirect dependency), what controls are in place to ensure that that code is not malicious? As far as I can tell, there are none. So how is this not trusting that <random maintainer> with commit-to-prod privileges?
Different risk profiles exist. There’s a difference between installing whatever from wherever, installing a relatively well known project but with only one or two Actually Trusted maintainers, and installing a high profile well maintained project with corporate backing.
This is true in Linux land, and it’s true in npm land. You can’t just add whatever repo and apt get to your hearts content. Or, you know, you also can, depending on your tolerance for risk.
This doesn't get any better as you get more expert. I've had conversations with JS devs who've been professionally coding for years, and none of them are aware of it (or, if they are, they don't treat it as a serious threat). You can see the same in the comments here.
If there's not even any discussion of risk, and no efforts to manage it, then it's not really a relevant factor. No-one is considering the risk of importing dependencies, so the 0-100 scale is permanently stuck on 100.
Sure, the entire OS is a dependency. Nothing I said contradicts that. And yes, every application developer should be aware of what they are depending on when they write software for a particular OS.
> The key is to have an entire ecosystem that you can, to some degree more or less, trust.
You don't necessarily need to trust an entire ecosystem, but yes, every dependency you have is a matter of trust on your part; you are trusting the dependency to work the way you need it to work and not to introduce vulnerabilities that you aren't aware of and can't deal with. Which is why you need to be explicitly aware of every dependency you have, not just the ones you directly import.
I’m okay with saying, “I trust RHEL to be roughly ok, just understand the model and how to use it, and keep my ear to the ground for the experts in case something comes up.”
At the level of npm, I feel roughly the same about React. I don’t trust it quite as much, but I’m also not going to read every code change. I’ll read a CHANGELOG, sure, and spelunk through the code from time to time, but that’s not really the same. I’ll probably check out their direct dependencies the first time, but that’s it.
I actually don’t know how you could call yourself an application developer in most ecosystems and know every single dependency you actually have all the way down, soup to nuts. Heck, there are dependencies that I accept so that my code will run on machines that I have no special knowledge of, not just my own familiar architecture. I accept them because I want to work on the details of my application and have it be useful on more than just my own machine.
Edit for clarity: I agree with almost everything you’re suggesting as sensible. Just not with your conclusion: that you’re not a “real” application developer if you don’t know all of your dependencies
Accepting the OS as a dependency includes the security updates from the OS, sure.
> How do you literally personally vet every line of code
Ah, I see, you think "understanding the dependency" requires vetting every line of code. That's not what I meant. What I meant is, if you use library A, and library A depends on libraries B, C, and D, and those libraries in turn depend on libraries E, F, G, H, I, etc. etc., then you don't just need to be aware that you depend on library A, because that's the only one you're directly importing. You need to be aware of all the dependencies, all the way down. You might not personally vet every line of code in every one of them, but you need to be aware that you're using them and you need to be aware of how trustworthy they are, so you can judge whether it's really worth having them and exposing your application to the risks of using them.
> I’ll probably check out their direct dependencies the first time, but that’s it.
So if they introduce a new dependency, you don't care? You should. That's the kind of thing I'm talking about. Again, you might not go and vet every line of code in the new dependency, but you need to be aware that it's there and how risky it is.
> I actually don’t know how you could call yourself an application developer in most ecosystems and know every single dependency you actually have all the way down, soup to nuts.
If you're developing using open source code, information about what dependencies a given library has is easily discoverable. If you're developing for a proprietary system, things might be different.
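With npm 7+, the full resolved tree and the reason any given package ended up in it are each one command away:

```sh
npm ls --all          # every package that will actually be installed, with nesting
npm explain minimist  # which dependency chains pulled this package in (example name)
```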
But I don’t know how you can make such a strong distinction between “a committed line of code” vs “a dependency”, because the only thing differentiating them is the relative strength of earned trust regarding commits to “stdlib,” commits to “core,” commits to “community adopted,” etc.
It’s too much. There’s a long road of grey between “manually checks every line running on all possible systems where code runs and verifies code against compiled binary” and “just run npm install and yer done!”
That said, I don't know what the answer is for JS. There are too many dependency cycles that make auditing upgrades intractable. If you're not constantly upgrading libraries, you'll be unable to add a new one because it probably relies on a newer version of something you already had. In most other ecosystems, upgrading can be a more deliberate activity. I tried to audit NPM module upgrades and it's next to impossible if using something like Create React App. The last time I tried Create React App, yarn-audit reported ~5,000 security issues on a freshly created app. Many were duplicates due to the same module being depended on multiple times, but it's still problematic.
The reason packages are so big is that the complexity of an interesting app is irreducible. People don't import thousands of modules for fun; they do it because simple software tends towards requiring complex underpinning. Consider the amount of operating system that underlies a simple "Hello, world!" GUI app. And since the browser-provided abstractions are trash for writing a web app, people swap them out with frameworks.
I'm working on a React app right now where I've imported about a dozen dependencies explicitly (half of which are TypeScript @type files, so closer to a half-dozen). The total size of my `node_modules` directory is closer to a couple hundred packages. It's 35MB of files. And no, I couldn't really leave any of them out to do the thing I want to do, unfortunately.
1) "We have is-array as a dependency" Why? Well, pre Array.isArray, there wasn't anything built-in. Why not just write a little utility function which does what is-array does? See #3
2) "We have both joi and io-ts. Don't they do roughly the same thing?" They do; io object validation. New code uses io-ts, but a bunch of old code relies on joi. Should we update it? Eh we'll get around to it (we never do).
3) "is-array is ten lines of code. why don't we just copy-paste it?" Multiple arguments against this, most bad. Maybe the license doesn't support it. More usually; fear that something will change and you'll have to maintain the code you've pasted without the skills to do so. Better to outsource it (then, naturally, discount the cost of outsourcing).
4) "JSON.parse is built-in, but we want to use YAML for this". So, you use YAML. And need a dependency. Just use JSON! This is all-over, not just in serialization, but in UI especially; the cost analysis between building some UI component (reasonably understood cost) versus finding a library for it (poorly understood cost, always underestimated).
Not all dependency usage is irreducible. Most is. But some of it is born, fundamentally, out of a cost discount on dependency maintenance and a corporate deprioritization of security (in action; usually not in words).
Sorry, to clarify: when I say "Linux distro" here, I mean the distribution package sets, like Debian or Ubuntu.
> Second, when people pull software from their Linux distribution that ultimately comes from developers all over the Internet, they do it to use the software themselves, not to develop applications that others are going to have to deal with.
The distros are chock full of intermediary code libraries that people use all the time to build novel applications depending on those libraries, which they then distribute via the distro package managers. I'm not quite sure what you mean here... I've never downloaded libfftw3-bin for its own sake; 100% of the time I've done that because someone developed an application using it that I now have to deal with.
Conversely, I've also used NodeJS and npm to build applications I intend to use myself. It's a great framework for making a standalone localhost-only server that talks to a Chrome plugin to augment the behavior of some site (like synchronizing between GitHub and a local code repo by allowing me to kick off a push or PR from both the command line and the browser with the same service).
> Third, Linux distributions put an extra layer of vetting in between their upstream developers and their users.
This is a good point. It's a centralization where npm tries to solve this problem via a distributed solution, but I'm personally leaning in the direction that the solution the distros use is the right way to go.
People who develop web apps want that level of convenience. And if we can't solve the security problem in a distributed fashion, web development will end up owned by big players who can pay the money to solve the problem in a centralized fashion.
Why not? Because some big, centralized player has put the time, effort, and money into making yaml part of a complete library that gives you everything you need to write desktop software. Nobody writes desktop software by importing thousands of tiny libraries from all over the Internet.
> That's going to be incompatible with writing interesting software on the web, unless we want to just hand the problem over to a handful of big players who can afford to hand-vet 10,000 dependencies.
Consolidating into a distro-management-style solution would be one option.
You did say the argument was bad, but a license that prevents you from making a copy manually but allows you to make a copy through the package manager isn't a thing, is it? In either case the output of your build process is a derived work that needs to comply with the license.
Unless, perhaps, you have a LGPL dependency that you include by dynamic linking (or the equivalent in JS – inclusion as a separate script rather than bundling it?) in a non-GPL application and make sure the end user is given the opportunity to replace with their own version as required by the license.
These kinds of claims demand data, not just bare assertions of their truthiness.
Firefox, as an app with an Electron-style architecture (before Electron even existed), was doing some pretty interesting stuff circa 2011 (including stuff that it can't do now, like give you a menu item and a toolbar button that takes you to a page's RSS feed), with a bunch of its application logic embodied in something well under 250k LOC of JS.
The last time I measured it, a Hello World created by following create-react-app's README required about half a _gigabyte_ of disk space between just before the first `npm install` and "done".
That NPM programmers don't know _how_ to write code without the kind of complexity that we see today is one matter. The claim that the complexity is irreducible is an entirely different matter.
... And I think it's an interesting question to ask why we can trust the security of, say, Debian packages and not npm, given how many packages I have to pull down to compile Firefox that I haven't personally vetted.
Right, just like every other Electron-style app that exists. The comparison I made was a fair one.
> To compare it to npm development, you would need to factor in the total footprint of every package that you had to install to compile Firefox in 2011.
No, you wouldn't. That's a completely off-the-wall comparison.
How many lines of application code (business logic written in JS including transitive NPM dependencies before minification) go into a typical Electron app in 2021? Into a medium sized web app? Is the heft-to-strength ratio (smaller is better) less than that of Firefox 4, about the same, or ⋙?
If I do the same thing with my JS app, I still download a bunch of libraries, but it puts them all in node_modules. That's also about 500MB. The resulting compiled/built code is around 2MB.
I dunno, seems roughly the same.
With respect to the package size issue, the 500MB-to-2MB observation does not bode well for the claim of irreducibility.
This is absolutely, demonstrably false. Can you really claim that you use 100% of the features provided by all of the dependencies you pull in? If not, you are introducing unnecessary complexity to your code.
That doesn't mean that this is necessarily a bad thing, or that we should never ever introduce incidental complexity—we'd never get anything done if that was the case. My point is simply that there exists a spectrum that goes from "write everything from scratch" on one end all the way to "always use third-party code wherever possible" on the other. It's up to you to make the tradeoff of which libraries are worth pulling in for a given project, but when you use third-party code, you inevitably introduce some amount of complexity that has nothing to do with your app and doesn't need to be there.
I have 35 MB of node_modules, but after webpack walks the module hierarchy and tree-shakes out all module exports that aren't reachable, I'm left with a couple hundred kilobytes of code in the final product.
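The mechanism, for anyone unfamiliar: with ES modules and named imports, the bundler can statically see which exports are reachable and drop the rest (lodash-es here is just an illustrative package):

```js
// Only debounce and whatever it transitively uses end up in the bundle;
// the rest of the library's exports are shaken out at build time.
import { debounce } from 'lodash-es';

export const onResize = debounce(() => {
  // ...layout work...
}, 100);
```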
That’s exactly my point. This is a tradeoff that’s inherent to software development and has nothing to do with the web or Node or NPM. You could just as well decide to write your desktop app with a much smaller GUI library, or even write your own minimal one, if the tradeoff is worth it to reduce complexity. (Example: you’re writing an app for an embedded device with very limited resources that won’t be able to handle GTK.)
This is the key.
If browsers would improve here we wouldn't need half of the dependencies that we use now. It took nearly a decade to get from moment.js to some proper usable native functions for example.
Besides that we _really_ need to solve the issue of outdated browsers. Because even when those native APIs exist we'll need fallbacks and polyfills and lots of devs will opt for a non-standard option (for various reasons).
The web is still a document platform with some interactivity bolted on top, I love it but it's a fucking mess.
JS has a culture of using lots of small, composable modules that do one thing well rather than large, monolithic frameworks, but that's only an aggravating factor; it's not the root of the problem.
Here's a similar issue that occurred with Python's PIP just this year: https://portswigger.net/daily-swig/dependency-confusion-atta...
JS and its culture of small dependencies that do one thing but import 100 other things to do that thing is the root of the problem here.
So the issue is probably something other than using bazaar-style code design. I think as other people in the thread have noted, the distros have centralized, managed, and curated package libraries that get periodically version "check-pointed" and this is not how npm works.
I may have my answer to the original thought I floated: the way this problem has been solved successfully is to centralize responsibility for oversight instead of distributing it.
And sometimes even something the language already does, but the author didn’t know.
Lots of people are writing interesting web software without these problems - the website you’re currently posting on is one example. So I completely disagree with this statement and think you need to examine your assumptions.
There is life outside npm.
OpenStreetMap is "interesting." Docs and Sheets are "interesting." Autodesk Fusion 360 is "interesting." Facebook is "interesting." Cloud service monitoring graph builders are "interesting." The Scratch in-browser graphical coding tool is "interesting." Sites that are pushing the edge of what the browser technology is capable of are "interesting."
At some stage after you've seen enough 'interesting' dependencies changing the world around your app as you write it you'll realise that boring is good for most of the tech you depend on - the more boring the better, and the fewer dependencies the better.
One need not be a big player to write good code without 10000 dependencies
What are the actual time cost savings when you take the total costs into consideration? What would it look like if you didn't implement an app by stringing together dozens/hundreds/thousands of third-party modules implemented bottom-up, but instead took control of the whole thing top-down?
That's a small up-front one-time cost relative to writing Redux from scratch. And before anyone asks... Yes, our use case is complex enough to justify a local state storage solution based on immutable state curated via actions and reducers. Just as our rendering use case is complex enough to justify React.
Or is it, disquietingly, the possibility that they are completely vulnerable to this sort of attack and either nobody has noticed they're compromised or attackers haven't decided that compromising a major desktop Linux distro is worth the time?
So yes, distributions are carefully curated, with a large team of experts vetting the system in a huge number of ways, and are always looking to improve upon them. Because attackers are actively attempting to compromise major distributions.
Is vendoring in a dependency just slowing things down? It slows down development and bakes existing attacks in for longer.
Don't use the popular hype garbage. Yes, I realize that may not be an option for a lot of people professionally. But I believe if you actually spend some time on due diligence for any dependency you consider adding, you can significantly reduce the number of untrusted deps you pull in.
I agree, having those dependencies authored by the Node.js Foundation itself will yield a higher level of trust. But we're all human, and one can argue earnest open source developers have better-aligned incentives than a randomly selected Node.js Foundation employee.
I honestly am not sure I fully agree with what I've just written above either. But one thing I would want to pinpoint: these things are NOT black and white. The specific set of trade-offs the Node.js ecosystem has fallen into might look accidental and inadequate. But I think it's fairly reasonable.
Yes, you should write leftpad in-house. Anything that is a copy-paste Stack Overflow answer should not be a dependency.
Nobody is suggesting we each write our own charting library, but we should each be capable of writing that function that picks a random integer between 10 and 15. Because the npm version of that function will have the four thousand dependencies that everybody likes to mock whenever npm is discussed.
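(And indeed it is a one-liner:)

```js
// Random integer between 10 and 15, inclusive.
const randomIntBetween10And15 = () => 10 + Math.floor(Math.random() * 6);
```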
Are there stable dependencies from reputable companies that do the things I want without me vetting 10k submodule imports?
- someone at Facebook or Google has vetted the dependency graph for those
- I also assume they have internal Snyk-like tools
- I also assume other users have similar tools
so someone should catch it.
When it comes to anything else I often look into what it pulls in.
Also I keep an eye on the yarn.lock-file in pull requests.
Just a week or two ago, a malicious NPM package was published which, for the hour or so that it was up, would be pulled in by any installation of create-react-app, since somewhere in the dependency tree it was specified with “^” to allow for minor updates.
Any machine that ran “npm i” with CRA or who knows how many other projects during that hour may have compromised credentials.
1 hour to find and unpublish the malicious package is a fast turnaround time, so someone was watching and that’s great. But any NPM tree that includes anything other than fully-specified and locked versions all the way down the tree is just waiting for the next shoe to drop.
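The difference is literally one character in package.json; the caret form is what lets a malicious release ride in automatically (version numbers below are illustrative):

```json
{
  "dependencies": {
    "ua-parser-js": "^0.7.28",
    "some-other-lib": "1.4.2"
  }
}
```

`^0.7.28` accepts any newly published 0.7.x, while `1.4.2` only ever matches that exact version - and a committed lockfile plus `npm ci` pins the transitive tree as well.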
That's kinda what I assumed, but "only run code that has been signed off on by a major company" is kinda a shitty solution.
1. Running those builds in VMs is a good idea.
2. Monitoring for weird behavior.
3. Restricting build scripts from touching anything outside of the build directory.
4. Pressuring organizations like npm to step up their security game.
It would be really nice if package repositories:
1. Produced a signed audit log
2. Supported signing keys for said audit log
3. Supported strong 2FA methods
4. Created tooling that didn't run build scripts with full system access
etc etc etc
I started working on a crates.io mirror and a `cargo sandbox [build|check|etc]` command that would allow crates to specify a permissions manifest for their build scripts, store the policy in a lockfile, and then warn you if a locked policy increased in scope. I'm too busy to finish it but it isn't very hard to do.
Signed audit logs seem like a good idea.
Now...how to get developers to avoid using NPM and Yarn altogether on sensitive projects...
I know HN is usually skeptical of anything cryptocurrency/blockchain related, and I am too. But as weird as it sounds, I think blockchain might actually be the solution here.
The problem with dependency auditing is it's a lot of work. And it's also duplicate work. What you'd really like to know is whether the dependency you're considering has already been audited by someone you can trust.
Ideally someone with skin in the game. Someone who stands to lose something if their audit is incorrect.
Imagine a DeFi app that lets people buy and sell insurance for any commit hash of any open source library. The insurance pays out if a vulnerability in that commit hash is found.
* As a library user, you want to buy insurance for every library you use. If you experience a security breach, the money you get from the insurance will help you deal with the aftermath.
* As an independent hacker, you can make passive income by auditing libraries and selling insurance for the ones that seem solid. If you identify a security flaw, buy up insurance for that library, then publicize the flaw for a big payday.
* A distributed, anonymous marketplace is actually valuable here, because it encourages "insider trading" on the part of people who work for offensive cybersecurity orgs. Suppose Jane Hacker is working with a criminal org that's successfully penetrated a particular library. Suppose Jane wants to leave her life of crime behind. All she has to do is buy up insurance for the library that was penetrated and then anonymously disclose the vulnerability.
* Even if you never trade on the insurance marketplace yourself, you can get a general idea of how risky a library is by checking how much its insurance costs. (Insurance might be subject to price manipulation by offensive cybersecurity orgs, but independent hackers would be incentivized to identify and correct such price manipulation.)
The fact that there is actual value here should give the creator a huge advantage over other "Web 3.0" crypto junk.
Maybe I'm just incredibly cynical from my experiences with the intersection of the JS ecosystem and security, but...
...I'd bet dimes to dollars it's the latter (just changing the package.json). My guess is they authenticate but don't actually scope the authentication properly, and no one noticed because no one thought to look.
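To make that guess concrete, the shape of bug being described is something like the sketch below - purely hypothetical pseudo-server code, not anything from npm's actual implementation:

```js
// Toy stand-ins for the registry's real storage and maintainer lookup.
const registry = new Map();
const maintainersOf = (pkg) =>
  pkg === 'attackers-own-package' ? ['attacker'] : ['someone-else'];

function handlePublish(user, urlPackageName, manifest) {
  // Authorization is checked against the package named in the request URL...
  if (!maintainersOf(urlPackageName).includes(user)) {
    throw new Error('403: not a maintainer');
  }
  // ...but the write trusts the name inside the uploaded manifest,
  // so a mismatched payload lands in somebody else's package.
  registry.set(manifest.name /* should be urlPackageName */, manifest);
}

// The attacker is genuinely authorized for their own package, yet publishes elsewhere:
handlePublish('attacker', 'attackers-own-package', { name: 'popular-lib', version: '9.9.9' });
```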
And at that point it will probably be considered stuffy and "enterprise" and the new hotness unburdened from such concerns will repeat the cycle again.
Which of the public package systems are the state of the art that should be replicated?
For example, look at Django: it provides more functionality than React (though they're not directly comparable). But installation is quick and there is a small number of packages from trusted authors.
The ecosystem is orthogonal to how good the package manager is.
I think it makes other package managers look like a toy.
PGP package signing is a huge plus. Is that a requirement for publishing?
How many different repos do you typically have to deal with in the average project?
Would Sonatype react quickly to malware issues like this in the repository? Have there been examples of similar package hijacking?
And the best part is that the signature handling is part of Java, not the package manager, so nothing needs to be re-invented. The default class loader checks the signatures at runtime as well.
Typically you need 1-2 repositories, but often just 1. But if you're an organization, you can set up your own repository very easily and use it to store private deps and to cache deps (which also allows you to lock binaries and work offline). Repo mirroring is super easy to set up. If you have an internal repo, you can just have your internal projects use your own repo and your computer never has to reach out directly to the Internet for a package.
Unlike other languages, the "central repo" and the package manager tooling are independent and package resolution is distributed. When you start a project, you choose your repos. I don't know how quickly Sonatype would react personally, but they are only the de facto default. Many packages are published on several repos and mirroring is a default feature of a lot of repo software. If Sonatype started screwing up, everyone could abandon them instantly, which forces them to be better.
I have had people tell me in discussions online, also entirely seriously, that running a package manager to install a dependency while developing is inherently dangerous and anyone who does it outside of a disposable sandboxed VM deserves everything they get. If the packages are inexplicably allowed to do arbitrary things with privileged access to the local system without warning at installation time then clearly the first part is correct, but victim-blaming hardly seems like a useful reaction to that danger.
Don't trust the package distribution system - use public key crypto.
Like that you can have anything trying to upload but fail the signature check.
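One common shape of this is detached signatures, generated by the author and checked by consumers against a key obtained out of band (filenames are placeholders):

```sh
# Author signs the tarball with their own private key before publishing...
gpg --armor --detach-sign package-1.2.3.tgz     # writes package-1.2.3.tgz.asc

# ...and consumers verify it, so a tarball tampered with anywhere in the
# distribution path fails the check no matter what the registry served.
gpg --verify package-1.2.3.tgz.asc package-1.2.3.tgz
```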
Now we have the probable root cause, buried in a wall of text. No CVE.
CVEs don't just mean "this is a big security problem".
affected: "the whole damn internet"
resolution:"rewrite the last 10 years of internet developmet from scratch"
not sure that's gonna happen
They don't even know if, when, or by whom this was exploited, but maybe I didn't pay close enough attention to the few paragraphs devoted to the real problem.
So shouldn't we assume all NPM packages published prior to the 2nd of November are compromised?
And if so, shouldn't this deserve a CVE? (https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exp...)
and yet a security incident where it was possible to publish any npm package without authentication is nine paragraphs down, and isn't alluded to at all in the page or section titles. I'm not sure that's entirely in the best spirit of transparency.
The result is that instead of a dependency tree consisting of a few packages or a few dozen, you end up with an unmanageable number like 1700 coming from who knows how many authors.
Nowadays tree shaking means that having a large package with lots of smaller functions should work better, especially since adding an import incurs its own overhead, but a lot of older packages are stuck on the small-packages model.
The theory about tiny libraries enabling tiny programs might work if developers were targeting microcontrollers, but developers have comparatively infinite cpu, memory, and disk, they've lost perspective, and the result is truly ABSURD. 10 big fat libraries are smaller than 2000 micro-libraries.
Chances are, only parts of it would actually be loaded into memory due to how operating systems work.
If I have a library that needs to handle a bunch of general cases, but I only need 1 or 2 of them - it's probably less code to just write out those cases myself.
As a trite example, look at the source code for `is-even`. It imports the is-odd package, and the is-odd package has a bunch of error checking (and imports a library "is-number" to check errors too!) before it returns `n % 2 === 1` to is-even just to be negated.
Now blow this insanity up to all your packages of various sizes and you have a tonne of useless code that nobody needs.
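For comparison, the whole is-even -> is-odd -> is-number chain collapses to a couple of lines:

```js
// Dependency-free equivalents of is-even / is-odd (integer check included).
const isEven = (n) => Number.isInteger(n) && n % 2 === 0;
const isOdd = (n) => Number.isInteger(n) && Math.abs(n % 2) === 1;
```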
But tree shaking doesn't occur at all in this scenario, since these npm modules are installed on the backend, and tree shaking simply doesn't happen on the backend.
The best answer I can provide is to nuke your entire software supply chain from orbit and start over with something new that doesn't require you to depend on potentially hundreds of arbitrary 3rd parties. Factor into this all of your tooling and infrastructure vendors as well.
The reason we like this path is that we now require virtually zero additional third party dependencies for building our B2B products. .NET 6 covers nearly 100% of the functions we require. On top of this we have 1-2 convenience NuGet packages like Dapper (Stack Overflow et al.), but everything else is System.* or Microsoft.*. The only other 3rd party items we consume into the codebase come from vendors of our customers as part of our integration stack - typically in the form of WCF contracts or other codegen items. Now, we do hedge for the inevitable Microsoft framework churn by not getting too deep into certain pools like AspNetCore. For example, we have rolled all of our own authentication, logging & tracing middleware, since this is the area they seem most hellbent on changing over time.
Certainly, Microsoft has, can, and will drop the ball, but they also have a very long track record available to build [dis]trust upon. For us, we went with the trust path. If our customers run us through the due diligence gauntlet (and they will - we're in the banking industry), we can produce a snarky ~1 item list of vendors that makes life much easier for everyone involved. No one has ever given us a hard time for doing business with Microsoft. Typically, everyone we work with is also to some degree. Is this bad? Maybe. I am ambivalent about the whole thing because of how much energy they have apparently put into the open source side of things. I have actually been able to contribute to their process on GitHub and watch it come out the other side into a final release I could use to correct a problem we were having.
While it seems like a lot of work, IMHO, the tradeoff of using an ecosystem with such a massive attack surface (NPM) is simply a pill we couldn't swallow. For those of us building systems that actually NEED to be secured, the "convenience" of using NPM isn't worth being kept up at night thinking about all the ways your app could be fubar'd.
1) How much JS/CSS are we talking about for heavily interactive pages? Do you not even use lightweight libraries like Knockout.js or Backbone.js?
2) Have you gone down the Blazor route yet?
3) What kind of system are y'all working on that requires this much security? I've worked for a very information-sensitive department of one of the top international banks where it was all sorts of npm galore.
We also built our own animation library to handle things like graceful entry and exit transitions (e.g. when an item is deleted, it asynchronously swipes or fades out of view). All of this was done in vanilla JS/CSS. The only external library we used was a sub-1000 line library for toasting messages in the UI. We heavily extended this library and, hopefully, improved upon the original design. We also wrote our own CSS utility library. It's bare bones but it is exactly what we need.
2) We considered Blazor but went with MVC instead. Better suited to our skill set and definitely a bit more optimized when compared to Blazor (at least it was when we started our project).
3) We are building an internal-facing financial management system. We are heavy handed with our security approach but we have the time and the budget to be. We are in a unique situation where we have a lot of time in which to complete our project, so we can be really careful about building what we need. Also, since our application is internal, we completely bypass common user issues like browser compatibility (everyone uses the same browser) and complicated server infrastructure (we have sub 200 employees). It's a pretty fun project tbh.
 - https://davidwalsh.name/fetch-timeout
Your post seems a bit strange given that Microsoft is the company that let any package owner publish any other NPM package.
Even though Microsoft is ultimately responsible for NPM on the org chart, I still don't mind the entire ordeal from my current perspective. No one ever said we had to use 100% of Microsoft's product offerings. The nuance is in selective adoption and careful negotiation of roadmaps.
I will signal my displeasure with Microsoft's acquisition of the NPM ecosystem by simply disregarding its existence and never electing the node workload at VS install time. That's all it really takes to entirely opt out for us. I don't hold any principled grudges against an organization larger than most municipalities. There are a lot of stakeholders involved here.
You've also become 100% dependent on a sole source.
> Microsoft has, can, and will drop the ball, but they also have a very long track record available to build [dis]trust upon.
We've had far more security issues with the .NET toolset than we have had with Python, which is far more open. Most of them have been developer mistakes, because the update process for .NET is far less intuitive than it is for Python. So my developers haven't always been on point with updates, getting caught out when our network team closed old TLS versions or similar.
But the biggest issues have been with libraries abandoned by Microsoft. Like when they wanted to move the world into Azure runbooks and thus no longer needed their library for Windows Server Orchestration runbooks. Or the half-finished libraries like everything involving on-prem AD.
By comparison we’ve had absolutely no issues with Python. So I think this is more of a NPM issue than anything.
Being able to directly inject C# services into the razor components is one of the most productive transitions we experienced. All of our JSON APIs got sent to the trashcan and we now have way more time to focus on more important stuff. Also, little things like subscribing to CLR events from components really make you feel empowered to move mountains. Some of our Blazor dashboards have absolutely incredible UX and it took almost nothing to make it happen because of how close you can get the backend to the HTML. You definitely have to change how you think about certain problems, but once you find a few patterns for handling edge cases (e.g. large file download/upload), you are set forever.
When I was reading this, I thought the time frame would likely have been on a time scale of hours (or even minutes),
> exposed between October 21 13:12:10Z UTC and October 29 15:51:00Z UTC
But it's actually more than one week. Do we consider one week as "brief"?
I'm guilty of this: my latest Nuxt project has 47,000 dependencies. yarn audit helps, but can I even trust that, since it is retroactive?
Otherwise I can't fathom how it's possible for a project to have 47000 dependencies. I mean, my main Linux machine has all kinds of old garbage installed and still the package manager only lists 2000 packages.
I generally only update these when I need a new feature or bug fix, which means I'm unlikely to get bitten by any temporary security compromise.
If the "particular checkout of vcpkg" type of approach is impossible with other package managers, that's unfortunate.
npm modules aren't the same as boost. Boost is written and scrutinized by some of the best C++ minds on the planet.
npm modules are written by anyone. they are all open source, but so many are in use that i doubt they get the scrutiny they deserve. at one point there was a package just to left-align things and a bug in it broke thousands of services.
but that's the landscape the modern web is built on, for better or for worse.
Can I come live with you? It's sad that I'm only half joking.
Still some people argue if this deserves its own CVE.
Though this does give them a shortcut. No need to bribe some aging, disenchanted nerd to sell their soul when you can just impersonate them.
Don't get me wrong, everyone has security flaws and zero days, but I feel like npm has been mismanaged since its inception.
I'm glad I'm out of the web dev game, I would dread having to rely on this walking security disaster.
Phishing, bad 2FA, and vulnerabilities of the central repo upload path itself all go away with this simple tactic used by all sane package managers.
Someone PRed this exact same effective strategy to NPM in 2013, and it was refused even as -optional-.
NPM team members have ignorantly maintained that hashing packages is good enough. They insist on being a central authority for all packages with no method to strongly authenticate authors and this negligence has repeatedly endangered millions.
Meanwhile Debian and other community Linux distros maintain, sign, and distribute hundreds of popular NodeJS packages themselves now because they realize it would be negligent to risk having NPM in their supply chain.
You then have to manually add the public key for a given package to your package.json so it can verify a tarball came from the author/source you expect.
This won't solve problems where the author is malicious, but it helps other cases.
I'm sorry the NPM ecosystem doesn't do this already? Good god!
If I had to guess, the registry operator probably either sees this as friction to onboarding, or if they do support signatures, they'd probably rather sign it themselves.
These are both stupid. The author should be responsible for signing, the registry should never see the key, and the registry should require 2FA to log in and set the public key for a package for users to discover.
Just over a year.
The preceding sentence says they have no idea if it was exploited before the start of their logfiles:
"This vulnerability existed in the npm registry beyond the timeframe for which we have telemetry to determine whether it has ever been exploited maliciously."
So packages uploaded after September 2020 are probably fine.
Before that: ¯\_(ツ)_/¯
If NPM/Github were being responsible here, they would make package owners re-upload clean copies of anything which hasn't been touched since before the start of their audit logs.
I’m surprised more isn’t being said about this part. Any stale dependency is now untrustworthy and they all need a version bump to prove provenance. This is potentially something GitHub could protect against server-side for everybody or build into NPM. They know if a version was published before this date and can stop people from using them.
Unfortunately, the number of weekly downloads wouldn't give much indication of how many people were affected, since some of the downloads will be by bots or eager CI systems, and some organisations cache packages locally after the first download.
If that's possible, it would be really good to then run it against a list of popular packages, like  or , and report back which packages are the highest priority for getting version bumps (or at least for having someone manually check that the code in the package matches the code in its repo, which we assume an attacker didn't have control over).
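A rough sketch of such a check against the public registry metadata (run as an ES module on Node 18+ for the global fetch, or swap in node-fetch; the package list here is an arbitrary stand-in):

```js
// Flag packages whose latest release predates npm's telemetry window.
const CUTOFF = new Date('2020-09-01');
const packages = ['lodash', 'express']; // replace with a real "most depended-upon" list

for (const name of packages) {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  const doc = await res.json();
  const latest = doc['dist-tags'].latest;
  const published = new Date(doc.time[latest]);
  if (published < CUTOFF) {
    console.log(`${name}@${latest} last published ${published.toISOString()} - pre-telemetry`);
  }
}
```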
An attacker would have to get very lucky, exploiting this bug just up until the point when the logs started (which they had no way to predict), and to target only packages which have either never been updated since, or which were followed by a minor/major package update (not a patch).
The attacker might be monitoring their logs, selectively silencing version clashes. Heck, it's even possible they now have backdoor access to do whatever they want to any package.
I know it's cynical thinking, but this vulnerability was unbelievable and the way they're handling is definitely not reassuring, from my personal standpoint.
Also no distracting internet or Google and you had only the man pages to work off.
I really don't like the culture of "download any old shit off the internet, ram it in a container and throw it into production". It keeps me awake. One day the whole thing will come crashing down and instantly spawn a costly magic enterprise solution which will cost a fortune just to mitigate that risk, but which doesn't actually mitigate it at all, just allows the box to be ticked on a compliance form.
1. npm doesn't record the repo, branch, or commit so it doesn't know what to compare with.
2. Published content is usually a transformation of the repo content - compiled, minified, bundled. You would have to run the same transformation on the repo source and it would have to be a deterministic build.
npm could require that popular packages are published via GitHub actions, then it could strongly associate the published version with a source commit and the build that produced the artifact.
This has some downsides like tie in to the GitHub ecosystem. Maybe that could be offset with sponsored builds?
Certainly not a complete solution but a good first step.
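Publishing from CI is already a well-trodden path; the new part in this proposal would be the registry recording which commit and workflow run produced the artifact. A minimal sketch of the CI side (versions and secret name are just conventional choices):

```yaml
name: publish
on:
  push:
    tags: ['v*']
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```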
There was an earlier project called "Trust But Verify" which tried to detect discrepancies between published NPM packages and the corresponding (inferred) tag/commit in the source repository, but sadly it doesn't seem to have gained traction.
Interestingly, the "Gold" standard of the Core Infrastructure Initiative (CII) Best Practices guide is to have a reproducible build, but it is only "SUGGESTED that projects identify each release within their version control system", so presumably packages are free to indicate which commit they are built from in an ad hoc out-of-band way, which may not be amenable to automated third-party checking.
GitHub really comes across looking like total garbage here with this blog post. Security issues shouldn’t be hidden like this. This is dishonest and irresponsible.
Similarly, if you are downloading npm packages that provide frontend-only code, that is only run in the context of the browser's sandbox, then you don't have to worry about arbitrary code execution (although a malicious frontend package could still exfiltrate user passwords, among other things).
The way dependencies move depending on when you run a yarn/npm install has never been useful. Both for projects initialising a lock, and projects upgrading from a previous locked position.
Unfortunately that includes the people who think their NIH wheel will be much better than the existing one. In fact, the Dunning-Kruger effect would suggest that people who don't know what they're doing are disproportionately likely to be in the latter group.
I have a similar story. At my old job, we had a web socket gateway that authenticated using JWTs, then hit an internal service to request REST resources. The issue was that it didn't actually validate the requested REST resource URL; a malicious user could authenticate as themselves but request a resource for any other account.
I found it as I was getting up to speed on the code base, having recently switched teams. Funnily enough, nobody on the team really understood the vulnerability - the EM marked it low priority and wanted the team to work on other things. I had to essentially go directly to the security team and convince them it was a sev 1. I sometimes wonder if it’s easier to just report security issues as an outsider through the bug bounty program; internal reports don’t seem to get taken as seriously.
Looking at a Rails project I've got open, it has ~100 gems in the Gemfile, and 200 gem dependencies overall.
Compare that to one of the Typescript + React projects I work on, which has ~200 dependencies in package.json and 2300 packages in node_modules.