2020 - The year of secure supply chain.

2021 - THE year of secure supply chain.

2022 - THE YEAR OF SECURE SUPPLY CHAIN.

Working in this general space, I can say that this is only going to get worse before it gets better. Many people are rallying behind a push toward better security practices, however.

The dash requirement is interesting, given that it acts as a sort of pseudo prefix/namespace. Good that this was caught now and looks not to have been exploited.




100% this. My CISO has been harping on this since 2018. I’ve been harping on this since 2019. Security vendors are JUST now starting to provide SCA offerings (software composition analysis, for those not in infosec).

Events like these don’t surprise anyone who’s been following this space. It’s been a rough ride that will get rougher. Consider:

1. NPM just recently started enforcing 2FA for some packages. NPM has yet to support mechanisms like package signing. Post-install scripts are still a thing.

2. Rubygems does not support package signing.

3. Go dependency management is a hot mess.

4. Clojure dependency management is still a hot mess.

Initiatives like Google’s SLSA will help here, but there’s a whole stack of tooling across multiple technical ecosystems that needs to mature, and fast.


I'm not in infosec (or any sec really, I'm a dev in a very immature organization), but I don't think it's a tooling problem so much as a people problem. Software has reached a crazy amount of complexity. Layers are built to compensate for the layers that came before them, followed by new layers to undo that very layer. The hardware we run our software on has never been so uniform, yet our software has never been more abstract. That abstraction is then implemented in millions of lines of dependencies that people cargo-cult into their projects.

The problem with go packages isn't the security of the package manager, but rather the fact that I need to download some arbitrary go code from github to make uuids. "Pull this random package" has become the new "copy paste from stackoverflow without reading the snippet".


Security is absolutely a people problem, but in general good security tends to make it hard to do dangerous things. I do not believe most development tooling makes it easy to be safe while also making it hard to do generally dangerous things—like downloading unsigned, unverified code from an unknown third party.


Point 2 (Rubygems does not support package signing) is not true.

Rubygems has supported package signing (`gem help cert`) since very early on, and it has an install flag `--trust-policy` which can be used to verify various things, including certs (https://github.com/rubygems/rubygems/blob/96e5cff3df491c4d94...).

The experience of using it, however, sucks on every level. No one can really use the `HighSecurity` policy level, because most gems aren’t signed. Most gems aren’t signed because there’s no clear benefit and it’s non-trivial to have shared certificates that can be used by multiple people authorized to release a particular gem. Most gems also aren’t signed because there’s nowhere that public gem certs are published (there used to be, with RubyForge), so you have to track down each cert you want to verify and download it separately.
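For reference, the moving parts look roughly like this (a sketch; the file, path, and gem names are illustrative):

    # one-time: generate a self-signed cert and private key
    gem cert --build you@example.com
    # in the gemspec, attach them:
    #   s.cert_chain  = ['certs/gem-public_cert.pem']
    #   s.signing_key = File.expand_path('~/.gem/gem-private_key.pem')
    gem build mygem.gemspec
    # a verifier has to manually trust your cert first, then pick a policy:
    gem cert --add gem-public_cert.pem
    gem install mygem -P HighSecurity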

I used to sign my gems, but then stopped.

Shopify has proposed a new RFC for signing gems based on sigstore. The RFC makes many of the same points I made above as reasons for changing mechanisms. https://github.com/Shopify/rfcs/blob/new-signing-mechanism/t...

I’ve just discovered this, so I haven’t really evaluated it, but I would prefer to sign the gems I publish.
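I haven't dug into the RFC's exact flow, but with current sigstore tooling a keyless signing round-trip looks roughly like this (cosign v2-era flags; the gem filename and identity are hypothetical):

    # sign: a short-lived cert bound to your OIDC identity replaces a long-lived key
    cosign sign-blob --bundle mygem-1.0.0.gem.bundle mygem-1.0.0.gem
    # verify: check the signature AND who was allowed to make it
    cosign verify-blob --bundle mygem-1.0.0.gem.bundle \
      --certificate-identity you@example.com \
      --certificate-oidc-issuer https://github.com/login/oauth \
      mygem-1.0.0.gem

The appeal is that there are no gem-signing keys to publish, share among co-maintainers, or lose.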


I could only find this PyCon 2012 talk (https://pyvideo.org/pycon-us-2012/advanced-security-topics.h...), but there was a session in 2010 that was perhaps more zeroed in on this topic. It was evident then that these problems affected every packaging and dependency ecosystem, but it was hard to get devs to take it seriously.

The pattern plays out with every new wave in our industry. You can even see this with the k8s ecosystem, where there is a ton of complexity and opportunity for attackers. Security is only taken seriously later, when it could have been designed in from the start. We love to mitigate things that we could have eliminated in the original design.


> NPM has yet to support mechanisms like package signing.

Yes, but actually no.

"To increase confidence in the npm public registry, we add our PGP signature to package metadata and publicize our public PGP key on Keybase."[0]

Unfortunately (and perhaps unsurprisingly), this means NPM is still a single point of failure, and this doesn't add any defence in depth. I guess what you want is signatures by the package maintainers themselves, but key management is hard for both signers and verifiers.

[0] https://docs.npmjs.com/about-pgp-signatures-for-packages-in-...
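For the curious, verifying one of these registry signatures is a manual affair; roughly like this (the package name/version is hypothetical, and the keybase flags are from memory, so double-check `keybase pgp help verify`):

    # the signature lives in the version metadata under dist.npm-signature
    curl -s https://registry.npmjs.org/some-pkg/1.2.3 \
      | jq -r '.dist["npm-signature"]' > sig.asc
    # IIRC the signed message is "<name>@<version>:<dist.integrity>"
    curl -s https://registry.npmjs.org/some-pkg/1.2.3 \
      | jq -r '"some-pkg@1.2.3:" + .dist.integrity' > msg
    keybase pgp verify --signed-by npmregistry -d sig.asc -i msg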


IIRC from when I examined this a couple years ago (it doesn’t seem to have changed), this is about NPM signing packages submitted to its registry. This can give confidence that the packages were not tampered with when one downloads them from NPM.

What this is not is support for package maintainers or CI/CD systems to sign artifacts they upload to NPM. Supporting this would give consumers of said packages the ability to detect if a package maintainer was compromised—the signature on a new version would either be invalid or different.


What are you seeing with Go dependency management?


I agree with most of your post, but I do not understand this:

> Post-install scripts are still a thing

I cannot find a situation where post-install scripts actually pose a security issue. You'll always end up running the code you just installed on your machine anyway (through running unit tests, or running the actual code, or whatever). Hence, the only scenario where post-install scripts are a problem is:

- You install deps on a machine that has access to security sensitive resources (think: your ssh keys)

- BUT you do not run the resulting code on that machine.

Are there really any use-cases that satisfy those requirements, where post-install scripts would actually result in a security issue?


NPM for compiled frontend JavaScript. The resulting code only gets run in your users' browsers, but the post-install script could be run on your Jenkins/whatever box.

In general, this applies to anyone who gathers dependencies ahead of time so they can just copy them over at deployment. Most people deploying to multiple machines/containers would be at greater risk from a post-install script.
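For concreteness, the mechanism is just an entry in package.json that npm runs automatically at install time (the package and script names here are made up):

    {
      "name": "innocuous-looking-lib",
      "version": "1.0.2",
      "scripts": {
        "postinstall": "node ./collect-env-and-phone-home.js"
      }
    }

That script fires on the CI box during `npm install`, long before any application code runs. `npm install --ignore-scripts` (or `npm ci --ignore-scripts`) skips lifecycle scripts entirely, at the cost of breaking packages that genuinely need them (e.g. native builds).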


While this makes sense, my experience is that you end up with just a handful of dependencies for your actual product (“dependencies” in NPM parlance), but a whole herd of them for your build toolchain (“dev dependencies”). This happens because front-end libraries often don’t have many, if any, dependencies themselves for fear of bloating the JS bundles using them, whereas the Node libraries used to build it don’t need to sweat that, and each comes with a whole tree of transitive dependencies. So the exposure your build server has to supply-chain attacks from code it runs directly is huge, much bigger than from what’s just passing through. And for those toolchain deps, the GP’s point stands.


Ah, yes, I had totally failed to consider that NPM is now used much more widely than just for nodejs server code, and can be used for browser deps. Thanks for this!


Deployments can also include an `npm install`, depending on how you package and run your code. But that is not the point. An attacker can run code as the current user, and can download and inject other code into the host or the project. That means even if you bundle your code into a Docker image and never run the install on the same machine, some code could have been injected. And I personally consider any attack against a production system or a development machine relevant.


Maintainer scripts lower the bar for the attack.

To get code in install scripts executed at a predictable time, you just add it there and you're done. To get code in the library itself executed on the build box, you have to add it somewhere that's actually exercised by the environment you want to attack. Scripts allow more coverage, more easily.

Plus, it's not unheard of for people to run npm as root, which means those scripts run as root too.


robabla, are you an NPM maintainer? They all seem to hold the same opinion, despite the exploits being enumerated for them over and over again.


No, I am not. I'm just an interested bystander. I don't even code much JavaScript anymore.


3. Go mod is excellent and actually protects against supply-chain issues; it's one of the few systems that do.

https://go.dev/blog/supply-chain
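The gist, as a quick sketch (the go.sum line is illustrative, not a real hash):

    # go.sum pins a cryptographic hash for every module version, e.g.:
    #   github.com/google/uuid v1.3.0 h1:Illustrative+Hash+Value=
    # builds fail if a downloaded module doesn't match its go.sum entry,
    # and first-time downloads are cross-checked against sum.golang.org
    go mod verify    # re-check the local module cache against the recorded hashes
    go mod vendor    # or vendor everything and review diffs in code review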


Check out some of the initiatives across the board:

https://github.com/ossf (lots of WGs and efforts such as package analysis, scorecards, etc.)

https://deps.dev/ (Implements OSSF scorecard)


In particular, check out the Securing Software Repos WG: https://github.com/ossf/wg-securing-software-repos

So far folks have turned up from RubyGems, PyPI, NPM, Maven Central, Gradle, Drupal, and I've probably forgotten someone.


> Security vendors are JUST now starting to provide SCA offerings

Sorry, but that just isn't true. Solutions have existed for years [1], but enterprises just weren't interested.

[1] For example Sonatype, which I know about from a former colleague (an ex-Sonatype employee). They have existed for 14 years.


Could you point me at the product from Sonatype you have in mind? I’d genuinely love to see it. I’ve seen their SAST offerings, but not much around SCA/SBOM/supply chain analysis.

It’s also worth pointing out that my original comment was probably less specific than it should have been—SCA isn’t new, but tools to help mitigate risks around the digital supply chain have until recently been fairly primitive, as far as I’ve personally seen.

Many vendors can tell me a package has a vuln. Very few can tell me which vulns can actually be exploited in the context of applications I secure. Fewer still can give me a cohesive view over time of what packages or maintainers start doing things out of the ordinary.


Hi there, Ax Sharma here from Sonatype - I've written extensively about our malware/hijacked-package findings almost every week now on the company blog. The automated malware-detection bots flag anything that looks suspicious on npm/PyPI and "quarantine" the packages pending manual analysis by researchers.

Nexus Firewall automatically blocks malware, malicious typosquats, hijacked packages, and dependency confusion attacks, with algorithms now being expanded to cover self-sabotage (in fact, I first broke news on dependency confusion along with researcher Alex Birsan on the company blog and BleepingComputer). As such, before the attacks even picked up steam, Sonatype already had a solution and had been blocking these for months - but a coordinated disclosure agreement for the PoC research delayed our public disclosure.
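(For anyone unfamiliar with dependency confusion: if an internal package name also exists on the public registry, a naive install can silently prefer the public, higher-versioned copy. On npm, scoping internal packages and pinning the scope to your private registry mitigates this; the scope and URL below are hypothetical:

    # .npmrc
    @acme:registry=https://npm.internal.acme.example/

Publishing internal packages only under a registry-pinned scope means a public lookalike never gets resolved.)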

Nexus IQ/Lifecycle is more for SBOM/vulnerabilities, including those without a CVE - e.g. reported via GitHub Issues and other sources. The vulnerability scanning looks for the exact occurrence of vulnerable code rather than just flagging any and all artifacts for a given component, which makes it quite precise imo.

For SCA, there's Sonatype Lift that connects to your GitHub repo for free so you can test drive it before moving on to other offerings.

Thanks, and I hope it helps.


Thanks for the reply Ax! I was familiar with Nexus Lifecycle, but Lift and Nexus Firewall are new to me. Will definitely check them out.

What are your thoughts on risks in this space? As a member of an org that thinks about these problems a lot, I’d love to hear about any novel attacks or mitigations you and your team have eyes on.


You're welcome! If you look at "A Timeline of SSC Attacks", compiled and periodically updated by us, the trend is getting worse rather than better. To be blunt, the increased volume of these unwanted packages (whether research PoCs or malicious) in recent weeks has been infuriating to keep up with, even for us.

Whereas previously typosquatting or malware published to OSS repos might have been the primary concern, the recurring incidents today have diversified in both type and quantity.

We now have to dedicate more time and resources to analyzing malicious packages that we'd have otherwise spent on hunting for zero-days or vulnerability research activities. These packages keep multiplying and it's become a whack-a-mole situation: every other day we report malware to the OSS repos, these get taken down, and the threat actor repeats the attack with slight variations a few days later. Additionally, copycat attacks follow, further increasing the number of malware incidents.

For example, other than typosquatting attacks, between last year and now we saw attackers hijacking legitimate libraries (ua-parser-js, coa, rc) or publishing tens of thousands of dependency confusion packages (A dependency confusion attempt against VMware VSphere SDK devs was just caught by us, along with 1000+ packages targeting Azure developers caught between March & April - our blog posts will explain it all. We've thus far flagged well over 65,000 suspicious packages including malware, dependency confusion attacks, typosquats, PoC tests, etc.).

In 2022, we are met with self-sabotage and protestware incidents that are on the rise: colors/faker, node-ipc, event-source-polyfill, styled-components, es5-ext, ... These have further complicated matters and pushed us to fine-tune our algorithms. We can no longer trust even the original developer of a library, as they are free to change their mind on a whim (they always were).

As pioneers of a proactive solution to dependency confusion attacks, and a company that's been consistently leading OSS malware discoveries every week, I'm obviously biased, but I'll say this: whatever solution you implement, make sure you have something in place to protect your dependencies, components, and supply chain against these novel attacks. Vulnerabilities like Log4Shell or Spring4Shell, as serious as they are, are just the tip of the iceberg when you look at the entire evolving OSS threat landscape.


> In 2022, we are met with self-sabotage and protestware incidents that are on the rise: colors/faker, node-ipc, event-source-polyfill, styled-components, es5-ext

You raise a very good point about self-sabotage and protestware. I'm reminded of the left-pad debacle from some years back that fell in the former category.

Protest via libraries is new and interesting, but I suspect it boils down to a matter of acceptable licensing, liability, and guarantees.

While they are well intentioned, I have some anxiety that protests through dependencies that operate in production environments could cool adoption of open source projects. The solve for this will likely be more attention paid to governance of open source components--which is something I already encourage.

What trends are you folks seeing with protestware? There is a real need to spread these messages, but I don't see many consumers of said projects being receptive to their platforms assisting without consent.


Very true, the left-pad incident from 2016 may have seemed like a one-off occurrence, but we've seen protestware revived this year.

1. colors/faker followed the Log4j debacle and was more about corporations using open source heavily but not giving back enough to support the developers, so the dev threw in the towel. Applications using 'colors' began freezing (entering a DoS condition) due to an infinite loop the developer introduced into the code.

2. But with node-ipc, the self-sabotage turned destructive, with the package actively deleting files on detecting a Russian/Belarusian host IP.

3. event-source-polyfill, styled-components, etc. have adopted a more "peaceful protest" approach, expressing the maintainer's views condemning the Russian war without engaging in outright destructive activity.

Thus far the trends have been about open source and the ongoing war.

But developers have discovered a new avenue for creative expression (open source) that no longer limits them to simply coding the intended application functionality. And so the questions that arise are: what will the next protest be about, and are we prepared for it?


How is Go still a hot mess? Unlike the other systems, you know exactly what went in by default, and it's easy to vendor.


Rubygems does support GPG signing but no one uses it.


It's actually x509 based, but otherwise very PGP-like in its trust dynamics.

But as you say, basically nobody uses it.



