And once again, calls to add optional signing support natively to NPM will be rejected, citing that it might intimidate drive-by devs who do not want to learn to set up a YubiKey or Nitrokey for artifact signing.
I have talked to the NPM team about this multiple times over the last several years and they literally believe no signing at all is better than some devs feeling pressured to sign.
You need no stronger evidence of the NPM team's negligence than the two times they refused to even accept community-contributed optional signing support, saying they would come up with something better than PGP. Still waiting 10 years later.

https://github.com/npm/npm/pull/4016

https://github.com/node-forward/discussions/issues/29#issuec...
Meanwhile PGP secures the supply chain of the Linux distros that power the whole internet, and Debian signs hundreds of npm packages used in their dependency graph, but it is still not good enough for NPM.
You can use the well-tested, Rust-written Sequoia (sq) now and never touch GnuPG. You can also self-certify your keys with Keyoxide. The past complaints are largely moot, and still people stick to their guns on this.

https://openpgp.dev/book/
This is the same NPM that made a change causing the `integrity` field to go silently missing from `package-lock.json` [0][1] when installing packages, with no warning then or at any point afterward.
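If you want to check your own lockfile for this, here's a quick sketch (assuming a lockfileVersion 2/3 `package-lock.json`, whose top-level `packages` map is where the `integrity` hashes live; the skip conditions are my guesses at the legitimate exceptions):

```ts
// Flag lockfile entries that resolve to a tarball but carry no integrity hash.
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

for (const [name, entry] of Object.entries<any>(lock.packages ?? {})) {
  // "" is the root project; linked/bundled deps legitimately lack integrity.
  if (name === "" || entry.link || entry.inBundle) continue;
  if (entry.resolved && !entry.integrity) {
    console.warn(`missing integrity: ${name} -> ${entry.resolved}`);
  }
}
```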
The Debian PGP system is very impressive. Looks like the maintainers actually met each other, verified each other's identities and created the fabled web of trust.
> When joining the Debian project, developers need to identify themselves by providing an OpenPGP key that is signed by at least two existing members of the project.
This isn't quite accurate. In fact, npm did ship a form of code signing called 'npm provenance' in April 2023. We wrote a semi-official deep dive on the feature in cooperation with the npm team that explains how to sign your npm packages [1].
You can see npm provenance in action on this npm package page [2] if you scroll to the very bottom and look under the "Provenance" heading.
I am sure virtually everyone understands "code signing" to mean what it has meant historically: the author of the code signs git commits, or a tarball, or a package in the Debian, Arch, or Guix sense. All of these typically share cryptography standards like PGP rather than rolling their own solutions.
Each maintainer has a signing key to identify themselves to the public without the need for any central infra, and signs the packages they publish. Someone who compromises some centralized server or account will not be able to impersonate the key held by that developer or the signatures they issue.
This new provenance system, and the Fulcio system it is based on, is a centralized setup where you use traditional, usually phishable, authentication with a SaaS, and then the SaaS takes your submission and signs it for you with a centrally managed keychain. Having done security auditing for many fintech signing systems, I can tell you I have almost never seen anyone get this right, particularly when there is no accountability.
Is this done in a secure enclave, with a public remote attestation of the software image running on it, so I can locally compile the image and verify a matching hash? Does that code enforce the participation of multiple distributed people for updates, key exports, or key imports, using Shamir's secret sharing or similar?

Or maybe it is just sitting on an Amazon box somewhere that a few people ssh to from their daily-driver MacBooks?
I don't -hate- centralized signing existing as an -option- if it is done very well and is highly accountable (which Fulcio is not, imo).

That said, -mandating- centralized signing on behalf of developers as the only path is really insulting, as though people who write software can't type a couple of commands to provision a PGP key on a smartcard and publish their key to Keyoxide, which is strictly better in every way from a threat-modeling perspective.
Speaking of Fulcio, it was meant to "invent" a solution for container signing, even though PGP multisig has existed from the start. No one used it because none of the major players in container software documented it, other than the Podman team.
Back to NodeJS: Debian and Arch already sign npm packages with PGP keys. It works fine. We need to let people actually do that with NPM. Tell me how many supply chain attacks have happened in Debian or Arch recently, compared to NPM?
PGP may be a small barrier to entry, but it is a standard with solid smartcard support and works in practice. It should be the default recommendation to all developers, and end users should be able to set policies to only install packages signed by a trusted set of maintainer or reviewer keys.
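To sketch what the client side of such a policy could look like, here's detached-signature verification with the openpgp.js library (file names are hypothetical, and a real installer would check every package in the tree against a pinned keyring of trusted maintainers, not one key):

```ts
// Verify a package tarball against a detached, armored PGP signature
// from a pinned maintainer key before allowing installation.
import { readFileSync } from "node:fs";
import * as openpgp from "openpgp";

async function verifyTarball(tarPath: string, sigPath: string, keyPath: string) {
  const verificationKeys = await openpgp.readKey({
    armoredKey: readFileSync(keyPath, "utf8"),
  });
  const signature = await openpgp.readSignature({
    armoredSignature: readFileSync(sigPath, "utf8"),
  });
  const message = await openpgp.createMessage({
    binary: readFileSync(tarPath),
  });
  const { signatures } = await openpgp.verify({ message, signature, verificationKeys });
  await signatures[0].verified; // rejects if the signature doesn't check out
  console.log(`${tarPath}: signed by trusted key ${verificationKeys.getKeyID().toHex()}`);
}

// hypothetical usage: a policy hook run before `npm install` unpacks anything
verifyTarball("pkg-1.2.3.tgz", "pkg-1.2.3.tgz.asc", "trusted-maintainer.asc")
  .catch((e) => { console.error(e); process.exit(1); });
```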
We've been building Socket [1] to detect and block this exact type of supply chain attack. Our Socket AI scanner [2] successfully detected this attack. It uses dozens of static signals combined with an LLM to detect novel attacks that evade traditional scanning tools.
This is what Socket AI produces when given @ledgerhq/connect-kit 1.1.7 to analyze:
> The obfuscated code block is highly suspicious and likely contains malicious behavior. The presence of obfuscation and the unclear purpose of the code raise significant red flags.
Feeling very proud of our team right now as this validates that our static analysis + LLM approach works well on novel malicious dependencies. If you're interested, we maintain a listing of malicious packages detected by this system [3].
Small plug: If you’d like real-time protection against attacks like this, you can install Socket for GitHub to automatically scan every PR in your repo. The free plan is incredibly generous. If you do decide to install it, it’s important that you enable the ‘AI Detected Security Risk’ alert type in your Security Policy to activate this protection.
Do you discuss anywhere what you use for static analysis? I skimmed through your blog but didn't see any details. Also -- did you detect and publish this BEFORE it became public knowledge? It's unclear.
We've built our own minimalist static analysis engine that only supports scanning for the specific supply chain threats we care about. For that reason, it's a lot simpler and faster than a generic engine.
I'll see if we can write up a bit about how it works in a future blog post.
Love Socket! A lot of folks (think most) were loading the compromised package through another package, @ledgerhq/connect-kit-loader [1], via a CDN call [2]. Would be great if Socket could pick up on this because Socket's @ledgerhq/connect-kit-loader page [3] doesn't include any warning.
Could you slide a gzip window over the source code and flag any relatively high entropy region(s) for human review? Would this maybe be more deterministic than an LLM?
How about a multi-stage system that uses the LLM to attempt analysis of the statistically-detected high entropy regions by way of an assortment of tools, such as b64 decode?
I like where you are headed with this. Just some thoughts I had.
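For what it's worth, the sliding-window version is only a few lines (a toy sketch using Node's zlib; the window size, step, and ratio threshold are numbers I made up):

```ts
// Compressibility as an entropy proxy: normal source code gzips well,
// while packed/encoded blobs don't, so a high compressed/raw ratio is a flag.
import { readFileSync } from "node:fs";
import { gzipSync } from "node:zlib";

const src = readFileSync(process.argv[2], "utf8");
const WINDOW = 1024;    // bytes per window (arbitrary)
const STEP = 512;       // slide step (arbitrary)
const THRESHOLD = 0.85; // flag windows compressing worse than this (arbitrary)

for (let i = 0; i + WINDOW <= src.length; i += STEP) {
  const chunk = src.slice(i, i + WINDOW);
  const ratio = gzipSync(chunk).length / Buffer.byteLength(chunk);
  if (ratio > THRESHOLD) {
    console.log(`offset ${i}: gzip ratio ${ratio.toFixed(2)}; high entropy, review`);
  }
}
```

The multi-stage idea maps nicely onto this: the flagged offsets become candidate regions you hand to decoders (base64, hex) and only then to the LLM.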
How did the exploit work? Obviously it looks really bad for Ledger to keep having these web security failures, but the entire point of a hardware wallet is to make it so that you don't have to rely on the security of the code on your computer.
If the hardware wasn't compromised (sounds like this was just JS), then there was no way for the exploit to take anyone's private key. It sounds to me like the exploit would work by getting you to sign a transaction that would transfer out the funds, without the attacker ever getting your key.
The only way this is possible is if users are signing transactions on their Ledger without looking at them.
And this is the place where the Ethereum community needs to look in the mirror. Blind signing is the default for using Ethereum with a Ledger. I'm not sure of the technical reasons behind this, but I do happen to know that much of the information that gets signed is in very convoluted formats (meta-transactions etc.). This is not the case everywhere. Other ecosystems, like Cosmos, present the information to be signed in a plain-text format that you can scroll through on the Ledger's screen before you sign it.
Ethereum needs to put some serious effort into making sure that anything that gets signed can be viewed in a human-readable format before signing. Until then, hardware wallets are security theater.
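To make the contrast concrete, here is roughly what the two payloads look like (a sketch with made-up values; the Cosmos sign doc is simplified):

```ts
// Ethereum: the device is asked to sign opaque ABI-encoded calldata.
// An ERC-20 transfer() is a 4-byte selector plus zero-padded arguments;
// nothing here is human-readable without a decoder on the device.
const ethCalldata =
  "0xa9059cbb" + // keccak selector for transfer(address,uint256)
  "658729879fca881d9526480b82ae00efc54b5c2d".padStart(64, "0") + // recipient
  (100n * 10n ** 18n).toString(16).padStart(64, "0"); // 100 tokens, 18 decimals

// Cosmos-style: the wallet signs a structured JSON document the device
// can render as scrollable plain text before the user approves it.
const cosmosSignDoc = {
  chain_id: "cosmoshub-4",
  msgs: [{
    type: "cosmos-sdk/MsgSend",
    value: {
      from_address: "cosmos1sender...",  // placeholder
      to_address: "cosmos1recipient...", // placeholder
      amount: [{ denom: "uatom", amount: "100000000" }],
    },
  }],
  memo: "",
};
```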
You can see a technical analysis here: https://twitter.com/Neodyme/status/1735337711555285261. This is a JS repo for app integrations for Ledger and really has nothing specifically to do with Ethereum itself or any hardware. There are several wallet solutions that make transactions easier and more secure for people using Eth, but Eth is a protocol running a network and doesn't concern itself with the app layer, and rightly so.
Yea my point is that Ethereum has created a complex system without paying enough attention to generating human-readable signing blobs. This is not something that a wallet can help with. The information displayed on the Ledger's screen needs to be human readable so that people know what they are signing. This is something that needs to be solved by the community creating transaction format specifications and the people writing the Ledger Ethereum app.
Ledger deserves a lot of criticism for insecure JS, but the whole point of a hardware wallet is not to have to worry about the JS you are running.
Web browsers support programs written in a language called JavaScript (JS). When you're on a website that provides interactivity beyond the basics of e.g. clicking links that go to other pages or buttons to submit forms, that's generally because there's one or more JS programs (scripts) on the page making it happen. (Actually, most websites have JS programs nowadays, even if they don't really "do" anything and only exist to let you click links and submit forms.) JavaScript doesn't need to be compiled; your browser can just run it. Up until about 15 years ago, most websites were just published in the clear. You could just read the code to see what they did. Gradually, with the introduction of heavyweight "libraries" and "frameworks" like jQuery, React, etc., web developers started adopting really complicated toolchains that, for one reason or another (and not all of them very good), would mangle their programs: the programmer would write scripts, run them through what amounted to a second-rate compiler, and then put the mangled code online instead of the code they actually wrote. Thus, the normalization of deviance began to set in—lots of programmers started doing this.*
Ledger has a browser extension. It lets websites integrate with Ledger Connect. Browsers require browser extensions to be written in JS, too. For one reason or another, none of them very good, programmers started using the same complex toolchains for browser extension development.
Around the same time, people stopped auditing the libraries they depended on. By pretty much the same principles as the bystander effect and the free-rider problem, they just sort of assume that someone else is doing their job (even though the programmer making the app knows that they themselves aren't, and that no one else they know is, either, it's still assumed that someone out there is).
With the delegation of responsibility and the normalization of deviance around mangled JS, this provided fertile ground for people to start exploiting the software "supply chain" for websites and browser extensions. (I.e. the unreviewed code that programmers copy from other programmers. Because most people who call themselves software engineers are usually joking when they say it but never explain to anyone that it's all just one big joke, lots of companies end up hiring them and putting them in charge of writing programs for the company, and the programmers just copy what everyone else does since they don't know what they're doing.) Someone gained access to the account for a developer's "NPM" package. (NPM is a website where a lot of programmers like to put their hobby projects, and it gives them stats to make them feel good, like how many other programmers downloaded it so they could use it to make their boss's boss more money.) One of these packages was called "connect-kit". Someone put a bad version of "connect-kit" online, the Ledger browser extension used it instead of the copy that the programmers should have reviewed and checked into version control in the latest release of the browser extension, and the mangled package contained hard-to-find code that would steal Ledger users' cryptocurrency. Since everyone programming is just joking around and most of the JS now in existence on production websites is mangled, it didn't raise any red flags that something fishy was going on, because bad code looks about the same as normal code nowadays.
* Usually, when asked, the programmers will argue that it was to make websites faster because the compiled programs would be smaller and consume less power, but empirically the adoption of these toolchains has actually resulted in larger programs that assume the same sort of hardware that the programmers themselves own in order to feel performant—and even then they're usually off.
Yet it doesn't seem to really answer the question.
I get that what we're looking at is a browser extension that relies on a bunch of webshit, some of which was malware.
As somebody not versed in "web3" specific webshits, I thought the point of a hardware token is that there was some kind of verification on the device itself. So this doesn't seem sufficient to "drain" a wallet - right?
My assumption would be that the computer running the malware never gets the key material directly; rather, it submits some request to the hardware token, which prompts the user with the details on some external physical display. The user reviews the details, then does something in meatspace that causes the hardware token to sign the request in question and pass it back to the software on the PC.
So isn't it the case that the user would have to approve the malware drain transaction themselves? And if not... what's the point of these devices, anyway?
Not sure if anyone actually read my original post. The problem is that Ethereum transactions are not especially human readable so they are commonly signed blind. As you point out, this is a problem.
So it wasn't the case that dynamically loading and executing a blob of unreviewed third-party code containing the offending section is what was responsible for those transactions being initiated. Oh wait, it was.
Exclusively focusing on the security failures arising from end-user UI/social engineering and ignoring the failures arising from poor engineering billed as modern software development best practices is another type of failure.
"The @ledgerhq/connect-kit-loader allows dApps to load Connect Kit at runtime from a CDN so that we can improve the logic and UI without users having to wait for wallet libraries and dApps updating package versions and releasing new builds.
This looks like an extremely dangerous approach now, if I understand it correctly, connect-kit-loader trusts whatever the CDN throws at your dApps. So when connect-kit is comprised, all downstream dApps are automatically exposed."
So it was intended to be used this way. Didn't work very well. Connect-kit-loader trusts whatever the CDN throws, CDN trusts whatever NPM throws and NPM trusts whatever GitHub throws.
Is there even an alternative? Once you can inject arbitrary code into a library that a web app loads and executes (except if it’s in an iFrame), it’s game over, no?
Just yesterday I watched a talk [0] at WarsawJS about LavaMoat [1], a set of tools to protect against malicious behaviour from npm dependencies. Guess it’s time to look into it deeper.
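On the narrower CDN point, there is one well-worn alternative: subresource integrity, where the loader pins the hash of the exact script it reviewed and the browser refuses to run anything else. A sketch (the URL and hash are placeholders):

```ts
// Load a CDN script only if its content matches a pinned SRI hash.
// If the CDN (or the package behind it) is compromised, the load fails
// instead of executing attacker code.
function loadPinned(src: string, sriHash: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const s = document.createElement("script");
    s.src = src;
    s.integrity = sriHash;       // e.g. "sha384-..."
    s.crossOrigin = "anonymous"; // SRI requires CORS for cross-origin scripts
    s.onload = () => resolve();
    s.onerror = () => reject(new Error(`integrity check or load failed: ${src}`));
    document.head.appendChild(s);
  });
}

// hypothetical usage:
loadPinned(
  "https://cdn.example.com/connect-kit/1.1.8/index.js",
  "sha384-AAAA...pinned-digest-here...",
);
```

The catch: pinning defeats the loader's stated goal of shipping UI changes without downstream releases, which is exactly the tradeoff that bit everyone here.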
Technically, it is just the frontends. You can always interact with the contracts directly and that can't ever be shut down (if you know what you're doing). Can you do that with your bank?
Let's also not forget that every other website on the planet that relies on npm also relies on the blind trust of unverified UI components. This isn't just siloed to crypto.
> Let's also not forget that every other website on the planet that relies on npm also relies on the blind trust of unverified UI components.
It feels like you think you're making a really good point here. In reality, it's just a run-of-the-mill appeal to popularity. The programmers doing this sort of thing on other websites are in the wrong, too.
If properly configured and audited, this approach can be secure. If GitHub is the only configured way to publish to NPM, and NPM pushes can only be initiated by signed commits from trusted accounts with MFA, the entire workflow can be secure on its own.
I don't really see the point for a project that doesn't seem to update their code all that often, though. The risk of misconfiguring something doesn't seem worth the effort saved by having someone with a 2FA key upload a tarball generated on their dev machine.
You are right, I should have waited for the postmortem. It appeared the likely way in because the secret was in the release pipeline env.

However, something doesn't add up. There is no chance that a malicious actor gained access and put together this exploit in a couple of hours. And I can't see someone putting together this exploit first, then trying to spear-phish in the hope of getting lucky and pressing the button.
There is an option to always require 2fa when publishing a package.
> To protect your packages, as a package publisher, you can require everyone who has write access to a package to have two-factor authentication (2FA) enabled. This will require that users provide 2FA credentials in addition to their login token when they publish the package.
> Require two-factor authentication and disallow tokens: With this option, a maintainer must have two-factor authentication enabled for their account, and they must publish interactively. Maintainers will be required to enter 2FA credentials when they perform the publish. Automation tokens and granular access tokens cannot be used to publish packages.
NPM optionally enforces 2FA. You can create an automation token to bypass it. In that case, depending on how branches are protected, a push to the right branch can publish a new package.
Heck, if they have automated deployment and use devs' personal GitHub handles, all it would take is forgetting to remove an ex-employee from the right GitHub access group, even if you took away all their other access when they left.
Actually worse than that: a former employee was phished for credentials, per Ledger themselves. The underlying cause is utter incompetence by the company. Fourth strike.
Pretty fascinating that the malicious code doesn’t seem to be obfuscated one bit. Even contains the word “drain” in multiple places. At least use innocuous looking variable names ffs.
Feels pretty apt honestly. This is about how secure I feel using modern technology. The only thing that makes me ok using a bank is knowing I can go ask a human being to chase my money down when it vanishes suddenly. There's even a non-zero chance they get it back to me!
Security is not absolute. Even cryptographic hardware can be vulnerable.
Yubico for example had to replace many of their YubiKeys after a vulnerability was detected in its secure element firmware which affected the strength of keys generated on the device. They sent me a replacement YubiKey after I contacted them.
Never. It is this way for cosmogonical reasons; it fulfils a purpose. I see Isaac Schlueter as the modern Genghis: "If you had not committed great sins, God would not have sent a punishment like me upon you."
This was a UI popup that got injected into the middleware provided by Ledger that is used to make it easy for apps to prompt Ledger users for a signature. The keys aren't compromised by this attack, this is more similar to a phishing attack, but via supply chain to increase fake legitimacy.
I only have a Ledger because my work required me to implement a crypto wallet on the website. I have 2 seed phrases written on the back of a book since 2017 and it has kept me well, no hacks so far.
Ledger has been hacked so many times now i've lost count.
I remember buying one in 2019, and shortly thereafter all customer data was dumped on the internet endangering everyone who bought one.
Then, after deep-diving the tech, I threw it in the trash; it seemed like a security-theatre product.

There have also been so many phishing attempts, fake Ledgers sold, and bricked ones losing funds; that ecosystem is a total shitshow if you check their subreddit going back in time.

The more you rely on third parties, and the more obfuscated your setup is, the more unsafe your data is. I just use isolated cheap laptops and encrypted USBs now.
> I just use isolated cheap laptops and encrypted USBs now.
I figure this isn't practical for most end users. Is there an alternative hardware wallet that you think is okay for most people? How do you feel about Trezor?
How? My setup is less secure but more auditable than hardware wallets: a dedicated hard disk running a portable Linux doing nothing else than crypto, and only for sending out funds. For receiving funds I use my normal operating system with view-only keys.
Thanks for the shout-out. Obviously I agree. Multi-factor wallets are more secure than single factor wallets, by default.
Having no seed phrase vulnerability (single point of failure) significantly reduces the surface area for attack vectors. Added layers of security (like the built-in web3 firewall) help protect against Web3 attack vectors.
You are missing something. Happy to jump into the details if you're interested.
3D FaceLock is one of the parts of the wallet recovery process: it's a biometric liveness verification (backed by a 600,000 USD bug bounty). But 1) it's only one of the factors, and 2) it's never been hacked/spoofed.
Zengo's MPC wallet uses a 2/2 signing mechanism (similar conceptually to a multi-sig). You initiate transactions from your Zengo app (inside the app is the Personal Secret Share, which interacts with your wallet's secure enclave/TEE during the signing process). The Remote Share on Zengo's server essentially co-signs the transactions.
By removing a single point of failure (private key or seed phrase) it is much more challenging for a hacker to steal/spend funds or take over a Zengo wallet - indeed... we have over 1,000,000 users (since 2018) and 0 wallets hacked, 0 wallets drained. More info here: www.zengo.com/security
For "most people", I wouldn't know, but for the typical HN reader, I would advise something open-source, verifiable, DIY, stateless and air-gapped, and that is the seedsigner:
To me, this is the perfect solution for a long-term savings account, complemented with a Lightning wallet for spending. The Coldcard and Jade wallet are also great options.
Regular smart cards also don't have screens, so it would mean totally blind signing. That's the problem which hardware wallets are solving, but sometimes the screen is just too small to show all the details of complex transactions.
Yubikeys are fine for basic sign-in/sign-out functionality, but even on a basic web app, your auth tokens are something else independent of your Yubikey signature.
Phishing attempts are irrelevant as long as users check TXes before they sign.
Fake ledgers are also irrelevant because the software does a check if the hardware is legit.
Bricked Ledgers losing funds is only a thing if a user didn't keep a backup of their seed phrase, which would make them lose funds regardless of what wallet they used.
I've found the most secure key management is to keep important keys offline, stored on paper, and only load them into a live Tails/Whonix system for brief uses. I even contributed a binary decoding feature to zbar to let me store them on printed QR codes and easily input them back in.
> bricked ones losing funds
Well of course. It's just a computer and all computers fail. Cheap laptops can also fail and destroy your keys. USB flash storage failure is even more likely. This is the number one argument for storing keys on paper which is actually known to last centuries.
That's the user's fault. The product makes it very clear you need to create a recovery sheet and store it in a safe deposit box or other secure place. If you actively ignore the instructions you deserve it.
Could you share some of your deep dive and tell us about what concerns you found? I use one of their wallets and I'd like to investigate more now as well.
A few months ago they also pushed a new feature (Ledger Recover) which, if enabled, literally exfiltrated your secret key, in shards, to external parties, requiring only 2 of them to reassemble the full key...
Same here. The most infuriating thing is how they downplayed the data breach, especially considering some of their customers live in dangerous countries.
I’ve switched to a Coldcard. Everything from purchase to the device operation seems to be highly focused on security and protections against tampering. No client software… it’s all sneakernet. Coinkite even deleted my customer data a few weeks after purchase without me having to request it.
I still have my Ledger. I think it is a nice device, but when I tried to repurpose it as a YubiKey of sorts (it has FIDO and GPG micro apps), it didn't actually work right. I never trusted Ledger Live, though.
LOL, I initially thought this was Ledger, the command-line personal finance management software, and was worried that it was actually something important.
Ledger Connect Kit genuine version 1.1.8 is being propagated now automatically. We recommend waiting 24 hours before using the Ledger Connect Kit again.
The investigation continues, here is the timeline of what we know about the exploit at this moment:
- This morning CET, a former Ledger Employee fell victim to a phishing attack that gained access to their NPMJS account.
- The attacker published a malicious version of the Ledger Connect Kit (affecting versions 1.1.5, 1.1.6, and 1.1.7). The malicious code used a rogue WalletConnect project to reroute funds to a hacker wallet.
- Ledger’s technology and security teams were alerted and a fix was deployed within 40 minutes of Ledger becoming aware. The malicious file was live for around 5 hours; however, we believe the window where funds were drained was limited to a period of less than two hours.
- Ledger coordinated with @WalletConnect, who quickly disabled the rogue project.
- The genuine and verified Ledger Connect Kit version 1.1.8 is now propagating and is safe to use.
- For builders who are developing and interacting with the Ledger Connect Kit code: the connect-kit development team's permissions on the NPM project are now read-only, and they can’t directly push the NPM package, for safety reasons.
- We have internally rotated the secrets to publish on Ledger’s GitHub.
- Developers, please check again that you’re using the latest version, 1.1.8.
- Ledger, along with @WalletConnect and our partners, have reported the bad actor’s wallet address. The address is now visible on @chainalysis. @Tether_to has frozen the bad actor’s USDT.
- We remind you to always Clear Sign with your Ledger. What you see on the Ledger screen is what you actually sign. If you still need to blind sign, use an additional Ledger mint wallet or parse your transaction manually.
- We are actively talking with customers whose funds might have been affected, and working proactively to help those individuals at this time.
- We are filing a complaint and working with law enforcement on the investigation to find the attacker.
- We’re studying the exploit in order to avoid further attacks. We believe the attacker’s address where the funds were drained is here: 0x658729879fca881d9526480b82ae00efc54b5c2d
Thank you to @WalletConnect, @Tether_to, @Chainalysis, @zachxbt, and the whole community that helped us and continues to help us identify and solve this attack.
Security will always prevail with the help of the whole ecosystem.
1) They are using some phishable auth (SMS? TOTP? password only?) to secure a super-high-value repo? For fuck's sake, they're a HARDWARE KEY VENDOR whose devices also support U2F/FIDO2 as an app.

2) A former employee has signing/push auth on a super-high-value repo?

3) A single person has signing/push auth on a super-high-value repo?
Tokens are fully programmable, so you can encode whatever logic you want in them, including freezing if you want that functionality. This is mainly done in dollar-backed stable coins.
The base level assets, like ETH and BTC, cannot be frozen like this, although centralized exchanges will often blacklist addresses (and the chain of custody) involved in major exploits.
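For the curious, the freeze really is just a branch in the token's transfer logic. A toy sketch of the concept (real stablecoins implement this as a Solidity contract on-chain; everything here is made up):

```ts
// Minimal model of an issuer-controlled freezable token (conceptually how
// a USDT-style blacklist works).
class FreezableToken {
  private balances = new Map<string, bigint>();
  private frozen = new Set<string>();

  constructor(private issuer: string) {}

  mint(to: string, amount: bigint) {
    this.balances.set(to, (this.balances.get(to) ?? 0n) + amount);
  }

  // Only the issuer can freeze: this is the centralization in question.
  freeze(addr: string, caller: string) {
    if (caller !== this.issuer) throw new Error("not authorized");
    this.frozen.add(addr);
  }

  transfer(from: string, to: string, amount: bigint) {
    if (this.frozen.has(from) || this.frozen.has(to)) {
      throw new Error("address frozen");
    }
    const bal = this.balances.get(from) ?? 0n;
    if (bal < amount) throw new Error("insufficient balance");
    this.balances.set(from, bal - amount);
    this.balances.set(to, (this.balances.get(to) ?? 0n) + amount);
  }
}
```

Base-layer ETH has no such branch in its transfer rules, which is why freezing there can only happen socially (exchanges blacklisting addresses), as the parent says.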
You can have gradations of control. USDT and USDC are centrally managed.
We used to have DAI, which was fully decentralized and over-collateralized by Ethereum's token (the native currency of the platform DAI is rooted on), but the founder mysteriously died as the DAO was taken over and made to begin collateralizing DAI against USDC and USDT, ironically.
It is a shame how far crypto has fallen culturally that this stablecoin business is some niche story. Most people are in it for the money, but many good people are not.
I don't think MakerDAO ever integrated USDT as collateral, but they did integrate USDC. It's unfortunate DAI is not fully decentralized, but the best fully decentralized stablecoin efforts (like RAI and LUSD) often suffer from a capital-efficiency problem.

I think it's fine to have a spectrum of centralized assets and decentralized assets represented as tokens. Blockchains are public, permissionless ledgers.
There is no possible way that USDT is backed one-to-one. It just isn't. If it were, it would have a simple audit trail that they would publish. They don't because it isn't. It's a scam that will at some point unravel, and everyone will lose their shirts because of "many good people" lol.
It's possibly backed greater than 1:1. Tether likely cleaned up their operations over the past few years, but the "Tether Truthers" are still anxious about fraud. Even more transparency is welcome of course.
> Cantor Fitzgerald CEO Howard Lutnick on CNBC:
> "I'm a big fan of this stablecoin called Tether...I hold their treasuries. So I keep their treasuries, and they have a lot of treasuries. They're over $90 billion now, so I'm a big fan of Tether."
That was the point, yes. The whole problem is people reinvented the entire centralized banking system on top of crypto. Stuff like USDT should not even exist, people were supposed to adopt crypto wholesale and only convert to fiat currency to pay taxes until the government caved and allowed paying taxes in crypto.
Everyone is using fiat-backed centralised stablecoins these days, like USDT and USDC. Not only do they have blacklists, but they can also burn your balance. Plus they are fully upgradeable, aka they can add/remove any functionality they want at any time :p
The blacklists need to exist as per regulations though.
> Wasn't like, >30% of the point of crypto to not allow people to do this sort of high-level/centralized freezing?
I mean, unlimited Tether can be created or destroyed at the whim of some guy with a big button somewhere. The promise of crypto being the embodiment of true distributed governance went out the window with USDT ages ago.
There are at least three types of vulnerabilities here:

1/ Handling of the custody of secrets by the company. The attackers first targeted and gained access to a former Ledger employee who still held official Ledger account secrets. This is where secrets were mismanaged, since actual company secrets should never remain in the hands of former employees.

2/ The attack could just as easily have hit a current employee, so they should employ ways to be protected against this kind of attack.

3/ The use of CDNs should have security measures in place. This is one of the most common attack vectors nowadays.
So Ledger was able to coordinate with a number of entities to minimize the impact of the attack? Isn’t that directly contrary to crypto’s decentralized design?
If one is to make crypto really decentralized, relying on a small number of authorities for security seems contrary and maybe poisonous to that goal.
Plug: we've been building Packj [1] to detect malicious Python/NPM/Ruby/Rust/Java/PHP packages. It carries out static/dynamic/metadata analysis to look for "suspicious" attributes such as spawning of a shell, invalid/expired email (i.e., no 2FA), use of files, network communication, use of decode+eval, mismatch of GitHub code vs packaged code, and several more.
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
Co-founder @ Phylum here (https://phylum.io). We've been actively scanning dependencies across most open source package registries (e.g., npm, PyPI, Crates.io, etc.) for a few years now. Quite successfully, I might add, with recent findings targeting financial institutions [1], North Korean state actors [2], and some of the first malware staging to be seen on Crates.io [3].
The fact that an attacker was able to pull this off against a _secure_ hardware device is shocking but not surprising. The mechanism by which they did it is interesting and fairly insidious. Unlike a lot of other attacks that publish the malware to the registry, this one pulls the payload from a CDN. So, static analysis of the loader (i.e., the intermediary package on npm) is unlikely to yield sufficiently interesting results. Solely focusing on the obfuscation angle is also not of particular use, since quite a few packages on npm are obfuscated (a surprising amount, actually: in Q3 2023 we saw over 5,000 _new_ packages shipped with some form of obfuscation).
Nonetheless, our automated platform pinged us this morning about some changes to this package and our research team has been digging into it to determine the impacts.
With that said, we've produced (and open sourced!) several tools that aim to help with software supply chain style attacks:
1. Birdcage is a cross-platform embeddable sandbox [4]
2. Our CLI is extensible and integrates Birdcage so you can do things like `phylum npm install...` or `phylum pip install...` and have the package installations be sandboxed [5]
We've also got a variety of integrations [6] along with a threat feed of software supply chain attacks (of which the Ledger package and other APT attacks have appeared).
Happy to answer any questions! A collective of us are active in Discord (https://discord.gg/Fe6pr5eW6p), continuing to hunt attacks like these. If that's something that interests you, we'd love to have you!