Hacker News
Fake VS Code Extension on NPM Spreads Multi-Stage Malware (mend.io)
186 points by tomabai 79 days ago | 98 comments



I’m a little confused, how does npm play into this?

The article describes a vscode extension on the vscode marketplace squatting the name of an existing extension. From how it's worded, it sounds like the extension directly contains the malware rather than being compromised through a dependency. What does it have to do with npm?


The connection appears to be simply that this is content marketing for Mend, which sells dependency vulnerability scanning software, so NPM is an important keyword for them to stuff in regardless of its relevance.


It doesn't directly. These are malicious VS Code extensions. It's completely Microsoft's fault for poorly managing the ecosystem. They must curate extensions with security audits prior to publication and sandbox them with advertised entitlements. Without these, it's running untrusted code from the internet, putting users at risk of ransomware, password and credit-card skimmers, data harvesting, and other malware.


The original extension has a private component on npm with a similar name to that package, and that's the squat the attacker tried to take advantage of.


Whilst it could just be the company's need to market their NPM scanner... the article does appear to be at least edited through AI, which could easily spit out the wrong target marketplace.


Maybe the vscode extension is just an npm package? I also couldn’t find the link to npm in the article.


What continues to amaze me is the lack of real-time detection "enterprise" products have on even n-day discoveries like this. Even five days past disclosure, we still have very limited IOC signaling:

1. 212.bat.exe; 1/61; https://www.virustotal.com/gui/file/2c76036ec0869f6b41bd8f7c...

2. haha.msi; 2/61; https://www.virustotal.com/gui/file/1b2d956e3eded3e7220e3ff6...

3. MLANG.dll; 15/61; https://www.virustotal.com/gui/file/a8e7f45d67b50948929adf35...

or if you focus on network/ips/perimeter detections:

4. web.winserve[.]ru; 1/94; https://www.virustotal.com/gui/domain/web.winserve.ru

5. scare[.]su; 3/94; https://www.virustotal.com/gui/domain/scare.su
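If you want to sweep your own environment for these network indicators, a minimal sketch: grep local DNS or proxy logs for the two C2 domains above. The log path `./dns.log` and its format are assumptions; point it at whatever your resolver or proxy actually writes.

```shell
# Flag any local log lines mentioning the C2 domains from this disclosure.
# ./dns.log is a placeholder path; substitute your real DNS/proxy log.
grep -qE 'web\.winserve\.ru|scare\.su' ./dns.log && echo "IOC hit" || echo "clean"
```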


The problem with the traditional antivirus signature model is that it's reactive rather than proactive. By the time a sample has been submitted and analyzed, whether by humans or automation, the damage has likely already been done on whatever clean machines it ran on. And that's only the fraction of malware that will ever be identified; a large but unknowable fraction goes unidentified, perhaps for all time. (When I was a Windows SA 20 years ago, I saw all sorts of customer machines infected by advanced persistent threats that evaded detection simply because they were novel and had no signatures.) What N vendors scan and deem "safe" today, or 10 years from now, could in fact be malware by behavior.

PSA: Never run untrusted code on important machines. This might mean forbidding the use of third-party extensions for common applications until they are audited, something Microsoft clearly isn't doing.


I run little snitch on Mac, but I don't have similar software for windows. Is there something folks would recommend or is the windows platform hostile to those sort of tools?


Simplewall, Windows Firewall Control, NetLimiter.


I use orbstack for lightweight containers (macos docker), and https://github.com/jrz/container-shell for each project or experiment. Lightweight chrooted environments using containers. Firewalls only protect the network stack.


I'm not running it currently but one I'm aware of is GlassWire

https://www.glasswire.com/premium-features/


I have used Netlimiter on Windows in the past. It seems to have comparable functionality to Little Snitch


Portmaster.


> is the windows platform hostile to those sort of tools

No need for hyperbole, just say you don't know.

The built in Windows Firewall does this. No need to pay for a 3rd party magic app.

Lavish praise on Little Snitch all you want, but Windows has quietly been able to do this out of the box for two decades.

25 years ago we used ZoneAlarm and a variety of other tools.


I asked if it was hostile, and asking is what you do when you don't know. I'm sorry that my wording upset you.

Windows firewall does not appear to have similar features. A vscode extension connecting to a host I run is okay, connecting to a random domain is not okay and I don't see anything at all in windows firewall to notify me about individual connections. Please advise me on where this functionality is if I'm just missing it.

And there's a lot about little snitch that I actively dislike, but its features are extremely useful. I'd love to have those on windows as well.

As others have linked me similar software, I will explore those.


Windows Firewall can allow or block domains.

It doesn't have "allow but notify," if that's what you're looking for.


Thanks, I'll look more into this.

With Little Snitch, I get a notification and select allow or deny. It works for IP addresses (v4 and v6) as well as domains. It's a powerful system, but if I can get something similar with domain allowlisting, that would be a worthwhile improvement.


I've never used little snitch but I'm betting Simplewall is what you probably want on Windows.



> The built in Windows Firewall does this. No need to pay for a 3rd party magic app.

I'm not a macOS user anymore, but when I was, Little Snitch did more than just block/allow all connections a program makes. You get a popup/window for each connection attempt, and can whitelist the process, domain, specific address, port and more.

Is this really how Windows Firewall works? Because I've used Windows for more than two decades, and I only remember a boolean "allow/disallow" based on the program itself, when it tries to make a connection, then you see nothing else unless you manually go and dig into the configuration/rules. Have I been missing out on something?


Windows Firewall Control, now owned by Malwarebytes, adds notification on connection attempt as a feature, while leaving windows firewall running intact.

I've never been fully satisfied with software firewalls, but WFC comes close.


Weird, why does a search show this is from https://www.binisoft.org/wfc? How is this associated with Malwarebytes other than the use of their name and logo? I would trust this a lot more if it were hosted by Malwarebytes and not a link on their forum https://forums.malwarebytes.com/topic/296798-malwarebytes-wi...


It is absolutely not how Windows works.


It absolutely is, if you take a moment to set it up. By default outgoing connections that don't match a rule are allowed. It's very easy to change the settings to disallow by default, and to set up rules based on "process, domain, specific address, port and more".

In Windows Defender Firewall settings, right-click Outbound Rules, then click New Rule. Choose the type of rule (Program, Port, Predefined, Custom). You can apply the rule to a program or set of programs, a service, or globally. You can apply it by protocol, port, IP, specific network interfaces, etc. The only thing I can't find that was mentioned in GP is rules based on domain/address; I'm not sure if this is a limitation of the firewall or I'm just too dumb to find it.
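For reference, the same default-deny setup can be scripted with netsh (run from an elevated prompt). These are real netsh verbs; the VS Code path is just an example, so adjust it to your install:

```shell
rem Block all outbound traffic that does not match an allow rule.
netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

rem Then allow specific programs explicitly (example path, adjust as needed).
netsh advfirewall firewall add rule name="Allow VS Code out" dir=out action=allow program="C:\Program Files\Microsoft VS Code\Code.exe" enable=yes
```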


You skipped the "you get a popup" part, which is an important and missing feature. Windows firewall only does popups for opening ports.


You'll get a popup to allow it, but it's on/off. But you can manually create rules for each .exe as well.


Windows Filtering Platform does it. Windows Firewall barely taps WFP's potential and definitely does not do the whole "ZoneAlarm" style allow/deny thing.


I think these types of articles should be prefaced with numbers, namely: how many downloads did the package have? How many confirmed installs were there? And so forth.

Given that language package managers are intentionally open to the public, "someone uploaded malware to NPM" is not itself an interesting story. What would be interesting is whether a particular typosquatting campaign was effective, given that most appear to be caught before download counts leave "background noise" levels.

Or as another framing: malware on an unrestricted index does not matter if nobody actually downloads it. What matters (and is interesting) is when the attacker manages to get nontrivial numbers of downloads to their package.
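The download numbers are at least queryable: npm exposes a public downloads endpoint at api.npmjs.org. The package name below is just an example, and parsing the JSON with python3 is an assumption about what's on hand:

```shell
# Fetch last-week downloads for a package (requires network):
#   curl -s https://api.npmjs.org/downloads/point/last-week/left-pad
# The response is JSON shaped like the canned example below; pull out the count:
echo '{"downloads":12345,"package":"left-pad"}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["downloads"])'
```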


I get your point and agree, but I think the technique used here was interesting.



Is it true that if you install a Cyrillic keyboard in Windows, it stops some of this malware from installing? The theory is that they don't want to hack a target in their own country and end up getting a visit from Spetsnaz or get suicided.



Also interesting how good Kaspersky is


I suspect it's a case of "don't bite the hand that feeds". Kaspersky endpoint installs are probably 99% within Russia.


What’s the best way to isolate VS Code+extensions? Do I have to fully run it in a VM? Use one of those third party flatpak builds (of unknown provenance) and disable networking via flatpak mechanisms?


I think containers are the way to go, maybe on top of a VM (defense in depth, the swiss-cheese model, is the only way to go imo). Something like Qubes can be great for VMs.

https://github.com/legobeat/l7-devenv/pull/153

This works for me (which I do run in VMs also, yes). A key thing is that some secrets, like the GH token and signing keys, are not available even to the IDE and code in the environment that require them. Like a poor man's HSM, made for dev, kinda. The LLM assistant also gets access to exactly what it needs. No more, no less.

You can have your cake and eat it too.

https://github.com/legobeat/l7-devenv


> I think containers is the way to go. Maybe on top of VM (defense in depth-swiss-cheese is the only way to go imo).

If you go for a VM, why involve containers at all? What additional security do you get from layering containers on top of VMs, compared to just straight up using a VM without containers?


VMs are great for coarse isolation ("dev box", "web surfing", etc). A typical qubesos workstation would have a handful.

In the setup I linked, separation is more fine-grained. Ephemeral container for each cargo/nodejs/python/go/gcc process. The IDE is in a separate container from its own language servers, and from the shell, which is separate from both the X server and the terminal window, the ssh agent, etc. Only relevant directories are shared. This runs my devenv with vscode fine on a 16GB RAM 8c machine.
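As a sketch of one such ephemeral container (assuming Docker and a `node:22-slim` image; the flags and image are illustrative, not the linked project's exact setup):

```shell
# Throwaway build container: --rm discards it afterwards, --network none
# cuts off outbound traffic, and only the project directory is mounted.
docker run --rm --network none \
  --user "$(id -u):$(id -g)" \
  -v "$PWD":/work -w /work \
  node:22-slim npm test
```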

You'd need like 1T RAM and over 9000 cores to have that run smoothly with real VMs ;)

Basically containers can give you far more domains (with better performance and weaker isolation) on the same host.

The other upside is that the entire containerized setup can be run as an unprivileged user, so an escape means they're still a nerfed local user. A typical VM escape would have a much shorter path to local root.


The theory is defense-in-depth. It's dubious whether it buys you much, but any malware now needs both a container escape and a VM escape.

In reality, targeted malware will have both, and a mass-spray attack like a simple VSCode extension will have neither. (Nigerian Prince theory: you don't want to deal with the security-conscious people in a mass attack.)


Considering vscode is itself malware: probably nothing, other than, yes, using a separate VM.

I am an avid vscode advocate, but it is incredibly invasive and security ignorant.


Run a container/vm for your project. Use VSCode only as an interface. Dev containers should do this, but I don't like the performance. I use my own docker-powered chroot container thingy https://github.com/jrz/container-shell


If you mean "connect to VS code inside that container remotely," then https://news.ycombinator.com/item?id=42979467 is worth your read


> Dev containers should do this, but I don't like the performance

What kind of dev container have you tried? What is the source of the perf overhead?


Some extensions are client side, so even a VM won't work for all of them.


It seems like with Deno, by setting granular permissions for only what's necessary, you might be able to block an attack like this. I'm just getting started with Deno, though, so I'm not sure, but it looks doable to me.


Node.js now has a stable permissions model, though it's very limited compared to Deno's (I don't see anything about blocking network requests). Node also says "This feature does not protect against malicious code"

https://nodejs.org/docs/latest-v22.x/api/permissions.html
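For comparison, the two invocation styles side by side. The file and host names here are hypothetical, and Node's flag is still marked experimental in some versions, so check the docs linked above:

```shell
# Deno: deny-by-default; grant only the exact net/fs access the tool needs.
deno run --allow-net=api.example.com --allow-read=./config.json app.ts

# Node: opt-in permission model; no network restriction is available, and the
# entry script's own directory must be readable (path syntax varies by version).
node --experimental-permission --allow-fs-read=. app.js
```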


NPM, why am I not surprised... and this broadly applies to the JS ecosystem.

When people delegate their brains to others, their own judgment naturally deteriorates and it makes them much easier to fool.


> When people delegate their brains to others, their own judgment naturally deteriorates and it makes them much easier to fool

A thought as old as thoughts about thoughts are, almost:

> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

The quote above is about books, from Plato's dialogue Phaedrus 14 (370-360 BCE). Do you by any chance feel the same about books as you feel about reusable JavaScript modules published on npm?


This Plato quote about books is too often used as something like: "someone smart was wrong about something once, hence everyone is wrong about every new thing forever".

Nothing is black nor white but npm brought its fair share of dumb shit: https://en.m.wikipedia.org/wiki/Npm_left-pad_incident


>Nothing is black nor white but npm brought its fair share of dumb shit

Sure. So too have books. In that instance, does the blame lie with the advent of books, or with the reader too naive to tell the difference?


> This plato's quote about books is too often used as something like: "someone smart was wrong about something once, hence everyone is wrong about every new thing forever"

I mean, it is fairly similar to what the parent actually wrote, so isn't it a relevant quote in context? You're not actually arguing one way or the other; are you saying that simply because you've seen the quote multiple times before, it doesn't apply? Or what are you trying to say?

How is the left-pad incident related to developers becoming easier to fool?


There should be a quote for that


Yes and: Technology (progress) always presents a tradeoff. It is never wholly positive. h/t Neil Postman's Technopoly.

On the balance, writing has been a net positive. Probably. Plato wasn't wrong; he could (or would) not anticipate the upsides.


Not the OP, but yes. Blindly reciting and believing the contents of books without any discernment also makes you a fool.

If you do not believe that, then might I interest you in uncritically imbibing the succulent nectar of wisdom flowing from the Flat Earth Society?


My layman’s understanding, based solely on the quote you cited, is that it criticizes books for not providing proper instruction — being just pupils, readers need a tutor. The only way this could relate to programming libraries being reused is if people didn’t even read the books back then, much like they don’t read the libraries' source code right now.

I’m by no means agreeing with the quote, nor am I against reusing programming libraries carelessly; I just don’t see how the two are related.


For variety, here's an example from the Go ecosystem: https://arstechnica.com/security/2025/02/backdoored-package-...


> When people delegate their brains to others

I dunno, Linux distros have a pretty good track record at the same problem, over multiple decades of evidence.

The difference is that they don't allow self-publication. Canonical and Red Hat et al. work downstream of an active community of developers cross-attesting good software. So their problem becomes "This software is known to be good, let's package it!". So to get malware into the machines of users it's not enough to fool the users, you need to fool the packagers too. And it happens, but very rarely (cf. the xz-utils mess from last year).

Node and similar repositories thought they could short-circuit that process, and as has been extensively documented, it doesn't work because users are too lazy to authenticate their own software.


How, um, is this different from PyPI or public repos in other languages... you could try to publish junk anywhere.


The lack of a batteries-included stdlib makes the JS ecosystem exceptionally vulnerable. PyPI is vulnerable to the same class of problems, but it’s an order of magnitude harder to execute a wide-reaching supply chain attack compared to NPM, since the dependency trees are far shorter on average.


With TensorFlow, and more recently local LLMs, running say Ollama pulls in a scary amount of precompiled binaries from god knows where.

Just mentioning these because they are trendy.

Regarding the general issue, it can happen in any language with package management.

It's not reasonable to just yell "JS sucks"; I've seen this in a bunch of places by now.


In node projects, having more dependencies is usually seen as an asset, not a liability.

Other than that, I don't think there's a difference. When I write node projects, I tend to minimize dependencies, but I've seen PR comments saying "you know you could just get a package to do that".


> In node projects, having more dependencies is usually seen as an asset, not a liability

Not in anywhere I've worked for the past ~decade...


Replies seem to be indicating my experience is the outlier. That's reassuring to me.


It was definitely the case prior, though, so you're not completely off base :)


This is an extremely weird thing to say. I don't know a single node dev who wants more dependencies. Anyone with a modicum of experience in the space knows the cost of bringing in more external code.


”Usually”?

Do you have some statistics on that, or do you just feel that way?


Just feel that way. That's my anecdotal observation. YMMV.


[Misread, mb - ty @Jarwain]


You may have mixed up who the commenter was replying to. They were specifically questioning "usually seen as an asset not a liability" bit


That’s really all there is in the comment. They’re unambiguously conflating “number of dependencies are higher” with some sort of statement about the value system of people that work with a certain language. It’s silly language tribalism.


> why am I not surprised

Because VSCode and npm are popular.

It’s not like Ruby gems are immune to this. They just aren’t as popular.


I found it even more unsurprising that the fake extension targeted some crypto thing.

> it has the same description as the original truffle extension: “Build, debug and deploy smart contracts on EVM-compatible blockchains.”


This is an ironic statement, right? Because you’re mindlessly parroting a copy of a copy of what was once, probably, an intelligent and appropriately nuanced take? And because you’re attributing this to the “JS ecosystem” when supply chain attacks are common in many different ecosystems?

LLM-ass comment.


Worst of all: the article never explains what npm has to do with it. Vscode extensions are not installed via npm, so at least part of the malware depends on the Vscode extension store.


JS/TS having code reusability isn't a problem. Other ecosystems don't have the same problems not because they have package repos just as good as npm but write everything from scratch out of virtue, but because they don't have package repos just as good as npm.


> because they don't have package repos just as good as npm

What? That simply is not true, unless you mean "good" as in "good at spreading malware". lol


I’d be curious to hear what you think that PyPI et al are doing that NPM should be copying. It sounds like you’re pretty knowledgeable in this area, to be comfortable making comments like this with such confidence.


"Easy to use" being more descriptive.

Which is a help for spreading both good code and malware.


I would respond, but you have not provided any supporting arguments.


Is this even related to npm? The extension was on the VS Code marketplace, I can't see any evidence that npm is involved at all besides that it's referenced in passing exactly once in the article:

> the need for caution when installing VS Code extensions, especially those obtained from public package registries like npm.

I'm not aware of any way to install a VS Code extension through NPM. The article honestly just reads like the author knew that Mend does a lot of business selling NPM dependency scanning and that they're therefore expected to stuff it as a keyword for SEO.


I definitely feel like NPM could be doing more to detect these. Right now it really feels like they're just hands off.


We discovered a fake vscode extension that serves multi-stage malware on npm, Inc.

The package uses javascript obfuscation to download the first stage of the malware, then it uses a heavily obfuscated batch file to continue into the second phase.

Lastly it leverages a preconfigured ScreenConnect remote desktop installer to communicate with the compromised machine.


I recently tried RubyMine IDE by jetbrains after having jumped on the VS Code bandwagon since its first beta release. It has been incredible. Instead of monkeying around for 3 hours once a month because some plugin broke, or some dependency updated and a vscode plugin wasn’t expecting this, I just code.

Back in the Sublime Text days, it was easy. I think I had forgotten how complex I had made VS Code. Turns out, paying someone else $120/year to deal with that complexity in an IDE is a helluva good deal at your average developer's hourly pay.


There's a nice new site called https://daily.dev, but they keep bugging me to install a browser extension. The idea a website needs access to somewhere I make financial transactions is horrifying.


If you haven't tried it yet, both Firefox and Chrome offer different profiles[1][2], which come with separate cookie stores and different installed extensions. The Firefox ones have a "this one goes to 11" level of stupid UX (it's about:profiles, then click on Launch in New Profile), but to make up for it they offer Containers[3], which regrettably share the same extensions as the host browser but awesomely hard-sequester cookies and localStorage from other containers, and from the non-container default. Many people use it for all things QA since it's painless to log in to a website (AWS, Azure, etc.) using totally different credentials.

1: https://support.mozilla.org/en-US/kb/profile-manager-create-...

2: https://support.google.com/chrome/answer/2364824

3: https://addons.mozilla.org/en-US/firefox/addon/multi-account...
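From the command line, the profile flow looks like this (these are real Firefox flags; the profile name is arbitrary):

```shell
# Create a dedicated profile once, then launch it alongside your main one.
firefox -CreateProfile banking
firefox --no-remote -P banking
```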


For what it's worth:

The DailyDev plugin only has access to "https://*.daily.dev/" and "https://*.dailynow.co/". It is also not required, you can just go to app.daily.dev yourself (I did not want to install the addon on my main PC)


Couldn't be me.

PSA: reduce your installs of things from the internet.


Everybody hates on Linux distributions. But this sort of stuff happens weekly on PyPI and npm, and almost happened once in Debian.


But on the other hand, getting a library into Debian so users can eventually install it is also a somewhat big and lengthy process (and rightly so), compared to npm et al., where it amounts to "npm publish" and you're basically done.

Don't get me wrong, I'm not saying one is better/worse than the other, but there are tradeoffs that not everyone is willing to make. I personally prefer the slower more intentional/reviewed option of package repositories like debian and arch, but things like npm/pypi/aur has their uses too.


>getting a library into debian is also a somewhat big and lengthy, compared to npm et al which amounts to "npm publish" and you're done basically.

Which is a good thing. It's not like npm skiddies use this agile process to revolutionize the industry with AGI, they do left pad and a different framework every week.


Except how "reviewed" is it? You maintain a package for years to gain trust, and once you become trusted, you introduce a backdoor that most people won't know about.


That takes years of effort and if you get found out you get banned immediately. It's not a very common level of commitment for bad actors it seems.


There are different types of bad actors: some are ready to invest heavily, meaning time and money; some are there only to make a quick buck.


Ok. Can you point me to one example that was in for a while before being caught?


Yeah sure washing your hands kills 99.99% of bacteria, but not 100%.

Why obsess over that 0.01% when surrounded by dark age skiddies who haven't discovered germ theory yet, focus the message: "wash your hands!"


Which is why a lot of people (even well-resourced companies e.g. Google) set up their own apt repo, and tell people to add that.


If you reduce your dependencies, you start appreciating the OS ecosystem, they don't always have the latest versions or all the newest packages, but it's stable and lacks vulns.

All of this is proper to a foundation layer. So as a dev you find that the first dependencies to go are at the app layer, and all that is left are OS dependencies.

So



