35-year-old vulnerability discovered in scp (sintonen.fi)
601 points by runn1ng 39 days ago | 150 comments

It's interesting that PSCP has this option:

    -unsafe   allow server-side wildcards (DANGEROUS)
and as explained in the doc (https://www.ssh.com/ssh/putty/putty-manuals/0.68/Chapter5.ht...),

"This is due to a fundamental insecurity in the old-style SCP protocol: the client sends the wildcard string (*.c) to the server, and the server sends back a sequence of file names that match the wildcard pattern. However, there is nothing to stop the server sending back a different pattern and writing over one of your other files"

I haven't used the Linux version much so I assumed it had the same option. PSCP has had this option for as long as I can remember. I guess no one bothered to look at scp, or as one of the other comments here notes, scp is overwhelmingly used with a server one already trusts.
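For illustration, here's a minimal sketch in Python of the kind of client-side check that would close the wildcard hole. It uses a deliberately simplified form of the scp "C" control record ("Cmmmm <size> <name>"), not OpenSSH's actual parser: only accept file records whose names match the glob the client sent, with no path components.

```python
import fnmatch

def parse_c_record(line: str):
    """Split a simplified scp control record, 'Cmmmm <size> <name>',
    into (mode, size, name)."""
    mode, size, name = line[1:].split(" ", 2)
    return int(mode, 8), int(size), name

def is_reply_safe(requested_glob: str, line: str) -> bool:
    """Accept a server-sent file record only if the name contains no
    path tricks and actually matches the glob the client asked for."""
    if not line.startswith("C"):
        return False
    _mode, _size, name = parse_c_record(line)
    if "/" in name or name in (".", ".."):
        return False  # nothing outside the target directory
    return fnmatch.fnmatch(name, requested_glob)
```

A server answering a request for `*.c` with `C0644 1024 .bashrc` is refused, while `C0644 1024 main.c` passes.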

> I guess no one bothered to look at scp, or as one of the other comments here notes, scp is overwhelmingly used with a server one already trusts.

It's the latter for me. It's not just wildcards. You can use any server-side shell code you want to specify the files, as if you were writing a command argument[1]. I, at least, find this tremendously useful.

[1] https://news.ycombinator.com/item?id=18912188

I strongly discourage anyone from using PuTTY, not for this reason, but for its weird and nonstandard handling of SSH keys.

The last time I tried to help someone get it set up on a Windows PC, totally normal ssh2 rsa 2048 and 4096 bit public/private key pairs created with openssh had to be converted into some other weird format before they could get public/private key auth working.

Why the developers of putty felt they needed to deviate from standard ssh2-rsa pub/privkey formats is a mystery to me.

PuTTY's changelog from 2000 shows that PuTTYgen uses "the same RSA key file format as SSH 1" (the proprietary one, that predates OpenSSH).

Reading and writing OpenSSH-style keys came later (2012).


It's quite easy. There are multiple tutorials on it. When you load up a key into putty the plain old key you're used to is sitting right there in a box in the app. Just copy.

I think I'm missing something. Step one is getting you to use a compromised server. Which either means the server your home directory is on is compromised (I’m assuming you have a login on the box, separate from your local home dir), in which case scp is the least of your worries, or they get you to bounce through a MITM server, in which case you have to accept an incorrect host key.

If you're accepting incorrect host keys, then you're completely breaking the ssh security model anyway.

I guess I don't understand how this is actually any worse than how ssh already worked?

Edit: To clarify, I understand why you’d want to fix this, for a good depth of defense. I’m just saying I don’t feel any urgency in fixing this.

What you're missing is very simple: downloading files from a server need not imply that you trust the server even slightly. That's true whether you're downloading them via HTTPS (e.g. to view this web page) or via SCP.

Many of us have jobs in which we typically only use SSH to connect to servers that we trust and control. But that assumption is no more baked into the security model of SSH than it is into the security model of HTTPS. The argument that "you trusted this server enough to connect to it and download a file, therefore you clearly should trust it enough to permit it to execute arbitrary executables on your machine" is false in both cases.

> The argument that "you trusted this server enough to connect to it and download a file, therefore you clearly should trust it enough to permit it to execute arbitrary executables on your machine" is false in both cases.

Great point. Also, imagine that you control the server and you know that it was compromised. Surely you want to be able to download logs and other files from it for inspection without having your own machine compromised.

> Surely you want to be able to download logs

Uhm, nope: you want to shut it down and perform forensics on the disk. Once it's rooted, I wouldn't touch it with a barge pole.

I've been a web developer for over 5 years and never once worked somewhere where I'd have any imaginable way of physically accessing the disk of a server. Everything's been cloud-based. I don't know the exact ratios, but I'd expect my experience not to be unusual.

You should still access that data in an offline manner, though—ideally:

1. Shut down the instance

2. Connect the storage as secondary storage on another (disconnected from the network) instance

3. Do forensics using your cloud provider's out-of-band management interface

4. Throw away that instance as soon as you're done

You can't physically access the disk, but you often can download a snapshot or disk image, which is created at the hypervisor level.

Every cloud provider out there would help you get an image of the disk if you told them it was due to a security breach.

A lot of what we do implies trust.

We trust the browser not to have bugs, the TLS protocol to remain secure against attack, every CA to not grant MITM power to a state actor, the TCP/IP stack not to be remotely exploitable. Much of the world downloads unsigned code and executes it locally, whether it be a Go client, an interpreted library code dependency, or "curl | bash". Windows has warnings before you run unsigned code, but most people run it anyway.

We trust a lot of things, and maybe we shouldn't. The important thing, to me, is how informed I am about the trust I'm giving, and to whom, and what risks I'm willing to take.

I use scp infrequently and on machines that I control, so that's a level of risk I'm comfortable with. But if the bug had been in curl, my blood pressure might be slightly higher.

> therefore you clearly should trust it enough to permit it to execute arbitrary executables on your machine" is false

Unless you run your browser without an ad blocker

Yeah, I thought that sort of quip might come along. ;)

On the one hand, you have a real point. On the other, JavaScript running in my browser has significantly less power to do anything bad to me than arbitrary executable programs running directly in my OS do.

> JavaScript running in my browser has significantly less power to do anything bad to me than arbitrary executable programs running directly in my OS do.

I think that depends on what you mean by "significantly less power". It's entirely possible to use Javascript to place malware (such as installing a binary executable) and do other assorted nastiness on a target machine. If the target machine is properly secured, it's still possible, it just requires more effort.

This is the primary reason why I do not allow Javascript to run on my machines by default. If I'm at a site that I trust and I have no alternative to using, and the JS in question is essential, then I'll allow just the specific piece of JS to run. Otherwise, it's not happening.

> It's entirely possible to use Javascript to place malware (such as installing a binary executable) and do other assorted nastiness on a target

This sounds interesting. Can you kindly link an example for this technique?

But if you want to go that far, it's entirely possible with Javascript disabled. Not every browser bug is a JS engine bug.

Your best bet (not guaranteed) is reading the html as plain text, not rendering any images, etc.

These days, so much happens in browsers (banking, medical records, social communications) that it's hard for me to imagine a more dangerous place to give up execution. What sort of things are you thinking of that scare you more on the OS?

> The argument that "you trusted this server enough to connect to it and download a file, therefore you clearly should trust it enough to permit it to execute arbitrary executables on your machine" is false in both cases.

Nicely put.

This feels like yet another variant of the same confusion we see around web browsing, with people saying "oh, if you don't want a virus don't go to sketchy sites". Agreeing to receive HTML or even run unfamiliar Javascript in a browser shouldn't be equated with trusting that website to run anything outside the sandbox, and in a world where ads on major sites are frequently resold through a half-dozen shoddy networks it's an important distinction.

In my honest opinion, delegating the trust model down to the user to figure out which host key is trusted or not is the broken model. I always blindly accept the key, because I have no way of knowing what is good or bad by looking at a prompt of random letters telling me it could be bad. In my life, 100% of those alerts have been false positives, and I got used to accepting them.

> I always blindly accept the key

Even from a host you use all the time and where you know the key hasn't changed?

"Frack knows what all dem Gilfoyles do all the time. I have shit to do." <accepts>

Could someone elaborate on how to find out the correct key of a host that was recently setup, like a new Digitalocean instance? Is there even a method?

You can provide a pregenerated key to the instance during setup, e.g. using cloud-init. Or supply it with a way to report its key once it is generated.

Or use out-of-band access, e.g. a host-provided console, to connect once and extract the key.
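Once you've extracted the public key out of band (say, the base64 blob from /etc/ssh/ssh_host_ed25519_key.pub read over the provider console), you can reproduce the fingerprint the ssh client will show on first connect. A small Python sketch — the SHA256 format is, to my knowledge, the unpadded base64 of the SHA-256 of the raw key blob:

```python
import base64, hashlib

def ssh_fingerprint(pubkey_b64: str) -> str:
    """Reproduce the fingerprint OpenSSH prints at the host-key
    prompt: 'SHA256:' plus the unpadded base64 of the SHA-256 digest
    of the raw key blob (the base64 field of the .pub file)."""
    blob = base64.b64decode(pubkey_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```

If this matches the fingerprint in the first-connect prompt, you're talking to the box you extracted the key from.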

Ok, but shouldn't ssh require the user to type in the entire key then, rather than to type yes/no?

Why? How does that relate to my answer? It'd make it harder to just accept a changed key instead of leaving the decision to the user, and would make use cases where the user doesn't want/need to authenticate the server harder (one can probably debate for each scenario whether the decision not to authenticate was "right"). Although I find it a bit weird there's not even an option to pass a host key to ssh directly instead of editing the known_hosts file.

Yes, the SSH flow is completely broken and, therefore, in reality only really protects against passive snooping. I have never met a real human, in real life, who ever checked the host key on first connect.

For what it’s worth: The way to solve this is require the host key fingerprint as an argument to the ssh command, along with host and user. This would force people to look it up.

This is not a popular opinion, but I think it’s inescapable: SSH was not made for humans.

(This is purely a client issue, by the way. But at the end of the day: does that matter for anyone?)

(Edit: when this comes up people often mention key based auth. Unfortunately, that relies on nobody else having your public key. You can find mine anywhere; it used to be available through the GitHub API, even. It won't let the attacker connect to the original server but they can definitely impersonate it. Perhaps get your key agent forwarded, and continue. Not great.)

No, the way to solve this is to use client public key authentication. Then a successful MITM will require the client's private key too.

No, it will only require the public key (which is public) to impersonate the server. It won’t be a MITM but it’s still bad; see e.g. the bug in this very article, or if (god forbid) you’ve set up agent forwarding. Or if you do e.g. a git pull from github on a fresh comp, it can serve you bad code.

Asymmetric key is not a solution to the problem of users blindly trusting hosts without checking the fingerprint. It’s a UI problem.

Every time I bring this up it gets shot down (e.g. now -3 votes), and every time I wait for someone to put forward an actual counter argument. Unfortunately, no luck.

SSH is broken and it needs fixing.

I'm not sure you know how public/private keys work.

This reply is tragicomically ironic :(

See it like this:

The bug is in the verification of the host, not the user. Eve can intercept a connection from Alice to Bob. It can "pretend" to be Bob, but it can't MITM (aka "proxy") because it won't be able to auth with Bob. However, auth depends on the server encrypting a message with Alice's public key, which everyone has ( https://github.com/torvalds.keys !), including Eve, so Alice will connect to Eve. Really this is nothing different from just connecting to a fresh host: you upload your public key to the remote ~/.ssh/authorized_keys somehow, and voila, right? How does your ssh client know that was the same server? The fingerprint. Did you check it? No.

Check out this article:

> The more well-discussed use of asymmetrical encryption with SSH comes from SSH key-based authentication. SSH key pairs can be used to authenticate a client to a server. The client creates a key pair and then uploads the public key to any remote server it wishes to access. This is placed in a file called authorized_keys within the ~/.ssh directory in the user account's home directory on the remote server.

> After the symmetrical encryption is established to secure communications between the server and client, the client must authenticate to be allowed access. The server can use the public key in this file to encrypt a challenge message to the client. If the client can prove that it was able to decrypt this message, it has demonstrated that it owns the associated private key. The server then can set up the environment for the client.

- https://www.digitalocean.com/community/tutorials/understandi...

As you can see, ssh public key auth authenticates the user, not the host. Anyone could be that host.

Now: you as a human will quickly notice "where is my homedir? wait a second... this is not Bob—This is Eve!" the question is: quickly enough? If you were using scp: no, look at TFA. If you were connecting to github for the first time using git, perhaps using homebrew on a fresh machine: now you'll definitely never know. If you had set up your ssh key agent to forward keys to your "trusted" Bob host: serious trouble. There are loads of reasons why intercepting an SSH connection is a dramatic bug, even without being able to MITM it.

Or just try and turn it around: why do you think SSH prompts you to confirm the host key fingerprint, if it doesn't matter? For fun? They didn't add that as a joke.

Alas, few people understand this, as you so elegantly demonstrated :/

Your condescension is noted.

The host key fingerprint does matter. That's the host's public key. The host also has a private key. Without that private key, Bob can't pretend to be Eve.

You are correct that when connecting to Bob, if his public key doesn't match Eve's expected key, it will prompt the user to cancel the connection.

You argue: "It will only require the public key (which is public) to impersonate the server." The key piece here you are missing is that if Bob steals Eve's public key, he won't have a matching private key, and authentication will fail.

If you don't think the private keys are important, go edit them on your server, and see how SSH'ing in goes for you.

Eve is the bad guy, Bob is the intended party. Bob is not stealing keys, Eve is. Well, not "stealing", just obtaining public keys.

The attack is on Alice, who thinks she's connecting to Bob, but is actually connecting to Eve. This is on initial connect, and assuming Alice doesn't check the host fingerprint (which nobody does; that's the crux of my point).

This nomenclature is the convention when discussing cryptographic attacks: https://en.wikipedia.org/wiki/Alice_and_Bob

I'm going to stop discussing this in this thread, but feel free to contact me (info in profile). I promise to engage, in good faith, and if you do end up convincing me, I'll post an update here.

Best wishes.

Okay, swap Bob and Eve then.

Okay. "The attack is on Alice, who thinks she's connecting to Eve, but is actually connecting to Bob. Assuming Alice doesn't check the host fingerprint (which nobody does), the attack succeeds."

You're fundamentally misunderstanding the argument about which key is being copied. It's Alice's public key that's copied. The fake server generates its own public and private keys and nobody notices that it doesn't match the real server. The user won't be "prompted to cancel" when this is their first time connecting.

known_hosts, as typically used, prevents an imposter from showing up later. It does absolutely nothing to prevent an imposter that's there from the start.
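To make the trust-on-first-use gap concrete, here's a toy sketch of the client-side logic (Python; the real client pins full keys in known_hosts, not just fingerprints):

```python
def check_host_key(known_hosts: dict, host: str, presented_fp: str,
                   accept_new: bool = False) -> bool:
    """Trust-on-first-use: a changed key is fatal, but a *first*
    connection pins whatever key the server presents. Nothing is
    verified at that moment -- which is exactly the window an
    imposter-from-the-start exploits."""
    pinned = known_hosts.get(host)
    if pinned is None:
        if accept_new:                      # the user typed "yes"
            known_hosts[host] = presented_fp
            return True
        return False
    return pinned == presented_fp           # later mismatches are caught
```

An imposter present on first connect gets pinned; only a key that changes afterwards is rejected.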

You can use SSHFP records in DNS. Clients like OpenSSH can be configured to retrieve and use those.

Aaand we've returned to discussions about hacks designed to make DNS at least somewhat trustworthy.

It gets printed to the console by default when you run cloud-init, and e.g. in AWS you can retrieve the console logs with an API call and grep for what you want.

I've got a script sitting around here somewhere which does this... Not necessarily in a state that's easy to reuse, of course.

It doesn't work if your cloud-init prints too much. AWS only holds a certain amount, so the API call will truncate the output and lose the key (it shows you the tail and not the head, if I remember correctly). Our organization has this problem (we run puppet in cloud-init, which spews like crazy).

I did notice recently that cloud-init has an option to hit a webhook when it is complete, and there are options in there to add the host key to the POST. But I wonder how to make a webhook that would be immune to spoofing…

Yes, it is probably in /etc/ssh/. What type of key depends on your distro and version, but here is a helpful explanation for Ubuntu: https://askubuntu.com/a/76388

Does your cloud dashboard not tell you? Last resort, login on console and look at it.

SSH saves the host identification in the ~/.ssh/known_hosts file. So after the first connection (that asks you if you know this key, and sure, nobody verifies it) you know for sure that this is the server that you want to connect to.

If after all that you receive an error from SSH saying the host key identification changed, you didn't change this key yourself (nor did your team), and you still blindly ignore this issue (it is not easy to bypass either; there is no flag, for example), this is not a UI issue and you're responsible for your actions.

There is a DNS SSHFP record type with which a host can publish its host key fingerprints.

I wrote about them some time back: https://ayesh.me/sshfp-verification

For servers without a domain pointed to them, I suppose you can still verify by comparing the host key fingerprint after logging into said server with the console provided by the server vendor.
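As a sketch of what actually goes into such a record: the SSHFP RDATA is just an algorithm number, a fingerprint type, and a hex digest of the raw host key blob. Algorithm numbers below follow the SSHFP RFCs (4255, 6594, 7479), but treat the mapping as illustrative, not exhaustive:

```python
import base64, hashlib

# Algorithm numbers per the SSHFP RFCs; illustrative subset.
SSHFP_ALGO = {"ssh-rsa": 1, "ssh-dss": 2, "ecdsa": 3, "ssh-ed25519": 4}

def sshfp_record(key_type: str, pubkey_b64: str) -> str:
    """Build SSHFP RDATA (fingerprint type 2 = SHA-256) from the
    base64 key blob of a public key line, for comparison with the
    output of `dig SSHFP <host>`."""
    blob = base64.b64decode(pubkey_b64)
    return f"{SSHFP_ALGO[key_type]} 2 {hashlib.sha256(blob).hexdigest()}"
```

Compare the result against what your resolver returns before trusting a first-connect prompt.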

And how do you trust that the dns record hasn’t been tampered with? DNSSEC?

The article mentions DNSSEC, but is that a limiting factor in this case? If you are connecting to a server you specifically have out-of-band access to, then just use pre-shared keys like normal and none of this is relevant. If you don't, and DNS is compromised, then you're depending on DNS anyway, right? At that point, getting the host pubkey from DNS too doesn't seem like it hurts anything, and it could still defend against MITM if the chain from you to DNS is different than the chain from you to the server (not uncommon, I'd think, since a lot of people are either using their local resolver, using their ISP's at 1 hop, or using someone distributed like Cloudflare at minimal hops).

Of course, authentic secure DNS is really important anyway, and worth working on. Beyond DNSSEC there is also DNS over HTTPS. And just using pre-shared keys and nothing else, with no option to even accept a changed key, is best of all. But if that doesn't work for whatever reason, having it in DNS doesn't seem like an awful idea, any more than Let's Encrypt depending on DNS does, right?

Authentic secure DNS is in fact not important, and is not worth working on. I think the lengths you had to go to just now to establish some kind of marginal value for SSHFP is a good illustration of why. And that's because we've designed protocols for the past 20 years to assume the DNS is insecure. SSH, in particular, has "don't trust DNS" baked into its DNA; it's a response to the old Unix r-commands, which did trust DNS.

The threat model SSHFP protects against is an adversary that's in the middle between you and the server (not close to you or the server). It does nothing against an evil ISP or evil AP, which FTPS (with CA-signed certificates) does protect you against.

> the server your home directory is on is compromised

The attack presumes your home directory is client side: you use SCP to connect to any kind of server, whilst ~ is your client-side working directory.

Thus, any server you might SCP to could write to your local home dir. In university, I did this with compute clusters, servers of my association and other servers.

This breaks the SCP security model because it means a server has covert access to your local working path. Whereas normally you know which files SCP touched, so you can verify they are as intended.

Right I get all that (I updated to clarify that I understand the attack is against your local home dir, but I assume you have one on the remote side).

I just don’t understand the use case of scp’ing from an untrusted host.

In the B2B world there is a perhaps surprising amount of ad hoc data transfer (transactions, statements) between separate entities using scp/sftp from a cron'ed ksh script written by a junior sysadmin 8 years ago.

Throw a Tectia server in the DMZ, fix the firewall, manage some keys (never change them) and you're good to go! Data at rest security via Zip or PGP for the ambitious.

Occasionally a big enough entity will mandate "everyone will use our data xfer spec" (and it's typically some gnarly XML-infested SOAPy animal). But there's a whole lot of automated business processes doing CSV-over-scp both intra- and inter-company.

Don't even get me started on FTP.

I always go CSV-over-SCP called from a crontab, over anything more complicated like SOAP, whenever I have any say. I try to strengthen both accounts, for instance having the target account only accept SCP connections, but still this kind of bug could be exploited maliciously to jump from one server (or company!) to another.

> Occasionally a big enough entity will mandate "everyone will use our data xfer spec" (and it's typically some gnarly XML-infested SOAPy animal).

In which case, you leave your CSV-over-scp or CSV-over-FTP in place, and duct tape on another layer that handles the new data transfer spec. That way you can leave the 8-year-old process itself alone and let it keep creaking away as it slowly fades into the twilight of "important but forgotten" systems running on autopilot throughout the years.

It reminds me of how places like Rome still have sewers from the days of Ancient Rome in operation. Rather than replace them outright, they were just connected to the newer sewer system to divert the waste to treatment plants instead of directly to water sources. And they'll keep on going, only being updated when absolutely necessary.

Throughout the years my employers have reinstalled servers and consequently changed host keys. I believe many users just accept the new key without realizing what they might get themselves into.

Ideally your organization should keep updated (public) host keys somewhere, say on some https-protected website so you can double check yourself. How common is this?

(I mainly use ssh for interactive/tunneling use, but with a bit of bad luck the host key would change just in time for scp. BTW, don't rsync/unison use scp sometimes?)

Ideally they set up SSHFP dns records with the hostkey fingerprints

Haven't yet encountered it in the wild though

Or they automatically distribute the host key fingerprints onto employees machines via some organization-wide internal method (ldap, orchestration/configuration management tool of the month, ssh_config pointing to a global known_hosts on a share, etc.).

OpenSSH also supports certificates, so you can have the host system provide a certificate identifying it -- but you have to setup a CA, and arrange for new hosts to securely have their host keys signed by it.

"Ideally they set up SSHFP dns records with the hostkey fingerprints ..."

We (rsync.net) are doing this in 2019. A little embarrassed we haven't done it already ...

I've been using SSH since SSH-1 was released in 1995. I've never seen anyone verify a host key, ever!

Also, when I reinstall a server and keep the name, I preserve the keys from the original server.

I copied over results files from a university cluster to my local home dir. It wouldn't surprise me if it were possible for another user of the cluster to affect the server-side SCP program.

I trust the cluster enough to give me those results. I don't trust it with write access to .bash_aliases. That's why I did

    scp host:results.txt .

rather than

    scp 'host:*' .

Because the remote host could be compromised/hacked even when you "trust" it, this could easily be used to jump from a compromised auxiliary external server to owning the internal laptop/desktop of a domain-wide administrator, for example.

If somebody manages to hack into a server somehow they can then contaminate hosts that attempt to scp from it. It's not the easiest exploit ever made but it's definitely pretty bad.

> I just don’t understand the use case of scp’ing from an untrusted host

You are SCPing to an untrusted machine, not from. Your client is trusted, the server you are connecting to is not.

Sometimes circumstances push you to connect to a server that is "semi-trusted" as described in the comments above.

> I just don’t understand the use case of scp’ing from an untrusted host.

What? Why would it even be an expectation that the host ought to be trusted? Would you say that about FTP, or HTTP or any other file transfer protocol? What's special about SCP?

If someone takes over my web server, this vulnerability means they can leverage that into taking over my personal computer? That’s catastrophic.

This sounds like the most plausible scenario in which this bug can cause havoc. Even if the web server is quickly restored to its original state, any user who happened to connect at the wrong time may still have a compromised PC.

And you may not even notice that the web server has been compromised until compromised PCs start acting up.

I think what it's saying is that in the event you connect to a malicious server, that server is able to do more damage to the client than it ought to.

Should be fixed, but clearly not the worst vulnerability ever.

That's the definition of, for example, all browser security issues.

You could do a simple ARP MITM or so to have a client connect to you instead of the target server. If they haven't connected from that client before, they will be prone to accepting the host key. Subsequently the malicious server can own the client. After that happens, in the example given, the next login on the victim box runs further commands via bash.

Why you would want to fix this: the client needs to verify the input it receives from the server and not trust that it's the right thing. Malicious servers are just like how browsers get owned, you know. It's not magic, and it's very risky considering how many people have ssh/scp servers in their internal networks, which is just the kind of environment you can get away with a MITM attack in.

Uhm, the attacker might not be able to compromise your systems beyond the exploit that gave them your server, but might use this to reach out into your admin box and steal the keys to the kingdom.

I accept incorrect host keys a couple of times per year, because IT has changed an IP address or a server has been replaced. I do not always bother to check that I have a mail notifying me that the change was planned. Knowing this new vulnerability, I will take care not to perform an scp into my home directory, and I will double-check PATH when logging in to a new account (check that PATH does not contain "." before /usr/bin and /bin). IMHO, this is a major vulnerability for unix users.

The risk mitigation part here is that you know in advance a server has moved before you try to SSH to it. The real risk is less than cautious people who might choose 'yes' to accept a new host key when sshing towards a sshd that they have no prior knowledge of having changed IPs, or having been re-imaged, etc.

Unix has usually been pretty good about giving you plenty of rope to hang yourself. :) Tools don’t usually protect from their own misuse — in this case not verifying a changed host key.

> Edit: To clarify, I understand why you’d want to fix this, for a good depth of defense. I’m just saying I don’t feel any urgency in fixing this.

If you exclusively SCP to servers you control, this is admittedly a low priority issue.

However some companies use SCP extensively as a way to exchange data files (as an upgrade from using FTP, and out of frustration from dealing with implicit/explicit FTPS modes). Healthcare and finance come to mind for those use cases.

When someone breaks into your servers, it's still kind of bad. Consider, for one, that the admins risk their laptops getting pwned when getting logs from said boxes. Easy to get from "one compromised server" to bigger problems this way.

> Which either means the server your home directory is on

How did you arrive at the conclusion that the server has your home directory? The threat model: it's a server that you can authenticate to and retrieve files from, but otherwise don't trust. It has nothing to do with your home directory. The PoC attack just shows the danger of invoking scp from the client's home directory.

I see, so in the case that I scp from an untrusted host, I run the added risk that it is silently overwriting files in my local home dir.

That makes sense I suppose. I never scp files from untrusted hosts, nor can I think of a use case to do so, so it hadn’t really crossed my mind.

There are plenty of use cases where the host you're connecting to isn't completely untrusted but rather it's semi-trusted. Web hosting provider, university machine, etc - all multi-tenant environments which have a tendency to get compromised.

You should not trust the server. The server may have been compromised.

The client may have also been compromised, you may have, you might be in the Truman show ... /s

If I'm reading the OP right, whilst directory permissions can be modified, traversal is not possible, so it can't overwrite outside the directory you're writing to. But if you wrote to your home directory then your .bashrc could be modified ... and it sounds like other vulnerabilities allow the actual processing to be hidden (spoofed console output).

Seems like the sort of thing that could be used in a spear-phishing scenario.

I wonder if you could write a StackExchange answer with instructions to scp files from such-and-such server ...

> Step one is getting you to use a compromised server.

This is not at all uncommon. Consider I need to send you files, either once or on a batch schedule.

You say, "give me your key, you can drop them on this server"

I feel safe since I am just writing files to your server, not expecting you are going to drop arbitrary code that will run next time I open my terminal.

Unless I’m mistaken you’d still be safe. The vuln would only work if you were pulling files from an untrusted host, not pushing to it.

But if I wanted to share files with you, I’d just put them on the web, unless we already have a relationship where you can ssh somewhere that I have access to.

In which case I assume we’re trusted or you’ve at least verified the host key.

Well, it's basically a privilege escalation vulnerability. If you've owned some server, you now have a possibility to get access to the admin's local machine too, likely giving you admin access to the whole infrastructure.

Whether you "trust" any of these things is irrelevant, as that trust might be misplaced, and the security model should keep you safe with compromised servers regardless.

It's the reverse, but it's almost the same. "Give me your key, and then take these files from my server."

I feel safe, because I think I'm going to cat a text file, not execute it.

This is precisely why I use firejail as a precaution against zero-days in ssh and other executables running from my home account.


Honestly, I revisited my old sdf.org shell account last night - this could have got a lot of people on shell providers and similar down the years. Keys wouldn't need to change in the event of a compromise.

Which either means the server your home directory is on is compromised, in which case scp is the least of your worries

Well but what if the server doesn't tell you it's compromised?

The worst security threat is a human in front of a terminal... e.g. the user can be tricked into doing something that is normally unsuspicious, like copying some files off some server.

> Which either means

or, it means the otherwise previously trusted server has been compromised, and the sshd has been replaced with a malicious one.

reminds me of this raymond chen post https://blogs.msdn.microsoft.com/oldnewthing/20121207-00/?p=... with the quote "It's not really a vulnerability that you can do anything you want once you pwn the machine."

This is getting [partial] control of every machine that merely downloads a file from the one you took over.

of course, if Windows used a capability based security model then it would still be a vulnerability that you could do anything you want once you pwn the machine.

The vulnerability is not that serious

From what I understand - if you scp any file from a remote server to your home directory, a malicious remote server can copy anything else into your home directory without you knowing it, which can lead to remote code execution.

However, (1) the server needs to be malicious or compromised (or MITM'd and you accept the new host key), (2) you need to copy into your home folder, not anywhere else; directory traversal is impossible.

The fact that OpenSSH's refresh_progress_meter() does not pass filename strings through vis(3) is a bug in its own right, irrespective of whether it can be abused for things like this.

The progress meter is meant to be a single line, and the formatting calculations simplistically assume that every character in the buffer is a (single-width) graphic or SPC. Anything in the filename that breaks that will disrupt the progress meter.

I read:

* [...] only directory traversal attacks are prevented

* [...] can overwrite arbitrary files in the scp client target directory

* [...] the server can manipulate subdirectories as well

... so nothing points to the ability of the server to "fiddle with" parent directories.

Thus... is it an OK temporary workaround to _only_ perform scp from within a freshly created directory in /tmp/?


> Malicious scp server can write arbitrary files to scp target directory

looks like you don't need to `cd /tmp` before running scp, it's enough to scp _to_ a freshly created directory (`scp host:file /tmp/dir1`), inspect its contents and `rm -rf /tmp/dir1` afterwards.
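A runnable sketch of that pattern (here `cp` stands in for the `scp host:file` step so the example works offline; the file names are made up):

```shell
# Receive into a throwaway directory, inspect it, keep only the
# expected file, then destroy the directory.
src=$(mktemp); echo "expected contents" > "$src"   # stands in for the remote file
dir=$(mktemp -d)
cp "$src" "$dir/file"          # real use: scp host:file "$dir"/
ls "$dir"                      # inspect: did anything extra arrive?
mv "$dir/file" ./received      # keep only what you asked for
rm -rf "$dir"
```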

I read it as a MITM manipulating target (final, originally intended server) directories and files only, and in addition spoofing output to the client to hide the fact it's doing so. In that case it does not matter where you run the client.

What makes MITMs possible is that checking the fingerprint of a new host is left to the user. Instead we should have processes to automatically 1. acquire the fingerprint of each new target host via other means and add it, 2. verify new host fingerprints with a separate party (a central server or a ring of trusted buddies).

There is already a solution to this problem, which is host key certificates.

All you need to do is configure your SSH client to accept only host keys signed by your CA.

However, setting that up is complicated. You need a lot of knowledge to set that up securely. On the other hand, manual verification of host keys is trivial -- anybody can compare a short string of characters.

Trivial, but who does that, and when? I never even learnt what I should compare it with and just accepted everything (and have been safe that way).

The best way to ensure that keys are correct is to keep a file like .ssh/known_hosts2 (e.g. in git) and add known keys to that file before you connect to the server.

How you get the public key is up to you, but they are located in /etc/ssh/ on the server, or given to you when creating the server.

ssh-keyscan can scan a host and print the keys as well.

If you also add the servers to .ssh/config you also get tab completion.

It takes a bit more work, but it feels much safer afterwards, and it's a good routine.
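For reference, a minimal ~/.ssh/config entry along those lines (the host name, user and file path are examples):

```
Host myserver
    HostName server.example.com
    User alice
    UserKnownHostsFile ~/.ssh/known_hosts2
```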

Your solution is that a system you trust is impossible to compromise? That seems unreasonably optimistic.

If you let something MITM you it will always be able to steal your files; no path validation will fix that. If you let something MITM you when copying files from client to server, it should have no way of changing the destination directory (assuming you authenticate using asymmetric crypto), regardless of path validation. This vulnerability has nothing to do with that.

This vulnerability is only about copying files from the server to the client, and lacking path validation when that happens. This problem can be fixed without fixing MITM.

> The attacker controlled server [...] drops .bash_aliases file to victim's home directory when the victim performs scp operation from the server. The transfer of extra files is hidden by sending ANSI control sequences via stderr. [...] Once the victim launches a new shell, the malicious commands in .bash_aliases get executed.

Sounds to me like everything the user executing the scp command can access can be compromised.

> The attacker controlled server [...] drops .bash_aliases file to victim's home directory

This can only happen if scp is invoked from the home directory (or from root or /home).

The server could also drop an "ls" file with execute permission in the current directory. If "." is in your path before /usr/bin (I have already encountered that), it may be called as soon as you type ls (generally just after the scp).

Sure, it can also drop filenames starting with a dash, filenames with spaces/newlines in them, and all sorts of stuff. These can cause all kinds of havoc in poorly written scripts. Having "." in your path is an obvious misconfiguration too.

Having a trailing : at the end of the PATH is the same as :. and occurs quite often.
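A quick sketch of a check for both misconfigurations mentioned above (an empty PATH entry, whether from a leading, trailing or doubled colon, also means "current directory"):

```shell
# Flag "." or an empty entry anywhere in a PATH-like string.
check_path() {
  case ":$1:" in
    *:.:*|*::*) echo "current directory is searched" ;;
    *)          echo "looks clean" ;;
  esac
}
check_path "/usr/bin:/bin"    # clean
check_path "/usr/bin:/bin:"   # trailing colon
check_path ".:/usr/bin"       # explicit dot
```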

That example requires the user to copy something to the home directory. If you copy something to /tmp, or to a subdirectory of your home directory, that attack is not possible (according to my understanding)

A quick test with strace and git shows that git doesn't use scp for handling ssh remotes. I suspect that git is hardened against similar attacks (a malicious server dropping files into .git/hooks would be worrisome).

Git recently had a malicious-server-rce bug, though not related.

Honestly this seems a bit disappointing to me. Reading the issue description, scp sounds like the following pseudo-code:

    sock = connect(server)
    sock.send_filelist(commandline.list)
    while not sock.eof:
        name, data = sock.recv_file()
        write(name, data)
i.e. blindly create and change files the server tells the client about. Complete lack of client-side verification.

> sock.send_filelist(commandline.list)

Instead of a "filelist", it's got to be a remote shell command argument, since it allows stuff like:

    scp server:'$(
      comm -23 \
        <(find dir1 -type f -printf "%P\n" | sort) \
        <(find dir2 -type f -printf "%P\n" | sort) \
      | sed "s:^:dir1/:"
    )' .
I'm not sure what you propose for client-side verification in this case.

> 2. CWE-20: scp client missing received object name validation [CVE-2019-6111]

> Due to the scp implementation being derived from 1983 rcp [1], the server chooses which files/directories are sent to the client. However, scp client only perform cursory validation of the object name returned (only directory traversal attacks are prevented). A malicious scp server can overwrite arbitrary files in the scp client target directory. If recursive operation (-r) is performed, the server can manipulate subdirectories as well (for example overwrite .ssh/authorized_keys).

I don't understand why this is considered a vulnerability. The user provides scp server-side shell code to describe the files it wants. How's it supposed to verify object names then? Am I the only one that likes to do things like the following?[1]:

    scp server:'$(ls -t | head -1)' .

    scp server:'*.pdf' .
An argument could be made to have scp implement a glob pattern matcher, but that wouldn't be shell agnostic (I doubt we'd get support for zsh-style `*.pdf(oc[1,5])`) and it wouldn't include support for process substitution or any other way the user might want to specify files.

scp already describes the files it's writing to stdout. I don't see what more it can do without sacrificing usability.

The possibility of a compromised server writing things that have nothing to do with what I asked just seems like an acceptable consequence for the power scp provides.

[1] - Please, discussions about parsing ls output are besides the point, right now. Heuristics are useful too, at times, for ad-hoc portable practices.

At the very least the client could perform a check that the files it's receiving match the pattern it sent in the request, by using a regular expression, replacing * with a class of non-whitespace, non-forbidden characters. It doesn't even check that if you request a single file like readme.txt it receives a single file!
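A rough sketch of that kind of client-side check, assuming only basic globs (`*` and `?`) and treating `/` as forbidden in a match:

```shell
# Translate a basic glob into an ERE (escaping regex metacharacters
# first), then test received names against it.
glob_to_re() {
  printf '%s' "$1" | sed -e 's/[.[\^$+(){}|]/\\&/g' \
                         -e 's/\*/[^\/]*/g' \
                         -e 's/?/[^\/]/g'
}
re="^$(glob_to_re '*.txt')$"
echo "notes.txt"     | grep -Eq "$re" && echo "notes.txt: accepted"
echo ".bash_aliases" | grep -Eq "$re" || echo ".bash_aliases: rejected"
```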

If you write:

    scp server:readme.txt .
"readme.txt" is still shell code. Most shells will evaluate it to the string "readme.txt", but scp on the client should not make assumptions of the shell used on the server.

If you write:

    scp server:'*.txt(oc[1,10])' .
A server that's set up with zsh with extended globs is going to return the newest 10 .txt files. If scp is written with an expectation of basic globs, I imagine it would try to match files that have a character-by-character literal extension of ".txt(oc[1,10])". That means no file is ever going to match. You could say, "well, add recognition of zsh extended globs". Ignoring how complicated that really is because there are glob options that allow you to embed arbitrary zsh code, you're limiting the implementation of scp to work with only particular shells. scp should not be a barrier to me implementing my own shell, with its own syntax, and using it transparently. The current scp doesn't care about what shells you use where. It makes little to no assumptions about the tools you use, and that's cool.

Classic langsec failure.

Can you expand on that? I can't tell if you're in favor or against a change.

Well, the language scp uses for selecting files is Turing complete and relies on state unavailable to the client, therefore it is impossible in the general case for the client to check whether a given file sent by the server was actually requested by itself. That's the langsec failure here. The "correct" (in any case more sane) approach would be to use a stateless, easily recognizable language for specifying files (e.g. something that reduces to REs).

And that's what I meant by saying that I couldn't see what else to do without sacrificing usability. Whatever language you propose, if it doesn't allow use of state from the server, then it's pointless. You may as well use the local shell in scp's invocation.

> Due to missing character encoding in the progress display, the object name can be used to manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.

The server's description could be only a subset of the objects it actually writes locally.

Ok. For that, I think it would be apt for scp to somehow quote the filename when it contains a non-printable character, escaping the non-printable characters, thereby avoiding their interpretation as ANSI codes.
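A tiny sketch of the idea, using `cat -v` to render the control bytes visibly instead of letting the terminal interpret them (the filename is a made-up example):

```shell
# A filename smuggling ANSI codes: ESC [2K (erase line) plus a carriage return.
name="$(printf 'innocent\033[2K\r.bash_aliases')"
printf '%s\n' "$name" | cat -v   # prints: innocent^[[2K^M.bash_aliases
```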

> Due to accepting and displaying arbitrary stderr output from the scp server, a malicious server can manipulate the client output, for example to employ ANSI codes to hide additional files being transferred.

Can someone explain how “employ ANSI codes to hide” works?


    echo "Magic!"
    echo -en "\033[1F\033[2K"
    echo "Moar Magic!"

This script will print "Magic!", then the next echo sends escape sequences that move the cursor up one line with '\033[1F'; then '\033[2K' clears the line the cursor is on, which is now the "Magic!" line. Then we output "Moar Magic!". Three commands, but only one line of output.

The -e option on echo enables the escape sequences instead of printing them as literal characters. -n disables echo's automatic newline for the sequence that deletes the first line.

scp servers can of course do all the same trickery.

Your client asks for 1 file, the server comes back with 2 files (the 2nd one being what you actually asked for), but the first (tiny) file's name contains escape sequences that clear the line, move up a line, or otherwise obscure that the first file was sent.

Awesome, found https://en.wikipedia.org/wiki/ANSI_escape_code and played a bit in my terminal. It is fun.

    $ echo -e "hello\n\e[1A\e[2K\rworld"
    world

This is just another in the series of rcp/scp considered harmful "bugs". This isn't a bug -- rcp/scp are the bug. Use sftp to fetch files. rcp/scp are fine for pushing files upstream, but not for fetching.

How is it a 35 year old vulnerability? ssh and friends have only been around for 25.

I believe the title is derived from the description of vulnerability #2:

> Due to the scp implementation being derived from 1983 rcp [1] ...

> [1] https://www.jeffgeerling.com/blog/brief-history-ssh-and-remo...

Could anyone comment about whether any file-related Ansible modules use scp by default instead of sftp?

I believe you need to set scp_if_ssh = True in ansible.cfg for scp to ever be used, even if sftp is not available on the remote host.

Nowadays ansible uses a sort-of 'smart' method by default, where it first tries to use sftp, and if that fails, falls back to scp. See https://docs.ansible.com/ansible/latest/plugins/connection/s...

So if the server is compromised, it can fail the sftp in a suitable way?

But it's another matter whether ansible's scp client is vulnerable to this.

So if anyone is curious, the way to prevent this is:

scp_if_ssh = false
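In ansible.cfg form that should look like the following (section name per the ssh connection plugin docs linked above):

```ini
[ssh_connection]
scp_if_ssh = False
```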

uhh I think we killed it lol.

    $ ping sintonen.fi
    PING sintonen.fi ( 56 data bytes
    Request timeout for icmp_seq 0
    Request timeout for icmp_seq 1
    Request timeout for icmp_seq 2
    Request timeout for icmp_seq 3
    Request timeout for icmp_seq 4


SCP-035 has breached containment.

Oh whoops wrong context.

It was inevitable.

It’s easy to fix. Just rewrite scp in Rust.

Please don't post unsubstantive comments here.

There are no memory corruption issues here, so I'm not sure how much this would help.

As long as you use vi to write your Rust code, I'm OK.

> Man-in-the-Middle attack does require the victim to accept the wrong host fingerprint.

This should be a WONTFIX.

The idea that it could be not only a man-in-the-middle attack but simply a "malicious scp server" is completely ridiculous.

No secure login or transfer method can protect you from the actions of a malicious server, which could be anything.

If I have the root privs to install malicious software on the server, why would I muck around perpetrating an scp exploit on remote users?

Expansion of privileges. If a malicious actor is already in your network, but doesn't have domain admin rights yet, for example, taking control of all the machines that connect to a server they do have control over would be helpful in a variety of ways.

> If a malicious actor is already in your network ...

Then you're talking about a man-in-the-middle attack, which I acknowledge as a problem.

I don't agree that fixing bugs which date back to the completely insecure rcp program are the key to solving man-in-the-middle attacks in the SSH suite.

If you have to put in defenses at that level, you've already lost; the goal must be to eliminate or minimize the threat of a MITM attack on the authentication system.

This nonsense is analogous to putting a layer of Pig Latin and ROT-13 into the protocol in case the attacker breaks the AES cipher.

Some people only check a couple of bytes in the beginning and end of the host fingerprint. I've done that. It's not ridiculous to prevent attack vectors that depend on common behavior, even if it's strictly speaking the user's fault.

You should not be checking the host fingerprint at all.

1. Use `VisualHostKey=yes` in your ssh config. Learn the randomart image for your server. Don't try to compare two long random strings directly, that's not a job for humans.

2. Use an offline SSH certificate authority to generate SSH certificates, bypassing the need for a host fingerprint check altogether. If you can trust your offline CA, you don't need to trust your server's host key at all. Either they have a valid certificate or they don't. Your SSH client won't even ask you if you trust the server.

It's one thing to access a compromised server, but you should NEVER be exposed to MITM attacks with proper SSH usage, except for your very first connection on a brand-spanking new server when you drop your SSH host certificates (ideally in an automated fashion which can be deployed immediately after spinning up your box).
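For the curious, a hypothetical sketch of option 2 with `ssh-keygen` (the key file names, the identity `host1` and the principal `server.example` are all made-up examples):

```shell
# An offline CA signs the server's host key; clients then trust the CA
# rather than individual host keys.
cd "$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -f ca_key     # the (offline) CA keypair
ssh-keygen -q -t ed25519 -N '' -f host_key   # the server's host keypair
ssh-keygen -q -s ca_key -I host1 -h -n server.example host_key.pub
ls host_key-cert.pub                         # the resulting host certificate
# Client side: one known_hosts line replaces per-host fingerprint checks
echo "@cert-authority *.example $(cat ca_key.pub)" >> known_hosts
```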

Sounds good in theory. Now, say I want to use github over ssh, how do I check if the randomart image is correct? (Getting my configs from github tends to be the first thing I do on a new machine.)

The CA approach is probably good if you control the servers you use. Right now I connect to about 5-7 ssh servers on a regular basis, and I don't have (full) control over any one of them.

`ssh-keyscan -t rsa github.com | ssh-keygen -lf -` gives you the fingerprint for github.com.

Save this key, and reuse it everywhere before connecting to github.

Now the issue of being MITM'd is once again only an issue with your very first connection, which is done via ssh-keyscan.

As for your other servers, you should only check the host key once for any of them, and then save it. And that's only if they are owned by a third-party, like a shared server. If your provider routinely cycles host keys, get a new provider who actually cares about security.

If these servers are owned by your employer, then your employer needs better security practices and, failing a certificate authority, should provide you with the necessary host fingerprints before you ever connect to a box.

I currently check fingerprints on first connection and then save them. They're available for most servers, including github and other git providers. I think I've only had to connect to one server without knowing the fingerprint in advance in the last couple of years.

Looks to me like the fingerprint-checking process has to be manual if you can't choose the server certificate.

> No [...] transfer method can protect you from the actions of a malicious server

Try applying that logic to a web browser. Oh, just do whatever the server says and overwrite whatever files it wants?
