
SCP – Familiar, Simple, Insecure, and Slow - twakefield
https://gravitational.com/blog/scp-familiar-simple-insecure-slow/
======
johnklos
"SCP – Familiar, Simple, Insecure, and Slow" \- slightly sensationalized
title. All you need to know - if you scp with an evil machine, bad things can
happen. That's all.

~~~
oehpr
Is there a way in which you can connect to an evil machine and protect
yourself from the possibility of it sending you evil bytes? What protocol is
able to detect bytes as being evil without knowing at the outset what the file
should look like?

This doesn't even seem like a valid security concern. But I'm open to hearing
about some ways that SCP would be vulnerable in ways that SFTP would not. I am
by no means an expert in this subject.

~~~
gpderetta
As long as the evil bits [1] are set correctly, you can filter by that.

[1] [https://tools.ietf.org/html/rfc3514](https://tools.ietf.org/html/rfc3514)

~~~
matheusmoreira
That RFC is a work of art. Are there more like it?

~~~
joombaga
Oh yes.

My favorite:

A Standard for the Transmission of IP Datagrams on Avian Carriers

[https://tools.ietf.org/html/rfc1149](https://tools.ietf.org/html/rfc1149)

Check the dates.

~~~
Groxx
And what good is a standard without an implementation:
[https://en.wikipedia.org/wiki/IP_over_Avian_Carriers](https://en.wikipedia.org/wiki/IP_over_Avian_Carriers)

------
Groxx
I've gotten into a habit of setting `alias scp=rsync` (with compression,
partial, progress, etc args usually) on every system I use. In the once-in-a-
lifetime cases where rsync is _not_ pre-installed, I just call /usr/bin/scp
directly (or wherever it happens to be. `type -a scp` will tell you.)

The amount of time I've saved when I've _forgotten_ that I've done this and it
just runs faster is well into the tens of hours. The amount of time I've lost
when I forgot and rsync doesn't exist remotely is about 10 seconds total -
enough time to read the error message -> `type -a scp` -> "oh yeah" -> use
/usr/bin/scp instead.

Highly highly highly recommended.
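A minimal version of the alias might look like this (the flag choice is illustrative, not necessarily Groxx's exact set):

```shell
# -a archive mode (recursion, permissions, times),
# -z compress in transit, -P keep partial files and show progress
alias scp='rsync -azP'

# Bypass the alias when the remote side has no rsync:
# \scp file user@host:path    (a leading backslash skips alias expansion)
```

The `-P` flag is what makes interrupted transfers resumable, which is most of the time saved.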

~~~
rhamzeh
Hint: you can type `\scp` to bypass the alias

~~~
Operyl
Which itself is defeated by:

place rogue SCP in /some/rogue/scp

export PATH=/some/rogue:$PATH

~~~
yjftsjthsd-h
Yeah, but we're not talking malice here, we're just talking about what to do
when your _own_ alias doesn't work. (Although the general note is still valid)

~~~
Operyl
Yup, I understand. Just leaving the note so others don't put two and two
together on something else in a bad way :).

------
zeroimpl
It turns out that rsync is slower than scp, unless the files are already
present. It adds a lot of overhead that’s not needed most of the time.

I often need to transfer very large binary files. When stuck with standard
Linux tools, the fastest process is first transfer via scp. If the scp gets
interrupted, resume it using rsync with the --append-verify option.

~~~
sroussey
nc is much faster if you don’t need encryption

~~~
kbenson
Well, nc + tar + compression is likely faster. If you compress your SSH
connections, scp or piping over SSH is likely much faster for files that
aren't tiny, I would imagine.

You aren't necessarily wrong, but there's a whole world of distinction you're
glossing over. Netcat is part of a set of tools you can use to construct a
rough equivalent of scp without encryption, but nc by itself, without tar
and gzip (or whatever), matches only the simplest possible use case of scp:
copying one file, without compression, and ignoring all permission and
ownership info.

This is also ignoring that you might not even have an open port on one side
that the other can reach to have netcat listen on.
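Wired together, the two ends might look like this (hostname and port are hypothetical; note that listen-flag syntax varies between netcat implementations, e.g. `-l 9000` vs `-l -p 9000`):

```shell
# Receiver: listen on a port the sender can reach, unpack the stream
nc -l 9000 | tar -xzf - -C /dest

# Sender: pack and compress the tree, stream it over the raw socket
tar -czf - -C /path/to/files . | nc receiver.example 9000
```

Everything travels in cleartext, so this only makes sense on a trusted network.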

------
ggm
We all use rsync, but rsync itself has massive risks and is equally prone to
stupid buffering consequences. (I worked on a write-up of an exploit to hose
an rsync server in RPKI, and the lack of channel protection in rsync is a
problem. Rsync over SSH is less risky, but not risk free to either side: what
is sent and what is asked for do not have to align well.) Rsync over SSH needs
tuning to be buffer efficient.

Tar may well be better if you know you need all the files and don't need the
block checks per file to detect change and minimise data sent.

Rsync can hose the file buffer cache and slow a server down. If you want
speed, mbuffer is good, but it lacks channel security too (ZFS send over
mbuffer is awesomely fast).

~~~
kbenson
> but rsync itself has massive risks

Err, what risks? If you run a server? Sure, but as a replacement for scp,
where it's assumed you have an account on the remote system, an rsync server
is not the equivalent solution, running rsync across an ssh connection is.

> Rsync over SSH is less risky but not risk free to either side. What is sent
> and what is asked for do not have to align well)

Glossing over that no system is "risk free", can you explain in a bit more
detail here, or reference what you wrote up or some other thing? That's the
kind of thing I would really want to know about, and an unsubstantiated throw-
away aside does not do justice to the problem if accurate.

~~~
ggm
[https://datatracker.ietf.org/meeting/89/materials/slides-89-...](https://datatracker.ietf.org/meeting/89/materials/slides-89-sidr-6)

~~~
ggm
The principal point is that if the server is compromised, the ssh tunnel just
becomes a mechanism to propagate things. If you say --size-only or --no-delete
or whatever, it doesn't actually constrain what happens over the rsync
protocol. It just says what you want the other side to know: if the other side
is a bad actor, it can send you whatever it likes, and that can include
filepaths outside the local constraint you think you scoped it to (the target
can be ../../.. specified)

~~~
kbenson
Yeah, I was just reading it, thanks for sending the link. What I've gotten to
so far (that you were able to successfully get a client to write outside of
its expected root location using a 119-line Python script as the server) is
pretty problematic in itself if not yet fixed/mitigated in some way. :/

Did you ever submit these findings as a bug? I would be interested in tracking
this if it was accepted. It looks like this note from 3.1.2's release[1] is
it, but I'm not sure:

 _File-list validation in 3.1.2 December 21st, 2015

If you're using a version of rsync older than 3.1.2 as a client and receiving
files from an rsync server that you might not fully trust, this version adds
extra checking to the file list to prevent the sender from tweaking the paths
and/or the transfer requests in a way that could cause a file to be received
outside the transfer destination. _

1:
[https://rsync.samba.org/security.html#s3_1_3](https://rsync.samba.org/security.html#s3_1_3)

------
comex
Is there a tool that mimics the interface of SCP but uses SFTP under the hood?
Then I could alias `scp` to that and forget about the SCP protocol forever.
(rsync is not good enough: it requires rsync to be installed on the target
server and seems to be slower than SCP in some cases.)

The proprietary ssh.com implementation of SSH apparently has one called
"scp2", but I don't see any open source version.

Well, except for this, but it's tagged "[MAINTAINER WANTED]" and written in
JavaScript:

[https://github.com/spmjs/node-scp2](https://github.com/spmjs/node-scp2)

~~~
fanf2
pscp might be what you want
[https://the.earth.li/~sgtatham/putty/0.73/htmldoc/Chapter5.h...](https://the.earth.li/~sgtatham/putty/0.73/htmldoc/Chapter5.html#pscp)

~~~
comex
Interesting. That does seem to match the description, so thanks for mentioning
it. On the other hand, PuTTY doesn't seem to support loading OpenSSH's
configuration, only its own format – so my private keys, host aliases in
~/.ssh/config, and known_hosts would have to be duplicated. And they would
quickly get out of sync unless I switched to PuTTY for all SSH use.

Seems like a lot of effort when, in theory, this would be a pretty
straightforward wrapper to build on top of libssh, which can load OpenSSH's
configuration files, or perhaps on top of OpenSSH's ssh/sftp binaries
directly. But I'm too lazy to write one myself, so I was hoping someone
already had :)

------
jchw
I feel like this article tries a little too hard to make the SCP protocol
sound weird. It’s just using SSH as a transport layer, which is not really
unusual at all. Perhaps there are some security issues but IIRC Git for
example does basically the same exact thing to use SSH as a transport layer,
calling git-receive-pack on the remote. I’d be surprised if Rsync wasn’t the
same.

~~~
m463
I've ported ssh before and I think the whole system is a little hacky.

I think what is lacking is a precise split between file transfer and remote
command execution.

If the split were well defined, we could simply and securely split out file
transfer as a "thing".

it would allow:

- write-only no-account-required file uploads

- read-only no-account-required file downloads

- an account-required incoming dropbox

- an account-required file store

- full blown remote filesystems (filestore + file protocol goodies)

\- full blown remote filesystems (filestore + file protocol goodies)

and a number of other services that involve files and NOT command execution,
environment variables, shells, a login or even a user account.

~~~
fanf2
Doesn’t the sftp subsystem allow that in principle? You might have to
configure the server fairly heavily to make it sftp only, though...

~~~
m463
It's been many years, but I vaguely recall there were some things preventing
that, but I don't remember if they were protocol or implementation dependent.

------
war1025
I always assumed that sftp was the hacky command and scp was the preferred
one.

Guess I maybe learned that wrong.

------
noodlesUK
Compared to other file transfer protocols (I’m looking at you SMB), SCP is
pretty fast in my experience.

------
aidenn0
For a long time I have just piped tar over ssh. For lots of small files, it's
faster than an rsync to an empty directory. It can also sync anything that the
tar on both sides can handle. I wrote a script for doing this for my coworkers
who identify with [https://xkcd.com/1168/](https://xkcd.com/1168/)

~~~
jerf
One thing I like about this solution is fine-grained control over the
compression. I've used xz on a slow connection to the other side of the world,
I've used no compression on super fast connections. You get whatever you want.

~~~
aidenn0
Indeed! Unless it's a wired LAN connection, zstd at lower settings will not
make things slower, and can make the transfer faster.

