IIUC the `scp` cmdline will still exist, it'll just use the sftp protocol behind the scenes instead of the scp protocol. Note that the deprecation is about the protocol, not the cli.
It'll break a lot of valid functionality that relies on it, and the only case I can see presented as to why is for situations when servers choose to offer scp when they should have chosen to offer sftp.
Complaining that some people hand out shell access and this creates a "problem" that they gave people access to shell commands verges on ridiculous.
Having servers offer scp instead of ssh is not the only problem.
What about this part of the article:
Finally, while the danger is remote, it is worth noting that a local file name containing `backticks` (a file named `touch you-lose`, for example) will be handled the same way on the other end; if a user can be convinced to perform a recursive copy of a directory tree containing a file with a malicious name, bad things can happen.
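The quoted danger is easy to demonstrate locally without any remote host at all; the snippet below simulates what happens when such a filename reaches a shell unquoted (as the old scp remote invocation does):

```shell
# A filename containing backticks is perfectly legal on the local side:
fname='`touch you-lose`'
touch -- "$fname"

# But if that name ever reaches a shell unquoted -- as with the scp
# protocol's remote invocation -- the backticks execute as a command.
# Simulating the remote side locally with a no-op command:
sh -c ": $fname"

# The injected command ran and created this file:
ls you-lose
```

The file itself is harmless to create; it's the re-interpretation by a shell on the other end that does the damage.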
You can deprecate something while still letting people add it back into their PPA manually. If you absolutely need a deprecated tool, no one's going to stop you: it's Linux. Heck, just compile it from source and be on your merry unsupported way.
The scp protocol is literally deprecated: what do you not agree with? The idea (and already being worked on) is to change the scp command's underlying code so that it behaves exactly the same as before, all your scripts will keep working just fine, it just won't use the now dead scp protocol under the hood and will instead use the universally supported sftp protocol.
Which probably won't work in a lot of cases. I just got burned by the very minimal Raspbian ssh server they ship by default.
First it didn't support the authentication method I wanted to use, and then it didn't have any port forwarding ability. I'm betting it doesn't have sftp support either.
ssh host cat /foo/bar > /bar/baz
ssh host tar c /foo | tar x -C /bar
mostly because of variants such as vagrant ssh or docker-machine ssh, cross-product with the remote target being any weird platform like Solaris, a limited busybox, or one with sftp disabled
I've found many people either forget (or never knew) that pipes have a buffer, and that buffers are very useful when you are dealing with potentially bursty throughput.
In a previous team where we dealt with shipping around lots of ZFS snapshots (where both the performance of reading from disk can vary based on the ARC and filesystem metadata) and occasionally using VPN tunnels on public internet links, pv to get even larger pipe buffers than standard was often a huge performance equalizer. Readers read as fast as they can, writers write as fast as they can, and the buffer massages any variability in the middle.
It's not just the buffer, it is the PIPE. Specifically, pipelining that streams one file after another, without an idle period of waiting for the round-trip acknowledgements of one file receipt prior to sending the next. Using rsync over ssh has the same massive benefit for transferring many files, even if there is no existing data to make use of its differential transfer mechanisms.
(Those extra mechanisms are usually beneficial in WAN scenarios as well, except when you have high speed links, in which case blasting the full files may be faster than doing the IO to figure out comparisons and skip transfer.)
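The pipelining benefit described above is visible in the plain command line; a sketch (host and paths are illustrative, and rsync defaults to ssh as its transport in modern versions):

```shell
# One rsync stream for a whole tree: files are pipelined over a single
# ssh connection, with no per-file round-trip stall between files
rsync -a ./manyfiles/ host:/backup/manyfiles/

# On a fast link, -W (--whole-file) skips the delta comparison entirely
# and just blasts full files, as the comment above suggests
rsync -aW ./manyfiles/ host:/backup/manyfiles/
```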
I was transferring files to an Android device, and adb push wasn't working for some reason. It would hit an error and kill the adb connection until I re-plugged the USB.
Ah yes, that one too. Used it a couple of times straight with a /dev/sda to best effort salvage data from a broken but running remote system I knew would not reboot as is.
Huh, pleasantly surprised to learn that dd correctly handles the truncated final block of a not-multiple-of-512-byte file; could have sworn that didn't work at some point.
SUS specifies that dd should do the "right thing" with input whose length is not an integer number of blocks: pad the final block with zeros if conv=sync, otherwise just pass the shorter block through.
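Both behaviors are easy to check locally:

```shell
# 700 bytes: not a multiple of the 512-byte block size
head -c 700 /dev/zero > input.bin

# Default behavior: the short final block passes through unchanged
dd if=input.bin of=plain.bin bs=512 2>/dev/null
wc -c < plain.bin    # 700

# conv=sync: the short final block is zero-padded up to bs
dd if=input.bin of=padded.bin bs=512 conv=sync 2>/dev/null
wc -c < padded.bin   # 1024
```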
In my, and possibly their defence, I read the article, and missed the one line in the second to last paragraph which states that it is the protocol being replaced. The way the article opens, coupled with the title makes that an easy point to miss.
I've become a big fan of military-style Bottom Line Up Front writing[0]. In this case, the article could have started with "we're deprecating the scp protocol and updating the scp command line utility to use the sftp protocol instead."
A new command with a similar cli but a different name is in progress, but it will not (and can't) support all features of scp, so it can't be a drop-in replacement, and as such you can't name it scp as this would break existing systems.
Thanks for pointing that out, even after reading the article I had not understood that to be the case. I suspect doing so will have issues of its own however.
> missing -3 support, it will not work if the server does not run sftp subsystem
Eh.
Seems like a half-assed solution for a non-problem, that will simply break stuff, remove functionality, and increase attack space for no benefit. If you need sftp, use sftp.
Isn't it decreasing attack surface if it's removing functionality? Unless you typically disable SFTP on your servers, in which case it seems like there would be basically equal attack surface after this change.
It will indeed decrease attack surface by removing (long-term, not day one) SCP protocol support. I'm not even sure gp did read the article on the security history (and unfixed bugs!) of said protocol.
Actually, what will happen is that if there is lots of breakage, people will not upgrade their SSH server and client. That sounds worse than the current situation.
Is there a reason to think there will be any breakage at all? Furthermore, the situation you are describing could only become a problem for environments where you control both the client and server (in which case, why not let them shoot themselves in the foot if they want to?)
At least 9 times out of 10, I end up typing scp instead of rsync, and then regretting it. Rsync makes an effort not to move bytes it doesn't have to and generally saves me a bunch of time. If I'm actually thinking about what I'm doing, I use rsync. Using scp is just muscle memory. I wouldn't miss it.
rsync with the weird / logic is annoying. I prefer scp. I'll use rcp and dig around for the weird incantations if I have something that I need to sync regularly though.
are you referring to the differing behavior if there is a trailing slash or not? it only takes a couple of times for that mistake to be learned, and then becomes quite the convenient tool when used correctly. it shouldn't be the one thing that prevents you from using a tool on a regular basis though.
The trailing slash behavior totally makes sense. It only took me a couple of times for me to learn it.
Unfortunately, it took only one more time for it to be forgotten, then another few times to be re-learned, then forgotten again, then half-remembered the wrong way, then....
I pretty much have it down now, though. The secret to my success: remembering that the differing behavior applies to the source side only. Trying to work out the matrix of possibilities between source, source/, dest, and dest/ is where I usually got lost, until I finally got it through my head that the destination syntax is irrelevant.
Now it makes sense: "foo/bar/" is the directory named "bar"; "foo/bar" is the directory entry within foo/ named "bar". Copying a directory puts its contents on the destination side. Copying a named thing, whether the name refers to a file or directory, puts that name on the destination side.
Lately I've taken to avoiding that fine point rather than consult the manpage yet again. I find it easier to internalize `foo/bar/.`: when I want those semantics, I tack on the self-referential dot, as in `rsync foo/bar/. baz`. Using the dot this way doesn't carry exactly the same semantics, but the effect is the same and it's a pattern that applies to many tools.
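The dot trick is tool-agnostic; it works with plain cp the same way:

```shell
mkdir -p foo/bar dst
touch foo/bar/file.txt

# "foo/bar/." names the directory itself, so the copy takes its
# contents -- the same idiom works with cp, rsync, and friends
cp -r foo/bar/. dst/
ls dst    # file.txt, not bar/
```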
I've never used rcp or rsync, but your final explanation provides no illumination for me. "foo/bar/" and "foo/bar" are both directories named "bar". Copying a thing puts that thing on the destination side. It still doesn't sound like there's a difference.
I think it makes more sense to say that, for rsync, "foo/bar/" names the contents of the directory, while "foo/bar" names the directory itself. You could perhaps think of "foo/bar/" as a shortcut for "foo/bar/*", except that it matches hidden files as well.
Yeah, that happens to me on anything that gets used, put back up on the shelf, and then pulled back down again after some period of not being used. However, rsync has become the go to tool for getting local content to/from a server.
Which would not be an issue in almost all cases, since sftp is usually part of the OpenSSH suite, and since you are in most cases using the underlying SSH anyway...
I know that you could have an SSH implementation without sftp - but I'm arguing about the regular case. AFAIK sftp is usually there.
A few weeks ago I was scratching my head why scp was working on an embedded board and sftp was not. Yocto's default build recipes did not include the required plugin/library for sftp functionality.
It's one more protocol and one more executable (sftp-server) where things can go wrong and be exploited. Because it's less popular than the pure shell, it's likely less tested. So the right security posture, if you don't need it, is to disable it.
How do you know it's less popular? It seems like every non embedded distro enables it. OpenSSH linked sftp-server into sshd because SFTP only configurations were popular.
In most cases, it’s not so much a considered security posture and more that Yocto and other embedded builds tend to use dropbear instead of openssh because it’s smaller, and dropbear doesn’t support sftp.
Note that the contents of /etc/ssh/sshd_config are USUALLY compatible across operating systems.
So you can use ansible to install a common and secure and standardized sshd config across, say, legacy linux and new freebsd servers.
With the tiny little problem that filesystem paths will be different for linux and freebsd.
On modern freebsd, from memory, this sshd config line works:
Subsystem sftp /usr/libexec/sftp-server
I forget the path on legacy linux.
Anyway the simplest and most secure way is just to disable sftp across all systems and just use ssh and scp.
The solution to this problem is likely not to be shipping multiple inevitably slightly incompatible sshd configs, but I'll probably end up shipping exactly and precisely one sshd_config with the path on all operating systems being /usr/local/libexec/sftp-server or something like that, and then ansible rules to symlink freebsd and linux to the "standard" path.
Or I'll go all in on freebsd and get rid of the last legacy linux servers.
Or I'll reduce security by using ansible with multiple sshd_config files.
Or recode everything that used scp to use rsync or other alternatives (LOL as if thats happening)
At any rate as usual with unix there's multiple ways to handle things, which is good. Its not like windows or systemd based operating systems.
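The single-config-plus-symlink idea boils down to one shared stanza; a sketch (the normalized path follows the convention suggested above and is illustrative -- stock paths differ, e.g. /usr/lib/openssh/sftp-server on Debian-family Linux and /usr/libexec/sftp-server on FreeBSD):

```
# Shared sshd_config fragment, identical on every host:
Subsystem sftp /usr/local/libexec/sftp-server
```

with an ansible task per OS family symlinking the real sftp-server binary to that one "standard" location.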
I think the parent has some sort of axe to grind with Linux in general, and believes any Linux-based OS to be "legacy", with FreeBSD as the "obvious" replacement. Clearly the parent lives in a different reality than the rest of us.
You need the sftp subsystem enabled on the server, which may be disabled on the server side (not that uncommon). The current scp still works fine in that scenario.
This is actually not so. Transfers don't work if the scp program is missing on the remote.
I use tinyssh in a few contexts where I don't want file transfers of any kind.
TinySSH doesn’t have SCP?
No, ‘rsync -e ssh’ does the same job. If you really need scp, use scp from OpenSSH, for example. TinySSH doesn’t have a problem with the scp protocol, it only doesn’t have an scp program.
Can I use sftp using TinySSH?
Yes. TinySSH doesn’t have an sftp program, but it can run e.g. OpenSSH’s /usr/libexec/openssh/sftp-server.
Sftp support can be enabled using switch ‘-x’
Can’t test it now but that doesn’t sound right. The doc as written seems to indicate that tinySSH doesn’t ship the scp client but will work fine if you use scp from somewhere else to connect to it. As far as I know, scp is a client only.
scp(1) doesn't really distinguish client and server. There's source and sink, which is determined by the transfer direction. Without scp(1) on the path on the remote end, scp is not going to work. (Some SSH implementations may implement scp internally, though, by parsing the exec requests sent by the connected client).
If you do "scp foo user@host:/bar", then scp ssh's into user@host and executes scp(1) there with a flag that tells it to go into sink mode. When you swap the arguments, it uses a flag that tells it to send the files you want in source mode.
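Those two modes map to undocumented flags of the scp binary; a sketch of what each direction executes on the remote end ("host" is illustrative):

```
scp foo user@host:/bar    # remote side runs: scp -t /bar   (-t "to", sink mode)
scp user@host:/bar foo    # remote side runs: scp -f /bar   (-f "from", source mode)
```

You can see the exact remote command yourself by running scp with -v.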
Can I ask why? Opening up command line/ssh access and closing off SFTP seems kind of like having a haystack and saying "hmmm something's wrong" then removing one straw from the pile and saying "ah, perfection"
I disable sftp because I do not use it. I disable all sshd features that I do not use, e.g. various methods of authentication or encryption other than those that I use.
I do not use sftp because it is a bad protocol, which cannot reach high speeds. This has been explained by another poster.
I frequently transfer tens or hundreds of gigabytes of files over ssh and with sftp that would be unacceptably slow. Also, sftp does not handle correctly all file metadata, in certain cases making incomplete file copies, with only partial metadata.
I use rsync over ssh, which does not have any of the problems that plague scp and sftp.
I deactivated sshd multiple times for some clients (needlessly, I should add, since the VMs' only network access was through VNC, and sometimes nfs wo) to avoid data leaks. When some data (often code) had to be sent to the VM, I don't see one case where reactivating ssh and not sftp would be better than the reverse. Activating sftp only allowed us to keep track of the files put on the server, effectively copying everything passing through the chroot directory.
The OpenSSH sshd executable, in absence of an sshd_config, disables it - and rightly so, because you have to specify a server executable to manage it, which may or may not be available. It just so happens most distributions enable it by default. However, a hardened configuration could legitimately disable it, to reduce attack surface - it's an extra executable where things can go wrong and be exploited. There is also no requirement for non-OpenSSH implementations to have it.
To be clear, the problem is not that sftp is unlikely to be available, but that it's possible that it will be the case; hence it does not replace scp in that scenario without complete loss of functionality. It is an objectively inferior option.
If there’s anything out there that needs to be deprecated then it’s ftp and especially ftps. Looking at you IBM AS400 (or whatever it is called these days)
There are lots of flags controlling what it outputs, so have a look, there's likely a way to adjust it. Or alternatively, ip supports "-json" so you can process it however you want, or write wrappers.
Apart from the fact that they're working on a decent replacement, the original scp will always be available because it's open source. You can copy the sources now, compile it, and use it for as long as it works. At some point in the upgrade cycle the binary will stop working, and at that time you can recompile the sources you have. Chances are you will be able to recompile it without a single change, and if some modifications turn out necessary, they're very likely minor.
Of course the problem with this approach is that you won't receive security updates - but if the developers already consider the tool insecure, it might not matter much, especially if you use it only between hosts you control.
scp comes from the openssh package typically. Recompiling the whole thing sounds like a risky operation just to get one tweaked client, particularly on mainstream distributions.
OpenSSH takes ca. 2 minutes to build (the configure script probably takes longer than the compilation itself). Then you're free to use just the scp binary if you really need to stick to the legacy version.
It’s not about the time, it’s about the risk of not using the right flags, trusting the build environment, having a process, etc etc. Every time you have to hand-craft anything, from a security perspective, you’re more likely to put a foot wrong in ways that might prove disastrous months or years down the line. This is particularly true for such a basic building block as SSH.
For example, in any decent Linux distro you’d want to replace the package, not a single executable. So at a minimum it would be openssh-clients, which is a pretty important package to potentially get wrong.
But we're talking about an ugly hack by someone who wants to stick to an old program, so why would these things matter? In terms of absolute numbers, I estimate the current scp will be available on server distros like RHEL for the next 5 years. Then for the next 5 years you will be able to use the binary you preserved during the upgrades. So in 2030 you start building scp based on the last version of OpenSSH that supports it (from 2022?). Of course it's going to be suboptimal, and sooner or later you will have problems! The point is, you have this possibility and the program is not going to disappear like it's happening with proprietary services every day.
The problem isn't actually with the transfer protocol itself; it's that the invocation of the remote scp is done using "$SHELL -c <some string>", and that turns out to be somewhat annoying to secure. The other parts (server sending a file other than the one you requested) are really just pretty obvious oversights in validation (when you are doing open(server_response.fname, O_WRONLY) you should really have validated that fname).
That being said, scp-the-protocol is actually very simple. There is no spec for it, but a number of interoperable implementations and the protocol is really damn simple (it's basically goes "file <length of name> <name> <size> write <length> <data> write <length> <data>" and so on). It achieves good throughput (for large files) over SSH, but because every file involves a few ping pongs, it is RTT-bound for small files.
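Based on the commonly cited descriptions of the (unspecified) protocol, the source side's stream looks roughly like this (a sketch, not an authoritative grammar; each message is acked with a NUL byte by the receiver):

```
C0644 1234 file.txt      <- regular file: mode 0644, 1234 bytes, named file.txt
<1234 bytes of data>\0   <- file contents, terminated by a NUL
D0755 0 subdir           <- descend into a directory (recursive mode)
E                        <- ascend back out of it
```

Those per-file acks are exactly the ping-pongs that make it RTT-bound for many small files.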
SFTP is much, much more complicated. And the spec situation is much worse, because there are like a dozen drafts and half a dozen different versions of the protocol. SFTP also pulls in half of the POSIX file semantics. SFTP naively is RTT bound for throughput; read size is limited to 64K in OpenSSH, so with 20 ms RTT you're only going to get at most ~3 MB/s with a naive client.
SFTP is essentially NFS, but over single SSH channel (and different). You get to ask for file handles, and then you can do requests on those handles. You get to opendir() remotely and get a directory handle and so on.
Like NFS, SFTP supports having multiple requests in flight (how many: implementation defined / no way to find out), so you can request multiple reads and wait for them to get around the 64K limitation. Problem: maximum read size is implementation-defined / no way to find out, which makes this really quite complex, since you have to account for reads coming back out of order and for reads being shorter than you requested without having reached EOF.

Say you want to transfer a 500K file in 256K chunks: you schedule two reads of 256K and 500K-256K = 244K. Call them r0 and r1. Now r1 comes back, but it only read 64K (or 8K or 16K or whatever the implementation felt like). Now you need to figure out that (1) you should hold this data back, because the data before the offset of r1 has not been read yet; (2) you need to issue another read to get the contents from 320-500K; at which point (3) you may figure out that the implementation probably only does 64K reads (note: the SFTP read request length field is 32 bit... expectations and all), so you get smart and schedule a few more reads: r2 for 320-384, r3 for 384-448 and r4 for 448-500K. Now you wait for the responses and get, e.g., r3, r4, r1, r2. You need to hold all this data and shuffle it around correctly, then write it in order to the file (assuming you want to write the file sequentially, which is very reasonable if you want to have any chance at all of resuming the transfer).
This is on top of SSH already having throughput issues in the basic protocol over long fat networks.
Curious... I've never even had to think about chunk size when using sftp. It has always "just worked" for transferring files.
What scenarios are you talking about where chunks are important and you have to be concerned about ordering? Is this strictly for applications that perform large sync'ing jobs where "to-the-limit" performance is important?
It doesn't seem like a huge deal to deprecate scp and start using a short stanza of sftp for simple file transfers.
As a user of sftp(1) you don't have to care. You can even specify unsupported buffer sizes, because that sftp client does have all that complexity that I described above built-in. But if you need to interact with the protocol - well it's just a headache, plain and simple. (However, sftp(1) will write files non-sequentially - so you really can't just restart a transfer or you'll very likely end up with a corrupted file).
> This is on top of SSH already having throughput issues in the basic protocol over long fat networks.
Is that why I've sometimes observed slower-than-expected transfers when using rsync over ssh to do a mass migration of server data from one data center to another? Can you recommend an alternative (besides writing the data to external media and physically shipping it)?
I had some fun time coercing the build system to build this on an ARM SBC the other month, but alas, it did not seem to speed things up substantially over an LTE modem. Probably need to play with it a bit more.
There are patches for openssh when using it over LFNs: https://psc.edu/research/networking/hpn-ssh/ (which seems to be in a sad state currently). These patches might interact poorly with other implementations (e.g., it didn't interoperate with paramiko (2 years ago)).
Also, make sure TCP window scaling is working. I was making transfers through a F5 Big-IP which was running a profile that disabled it.
It is (will be) deprecated but not removed for now.
A replacement with a very similar interface is in the process of being made. But it will not have all the features scp had.
You can't fix the protocol because people rely on its exact behavior, including the parts which have security problems. E.g. people rely on backtick expansion to run a command in the ssh session before the copy. But this also can lead to injection vulnerabilities in management scripts.
Rsync has "weird" syntax for a reason. Unlike other unix-like commands, it treats trailing slashes as significant. If a directory has a trailing slash, it means "contents of the directory". No slash means "the directory itself". These are two different concepts, and a program that copies directories should take the difference into account.
scp (and cp, for that matter) don't take this difference into account. That leads to gotchas with recursive (-r) copying. Most importantly, scp isn't idempotent:
scp -r fromdir todir
If todir doesn't exist, scp will copy the contents of directory fromdir to a new directory named todir.
Execute the same command again (now that todir exists), and scp will copy fromdir to todir/fromdir .
On the other hand:
rsync -a fromdir/ todir
will always copy the contents of fromdir into a directory named todir (effectively, a directory rename operation), whether todir exists or not.
rsync -a fromdir todir
will always copy the directory fromdir into the directory todir, whether todir exists or not.
These rsync operations are idempotent, which is important because rsync is designed to incrementally re-sync directories. It is expected that it will commonly be run more than once, which is why it needed to address this IMHO fundamental bug/limitation in cp and scp.
"Rsyncing the directory itself" means making a copy of the directory node (the root of the tree) and naming it with the original name in the new location. That is different than taking the children of the source directory and copying them into the new location.
Scp and cp have the same semantics, so cp is vulnerable to this gotcha as well. Rsync doesn't have to be used with remote sources or targets (nor does scp).
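Since cp shares the gotcha, the non-idempotency is easy to see without any remote host:

```shell
mkdir -p fromdir && touch fromdir/a

cp -r fromdir todir   # todir absent: todir is created as a copy of fromdir
ls todir              # a

cp -r fromdir todir   # todir now exists: fromdir nests inside it instead
ls todir              # a  fromdir
```

Running the same command twice produced two different layouts, which is exactly the trap described above.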
It sees the filesystem the way Cygwin maps it, so what's transferred isn't an exact match for Windows files and attributes.
For example, Windows symlinks and Cygwin symlinks are quite different. I don't think Cygwin rsync can replicate a directory containing Windows symlinks properly, so that the replica behaves the same as seen by Windows programs.
I'm not sure about the other Windows attributes such as ACLs but I would be surprised if Cygwin rsync can replicate those.
very few things can handle stuff like that going from windows->$another_os. You'll just have to compress them down with a format that understands those and preserves them and transfer it. Some of them don't even make sense on other filesystems.
As a user-facing program it literally doesn't matter, it's only a political issue. You install it and you use it. The problem is if you're scared of the presence of GPLv3 for no reason.
Because they are scared of the presence of GPLv3 for no reason. Still a good point. Mac OS also doesn't ship modern GCC or Bash, so they are pretty much at war with reality at this point - or rather - creating their own ecosystem.
every other unix system besides linux used to be like that. for many sysadmins the first action on a new unix server or workstation was to install gnu tools.
apple decided to include those tools by default possibly in order to attract users. then they decided to stop updating and now macos is no better than those other unix systems used to be.
i used a mac for a few years and i don't miss the experience. gnu/linux just works
Obviously, as you say, syntax is not what's being deprecated here, but rather the protocol is being deprecated. Even if they didn't intend to make the new protocol work with the old syntax out of the box, anyone could have easily implemented something that does do that. (Although you can see in the article that they DO intend to do that already, of course)
What are the command line flags that mimic scp’s default behavior? I shouldn’t use the defaults, with no command line flags, should I? I keep reading through the man page trying to figure that out, but the options are numerous and confusing. I love using rsync but the interface is what gets in the way compared to scp. There are too many different rsync workflows that are all given equal weight in the documentation.
I do not believe that it is a desirable target to mimic the default behavior of scp.
In my opinion, the default behavior of any file copy program must be to make exact copies of the sources.
I find it very annoying that all UNIX copying programs do not have this behavior and by default they will lose information.
Therefore I always use aliases for all copying commands (cp, scp, rsync etc.), so that by default they will make exact copies.
For example, to make exact copies rsync needs "--archive --xattrs --acls", and cp needs "--no-dereference --recursive --preserve=all". cp also needs to be compiled with enabled extended attributes, which many Linux distributions disable, otherwise you lose the extended attributes without any warning or error.
Another trap on Linux, which may prevent making exact file copies, is when tmpfs is used for /tmp and some file is copied through /tmp, e.g. for passing it to another user. A copy through tmpfs may lose extended attributes and it also may truncate the timestamps of some file systems.
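The silent metadata loss with plain cp is easy to observe (GNU coreutils assumed; filenames are illustrative):

```shell
# Give the source a distinctive, old mtime
touch -d '2001-02-03 04:05:06' src.txt

cp src.txt plain.txt                  # default: mtime becomes "now"
cp --preserve=all src.txt exact.txt   # GNU cp; add --no-dereference -r for trees

# exact.txt keeps src.txt's 2001 timestamp; plain.txt does not
stat -c %y plain.txt exact.txt
```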
Basic syntax, "options source destination", is similar enough.
scp file.bin server:
rsync file.bin server:
The obvious difference is that rsync defaults to a more unixy no-news-is-good-news output, it will only output errors. To show interactive progress use
rsync -vP file.bin server:
Use "-e" to set ssh options, just like with scp. Use "-r" to transfer whole directories, just like with scp. But you're likely to use "-a" with rsync instead which is recursive preserving all attributes and timestamps.
Another difference is that rsync will resume transfers, which scp can't. It can do sparse transfers reasonably well. It also has more sane security defaults. And, as implied by the name, it can update a remote directory, including deletions.
Check out "man rsync" for the full story. It's not long after your shell and editor in daily usefulness.
On my system, rsync -e sets the remote-shell executable, and my scp doesn't have -e but -o. Even then it's a bit of a stretch (or I'm missing something); e.g. for the ssh option I use most of the time, a non-default port:
scp -P 4222
is equivalent to
scp -o"Port=4222"
is equivalent to
rsync -e "ssh -p 4222"
Again, this is just what I use after having to look it up at one point so I might be missing something, but it's not like the arguments translate one to one?
except the 'almost' can blow away filesystems if not used correctly (trailing slash vs no trailing slash being context dependent on presence/absence of remote files). no idea why rsync used different path semantics - cpdup for example uses identical semantics for a similar (but less performant) utility
exactly. I always do dry runs on folders and files to make sure it's not going to blow away something. Scp is nice and simple and close enough to what I need 95% of the time. I only use rsync when I need a sledgehammer
I'd say it's the other way around: I like rsync, not the least because of the possibility to optionally resume partial transfers, which scp doesn't offer.
And while I use scp rather often, mostly for single file transfers, I still dislike that cp -a is scp -p instead. But my finger memory learned that a long time ago.
ssh uses '-p' to use a non-default port but scp uses '-P' (-p preserves mtime, as you pointed out). scp requires ipv6 to be in brackets (in order to parse the : correctly) but ssh requires it to be without brackets. Makes converting one command line to an other more annoying than it should be.
scp syntax can't be 100% compatible with both cp and ssh. Instead of going for 100% with either, or designing their own flags from scratch, I suppose they tried to pick the best of both.
-p for --port does seem like it's needed more often than -p for --preserve-mtime, but running ssh on non-standard ports was less of a thing when scp was designed.
It generates a lot of editing when converting an ssh command into an scp one, or vice-versa. It's minor, but I deal with raw IPv6 hosts a lot for various reasons and it's a daily annoyance.
What I meant was, cp chose one meaning for -p, ssh chose another. scp can't match them both (so if scp had the behaviour you wanted, someone else who uses cp all the time would be confused by it).
Ah I see what you mean, but in this case they could've gone back and added -P as an alias for -p in ssh for instance. Similarly, it makes sense to make the brackets mandatory for scp to distinguish the path from the address, but then why not make ssh also support this format optionally?
Anyway, it's pure bikeshedding at this point, it's just that it's such a common annoyance for me that I can't help but nitpick.
I'm interested to hear more about why changing syntax is such a huge deal in this particular space.
Computer security evolves over time, and we have to cope with APIs and workflows which change to keep pace. Nothing supports SSLv1 anymore. Telnet is (largely) a thing of the past. Yes these migrations can be painful, and place some burden on users to change their practices. I get that there are a large number of scripts out there which may need to be adapted to work in this new world - but how is that different to any API change? Major breaking changes like this are rare, because the cost to users is high, but even so I don't recall seeing this insistence that things must _never_ change in, for example, glibc.
A syntax change here is probably roughly comparable to changing the semantics of `accept` or similar in glibc. It would generate a ton of what feels like make-work, completely aside from whether or not it is a good idea.
This is very similar to GUI revamps. Users always hate them, even if they're super well-executed.
People just don't like change, and asking them not to grumble works about as well as asking them not to breathe.
Scp is a very low-level, basic building block in the sysadmin world. The reasons for being conservative are the same reasons for not changing the syntax of if() in C.
Yeah it will break a lot of old scripts, I'm guessing someone would just come up with equivalent commands for rsync and build an executable/script around it to use the old scp flags and semantics.
Sorry, that is not how the world works. Stop demanding stuff from people who are giving you things for free. You might have to do things in a more inconvenient way, also for free. Or, if you read the article, someone may gift you a replacement that behaves in the same way and is secure, for which you should be thankful. Perhaps donate some money to the openssh project which is hugely underfunded https://www.openbsd.org/donations.html and which you use daily.
I think you're slightly overreacting to a reasonable point. We all know there's a problem with funding open source but for all you know the person you're responding to might be heavily in credit in that department. We all have opinions about tooling and there's no harm in expressing them clearly.
I'm afraid I may not have worded my original comment well. I was not making demands, I am thankful for the tools I have and use, I do demonstrate that by donating to open source projects as much and as often as I can (but thank you for providing the link to encourage others), and I too as a software engineer contribute my time to such projects, for free, and without want nor need of thanks, but simply to know that my work is making a user's life easier somehow.
It was not my intention to appear to be making demands, I was genuinely asking why deprecate the tool without a replacement. It seems there was a misunderstanding on my part, it is in fact the protocol being deprecated, contrary to how the majority of the article makes the point appear.
As a software engineer, I too have users, and deprecating a simple, functional and easy to use tool is not something I would do without expecting a response, or for them to ask questions about how reasonable that is, whether they pay me or not. Expecting them to use a more inconvenient solution is something we should avoid as software engineers, and instead aim to make their life easier, whether they pay us or not.
I have no issues with switching tools. I have embraced systemd, iproute2, ss and others. Deprecating scp as a tool/syntax, to me, seems more fundamental.
I'd expect no less of a response than if we were to be told that "cp" itself was being deprecated in favour of something which may or may not offer a similar syntax, may require more thought to use and may or may not be available under certain conditions.
I think you are missing OP's point entirely. We already have scp for free. (And thanks!) Deprecating it without providing an alternative is pointless. Are you going to wake up tomorrow and suddenly stop using this incredibly handy tool because some blog post said you should? No.
"[...] without providing an alternative" is a rather strong position. There are multiple alternatives - for many users the underlying protocol changing to sftp will allow them to just keep working as they always did, and they won't even notice the difference. For users copying to machines without a functional sftp subsystem there will be a small amount of workflow modification to do but there are many ways of securely copying files - even if the syntax is slightly different I don't think that means they don't count as alternatives.
Oh man, I really hate this Gnome-style defense whenever users express their opinion developers don't like. Yes, we're grateful, but sometimes we're also upset, and if you try to stifle criticism, you will lose valuable feedback and, eventually, users themselves.
Also musescore: Gets the guy who roasted them on board as Head of UX.
Musescore, but written in Gtk: "That composer just doesn't understand how things are done properly in Musescore. Also, he should shut up and be grateful that it's free."
OpenSSH is primarily a security project, they have provided sftp as a secure file transport, other projects should provide nice interfaces over it, it really is not their aim. This seems to be happening too, which is great.
Good explanation -- at the risk of mindlessly repeating you, my reason for the "always use trailing/ slashes/" suggestion is that the behaviour in that case has (for me at least) always been the scenario desired most often:
The trailing slash on the source expresses that you want to copy the contents of the source in the dest, otherwise you’ll copy the source itself (creating a subdirectory).
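A quick local sketch of that rule, using throwaway directories under /tmp (assumes rsync is installed; the paths are illustrative):

```shell
# Set up a small source tree plus two pre-created destinations.
mkdir -p /tmp/rsync-slash-demo/src /tmp/rsync-slash-demo/a /tmp/rsync-slash-demo/b
touch /tmp/rsync-slash-demo/src/file.txt

# No trailing slash: the directory itself is copied, creating a/src/file.txt
rsync -a /tmp/rsync-slash-demo/src /tmp/rsync-slash-demo/a

# Trailing slash: only the *contents* are copied, creating b/file.txt
rsync -a /tmp/rsync-slash-demo/src/ /tmp/rsync-slash-demo/b
```

Same source, one character of difference, two different layouts on the destination.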
rsync used to be quite a bit slower at some point. rsync optimizes for network traffic and quite often network capacity is abundant. not sure how much of an issue this is with today's solid state drives and fast CPUs.
OTOH there were issues with scp having a fixed tcp window size of 4k (not sure if this is still the case) which would slow down copies on networks with some latency.
sftp has none of these issues. i prefer sftp for most tasks.
Since its release in 2008, rsync 3.0.0 uses an incremental recursive scan that needs much less memory than before and begins the transfer after the first few directories have been scanned.
Most people use scp and not sftp because they prefer the cp-like semantics of the command line tool. I have no idea why OpenSSH's scp does not use SFTP by default and allow an explicit fallback to scp for those systems that don't have the SFTP subsystem enabled.
Also, putty is available in EPEL for the CentOS crowd.
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.9 (Maipo)
# pscp
PuTTY Secure Copy client
Release 0.73
...
-sftp force use of SFTP protocol
-scp force use of SCP protocol
It looks like it has a LOT fewer library dependencies than scp:
I don't appreciate either phoronix or lwn using this clickbait-ey title. The scp command line program isn't going anywhere and (as I understand it) the usual options will remain the same. The news story is about the protocol that runs behind the command.
I use OpenBSD (they are the maintainers of OpenSSH which provides scp), and watch one of its mailing lists, and it seems like they are encouraging disuse of scp for security reasons as mentioned elsewhere here. The Fedora-related discussion here of a new scp that uses sftp underneath, I'm guessing is less likely to make it to OpenBSD soon, for whatever such a rough guess is worth.
I made a separate comment suggesting a way to locally replace scp with sftp, as others have also done with rsync (Ctrl-F here for "progress").
You don't make sense, this is a protocol problem, not a code problem, you can't just incrementally address the issue without breaking everything. Fixing the scp (really, rcp) protocol would incur more churn than just moving to rsync or sftp which are already well established replacements.
No, it's: we already have sftp, which is better, and we have unfixed bugs in scp, so we throw scp out.
>>The scp command is a historical protocol (called rcp) which relies upon that style of argument passing and encounters expansion problems. It has proven very difficult to add "security" to the scp model. All attempts to "detect" and "prevent" anomalous argument transfers stand a great chance of breaking existing workflows. Yes, we recognize it the situation sucks. But we don't want to break the easy patterns people use scp for, until there is a commonplace replacement
No, "sildur" was correct that the "sftp" protocol is not better than the "scp" protocol.
Another poster described in detail the problems with the "scp" and "sftp" protocols. They have different advantages and disadvantages and neither of them can be considered a good file copy protocol.
>It has proven very difficult to add "security" to the scp model.
>>Compared to the SCP protocol, which only allows file transfers, the SFTP protocol allows for a range of operations on remote files which make it more like a remote file system protocol.
> The scp command is a historical protocol (called rcp) which relies upon that style of argument passing and encounters expansion problems. It has proven very difficult to add "security" to the scp model. All attempts to "detect" and "prevent" anomalous argument transfers stand a great chance of breaking existing workflows. Yes, we recognize it the situation sucks. But we don't want to break the easy patterns people use scp for, until there is a commonplace replacement.
The problem is that in many cases those bugs are in fact the intended behavior and changing it would break backward compatibility. I think it is better to leave it as it is and move to a different tool.
You are lucky that someone is working on a rewrite! If it were up to me I would have solved the problem with a nice rm -rf. Be careful what you wish for..
The problems outlined by the article do not at all seem significant enough to merit a full scale deprecation of scp. The article states that aside from file duplication on the source machine, the only security issue is the same as having any SSH access on the machine. The other issue (the bashrc example) is akin to a bonehead move like accidentally typing rm -rf somewhere important. `scp` doesn't need a gun lock, IMHO, just an attentive user, like any command, really.
This comment made me chuckle:
> Then, there is the simple matter that scp is ingrained so deeply into the muscle memory of so many users. As with other deprecated commands (ifconfig, say), it can be hard to make the switch.
Trying to wean myself off ifconfig to ip & netplan has been effing brutal since network provisioning is something I do less than once a quarter.
Similarly, I've been trying to 'train' myself to use rsync since ... checks notes ... ~2004, and I still have to stop, clear my head, and read the man page to figure it out because it is useful for huge transfers, but those are so rare in my flow.
Don't you bring netplan into this! The ifconfig -> ip transition is at least an improvement. Canonical had every chance to use NetworkManager which is battle-tested, has extremely polished language bindings, a cli, tui and multiple GUIs, handles complex configurations like a champ, has a nice connection plug-in system to handle all kinds of less common stuff like VPNs, Bluetooth, Wi-Fi sharing, IPoDNS and a plug-in system to support reading configs in other formats (like maybe YAML for instance). The plug-in ifcfg-rh is the reason why ifcfg scripts all still work.
But no, they wrote this entirely non-extensible basically trivial C utility (that's not a dig, seriously look at the code, it does almost nothing!) that doesn't reduce complexity at all since it's the thinnest possible abstraction over its backends and adds yet another layer of things that might fail. But it's integrated into cloud-init by default so it's easier to just slog through fighting it than to rip it out.
> The other issue (the bashrc example) is akin to a bonehead move like accidentally typing rm -rf somewhere important.
I think you've misunderstood the issue a little bit. Dangerous command is this:
scp admin:boring-spreadsheet.ods .
Here, the user doesn't type anything wrong; however, if an attacker somehow replaces the remote with a malicious ssh server that returns a .bashrc containing dangerous content (such as an alias like `alias ls='rm -rf *'`), then you are basically screwed. So this is a serious issue.
The other security error is quite interesting, if I understood it correctly (scp some-local-file remote:'`touch you-lose`remote-file'). It is basically running the command on the remote server. If that is so, why would a file transfer protocol run a command on the remote server? If an attacker manages to put a file named `rm -rf /` in the directory that one is going to transfer, it may damage the remote system. I think, this part of the protocol really needs to be fixed.
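The mechanics can be simulated locally without any server: scp hands the remote path to a shell on the far end, so backticks arriving in a filename get a second round of shell parsing. This sketch uses `sh -c` to stand in for the remote shell (the /tmp path and filenames are hypothetical):

```shell
mkdir -p /tmp/scp-backtick-demo && cd /tmp/scp-backtick-demo
rm -f you-lose

# A filename containing backticks, stored literally (single quotes).
name='`touch you-lose`file'

# The "remote shell" re-parses the name, so the backticks execute:
sh -c "ls $name" 2>/dev/null || true

ls    # you-lose now exists: the injected command ran
```

The outer shell passes the backticks through untouched (variable expansion does not trigger command substitution); it is the second parse, on the "remote" side, that runs them.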
> I think you've misunderstood the issue a little bit.
I might be misunderstanding this. How does .bashrc get replaced in that example? You mean the hijacked remote returns .bashrc instead of boring-spreadsheet.ods? That means it is only a problem if you are in $HOME (but still a problem). Further, how does the remote complete the SSH handshake if it doesn't have the private key, and if it does, isn't that a bigger problem?
I may be a curmudgeon here on this one but I read that and thought it read like "Chainsaws can cut you so we are going to start recommending only plastic teeth on chainsaw blades."
Sure, if you're on an appliance delivery vehicle instead of a computer, by all means cut off all the ways it could be screwed up by either an ignorant user or a malicious actor, but if it's a computer? Why not deprecate gcc? After all you can compile code with security holes galore with that bad boy, where will it end? Only type safe vendor supplied library languages for you from here on out danger cowboy.
Back when CVE-2020-15778 came out, there was an ongoing thread on openssh-unix-dev@ about deprecating scp. The main problem that I noticed from the dialogue was that the community felt (paraphrasing and summarizing) that "If you support scp on your server, you should know to filter backticks." There was little interest in changing the behavior (understandable - this breaks backwards compatibility) nor in updating the man pages and other documentation to note this was an issue. I felt that the latter was where scp showed its legacy. Security should be built into products by default, not a series of "gotchas" that users manually configure against.
I love the idea of having a 'secure with default settings' version of scp that functions with the same syntax. I think that would be great to get users to use by default and avoid the footguns that come with default scp.
I prefer to use rsync over SSH as a replacement for scp:
alias scp='/usr/bin/rsync --archive --xattrs --acls --progress --rsh="ssh"'
I have completely deprecated for myself the use of both scp and sftp many years ago and I disable the sftp server (in sshd.conf) on all my servers.
While scp and sftp are also slower, usually being unable to reach link speed on fast links, the main reason is that I have discovered that when copying files with them they sometimes were losing file metadata (e.g. parts of the timestamps or extended attributes) without any warnings or errors.
rsync does not have such problems, it can make exact copies even when copying between different operating systems or different file systems.
Because I have stopped using both scp and sftp many years ago, I do not know whether meanwhile there was any effort to remove the bugs from scp and sftp, but I doubt it.
The rsync option --rsh="command_name" or -e "command_name" is required to be certain that "ssh" is used for the transfers.
The rsync program might have been compiled to use "ssh" even if you do not use the --rsh= or -e options, but you cannot know this for sure (unless you have compiled it yourself and you have read the sources to verify that).
Not sure if it's common across all tar implementations, but you can use -C to let GNU tar change directories before performing any operation. So this command is equivalent:
$ tar -cC /source/dir . | ssh rhost tar -xvC /dest/dir
FWIW, while tar isn't even a POSIX command, -C is supported by bsdtar[1] (including macOS), AIX tar (confirmed AIX 7.1), and Solaris tar (confirmed Solaris 11.4). Argument ordering and bundling is really funky on AIX and Solaris, though, especially for extraction.
[1] Seems like everybody except OpenBSD has migrated to libarchive's bsdtar, but OpenBSD also supports -C.
Just a sidenote, this is probably the fastest way to copy lots of tiny files across a high latency link (but without -v [1]). Most protocols transfer files individually, waiting for the server to finish writing it - adding a tiny delay for each file transferred. The tar above simply pushes the files across the pipe as fast as it can without waiting, while the receiving tar can write them as soon as the data is received. Recommended to pair with lzop for fast compression.
1. With -v every filename is written to the terminal, which causes context switches and IO waits. This can significantly slow file copies with lots of tiny files.
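Since the ssh in the middle is just a byte pipe, the tar-to-tar pipeline can be tried locally by dropping the ssh hop (throwaway /tmp paths; `-f -` spelled out for portability across tar implementations):

```shell
# Source tree with a couple of files, plus an empty destination.
mkdir -p /tmp/tarpipe-demo/src /tmp/tarpipe-demo/dst
touch /tmp/tarpipe-demo/src/a.txt /tmp/tarpipe-demo/src/b.txt

# Same shape as: tar -cC /source/dir . | ssh rhost tar -xC /dest/dir
# minus the ssh hop in the middle.
tar -cf - -C /tmp/tarpipe-demo/src . | tar -xf - -C /tmp/tarpipe-demo/dst
```

The sending tar streams continuously and the receiving tar writes as data arrives, which is where the latency win over per-file protocols comes from.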
> If the remote server were to give a response like "here is the .bashrc file you asked for", though, scp would happily overwrite that file instead
That really surprised me. I would have thought the implementation would send a byte stream and that the client program would simply pipe that to the filename passed as the first arg
The problem is that it supports several files per transaction, for example recursively copying a folder from remote to local - then only the remote actually knows the names of the files and it has to send them.
That's not a problem when you actually ask for a directory of content. If you are trying to pull a single file and it results in another file pulled and put next to it then it is certainly surprising.
I see there's no discussion on the loss of being able to select files using server-side state. The SCP protocol is like the only tool that allows that (ignoring being able to use a contrived combination of ssh and tar).
For example, very often I use this to pull the latest file I produced in a server that has zsh set as the shell:
scp -T trustedserver:'*(oc[1])' .
I just pull that command from my history. For the other trusted servers that don't have zsh, I do the following (while being completely sure that the filenames I'm working with aren't directories, don't have newlines, spaces or other shenanigans):
scp -T trustedserver:'$(ls -t * | head -1)' .
If SCP protocol support is completely dropped, the only alternative I know of would be something like this:
ssh trustedserver 'tar c *(oc[1])' | tar x
ssh trustedserver 'tar c $(ls -t * | head -1)' | tar x
That wouldn't show per-file download progress bars, but oh well...
You are right, but the rsync behavior has good reasons, explained in its man page.
Moreover, all UNIX commands have different behavior depending on whether or not you write trailing slashes, at least when the arguments happen to be symbolic links.
To avoid mistakes due to the different behaviors, I use for cp and mv aliases that include the option "--strip-trailing-slashes".
Will there be a directory called dir_a on the server? Well, that depends on whether dir_b exists. Run the command again and the result may be different. That's not acceptable behaviour for a tool meant to keep a remote directory synced.
Jonathan Corbet should clarify what he means when he says "the openssh community" considers scp deprecated because it sounds like he's only speaking for himself. It's kind of clickbaity. Although his technical research is stronger than the CVEs he's citing. CVE-2020-15778 is a particularly noisy one since the author likely hadn't considered rssh shell and should have clarified exactly which configuration options he felt were compromised.
> Jonathan Corbet should clarify what he means when he says "the openssh community" considers scp deprecated because it sounds like he's only speaking for himself.
Jonathan Corbet is an employee of LWN, which produces news on a variety of Linux topics, funded by paid subscribers (like me). As far as I know he's not a contributor to OpenSSH any more than he is to any other project he reports about, unless you know something specific? Or are you mixing this up with someone's personal blog?
He's not just an employee but their editor. A journalist who's smart enough to write kernel drivers should have been smart enough to get someone from the OpenSSH team to go on record or linked to one of their announcements. Because they actually do speak for their community. LWN comes across as promoting their own views.
> The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead.
Who's them? So I did some digging on my own and I managed to uncover this statement from Theo de Raadt:
commit 82ff5eac51d41356a89ceffe2102c69616946320
Author: deraadt <deraadt@openbsd.org>
Date: Sat Oct 3 02:18:33 2020 +0000
split introductory paragraph, and insert ominous words about the glob
issue, which cannot be fully fixed and really requires completely
replacing scp with a completely different subsystem.
team effort to find the right words..
Why doesn't the LWN article talk about that? It sounds like this glob thing is the real deal-breaker for the OpenSSH team.
The CVE-2020-15778 GitHub page attributed it to the OpenSSH team. Corbet quoted the OpenSSH 8.0 release notes too.
The glob issue is the backtick issue. The protocol requires running the remote user's shell to expand globs. Running the remote user's shell expands backticks.
Would it be possible to expand globs in a different way then? Say manually? Or with escaped backticks? I kind of think it's worth breaking people's workflows here, but idk.
Edit: I guess this is what the sftp-based scp does. But I wonder if the problems could be fixed without changing protocols.
Easier would be to disable backticks by default and only allow them with the new -T option, which was added recently to disable strict filename checking.
> I kind of think it's worth breaking people's workflows here, but idk.
Why? As many times mentioned in these replies there are a dozen alternatives if you just want file transfer. Why break argument parsing in scp when those alternatives are readily available?
Seems preferable to evolve it in line with how 99% of people are using it, rather than deprecate it because 1% of people are using it in a dangerous way.
It's also a shame to waste a nice 3-char name that, as the article puts it "is deeply wired into the fingers of many Linux users and developers"
> Finally, while the danger is remote, it is worth noting that a local file name containing `backticks` (a file named `touch you-lose`, for example) will be handled the same way on the other end; if a user can be convinced to perform a recursive copy of a directory tree containing a file with a malicious name, bad things can happen.
If tricking the user is the issue then we should also symlink ssh to chrome. Would be a waste to lose an otherwise fine three letter name to something as dangerous as giving someone direct remote access to a system.
As a unix purist, the "scp" program seems kludgy and inelegant. We already have "cp" for copying files, why do we need another program?
The appropriate thing to do is to mount the remote disk, and then use regular cp as God intended. Fortunately, this is possible today using "sshfs" (but there's still some ugliness in the fusermount implementation).
That's the Plan 9 way. The "everything is a file" model mostly relates to exposing all objects as file descriptors that support read and write operations--specifically the read and write syscalls. IOW, everything is a file means the universal API for all objects[1] is read and write. How you acquire the descriptor is a fuzzy area. Using open on virtual namespaces makes a lot of sense, but not always, and in any event no incarnation of Unix, not even Linux, supports virtual namespaces as comprehensively and seamlessly as Plan 9.
[1] From a systems programming standpoint--kernel resources, IPC, etc. But there's an obvious relationship to, e.g., Smalltalk objects. (Just don't ask me what it is ;)
It turns out, though, that in some situations, especially those where there is little or no trust between the two ends of the connection, the actual level of security may be less than expected.
So maybe we just need to educate people that giving someone scp access is the same thing as giving them ssh access? not that many people even use SCP to begin with, maybe a warning in the config file if someone tries to only enable scp without ssh. Why would you even do that?
The problem with scp is that the trust also needs to go the other way. There is a lot of ways ssh server can trick the client into doing bad things on your local machine.
Eh, scp-the-command (interface) may be here to stay, but scp-the-protocol (implementation) is definitely due for replacement. Under the proposed change, scp will remain the easiest way to transfer files for you. It will just no longer use outdated scp protocol and will use sftp protocol instead.
The nice thing about scp is that it works with any ssh server. The machine you want to scp to might not run OpenSSH, and you do not know whether SFTP was configured on it. If it is not active in the default configuration, a lot of machines won't have it activated.
How do you browse for scp? I rarely can remember the full path to the file I need so I end up with two terminals: one for ssh browsing and one for copying with scp/rsync. I feel I'm doing something wrong here.
Also, what is the correct way to use scp copy remote files with a space in the name? I usually end up just using wildcards (or rsync), but that can behave incorrectly with similarly named files.
$ touch '/tmp/test file'
$ scp localhost:'/tmp/test file' .
scp: /tmp/test: No such file or directory
scp: file: No such file or directory
$ scp localhost:'/tmp/test?file' .
test file 100% 0 0.0KB/s 00:00
Edit: I'll answer my own question, which I clearly never thought about much, since the answer is obvious. I need to escape the space within the string.
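The reason the escape is needed is the same double parsing discussed elsewhere in the thread: scp's remote side runs the path through a shell again. A local stand-in, using `sh -c` as the "remote" shell (the /tmp path is illustrative):

```shell
mkdir -p /tmp/scp-space-demo
touch '/tmp/scp-space-demo/test file'

# Unescaped, the second shell splits the name in two (like the scp errors above):
sh -c 'ls /tmp/scp-space-demo/test file' 2>/dev/null || echo "split into two names"

# Escaping the space survives the second round of parsing, analogous to:
#   scp localhost:'/tmp/scp-space-demo/test\ file' .
sh -c 'ls /tmp/scp-space-demo/test\ file'
```

Quote once for your local shell, once more (backslash or nested quotes) for the remote one.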
I spend my entire work day in the terminal so remembering the full paths of things has become second nature without much effort. If you just drop into the terminal occasionally for file transfers then there's nothing wrong with opening a second terminal to get the paths you want.
I do pretty much the same, although in a single terminal: `pwd` and `ls` on the remote machine to print the dir and file name, then exit ssh and edit the recent ssh command to pull the file, copy-pasting the recent `pwd` and `ls` output.
Luckily, I have to do it very rarely. When doing some more serious file-shuffling - I resort to graphical file managers. Which is, well, even worse than two terminals in this regard :)
Might just be easier to use sftp then depending what options you need. You can ls and get/put files from a command prompt like interface and it uses the ssh protocol as well.
The -3 option is crucial for some of the processes where I use scp. I will have to explicitly download then upload from the intermediary system without -3 and that’s extra work I’d rather not do :)
This is an extremely terse and pointed article about the present day reality of a command that users generally take for granted or avoid altogether.
LWN rarely disappoints, but scp is an obvious next step for beginners upon learning about SSH access, and it's really great to provide context around what is actually happening with it in a manner that actually might be in reach of understanding for beginners.
FWIW here is how I replaced scp commands in my scripts with sftp. (If there is a better way of course I am interested.)
OLD:
scp $FILE "$USER_AT_HOST:/home/username/dir1/"
NEW (a single embedded newline or Enter right after the word "progress" seemed required, though not correctly displayed here, to enable a visual progress indicator; maybe some kind of \n could work instead but didn't immediately for me and this was easy enough in the end):
echo "progress
put $FILE"|sftp -f -p -N -b - "$USER_AT_HOST:/home/username/dir1/"
Edit: I suppose this could be carefully put in a script called "scp" that takes parameters, or aliased, something like others here have done with rsync options (^F here for the word "progress" to see some).
ps: FWIW, I made a script and the line with "progress" works on a single line now, not sure what I was doing wrong before. My contact info is in the footer of the web site noted in my profile, if it sounds useful. But the main line is something like this:
echo -e "progress\n put $1"|sftp -CfpN -b - "$2"
Or, to be able to echo it first for confirmation (bash, typed/untested largely from memory not the actual script but close):
#!/usr/bin/env bash
set -eu; set -o pipefail
if [[ $# -ne 2 ]]; then echo "2 parameters expected"; exit 1; fi
CMD="echo -e \"progress\n put $1\"|sftp -CfpN -b - \"$2\""
The amount of times the "-3" scp flag saved me from waiting tens of minutes is non-negligible. I'd say it's integral to a number of premade scripts out there, so an "scp replacement" without it is bound to make me sad and probably break some stuff
From man scp:
-3 Copies between two remote hosts are transferred through the local
host. Without this option the data is copied directly between
the two remote hosts. Note that this option disables the
progress meter and selects batch mode for the second host, since
scp cannot ask for passwords or passphrases for both hosts.
And unfortunately there are some devices out there that still require scp with no sftp support (copying files to a cisco switch for example). You can also use tftp or ftp, but it's not as handy.
I really want a way to expose filesystem contents to someone via SSH without giving them the ability to execute arbitrary code. Bonus points for confining access to specific directories.
https://github.com/scponly/scponly can be used to prevent command execution and supports chroot. It avoids the server-side scp protocol security issues by rejecting shell metacharacters in commands. It also supports sftp.
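Stock OpenSSH can also do this without a third-party shell, using a Match block in sshd_config. The user name and paths below are illustrative, and note that every directory in the ChrootDirectory path must be root-owned and not group- or world-writable or sshd will refuse the login:

```
# In /etc/ssh/sshd_config: confine one user to sftp inside a chroot,
# with no shell commands and no forwarding.
Match User transferonly
    ForceCommand internal-sftp
    ChrootDirectory /srv/sftp/%u
    AllowTcpForwarding no
    X11Forwarding no
```

Since internal-sftp runs in-process, it works even inside a chroot with no binaries in it.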
In fairness, deprecating a Linux CLI program basically amounts to a tentative suggestion that you maybe try to find an alternative sometime in the next 20-30 years.
"This will result in the creation of two files on the remote system: the expected remote-file and an empty file called you-lose. Adding more interesting contents to that file is left as an exercise for the reader."
If someone has file transfer access to a server, the same outcome can happen, only in a perhaps more contrived way. I don't think this is a problem with scp itself. There should be a more restricted transfer command that disables these options however, for the cases where the extra security is necessary.
> Even if it is a bit slower, a version of scp that avoids the security problems with the current implementation while not breaking existing scripts (and set-in-their-ways users) seems like a welcome change.
I use SCP all the time to transfer files to a VPS. I do not understand this trend of deprecating things that are working (X.org, SCP) with inferior alternatives (wayland, ssh cat etc).
By "deprecate" the article actually means "replace with a wrapper around sftp that has the same basic command line interface as the current scp". Since sftp works just fine but has a lousy command line interface (as the article notes), this would not be an example of the trend you describe.
rsync is not a replacement, sftp is not a replacement. If I can't use cp like syntax, it's not a replacement.
I use scp almost daily to copy things to/from/between remote machines, the syntax is quick and easy.
If the issue is with the protocol, why is the protocol not being fixed or updated while continuing to support the syntax?