More information can be found here: http://www.psc.edu/index.php/hpn-ssh
scp was there from the beginning, or thereabouts. It certainly was present the first time I played with it which was within a few months of the first release.
ssh was specifically designed as a replacement for BSD remote tools (rsh/rlogin/rcp) even falling back to those protocols if ssh wasn't available (although the FallBackToRsh option was removed from OpenSSH a decade or so ago I think). That's also why "slogin" is installed as a symlink to "ssh", to keep it similar to rsh/rlogin. It only made sense for there to be a "scp" to work like the existing "rcp".
You are right that sftp only came later.
If you need to transfer large files around a lot use what the supercomputing centers use: GridFTP.
Would be pretty hairy though. Maybe another solution would be to have some new socket standard that can achieve what both UDP and TCP do, at some kind of variable window size...
Fundamentally, a password is a shared secret. So you send your password to a server, you are trusting that server not to lose or misplace it. In contrast, an SSH public key doesn't require nearly such careful management.
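A minimal sketch of the key-based flow (the file name and server address below are placeholders):

```shell
# Generate a modern key pair; the private key never leaves your machine
ssh-keygen -t ed25519 -f ./demo_auth_key -N '' -q

# Install only the *public* half on servers that should trust you:
#   ssh-copy-id -i ./demo_auth_key.pub user@server.example
```

Even if the server is compromised, the attacker only learns your public key, which is useless for impersonating you elsewhere.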
There are some caveats, such as having to specify which domains/IPs should use which identity, and having to alias ssh to get it to work nicely. So it's not perfect, but it does solve the problem you talked about.
Don't you have to be actively connected to the machine for this to work? i.e. the server I haven't connected to for a few months has no way of opening connections on my behalf at this point, right?
But once you connect, a year later - in those 3 minutes before you disconnect, the attacker might have authenticated as yourself to 100 other machines -- and appended their own key to .ssh/authorized_keys on these machines, so that the compromise no longer needs you to be connected.
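Assuming the attack vector here is ssh-agent forwarding, one common mitigation is to keep ForwardAgent off by default and hop through intermediate hosts with ProxyJump instead, so your agent socket never appears on the shared machine. A hypothetical ~/.ssh/config fragment (host names are placeholders; ProxyJump needs OpenSSH 7.3+):

```
# Never expose the agent socket to remote machines by default
Host *
    ForwardAgent no

# Reach an internal host via a bastion without forwarding the agent
Host internal.example
    ProxyJump bastion.example
```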
Every time I've tried to keep a long-running ssh tunnel for printing / http, the connection degrades after a while. I'm sure there are some flags that can be set, but I thought this was the major show stopper for the "everything over shh" (since ssh uses TCP protocol)
Casually looking at strings /usr/sbin/sshd on my devserver reveals that there is a mode to create a tun device, which sets up a more traditional VPN-ish thing (basically, your computer sees a new interface and routes certain sections of IP space through it; sshd sucks in these packets and forwards them as-is). This results in two layers of congestion control, which (IIRC -- I'm not a TCP guy) congestion control algorithms aren't set up to handle very well.
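For reference, that tun mode looks roughly like this (a sketch; host names and addresses are placeholders, and it needs root on both ends plus "PermitTunnel yes" in the server's sshd_config):

```
# Request tun device 0 on both the client and the server
ssh -w 0:0 root@gateway.example

# Then configure the interface and routes on each side, e.g.:
#   ip addr add 10.0.0.1/30 dev tun0 && ip link set tun0 up
```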
The more traditional mechanism for tunneling, though (the one most often referenced in the posts and things I've seen), is a layer 7 forwarder. Basically it creates sockets to listen for connections on either end on a specific port, recv's data as a traditional network user, and then turns around and sends the data over the multiplexed transport to the end server. It essentially strips away the congestion control on the first hop (because you're just going to localhost anyway) and there's no TCP/IP-packets-in-TCP/IP-packets thing going on anymore, just your application stream inside their transport framing. It's less flexible, but it's good enough if you want to set up a simple proxy or something, and doesn't have the same pitfalls.
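That layer-7 forwarder is the familiar -L/-D port forwarding; a quick sketch with placeholder host names:

```
# Forward local port 8080 to port 80 on a host reachable from the server
ssh -L 8080:intranet.example:80 user@server.example

# Or run a SOCKS proxy on localhost:1080 that exits at the server
ssh -D 1080 user@server.example
```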
If you need the flexibility, something like OpenVPN might be good (though I haven't used it myself).
And you might also want to look at stunnel as an ssl tunnel alternative that lets you have more control over what certs go where without archaic and dangerous /etc/ssh/*_config meddling
The only thing about sshuttle that I've encountered that exposes its non-VPNness is that connections truly originate from the remote system - e.g. if you connect through sshuttle to your peer, the connection goes from the peer to itself on 127.0.0.1. That may or may not be a problem (e.g., logging is much less useful this way).
Try sshuttle. I've stopped using VPNs since I started.
Because "Use X for everything" is a terrible design decision? SSH has a flexible transport with some desirable features and may be underutilized in practice.
This question is starting to feel like people who want to staple every pie in the sky idea to the bitcoin blockchain because it too has a set of desirable properties.
But a lot of things where we use HTTP today, we could be using SSH if we had better library support. Some more ideas towards the end of the post.
It's the firewall rules, not library availability.
(I have tried this; it is both useful and terrible)
$ cat .ssh/authorized_keys
command="tmux new-session -A -s base" ssh-rsa [...]
So, yeah, why don't we use SSH for more?
Fortunately the HPN-SSH patches exist to solve this problem - but I really want to know why there's so much resistance to adding them upstream.
On a WAN, sure, but even a mid-2000s gigabit LAN could routinely hit 800+Mb with scp as long as you had tuned the underlying TCP stack (Linux was poor and OS X/*BSD worse in that era) and weren't using something slow like 3-DES.
> Fortunately the HPN-SSH patches exist to solve this problem - but I really want to know why there's so much resistance to adding them upstream.
Looking at the patches, I'd be surprised if the problem wasn't the fact that they change other things with security or reliability implications. Seeing something like “Dynamic Window and ability to use NONE encryption” suggests that it'd be better to break it up into some smaller generally-useful patches and a separate patch for people in controlled environments who need as much performance as possible.
But the NONE encryption makes a lot of sense from a "use SSH more" perspective. When you need to move piles of totally non-private data, but want to use a secure authentication mechanism (and message authenticity system) for issuing shell commands...
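As I understand the HPN-SSH documentation, the NONE cipher only kicks in after normal (still encrypted) authentication, and both ends must opt in. A sketch of the HPN-specific options (these do not exist in stock OpenSSH; verify the names against your build):

```
# Server side (sshd_config), HPN-SSH only:
NoneEnabled yes

# Client side, per transfer:
#   scp -oNoneEnabled=yes -oNoneSwitch=yes bigfile user@host:
```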
If I had to guess, the reason would be threefold. One, it requires tuning on both ends of the connection to make the high speed transfers work well (and it looks like it will interfere with multiplexing as well, the reason the fixed window exists in the first place).
Two, the "none" cipher which it enables completely undermines the security of the secure shell. The cipher renegotiation after authentication would make it trivial to MITM the connection. You could probably disable the cipher after every transfer by restarting sshd, but that's being a bit optimistic.
Finally, there is no visible license on the patchfile, or on the homepage. The project's sourceforge page lists it as being BSD licensed, but there is no indication from the author that this is the case.
I see over 50MB/s (=~400Mb/s) on a gigabit network without any attempt at optimization, between a stupid Atom machine and a fast machine, both running Linux (which other responders claim is unoptimized). When did you last test?
Also, this might be the speed of the disks at one of the ends - I've never even bothered to check, because making it faster doesn't make a difference for me.
These are not slow machines - i7's and i5's.
I've never been able to find a compelling explanation as to how this could be when other network protocols can manage it just fine.
Something in your setup is broken. I get 50MB/s speeds when Linux is on both ends, and even when Cygwin is on one end.
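One way to check whether the bottleneck is SSH itself rather than your disks is to stream zeros through the connection (the host name is a placeholder):

```
# Pure network + crypto throughput, no disk I/O on either end
dd if=/dev/zero bs=1M count=1000 2>/dev/null | ssh user@host 'cat > /dev/null'
```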
Does anyone know why this is the case? It's always baffled me.
The -v flag will help identify where the connection is being delayed.
ssh -v [user@]hostname [command]
Try connecting with the option disabled.
ssh -o GSSAPIAuthentication=no [user@]hostname [command]
You can globally disable GSSAPIAuthentication in ssh config:
echo 'GSSAPIAuthentication=no' >> ~/.ssh/config
If it's taking you several seconds then you probably have a reverse DNS problem. (By default, the SSH server looks up the reverse DNS of the client when it connects.)
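If you control the server, the usual workaround is to disable that lookup (a config fragment, at the cost of losing host names in your logs; restart sshd afterwards):

```
# /etc/ssh/sshd_config
UseDNS no
```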
In PGP/GPG, key persistence matters because you're using them to decrypt messages. Long after content was created you may need the key to decrypt it.
For SSH, the key is only strictly necessary during the session. Key distribution* (of your public key to systems you need access to) is a bit of a pain, but between having your private key(s) where you need them, and authorized keys on various servers, there's not all that much to worry about. Host keys, perhaps, if you want to be rigorous about security.
TLS is designed to serve multiple services on multiple hosts. It depends on your browser trusting an intermediary (a certificate authority) which vouches for the host's key, so (in theory) the initial connection can't be MITM'd, and the key can change at any time or there can be multiple keys (which is needed for hosting multiple services on multiple hosts). TLS is designed to integrate tightly into an application.
When you compare the two protocols, TLS is clearly superior to SSH. But in terms of the features they support (tunneling, authentication, etc.), it's up to the server to add missing features outside of the protocol to provide what the client wants to do.
For example, the SSH protocol basically provides an encrypted connection through which you can do whatever you want, similar to TLS. To do IP tunneling with SSH the application server activates extra functionality to connect the encrypted session to a driver which opens an IP tunnel. Or to authenticate your ssh session against a kerberos server, the ssh server does the actual kerberos authentication; the protocol just informs the client of what 'basic' methods they can use, and the client tries to use one that works with the server's methods.
Incidentally, the TLS protocol supports client certificate authentication, which provides similar functionality to SSH's public keys. HTTP also supports public key pinning (HPKP) via response headers.
www.bank.com is 18.104.22.168 with pub key XYZ
Or is this already implemented and I am too stupid to find it?
Btw previous HN thread about ssh-chat here: https://news.ycombinator.com/item?id=8743374
My favourite one: http://en.wikipedia.org/wiki/SSHFS
Whenever I'm developing for my mobile phone I actually have the contents mounted on my desktop via sshfs as an actual filesystem.
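For anyone who hasn't tried it, a minimal sketch (host name and paths are placeholders):

```
# Mount a remote directory over SSH
sshfs user@phone.example:/sdcard ~/mnt/phone

# ...edit the files as if they were local, then unmount:
fusermount -u ~/mnt/phone
```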
What does that mean? Don't bother with FTP servers; SSH access to your server is all you need.
"Why aren't we using a protocol designed to add encryption to pseudo-devices emulating a real text terminal device for everything?"
Authentication is based on some sort of shared secret rather than keys though, yes.
See also http://www.psc.edu/index.php/hpn-ssh
On top of the fact that they're entirely different protocols and tools designed for entirely different purposes, browsers already support virtually everything SSH does. File transfers, authentication, client certs, multiplexing, key pinning, etc. There is no need to use SSH, and if you did, it would be slower, less secure, and generally more annoying than using the existing tools built into browsers.
I find it aggravating when users read the manual to some software and think they have discovered fire.
Your browser supports those things because it implements protocols, like HTTP, TLS, and others. Your browser, or other tools, could support SSH, which I think is the point of the article.
The article's title is "Why aren't we using SSH for everything?", not "Why don't browsers support SSH?". Both make no sense. It's like asking why FTP clients don't support Voice-over-IP.
Unfortunately neither of these things are commonly used yet. Cloudflare is adding DNSSEC support soon, so hopefully that will change.
Actually bothering and having a secure way to verify the fingerprint are exercises left to the reader.
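For reference, the out-of-band check amounts to printing the fingerprint on the server over a trusted channel and comparing it with what the client shows on first connect. A runnable demo, with a freshly generated key standing in for a real host key:

```shell
# Generate a stand-in host key (on a real server you would instead run
#   ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
# on the console or over an already-trusted channel)
ssh-keygen -t ed25519 -f ./demo_hostkey -N '' -q

# Print its fingerprint; compare against the client's first-connect prompt
ssh-keygen -lf ./demo_hostkey.pub
```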
On the other hand, every web browser that visits a site for the first time (unless it was pinned in the browser you downloaded) attempts HTTP before HTTPS, and is thus vulnerable to all sorts of attacks. All non-pinned HTTPS connections at any time can be mitm'd by a global adversary that generates a cert using a CA your browser trusts.
So technically SSH is much easier to catch being MITM'd, since you only need to worry about the first visit.
Which threat model you prefer to deal with is up to you.
The problem is further back: the default configuration doesn't make it easy to avoid these problems in the first place. If I'm logged in and do something which will change SSH host keys or the like, then the default needs to provide a way for me to make that information easily available globally to other clients which might be aware of this system.
A security-conscious client can manually verify a host's SSHFP record with dig. You'll also want DNSSEC in place to ensure the DNS hasn't been spoofed either.
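A sketch of the workflow (the host name is a placeholder, and the demo key stands in for the server's real host keys in /etc/ssh; ssh-keygen -r is the standard way to emit SSHFP records):

```shell
# Generate a demo key and emit SSHFP resource records for it
ssh-keygen -t ed25519 -f ./demo_sshfp_key -N '' -q
ssh-keygen -r host.example.com -f ./demo_sshfp_key.pub

# Client side, against the real DNS:
#   dig +dnssec SSHFP host.example.com
#   ssh -o VerifyHostKeyDNS=yes host.example.com
```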
Because it doesn't support virtual hosts. And I can't afford 30 IPs for my server.
Otherwise, it's a great protocol.
The NSA also has a program with which it claims it can sometimes decrypt the Secure Shell protocol (SSH). This is typically used by systems administrators to log into employees' computers remotely, largely for use in the infrastructure of businesses, core Internet routers and other similarly important systems. The NSA combines the data collected in this manner with other information to leverage access to important systems of interest.
(source: http://www.spiegel.de/international/germany/inside-the-nsa-s... )
Still incredibly vague. If they're archiving all traffic in hopes of decrypting it some day, though, it's safe to say we should treat anything on the internet as the shiny side of one-way glass.
From the slides:
Page 19: "SSH [...] Potentially recover user names and passwords"
Page 36: "SSH - often have router configurations and user credentials [...]"
The only unexpected thing I could see there was the lack of anything I didn't know how to do...