Why aren’t we using SSH for everything? (medium.com/swlh)
308 points by joshmanders on Aug 3, 2016 | 114 comments



Hey everyone, thanks for dropping by the main ssh-chat server (chat.shazow.net). That one is running a fairly old version of the software (/uptime -> 13579h33m20.784837542s).

I just deployed the latest Github release on an east-coast server here, please come help test it:

  ssh chat2.shazow.net
Binary releases are available here if you're interested in running your own: https://github.com/shazow/ssh-chat/releases/


Is chat2 dead? I entered a sentence in Chinese and now I can't see anything after login or re-login.

And it's so dead that the ssh process has to be killed with signal 15; ^C and ^D won't work.


^C doesn't work because the SSH client is passing it through to the remote server (which is normally what you want), not because the client itself is stuck.

ssh has a little-known "escape character" feature for situations like this. You can type <ENTER> ~. to tell the client itself to exit, or <ENTER> ~? for a list of other escape commands.
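
For reference, here's roughly what <ENTER> ~? prints (typed at the start of a line; the exact set varies by OpenSSH version):

    ~.   terminate the connection
    ~?   list escape sequences
    ~C   open a command line (add or cancel port forwardings)
    ~^Z  suspend ssh
    ~~   send a literal ~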


It doesn't hurt to mash <ENTER> a few times, either, just to tell ssh that you're serious. I used to have it not work occasionally until I started mashing -- probably because it passed the newline through and I didn't make it to ~ fast enough, so whap-whap-whap ~. is my muscle memory for killing ssh now.

This is extremely useful information if you do a lot of work on remote servers, particularly when you restart networking and lock yourself out. You can get your shell back without having to kill the terminal or open another to kill ssh, which I see a lot of people do. It's too bad it's little-known.

~C is useful too so you can alter your forwarding setup (including cancelling open forwards) without dropping the connection.
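
In case anyone hasn't tried it, a rough sketch of the ~C prompt (the forward specs here are just examples; -L adds a local forward, -KL cancels it, help lists the rest):

    <ENTER>~C
    ssh> -L 8080:localhost:80
    ssh> -KL 8080
    ssh> help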


It doesn't hurt to mash <ENTER> a few times

Do you happen to be an older North Carolinian? I don't think I've heard anyone use "mash" in this sense since being a bewildered Northerner in an NC elementary school in the 70's when a teacher asked me to "mash the lights". Apparently some light switches once used buttons, and "mash" was (and possibly still is) a regionalism for "press".

Is it used elsewhere too? Personally I'd use "mash" to describe preparing potatoes, and would need it for brewing beer, but I wouldn't have considered that it might be applicable to escaping from ssh. Although searching now, I see that it also seems to be a term of art for rapid button pressing in video games. And I see suggestions that it's also used in the West Indies.


>Do you happen to be an older North Carolinian? I don't think I've heard anyone use "mash" in this sense

I don't think it's quite that regional - I've heard & used it in this sense (british, not that old).

Not sure where it came from originally (people use all sorts of odd terms to describe pressing buttons, 'punch' being one I find weird).

I can think of one obvious pop reference, from The Simpsons[1]

[1] https://www.youtube.com/watch?v=OqjF7HKSaaI


As a New-Englander, the term mash comes naturally to me as meaning 'press rapidly.' But I'm also a gamer, so I may have picked it up there.


Yeah, the phrase "button mashing" is fairly common in gaming.


Would you say "mash <up-arrow> several times until" or "mash <up-arrow> until"? That is, in gaming terminology, would you ever use "mash" to describe pressing a key only a single time? Or does "mash" always describe repeatedly and rapidly pressing a key?


Mash, as it did in the above context, only describes pushing a button rapidly and repeatedly. It can be used interchangeably with spam, but usually implies a shorter time frame, and when using spam, you usually name the action, not the button.

To demonstrate the difference:

"After you get knocked out in punch out, mash A to get back up"

"In Quake 2, spamming rockets to protect entryways is the only use for the stupid things."


Getting a connection refused error:

    ssh: connect to host chat2.shazow.net port 22: Connection refused


There's a bug that is crashing it, alas.

https://github.com/shazow/ssh-chat/issues/166

Pretty sure the bug is in the autocomplete function, if anyone wants to tackle it. I might disable it for now.


Jerks are also piping /dev/random into it, looks like that may be crashing it.


I guess we can't use ssh for everything.


That's not ssh's fault; you can pipe /dev/random into just about anything and get the same result. Rate-limiting isn't the job of the protocol, in this case - it's the job of the chat server application.


The lesson is that you can't authorize indiscriminate access to public services on the internet for single-factor authenticated users.

edit: Ugh, Wordy. Shorter: Identity is more than a public key.


Also, security is more than identity. You still need to be robust against malicious input even if you know exactly who all your users are.


Indeed. The third leg that solves this problem is known to the industry as Accounting.


As tempting as it is to use SSH everywhere (for exactly the reasons outlined here), other protocols exist because they usually have some form of marginal gains over the alternatives, and marginal gains matter at scale.

Encryption isn't always something that's needed (arguably; though I'm all for enabling it everywhere it can be enabled), and in places where it's not critical, it adds performance overhead both client- and server-side.

SSH also has to be shoehorned into distributed chat applications, for example, else the interface deters inexperienced users and, again, almost inevitably, overhead is incurred. Using PGP for authentication and encryption in this case is a saner choice because it supports decryption by groups of recipients, and implementing a protocol over UDP (or even TCP) is plenty.

EDIT: Just to be clear, though, there are some pretty cool nonstandard uses of SSH out there. Medium OP's chat server is a great example.


> As tempting as it is to use SSH everywhere (for exactly the reasons outlined here), other protocols exist because they usually have some form of marginal gains over the alternatives, and marginal gains matter at scale.

I heard people "at scale" also use microservices over HTTP. Or even REST for mobile apps, where latency is a real issue.

So I am not buying that part. I think HTTP (over TCP) has won because the web has won. As always, technical people overestimate the importance of the technical aspects of a product, and thereby their own importance, when really it is all about usability and network effects.


>I think HTTP (over TCP) has won because the web has won.

For sure. If I were building a mobile app that had to talk to a server, a RESTful API would be the first thing that comes to mind.

It's not the most lightweight protocol, but it's the protocol with a structure and interface that's designed for this -- making it a natural choice.

You could to a certain extent implement a REST-like API over SSH, or UDP for that matter, but that takes extra effort, and the fact of the matter is that many developers are just working on a higher level of abstraction, and lack the expertise to build a new protocol. (My reasoning is that the lower barrier of entry into software development has created many developers with skillsets specialized in the use of more "modern" technologies.)

TL;DR -- IMO, using verbs to act on resources is easier to grok than working in terms of packets for most developers, and is sufficient in most situations. I still believe there are better alternatives for high-performance networking.


There are also some significant vulnerabilities [1] in SSH by design / by default that must be disabled if used everywhere.

[1] https://news.ycombinator.com/item?id=11052745


It is very easy to write a virus that is not detected by any anti-virus. I don't see how dropping an SSH key on some machine is worse than dropping a remote shell on it.

It's not like SSH keys are stealthy or something - any sysadmin worth his salt will occasionally look at his authorized_keys list.


I use an SSH CA and it solves this problem.

It's simple to set up. You first create a CA, and then you sign the server's public key and the client's public key, and you take the resulting certs (alongside the CA's pubkey) and move them over to the machines. And I think it's important to note that you can do this process on any machine-- perhaps an air-gapped $30 Raspberry Pi, or if you're less paranoid you can do it on your laptop, but the point is that you can generate keys in bulk, offline, and move them over to dozens of machines in one fell swoop.
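
For anyone who wants to try it, here's a minimal sketch of that workflow with ssh-keygen (file names, identities, and hostnames below are just examples):

    # on the offline machine: create the CA key pair
    ssh-keygen -t ed25519 -f ssh_ca

    # sign a server's host key (produces ssh_host_ed25519_key-cert.pub)
    ssh-keygen -s ssh_ca -h -I web01 -n web01.example.com /etc/ssh/ssh_host_ed25519_key.pub

    # sign a user's key (produces id_ed25519-cert.pub)
    ssh-keygen -s ssh_ca -I alice -n alice ~/.ssh/id_ed25519.pub

Servers then trust user certs via TrustedUserCAKeys in sshd_config, and clients trust host certs via a @cert-authority line in known_hosts.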

You still need to maintain an authorized_keys file in your homedir with your client public key, which means if someone was able to get the CA key they still wouldn't be able to get into the server without first getting the key that corresponds to the entry in your authorized_keys file. And likewise, if someone was able to get access to your user key, they would be able to login with it but wouldn't be able to add a backdoor key to the authorized_keys file unless they also had the CA key to sign it with, because both the client and server check the other's certificate against the CA pubkey.

Of course, if you see ssh keys as a panacea and subsequently use passwordless sudo or a weak enough password, they have root and can just edit your sshd_config and this was all for naught. But it does solve the problem of an unchecked authorized_keys file.


That helps, but you would still need to disable ssh multiplexing on all of your servers.


Why?


Any process running as you can piggy-back on those default-enabled channels without authentication or logging on the destination servers.
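
If the worry is the connection-sharing (ControlMaster) sockets specifically, one server-side mitigation I know of is capping sessions per connection in sshd_config; take this as a sketch rather than a complete fix:

    # sshd_config: setting this to 1 effectively disables session multiplexing
    # over a single network connection
    MaxSessions 1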


I wish everyone checked their authorized_keys file like you do. Sadly, this is rarely the case across the board, based on my experience. People are lazy. There are good aspects of being lazy, but this is one of the sub-optimal cases.

In summary, this methodology would not likely be detected even in places where folks and admins are quite vigilant. The behavior and usage is expected. If I were unethical, I could gain access to thousands of companies by simply emailing a link to github and saying, "This script is giving me errors, what am I doing wrong?" Using the default settings in ssh and sudo, I can access all of their systems with no syslog entries and gain root to anything they have sudo on.


> any sysadmin worth his salt will have automated the population of his authorized_keys file and alert on any unauthorized changes.

FTFY


That appears to be surprisingly uncommon. By that I mean maintaining the authorized keys file at all.


(2015) Previous discussions:

https://news.ycombinator.com/item?id=8828543

https://news.ycombinator.com/item?id=11516582

It's been posted twice in the last day, I wonder where it reemerged.


I saw it on Lobste.rs and did a quick search for the url but it returned nothing. Medium articles suck with that hash they put on. Thought it was worthy of submitting here.


I thought it was a good read, oh and small world eh?


> small world eh?

Haha indeed it is!


I've been pushing a bunch of changes on Github, maybe that's how somebody noticed it?


While SSH seems like the ideal console application delivery platform, it suffers in a major way: no local execution.

This matters when latency matters. Processing all keystrokes remotely sucks over high-latency links like airplanes, cell phones, satellites, etc.

This can also be a pro. JavaScript is a mess in terms of design, increased client complexity, and security. It depends on what your requirements are.


This is actually one area where I like our IBM i system. The 5250 console may be extremely lacking in features compared to, say, a VT100, but the block-based console means the remote server doesn't have to handle redrawing or deal with input until the user explicitly requests an action. It's pretty amazing to see a thousand users connected to the box and none of them have any latency when inputting data (getting a new screen back from the server may take a couple of milliseconds), because it's all done locally until they press a function key or some other action button that pushes the data off to the server.


The same description would apply to HTML forms in the browser. In fact there are HTTP gateways that translate IBM forms to HTML on the fly.


Mosh [1] helps in this case: it buffers the keystrokes and relieves user frustration by giving immediate visual feedback.

[1] https://mosh.mit.edu


Yeah, but it's not SSH anymore (except for the authentication), it's a custom protocol based on UDP.


Using compression with the -C argument helps keystroke buffering on high-latency connections and also protects against several crypto attacks.

It also makes sense on LAN connections, instead of sending one character at a time...
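
If you'd rather not type the flag every time, the equivalent in ~/.ssh/config is just (host pattern is an example):

    Host *.slow-link.example.com
        Compression yes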


> Why aren’t we using SSH for everything?

Because when you have 100,000 users and you need to rotate your host key following a leak, the team in charge of user retention has a heart attack.


Informing users when your keys were leaked is more important than user retention. Hiding your weaknesses does not build trust.


You're missing the point: SSH does not provide a host key rotation/revocation mechanism. TLS does.


It does provide host key rotation in OpenSSH 6.8+: http://blog.djm.net.au/2015/02/key-rotation-in-openssh-68.ht...
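
For reference, the client opts into that mechanism with the UpdateHostKeys option (OpenSSH 6.8+), e.g. in ~/.ssh/config:

    Host *
        UpdateHostKeys ask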


This technique assumes the operator is OK with taking a few weeks for all users to register the new host key. In the event of a leak, revocation of the leaked key must be immediate. That's why we have CRLs (and OneCRL) and OCSP.


You can use cert signed keys with ssh.


You said that you are afraid of losing users after a leak, which implies that if you don't have to, you won't tell your users about the incident.


No. I said that changing the key, which must happen, will trigger a warning that will scare users away. Which is (partly) why no one uses SSH with non technical populations.


No, it's that the fix is client-side and painful. They could tell their users to fix it, but the users would rather leave than do it.


I thought users would be prompted to accept a new fingerprint. Am I missing something?


If by "prompted" you mean:

    @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now!

Then, yes, users will be prompted in the scariest possible language.

Compare this with HTTPS/browsers, which care not a lick if a certificate/fingerprint is different, provided that the certificate is valid and the chain is trusted.


Which --coincidentally-- is exactly the problem with it.


More important to the users probably, or at least it should be. Not remotely close to more important to the investors.

I certainly agree with you that I would like things to be this way, to live in that world with you. But we don't.


> When you join ssh-chat, not only do I know who you claim to be, but I can also permanently and securely attach an identity to your connection without any user intervention on your part.

A great part of IRC is that there's no registration. There's no identity.


There doesn't have to be registration or a consistent identity with ssh, when it's used as described in TFA. Just pass a different identity file with "-i".

Of course, most uses of ssh would use a preestablished account on a host and that wouldn't vary, but there's nothing in the protocol requiring that.
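
i.e. something like (the key file names here are made up):

    ssh -i ~/.ssh/id_alias1 chat.shazow.net   # one identity
    ssh -i ~/.ssh/id_alias2 chat.shazow.net   # a completely separate one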


That's... not true at all? Nickserv and Chanserv are essential parts of modern IRC and in practice people need a consistent identity to interact with most channels.


Most? Essential? Not really. I can hop into most of the channels with a different nick every time. Nobody cares. I can even use the same one without registering it as long as nobody else does that. I can open a chan without being registered and enforce rules in there.

The only time I need to register something is when I want to keep the channel alive and still be OP when I come back, when I want to keep my nick or when the chan owner requires me to register.

Which is all not "most channels".


Some channels ban non-registered users to protect themselves against spam.


You don't need chanserv and you don't need nickserv. Efnet isn't dead -- it's just coughing up blood. (yes, it has ChanFix).


You must be from EFnet. Hello! :)


SSH lacks a key distribution/certification model for server authentication.

The article describes server authentication (in the section with the misleading title "SSH connections are encrypted") but does not mention that the fingerprint should be sent over another secure channel and manually verified.



In addition to the fact that OpenSSH has supported certificate authentication of both servers and users for years -- the easiest way to bootstrap "traditional" ssh keys is probably just to list the public ssh host keys on a web page secured by HTTPS/TLS.

It'll be just as insecure as the rest of the x509 CA-based cluster-fuck -- but at least it's easier to automate and less error prone than manually checking fingerprints.

So, not saying you're wrong, but while the advice regarding "fingerprints" is technically correct, just adding "the correct" keys to known_hosts is probably the more practical -- and in practice more secure -- solution.

That is of course assuming that, like the rest of the world, you find setting up an actually secure solution using ssh-keygen to be too much work (I'm guilty of this myself; for my personal servers it's easier to just stick with manual keys - it's also hopelessly insecure and inconvenient in the case of a breach or other reason to rotate keys).


Assuming the server you are connecting to has DNSSEC, SSHFP is the tailor-made solution for this problem.
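
Roughly, that looks like this (hostname and key path are examples, and the client needs a validating resolver):

    # on the server side: generate SSHFP resource records to publish in DNS
    ssh-keygen -r host.example.com -f /etc/ssh/ssh_host_ed25519_key.pub

    # on the client side: check host keys against those records
    ssh -o VerifyHostKeyDNS=yes host.example.com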


"It doesn't work because you generated your key with RSA instead of moduli-2048" is not a conversation I want to have with my mother. Until the tools to deal with SSH keys are a lot more user friendly, it won't be used much by the wider public.


Agreed. How easy is it to remember a password vs. keep a sensitive file (in an obscure place) for years?


Is the file a word doc in c:\Documents and Settings\PC OWNER\desktop? :)


Regarding the question, I use SSHFS to mount directories on remote machines: https://en.wikipedia.org/wiki/SSHFS
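
e.g. something along these lines (paths are just examples):

    sshfs user@host.example.com:/srv/data ~/mnt/data   # mount
    fusermount -u ~/mnt/data                           # unmount (on Linux)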


sshfs is so simple; I don't know why people seem to default to using nfs.


performance. sshfs is dog slow compared to NFS on a local network.

Also, coherence: sshfs manages to hide a lot of the ugly details, but you'll meet coherence problems on sshfs much more often than on nfs. They both do aggressive caching, but nfs gets help from the server side and is thus usually better.


Both good reasons. My personal use case happens to make these issues unimportant: mounting remote file stores that are only accessed by me. But I guess for multiply-accessed files on the local network, nfs headaches may be worth the trouble.


> performance. sshfs is dog slow

I'd started using sshfs for something and found this slowness issue. I asked my mentor about what could be done to improve it, and he said "If you're using sshfs, you've already lost".

'scp' is very slow as well. Anything bigger than a small text file gets rsync'd. Small text files get scp'd, because remembering rsync flags is a minor annoyance :)
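
For reference, one invocation that covers most of it (paths are examples):

    rsync -avzP bigfile.iso user@host.example.com:/tmp/
    # -a archive mode, -v verbose, -z compress in transit,
    # -P show progress and keep partial transfers for resuming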


If anyone is interested in seeing another interesting service, our organization is currently developing user onboarding over SSH (WIP). All sources are public [1] and easy to follow/read. It's written in nodejs and has a nice blessed/blessed-contrib based UI.

[1] https://github.com/hashbang/signup-ssh


The hashbang people are doing a LOT of interesting things. What's nice is that they aren't waiting to get the whole internet onboard with their ideas, they are building the services now.

The best part is that (theoretically), you could install their software on a machine of your own and it would then be capable of providing services to their users.


> Or better yet, ZeroMQ-style sockets with proper security and encryption?

ZeroMQ supports CURVE encryption + authentication as of 4.0

http://hintjens.com/blog:49


If you look around there are cases of using ssh for various things, but at the end of the day HTTP is the protocol of the masses; you aren't going to beat its adoption.

Plenty of NetHack servers that you can ssh into here: https://nethackwiki.com/wiki/Public_server

Another roguelike: http://crawl.develz.org/wordpress/howto


I added SSH authentication to the roadmap for the API in one of my projects. I love the UX of the first connection from a new user acting as registration.

When I looked for information on how git SSH servers are implemented I did not find enough information to get started.

Can anyone recommend reading for someone getting started with an API over SSH?


I looked at it a while ago using OpenSSH's AuthorizedKeysCommand feature and found Phabricator's implementation for vcs over SSH to be the most understandable thing I could find.

https://github.com/phacility/phabricator/blob/master/resourc...

https://github.com/phacility/phabricator/blob/master/resourc...

https://github.com/phacility/phabricator/blob/master/scripts...
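
The core trick, roughly, for anyone starting out (the helper script names and paths below are made up, not Phabricator's):

    # sshd_config: ask an external program for a user's authorized keys
    AuthorizedKeysCommand /usr/local/bin/lookup-keys %u
    AuthorizedKeysCommandUser nobody

    # that program prints authorized_keys lines, typically with a forced command, e.g.
    command="/usr/local/bin/api-shell alice",no-pty,no-port-forwarding ssh-ed25519 AAAA... alice

The forced command then speaks whatever protocol your API needs over stdin/stdout.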


Why aren't we using ipv6 with ipsec for everything? Then we could just stick with rsh, telnet and http...

I honestly think that the time has come to turn back the clock, and move forward with that approach. We've time and time again proved that "mostly encrypted" is almost impossible to get right, and almost never what we really want. We've also shown that we really do care about authenticating the servers we communicate with.

With the recent advances in cpu power, ram (and custom hardware) -- there's not much reason not to just encrypt and authenticate all the things any more.

In the 90s, doing full ipsec and ipv6 everywhere was expensive: new switches and network hardware was needed, and the overhead of encryption was way too high.

But now the time should be right. Still not all that hopeful that we'll actually get there, though.


No comments here because everyone is chatting about it at chat.shazow.net?


Pretty much. :)


Excellent explanation of how ssh works along the way, to boot.


Until a recent version you couldn't even have a .d directory to hold a bunch of config files to simplify contacting your servers.


Oh neat, is that a thing now?

I ask because I hacked up something a couple years back to let me manage files in ~/.ssh/config.d, plus a Makefile that produced ~/.ssh/config from them and a set of shims for OpenSSH commands which would run make before running ssh, scp, etc., to make sure the live config stayed up to date with changes in .d files. But I haven't used that pile of shell scripts or looked at it in a while, so it's probably started to smell a bit, and it would therefore be nice if this were a thing OpenSSH itself can now do.


Oh neat, is that a thing now?

Yeah, since this Monday: http://www.openssh.com/txt/release-7.3
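
For reference, the relevant addition is the Include directive in ssh_config, so something like this now works:

    # ~/.ssh/config (OpenSSH 7.3+); relative paths are taken from ~/.ssh
    Include config.d/*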


I think everyone that uses ssh heavily ends up making some form of that script.

Now I'm waiting for Apple to adopt a newer version of openssh


I started writing a FUSE fs specifically for this (and other fun dynamic .ssh/ things), but never got round to finishing it off.

The wildcard and (very recent) .d support has mostly solved the need now, unless I can remember what exactly it was I couldn't get it to do before.


Because inventing new protocols is cooler.


Sadly, we don't even invent new protocols any more, at least not open ones. What we do is invent new, closed platforms like Slack (and HipChat and Campfire before it).


Besides the obvious hype and hipster “this time it's different” appeal, is there any technical reason one would prefer Slack over, say, XMPP? I looked into it only until I realized one has to pay to access chat history.


Slack gives you a unified chat stream and a simple, buttery-smooth onboarding process for the tech-naifs. Jabber does neither of these. I'm not fond of Slack in particular, as it's ridiculously expensive and the main chat window is too widely spaced, even with mods.

I use jabber with a friend, and frequently messages go to his other laptop, which he doesn't see until he logs onto it. Same in reverse. Similarly, history isn't unified. You can get around these problems by having your own centralised jabber chat client (like bitlbee), but that's a technician's answer, not an answer for the general public.


And even the open ones get saturated in proprietary extensions in no time...


And >90% of the cases detailed here are covered by an IRC server with TLS and certificate authentication, and you can even use it on a mobile device with the kind of humane, task-focused UI that a general-purpose SSH client can't provide.


It's great if you can leverage the SSH keying system, but using SSH "raw" for anything high-bandwidth is a disaster (try using sshfs on a high-latency network; you will get two-digit kilobytes per second when you know the network connection can sustain a megabyte per second).


SSH chat does not ensure that "I know who you claim to be". Only that "I know you're someone who has access to the same SSH key as that someone with whom I was talking yesterday".

In terms of authentication, that is about on par with an IRC "nickbot".


There's nothing preventing you from transporting an SSH public key over some other secured channel. Then it's "I know that you are someone that has access to the SSH key transported over that secure channel".


That's TOFU-POP, and it's perfectly reasonable for many situations. For example, HN.


For others who've never before run into that particular acronym, it glosses per [1] to "Trust On First Use/Persistence of Pseudonym".

[1] https://lists.gnupg.org/pipermail/gnupg-devel/2014-March/028...



I made a toy SSH API thingie a couple of months back; https://github.com/jagheterfredrik/sheet


Round trip times. It makes some things impossible with that much interchange required to send anything - and it will never go away, because the speed of light is a hard limit.


We're not using it because... well... it's weird (for lack of a better word)? Sure, it's a multiplexing protocol that can support a few neat use cases like port forwarding and X11 tunneling, but that's really not what we want in most cases. It carries a lot of baggage in its clients and implementations that in principle shouldn't cause problems but in practice makes it suboptimal.

One of the most important aspects of why we don't use it: it's scary to work with. SSH's implementation of its protocol is much closer to the actual client implementation than the divide traditionally maintained between SSL and its clients. People are rightly afraid to get close to cryptographic code without sufficient training, and so there is the equivalent of a "If You Are Responsible: Keep Away" sign for most developers who might be inclined to improve the situation.

Still, the community found the impetus to augment the protocol and UI to help deal with latency issues that plague SSH via Mosh. It helps escape the legacy idea that RTTs will be consistent and therefore are ignorable in a shell implementation.

Ultimately, SSL is a better understood tunneling protocol and full of a lot less... maybe it's just the perspective we have in a world where SSL is ubiquitous but SSH is weird. Its rules are weird, its certificates are weird, its tooling is weird.

This is a common contention on Hacker News that I'm often on the minority side of, but I submit that while syntactic interfaces (shell, programming) have clearly demonstrated their value and deserve to be better developed, the same cannot be said for serial text entry interfaces. Something based around a streaming query protocol like SSL+HTTP/2 that makes some fundamentally different decisions than the legacy UNIX shell and supporting tooling would be substantially better, and would be able to use existing web browser techniques and software.

Which is all abstract, but imagine for a moment something we've seen before: an Electron Shell based terminal. But what we haven't seen is someone really dive in and attempt to redefine the world around shells. So bear with me for a second...

Imagine an Electron Shell based terminal experience that works locally and remotely. A line buffer separate from the display accumulates and maintains the current prompt (because instead of PS1 being pseudo-executable in our shell it actually can attach scripting functions and web-browser style timers).

Executables logically return open dictionary objects (which can contain streams) instead of the current fixed set of indexed streams. We can use 0, 1 and 2 as legacy keys into these objects to maintain compatibility. But you could imagine a "rich-1" key that actually outputs rich markup, "error-1" that outputs rich markup for errors, etc. And of course, media itself could be file system references. These rich objects are just textual objects with references to valid HTTP/2 requests that do what they do.

This actually resembles SSH's protocol on the outside, but using tools that are reusable elsewhere and a cryptographic stack and transit protocol that has more funding and attention and traction.


Yes. I would suggest anybody who poses the question "why not" to try to actually implement SSH from scratch! I've looked at some "smaller" implementations than OpenSSH and they are still, like you say, "weird."

SSH appeared to me to be an implementation of a console application that got crypto and network code bolted on, everything held together almost with Sellotape. And that doesn't change when somebody has to re-implement it, as most of the "features" have to exist for the thing to work.

However, there are also some advantages in using SSH for some purposes: for somebody who knows what they're doing, the key management alone is actually less weird than the whole SSL thing. The tunneling is also very convenient and I like it.


For one thing, many corporate firewalls filter port 22 even for outbound traffic.


Do TLS client certificates make HTTPS equivalent to SSH?


No. Disregarding technicalities of which there are many, the most important difference is that with ssh you either get the host key through a different secure channel or "trust on first use" (TOFU). With HTTPS you trust certificate authorities (CA).


I was referring to the browser side certificate that can be generated using the <keygen> tag, and then used for the subsequent HTTPS sessions. Would this be equivalent to ssh-keygen for SSH?


That's deprecated though and afaik, that's only the client-side authentication, not the server-side. That still relies on CAs, with all the disadvantages.


Excellent explanation of how ssh works, to boot.


... probably because it requires a central server which the industry is moving against?

Things are moving to distributed.



SSH passing username by default is actually one of the things I hate about it.


Why? Not much of an info-leak if you know where you're connecting, and it has to use something, or enforce a foo@host string.

You could work around it with something like:

    Host *
    User WhosAsking
in your ~/.ssh/config (at the end of the file, so other entries can override it)



