
How is there an alternative here?



Jump into a time machine, go back to the creation of SSH, and adopt SSL-style trusted third-party certificate authorities. Somehow get it adopted anyway, even though loads of people use SSH on internal networks where host-based authentication is difficult, SSH is how many headless machines are bootstrapped, and you'd have to do it 19 years before Let's Encrypt.

Jump into a lesser time machine, go back to when GitHub was creating its SSH key, and put it into a hardware security module. Somehow share that hardware-backed key with loads of servers over a network, without letting hackers do the same thing. Somehow get an HSM that isn't a closed-source black box from a huge defence contractor riddled with foreign spies. Somehow avoid vendor lock-in or the HSM becoming obsolete. Somehow do this when you're a scrappy startup with barely any free time.


SSH certificate authorities are a thing that exists.

We also have a way to put SSH host key fingerprints in DNS records already.


Yeah, like how HTTPS CAs exist. There are some very nice three-letter ones that can issue any certificate, and your browser/OS happily accepts it.


SSH doesn't have any CAs that it trusts out of the box. It's up to you to tell it which one to trust.


Yes, but the option to verify host keys using DNS ("VerifyHostKeyDNS") is not enabled by default.
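For anyone who wants to try it, a minimal sketch of enabling it per host in ~/.ssh/config (note that OpenSSH only treats a DNS match as authoritative when the lookup is DNSSEC-validated; otherwise it still prompts):

    Host github.com
        VerifyHostKeyDNS yes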


Unless it has changed recently, you can't have a trust chain of OpenSSH certs, though. So it's cumbersome: your signing key is not only the root CA, it also basically has to be accessible 24/7 to sign any server/client you want to bring up.
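For context, the whole (single-level) flow is just two ssh-keygen calls plus an sshd_config line; the file names and host names below are illustrative:

    # the CA keypair -- this is the key that has to stay reachable
    ssh-keygen -t ed25519 -f host_ca

    # sign a server's host key (-h = host cert, -n = allowed principals)
    ssh-keygen -s host_ca -I server1 -h -n server1.example.com \
        -V +52w /etc/ssh/ssh_host_ed25519_key.pub

    # sshd_config on the server
    HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub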


This just kicks the can down the road to DNS.

I'd guess that most systems aren't using DoH/DoT or end-to-end DNSSEC yet. Some browsers do, but that doesn't help tooling frequently used on the command line.

I suppose you could just accept X.509 certificates for some large/enterprise git domains, but that pokes at the hornet's nest that is CA auditing (the browser vendors are having a lot of fun with that; I'm happy that the OpenSSH devs don't have to, yet).

And where do you maintain the list that decides which hosts get to use TOFU and which ones are allowed to provide public keys? Another question very ill-fitted for the OpenSSH dev team.


No browser uses DNSSEC.


Thank god. Someone needs to take that protocol out back and give it the Old Yeller treatment.


That was in reference to the former, i.e. in-browser DoH/DoT lookups.


DNS can trivially be MITM'd. DNS-stored fingerprints are strictly less secure than TOFU.


If you use DNSSEC (cue inevitable rant from Thomas) this just works. If you have DoH (and why wouldn't you?) and your trusted resolver uses DNSSEC (which popular ones do), you get the same benefits.

https://en.wikipedia.org/wiki/SSHFP_record
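The records come straight out of ssh-keygen, run on the host whose keys you're publishing; a sketch with an example domain and the fingerprints elided (the two numbers encode key algorithm and hash type, e.g. 4 2 = Ed25519/SHA-256):

    $ ssh-keygen -r git.example.com
    git.example.com IN SSHFP 1 2 <sha256-hex>
    git.example.com IN SSHFP 4 2 <sha256-hex>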


> and adopt SSL-style trusted third-party certificate authorities

So that any large entity can own your servers with ease. (Well, they already can, but not through this vulnerability.)

Anyway, the only thing CAs do is move that prompt to another, earlier time. It's the same prompt, the same possibility for MITM, and the same amount of shared trust to get wrong. You just add a third party that you have to trust.

SSH does have a CA system. Anybody that isn't managing a large datacenter will avoid it, for good reason.


> So that any large entity can own your servers with ease.

Eh, let's not pretend existing SSH host key validation is anything to write home about.

Even without any ephemeral servers involved, barely anybody is validating host key fingerprints on first use.

And among people using ephemeral servers, 99% of setups have either baked a host key into their image (so that any compromised host means a compromise of the critical, impossible-to-revoke-or-rotate key), or every new server gets a new key, and users have either been trained to ignore host key warnings or have disabled strict host key checking and their known_hosts file.

The existing SSH host key validation options are perfect if you're a home gamer, or if you're running a few dozen bare metal servers with all your SSH users within yelling distance in the same office. But we all know it's a joke beyond that.


There could be an update to the protocol that enables certified keys to be used and allows them to be accepted without warning or with less of a warning.

There could be a well-known URL that enables people to fetch SSH keys automatically in a safe manner.
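Something close to this already exists for GitHub specifically: its meta API publishes the current host keys over TLS. A sketch (keys truncated):

    $ curl -s https://api.github.com/meta | jq -r '.ssh_keys[]'
    ssh-ed25519 AAAAC3Nz...
    ecdsa-sha2-nistp256 AAAAE2Vj...
    ssh-rsa AAAAB3Nz...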


There isn't an alternative, really. The private key has been exposed, and presumably it is unknown if or how far it has spread. The SSH keys must be changed, and the sooner the better. All that can be done is to notify people after the change has occurred.


> and presumably it is unknown if or how far it has spread

Why would that be unknown? GitHub has HTTP and SSH access logs, right?


One H4X0R gave it to four friends, who in turn gave it to between 9 and 14 friends, who in turn gave it to between one and 6 friends.

If train A leaves New York going 60 miles per hour with 167 people on board and train B leaves Chicago one hour later with 361 people on board going 85 miles per hour, how many people now have the key?

The answer is 31337.


Since the post doesn't mention anything like "we reviewed logs and confirm the key was not accessed", it is very likely that they either don't have logs that are reliable enough to rule it out (e.g. they may be sampled or otherwise incomplete), or that the key was accessed.


Keeping a complete log of all GET requests to random files in a public repository in a reliable way would be insane.


No, it wouldn’t be - assuming by “insane” you mean “silly to do”. I build systems at Google that do exactly that.

Whether it’s worth the cost is a decision each company makes. Also, you don’t need to keep the log forever. Max of a few weeks retention would be common.


What guarantees do these systems provide? Are 100% of requests where data was served guaranteed to either end up in the log or at least create a detectable "logs may have been lost here" event?

Or does it log all the requests all the time as long as everything goes well, but if the wrong server crashes at the wrong time, a couple hundred requests may get lost because in the end who cares?


Presumably, keeping 'last remotely accessed' and 'last remotely modified' for every file (or other stats that are a digest of the logs) is sane for pretty much any system too. Having a handle on how much space one is dedicating to files that are never viewed and/or never updated seems like something all web companies with public file access would want?


It's not just GET requests. Someone could have cloned/refreshed the repo using ssh. The repo might have been indexed by GitHub's internal search daemon, which might not use the public HTTP API but some internal access path, whatever that might look like. You might have purged the database of that daemon, but what about backups of it? What about people who have subscribed to public events happening in the github.com/github org via the API?

You'd have to have logging set up for all of these services and it would have to work over your entire CDN... and what if a CDN node crashed before it was able to submit the log entry to the log collector? You'll never know.


That's why you need certificates and not just a key pair. Certificates make key rotation easier, and you want key rotation to be easy.

I guess the proper way forward is a small utility that fetches the latest host key through HTTP+TLS and replaces the line in your known_hosts file, all in the background.
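A rough sketch of what that could look like, assuming GitHub's meta API as the HTTPS source and skipping error handling:

    import json
    import urllib.request
    from pathlib import Path

    HOST = "github.com"
    META_URL = "https://api.github.com/meta"  # publishes current SSH host keys
    known_hosts = Path.home() / ".ssh" / "known_hosts"

    # Fetch the published host keys over TLS; trust is anchored in the WebPKI.
    with urllib.request.urlopen(META_URL) as resp:
        keys = json.load(resp)["ssh_keys"]

    # Naive rewrite: drop plain-text entries for the host, append fresh ones.
    # (Won't match hashed entries created with HashKnownHosts.)
    lines = [ln for ln in known_hosts.read_text().splitlines()
             if not ln.startswith(HOST)]
    lines += [f"{HOST} {key}" for key in keys]
    known_hosts.write_text("\n".join(lines) + "\n")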

Looking long term, maybe we need to get rid of all the security stuff in SSH and just pipe the rest of its functionality inside a TLS pipe. Let the OS do its certificate management, reuse security building blocks that are far better studied, ...


Certificates just add more keys to worry about. The beauty of SSH is that it does not add hugely trusted parties in the name of convenience, while the UX of TOFU (trust on first use) is pretty decent.

The real solution to break out of these UX/security tradeoffs is to put domain names on a blockchain: then you can simply rotate the key in your DNS record, while the blockchain model is such that you need to compromise many parties, instead of "one out of many parties", as with CAs.

Tracking the Bitcoin chain for DNS updates is lightweight enough that it can be built into the OS alongside other modern components such as the secure enclave, the TCP/IP stack, and the WiFi/BT/5G radios.


Those keys can be managed on a better-secured computer and don't need to be spread across every frontend SSH server. It also allows each machine to have a different host key pair, so if one leaks, only that single machine may have some trust issues, and not the whole fleet.

Also, it's way better than TOFU: you can just add the CA key to known_hosts and avoid TOFU for each machine.

(Never mind that you're unlikely to accidentally commit some semi-ephemeral host key that's rotated often, because it won't be some special snowflake key you care about, but something your infrastructure software handles automatically for each machine.)
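Concretely, clients then need just one known_hosts line for the whole fleet instead of one per machine (CA key truncated, domain illustrative):

    @cert-authority *.example.com ssh-ed25519 AAAAC3Nz... host-ca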


> The beauty of SSH is that it does not add hugely trusted parties in the name of convenience

Even with a certificate authority model, you don't have to trust any CAs if you don't want to. Not having the option to do so is more of a problem.


We should use a separate system that could reliably verify which certs belong to which entity.

Blockchain is a perfect solution to this. I wonder why it hasn't been considered yet.



Thanks, I was not aware!


Are there any cert solutions that don't involve having to maintain a revocation list? I only used certs with OpenVPN years ago, and the CRL was a potential footgun.


This is one reason people are issuing certs with a 2-week expiry.
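With OpenSSH certs that's just the -V validity flag at signing time (names illustrative):

    ssh-keygen -s host_ca -I web1 -h -n web1.example.com -V +2w \
        /etc/ssh/ssh_host_ed25519_key.pub

An expired cert is rejected outright, so the revocation list only ever has to cover the current two-week window.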


> Certificates make key rotation easier

How easy is it to rotate the keys of your CA?


Same as with any other decision: Do a cost/benefit analysis of whether the security risk created by rotating the key is actually outweighed by the security risk of doing nothing, taking into account logs that should tell you whether the exposed key was indeed accessed by unauthorized parties.

To be 100% clear: Both courses of action come with associated security risks. The problem is not choosing one course of action over the other, the problem is thinking you can just skip the cost/benefit analysis because the answer is somehow 'obvious'. It's not obvious at all.


No, you cannot keep using an exposed key. You must replace it. There is no cost/benefit analysis needed in this situation.


Wrong. A CBA is always needed. If the potential damage from MITM attacks made possible by rotating the key is greater than the potential damage from a rogue key multiplied by the likelihood that someone actually accessed the key, then it is wrong to rotate the key. It's that simple.

The only way a CBA would be unnecessary is if rotating the key didn't have any security risks. But it does.


Here I’ll do the CBA:

- if they have evidence that the key was exposed to one person, even with zero usage of the key, failing to rotate the key is tantamount to knowingly accepting widespread compromise at a potential attacker’s whim. At GitHub’s scale, that’s untenable.

- rotating the key is the only correct reaction to that

- they should have better communications in place to help users mitigate MITM

- there really isn’t an option, because they’re critical infrastructure; I’m glad they know that and acted accordingly

- on principle this speculation makes sense, but understanding the threat makes it moot

- you hopefully know that, and it’s good to insist on thoughtful security practices but it’s important to also understand the actual risk


There is a MITM risk regardless of whether they rotate the key. Except one is a one-time risk and the other is a perpetual risk.

Thus rotating is the only logical course of action.


Only if you know for certain that the key has been accessed by a third party.

If you don't know for certain, you have to factor in the likelihood that it has been, and at that point, the two risks aren't equal anymore so that logic doesn't work.


Are you arguing for the sake of arguing and technical correctness or do you actually believe Github shouldn't rotate their key in this situation?


What if you don't know for certain? You just ignore it and hope for the best?

Only if you are certain it wasn't accessed (and you'd better be really sure you haven't missed any cache/CDN, temp files, backups, etc.) do you do nothing.


It was publicly exposed, and if they are making this announcement it’s essentially guaranteed they can’t rule out it was accessed.


What? This is a terrible way to reason about risks in general. If you don't know for certain, you should assume the worst case scenario, especially since it's impossible for you to calculate the probability distribution of the likelihood of a leak.

You should only keep moving along without key rotation if you know with 100% certainty that a leak didn't happen and no one accessed the key (not theoretically impossible if they had the server logs to back it up), but anything less than that and you have to assume it's stolen.


Clone repos using OAuth2 with two-factor enabled - both GitHub and GitLab support that through their CLIs.
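For example, with GitHub's gh CLI (GitLab's glab has an analogous flow; <owner>/<repo> is a placeholder):

    # one-time browser OAuth flow; respects your 2FA settings
    gh auth login

    # later clones reuse the stored token over HTTPS
    gh repo clone <owner>/<repo>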


The alternative would be to use certificate authorities (SSH has CA support), which effectively lets you have keys at different levels: you can keep the root private key in a physical vault and use it very rarely, only to sign other keys.


This would just offload the problem to a separate entity. CAs can be (and have been) compromised.


Sure, but isn't it more likely that a key which has to be shared by who knows how many SSH load balancer machines at GitHub, and which can't be easily rotated because it's pinned by millions of users, eventually gets compromised, or is thought to be at risk of being compromised?

We need to compare the relative risks within the same context, namely within a company like GitHub.

So it's not relevant to bring up failures of other CAs.


And then don't forget to set up key revocation as well, and make sure that an attacker in a position to MITM the connection cannot cause the revocation checks to fail open.

I hope you don't need that SSH connection to fix your broken CRL endpoint!


Well, at least SSH could allow for signing a new key with the old one. So they could say it's signed, and people would know to accept only that different prompt.

There is DNS verification, but people have been trained all their lives to accept insecure DNS information (and set their systems accordingly), and I really doubt the SSH client checks the DNSSEC data.



