SSH known_hosts hash cracking with Hashcat (github.com)
123 points by chris408 on Sept 27, 2018 | 45 comments



Interesting attack, but likely impractical.

- Iterating over all internal IPv4 addresses to try attacking them isn't that expensive: there are only about 16 million of them, and there are likely patterns in their allocation if your target is an internal network.

- `HashKnownHosts yes` is not the default setting.

- Shell history likely leaks the hosts anyway if you enable this SSH setting.

- You could substitute `ssh` with a malicious version of it.


I don't disagree with some of the points you have made; however, there are also counterarguments to be made against them:

1) Iterating over the network:

The point of this type of attack is to stay under the radar of NIDS which should (if configured correctly) detect someone trying to knock on port 22 of every server in your private address space.

2) `HashKnownHosts yes` is not the default setting:

True. But it is an available setting, and since some people would enable it assuming it provides them with extra security, the strength of that extra security still needs to be proven. Hence this research.

3) Shell history likely leaks the hosts anyway if you enable this SSH setting:

Indeed. However it's also not that uncommon for people to disable shell history on bastion servers. Plus if that particular user hasn't SSH'ed in a little while it's possible the history file has rolled over to reveal fewer servers in its log.

4) You could substitute `ssh` with a malicious version of it:

You could. That's probably the most likely attack to try first, but it's not without its problems as well:

* You'd either need root access to replace the ssh client, or be damn sure you updated the right user's shell profile to prepend the location of your preferred ssh client to the $PATH variable (i.e. putting export PATH=~/boobytrapped:$PATH), and hope the user notices neither the modification to their profile nor the new folder in a user-writable directory (it's worth referencing an earlier point about how a network scan was dismissed because it is a detectable attack).

* It's a longer term attack since you wouldn't get a list of servers until after a user has connected to them (on the plus side, you could glean more detail about the target).


export PATH=~/.boobytrapped:$PATH would make it less noticeable for a start.


Indeed, but only until someone opens their .$SHELL(rc|_profile) or lists hidden files. Which, for some engineers, wouldn't take long. Bear in mind it might also take a while to collect data from your new boobytrapped SSH client, so staying hidden is imperative.


Might work better if you used ". " (dot space) as the folder name.

    export PATH=~/". ":$PATH


Obviously you wouldn't name the actual directory "boobytrapped"; you'd pick something a little more subtle. Maybe even use an existing folder like ".config". "boobytrapped" was only used here for illustrative purposes.


(disclosure - I've had conversations with the author of this tool)

I'd say that it's HashKnownHosts that is impractical, not the attack. One of the reasons someone would publish a tool like this is to raise awareness of the brittle security HashKnownHosts offers vs. modern GPUs.


You are right it isn't expensive, but it is much noisier to use something like masscan over 16m internal IP addresses if you are pentesting an organization with a decent blue team.

I don't think this tool was made for the use case of HashKnownHosts not being set.

Using shell history, known hosts, netstat, etc are all great ways to find hosts to pivot to.

Substituting ssh with a malicious version is extremely noisy and risky as well.


And, of course:

- The user might be connecting through (perhaps internal) DNS names rather than IP addresses. And probably is, because who wants to type in IP addresses all the time?


It isn't hard to type 10.0.0.1, and using local hostnames like "main" or "box1" wouldn't be much more secure either.


Another point for IPv6 :)


Seriously, using IPv6 really does help in this case!


Put the 16 million IP addresses and their hashes into an SQL database table and index the field.

Like a rainbow table attack for passwords, but for IPs


They are salted.


There are basically 4 billion IPv4 addresses, practically 2.5 billion. If there were only 16 million they'd have run out really quickly.


The 10.0.0.0/8 range has ~16 million addresses. Which is what was referred to.
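For concreteness, the sizes of the three RFC 1918 private ranges can be checked with Python's stdlib ipaddress module; 10.0.0.0/8 alone accounts for the "about 16 million" figure:

```python
import ipaddress

# Sizes of the RFC 1918 private ranges; 10.0.0.0/8 alone is the
# "~16 million" figure discussed above.
sizes = {net: ipaddress.ip_network(net).num_addresses
         for net in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")}
print(sizes)  # 10/8 -> 16777216, 172.16/12 -> 1048576, 192.168/16 -> 65536
```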


I do consider HashKnownHosts harmful. Not only does it provide merely the shallow appearance of security, as TFA shows; as other commenters have argued, an attacker can often obtain the same info from syslog or shell history.

But most problematic, I think, is that HashKnownHosts makes properly maintaining the known_hosts file tedious and error-prone. It's harder to remove hosts with known changed keys, and almost impossible to remove unneeded obsolete entries that have accumulated. Yet those old and obsolete keys could have been obtained by an attacker from recycled hardware or just by owning an old never-updated box. While this scenario might be unlikely, I would consider it just as unlikely that an attacker would find information only in known_hosts.


This is fun, but if you cat out your ~/.ssh/known_hosts file, you'll probably find that it's not hashed, and is just coughing up a map of your SSH relationships to anyone who can read it.

It's true though, known_hosts for pivoting is a basic network pentest trick.


It's been turned on by default on every Ubuntu machine I've used/installed in the last few years. I believe that comes from Debian [1].

So I don't think it's that unlikely that any given reader of this has it enabled, tbh.

[1] search for HashKnownHosts here: https://manpages.debian.org/jessie/openssh-client/ssh_config...


It's the default in the OpenBSD OpenSSH upstream, and has been for years and years.


Is it? https://man.openbsd.org/ssh_config#HashKnownHosts says "The default is no".


Maybe stashing a honeypot address in the known_hosts is a good idea?


It is, and what you're looking for is:

https://canarytokens.org/


I don't see ssh as an option there?


You'd use a DNS canary.


Got it. Thanks.


An alternative to host keys would be host certificates. It's (a lot) more work to set up, but it allows flexible central management of authentication, and also eliminates this issue with the known_hosts file.


Disclosure: I work at the company that created Teleport.

Teleport [0] should hopefully make it easier to use certificates.

An alternative implementation is Netflix’s Bless [1].

[0] https://github.com/gravitational/teleport

[1] https://github.com/Netflix/bless


Can one mitigate this attack by not storing any information about the salt?

Suppose there is a method x which creates a salt, but does not store it. Then, hash the IP a.b.c.d together with an output from method x. A user can perhaps specify an x of their choosing. Let's say the hash function is then of two variables g(x,a.b.c.d).

Would cracking g(x,a.b.c.d) necessarily expose the workings of x? (Note that one may want to think of this as two functions and write g(f(x),a.b.c.d) instead. In such a case we are cracking f as a first step.)

In the article, one relies on the fact that step 1 exposes the salt and step 2 then exposes a.b.c.d.


One might simply store a list of

   hash(address, Fingerprint)
Here we are essentially using the fingerprint of the server as the salt, which doesn't need to be stored locally.

This would mean you can't detect whether a host changed their fingerprint, just that you've never seen this host-fingerprint combination. So if someone were to MitM your box, you would need to be sufficiently surprised by the 'This is an unknown connection' warning to investigate further.

To actually detect changed fingerprints, you need to keep a list of IPs for which you know the fingerprint. As the list of viable IPs is so small, there is no way to obfuscate it. The only possibility would be to encrypt it, but that requires keeping some secret from your attacker.
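The fingerprint-as-salt scheme described above can be sketched in a few lines of Python (the names and the choice of SHA-256 are mine, purely illustrative):

```python
import base64
import hashlib

def pair_token(address, fingerprint):
    """Store only H(address || fingerprint): the server's key fingerprint
    doubles as the salt, so nothing secret needs to live on disk."""
    material = "{}|{}".format(address, fingerprint).encode()
    return base64.b64encode(hashlib.sha256(material).digest()).decode()

def seen_before(store, address, fingerprint):
    """On connect, recompute from the live address and the presented key.
    A miss only means 'unknown pair' -- a brand-new host and a known host
    presenting a changed (possibly MitM'd) key look exactly the same."""
    return pair_token(address, fingerprint) in store
```

Which makes the limitation concrete: a lookup miss cannot tell you whether you're seeing a new host or a key change on a host you already trusted.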


Thanks for this example.


The salt has to be random, and stored locally.

You propose a new hashing algorithm that does: IP -> h. If the salt depends on the IP, you can create a rainbow table offline.

If you can generate the salt on the machine (on the fly), you need an algorithm that must run on the machine - what do you base that on? Hostname? local IP addresses? Hardware?... And known_hosts(5) can't be moved to a new machine anymore, as it's tied to a machine.


I think what I have in mind is essentially a customisation option. And I think your answer (and others) does make a good case that there is not a more general solution to this at the moment (and not even conceptually).

As a custom solution there are many ways to solve this if we introduce a third parameter (such as simply encrypting your files with a password). I think however the point that many people are making is that the debate is about what the default should be, and without introducing a third factor.

The two-step de-hash case (IP hash + salt) suggests interesting research topics on whether there are ways to have other combinations, e.g. (IP hash + x) or (IP hash + x + y), but with the assumption that we don't want any further a priori information. The point that you are making is that in fact we only have one variable (the IP) and the salt is simply an obfuscation step. Any other approach requires more parameters (hardware, fingerprints, etc.).


The hashing is a defense-in-depth measure to avoid handing the attacker addresses to attack on a platter. So it does make sense to use a more modern hash so that brute-forcing the whole address space takes more than a second, but that's about all you can do. Most of these hosts are going to be in the same subnet anyway, so a smart scan is never going to take long to retrieve most addresses.


What is the advantage of hcmask over a hand-written pattern loop here? Is it sent to the GPU instead? Can't the attacker use the same algorithm that generates the list of all IPs without the need to save them?


So basically, known_hosts needs to be kept in the equivalent of the macOS Keychain, where access is granted based on the signature of the app.


Those files can be cracked? The files should have better security then. /s


TL;DR:

OpenSSH uses ~/.ssh/known_hosts to record the hostnames/IPs, ports and public keys of, well, known SSH hosts. But it was argued many years ago that the IPs and ports in known_hosts on a compromised system can help attackers and viruses discover more hosts to compromise. As a defense, OpenSSH introduced HashKnownHosts: instead of saving hostnames and IPs in plaintext, it saves HMAC-SHA1(salt, host) along with the salt. Some systems enable it by default, but most don't.
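Concretely, a hashed line in known_hosts has the form `|1|<base64 salt>|<base64 digest>`. Here's a minimal Python sketch (function names are mine) of how such an entry is produced and later checked:

```python
import base64
import hashlib
import hmac
import os

def hash_known_host(hostname, salt=None):
    """Build a HashKnownHosts-style entry: |1|<b64 salt>|<b64 HMAC-SHA1>."""
    salt = salt or os.urandom(20)  # OpenSSH uses a 20-byte random salt
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|{}|{}".format(base64.b64encode(salt).decode(),
                             base64.b64encode(digest).decode())

def matches(entry, hostname):
    """What ssh does on connect: re-HMAC the candidate host with the
    stored salt and compare against the stored digest."""
    _, _, b64salt, b64digest = entry.split("|")
    salt = base64.b64decode(b64salt)
    expected = base64.b64decode(b64digest)
    candidate = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(candidate, expected)
```

On every connect, ssh re-computes the HMAC for the target host against each stored salt, which is why lookups still work even though the names are unreadable.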

This research project showed that it's still vulnerable to brute-force attacks, especially on GPUs, just like every password storage scheme, and demonstrated the issue with proof-of-concept tools. Finally, the authors themselves state the difficulty and impracticality of a fix:

> It doesn't seem like there would be a clear solution. If they used a more expensive hashing algorythm like bcrypt, the GPUs could still crack the entire IPv4 address space. [...] Also, if bcrypt was used, this could cause slowness or performance issues potentially, especially for lower powered embedded devices.

But my personal opinion is, the entire thing just doesn't make much sense... Computing 10,000 rounds of PBKDF2, or a state-of-the-art KDF like Argon2 (which can consume ~4 GB of memory as the "Proof-of-Work" to stop GPUs), but just for protecting a humble IP address, seriously? Even if you guard your IP address like a private key, an attacker with a grid of GPUs can probably still use their resources to get the information from elsewhere, like capturing some packets, or scanning the entire IPv4 Internet with ZMap...

To me, if you seriously need to hide your hostname for your security, I would say the security is broken anyway... But in case it is really needed, to my mind there are two permanent solutions -

1. Use IPv6.

2. Introduce EncryptKnownHosts. You can implement it yourself with a shell script that calls gpg before spawning an SSH instance.

Unlike 10,000 rounds of PBKDF2, this solution is absolute.


You could just run `history | grep ssh` and look at all the domain names that show up.


That is true and it is the first place I usually check when I compromise a new server.

This wasn't mentioned in the post but imagine you compromised a server and found an unprotected ssh key. You don't know where it can be used, and the .bash_history has rolled over or has very few ssh commands in it. You see a lot of hosts in the known_hosts file though but it is hashed. That is where this would be helpful, and is why I went down this route.


Let's suggest an alternative scenario - the hosts and ports are encrypted.

Now what can the attackers do? Well, they still have hashes of public keys. The attacker can scan the entire IPv4 Internet with ZMap and record all SSH public keys. With some hashing, the host can be identified. With online services like Censys (https://censys.io/), the attackers don't even have to scan and compute, but can directly obtain the information from a public database...

Also, to make it clear: while I'm saying that the attack is too impractical to make sense, I have full respect for your research project. Thanks for analyzing this security issue for the community.


> first place I usually check when I compromise a new server

Probably don't want to admit publicly that you compromise servers, though I assume you mean with consent. :-)


Yes, with consent of course (Red Teaming).


Or run nmap, or look at your own IP address and just guess at it, or look at your /var/log/messages and ssh logs, all of which are likely just as effective and way less time-consuming than cracking hashed host information.

This isn't really a vulnerability. It's a curiosity. If you're on the server (or NFS), and have access to this information already, then you're already in pretty damn good shape. This just allows you to be more thorough.


IMHO, the conclusion is that HashKnownHosts is as harmful as it is useful, because it creates a false sense of security.



