It's like, super detectable. You have endpoint logs for the file access, you have network logs, you have sshd logs (which contain the public key and the IP), etc.
> if you have your ssh private key on several machines, you have to remember to copy it to all those places
Your ssh key should never leave a host. That should be a policy and you should write rules to detect when that policy is being violated (check for processes accessing the file).
If you need access from N computers you should be generating N keys.
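As a sketch of that practice (paths and the empty passphrase are illustrative assumptions; in practice you'd want a passphrase or a hardware token):

```shell
# On each machine, generate that machine's own key; never copy a private key.
# KEYDIR defaults to ~/.ssh; -N '' (no passphrase) is only for brevity here.
mkdir -p "${KEYDIR:=$HOME/.ssh}"
KEY=$KEYDIR/id_ed25519_$(hostname -s)
ssh-keygen -t ed25519 -N '' -C "$(whoami)@$(hostname -s)" -f "$KEY"
# Then authorize only that machine's public key on the servers it needs:
# ssh-copy-id -i "$KEY.pub" user@server
```

The per-host comment (`-C`) also makes it obvious in authorized_keys which machine a key belongs to, so revoking one device is a one-line deletion.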
The reason rotation isn't recommended is because it leads to bad practices (people just add a '1' to their password), it's a hassle, and it can never be fast enough to meaningfully impact an attacker - once they have SSH keys it's likely they can gain persistence and C2 before your rotation takes place. Not because people reuse their passwords in multiple places.
Setting up a CA for SSH is definitely a really good practice but I think that most companies would find it far simpler to just enforce 2FA for SSH access. Still, I'd really like to see an article about how you set that up, especially if it targets smaller enterprise customers.
For others who might be interested, here's bless from Netflix:
edit: Oh, and just to be clear, 2FA for your SSH is not a silver bullet - even a yubikey. But it's a cheap, scalable, near-zero overhead way to protect against an attacker who's got access to your key (but not one who has access to your system, assuming an active session!).
It's pretty standard practice to do this post-exploitation.
Further this requires you to regularly check your log files for suspicious activity, which is way more work than just rotating your ssh keys - which can be easily automated. Running a script that will automatically rotate your ssh keys on all servers in your .ssh/known_hosts is trivial.
Also, rotating your ssh keys is something that has a chance to prevent intrusion, whereas if you see something in your logs it's already too late.
It's not that simple. In a larger/mature environment you'll have log aggregation where the initial login is close to certain to be forwarded before it can be messed with. (Unless someone can log in, escalate, kill the right daemon, and somehow prevent monitoring from noticing a missing endpoint - all before the log gets forwarded)
And that's just host logs, without the networking, potential forwarding, etc.
As for endpoint logs, yes, a privileged attacker could disrupt them. But honestly, even with regards to your endpoint, attackers often don't disable logging - though I do see it, for sure. For "blessed" logging like Windows Event Log you'll have an even harder time - they do take measures to protect the files on disk, even against privileged attackers, and supported methods of deleting the event log actually themselves generate a "Someone deleted the event log" event, which I would highly recommend you watch out for :)
The simplest advice for dealing with this is to ship logs off of the device ASAP and to make sure that disrupting the service requires privileges.
Trying to prevent a determined privileged attacker from doing something is an exercise in futility, since it's impossible in every sense of the word. If they have total control over a system, they can do whatever they want, even if you put up a bunch of stopgaps.
Most post-exploitation frameworks (prominent example: DanderSpritz) have modules to remove stuff from the windows event logs without leaving traces.
It's pretty basic stuff.
That's not really true at all.
> since it's impossible in every sense of the word
Ok? Lots of things aren't strictly impossible, and lots of those things are still very, very hard and costly. Hash collisions aren't impossible, and yet here we are, with a world hinging on them being very hard.
> they can do whatever they want
> Most post-exploitation frameworks (prominent example: DanderSpritz) have modules to remove stuff from the windows event logs without leaving traces.
You realize that:
a) DanderSpritz's logic to bypass the event log was a huge deal
b) It's literally an NSA leaked exploit????? Like are you kidding me using an NSA developed exploit as "pretty basic stuff" lol
Sorry but I think I'm gonna stop responding here.
It must have been very expensive and innovative, does that make it hard to copy into your scriptz folder?
Sorry, but this entire thread is nonsense, and it's just a clear demonstration of a lack of threat modeling and frankly a lack of understanding of what attacker capabilities are.
You can try to play games with ensuring logs leave the system, but everyone takes shortcuts to make sure they can recover the system when networking is down etc.
Everyone and their grandma has access to complex scripts, etc that were once very expensive. Whether they invest the energy in learning methods to hide their presence or just go straight to some other goal is going to depend on how they intend to abuse your systems.
I think most "security professionals" pretend they are going to catch an oddity that occurs that doesn't match what their automated tools would catch and occurs in the middle of some other crisis or holiday break. I would say good luck with that.
That's not true at all. Like, not at all.
You’re assuming the attacker has write access to the log storage of the system they ssh into. This is not the norm for production systems. If your auth processes aren’t shipping off logs immediately, your system is broken regardless of ssh.
If not, can an attacker gain access to your email server or DNS with your ssh key? If either is true, they now have access to everything not protected by 2FA that uses an email address they now control.
There's so many things to get right.
You can design a system such that there's a very high likelihood even a nation-state attacker won't be able to intrude without leaving traces - if you make no mistakes.
Or you know, you could also just rotate your ssh keys in addition to everything else. "I have logs" is really no excuse to forego something that is this easy.
If the ssh key you lost has access to all the critical infrastructure, then, certainly you have a problem. The solution is to not give away write-access keys to your entire system.
> If either is true, they now have access to everything not protected by 2FA that uses an email address they now control.
The initial question was not whether losing a key would cause a breach, but whether a detection mechanism is reliable.
> Or you know, you could also just rotate your ssh keys in addition to everything else.
The question is how does it help? If you can't detect a breach, it will live for as long as your key rotation policy, if not longer. If you can detect and mitigate a breach, it will be closed quickly regardless of your rotation policy.
In those special cases where an "extraordinary" sysadmin gets onto a log host, it is not through the ordinary access paths, such as SSH from where the other sysadmins play around.
The usual networked logging systems don't have ssh logins, e.g. syslog into an ELK stack.
curl -X "DELETE" http://log-server:9200/logstash-*
And I'm sure I missed some - there's a thousand ways to do it.
This common problem is what led to rise of the MSSPs https://en.wikipedia.org/wiki/Managed_security_service
If you need an ssh key for whatever reason on a host (for example, git pulls on a staging machine), you should generate one on that box and narrow its scope on the machine that will receive it (eg, GitLab Deploy Keys -- locked in read-only mode, single purpose).
> The reason rotation isn't recommended is because it leads to bad practices (people just add a '1' to their password)
To some degree. I personally rotate my keys whenever I change my personal/work device (perm. change), or, around every year or so. It's not a hard requirement, but just a personal preference.
> it's a hassle
ssh-keygen -b 4096 isn't a hassle... and if you do deployment properly it shouldn't be an issue to sync these keys (eg, an AD system holding public key, cloud directories like JumpCloud, etc can all sync the moment it's updated -- even Salt/Ansible/Chef/etc can do it easily, with modules pre-written to sync keys already).
In any event, MFA is always a good idea. But, my biggest concern is that someone would leave ssh open to the public... the time it takes to setup an ACL or VPN to connect to the machine is hardly anything these days with the amount of automated tooling to do it... so why aren't people?
I was referring to password rotation specifically with those points, not SSH key rotation, because the quote in my post was also in the context of password rotation.
A lot of people think of SSH keys like PGP keys (where the one private key is your identity), which is not how they are intended to be used (by my understanding, which seems to agree with yours). authorized_keys can contain many keys, so you should never need to duplicate a key just because you need to access a given account from multiple hosts.
> That should be a policy and you should write rules to detect when that policy is being violated (check for processes accessing the file).
For static locations, one option I like for this is whitelisting the source address for each key (https://unix.stackexchange.com/questions/353044/). You can then monitor abuse of the policy by looking for keys with no source limit, though this isn't something I've ever done, and it means that a stolen private key is more difficult to use from another location.
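For reference, that restriction lives in the server's authorized_keys file; the addresses and key material below are placeholders:

```
# ~/.ssh/authorized_keys on the server - key usable only from one address:
from="192.0.2.10" ssh-ed25519 AAAAC3NzaC1... alice@workstation

# Multiple sources, CIDR ranges, and hostname patterns also work:
from="10.0.0.0/8,*.corp.example.com" ssh-ed25519 AAAAC3NzaC1... bob@laptop
```

Keys with no `from=` option are then easy to flag in an audit pass over authorized_keys.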
Of course this doesn't work for connecting directly from client hosts that move around (i.e. a user connecting from a laptop that could potentially connect from any address unless you enforce VPN access for sensitive resources).
My ssh keys never leave the yubikey.
I have a different dedicated yubikey in each computer, with its own unique key, and a stolen key is useless without its unlock PIN.
Do you leave computers in weird places?
Are you getting robbed once a year?
Are you a high stakes poker player or CEO of a shady company?
I have like 4 laptops and two main desktops and if I didn't have 4 yubikeys in them then this would be a multiple-times-daily occurrence. Yubikeys aren't that expensive, and I mostly use the usb-c "nano" ones which are designed to live 24/7 in a computer's port, only sticking out about 2-3mm. Sometimes I have to move them around to other temporary machines but for the most part having approximately the same number of keys and computer workstations means that this is pretty infrequent.
I even have two Davinci Resolve Studio activation dongles for this same reason, and I can't physically edit video on two different computers at once, one would do if I were willing to keep track of where it is and shuffle it around between my various machines as needed.
It's pure speed/convenience, not a response to some data threat.
you don't need the physical key to "enroll", you just keep a copy of its pubkey.
So, the idea is to have two devices, each with its own key: the first device you use daily, and the second you store in a safe location.
If your daily-use device stops working or is lost, you use the stored second device to log in to your systems, remove the lost device's keys, add keys for the new device that replaces it, and then store the second device back in its safe location.
I wrote about my endeavour with this approach just a few days ago.
Most VPSes provide "web console" access. It connects like a terminal - e.g. DigitalOcean's web console, which doesn't require SSH access.
As backup you can also use OpenPGP cards which cost much less than a yubikey. Or a cheaper Fido2 token if you use Fido2 for SSH access (I don't yet but it's coming into vogue). An OpenPGP card will cost about a tenner, you'll need a card reader to use it but for backup purposes it's perfect.
Using Yubikey to mean U2F is like people saying "Google this term", "the image is Photoshopped", "Hoover the floor", "grab me a Kleenex", or even "take the escalator". It's possible "Yubikey" could become a generic trademark, but people should be wary of using brand names this way before it 'sticks'.
Additionally, the term Yubikey isn't likely to become synonymous with 2FA in any case. Most people don't know that yubikeys work in several different, independent modes, such as FIDO/U2F 2FA, or CCID smartcard, or Yubico OTP (those long annoying strings your yubikey types when it brushes your thigh or hand).
The CCID smartcard mode requires a pin, which is technically two factor authentication (knowledge of PIN and possession of yubikey), which is an entirely different thing than FIDO/U2F 2FA (which is what most people mean when they talk about using a yubikey for 2FA, not that "yubikey" and "2FA" are interchangeable terms).
This is further complicated by the fact that CCID smartcard mode can be used for ssh (via gpg-agent, with ssh keys inside the yubikey itself), AND, separately, OpenSSH (with other keys) can use a yubikey for U2F.
No endpoint security system I'm aware of, outside of core banking/telco/government systems, logs all file accesses. It would crash instantly under one single build of your average NodeJS application.
It's valuable to hear how companies are protecting their infrastructure if they're going to break down how they accomplished it, how they continue to maintain it, etc.
Also, Sym can be set up to require approvals too which is great for security auditing since it's a third party.
With a properly set-up token, basically you only need 2 commands:
ssh-keygen -t ed25519-sk -O resident -f ~/.ssh/id_mykey_sk
Apart from the small headache with the PIN, the whole thing was almost magical in its simplicity.
Edit: I suppose I am asking how does it "all" fit together. The CA that grants servers or services a short lived key - the other servers that then can trust that. It makes kind of sense but I think I am missing some parts as when I try and read how others do it, some parts seem to be missing.
Too many blog posts seem to be something something kubernetes will arrange it. Or Oauth or ...
For example, this article seems to argue against the idea of a central key management service like Vault, and to have the device decide when to rotate keys. But I am not 100% sure because it's one sentence. And how do they provide authentication for a service account (say a web worker that processes some incoming requests)? That's not device+person. The same idea can happily apply, but do they?
I think I am just moaning.
> check for processes accessing the file
It is. It's called auditd, is quite possibly already installed (albeit probably not configured to do much), and can easily ship its logs off to another host (natively or via syslogd).
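A minimal auditd rule for this (the path and rule key are examples, not a vetted ruleset):

```
# /etc/audit/rules.d/ssh-key.rules - log every read of the private key
-w /home/alice/.ssh/id_ed25519 -p r -k ssh-key-read
```

Query the resulting events with `ausearch -k ssh-key-read`; legitimate reads by your own ssh client will show up too, so the signal is unexpected processes or unexpected times.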
Private keys should be per-host. I did copy my private key across multiple machines way back when I was a bit greener and first learning about how SSH worked. I did that for years. Not everyone knows best practice automatically and we should try and educate instead of shaming people who do it.
When you believe you have enough knowledge on the subject to go and write an article about SSH key best practice and you're still casually doing this as if it's OK, that's different. This is basic stuff; I have no interest in following any advice from this author if they think copying private keys to different hosts is ok.
That way I could rotate keys and painlessly update them (and still have the old key for rarely accessed machines).
Sure it's not perfect, but it's a sight better than never rotating.
I wonder if there's something similar for the ssh_host_key - somehow to say "old key is deprecated but still here so ssh doesn't scream bloody murder, but use the new key from now on".
There is, since version 6.8 (~6 years ago).
Make sure you have "UpdateHostkeys" set to "yes" (or "ask") on the client side. It was off by default when it was first added but I think I remember reading at some point that it changed to enabled by default.
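The client-side setting in question, as a sketch of ~/.ssh/config (the option is spelled UpdateHostKeys):

```
# ~/.ssh/config - accept server-initiated host key updates
Host *
    UpdateHostKeys ask
```

With this on, a server that still presents its old (trusted) key can push replacement keys to the client, which is exactly the "deprecated but not screaming" transition described above.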
$ curl -s https://github.com/apenwarr.keys > blah
$ ssh-keygen -l -f blah
1024 SHA256:1IWAUSXOcCKLcmOdAec8JbDt3T75udA4KSpRosEWUaU no comment (RSA)
(update: they have now replaced it with an RSA 2048 bit key. progress.)
You can still generate a 1024 bit RSA key, but someone would have to go out of their way to do so, and I can't imagine why they would have done that in the past .. decade?
Maybe they aren't using software keys, but rather a low quality/older/small-kb hardware token or following the default guide for one? The vast majority supported 2048 in 2010 though..
Not criticizing, just genuinely interested in how to best manage keys.
I find web services to be a huge pain, though: Obviously most don't offer any kind of 2FA, or maybe Google Authenticator or SMS at best (which means those websites must be so bad that people don't login to it on their phone?). But even those who do "proper" 2FA often will only allow a single U2F token - and enforce GA, SMS or a secondary email as fallback.
(Putting this rant here so maybe a webdev or even two do it better the next time they do some auth stuff ;-))
You should always have keys per device (as has been discussed in other comments here). So if you have >1 device, you'll automatically have backups.
> If so, how do you protect that ?
While it may seem bad to have some keys less securely protected than others, the ability to revoke a single device means that using Yubikey / Secure Enclave / whatever on one device is still better than using them on none.
Has anyone solved this, or got a write up of some best practices for running this? All I've managed to find are articles about how to run such apps, rather than how it fits into the broader security architecture.
Ideally, what I would actually like is the ability to configure OpenSSH to require multiple things to log in, i.e. both that the SSH key is trusted and that it has recently been signed by the signing service. That way gaining access to the signing certificate doesn't help without also gaining a trusted SSH key (it's still bad, but not quite Game Over levels of bad). I had a quick look to see if I could hack together a patch to do this, but alas I had forgotten how weak my C foo is :(
You get an HSM like this: https://www.veritech.net/product-detail/keyper-hsm/ that stays air-gapped.
Then you build procedures around it, like https://www.iana.org/dnssec
Not cheap or easy.
With OpenSSH, you can require multiple authentication methods to succeed before access is granted.
For example, "publickey,password" to require password authentication after key-based authentication has succeeded. You could even do "publickey,publickey,publickey" to require three different keys to be used!
This has been supported for several years, by the way. See "AuthenticationMethods" in the "sshd_config*" man page.
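A sketch of the relevant sshd_config lines (examples, not a complete config):

```
# sshd_config - require a key, then a password, for every login:
AuthenticationMethods publickey,password

# Or require three distinct public keys:
# AuthenticationMethods publickey,publickey,publickey
```

All methods in a comma-separated list must succeed in order, which is what makes this a multi-factor requirement rather than a fallback chain.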
I understand the concepts, but how does this work in practice? Do you have an example of generating a short-expiry certificate from an IdP, such as Google?
The question that can be asked then is: how often should I rotate the CA key? ;)
If you have a small number of people who have access to your production environment, and they practice their trade like they are actually trusted with said production access, that provides a very small attack surface which can be analyzed and hardened.
Most security experts recommend restricting prod access away from your dev team because doing so alleviates risks from a compliance perspective, and prevents bugs and regressions from being introduced inadvertently.
I’m not providing links here because I do think it’s worth googling and discovering more of the nuanced points many others have made. Sure, you’ll find some shops that use another model, but for most use cases separate environments exist for a reason.
It also makes investigating difficult bugs extremely difficult (staging tends to be slightly different from prod, smaller as well, different hardware, network, etc) since you can't reproduce them, and your prod team can't help you much, since what you need is actual full box access to poke around.
I agree with you on the compliance point.
But a far more likely scenario is that the attacker will simply leverage existing sessions/ steal a socket, which, notably, will bypass any sort of 2FA on SSH connections.
Maybe I should call the su binary directly from /usr/bin/. Any thoughts on that? Or should I open a new VT?
But yeah, ptrace is definitely something to watch out for. Monitoring ptrace is also something defenders can do if they're not in a position to disable it (if you're working for a software company your engineers will ptrace).
This is just snake oil that doesn't actually add protection.
There's no such thing as perfect security, but that doesn't mean you shouldn't lock your door.
In an ordinary environment (at home, at a big corp) you treat your keys as highly personal valuables and would not store them on shared storage (barring special circumstances, such as air gaps, or shared non-personal jobs that need to run with a key - and then with a separately generated key).
If I'm at the point where I would have to rotate my private SSH key, then the game is lost anyway.
If the storage on your personal device cannot be encrypted, might it be a solution to pass along something like ssh-add - <<< "$(pass show ssh/env1-key1)", and have that gpg key on a yubikey, for instance?
In my (perhaps too small) view, this issue is not the real issue, and the underlying one should be tackled instead. For instance, don't let SSH be publicly accessible in the first place, and educate staff on locking their personal workstations. With the correct mindset, other things should be the focus in the area of security.
I recommend using a bastion host through which all SSH connections go and to use a DevOps tool like Ansible to automate certificate/key placement on every single host. In addition, if you have a trusted network that people log in via VPN you should also limit SSH connections to the bastion host to addresses from this network, which will make it harder again to use stolen credentials. Finally, only a handful of people should have such credentials, most maintenance should be performed in an automated way via DevOps tooling.
Best description of Bluetooth I've ever read.
You only get some limited number of pin attempts before it locks you out.
A stolen key is useless for gpg/ssh.
No one in that transaction cares if it was lost or stolen.
See "ProxyJump" in the "ssh" (client) man page.
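A hypothetical client config for that bastion setup (host names are placeholders):

```
# ~/.ssh/config - all production hosts are reached through the bastion
Host bastion
    HostName bastion.example.com
    User alice
Host *.prod.example.com
    ProxyJump bastion
```

With ProxyJump the connection to the inner host is tunneled end-to-end, so the bastion never sees your private key or the decrypted session.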
Ahem, no. I have used Yubikeys for a few years now. They are literally brain-dead simple to use, and work out of the box in recent Ubuntu. Here is an Ansible role to get started: https://github.com/cristiklein/stateless-workstation-config/...
Stop making excuses and start protecting your SSH keys!
Disclaimer: I'm not compensated in any way by Yubico, but their product is so darn good that I really want people to start using it.
Using U2F in the browser, you just buy the cheapest yubikey, plug it in and it works - any OS is fine.
But to do the same with SSH you've got to buy a particular yubikey, install five different bits of software, adjust a bunch of config files, restart services, adjust your agent autostart files, upload your 'subkeys' (whatever those are)... and that's just to support one OS.
It just seemed like the kind of second-class-citizen setup that barely anyone else is using or testing against, so it'd be constantly breaking down.
For GPG/SSH, there is a bit of an initial setup process to set up the card and generate keys (ideally generate them on-card, so you know they cannot exist elsewhere) - this can be scripted though, as we have done. As part of our deployment process we generate all needed passphrases and revocation certificates, storing them in encrypted storage, as well as uploading the public key to a known URL, which is also referenced in the smartcard configuration.
Once the card is setup - all you need on a machine is gnupg/gpg-agent and a ~/.gnupg/gpg-agent.conf file that looks like:
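(The file contents appear to have been dropped from the post; a typical gpg-agent.conf for this setup would be something like the following - an assumption, not the poster's exact file:)

```
# ~/.gnupg/gpg-agent.conf - have gpg-agent act as the SSH agent
enable-ssh-support
```

The shell then needs SSH_AUTH_SOCK pointed at the agent, e.g. `export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)` in a profile file.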
Using the card on a new machine is as straightforward as fetching the public key to your local/default keychain (gpg --card-edit, then 'fetch').
Switching between machines is then seamless - we have many engineers switching between macOS + Linux multiple times per day without issue.
I've had one on my keychain for a couple of years now, and so far it appears to hold up pretty darned well. It sits in my pocket unprotected with all my other keys, and despite its fair share of scratches it still works.
The reason for the "testing" is that they appear kinda "flimsy" compared to my Nitrokey, but so far it has stood up to every beating i've given it.
+1 for Yubikey.
There is probably a difference between company usage and personal usage. In a company setting i would expect to have backup yubikeys, but for a personal setup a recovery situation would involve getting your keys "out of it" until a replacement arrives.
1. Generate keys off card and import them (you can then backup these keys)
2. Generate keys on-card
I always chose the 2nd option; not being able to extract the keys from the card is a strongly desired security feature.
I generated my new GPG key while booted into a "live CD" environment on an air-gapped host.
Because I believe in doing things right, I deliberately selected a machine for this task that 1) didn't have Intel AMT/ME, 2) didn't have any wireless network interfaces, 3) had no storage devices installed, and 4) had PS/2 ports for the mouse and keyboard (for "better" entropy)!
I set up the new Yubikeys, generated my new GPG master key, and generated a different ("authentication") subkey for each Yubikey. The master (certification) key and signing and encryption subkeys -- but not the authentication subkeys -- were exported and then backed up on a brand new USB flash drive that I'd purchased at a retail store, just taken out of the package, and created a small LUKS-encrypted filesystem on -- using an outrageously long, randomly-generated passphrase, of course.
The USB flash drive is kept in a sealed envelope inside a tamper-evident bag that's kept in the safe. The passphrase is kept, well, somewhere else, obviously, as is the passphrase for the GPG master key. Using the keys on the Yubikey doesn't require them; only the PIN -- which is long but not as long as the passphrases -- that exists only in my head -- and that's easy enough; I typically do that a few dozen times a day.
Since going through that whole process, there have been two times that I've retrieved the USB flash drive and passphrases. Once was to sign a bunch of GPG keys (from a key-signing party) and the other time was in order to rotate my authentication (SSH) subkeys and "renew" (i.e., extend the expiration date of) the others.
Was it a huge pain in the ass? Absolutely! Was it worth it, though? Sure. First and foremost, I don't worry about the security of my keys at all and -- perhaps more importantly -- I don't have to keep an eye out for the next bug that's found in the third-party (Infineon, IIRC) libraries that Yubico chose to use.
>The approach we take in our own infrastructure, modeled after TLS certificate infrastructure and Let’s Encrypt in particular, is to authenticate each device+person combination with a separate private key; that way, if a credential is stolen, we always know from where.
The startup I co-founded, bastionzero.com, is motivated by this exact problem: what is the best way to do SSH key management? We hope to improve remote shell access by replacing long-lived SSH keys stored on your machine with cryptographic identity attestations from multiple sources, e.g. OpenID Connect, FIDO, network, etc. The end goal is to eliminate both single points of trust and long-lived secrets.
I have a hardware-backed "doomsday key" to use if the Google Doc stops working.
Writeup and script at https://github.com/mmdriley/authorized_keys
But still, surely there's a better way than relying on google not controlling your "key infrastructure", even for personal use?
I don't keep important private keys in my .ssh folder. Well, it's just security by obscurity. An educated, determined attacker would find them, but some random malicious code would not immediately find them.
I run the Web browser in firejail (Linux).
If a bad actor releases a malicious version of a widely used dependency, for sure it's going to be gone from npm quite fast most of the time! However, it'll take some time to get noticed, and people will invariably get affected.
You shouldn't bring an open honeypot to a place where bears can attack you easily, right?
And most of them also execute external code on module import... though I'm not sure that's even relevant, because you will run the module at some point anyway.
So, yeah, JS makes the problem one or two orders of magnitude larger. But the problem is still there, whether you use npm or avoid it.
Unless I'm missing something, that essentially solves the issue, no? Persistent private key, usable across all your machines that can't be exfiltrated. Bonus since you can even generate the key directly on the device!
This gets even easier with newer OpenSSH versions if you use FIDO2 auth for SSH. I've not played with it personally, but word is you just plug in the key and off you go
In a multi-person enterprise the issue becomes how to manage all these keys with on-boarding, off-boarding, logs, access policies as people's roles change. Maybe someone only needs access for a day? How do you ensure their key is removed tomorrow morning. Maybe someone never setup their yubikey and their SSH key is just living on their laptop and backed up to dropbox. SSH is a great solution for individuals but I've encountered so many anti-patterns when used among large teams.
Additionally, sshd supports "Match", which can limit where any or all of your users can log in from.
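For example (addresses are placeholders; a Match block applies until the next Match keyword):

```
# sshd_config - keys-only from everywhere, passwords tolerated on the LAN:
PasswordAuthentication no
Match Address 192.168.0.0/24
    PasswordAuthentication yes
```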
"AuthenticationMethods publickey", "PasswordAuthentication no", and "PermitRootLogin no", all of which one should also be using -- ideally, on top of (both host- and network-based) access lists / firewall rules preventing access to 22/TCP from everywhere except the hosts and/or networks you've explicitly permitted.
I've never understood the appeal, nor have I seen anyone do it well, honestly. It seems to get mentioned in all of these novice and semi-pro SSH related articles as a good idea and just makes my eye twitch when I see it.
Have a yubikey to connect to your jumphost (we called that the access VM), then have a monthly cronjob that runs Ansible to generate a key pair on the access VM and deploy the new public key to the accessible VMs (ideally not your log-collecting VM; that should be even more secure).
Or, more likely, were all of these new private keys granting access to all of your hosts just sitting there on disk unencrypted -- and, as a result, freely available to and easily stolen and used to gain access to all of your production machines by the first attacker to come along and compromise the bastion host?
I rotate my ssh keys with a new laptop/pc. Private key should stay on one machine only. I think longest I had usable laptop was 5 to 6 years.
I read the article, but probably only people who are already interested in security will click it, so it's also preaching to the choir.
They can still be abused "live" while they are inserted via an SSH agent, but I turned on the "touch to sign" feature so for every use you have to touch the token button as well.
E.g. just get a yubikey per employee (or more).
Yes, they can be stolen (put PINs on them), but they can't be copied.
I have just one software key, because I don't have a solution for SSH from my phone with a hardware key yet.
Would a MitM have affected that design... still? Asking for a friend.
Every X days generate a new RSA key pair, connect to all the hosts and replace the public key matching the previous one in the ~/.ssh/authorized_keys
Is there any issue with this solution?
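As a sketch of that scheme (assumptions: unhashed known_hosts entries, the previous key is still authorized, and the paths are illustrative; the caveats elsewhere in this thread about the limited value of rotation still apply):

```shell
#!/bin/sh
# Rotation loop as described above. Skips hashed (|1|...) and marker
# (@cert-authority, @revoked) known_hosts lines; not a hardened script.
set -eu

# Print one host per line from a known_hosts file.
list_hosts() {
    awk '$1 !~ /^[|#@]/ { split($1, a, ","); print a[1] }' "$1" | sort -u
}

rotate_all() {
    old=$HOME/.ssh/id_rsa
    new=$HOME/.ssh/id_rsa_$(date +%Y%m%d)
    ssh-keygen -t rsa -b 4096 -N '' -f "$new"
    for host in $(list_hosts "$HOME/.ssh/known_hosts"); do
        ssh-copy-id -i "$new.pub" "$host"   # install the new public key
        # then drop the old public key from the remote authorized_keys
        ssh -i "$old" "$host" "grep -vF '$(awk '{print $2}' "$old.pub")' \
            ~/.ssh/authorized_keys > ~/.ssh/ak.tmp && mv ~/.ssh/ak.tmp ~/.ssh/authorized_keys"
    done
}

# rotate_all   # uncomment to run against every known host
```

The remote grep-and-move is the fragile part (a dropped connection mid-rotation can lock you out of a host); a config-management tool, as suggested elsewhere in the thread, handles that more robustly.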