Hacker News
The default OpenSSH key encryption is worse than plaintext (latacora.singles)
545 points by rargulati 50 days ago | 256 comments



Spoiler: the default SSH RSA key format uses straight MD5 to derive the AES key used to encrypt your RSA private key, which means it's lightning fast to crack (it's "salted", if you want to use that term, with a random IV).
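For the curious, the legacy KDF in question is OpenSSL's EVP_BytesToKey with MD5 and an iteration count of 1, salted with the first 8 bytes of the IV from the PEM DEK-Info header. A rough sketch in Python (the function name is just illustrative):

```python
import hashlib

def pem_kdf(passphrase: bytes, iv: bytes, key_len: int = 16) -> bytes:
    # Legacy PEM encryption: salt = first 8 bytes of the IV; the key is
    # MD5(passphrase || salt), chained through MD5 again only if more
    # key material is needed. One hash per password guess.
    salt = iv[:8]
    key, block = b"", b""
    while len(key) < key_len:
        block = hashlib.md5(block + passphrase + salt).digest()
        key += block
    return key[:key_len]
```

For AES-128-CBC a single MD5 invocation yields the whole key, which is why brute force against this format runs at essentially raw-MD5 speed.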

The argument LVH makes here ("worse than plaintext") is that because you have to type that password regularly, it's apt to be one of those important passwords you keep in your brain's resident set and derive variants of for different applications. And SSH is basically doing something close to storing it in plaintext. His argument is that the password is probably more important than what it protects. Maybe that's not the case for you.

I just think it's batshit that OpenSSH's default is so bad. At the very least: you might as well just not use passwords if you're going to accept that default. If you use curve keys, you get a better (bcrypt) format.

While I have you here...

Before you contemplate any elaborate new plan to improve the protection of your SSH keys, consider that long-lived SSH credentials are an anti-pattern. If you set up an SSH CA, you can issue time-limited short-term credentials that won't sit on your filesystems and backups for all time waiting to leak access to your servers.


> Spoiler: the default SSH RSA key format uses straight MD5 to derive the AES key used to encrypt your RSA private key, which means it's lightning fast to crack (it's "salted", if you want to use that term, with a random IV).

Indeed, I pointed this out in my May 2009 talk about scrypt (http://www.daemonology.net/papers/scrypt-slides.pdf). There were even OpenSSH developers in the room!


And a mere four years later they added a format that uses bcrypt... in a PBKDF2 construction? (I haven't found a motivation why, though Ted Unangst's post on it just says "I know about scrypt".)


Scrypt is "excessively parameterized." ;)


I don't mean to sound stupid; but I am, so it comes out that way.

What does this mean? Does it mean it's hard to use, or easy to mess up or something else?


It has a time space difficulty curve that's more complex, but some people like that. I will stipulate scrypt is "better" if I don't have to argue about it. :)


As an aside about scrypt since I saw it mentioned here:

How does scrypt fare against the recent TLBleed etc.? IIRC Intel's claim was that TLBleed only affected poorly implemented crypto. But isn't the memory access pattern of scrypt vulnerable to TLBleed, and hard to make constant?


OT: This is why I still come to HN. At some point the top comment chain is by tptacek (Matasano), cperciva (Tarsnap founder), lvh (Latacora), tedunangst (OpenBSD dev), willvarfar (Mill CPU). And that's a great thread!


LVH and I are Latacora. Matasano is long gone; our joking nickname for Latacora is Matwosano.


Thanks. I'll have to update my tags.


Kind of. If you can sniff the memory access pattern of scrypt, its strength drops to being the same as bcrypt.


My first guess would be that it has "too many parameters/knobs". I guess that could implicitly mean it's hard to use/easy to mess up if you don't know what each parameter means and what different values have.


I guess the same. Not sure why the downvote. The latest crypto functions expect the developer to pick parameters for the memory usage, the time to run and god knows what.

Too low and it's worse than MD5, too high and your login prompt takes a whole minute to check the password.
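In practice the sane way to pick those knobs is to measure on the machine that will verify the password. A sketch using Python's stdlib scrypt binding (hashlib.scrypt, available when Python is built against OpenSSL 1.1+; scrypt_cost is a made-up helper name):

```python
import hashlib, os, time

def scrypt_cost(n: int) -> float:
    """Seconds for one scrypt evaluation with CPU/memory cost factor n."""
    salt = os.urandom(16)
    t0 = time.perf_counter()
    # r=8, p=1 are the common defaults; scrypt needs ~128*r*n bytes of
    # memory, so maxmem must be set above that or the call raises.
    hashlib.scrypt(b"correct horse", salt=salt, n=n, r=8, p=1,
                   maxmem=128 * 8 * n * 2)
    return time.perf_counter() - t0

# Bump n until one check lands near your latency budget (say ~100ms):
# for n in (2**12, 2**14, 2**16): print(n, scrypt_cost(n))
```

Too low and a GPU farm eats it; too high and, as the parent says, your login prompt takes forever.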


I think the history is a little more complex than that. It's not like provos was unaware of key stretching.


The article says it's inherited from OpenSSL...?


> consider that long-lived SSH credentials are an anti-pattern.

Exactly. Consider switching to auto-expiring SSH certificates. You can build your own certificate management using a few open tools or switch to Teleport [1] which is 100% certificate based and doesn't even support keys. Disclaimer: I am one of the contributors.

[1] https://github.com/gravitational/teleport


Thanks for this. I will happily check out your tool, but what are some of the other solutions in this area?


There’s also BLESS and Hashicorp Vault. I have a penchant for Teleport.


Oh! Teleport does look pretty decent now that I've forced myself to look at it after going through this thread.

I mention this mostly for the sort of people like me who read 'SSH CA' and their eyes roll into the backs of their heads and they start rocking and making saliva bubbles at the thought of PAM modules and LDAP servers and so on. But this doesn't look nearly as bad. The Go SSH implementation sounds like a nice ancillary bonus.


It’s probably pretty rotted by now but I wrote a version of this while I worked at Microsoft called ALTAR that’s backed by Azure services.


I just went to the top 20 search results for "how do I generate ssh keys". Almost none of them use the defaults. Almost all of them suggest to use "-t rsa", and some with "-b 4096". The ones using defaults are from Joyent, SSH.Com, and git-scm.com. Since probably nobody is creating SSH keys without using a guide first, we should be able to get websites to update their guides with better default arguments, which will improve things going forward.

Almost all the Windows guides suggest using PuTTYgen with defaults, which gives you a 1024 bit RSA key, and that might be worse in the long run than these password shenanigans.


"-t rsa" uses the same terrible format as the default.


I'm saying that since they mostly don't use defaults, it should be trivial to argue for a few extra arguments.


What would that look like?


I created a gist[1] with the letter I'm sending with the suggested changes. It has all the pages I'm sending letters about and how I contacted them. If anyone else would like to use this script to also send letters, please do!

A lot of places make it difficult to contact them about their docs. I had to create accounts and file support tickets for some, and others only had a generic feedback form. So far Oracle and "w3docs" have been the most difficult; the latter only has a Facebook and Twitter for contact.

This whole process is really annoying. All these sites are giving the same advice. Why isn't there one Creative Commons wiki just for technical writing that people could link to?

[1]https://gist.github.com/peterwwillis/b447ab446a2052f3a6b5669...


Thanks for doing that. Half the time when I spot a mistake or flaw in some docs I just leave it be if the way to contact them seems too cumbersome. I should try harder.


-o, for the newer key format. Or use -t ed25519 to generate ed25519 (elliptic curve) keys.

I do recollect that using RSA keys of 4096 bits slows down SSH more than the gain in security might be worth.


Look, 2048 is fine. Assuming no algorithmic speedups you get 112 bits of security and that’s plenty. But SSH keys are only used to sign: they don’t affect bulk encryption. 4096’s performance is not what’s holding you back.


"Slows down" can also mean connection times. On a computationally weak device doing 4096-bit RSA is far from instant. This is, after all, one of the reasons people are enthusiastic about Elliptic Curve options in this space.

Some people need SSH to move a lot of data, e.g. for SFTP but some people just want their connection to a nearby machine to feel "snappy" and not take a beat to do the key exchange and authentication steps.


What machine are you SSHing from? My laptop does 200 RSA 4096 signatures per second. 5ms vs 1ms doesn’t seem like a perceptible added latency.


I'm not thinking about myself, the desktop I do Social Media from can do almost three hundred, and indeed RSA 4096 seems fine on that PC, but lots of people have crappy under-powered devices like Raspberry Pis. How many can those do? Four? Ten?

We're in the weeds here, we're agreed that if your weakest point is a 2048-bit RSA key you're in unexpectedly good shape, definitely anyone who feels 4096 even "might be" too slow should just use RSA 2048 (or get an elliptic curve algorithm that's nice and fast on their CPU). I was just pointing out that "too slow" doesn't necessarily mean "Not as much peak throughput as I would like". Station wagons full of tapes remain sub-optimal for video conferencing :D
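If you want to eyeball the scaling yourself without OpenSSL: the dominant cost of an RSA private-key operation is one big modular exponentiation. A toy sketch (random numbers rather than a real key, and no CRT, so absolute times are pessimistic, but the 2048-to-4096 growth is representative; the function name is made up):

```python
import random, time

def modexp_seconds(bits: int, trials: int = 3) -> float:
    # Not a real RSA key: just a full-width modular exponentiation,
    # which dominates an RSA private-key operation. (Real implementations
    # use CRT and are ~4x faster, but scale the same way with key size.)
    rnd = random.Random(0xC0FFEE)
    n = rnd.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-size modulus
    d = rnd.getrandbits(bits) | (1 << (bits - 1))      # exponent as big as a private key
    m = rnd.getrandbits(bits - 1)
    t0 = time.perf_counter()
    for _ in range(trials):
        pow(m, d, n)
    return (time.perf_counter() - t0) / trials
```

On a laptop both numbers are small; on a Raspberry-Pi-class CPU the 4096-bit figure is the beat you feel at connection time.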


Add -a 100 for extra rounds of KDF.


This answer seems constructive, but leaving a poor default behavior in a universally deployed security tool is... sub-optimal.


Do you have a guide for home users you could point to that provides clear guidelines for managing your SSH keys?

I have a few home servers, but if one of my devices were compromised I don't think it would take much longer for the whole network to fall.

I'd love an end-to-end example that shows how you're storing everything, in both meatspace and your devices. Do you use hardware authentication devices? How do you handle backups?


If you can put up with GPG cards, gpg-agent knows how to pretend to be an SSH auth socket meaning it can do your signing for you.

At key generation time you can dump the key (once!) and then feed it to paperkey as a backup.


Why would you do this? Isn't it almost just as simple to generate a second software key and paper-key that? The difference between propagating one key to servers or two is a single newline character.


Sure, I think that’s a fine option too.


I think the explanation for most bad defaults in ssh is "redhat" which is kinda insane but I very much stay out of that sandbox. The default can't be changed until existing systems can read the new keys. So like 2028 or something.


For the key storage format, not the key type, wouldn't that only be a problem where you copied a key to a redhat system? Requiring conversion there doesn't sound too bad?


You're basically asking a "you broke my workflow" kind of question. https://xkcd.com/1172/

In high school, I definitely kept password-protected private keys on a USB key that got plugged into whatever machine happened to be available. (Now I am affluent enough to carry around a real Security Key and a trusted laptop.)

If you were the maintainer, would you really want to change the defaults and deal with the backlash of complaints from users who do copy keys around?


Even the OpenSSH people will tell you not to copy private keys around. Non-ephemeral private keys should be generated where they will be used. You can copy the public key...


Okay I generated a key on a computer that I will only be using for the next 3 hours, and may never see again.

How do I install it?

Is that method actually more secure than carrying around an encrypted key?


Yep, there is no point in generating a new key if you can't get the other end to trust it. You could generate a new trusted key every time you log on (and encrypt with password), and invalidate the old key, making sure to copy it to your USB. That would roll the keys, and cause you to be locked out with that key if anyone used it. But a bit involved.

Single-use login codes on paper are probably the easiest way around the problem. https://www.digitalocean.com/community/tutorials/install-and...

Also, you can configure the google-authenticator TOTP module to request key and token IIRC. GA also has OTPW backup codes.


It doesn't take that long to arrive at the conclusion that you should not expose any secret to untrusted hardware.

So creating a one time use key for that computer is probably a good idea, you can revoke it once you are done using it and then it won't cause you any problems in the future.


Why can't it be fixed and then just break for Redhat (until Redhat rolls out fixes/patches)?


obsd is highly break-things-for-security and pick up the pieces later so saying Linux is the reason kind of baffles me.


Is there an openssl command to translate the old key format to the new key format while keeping the same RSA key which works with OpenSSH?


I think

    ssh-keygen -p -o -f (oldfile)
should do the trick.


> consider that long-lived SSH credentials are an anti-pattern.

I don't think that's necessarily true, provided that your keys are:

- Properly encrypted

- Protected by a decent password

- You use ssh agent to avoid 1) copying the key everywhere, and 2) typing your password all the time.

Of course it depends how critical security is. Access to a few dev servers inside the company firewall is not the same as managing your client-facing production infrastructure.


And/or use a smartcard to store your SSH keys. With YubiKey 4 being as cheap as they are, there's little excuses :-)


The user experience of doing this is very bad. Every time I look into doing this I end up with blog posts that describe punching numbers into the GPG CLI, master keys, subkeys, PIN's. I don't want to be a GPG enthusiast, I just want to use my SSH key safely. (No offense to GPG enthusiasts!)


You don't need any of that. This is all you need:

  # Generate key
  $ gpg2 --card-edit
  > admin
  > passwd
  change both user and admin PIN to a secure password (can be the same, it's called PIN but you can just use a regular alphanumeric password)
  > key-attr
  choose RSA, 4096 (or whatever you consider sufficient)
  > generate

  # Enable gpg-agent's SSH support (add this line to ~/.gnupg/gpg-agent.conf)
  enable-ssh-support

  # Add this to your .bash_profile (use GPG agent instead of ssh-agent)
  export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)

  # Export your SSH public key
  $ ssh-add -L


I found an X.509-oriented device relatively easy for SSH (for GPG it is naturally harder to use these non-GPG devices, requiring some kind of bridge daemon).

Assuming Linux or macOS, where your distribution has an OpenSC that already supports your device, SSH is only about 3 incantations of magic:

https://nilsschneider.net/2013/06/20/epass2003-quickstart.ht...

But there is no safe way of getting away from having 1 pin.


On current versions of macOS it’s built in so all you need is to generate the key and add one line to your SSH config:

    PKCS11Provider=/usr/lib/ssh-keychain.dylib
https://support.apple.com/en-us/HT208372


I dislike GPG as well, but I have the OpenPGP smartcard. I've just loaded it with x509 certs. There's OpenSC so I can use it in a browser/VPN/SSH/Apple Mail etc.


That's a side effect of GPG being the back-end technology that the YubiKey-based SSH keys are built on.

If you don't want to have to learn gpg (because why should you?) the master/sub keys, PINs, keyservers, and all that can be dumped, just like ssh-keygen is able to create keys without passphrases - not exactly recommended, but still better than the alternative.


FWIW: if you really, really don't want to learn GPG: Yubikeys will also speak PKCS11, it's a separate applet, and they ship PKCS11 libs for every major platform. We've used it for OpenVPN in the past (before we had wireguard).

If that's better... I dunno :-)


This is what I meant by "elaborate new plans". I like Y4's as much as the next nerd, but if you're going to put that kind of energy in, spend the energy on setting up an SSH CA.


I recently rolled out YubiKey 4s to my whole organization and it was a painless experience. There's not a single file-based key left. Provision the keys, replace the old file-based keys via our configuration management tooling, done.

Wouldn't an SSH CA just introduce a whole different kind of complexity?

- How to protect the SSH CA and its key and make it highly available? I don't want to be locked out of my bastion host after something the CA depended on broke and my certificate has just expired.

- How to authenticate users against the CA? Most solutions I've seen use a longer-lived client-side secret, which is just as susceptible to theft as a regular SSH key, or some sort of OAuth or SAML SSO. A malicious Chrome extension can now compromise the SSO process, you still need a U2F token (like a YubiKey 4) to properly secure the SSO account, etc.

- How to make it work with our SCM and random things like a storage appliance and various JunOS devices, which support regular SSH keys, but don't know about SSH certificates?

I would assume that Latacora is using an SSH CA, and I'm legitimately curious how you approached these challenges.


You raise a bunch of valid points. Just to answer the one about authing against the CA (I'm in the LV airport for BlackHat and my laptop battery is about to die): yes, still get U2F tokens. You're right that malicious Chrome extensions will mess you up, but that's true for everything else you run too: you need to enforce Chrome extensions via MDM regardless of what your SSH key story looks like. I consider having to SSO in a good thing: it means onboarding/offboarding/audit logging is easier.

The context for SSH CA/Teleport is SSHing into a box. When you do actually need an SSH key, Yubikeys are the best answer. (I like using gpg-agent's ssh-agent emulation mode because I find it works better on Macs, but that's irrelevant to the security analysis.)


I agree that for a large organization - which has the necessary pieces in place - it makes a lot of sense to use SSO for SSH access. The SSO is mission-critical anyway, there might not even be direct SSH access except via an authenticated proxy, there's centralized audit logging and intrusion detection, ...

However, I would argue that unless this is the case, operating an SSH CA is riskier (both from a security and an availability point of view).


We like Y4's just fine. For the very limited set of machines we maintain that we ever need to SSH into, we use Y4 SSH keys. But we don't promote them to clients; we're working on rolling out short-lived SSH credentials with them.


Is there a good guide or tool out there for implementing/managing SSH CAs?

I don’t really play in this space, but I have some business partners who were struggling with it.


I think the easiest way to get started is to deploy Teleport or BLESS.


> I recently rolled out YubiKey 4s to my whole organization and it was a painless experience.

A writeup would be awesome.

I have been trying to set up keys for road warrior VPN as well as our access to cloud, and it has been FAR from straightforward.

Generally, everything consists of "Somehow communicate with a RADIUS server" that Yubi no longer supports (YubiRadius) or "Trust Google".


I think most (if not all) SSO solutions can do push-based MFA these days. It's not perfect by any means and I'm sure plenty of people will blindly approve any requests to the app but it's a lot more secure than without it and it's pretty convenient.


One way we reduce the risk of CA key compromise is to use intermediaries for signing most of our stuff. Our implementation has Vault using intermediate certs to sign users' SSH keys; these SSH certs are short-lived and we use them for signing in to ephemeral app hosts.

One day we had a bit of time skew because of a bad ntpd & lo, we couldn't log in because of our short-lived certs :-)


If I use an ssh key to connect to computers owned by multiple organizations and I can't control how those servers are configured can I still use an SSH CA? For instance I use my key to connect to my servers but also computers at work, super short lived sessions when I'm debugging some embedded device through SSH (those tend to be wiped/replaced a lot so I keep having to ssh-copy-id my key to them), github and other "cloud" hosts etc... Does SSH CA make sense in such a configuration?

I'm currently using a yubikey to hold my key (in the GnuPG smartcard applet) so I felt pretty confident security-wise but now you're making me doubt.


> If I use an ssh key to connect to computers owned by multiple organizations and I can't control how those servers are configured can I still use an SSH CA?

Not really. In general you need the same level of access for setting up the sshd host key, for setting up/enabling sshca: the server must trust the ca pub cert, and there's some configuration needed wrt principals (although AFAIK you can/should embed most of that in the certificate).


If only there was a certificate authority management tool that was convenient to use from command line and through an API, so it could be made into a company-wide service.

There is this old tinyCA that comes with OpenVPN, but it's awful and can't do much (I don't even remember if it could revoke a certificate). There are a few instances of WWW-only CAs, and there are desktop/GUI applications. But command line? /usr/bin/openssl only, and it's unwieldy. The situation is even worse with CA libraries.

People like to fetishize OpenSSH's CA (for both client keys and server keys), but there's still a lot to do before it becomes usable. (Though the same stands for the traditional save-on-first-use method, honestly.) You're basically proposing to deploy software that maybe will be usable in a few years, with a big "maybe", because until now it hasn't materialized.


Completely agree. Most are horrible enterprisey java stuff. I currently use django-ca.


Hashicorp Vault has a great CA.


Yes, I've seen. Even more unwieldy than OpenSSL's, and you need the whole Hashicorp thing, too.


One does not exclude the other. If it did then a hardware token like Yubikey easily beats a CA.


How does it "easily beat" a CA? It's still a long-held credential that you now have to manage across a fleet instead of a single SSH CA. You can get that short-term credential via a long-held key; that long-held key can even live on a Yubikey if you use U2F/WebAuthn. You get all of the security and usability benefits of U2F/WebAuthn as well as the off-boarding/on-boarding/compliance benefits of tying everything to SSO as well as the audit benefits of an SSH CA.


You can work around not having a CA by distributing keys. I'm exaggerating when I say "just write a script", but it's not hard. You cannot work around not having hardware keys.

SSH CAs improve efficiency and convenience.

A hardware key that requires touch per login is a game-changer. When you go to lunch you know that your key did nothing, no matter how compromised your workstation is. When your machine is turned off you know that there's no copy of the key somewhere. That key cannot be used.

A software cert-based key may be valid for only hours (if you set it up that way), but that means that there are 7 billion possible attackers who could use your key. They could break into your workstation and wait for the screensaver to kick in, and then log in to every single host you have access to, and do their naughty business.

For a hardware key someone has to take a plane from China and break into your house to use your key.

> It's still a long-held credential

Doesn't have to be. But if it is, so what? Given physical locks that are unpickable and keys uncopyable, would you rather instead change locks every day, where the keys are copyable? (even if cost of changing locks scales O(1) with price)

> that long-held key can even live on a Yubikey if you use U2F/WebAuthn

Like I said, one does not exclude the other. You can't prove that A is better than B by saying A+B is better than B.

There's also devices that don't support SSH certificates (e.g. embedded devices), but supporting pubkeys is vastly more common.


Technically supporting public keys is mandatory. Of course not only can real world implementations ignore a MUST in the RFC they can also, and more conveniently, just reject all proposed public keys, leaving public key auth as just a stub.

One of the servers I've had the misfortune of using responds to even proposed public key auth by failing all subsequent authentication on that connection. So you need to immediately do password auth if you want to get in. Brilliant.

I presume the WG specifically wanted to see SSH with public keys deployed widely rather than a world where most places upgrade from telnet to SSH with passwords and think that's the job done.


> Technically supporting public keys is mandatory.

Yeah, but even if the sshd supports it that doesn't mean that the product has a way to configure it. There may be no non-volatile space for a pubkey.

I have encountered it, but yeah it's rare.


I agree. Hardware token makes a huge difference here because it ruins attack momentum.

The rest of the attack is very technical, very network applicable - copies of key files, guessing passwords - your adversary may be the far side of the world, and they may have done all this in seconds.

But suddenly a hardware token means ground assets. Different skill set. Some adversaries may be able to buy all the Cloud Compute and Network Bandwidth they can ask for (especially if it's all with somebody else's credit cards...), but putting even one black bag job together in a foreign country is beyond them. And even for adversaries that are able to do this you can't just spin up ground assets instantly.

Yes, in "Rainbows End" Rabbit actually does (if you pay attention) build a ground team to execute the lab infiltration plan despite apparently not having any corporeal existence. But that's science fiction. Here and now that's not how it works.


Aren’t Y4s still long lived credentials?


> there's little excuses

Yikes that statement has a lot of moral overtones. Is it a good idea to use a Yubi? Arguably yes. Does one need to find "excuses" for not doing so? The vast majority of the time, no.


I think the intent was more that security keys greatly reduce the friction associated with more secure practices, reducing the reasons to use less secure ones.


No moral overtones intended, the point being that the low cost and the form factor significantly lowers the barrier to entry.

I've been using a GPG smart card for a long time, and it required a separate card reader, and both card and reader were easy to break. A YubiKey 4 fits on a keychain, is hard to break (though some of my colleagues succeeded) and you just plug it in.


I like my smartcards. Bummed they didn't catch on. How do you feel about losing the PIN and using a password? I like how, if I'm at my desk using a hardware PIN pad, it's much less likely I'll have problems.


For SSH, it's fine. The hardware PIN pad does not show which server I'm logging into, so it adds little extra security.


I've not been able to find a good tutorial on how to store keys, plural, on a Yubikey 4 or any other smartcards. They're all limited to storing one, maybe two "authentication" GPG keys. Would you have some pointers?


That’s right: the number of keys you can store on it is limited. That’s one of the reasons we think you should just use them for identity, not temporary authorization.


The KDF used for password-based symmetric encryption (gpg -c, private keys on disk) in GnuPG is also terrible - but at least it is an iterated KDF of a fast function, as opposed to one-shot. I would happily put up funds for a bounty to fix the default in GnuPG but I don’t know where to post/fund such a thing.


You would have to change the OpenPGP standard - other implementations would have to follow that.


The iteration count is serialized into the message.
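Right - for reference, the KDF in question is RFC 4880's iterated+salted S2K: hash a serialized number of bytes of (salt || passphrase) repeated. A sketch assuming the one-octet count field has already been decoded into a byte count (function name is illustrative; GnuPG's old default hash was SHA-1):

```python
import hashlib

def s2k_iterated_salted(passphrase: bytes, salt: bytes, count: int) -> bytes:
    # RFC 4880 sec. 3.7.1.3: hash `count` octets of (salt || passphrase)
    # repeated; the count is stored in the message and is treated as at
    # least len(salt + passphrase).
    data = salt + passphrase
    count = max(count, len(data))
    full, rem = divmod(count, len(data))
    h = hashlib.sha1()
    h.update(data * full + data[:rem])
    return h.digest()
```

Even at the maximum encodable count (around 65 MB of input) that's a fraction of a second of SHA-1 on a modern CPU - iterated, but still a fast function, as the parent says.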


Out of curiosity, how can you have individual user logins on a host while delegating authn/authz to a CA? All the examples I've seen thus far involve a shared login, whereas I find it's a lot easier to audit hosts when a unique user ID is in the logs, last/w/who output, etc.


BLESS allows you to specify the remote user in the certificate. You'll need to customize the certificate-issuing program (a small Python Lambda program).


I think I need some more information. What I'd like to do is to have a signed certificate that only lets me into the "otterley" account on the remote host, while not letting "jsmith" into my account (only hers) or vice versa.

My understanding of CA principals is that they identify the user or role that requested the signing, but not necessarily the login ID on the server that is allowed to be logged into. Ideally there'd be a 1:1 mapping between the principal and the login ID on the server. I think there's some sshd configuration that needs to be done, but I haven't seen any clear instructions for doing so.

Do you know how to accomplish this?


Have a look at eg:

https://linux-audit.com/granting-temporary-access-to-servers...

And re-read the (imnho rather obtuse) ssh-keygen man pages.

[ed: and maybe this too: https://framkant.org/2017/07/scalable-access-control-using-o... ]


Thanks! So it looks like AuthorizedPrincipalsFile/AuthorizedPrincipalsCommand gives us a method for doing this. This would have to be combined with some sort of user ID management system still, like distribution of /etc/{passwd,group} files, LDAP/AD, etc.


There's also: https://github.com/uber/pam-ussh/blob/master/README.md

https://medium.com/uber-security-privacy/introducing-the-ube...

And for the ssh ca part, bless and teleport (as others have mentioned).

There's the option of putting stuff in ad/ldap - but if you're already using ad, kerberized ssh (and sudo etc) might be the way to go.

I like the idea of a system that's simpler than ad/ldap+kerberos - and ssh certs fits most of the bill.

The challenge becomes auth/authz beyond just login - ldap basically requires ssl ca anyway - and at that point, especially with kerberos set up - I think one might be better off sticking with one complex auth/authz system rather than two...

At least it should be possible to avoid passwords with something like: https://wiki.freeradius.org/guide/2FA-Active-Directory-plus-...


> consider that long-lived SSH credentials are an anti-pattern

With due respect: have you considered the myriad systems where you need to upload your SSH key to a UI? If my key is short-term then I need to do that all the time. I can't set up an SSH CA on GitHub, for example.


Yes, we mentioned GitHub repeatedly in these threads.


GitHub has APIs for automation (e.g. via Terraform). Granted, not every web-based service does, but if you were sufficiently determined to use an SSH CA then I'm sure you'd find a way (or an alternative service that did support your workflow).


The fact that GitHub does it, too, doesn't make it any less of an anti-pattern.


Why hasn't anyone done for OpenSSH what Jason Donenfeld did with Wireguard?


Oxy is trying: https://github.com/oxy-secure/oxy

But it’s important to remember that SSH is a lot less fucked overall than the VPN situation was. People mostly grok SSH. SSH mostly doesn’t negotiate BF-CBC.


I disagree! SSH is not great! Because it doesn't need to support every browser on the Internet, new crypto percolates through the ecosystem much faster than it does in TLS. But it's still a janky protocol with dumb options, and a Noise-based no-negotiation alternative that fulfilled the same interface would indeed be super useful.


I don’t think we actually disagree; I said “a lot less fucked”, not “great” :) You can get real AEADs with Ed25519 in SSH easily; and some of the dumb options problems may still apply with Oxy, depending on what you’re talking about.


I have been using tinysshd for a number of years and I am hooked. Keen to experiment, I have also been using ed25519 keys instead of rsa since this option was added to openssh. No one told me to use tinysshd or ed25519 keys. As someone else pointed out, it seems like most "guides" on ssh, even ones written after ed25519 was added, still advocate rsa keys.


use wireguard with telnet/ftp bound to it


I guess that should be wireguard with rsh.


I put my SSH keys inside KeepassXC (regular Keepass supports this via plugin). Way better encryption and it automatically manages the adding of keys to the ssh-agent.

All my other SSH keys I don't have in there are plaintext on the disk. The ssh askpass is in userspace and easily spoofed; any local attacker could fish it out anyway. Full disk encryption at rest ought to be enough for most people.


Would you say the "you don't need to migrate to Argon2" culture is actually a bad thing?


I think I’m missing context. You don’t need to migrate from what? If you have reasonable scrypt or bcrypt I’m not gonna tell you to move.


Monitoring thread for mentions of tooling. Seen Teleport and Vault -- no mention of bless/blessclient yet.


A rogue npm package is my nightmare scenario, those packages are like the only (or I wish..) untrusted code my OS is running.

That's why I came up with a small script/service that waits for an "inotify" event on a honey-pot SSH key (and some other files like ~/passwords.doc) and immediately shuts down the computer on any kind of access to those files.

    #!/bin/sh
    # Block until any watched path (the honey-pot SSH key, decoy files, etc.,
    # listed one per line in killer.cfg) is opened; log the event to
    # killer.log, then power the machine off immediately.

    inotifywait -e open --fromfile ~/.config/killer/killer.cfg --outfile ~/.config/killer/killer.log

    if [ $? -eq 0 ]; then
        poweroff
    fi


This is a neat idea. Though in my case an accidental grep through my homedir would be incredibly annoying.


Run untrusted stuff in a VM/container.


How does that solve an accidental grep that touches a file being monitored? Grep and your key are both trusted, and there are numerous ways someone can try to read the key despite only running specific things in a vm (and forgetting that VMs aren't bulletproof)


Mac "containers" run in VirtualBox with exposed paths, which is vulnerable to symlink attacks among others


I fear this might be too slow to stop an automated attack. Passing -f to poweroff, so that shutdown(8) is not called, might help a bit.


You could also just down your network interfaces.


Don't forget about ransomware. Imagine the rogue package encrypted your files and demanded payment to get the decryption key. With public key encryption or a symmetric key that is securely deleted, there is no need for network access for the attack to succeed. A fast shutdown could protect most of your files.

https://en.m.wikipedia.org/wiki/CryptoLocker

Although I think the best way to deal with ransomware is a strong backup policy, not a tripwire shutdown trick.



Good point. I haven’t thought about it much since I’m on macOS and Linux but it’s not like they’re immune. The idea of a node module encrypting my home directory (even with backups) makes me a tad nervous.


"Are you sure you want to quit? iTerm2 is still open."


You don't have to run untrusted npm packages for development. I do all of my development on development servers and just sync over node_modules for IDE autocompletion.


Agreed, I don't have to, old habits die hard.

I guess I will just set up a development VM (as @pas suggested) instead of remote development though, so thank you to both of you.


And how can you tell if that was the source of your power off?


By examining the timestamp on the log file (defined by the --outfile parameter).


Powering off will destroy valuable evidence, though, most payloads live in-memory nowadays. Maybe just disable the NIC?

It's not going to do much against a determined adversary, anyway. If he's prepared to turn the NIC back on, he would just kill your inotifywait first.


Honestly, I didn't give it any serious thought, I came up with the idea and quickly executed it as a PoC in a couple of minutes after that compromised npm package incident a while ago.

Disabling the NIC is a nice idea too, it should also be much quicker than shutting the machine down, so, please, feel free to make the script better.

The only "clever" thing I did was making sure the honey-pot key comes first in the ~/.ssh directory and that it's big (to gain time to poweroff while it's being transmitted somewhere).

Clearly, this is not a protection against a determined adversary.
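The decoy-first trick above can be sketched in a couple of lines; the file name `id_aaa_rsa` and the 8 MB size are arbitrary illustrative choices, assuming the exfiltration code walks ~/.ssh in lexicographic order:

```shell
# Create a large decoy "key" that sorts before id_rsa in ~/.ssh. Its size
# buys the inotify watcher time to power off while the file is still being
# read or transmitted.
mkdir -p "$HOME/.ssh"
dd if=/dev/urandom of="$HOME/.ssh/id_aaa_rsa" bs=1M count=8 2>/dev/null
chmod 600 "$HOME/.ssh/id_aaa_rsa"
```

The decoy path would then also go into killer.cfg so the watcher fires on it.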


I am not familiar with Node or npm. Is there something specific to Node's package system that requires running untrusted code?


Not really. A lot of other package managers are vulnerable to the same kind of attacks seen on node/npm.

It's just that a lot of npm packages can have an excessively long dependency chain, which makes them harder to audit. And the npm ecosystem is massive.


Not really. Most dependencies get RCE one way or another, just not necessarily at installation.


Sorry what is RCE?


My bad: remote code execution.


Okay. Sure. We should move beyond the kindergarten stage of KDF selection. Still, look at the threat model. We're talking about the encryption of the private key at rest. In ~/.ssh, 0700. If an attacker can even read your private key file, you've probably already lost, encryption or not. That attacker can probably change your .profile to include a keylogger. You lose.

I mean, sure. Change the KDF default to something modern. But the threat we're discussing is marginal, and it's not as if the security of the SSH network protocol, which is paramount, is under threat.

If you care about this class of attack, you probably care about it enough to use an SSH CA anyway.


The blog post explicitly addresses this, but you don’t seem to interact with its point. We have evidence that smash and grab attacks exist, and since they affect more people, you’re more likely to get screwed by something like the recent eslint-scope thing than a targeted attack where the attacker does shit to your .profile.

That said; yes: you should have long-held auth on a hardware token and then use an SSH CA for temporary auth.


This argument seems to hinge on the following statement that I don't understand:

> an SSH key password is unlikely to be managed by a password manager: instead it’s something you remember.

Why is that the case? I've always been just as likely to use a password manager for my SSH key password—after all, I'm usually prompted for my SSH key password from my terminal, which accepts pasted passwords just fine.

Am I missing something?


Here's my anecdata:

I didn't know much about ssh keys when creating mine, so I followed directions online. The directions I followed created my key with no password. Soon my Mac encrypted the ssh key by itself, without my intervention (!), using my Mac logon password. The encryption format was the old ssh format the article complains about, even though my openssh supported the new one. The password I use to sign into my computer is not grabbed from a password manager.


What? Are you saying macos changed the key to be encrypted SSH format? That doesn't sound right. In fact, implausible.


>You might ask yourself how OpenSSH ended up with this. The sad answer is the OpenSSL command line tool had it as a default, and now we’re stuck with it.

Given that (as far as I know) the decryption is handled only on the client side, why does an old default matter? Does it break any kind of compatibility to change that? Why only use the better algorithm for EC keys?

And even if it did break compatibility it's not like it usually stops the OpenSSH folks, old ciphers and hash functions are deprecated regularly (I have to whitelist a bunch of ciphers at work to connect to legacy systems running an old dropbear).


Some people move private keys. You shouldn't, but people do. And so they expect that an encrypted private key file from their brand new Mac will work when pasted into a five year old CentOS install.


Seems like simply issuing a warning on key creation "this key uses a newer, safer encryption scheme but as a result might not be directly portable to older versions of openssh, if that's an issue use <someflag> to use the older, weaker scheme instead" would probably solve the problem in 99% of the cases. After a few years you might remove the warning and move on.

For this type of application, security should trump backward compatibility, especially since in this case the breakage is fairly limited and easy to work around.


I'm assuming that practically everybody who is a developer on ssh-keygen knows about this. This is not a bug, right?

It's basically bad usability, where the default is unsafe, and in order to make it safe, you have to know what you're doing. Or is it like the author writes in 'TFA' that "we're stuck with it", because it used to be a default in OpenSSL?

Some people will say, "if you're using ssh-keygen we can expect that you know what you're doing". But this is a wrong assumption that gets people into trouble. I'd even say a minority of people who invoke ssh-keygen really know what's going on in detail and what providing a flag like "-t ed25519" actually does. I certainly don't, and I've had to generate quite a few ssh keys so far.


The new format has been around for a pretty long while (2013). I think it makes sense to upgrade to -o. While it's obvious to developers (I hope!) it's not obvious to most end users I think :)


not only does it seem like this is a better default, this option is awfully well-hidden on my system; doesn't show up in the short help or the example usage in the man page; gotta scroll down to find it in the exhaustive list of flags


The utility also does not warn about insecure defaults.


It's worth noting that some large cloud providers (for example, AWS) still do not support ed25519 keys.


I believe you can change to the new format (still not ed25519 though). After generating your keypair in the AWS console, do "ssh-keygen -p -o -f aws.pem" and add a strong passphrase generated from a password manager.


It seems like a simple warning message would go a long way, esp if it included the current recommended command line arguments.


This is a decent article with a click-baity title.

The argument that it's worse than plaintext hinges on you both using a weak password and using the same password for something else. If you don't do either of those things then it's much better than plaintext, although technically, if you use the same strong password for something else, then you may have weakened that password by using it for old-format ssh ... but it still has to be brute-forced.


If you select from a 10k word list, you get 13.3 bits of entropy per word. 4 of those get you 53 bits of entropy. One p3.16xlarge does 450GH/s. So, as a first order approximation: 2^53 / (450E9) ≈ 2E4 seconds, or about 6 hours. Sure: you don’t get to keep that performance, but we’re talking a day. What kind of passwords do you have that make it “much better”? Or are you saying that already counts?
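The arithmetic above can be checked mechanically; a quick sketch using the figures from this comment (10k word list, 4 words, 450 GH/s):

```python
import math

def crack_time_hours(wordlist_size, words, hashes_per_second):
    """Worst-case hours to enumerate a diceware-style passphrase space."""
    bits = words * math.log2(wordlist_size)        # entropy of the passphrase
    return 2 ** bits / hashes_per_second / 3600.0  # seconds -> hours

# 4 words from a 10k list against a single 450 GH/s cracking rig
print(f"{crack_time_hours(10_000, 4, 450e9):.1f} hours")  # roughly 6 hours
```

Adding a fifth word multiplies the work by the wordlist size, pushing the same rig out to years.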


53 bits of entropy is basically the same entropy as a 9 character alphanumeric password. That's not a strong password to begin with and makes this calculation senseless. Redo that calculation with 71 bits (a 12-character alphanumeric password, still not really long) and it takes close to 200 years, making it impractical.

The problem with these word lists is that they yield rather low entropy compared to random strings when the attacker knows a word list is used: one word carries only a little more entropy than 2 alphanumeric characters.


Do you have evidence that passwords for SSH keys are as good as you claim?


...conversely, do you have evidence most people pick their passwords from a 10k word list?

And the Oxford dictionary has ~171k words, so I'm not sure where a 10k list even comes into play


If you look at Oxford you'll never have heard of 80% of those. People mostly know 15-30k words, but their productive vocabulary is smaller than their receptive, so for anyone coming up with random words 10k sounds pretty reasonable. Once you go beyond about 35k in word frequency lists, it turns into pretty technical/archaic stuff.


I think haveibeenpwned is good evidence people choose weak passwords and reuse them.

DICEWARE uses ~8k words, chosen so that people can remember and spell them.


haveibeenpwned is great! but it doesn't have anything to do with password strength. it just tells you if your password hash has been leaked/stolen.

edit: seems I'm wrong and they do actually have a passwords section: https://haveibeenpwned.com/Passwords


haveibeenpwned provides no such statistic, other than reuse (75% of original set) -- a breached password is unrelated to its strength.

The simple fact that the HIBP password list contains 512 MILLION unique passwords...


> a breached password is unrelated to its strength

But most of them were weak. And most of them have been cracked by hobbyists.

https://blog.cynosureprime.com/2017/08/320-million-hashes-ex...

512 MILLION is not a large number of passwords to check. What do you think the hash rate of modern GPUs/ASICs is?

https://www.troyhunt.com/86-of-passwords-are-terrible-and-ot...


You're forgetting the salt added. Compared to the original post I commented on, 512 mil is way larger than 10k. So it's 512 mil known passwords, times however large the salt is...it's not trivial -- and their 10k list was expected at 6hours...

Extrapolating, 10k:6hrs == 512m:35 years


A GTX1080 gets 25GH/s on MD5, that is 1.5x10^16 per week. The salt is known -- makes rainbow attacks impractical, but doesn't reduce the hashing rate.

Martin Kleppmann explained the problem back in 2013: https://martin.kleppmann.com/2013/05/24/improving-security-o...


I conveniently forgot the salt is public ::facepalm::

I also was using the OP numbers, verses trying to do any math or research myself first ::second facepalm::


It alright, that's why we have HN - not to be always right first and fastest, but to go deeper.


No, clearly most people don't pick their passwords from a 10k wordlist. My argument doesn't rely on them doing that: my argument relies on them picking bad passwords. I am using 10k wordlist diceware-style passwords only as an entropy estimation of a pretty good password.


If you're asking what I'm using myself, I use 5 random words from the COED (240,000 words). If you want to stick to a 10,000 word list you can just up the word count. Every word you add makes it 10,000 times stronger.


Good for you -- but I hope you'll agree that even 4*rand(10k) is a pretty gracious assumption for password quality. The argument is not "under literally all circumstances is it worse", but rather that "it's worse". For that argument to hold, it just has to be plausibly worse, at least on average. All empirical data we have on password quality points in that direction.


> but I hope you'll agree that even 4*rand(10k) is a pretty gracious assumption for password quality

I do. However, for it to be "worse" than plaintext you still need to be using it somewhere else, and (previously unmentioned) for the attacker to have a feasible way of attacking the other use. For instance if it's your desktop password, an attacker may have no way of leveraging that (not that I'm saying this overlap is a good thing security-wise). Whereas if they have your unencrypted ssh key then they 100% have your ssh key, no work required, no AWS instance, no cost.

I don't want to get too bogged down in this. I was using the old format, and I've now changed, so thank you!


I suppose the author meant that the illusion of proper encryption is worse than simply storing the password as plaintext - especially when the encryption is presented as an opaque string instead of clearly stating that it's using MD5.


> I suppose the author meant that the illusion of proper encryption is worse than simply storing the password as plaintext

In this case, the alternative would be not storing the password at all. Note that "plaintext" in the title refers to the SSH key and not the password protecting the SSH key.


I'd suggest following the excellent guide at https://github.com/drduh/YubiKey-Guide and using one of your GPG subkeys (kept on a hardware device like a YubiKey) for SSH authentication.


I wrote this; happy to answer questions.


May I suggest that you change your blog colors? Light grey on dark beige is close to unreadable at any size without resorting to Firefox reading mode...


and thin letters on my screen - my eyes are straining reading it


I just spent a bit of time tweaking it; hopefully that's better :)


That makes a huge difference, thank you!


That's much better, thank you :)


Besides compatibility, is there any reason not to use Ed25519 keys for everything?


Nope! Ed25519 is great and honestly I think the compatibility concerns are overblown. It was introduced in 2014. If you are running four year old SSH you have other problems, like probably RHEL or some nonsense.

That said: Ed25519 is a local optimum: the bigger win is to not have long-held credentials at all, and instead use an SSH CA or something like Gravitational Teleport. Doesn't work for GitHub though.


In a world where you don’t have long held credentials - how do you get the short term credentials?


We do this:

Master user records in an identity/SSO system. We use ADFS.

Use a protocol which provides a portable “signed”, temporary credential. We use SAML but OIDC also works.

Have a command-line client which auths to the SSO and then relays an identity proof to a trusted component (lambda, cloud function, dedicated instance, whatever your policy says is ok). The identity proof is just a saml assertion or oidc token.

Have trusted component validate the identity proof from SSO and generate time limited credentials (we issue ssh, kube and iam). Our client then auto-xfers those creds up to jumpbox but you could drop them on workstation instead if your policy allows.


This is roughly what I'm talking about when I bring up SSH CA short-lived credential workflows.


I currently work in academic environments and I don't have much faith in the SSO systems. I thought about this, but often the SSO credentials are also the email password – the one password that gets entered and stored just about everywhere for the average user. I'm not willing to bet the security on a user's main login/email password, which seems to be a common setup. With that password or even just access to a user's main mailbox, a smart attacker might get through multiple layers of security by simply convincing services or staff to reset credentials.

Though it might work with a second, more limited (not-so-)SSO system for specific users and environments.


You're right that shitty systems train users to be phishable -- by typing their password all over the place -- all the time, not even counting the poor default password hygiene most users have. This is one of the reasons U2F/WebAuthn is so incredibly valuable.


Agree, SSO failures train people to type their domain password everywhere. Also used for proxy login, sent out via SMBv1 etc.

A jumpbox with 2FA might be workable. Servers only accept logins from the jumpboxes, but you don't have to get 2FA working on them. Essentially it's an internal VPN gateway. (Be on the lookout for the knobheads in IT that set up parallel login mechanisms without 2FA, like mounting the file share, virtual desktop logins from the hypervisor, Kerberos auth.)


I love SAML (not the XML part), it gives me one place where I can concentrate. My SSO first tries kerberos (if they've logged into their Windows machine using AD), x509 for smartcards, and if all that fails, username+password+totp. Password login is blocked without a TOTP token provisioned. Also it's one place I can stick fail2ban rules.


If I understand correctly: with long-held credentials.

Basically, you use long-held credentials to generate short-term credentials, and then use the short-term credentials themselves to access whatever server or service you need.
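Mechanically, OpenSSH's ssh-keygen can play the CA role: the long-held CA key signs a short-lived certificate over the user's public key. A minimal sketch (file paths, the `alice` principal, and the 8-hour validity window are illustrative choices, not a prescription):

```shell
# One-time: generate the CA keypair -- the long-held credential; guard it well.
ssh-keygen -t ed25519 -N '' -f ./ssh_ca -C 'example CA'

# Per-session: generate (or reuse) a user keypair...
ssh-keygen -t ed25519 -N '' -f ./id_user -C 'alice'

# ...and sign its public half into a certificate valid for 8 hours.
# -I is the certificate identity, -n the principal(s) it may log in as.
ssh-keygen -s ./ssh_ca -I alice@example -n alice -V +8h ./id_user.pub

# Inspect the resulting certificate (written to ./id_user-cert.pub).
ssh-keygen -L -f ./id_user-cert.pub
```

Servers then trust anything the CA signed via a `TrustedUserCAKeys` line in sshd_config, so no per-user authorized_keys entries need to live forever.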


You could have a key rotation process where you use the old key to install the new key and delete the old key. But it seems sketchy because if you don't rotate in time the old key expires and you're locked out. I think in practice you'll always end up with a super-duper secret, rarely used, long term key. Or have some other backdoor like root password, LDAP/Kerberos, ADFS, etc.


RHEL 7 (and derivatives) do support Ed25519 keys.


The only thing I can think of is that no smartcard/usb-key that I know of support them. Please if someone knows of any, let me know.


The OpenPGP 3.3 card has ECC but not Ed25519 :(

FYI it supports [0]: ansix9p256r1, ansix9p384r1, ansix9p521r1 and brainpoolP256r1, brainpoolP384r1, brainpoolP512r1

[0] https://gnupg.org/ftp/specs/OpenPGP-smart-card-application-3...


I think some OpenPGP smartcards like the GnuK might support it, but no PIV mode cards AFAIK. And I don't know if the components higher up the stack (gpg-agent, ssh-agent, PKCS#11 drivers, etc) can make use of those keys. Each new key type requires protocol support up and down the stack.


Yes, Ed25519/Curve25519 has weird cryptographic choices that make it difficult to do certain things. I wouldn't use it for anything other than the specific use case it was designed for: key agreement and single signature authorization/verification.


Elaborate? I thought Ed25519 was a general deterministic signature scheme, and Curve25519 a general key agreement scheme.


And yet people have used it for ring signature authorization schemes (CryptoNote/Monero) which has directly led to inflation bugs (permitted double-spends) and an inability to have key derivation for secure long lived wallets.

That’s due to Curve25519 having a non-unit cofactor. There are a half dozen other weird properties of the curve, chosen for small efficiency gains or whatnot, which also might lead to security holes in any application that is not the single-signer or two-party key agreement use case.


No, you should use Ed25519 keys wherever you can.


Could you maybe elaborate why this would make attacks better than pw brute-force feasible?

I get that this isn't good practice, but plain brute-forcing a 20 character pw (~120 bits of entropy as alphanumeric, close to md5's 128-bit output size) is still impractical even if it's a low-cost md5 calculation.


I’m not sure I understand the question: most places where you try a password you don’t get an offline MD5-speed oracle. So two ways: it’s faster than say cracking a PBKDF2 dump, and it’s offline, unlike, I dunno, a web password prompt.

You’re right that you can imagine passwords that you still can’t enumerate. But suggesting that people have a non-reused memorized 120 bit password is, uh, optimistic.


The chances are a lot better among the list of people that actually use SSH keys.

And this should be one of the top priority passwords, so it's not too hard to imagine it being in a password manager or using a similar quality.


Or it's just their local password with keychain-integrated ssh-agent so they never type anything. I think the chances of that (which is sort of the premise of this post) are much, much higher.


I'm not saying it is literally impossible for this to be safe. I'm saying that in practice, most of the time, it is not. A 120 bit password is not normal.


A password manager is not 'normal' but it's common enough that you shouldn't be classifying it as negligible to conclude that these passwords are worse than nothing.

Also the 120 bits was an example and a good amount more than necessary. A 12 character password isn't exactly standard but it's not ridiculous to expect. Once you exclude the passwords that are so bad even bcrypt can't save them, the number of users where the password algorithm makes a difference starts to look a lot smaller than 100%.


I wasn't sure I did understand the attack vector correctly and if there are some known vulnerabilities that make cracking this MD5 approach faster than brute-force.

In other words I wanted to know, if my SSH keys with md5 algorithm are fine, if I use a high entropy pw (which I do, especially for my ssh keys).

Sure, some people will use short and reused passwords for ssh keys, but people using password-protected ssh keys are generally far more security-aware than the average user. Password statistics are usually based on password-database leaks from average users and/or sites that are not worth protecting.

There is an interesting short paper about password meters (I'm traveling without a laptop right now, but can link it later) which found that people cared more about the strength of their password if the login it protects is more important.

Combining these two points I think the percentage of ssh key users that use at least 12 character passwords is rather high, but of course I have no evidence. I spoke however about prod pws with some devs at my company and the answers ranged from 12 to ~20 chars and completely randomized 40chars in pw managers. Nothing especially short. You can also copy your ssh key pw from a pw manager.

You still make a good point in your article and using MD5 is not ideal, but I was missing information for the evaluation of risk I'm in. High enough entropy passwords seem not only better than plaintext, but actually still safe to me when MD5 is used.


Why not use a decent work function and not have to remember so many chars? I'm keeping track of >20 high complexity passwords that mostly rotate regularly, and my brain is full.

Anyway, it doesn't matter that your password is ygGucg,guc52f when your colleague's is bigbum99.


If you have so many passwords, why not use a password manager?

Also there are some ways to create memorable and long passwords. E.g. I'm using short pictureable phrases and their translation in another language, sometimes adding a numeral if required. Example: "2YellowChopsticksZweiGelbeEssstäbchen" (even with spaces if allowed)

Very easy to remember for me, very high entropy, decent entropy if the pattern is known and requires a hand-crafted dictionary attack that even needs decent translation. E.g. in the example above chopsticks has two common German translations and the ä can also be written ae. Bonus points if you use it for a language you are currently learning.


The passwords are on different systems, you sit at a console. You have to change them regularly. I can use long DICEWARE style passwords for SSH, but I'd rather just have it use scrypt.

Though I have thought of having a mechanical typing device, actuators for each key, that would just type them for me. But it would be conspicuous.

I do use keepass for all the hundreds of other passwords where possible.


Am I correct in my understanding that the vulnerability depends firstly on the leaking of the private RSA key? (as a rogue NPM module could do...)

How can I tell if my file is vulnerable once leaked?


Did you use a simple password? It's vulnerable. If its semi-hard, and if you can't easily roll the key, try to crack it. Unless the attacker is targeting you specifically, they will give up after $x of cracking.

Switch to the newer format, roll your keys regularly, improve your logging, honeypot with the old keys, put AppArmor/SELinux controls on the .ssh directory.


Yep; this is an attack on the password-encrypted file.

You can't tell whether it has leaked unless someone starts using it. You can tell whether it's in the safe format right now by looking at the first few lines of the file.
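For instance, the first line of the key file already tells the two formats apart; a small sketch (the header strings are the standard PEM/OpenSSH markers):

```shell
# Old PEM keys start with "-----BEGIN RSA PRIVATE KEY-----" and, when
# encrypted, carry Proc-Type/DEK-Info headers (the MD5-derived AES case).
# New-format keys start with "-----BEGIN OPENSSH PRIVATE KEY-----".
key_format() {
  if head -n 1 "$1" | grep -q 'BEGIN OPENSSH PRIVATE KEY'; then
    echo "new format (bcrypt KDF when encrypted)"
  else
    echo "old PEM format (MD5-based KDF when encrypted)"
  fi
}
```

Usage: `key_format ~/.ssh/id_rsa`.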


Why the .singles TLD?


‘tqbf is responsible for that one :)


How about using macOS's Keychain to store keys?

On recent versions, need this in your ~/.ssh/config:

    # for Keychain (acting as agent) to load the ssh keyfile
    AddKeysToAgent yes
    # For ssh to then use Keychain as an agent
    UseKeychain yes


You can use the keychain to store keys, but if the key is ever saved in the old (broken) format, you've got a problem.


Mac keychain shits the bed on me randomly... never know why, but it happens, and I hear people at work grumbling about it too, so I don't put much faith in it.


If you have a recent MacBook, use https://github.com/ntrippar/sekey for SSH keys. Private key is stored in the built-in HSM (Secure Enclave) and access is controlled by biometrics (TouchID). You can do the same with a Yubikey, but it’s not quite as good because a) you can lose your Yubikey easily and b) there’s no biometrics.

Private keys stored on filesystems is an antipattern.


Ahh, if only Apple released a non-touchbar MBP with Touch ID! I'd have used this in a heartbeat.


My ssh key is stored in a yubikey. Haven’t had to type a password for ssh purpose in a long time!


A TPM chip is also easy to use with libsimple-tpm-pk11. Not a perfect solution but at least the private key cannot be stolen and misused without direct access to the computer.


If you have really old clients you may apparently be able to do this instead:

https://martin.kleppmann.com/2013/05/24/improving-security-o...


>"You might ask yourself how OpenSSH ended up with this. The sad answer is the OpenSSL command line tool had it as a default, and now we’re stuck with it."

Could someone elaborate on this? Is this saying that the ssh-keygen utility shells out to the openssl command-line tool, which at one point defaulted to md5 for encryption? If this is the case, why would we be stuck with it?


Backwards compatibility. I don’t think it shelled out, but close enough.


It's best to assume all passwords need to have sufficient entropy to withstand a brute-force attack on some super fast hash until proven otherwise.

I didn't change my macOS password to 8-characters (to save myself 20 seconds a day) until I thoroughly researched what key-stretching was being used under the hood.


The author states:

>"Finally, most startups should consider not having long-held SSH keys, instead using temporary credentials issued by an SSH CA, ideally gated on SSO."

Can someone explain what "gating" SSH CA issued key with SSO means? How would SSO "gate" the keys?


You use SSO with strong authentication to sign a temporary credential for SSH.


Thanks, might you know which SSO implementations support this?


I like Teleport. It's not an SSO implementation but if you pay Gravitational you get SAML. BLESS, the Netflix solution, relies on you being able to call a Lambda. You can SSO that as well via STS' AssumeRoleWithSAML. (But be careful, AWS SSO should be like the last SSO you implement, generally hardening IAM is super complicated, talk to me for more details. We owe y'all a bunch of dinky "here's how you do IAM" blog posts.)

EDIT: I previously incorrectly claimed that you need to pay Gravitational for Teleport SSO. That is incorrect: you only need to pay for _SAML_, specifically -- I forgot that you can GitHub auth into it, which is a form of SSO. (Though for most of our customers I think that single trust store is a core part of SSO, and GitHub isn't a good SSO by that metric, by virtue of account reuse and the fairly tenuous links between users and organizations. GitHub does a great job of modeling open source interactions, but that model falls over a bit when translated to commercial software engineering orgs.)


The open source edition of Teleport has SSO support for logging via Github [1]. The enterprise version adds enterprise SAML implementations like Okta, Auth0 and things like ADFS.

[1] https://gravitational.com/blog/replace-static-ssh-keys-with-...


Whoops; my bad!


Use SSH keys generated and stored on some sort of PKCS11/PIV hardware device. It’s relatively easy to do now. Don’t trust a general purpose computer to keep your keys secure.

Better yet, use U2F where you have control of how you are authenticated.


    It’s tricky to find out what this DEK-Info stuff means

This is really where the modern crypto protocols and libraries are doing well imo. Even if something that opaque is implemented perfectly, it presents a risk.


Filed https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=905407 to change the Debian default.


Is anyone aware of a successful infiltration of a system that occurred by stealing an encrypted SSH key with a reasonably good password and successfully decrypting it?


If such an infiltration is successful enough, nobody (besides the attacker) would be aware of it. :)


It's the damage we'd notice -- exfiltration of data, malicious destruction, etc. The intrusion is but a step towards the real goal.


The damage wouldn't necessarily be noticed for what it is. What if such exploit was/is used to get more compromised zombie boxes into botnets for hire (such as the one currently spamming Freenode and other IRC networks for the past week or so) ?


tl;dr: to move to the newer, more secure format, use

  ssh-keygen -o -p -f file_name
such as:

  ssh-keygen -o -p -f ~/.ssh/id_rsa
This is not necessary for ed25519 keys, as they can only be stored in the new format.
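You can verify which format a key file is in from its first line alone: old-format PEM keys start with `BEGIN RSA PRIVATE KEY`, while the new bcrypt-based format always starts with `BEGIN OPENSSH PRIVATE KEY`. A sketch with a throwaway key (path and passphrase are illustrative; on OpenSSH 7.8+ you need `-m PEM` to even get the old format):

```shell
# Generate a throwaway RSA key in the old PEM format.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t rsa -b 2048 -m PEM -N 'correct horse' -f /tmp/demo_key
head -n 1 /tmp/demo_key    # -----BEGIN RSA PRIVATE KEY-----

# Re-encrypt in place with the new format (-o), keeping the same passphrase.
ssh-keygen -q -p -o -P 'correct horse' -N 'correct horse' -f /tmp/demo_key
head -n 1 /tmp/demo_key    # -----BEGIN OPENSSH PRIVATE KEY-----
```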


What are your thoughts on using Krypton? https://krypt.co


All my ssh key passwords are generated from password managers. They are random and strong.


What if you generate the public and private keys with a blank password instead?


I'm finally vindicated for not using a password on my key :P


Does this affect non-RSA key types too? I use ed25519.


> How do you fix this? OpenSSH has a new key format that you should use. “New” means 2013. This format uses bcrypt_pbkdf, which is essentially bcrypt with fixed difficulty, operated in a PBKDF2 construction. Conveniently, you always get the new format when generating Ed25519 keys, because the old SSH key format doesn’t support newer key types.


So is that what happened? Somebody got a hold of one of the developers' private SSH keys, and cracked the encryption?


Seems like gpg auth would be a much better choice than ssh keys


Nice deflection of the real madness here: that a compromised npm package gets to read stuff in your user directory.


I explicitly raise that point in the post. Hardening one aspect of something does not preclude you from hardening others. Also: if I was trying to somehow draw attention away from that fact, I probably wouldn't write a blog post that mentions it 2 weeks later.


I mean, of course it does. Most people run their development environments as their own user, and most package management systems have post-install script support.

Installing random software from the internet is problematic.


I'd be interested in running my dev environment under a dedicated user account (with no access to the main account's personal files, credentials, etc.) -- do you know of any guides for doing so?

(I could just give it a shot, but I reckon there will be gotchas)


What I do is use a VM as my development environment, managed by Vagrant. Any dependencies stay on the VM image.
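A minimal sketch of that setup (box name, folder mapping, and provisioned packages are illustrative); only the project directory is shared into the VM, so your real home directory, ~/.ssh included, stays outside the environment that runs untrusted install scripts:

```ruby
# Vagrantfile -- dev environment isolated from the host's $HOME
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"          # base image; pick your own

  # Share only the project directory, not the whole home directory.
  config.vm.synced_folder ".", "/home/vagrant/project"

  # Install toolchain dependencies inside the VM image.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y
    apt-get install -y build-essential nodejs npm
  SHELL
end
```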


What platform? GUI or shell?

Docker/LXD or a dev VM lets you access both simultaneously. Or create a separate user, but not as secure or convenient.


macOS. Not much interested in VMs, for efficiency.

Yeah I was playing with a separate user account, could get some basics working but I wonder how far could I get with that.

What are the bigger security risks for that approach? Assuming constrained file permissions, and that no secrets are in ENV (https://gist.github.com/telent/9742059 )


A normal user has full access to the kernel API. There are always kernel info leaks, and occasionally easy exploits.

On macos you can use Apple's native sandboxing. See for example http://mybyways.com/blog/creating-a-macos-sandbox-to-run-kod...
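As a rough illustration of the idea, a deny-by-default profile for `sandbox-exec` might look like the sketch below (paths are hypothetical, and a working profile needs more read allowances for system libraries, dyld caches, etc.); you'd run it as `sandbox-exec -f dev.sb npm install`:

```
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
; read-only access to system binaries and libraries
(allow file-read* (subpath "/usr") (subpath "/bin") (subpath "/System"))
; full access only to the project directory, not the rest of $HOME
(allow file-read* file-write* (subpath "/Users/me/project"))
; npm needs to talk to the registry
(allow network-outbound)
```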


Yes, that's what this was. An elaborate attempt to shield npm from scrutiny. We know where our bread is buttered.


Is that not true of most dependency manager + build systems? To focus on maven as an example, maybe it doesn't have a post-install hook for dependencies it downloads, but any build or test dependencies downloaded while building a project are going to be immediately executed unsandboxed.

At least npm has the package-lock.json feature that lets you lock down the entire dependency graph. If the package-lock.json was created with a safe set of dependencies, then any dependencies compromised in the future won't affect you (because already released versions can't be modified). Maven and gradle both seem to entirely lack a comparable feature as far as I know; if they do have that functionality as an obscure feature or plugin, then I still criticize them for not making it more prominent. (Npm generates and uses package-lock.json by default!)
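Concretely, each entry in a package-lock.json pins an exact resolved version and an integrity hash, so even if an attacker later compromises a dependency, anything that doesn't match the recorded hash is rejected (abridged, with illustrative values):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "dependencies": {
    "some-package": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/some-package/-/some-package-1.3.0.tgz",
      "integrity": "sha512-IllustrativeHashValueOnlyNotARealDigest0000000000000000000000000000000000000000000000=="
    }
  }
}
```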


Technically, other systems are vulnerable to the same. In practice, there's something about the node/js ecosystem that encourages the proliferation of dependencies, increasing the attack surface.
