So are we all going to jump back to pre-forked, multi-process Apache now, tack on a TLS slave daemon, and ignore the gaping holes in the application layer?
In the short term your user is compromised whether it's a cookie, an AES key for the TLS session (which will presumably still have to be resident in the process sending you data), a credit card number in a POST request, or your certificate master key.
Anyone who can intercept my traffic in close to real time, and wishes to target me, is going to know I'm talking to amazon.com at IP x.y.z.f, and that that's where to aim their Heartbleed attack for a good stab at my PHP session cookie or TLS session AES key.
There are some cases, like e-mail phishing, where this isn't the case of course... but then a redirection service would be sufficient to let me script an attack against many sites.
It does make my life easier in the event of a vulnerability, with little cost to users. Since I can more efficiently respond, the user arguably gets a better experience.
I agree this doesn't solve everything, but it is a strict improvement over our current system.
Cookies are remarkably sensitive, but they can be far more easily rotated. I can make sure that every cookie is rotated transparently every day or so and leave that running as a sensible background precaution. If we had infrastructure that let us renew our TLS keys every 24 hours or so, this wouldn't be such a big deal (it would still be a big deal, but not quite as bad as it is today). But TLS keys usually have lifetimes measured in years.
The sad thing is... we do. 24 hours is a bit much, but why not have a different certificate for each server? The whole point of a certificate chain is to give us the flexibility to issue and revoke certificates from lower down in the tree... of course most of us serfs don't get the privilege of using our own intermediates.
Oh... and we're repeating some of the same mistakes in DNSSEC. Looking at deploying DNSSEC I kept reading that the general idea of the KSK was to function as a long-term key, and the ZSK as a short term key, but I have yet to see a method of managing things with the KSK offline that isn't like pulling teeth. The latest BIND requires that both the KSK and ZSK private keys be resident on your primary nameserver when you switch on the "auto-dnssec" magic.
Still, at least setting up DNSSEC is free.
OpenDNSSEC, unfortunately, is a little... industrial strength.
It takes some time and consideration to configure, unlike BIND's "gimme the keys and I'll just take care of it for you" approach.
Think of the problems credit card processors deal with: Hiding the keys themselves from their own employees, so that getting a root password is not enough to be able to just take all the credit card information. You don't want the key in any filesystem, and you don't want the key in an easy to retrieve memory location. You end up with servers that require multiple people to boot up, as the keys only really appear when multiple people provide their own piece of the secret.
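The split itself can be as simple as an n-of-n XOR of shares, so the key only ever materialises once every custodian has typed theirs in. A toy sketch of that combining step (real processors use proper threshold schemes like Shamir's, inside an HSM, not anything this naive):

    /* Toy n-of-n secret splitting: the key is the XOR of all the shares,
     * so it never exists in memory until every custodian has contributed.
     * Losing any single share makes the key unrecoverable. */
    #include <stddef.h>
    #include <stdint.h>

    void combine_shares(const uint8_t *const *shares, size_t n_shares,
                        size_t key_len, uint8_t *key_out)
    {
        for (size_t i = 0; i < key_len; i++) {
            uint8_t b = 0;
            for (size_t s = 0; s < n_shares; s++)
                b ^= shares[s][i];
            key_out[i] = b;
        }
    }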
Eventually, enough security leads to the risk of data loss, as an error can make the keys become unrecoverable.
This is why we have to add security breach detection, and make recovering from a breach easy and low-consequence. Linus's law says that with enough eyeballs, all bugs are shallow. With enough attackers, all systems are insecure.
However, my personal webserver isn't a bank. Not everyone can justify spending this much money on an HSM to get this level of assurance. What I'm proposing is a simpler solution that isn't robust against sophisticated attacks (e.g. when the attacker manages to get root), but is far more robust against some classes of the common attacks we see today (where the attacker can read any memory/file that the webserver has permissions to see).
HSM = Hardware Security Module (http://en.wikipedia.org/wiki/Hardware_security_module)
Of course, this requires using Apple's APIs, which are poorly documented and a pain in the neck even compared to OpenSSL. It's also not suitable for servers.
Linux has Gnome-keyring, which, amongst other interfaces, operates as a PKCS#11 softhsm (I think), but it still runs as your user.
That doesn't mean that the solution is worthless. It simply means that it doesn't cover an unrelated class of bugs.
Migrating to hardware-based tokens, or Intel SGX-protected software tokens, would extend the solution to cover the case of arbitrary code execution. That doesn't eliminate the value of the software-only solution.
You need root to get at the keys otherwise. There is code to do it here:
(This pulls the key wrapping key out of the process and then decrypts the keychain file directly.)
Be careful that, in our haste to secure the private keys, we don't ignore easier attacks. The article seems to gloss over an attacker hacking the web server, when in fact that gives them such powers that going on to grab the private key might not even be attempted.
Looking past OpenSSL, C didn't magically become a safe language in a week, either; this approach guards against a real problem in C that is not limited to a single bug in OpenSSL: over-reading off the end of a valid buffer.
I work at a pretty security conscious company (this might be an understatement, we're pretty big on security), and even as a developer on the inside I'd have to get pretty creative to get access to our production servers.
So instead of having the key to the hard-to-change site certificate on many vulnerable front-line servers, it rolls a fresh key on boot and sends a certificate signing request to a hardened internal system?
However, I don't think X.509 supports the concept of CA certs being limited to signing only subdomains (could be wrong), and you have a large industry that prefers the status quo of you having to pay them for each cert you mint.
This ends up with ridiculous things like tying payment to the lifetime of the certificate, which allows for things like "2 year certs", which are obviously less secure than 2×1 year certs.
But having your server roll its cert every 12 hours from a more secure cert elsewhere would be a very nice feature.
• Instead of issuing plain leaf-node certs, CAs could (and would) issue CA-certs by default.
• You'd be able to issue as many plain certs as you like, using your own CA-cert, and revoke them as often as you like. (OCSP would be much more necessary here.)
• The current CAs would be renamed to "global CAs": their power would come from the fact that they have no subject (or their subject is '.') in their CA-certs.
• Anyone owning a domain would become the CA for its own subdomains. (foo.tumblr.com would be signed by Tumblr's CA; foo.s3.amazonaws.com would be signed by the Amazon AWS CA; etc.)
Also, CAs make more money if they can issue each leaf cert themselves. Some CAs don't even allow you to get multiple private keys signed (only one active at a time) without paying more.
For your proposal to work, the CA system would have to check with DNSSEC, and probably another protocol, to enforce the constraint that a sub-CA signs only within its own domain.
DNSSEC needn't be involved; you aren't determining whether the CA owns the domain it's issuing certs for at runtime. Instead, the parent-CA who issued the CA's signing cert determined that when they issued the cert. As long as each certificate in the sent chain both 1. checks out as signed by its parent, and 2. has a subject hierarchically below its parent's subject, you can be sure each CA in the chain did whatever it considers diligence before issuing certs to its child-CAs.
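A sketch of check 2, assuming subjects are plain DNS-style names (this helper is made up for illustration, not taken from any existing library): the child's subject must equal the parent's, or end in "." followed by the parent's, and a subject of "." (the global CAs) matches anything.

    /* Is `subject` hierarchically below `parent_subject`?
     * "." (a global CA) constrains nothing; otherwise the child's name must
     * equal the parent's or be a dot-separated suffix of it. */
    #include <stdbool.h>
    #include <string.h>

    bool subject_is_below(const char *subject, const char *parent_subject)
    {
        if (strcmp(parent_subject, ".") == 0)
            return true;
        size_t slen = strlen(subject), plen = strlen(parent_subject);
        if (slen == plen)
            return strcmp(subject, parent_subject) == 0;
        if (slen < plen + 1)
            return false;
        return subject[slen - plen - 1] == '.' &&
               strcmp(subject + (slen - plen), parent_subject) == 0;
    }

So foo.tumblr.com checks out under tumblr.com's CA-cert, but not under anyone else's.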
It seems primarily geared at clients rather than servers, but in theory can be used for both (I'm not even sure you can load your OpenSSH server key into ssh-agent, can you?)
Yes, actually, as of OpenSSH 6.3 you can. (I wrote most of the patch that added that feature.) However, even without doing that the OpenSSH server performs crypto operations in a separate process from the network-facing child process (unless you've disabled UsePrivilegeSeparation). The purpose of having the server talk to an ssh-agent was to allow keeping your host keys encrypted on-disk or loading them from a smart card.
No need: servers only need to do signature verification during authentication, so they only need the users'/clients' public keys, which must be listed in authorized_keys.
Edit: I maybe didn't fully grasp your question; if you were referring to ssh host keys, then to my knowledge you're right, they cannot be used with ssh-add.
Does an open-spec HSM module exist? I can be reasonably sure that Linux and Apache/nginx don't have backdoors, as the source is audited by many people, but I need to be "sure" of my HSM too.
opencryptoki has a softhsm too, but again, it appears to run in process. Same problems.
> You can use it to explore PKCS #11 without having a Hardware Security Module.
The same amount of security can probably be obtained by just launching a process on server startup to do this with sufficient isolation from the parent process. I believe OpenSSH does something along these lines to run most of its code as an unprivileged user. It's probably even possible to do this seamlessly based on the existing SSL config directives in apache/nginx requiring no more intervention from the sysadmin than upgrading to a newer version.
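Roughly what I have in mind, as a sketch only: fork before dropping privileges, keep the key in the child, and have the web server ask for signatures over a socketpair. load_private_key() and sign_digest() are stand-ins for whatever crypto library the server already links against.

    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Placeholder prototypes: supplied by whatever crypto library is in use. */
    void   *load_private_key(const char *path);
    ssize_t sign_digest(void *key, const unsigned char *digest, size_t dlen,
                        unsigned char *sig, size_t sig_capacity);

    int spawn_key_daemon(const char *key_path, uid_t key_uid)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv) < 0)
            return -1;

        pid_t pid = fork();
        if (pid < 0) {
            close(sv[0]);
            close(sv[1]);
            return -1;
        }

        if (pid == 0) {                       /* child: the "soft HSM" */
            close(sv[0]);
            void *key = load_private_key(key_path);
            if (key == NULL || setuid(key_uid) != 0)   /* drop privs after load */
                _exit(1);
            unsigned char digest[64], sig[512];
            ssize_t n;
            while ((n = read(sv[1], digest, sizeof digest)) > 0) {
                ssize_t len = sign_digest(key, digest, (size_t)n, sig, sizeof sig);
                if (len > 0)
                    write(sv[1], sig, (size_t)len);
            }
            _exit(0);
        }

        close(sv[1]);                         /* parent keeps sv[0] for requests */
        return sv[0];                         /* web server sends digests on this fd */
    }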
PKCS#11 has a few irritants, but it's a fairly sensible API, and it's already implemented by many things (browsers, gnome-keyring, ssh, ...). OpenSSL and GnuTLS at least both support it via one mechanism or another; my only real complaint from the webserver side is that the configuration knobs aren't really plumbed through.
PKCS#11 is a little funny looking and has some small rough edges, but it's actually reasonably designed and easy to implement from scratch. That's not something I can say for many of the other PKCS standards.
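For the use case in this thread, the surface you actually exercise is tiny: essentially one signing entry point. A sketch, assuming p11-kit's copy of the standard header and session/key handles obtained through the usual C_GetFunctionList()/C_OpenSession()/C_Login() dance:

    /* One RSA signature via PKCS#11: the private key never crosses this
     * boundary, only the signature does. */
    #include <p11-kit/pkcs11.h>

    CK_RV sign_with_token(CK_FUNCTION_LIST_PTR p11,
                          CK_SESSION_HANDLE session,
                          CK_OBJECT_HANDLE private_key,
                          CK_BYTE_PTR data, CK_ULONG data_len,
                          CK_BYTE_PTR sig, CK_ULONG_PTR sig_len)
    {
        CK_MECHANISM mech = { CKM_SHA256_RSA_PKCS, NULL_PTR, 0 };

        CK_RV rv = p11->C_SignInit(session, &mech, private_key);
        if (rv != CKR_OK)
            return rv;
        return p11->C_Sign(session, data, data_len, sig, sig_len);
    }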
It's apparently not supported by Apache/nginx nor does a suitable software-HSM exist to use it, so you're basically writing both ends of the communication. But if you do go with a separate daemon PKCS#11 may very well be a good solution. I just think forking off a process yourself is much cleaner for the use case of securing a web server.
Apache/nginx don't have to support PKCS#11, they just need to support the use of existing crypto libraries that already support PKCS#11:
If a server uses gnutls and passes the user-supplied filename directly to gnutls_certificate_set_x509_key_file2(), a PKCS#11 URL can be used directly without changes to the server.
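For illustration (the paths and token/object names here are made up), the "key file" is then just an RFC 7512 PKCS#11 URL, so no key material needs to live on the web server's filesystem or in its address space:

    /* Load the public cert from disk and the private key from a PKCS#11
     * token; only the URL, never the key, is handled by the server. */
    #include <gnutls/gnutls.h>

    int load_credentials(gnutls_certificate_credentials_t creds)
    {
        return gnutls_certificate_set_x509_key_file2(
            creds,
            "/etc/ssl/certs/example.org.pem",
            "pkcs11:token=web-keys;object=example.org;type=private",
            GNUTLS_X509_FMT_PEM,
            NULL,   /* PIN, if the token needs one */
            0);
    }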
> I just think forking off a process yourself is much cleaner for the use case of securing a web server.
It's something that everyone has to write for every server; people will get it wrong. Additionally, there's no support for hardware modules or plugging in new software security modules, so you'd be starting with a handicapped solution.
Fine, you get Apache->SSLLib->PKCS#11. Now you need to write a PKCS#11-compliant library to talk to your HSM, and a custom serialization protocol for that communication anyway.
>It's something that everyone has to write for every server; people will get it wrong. Additionally, there's no support for hardware modules or plugging in new software security modules, so you'd be starting with a handicapped solution.
If we're worried about HTTP servers, it's basically Apache/nginx. As I mentioned in another comment, if Apache/nginx implement this directly, most users will get it by default. If they implement it with a separately configured daemon, only very security-conscious people will do it. So if your objective is preventing the most dangerous bugs in the most exposed daemons (and HTTPS tends to be that) in the greatest number of cases, doing this directly by default in those two servers seems like a better solution. That doesn't stop you from also doing the other option to support actual HSMs and other fancy 1% cases.
HSM modules already have PKCS#11 drivers, because it's a standard, and that means they work readily with existing software and cover the requisite industry use-cases.
You're proposing taking web servers in a different direction simply because you find the general, widely supported solution to be antithetical to your tastes?
Unless you're actually going to write code here, I don't really understand why you care, or why you're advocating ignoring hard-won wisdom and experience that's encoded in a decent spec, just because you don't think you'll like it.
The OP isn't talking about using an actual HSM, but about using a new software-based daemon to fill the HSM role, just so the crypto calculations (and the key) aren't in the webserver's address space. He confirms that he is indeed trying to write the PKCS#11 driver himself. Just using the existing crypto code in Apache/nginx but moving it into a separate process seems much cleaner to me, and has the one feature this suggestion doesn't: it works with existing config files without modification, so it will be used much more widely. That's all I am saying.
Apache could ship a fall-back PKCS#11 driver implementation that did this, transparently.
The OP's proposal was for a daemon so it could be run as a different user. My suggestion was indeed to fork and figure out how to isolate the process (as forking may not be enough if you have permissions to do things like ptrace processes running under the same user).
>Apache could ship a fall-back PKCS#11 driver implementation that did this, transparently.
What you are proposing then is something different. It's to make the only crypto code path the PKCS#11 one, and then make the normal case a special case of that. So you are going Apache->GnuTLS/OpenSSL->custom PKCS#11 driver->fork->GnuTLS/OpenSSL (to actually do the crypto). Since Apache already has working code for the first and last steps, you could just do Apache->fork->GnuTLS/OpenSSL and be done with it. Your suggestion adds more complexity but improves the support for other more exotic PKCS#11 providers.
I say this as someone who works on PKCS#11 code: It's not really possible to have a productive conversation with someone that is missing key domain experience and knowledge, but is so certain of their correctness anyway.
More concretely, a forked daemon only needs to support RSA and other crypto operations without revealing their keying material. They don't need a full TLS/SSL stack.
That said, there's absolutely no additional complexity in having both Apache and the hypothetical daemon using a full TLS/SSL crypto library. Any __TEXT pages will be shared, and duplicated __DATA and base-line library allocations are essentially zero.
I'm happy to learn. But all I am saying is that you're adding a PKCS#11 step to the call stack when you can just fork and use the existing code. That's a simple assertion, is it wrong?
>To be a bit more concrete, a forked daemon only needs to support RSA and other crypto operations without revealing their keying material. They don't need a full TLS/SSL stack.
I didn't say they did. I said that you had to implement some GnuTLS/OpenSSL code in apache to invoke the PKCS#11 operations, then implement your forking PKCS#11 driver that will then need to call GnuTLS/OpenSSL to do the crypto to return the PKCS#11 results.
>That said, there's absolutely no additional complexity in having both Apache and the hypothetical daemon using a full TLS/SSL crypto library. Any __TEXT pages will be shared, and duplicated __DATA and base-line library allocations are essentially zero.
The complexity I was referring to was in the code you need to set up the call stack you are proposing. Obviously the gnutls/openssl .so will be shared.
Yes, that's wrong. What existing code is there that provides an IPC mechanism for offloading RSA signing operations that are done within the TLS libraries themselves?
To do that Apache needs some form of internal IPC to communicate its TLS sessions to the forked process. Maybe that's more complex than forking and doing IPC at the PKCS#11 driver level? Don't know.
Also, bear in mind that you can't just fork and continue running in modern software.
A process shall be created with a single thread. If a multi-threaded process calls fork(), the new process shall contain a replica of the calling thread and its entire address space, possibly including the states of mutexes and other resources. Consequently, to avoid errors, the child process may only execute async-signal-safe operations until such time as one of the exec functions is called.
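The usual way around that restriction is to exec a small single-purpose helper immediately after the fork, so the child never relies on the parent's (possibly inconsistent) heap or locks. A rough sketch; the helper path and fd-passing convention are invented for illustration:

    /* Prepare arguments before fork(), then do nothing but
     * async-signal-safe calls in the child until execve(). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    pid_t spawn_helper(int request_fd)
    {
        char fd_arg[16];
        snprintf(fd_arg, sizeof fd_arg, "%d", request_fd);
        char *const argv[] = { "tls-key-helper", fd_arg, NULL };
        char *const envp[] = { NULL };

        pid_t pid = fork();
        if (pid == 0) {
            execve("/usr/libexec/tls-key-helper", argv, envp);
            _exit(127);   /* exec failed */
        }
        return pid;       /* child pid, or -1 if fork() failed */
    }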
This construct has a much worse bug. It separates the TLS code from the rest of Apache, so it protects against bugs in other parts of the server (HTTP parsing, for example), but it doesn't separate the TLS session code from the crypto primitives, so it wouldn't protect against Heartbleed. For that, forking at the PKCS#11 boundary would be much safer.
Thinking about it, a better OpenSSL patch than the Akamai one (which protects the key memory with a different allocator) would be to run the actual crypto in a different process, with a well-defined IPC between it and the main library. That would give you much of the safety of a software HSM without any changes to Apache/nginx or any other TLS server.
While web server support for PKCS#11 is annoyingly patchy, it's well supported by lots and lots of other stuff (usually client-side stuff like ssh, browsers, etc., though). You can get webservers to do PKCS#11 today; there are docs on how to do it. They usually start with "download the source, and run configure with this pile of options."
All the complexity in this proposal is the serialisation/deserialisation which is about the same amount of work if it's pkcs#11 or some custom thing.
For a custom fork-based protocol:
Pro: Marginally simpler to implement.
Pro: If the webserver fork()'s it by default, then more users get the benefit for the case that you can read the webserver memory.
Con: Doesn't protect against attacks that can read files readable by the webserver.
Con: Becomes complicated when you want to move to a real HSM.
Con: Isn't reusable between webservers, let alone for your mail server, xmpp server, webbrowsers, ssh clients and so on.
For PKCS#11:
Pro: Can start with a PKCS#11 softhsm running as a separate user today, migrate to a hardware HSM with little change tomorrow.
Pro: Reusable across multiple webservers, already usable by browsers and ssh clients.
Pro: A well defined, maintained, open standard with a wide variety of implementations that already exist.
Con: Slightly more complex than a custom protocol, but I'd argue that the custom protocol would grow to cover at least what PKCS#11 supports. I'm currently investigating using dbus for the protocol, so serialisation/deserialisation is mostly taken care of.
Am I missing some Pro for a custom protocol?
This isn't a con for the custom API, it's a con for a forking solution instead of a serialization between two different users.
If PKCS#11 is indeed a good fit for that protocol then it's indeed a much better solution that something custom for the reasons you mention. Good luck with the implementation.
Much more likely that they'd just hack the web server and MITM you or something.
Personally I think the web server should do the encryption, as it is the part of the software that contains the sensitive information, a.k.a. the content.
You can get new keys; you can't get new content.
When you say "you can get new keys", that's true (although StartSSL appears to be the fly in this particular ointment), but browsers don't validate CRLs, so the old keys are still just as valid as the new ones. Which makes getting new keys potentially worthless.
This is providing similar protections for your TLS keys to what your database server already applies.
Protecting content involves protecting keys. So to prioritize protecting content, you have to prioritize protecting keys.