
Webservers shouldn't have direct access to keys - remosi
https://plus.google.com/106751305389299207737/posts/WM4i4Rqxs5n
======
nly
This really isn't the end of the story. As far as your web app goes,
HTTP cookies can be just as or more sensitive than your SSL keys, and they
also slop around in your web server's memory. This is one reason why we run
SSL/TLS in the first place, after all. In many cases we really use TLS as a
way to ensure application layer authentication. Confidentiality, in and of
itself, is often not the primary concern. Do you care more about people
accessing your Amazon account, and buying things in your name, or people
seeing what you're buying? With your Amazon cookies, I can do the former.

So are we all going to jump back to pre-forked, multi-process Apache now, tack
on a TLS slave daemon, and ignore gaping big holes in the application layer?

~~~
phamilton
They are orthogonal issues. The point of separating private keys is to contain
exposure. Heartbleed still would have happened, and all data could be exposed.
But right now we not only have to deal with data leakage; after we patch and
fix the bug we still have exposure due to the private keys potentially being
leaked. We then have to get new certs signed and incur all sorts of additional
costs. If the private key was not leaked, then while we still have to deal
with the security breach, we can at least avoid having to revoke and reissue
all certs.

~~~
nly
As long as forward secrecy is/was used then the impact on the individual user
is more or less the same. Remember we're largely talking about active MITM.

In the short term your user is compromised whether it's a cookie, an AES key
for the TLS session (which will presumably still have to be resident in the
process sending you data), a credit card number in a POST request, or your
certificate master key.

Anyone who can intercept my traffic in close to real time, and wishes to
target me, is going to know I'm talking to amazon.com, IP x.y.z.f, and that
that's where they should target their Heartbleed attack for a good stab at
accessing my PHP session cookie or TLS session AES key.

There are some cases, like e-mail phishing, where this isn't the case of
course... but then a redirection service would be sufficient to let me script
an attack against many sites.

~~~
phamilton
Agreed that the effect on the user is, for the most part, the same.

It does make my life easier in the event of a vulnerability, with little cost
to users. Since I can more efficiently respond, the user arguably gets a
better experience.

I agree this doesn't solve everything, but it is a strict improvement over our
current system.

------
hibikir
Key management is a major issue across the board, not just web servers. Even a
theoretically unbreakable crypto will always have a weakness if the keys
themselves are compromised. Stopping keys from being copied is a major
challenge though, because anything you can do to truly protect them involves
major hassle.

Think of the problems credit card processors deal with: Hiding the keys
themselves from their own employees, so that getting a root password is not
enough to be able to just take all the credit card information. You don't want
the key in any filesystem, and you don't want the key in an easy to retrieve
memory location. You end up with servers that require multiple people to boot
up, as the keys only really appear when multiple people provide their own
piece of the secret.

Eventually, enough security creates a risk of data loss, as a single error can
render the keys unrecoverable.

This is why we have to add security breach detection, and make recovering from
a breach easy and having low consequences. Linus said that with enough
eyeballs, all bugs are shallow. With enough attackers, all systems are
insecure.

~~~
remosi
If I was running a bank, I'd hopefully use a proper HSM. You ask it to
generate a private key, you then ask it for the public key, get it signed into
a cert, and use that. The HSM promises to never give out the private key to
anyone (including the administrator), usually in a tamper evident way (if
someone did manage to extract the key, you'd notice). Even if you have root on
a machine that has an HSM plugged into it, you can't get the private keys out.

However, my personal webserver isn't a bank. Not everyone can justify spending
this much money on an HSM to get this level of assurance. What I'm proposing is
a simpler solution that isn't robust against sophisticated attacks (e.g. when
the attacker manages to get root), but is far more robust against some classes
of the common attacks we see today (where the attacker can read any memory/file
that the webserver has permissions to see).
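The shape of that simpler solution can be sketched as a toy out-of-process signer (all names here are hypothetical, and a keyed HMAC stands in for the real RSA/ECDSA private-key operation): the key lives only in a child process, and the web-server side can only ask for signatures over a pipe.

```python
import hashlib
import hmac
import os
from multiprocessing import Pipe, Process


def _signer(conn, key):
    # Holds the secret; only signatures ever cross the pipe. In a real
    # deployment this process would run as a different, unprivileged user.
    while True:
        msg = conn.recv()
        if msg is None:
            break
        conn.send(hmac.new(key, msg, hashlib.sha256).digest())


def sign_via_daemon(message):
    # Spawn the signer, request one signature, then shut it down.
    key = os.urandom(32)  # in practice: a key file readable only by the signer
    parent_conn, child_conn = Pipe()
    proc = Process(target=_signer, args=(child_conn, key))
    proc.start()
    parent_conn.send(message)
    tag = parent_conn.recv()
    parent_conn.send(None)  # tell the signer to exit
    proc.join()
    return tag


if __name__ == "__main__":
    print(len(sign_via_daemon(b"tls handshake transcript")))  # 32-byte tag
```

The point of the sketch is the trust boundary, not the crypto: a memory-disclosure bug on the web-server side of the pipe can leak at most signatures, never the key itself.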

~~~
mhb
For other curious readers:

HSM = Hardware Security Module
([http://en.wikipedia.org/wiki/Hardware_security_module](http://en.wikipedia.org/wiki/Hardware_security_module))

------
jakobe
Mac OS X has something similar to this "Software HSM": the Keychain. You can
put private keys in your keychain, and apps can use them for signing or
encrypting, but they can't extract them. It's quite nicely implemented; when
an app tries to access a key the first time, a dialog will pop up saying
something like "Mail is trying to use key xyz for decryption. Do you want to
allow?".

Of course, this requires using Apple's APIs, which are poorly documented and a
pain in the neck even compared to OpenSSL. It's also not suitable for servers.

~~~
DrStalker
That wouldn't help when there is a bug that lets an attacker read your
server's memory; you'd still need to reissue your certificates as a
preventative measure because you couldn't guarantee that the bit of memory
used by the software HSM hadn't been compromised.

~~~
teacup50
The keychain operates out-of-process.

~~~
remosi
But not out of user. If I can run code as your user, I can attempt to retrieve
those keys, although I assume MacOS prevents you from attaching a debugger to
the keychain.

Linux has Gnome-keyring, which, amongst other interfaces, operates as a
PKCS#11 softhsm (I think), but it still runs as your user.

~~~
teacup50
Of course. If you can run code as the user, then solutions intended to protect
against arbitrary memory _read_ bugs don't apply.

That doesn't mean that the solution is worthless. It simply means that it
doesn't cover an unrelated class of bugs.

Migrating to hardware-based tokens, or Intel SGX-protected software tokens,
would extend the solution to cover the case of arbitrary code execution. That
doesn't eliminate the value of the software-only solution.

------
rntz
This proposal is very similar to Plan 9's "factotum" scheme (see
[http://qedragon.livejournal.com/99938.html](http://qedragon.livejournal.com/99938.html)
for a nice explanation with reference to Heartbleed; factotum is similar to a
generic ssh-agent or gss-proxy), except proposing that the daemon run as a
separate user, which is a reasonable extra layer of security that deals with
some remote-code exploits.

~~~
remosi
Yeah, I was aware of factotum when I wrote this post. GNOME uses p11-kit
(which is a wrapper around PKCS#11) and gnome-keyring to kinda provide similar
functionality.

------
joosters
Is everyone falling into the trap of over-securing last week's security
problem? Isn't this just like banning water bottles on planes after a failed
liquid bomb attack?

Be careful that in our haste to secure the private keys, we ignore easier
attacks. The article seems to gloss over an attacker hacking the web server,
when in fact that gives them such powers that going on to grab the private key
might not even be attempted.

~~~
teacup50
OpenSSL isn't last week's security problem: The code didn't magically get
better in a week, and all signs indicate that there are likely more serious
issues in the library.

Looking past OpenSSL, C didn't magically become a safe language in a week,
either; this approach guards against a real problem in C that is not limited
to a single bug in OpenSSL: over-reading off the end of a valid buffer.

------
bcoates
Wouldn't it make sense to lower the exposure by having the server only have
access to its own ephemeral private key?

So instead of having the key to the hard to change site certificate on many
vulnerable front-line servers, it rolls up a key and on boot sends a
certificate signing request to a hardened internal system?

~~~
ctz
This is feasible in the current X509 public CA system, thanks to name and path
length constraints. However, I don't know of any CAs which will issue suitably
restricted certs for any sensible amount of money.

~~~
derefr
I'm very confused why the X.509 model isn't already set up to accommodate
this. Imagine that a CA could only sign CSRs for subjects hierarchically-below
its own subject. Then:

• Instead of issuing plain leaf-node certs, CAs could (and would) issue CA-
certs by default.

• You'd be able to issue as many plain certs as you like, using your own CA-
cert, and revoke them as often as you like. (OCSP would be much more necessary
here.)

• The current CAs would be renamed to "global CAs": their power would come
from the fact that they have no subject (or their subject is '.') in their CA-
certs.

• Anyone owning a domain would become the CA for its own subdomains.
(foo.tumblr.com would be signed by Tumblr's CA; foo.s3.amazonaws.com would be
signed by the Amazon AWS CA; etc.)

~~~
antocv
Because your browser doesn't know the chain, yourdomain.com could sign
google.com and the browser would accept it, as it does today.

For your proposal to work, the CA system would have to check with DNSSEC, and
probably another protocol, to enforce the constraint that a sub-CA signs only
its own domain.

~~~
derefr
I think the change would simply require servers to always send a certificate
chain (up to at least the cert's most-proximate global-issuer CA) instead of
just a cert. Which is pretty much what every web-scale site does already, to
short-circuit OCSP lookups on intermediate CAs.

DNSSEC needn't be involved; you aren't determining whether the CA owns the
domain it's issuing certs for at runtime. Instead, the parent-CA who issued
the CA's signing cert determined that when they issued the cert. As long as
each certificate in the sent chain both 1. checks out as signed by its parent,
and 2. _has a subject hierarchically below its parent's subject_, you can be
sure each CA in the chain did whatever it considers diligence before issuing
certs to its child-CAs.
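Those two checks are simple to state in code. A hedged sketch (hypothetical helper names; subjects here are bare DNS names, with the empty string standing for a global CA's empty subject, and signature verification assumed done separately):

```python
def below(child, parent):
    """True if `child` equals `parent` or is hierarchically below it."""
    if parent == "":  # global CA: empty subject, may sign anything
        return True
    return child == parent or child.endswith("." + parent)


def chain_ok(subjects):
    """subjects: leaf-first list of cert subjects, ending at a global CA.

    Checks only the subject-hierarchy constraint; each cert must also
    verify as signed by its parent, which is out of scope here.
    """
    return all(below(child, parent)
               for child, parent in zip(subjects, subjects[1:]))


print(chain_ok(["foo.tumblr.com", "tumblr.com", ""]))  # True
print(chain_ok(["google.com", "yourdomain.com", ""]))  # False
```

Under this rule the yourdomain.com-signs-google.com chain fails locally, with no DNSSEC lookup involved.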

------
doe88
What is described is something like OpenSSH's ssh-agent.

~~~
gingerlime
My thought exactly. It loads the key into memory and never exposes it, just
lets you perform operations such as signing and returns the result.

It seems primarily geared at clients rather than servers, but in theory can be
used for both (I'm not even sure you can load your openssh server key into
ssh-agent, can you?)

~~~
1amzave
> _(I'm not even sure you can load your openssh server key into ssh-agent,
> can you?)_

Yes, actually, as of OpenSSH 6.3 you can. (I wrote most of the patch that
added that feature.) However, even without doing that the OpenSSH server
performs crypto operations in a separate process from the network-facing child
process (unless you've disabled UsePrivilegeSeparation). The purpose of having
the server talk to an ssh-agent was to allow keeping your host keys encrypted
on-disk or loading them from a smart card.
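For the curious, the server side of that is a couple of sshd_config lines (a sketch; the socket path is an example, and per the sshd_config docs the HostKey entries then point at the public halves on disk):

```
# sshd_config fragment (OpenSSH >= 6.3)
HostKeyAgent /var/run/sshd-agent.socket
HostKey /etc/ssh/ssh_host_rsa_key.pub
```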

------
y0ghur7_xxx
> Ideally you'd want your TLS keys to be stored in an HSM

Does an open spec HSM module exist? I can be somehow sure that linux and
apache/nginx don't have backdoors as the source is audited by many people, but
I need to be "sure" of my HSM too.

~~~
praseodym
There's an open-source software HSM:
[http://www.opendnssec.org/softhsm/](http://www.opendnssec.org/softhsm/)

~~~
remosi
It runs in-process though, so it would have had the exact same result with
heartbleed. Its keys need to be readable to that user, so exploits like
[http://blog.detectify.com/post/82370846588/how-we-got-read-access-on-googles-production-servers](http://blog.detectify.com/post/82370846588/how-we-got-read-access-on-googles-production-servers)
would also still leak your private keys. So no net win here unfortunately.

opencryptoki has a softhsm too, but again, it appears to run in process. Same
problems.

------
pedrocr
This seems like a good idea but this fixation on PKCS#11 seems strange. Why
use a whole API when Apache and Nginx can just add a simple daemon with their
own internal API to do this?

The same amount of security can probably be obtained by just launching a
process on server startup to do this with sufficient isolation from the parent
process. I believe OpenSSH does something along these lines to run most of its
code as an unprivileged user. It's probably even possible to do this
seamlessly based on the existing SSL config directives in apache/nginx
requiring no more intervention from the sysadmin than upgrading to a newer
version.

~~~
remosi
The major reason is that when your website becomes popular, and becomes more
of a target, you can, if it's implemented properly, swap out the software HSM
daemon for a more sophisticated hardware solution just by changing a pkcs11:
URL[1] to point at the new HSM.

PKCS#11 has a few irritants, but it's a fairly sensible API, and it's already
implemented by many things (browsers, gnome-keyring, ssh, ...). OpenSSL,
GnuTLS at least both support it via one mechanism or another, my only real
complaint from the webserver side is that the configuration knobs aren't
really plumbed through.

[1]: [http://tools.ietf.org/html/draft-pechanec-pkcs11uri](http://tools.ietf.org/html/draft-pechanec-pkcs11uri)

~~~
pedrocr
That would be an argument for supporting both. I fear that otherwise what you
will end up with is that a minority of security conscious people will have HSM
(actual hardware or software) and most others will just configure their
Apache/nginx software as quickly as possible and get on with it. Having the
basic config be more secure by spawning the soft-hsm itself sounds like a win.

------
joosters
How much protection does this really give? If you manage to hack the web
server, then you can quickly feed the HSM/software daemon unlimited amounts of
chosen plaintext to encrypt. Would this make it possible to recover the
private keys?

~~~
cbhl
Provided that a large enough private key is used, using a "chosen-plaintext
attack" (the kind you describe) to obtain the key should be computationally
infeasible with known attacks on RSA/DSA/ECDSA.

Much more likely that they'd just hack the web server and MITM you or
something.

------
dfc
Has anyone ever used the ThinkPad's TPM for anything under Linux? Whenever I
looked into it the TPM support seemed shoddy, especially on my T41. I haven't
checked the W500 in a long time.

~~~
remosi
[https://www.lorier.net/docs/tpm](https://www.lorier.net/docs/tpm) are my
notes from experimenting with the TPM in my T530. The trick is that the TPM
will protect itself fairly aggressively, so before you start, turn off the
laptop, unplug the power _and_ battery (if possible), and on the FIRST boot
after you put everything back together, go into the BIOS and clear the TPM. If
the menu option isn't there, then you probably have to power everything off :)

------
al2o3cr
Soft HSM, meet local privilege escalation exploit. #sadtrombone

------
callesgg
The stuff that should encrypt should have the keys. It's that simple.

Personally I think the web server should do the encryption, as it is the part
of the software that contains the sensitive information, AKA the content. You
can get new keys; you can't get new content.

~~~
remosi
Your content is often not in the webserver's user; it's often stored in a SQL
or NoSQL database somewhere. Various access controls can be applied there. But
you're right, unfortunately this isn't a 100% magic pixie dust solution to
everything.

You say "you can get new keys", which is true (although StartSSL appears to be
the fly in this particular ointment), but browsers don't validate CRLs, so the
old keys are still just as valid as the new ones. Which makes getting new keys
potentially worthless.

This is providing similar protections for your TLS keys to what your database
server already applies.

~~~
callesgg
The content in this scenario is: the HTTP body, with banking info or whatever.

