

How Heartbleed Leaked Private Keys - jgrahamc
http://blog.cloudflare.com/searching-for-the-prime-suspect-how-heartbleed-leaked-private-keys

======
junto
Every time I see a blog post from the Cloudflare team I am impressed.

It isn't just their attention to detail that is impressive. They go out of
their way not just to point out issues, but to suggest fixes.

Too many Heartbleed blog posts from across the blogosphere, and sadly, HN
comments, set out to criticize without suggesting something better.

Cloudflare has shown again that they are actively engaging the community for
the community's benefit as well as their own.

This seems to be a very effective marketing strategy (I assume it is anyway).

~~~
ibmthrowaway218
I would have been more impressed had their original assessment of heartbleed
(on nginx) not been "We've reviewed the code and we don't think it's
vulnerable."

Looking impressive with hindsight is relatively easy.

~~~
jgrahamc
The original blog post on Heartbleed said: "Here’s the good news: after
extensive testing on our software stack, we have been unable to successfully
use Heartbleed on a vulnerable server to retrieve any private key data. Note
that is not the same as saying it is impossible to use Heartbleed to get
private keys. We do not yet feel comfortable saying that."

In that same blog post we also said that we were revoking and reissuing all
our SSL keys, and started the challenge web site to see if private keys could
be retrieved.

[http://blog.cloudflare.com/answering-the-critical-question-c...](http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed)

So, yeah, we didn't from the outset figure out how to get the private keys,
but we decided that the risk was too high, so we went ahead with revocation.

~~~
ibmthrowaway218
I don't think it's unfair to paraphrase this statement:-

"And, we have reason to believe based on the data structures used by OpenSSL
and the modified version of NGINX that we use, that it may in fact be
impossible."

as

"We've reviewed the code and we don't think it's vulnerable."

Obviously you disagree.

~~~
willvarfar
Yep, when I read the original statement it really seemed to be telling
customers there was nothing to worry about, and I think that's how it was
meant to be read.

When the original statement was posted I actually went back to the header to
check the author, thinking "surely jgc wouldn't post something this
marketroid?"

Only in hindsight do they point at the get-out clauses.

------
yread
I am still surprised there is code like

    
        if (b->d) OPENSSL_free(b->d);
    
in OpenSSL :(

~~~
bbwharris
Forgive me for asking, but what is wrong in particular with this single line?

To me it reads fine: the idea is that it frees the data at that address if
it's occupied. I'm sure I'm not the only one who is curious why this is bad.

~~~
patrickas
The sensitive data that was saved at that address is still there. The memory
has been freed so the OS can use it again, but the actual data remains in
memory until it gets overwritten by something else.

The program will work with no problems, but sensitive data that has been used
and then freed is available for retrieval when bugs like Heartbleed are found.

As the article suggests, the right way is to clear the data from memory (by
overwriting it with something else) before freeing it.
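A minimal sketch of wipe-before-free, using a hypothetical struct standing in
for the OpenSSL buffer (the field name `d` follows the snippet above; the
struct and function names here are illustrative, not the real OpenSSL API):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical buffer, standing in for the OpenSSL structure whose
   `d` field holds heap-allocated (possibly sensitive) data. */
struct buf {
    unsigned char *d;
    size_t len;
};

/* Wipe the data before returning the memory to the allocator, so the
   freed heap pages no longer hold sensitive bytes. */
static void buf_free_secure(struct buf *b)
{
    if (b->d) {
        /* Note: a plain memset before free can be optimised away as a
           dead store; OpenSSL provides OPENSSL_cleanse for this reason. */
        memset(b->d, 0, b->len);
        free(b->d);
        b->d = NULL;
        b->len = 0;
    }
}
```

Nulling the pointer after freeing also guards against double-free and
use-after-free on the same struct.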

~~~
Nursie
I've been looking at this recently; part of the problem with that approach is
that compilers will often optimise out an overwrite if they can't see anything
happening afterwards.

For instance, if you zero a stack-resident buffer that contained a key using
memset and then simply exit the scope, most optimisers will see the write as
unnecessary (wtf? this never gets read back, who cares?) and ditch the line.

Search for memset_s (an optional Annex K addition to the C11 standard) for a
clearing function that can survive optimisers.
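Since memset_s is optional and rarely shipped (glibc doesn't provide it), a
common portable fallback is to write through a volatile pointer, which the
compiler can't prove is a dead store. A sketch (the function name is my own;
glibc's explicit_bzero and OpenSSL's OPENSSL_cleanse serve the same purpose):

```c
#include <stddef.h>

/* Zero n bytes at p in a way the optimiser won't delete: each store
   goes through a volatile-qualified pointer, so the compiler must
   assume the writes are observable and cannot elide them. */
static void secure_zero(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--)
        *vp++ = 0;
}
```

Call secure_zero(key, sizeof key) instead of memset at the end of the scope,
before the buffer goes out of scope or is freed.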

