Co-founder & CEO, CloudFlare
I didn't think about it at the time, but it was only on a newly started Apache instance.
Coupled with the fact that Apache as a frontend TLS server is pretty rare on big sites nowadays, I'm feeling pretty good about what did happen vs. what could have happened.
Someone else who uses PowerPC?
I've never felt so thankful for a memory allocation pattern.
I.e., is it an 8192-bit AES256 key?
1) CloudFlare confirms success.
2) The winner publishes their solution, including source, publicly.
3) The winner promptly sends the link to the publication to [adam | ionicsecurity | com] (for tracking the order of submissions)
My email is public: email@example.com , and here is my proof of identity: https://keybase.io/indutny .
If you wish, you can contact me via it.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
-----END PGP SIGNATURE-----
Running an exploit script against one of our own services showed only 1-2 KB of information, most of it being the (public) cert, and the rest zeroed out.
It's like multiplayer microcorruption.com
In the spirit of this challenge? A single exploitable endpoint where any number of people go at the same implementation?
You can't tease this hard, man.
Also, Friday launch? :)
Has anyone already made a patch for this bug, where the lib returns random data instead of actual heap chunks?
(Of course, creating suitable fake data with a separate PRNG to avoid this would be pretty easy.)
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
93.142.x.x - - [11/Apr/2014:10:44:36 -0400] "GET /heartbleed HTTP/1.1" 200 1148 "https://news.ycombinator.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36"
EDIT: forgot to cite Neel Mehta https://twitter.com/neelmehta/statuses/453625474879471616
How many full-time people are working there?
There are about 10 of us at Conformal.
Since the bug exposes a few kilobytes of uninitialized malloc() memory, the kind of data the attacker will retrieve is heavily dependent on the software the server is running.
(* This is still good, though, as any risk is unacceptable, and shouting "fire" makes everyone move... and they need to on this issue.)
At CloudFlare, we received early warning of the Heartbleed vulnerability and patched our systems 12 days ago.
I commented on that post that the date of discovery was 7 days ago http://www.vocativ.com/tech/hacking/behind-scenes-crazy-72-h...
For whatever reason that HN post was deleted and resubmitted so it now has no comments on it. https://news.ycombinator.com/item?id=7572796
Was the bug discovered 12 days ago or 7?
So telling major Linux distros == telling the public. At this point, you have to decide whether you want to forewarn big hosters handling millions of sites and billions of visits like Cloudflare, Google, AWS, etc., or not. I don't think there's a good universal answer to this.
The importance of telling large distros doesn't lie in them immediately releasing a fix; it lies in them being able to prepare a package with the fix before the announcement, so that exactly when the announcement happens they can publish the package (and possibly do something to make it propagate faster to their distribution servers).
As is, when Heartbleed was announced, many distros took an hour or significantly more to offer a fixed package. Proof-of-concept exploits were also made in that time. That was a dangerous situation.
I fully expect critical vulnerabilities that are "responsibly disclosed" to be reported to major distros so that packages can be prepared, but not released, in advance; that way everyone is ready on the agreed-upon announcement date and the packages can be pushed live immediately.
I'd actually be okay with a system where smaller distros which use similar packaging formats to larger ones are alerted with "There is an exploit. We will publish a fixed package that will likely be compatible with your distro on DATE. Be awake then to make sure these changes go live quickly, not when one key dude wakes up in 5 hours".
Sorry for the long-winded comment. What I really wanted to do is just explain "no way to share ... deploy a fix ... without making it public" is not the reason for sharing. The reason for sharing is so that the fix can be deployed more quickly when it is deployed.
Considering the circumstances, it appears that the process followed for the release was as flawless as it could have been.
Other than that, yes, that's exactly the notion.
AFAICT, none of this happened here. A very small number of organizations was told in advance, but nobody knows what the criteria were for getting on this special advance notice list. Given how completely off guard some really big organizations were caught (Yahoo, for instance; all the Linux distros, etc.), this could have been handled a lot better.
The reality is that the open source community isn't vetted like an intelligence agency when it comes to keeping secrets close to the vest. It only takes one person in all those OSS communities to leak something of this magnitude to the press, and then the result could be even worse. The fact that this was kept under wraps for 12 days (that we know of) is a testament to the folks who decided whom to inform.
It seems (barring evidence saying otherwise) that CloudFlare managed to keep it secret while working on it, so it was proper to have them among the n who were told.
"While we believe it is unlikely that private key data was exposed, we are proceeding with an abundance of caution. We’ve begun the process of reissuing and revoking the keys CloudFlare manages on behalf of our customers. In order to ensure that we don’t overburden the certificate authority resources, we are staging this process. We expect that it will be complete by early next week."
0390: 00 00 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C ..OLOLOLOLOLOLOL
03a0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
03b0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
03c0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
03d0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
03e0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
03f0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
0400: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
0410: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
0420: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C OLOLOLOLOLOLOLOL
I do not expect this will make a material difference to the challenge; presumably you used the quoted commands to generate the answer.
# Automatically figure out what echo options to use so that echo
# '\r\n' actually just outputs the characters CR and LF and nothing
# else. This is very shell dependent.
ECHO_OPTIONS := -e -n -en
$(foreach o,$(ECHO_OPTIONS),$(if $(call seq,$(shell echo $o '\r\n' | wc -c),2),$(eval ECHO := echo $o)))
ifndef ECHO
$(error Failed to set ECHO, unable to determine correct echo command, tried options $(ECHO_OPTIONS))
endif
0690: 79 37 70 71 31 76 63 2F 74 70 49 67 68 4C 4F 4C y7pq1vc/tpIghLOL
06a0: 4A 4B 3D 3D 2D 2D 2D 2D 2D 45 4E 44 5F 52 53 41 JK==-----END_RSA
06b0: 5F 50 52 49 56 41 54 45 5F 4B 45 59 2D 2D 2D 2D _PRIVATE_KEY----
1ef0: 4E 63 46 39 33 51 75 34 64 2F 48 6E 47 6D 4B 50 NcF93Qu4d/HnGmKP
1f00: 51 42 72 4A 43 50 53 69 54 67 70 2F 63 79 37 70 QBrJCPSiTgp/cy7p
1f10: 71 31 76 63 2F 74 70 49 67 68 4C 4F 4C 4A 4B 3D q1vc/tpIghLOLJK=
1f20: 3D 2D 2D 2D 2D 2D 45 4E 44 25 32 30 52 53 41 25 =-----END%20RSA%
1f30: 32 30 50 52 49 56 41 54 45 25 32 30 4B 45 59 2D 20PRIVATE%20KEY-
1f40: 2D 2D 2D 2D 20 48 54 54 50 2F 31 2E 31 0D 0A 48 ---- HTTP/1.1..H
Seems like some experimenting (based on the differing offsets, and `%20` vs `_`).
Still going to double-check though.
Even communicating over IPC you would still be vulnerable.
It may not be possible in this clean minimal install, but in a real production environment it should still be treated as a threat, right?
OpenSSL/0.9.8o zlib/184.108.40.206 libidn/1.15 libssh2/1.2.6
0600: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
0610: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
0620: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
0630: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20 8===========D
I could be missing something, but it looks like signing does m mod p and m mod q, and part of these operations involves doing a left shift on the divisor (p, q), which is allocated to a temporary buffer. If these buffers are allocated near the heartbeat buffers, then they could be leaked.
I'm looking at crypto/rsa/rsa_eay.c, and it's possible that this code is not the code being used to do the signing in SSL.
/* signing */
static int RSA_eay_private_encrypt(int flen, const unsigned char *from,
                                   unsigned char *to, RSA *rsa, int padding)

    if ( (rsa->flags & RSA_FLAG_EXT_PKEY) ||
        ((rsa->p != NULL) &&
         (rsa->q != NULL) &&
         (rsa->dmp1 != NULL) &&
         (rsa->dmq1 != NULL) &&
         (rsa->iqmp != NULL)) )
        if (!rsa->meth->rsa_mod_exp(ret, f, rsa, ctx)) goto err;
static int RSA_eay_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, BN_CTX *ctx)

    /* compute I mod q */
    if (!(rsa->flags & RSA_FLAG_NO_CONSTTIME)) {
        c = &local_c;
        BN_with_flags(c, I, BN_FLG_CONSTTIME);
        if (!BN_mod(r1, c, rsa->q, ctx)) goto err;
    } else {
        if (!BN_mod(r1, I, rsa->q, ctx)) goto err;
    }

    /* compute I mod p */
    if (!(rsa->flags & RSA_FLAG_NO_CONSTTIME)) {
        c = &local_c;
        BN_with_flags(c, I, BN_FLG_CONSTTIME);
        if (!BN_mod(r1, c, rsa->p, ctx)) goto err;
    } else {
        if (!BN_mod(r1, I, rsa->p, ctx)) goto err;
    }
#define BN_mod(rem,m,d,ctx) BN_div(NULL,(rem),(m),(d),(ctx))

int BN_div(BIGNUM *dv, BIGNUM *rm, const BIGNUM *num, const BIGNUM *divisor,
           BN_CTX *ctx)

    if (!(BN_lshift(sdiv, divisor, norm_shift))) goto err;
BN_lshift() shifts a left by n bits and places the result in r (r=a*2^n).
Though, considering that a left-shifted p or q looks a lot like the normal p or q, I'm surprised no one has found this just by searching through the leaked data. So maybe this is not leaked, or it requires a read at the correct time, because these buffers might be trashed by another computation.
As far as I can tell, it's certainly possible for intermediate data to be leaked, but it'd require pretty spectacular timing.
That said, I'm having a bit of a hard time understanding why this challenge exists. If the possibility (even remote) exists that key material was leaked in any form, it should be assumed that the key material was leaked in the worst possible way, and that everything is compromised. They seem to agree, having rolled their keys.
The only real effect I can see from this challenge existing is that people will see it and make the assumption that Heartbleed is "not a big deal, because Cloudflare's keys weren't leaked".
In other words, I think this challenge is harmful to security overall; Cloudflare seems to be doing the right thing, but the messaging here is going to cause people to not follow suit.
Leaking the intermediate values in nginx to recover the private key looks like a dead end. However, I think this could be quite promising for Apache mpm_worker. :)
Edit: It seems that these are only trusted CA certs; there is no server cert in there.
But maybe you meant that he's already been proven wrong, and the NSA would be another example? Then I did misinterpret that bit.
The renowned clandestine operative B. Smalls eloquently stated rule #2:
Never let 'em know your next move
Don't you know bad boys move in silence and violence?
That's the joke. It's a nutty professor situation.