The Heartbleed Challenge (cloudflarechallenge.com)
284 points by jgrahamc on Apr 11, 2014 | 118 comments

You can read more about the CloudFlare Challenge and our own tests on obtaining SSL Private Key material here:


Matthew Prince Co-founder & CEO, CloudFlare

Thanks. This makes a lot of sense, and matches what I saw myself. I never got anything out of nginx, but I found apache quite easy. I never built more than a POC, and left it at that.

I didn't think about it at the time, but it was only on a newly started apache instance.

Coupled with the fact that Apache as a frontend TLS server is pretty rare on big sites nowadays, I'm feeling pretty good about what did happen versus what could have happened.

It seems like a "feature" that you'd want your process to store the private key material as low as possible in the address space so that arbitrary read overruns don't run the risk of hitting it. It seems to just accidentally be this way in nginx, but I wonder if it should just be another (tiny) layer in the overall security design.

Shouldn't they be at the top of the address space? Then overruns would never hit them, as they'd start below. Or does it not work like that?

Or you can put it in a memory area with unmapped sections on both sides, if you are paranoid.

IIRC openbsd's malloc does something like that by default, so every bit of data gets its own protected address space... and then the openssl guys built their own malloc without that feature, to get better performance :(

True.. I wonder if any specs/certifications actually require something like that. Typically I mostly use tricks like that to track down bugs, but there's nothing wrong with using it in production for something like a single key/cert alloc. It becomes a bit unwieldy if you have lots of things to protect. (Especially on machines with 64k pages :))

Well it seems like Akamai were doing exactly that. If you have lots, I would go multi-process though.

Someone else who uses Powerpc?

The bug is reading 64k from x -> x+64k. You'd want the key as low as possible in memory so the chance of the heap implementation allocating a request below it (thus allowing the +64k to overlap into the key) is next to nil.

So if your key was at an address less than x, the bug would never read it; that was my point. So I guess that means you'd have to force the heartbeat payload to be stored high, as that dictates what x is?

nice write up. we saw similar results in that the keying material never made it into the memory leaked.

i've never felt so thankful for a memory allocation pattern.

What is the format of the private key?

I.e., is it an 8192-bit AES256 key?

It's using RSA with key length of 2048 bits [1], and one can assume that both prime factors have equal bit length [2] i.e. 1024 bits each, and so the size of all the other private components can be derived from that. I don't know what OpenSSL's in-memory format is for this data, I suppose therein lies a good part of the challenge!

[1] https://www.ssllabs.com/ssltest/analyze.html?d=cloudflarecha...

[2] https://github.com/openssl/openssl/blob/master/crypto/rsa/rs...


The "don't sign comments" rule is designed to stop pointless posting of company URLs when there is no need to. In this case it is clearly appropriate for Matthew to disclose who he is, as it is important to the context of the post.

We will add a $10,000 bounty for the first published and confirmed successful completion of this challenge.


1) CloudFlare confirms success.

2) The winner publishes their solution, including source, publicly.

3) Promptly send the link to the publication to [adam | ionicsecurity | com] (for tracking the order of submissions)

Good luck!


Hello Adam!

My email is public: fedor@indutny.com , and here is my proof of identity: https://keybase.io/indutny .

If you wish, you could contact me via it.

  -----BEGIN PGP SIGNATURE-----
  Version: GnuPG v1

  iQIcBAEBAgAGBQJTSJ7IAAoJEPsOEJWxeXmZYnkP/1r9qVARb89x2bAteh9RPcRI
  VGUmRZVz/1wLqy/LmvB+XwgkEGRyjQBCa2vHPi8PpwenLEl8IXMMYyzQSqx94tkV
  y54ZTwABtXXcPIaFOu4O8sG8RM6rDVsF9FpJICVxuYzrkyQPVDMEFa3faBNEgTHo
  zpgOf5keNq3nCnhTwhkMryBzYVVMLUdQy6JUzhzOXTarmgNH5CtRW0CzFN+9sxM9
  6/EH1W4VcJt0lpcvCQK75Kv9syrbavB6qXP85b0gSvKcMHvkX0z5dPphUJcyL/9Q
  QyXE2vloNj6qwjLQRPoCymSjePeQsodhec47iQxVgil72U5X4YFYJpDHurE5KVOb
  VIGjmiXhAcL7M8MgywNNtP9iIsi45WiOBmNQVYrBr3/37TSL6FFMfpFVuOxxVrNV
  fRKRx7VFihTyYxqacwBLAkNPQ6V4QiEdEt74DQFZsokgk1dcchP4GrSypNbrM4SX
  SJW0RwqF1t44mvuAHmt0U6otgzKy4XyjmDGvki6FNE8ww+OIEQX6tgRPSTD0bURn
  PyNtZ1EKYZguQt3b4pveVK8JMgWxuCcO9LgKFbPTZJ8YBYOTCU6WtTm4OfCdnTG7
  1EtOv6c2k5nlOiK11K8M9ZPEkjq//6C0MZFn1CB7/43+tkWDTr/vSayW/6Yi8pF5
  /C1vMZXz3MmpY/gj9z+W
  =mH1/
  -----END PGP SIGNATURE-----

Thank you!

That is awesome Adam, please have it posted on the Challenge page so that more people can see it. Hopefully someone solves it soon :)

Hopefully someone doesn't solve this!

We have no official relationship with CF so they will have to decide to list it on the challenge page. We are a happy CloudFlare customer though!

Didn't Juliano Rizzo already post that he'd been able to extract keys from a server? I'm not clear on the circumstances; for instance, it might have been right after boot, and it might have been Apache, and it might have been FreeBSD.

Errata security modified their original post to say that yes they are extractable. And a moderator properly fixed the title after the article was changed to reflect the retraction:


i suspect the memory accessible by this bug depends a lot on the software, OS and possibly hardware, e.g. on openbsd and bitrig amd64, the amount of memory leaked per exploit is less than 64 KB, closer to 32 KB. if you go much past the 32 KB mark on these OSes, it segfaults.

running an exploit script against one of our own services showed only 1-2 KB of information, most of it being the (public) cert, and the rest zeroed out.

I wonder if Cloudflare is logging all data sent to and from the challenge server, and then searching it for private key fragments. If not, they should be! As it's not guaranteed that a lucky attacker will be aware of receiving key material, if it is exposed.

It's cool seeing other people's attempts to extract the key in my return buffer.

It's like multiplayer microcorruption.com

Which, FWIW, is in the works.

Please please please email out to past participants when it is available, the first one was amazing amounts of fun.

The new one is b-a-n-a-n-a-s.

What do you mean "multiplayer"?

In the spirit of this challenge? A single exploitable endpoint where any number of N people are to go at the same implementation?

You can't tease this hard, man.

Also, Friday launch? :)

Care to post more info about it?

This is fun, indeed.

Has anyone already made a patch for this bug, where the lib returns random data instead of actual heap chunks?

The irony would be such a patch leaking information about the state of some random number generator, leading to more easily guessable session keys or the like.

(Of course, creating suitable fake data with a separate PRNG to avoid this would be pretty easy.)

Someone is submitting false keys in their POST data :D


Note that by visiting, your IP and referer become accessible to anyone running a heartbleed exploit:

  93.142.x.x - - [11/Apr/2014:10:44:36 -0400] "GET /heartbleed HTTP/1.1" 200 1148 "https://news.ycombinator.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64)AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36"

Whoops, too late. Were you able to get that info from actually running the exploit?

Yes, I was. I extracted access-log entries from 23 unique IPs in a few hours, though most came from a single IP

what i am expecting people to see is that you can't actually get to the tls private key itself. we have done some testing with our backup service, cyphertite, and have yet to attack and actually compromise any keying material.

EDIT: forgot to cite neel mehta https://twitter.com/neelmehta/statuses/453625474879471616

You all are involved in so many great open source projects -- just wanted to give a shout-out to Conformal. People do notice and appreciate your contributions.

How many full-time people are working there?

we appreciate your supportive comments :)

there are about 10 of us at conformal.

I like spectrwm but it really needs a NEWS/CHANGELOG. Users cannot quickly get an idea of what has changed between release X and Y. I went to update the Debian "please package latest spectrwm" bug and there is no easy way to post the changes between 1.0.0 (yes, the Debian package is that old) and 2.5.0.

This post first agreed, then upon further review retracted that finding: https://news.ycombinator.com/item?id=7561399

It's a good idea, but even if nobody successfully exploits this particular website with the heartbleed bug, that doesn't mean much for the rest of the vulnerable sites.

Since the bug exposes a few kilobytes of uninitialized malloc() memory, the kind of data the attacker will retrieve is heavily dependent on the software the server is running.

I read elsewhere that the bug exposes first and foremost memory that OpenSSL itself used before (because OpenSSL has its own allocator running on top of malloc).

Is this really an accurate challenge? I have wondered if the true exposure risk from Heartbleed may be over-stated* due to memory separation between processes, etc. This however is probably a clean server with a fairly static install. There isn't a risk of things like session leakage which I think is the true risk of Heartbleed. Nor would there be the memory fragmentation that would occur in a production system. While intriguing I'm not sure how much it proves?

(* this is still good though as any risk is unacceptable and shouting Fire makes everyone move... and they need to on this issue)

It may be the case that Heartbleed did not practically expose private keys. The risk was not overstated in general, though; a virtually undetectable method that has been proved to do things like leak plaintext username/password credentials is still catastrophic.

There's no memory separation between a web server that incorporates OpenSSL and... itself.

It might be more accurate if this server ran some services, though, like a real server would.

Anything OpenSSL decrypts surely has to hit the OpenSSL process's memory, right? That would include requests, which could easily contain sessions, passwords, etc.

Much kudos to the guy who cracked it later in the day. I'm glad this proved a useful test. I gladly stand corrected.

I wonder if one could use this server (or, of course, any other vulnerable server, but the hostname on this one is nice) for phishing people. For example, e-mail them the link, then try to snarf the referer header out of the server's memory in the hopes that their webmail URL contains something juicy, or give them a fake form that POSTs interesting data to it. Probably far-fetched, but it's amusing to consider.

I seriously don't expect anyone will find anything (reasons mentioned in the blog post), but that doesn't mean I haven't pointed my exploit that searches for private key material in the return buffer at it on a one second interval. If anything I suppose CloudFlare will be able to produce some pretty pictures of the amount of incoming heartbeats they received!

A different CloudFlare post on HN https://news.ycombinator.com/item?id=7572666 claimed that they had a fix for the HeartBleed bug 12 days ago.

At CloudFlare, we received early warning of the Heartbleed vulnerability and patched our systems 12 days ago.

I commented on that post that the date of discovery was 7 days ago http://www.vocativ.com/tech/hacking/behind-scenes-crazy-72-h...

For whatever reason that HN post was deleted and resubmitted so it now has no comments on it. https://news.ycombinator.com/item?id=7572796

Was the bug discovered 12 days ago or 7?

The bug was discovered by whitehat researchers approximately 12 days ago. It was publicly announced 5 days ago. We got early word of it from the researchers who initially discovered it, allowing us to patch our systems and ensure all sites behind CloudFlare were not vulnerable. However, we have no way of knowing how long blackhats may have had it. It had been present in the OpenSSL software for the last 2+ years. Therefore we're trying to get an informed sense of the security risks. Hence our work attempting to use the vulnerability to recover SSL private keys and, as announced today, the CloudFlare Challenge.

I'm disgusted they chose to share it with you early and not the major Linux distros...

There's no way to share such a bug with the major Linux distros and let them deploy a fix to users without making it public at the same time. Even assuming that the distros commit to handle the fix submission and silently repackage openssl (which they don't always do, depending on their policy), the word would get out minutes after it's pushed to the update servers.

So telling major Linux distros == telling the public. At this point, you have to decide whether you want to forewarn big hosters handling millions of sites and billions of visits like Cloudflare, Google, AWS, etc., or not. I don't think there's a good universal answer to this.

Not so; companies like Red Hat (RHEL) understand the importance of disclosure timelines and won't leak it early.

The importance of telling large distros doesn't lie in them immediately releasing a fix; it lies in them being able to prepare a package with the fix before the announcement, and then exactly when the announcement happens, they can publish the package (and possibly do something to make it propagate faster to their distribution servers)

As is, when Heartbleed was announced, many distros took an hour or significantly more to offer a fixed package. Proof-of-concept exploits were also made in that time. That was a dangerous situation.

I fully expect critical vulnerabilities that are "responsibly disclosed" to be reported to major distros so that packages can be prepared, but not released, in advance; furthermore, it allows people to be ready when it's announced at an agreed-upon date so that the packages can be pushed to live.

I'd actually be okay with a system where smaller distros which use similar packaging formats to larger ones are alerted with "There is an exploit. We will publish a fixed package that will likely be compatible with your distro on DATE. Be awake then to make sure these changes go live quickly, not when one key dude wakes up in 5 hours".

Sorry for the long-winded comment. What I really wanted to do is just explain "no way to share ... deploy a fix ... without making it public" is not the reason for sharing. The reason for sharing is so that the fix can be deployed more quickly when it is deployed.

It doesn't matter if they do "responsible disclosure"; all their repos are publicly visible. It takes one person looking through commits, going "wtf is this", then taking a quick look at the code and writing a blog post to make this a wildfire.

Every major linux distro has a procedure in place for discreetly preparing updates for pre-disclosure security flaws.

It only takes one person on any of these teams to leak the details and the cat is out of the bag, and then the scramble is on.

Considering the circumstances, it appears that the process that was followed with regards to the release was as flawless as it could have been.

If this is really the case, there is no excuse.

I think the notion is the distro security team codes and tests a patch, but doesn't commit the code to public repos or release the patch publicly until an agreed-upon disclosure date.

Not necessarily that the distro security team codes the patch even. In most cases, upstream (e.g. openssl here) should have an official patch/commit that is private, but is given to these trusted distros. The security team only has to create a package with the upstream patch.

Other than that, yes, that's exactly the notion.

No, it works like this: everyone gets their fix ready hush-hush, and on an agreed date it's made public, and vendors hit the "publish" button more or less simultaneously.

Except that it doesn't work like this. The major Linux vendors are too decentralized for this to work.

Except that it does work like this on a regular basis. It's not just something that sounds nice in theory. Distros and other major software vendors regularly coordinate disclosure. Have there been failures of the process? Sure, but that's the nature of secret keeping. The advantages of coordinated release far outweigh the risk of occasional mistakes, since the latter simply leaves people in the same position as they'd have been without any coordination (i.e. the exact same position that the distros were in with heartbleed).

The fix wasn't a massive one. Even without giving details they could have let the distros know that an emergency release of OpenSSL was coming.

You would think that they could easily work out, in advance of incidents, whom they can trust with early information. I'd be amazed if Canonical (Ubuntu), Red Hat (RHEL) and The Attachmate Group (SUSE) wouldn't exercise discretion. I don't know about Debian and similar projects, but you would think they could determine this in advance.

I haven't been involved in distro security for a few years, but all this coordination used to happen via a mailing list. Organizations (distro maintainers, OS vendors, security people representing some of the larger/more security sensitive open source projects, etc) would need to apply to be on the list. They'd need to document who would have access to the sensitive materials posted, what their procedure would be for handling, etc. Impact assessment, disclosure timelines, CVE assignments from MITRE, attributions, etc etc would all be coordinated on this list. Fixes would not be pushed to public VCS systems or package repositories before the agreed-upon disclosure date.

AFAICT, none of this happened here. A very small number of organizations was told in advance, but nobody knows what the criteria were to get on this special advance notice list. Given how completely off-guard some really big organizations were caught (yahoo, for instance; all the linux distros, etc), this could have been handled a lot better.

There is just no easy way to handle this and someone had to make the decision on who got what.

The reality is that the open source community isn't vetted like an intelligence agency when it comes to holding secrets close to the vest. It only takes one person in all those OSS communities to leak to the press about something of this magnitude, and then the result could be even worse. The fact that this was kept under wraps for 12 days (that we know of) is a testament to the folks who made the decision whom to inform.

That works up until it doesn't. Telling n people can work great, telling n+1 people may be complete failure, worse than telling none at all.

It seems (barring evidence saying otherwise) that CloudFlare managed to keep it secret while working on it, so they were proper to have in the n.

I have not seen any comment by the security researchers about why they choose to disclose to some organizations (such as cloud flare) earlier than the general public or the distributions. Perhaps, they felt the need to have some organizations with very large OpenSSL installations test the patch prior to make sure it worked and did not cause any unexpected problems?

I'm sure this bug was discovered at least 1 month ago. I would not trust that my keys are safe even when using CloudFlare.

From the related blog post: http://blog.cloudflare.com/answering-the-critical-question-c...

"While we believe it is unlikely that private key data was exposed, we are proceeding with an abundance of caution. We’ve begun the process of reissuing and revoking the keys CloudFlare manages on behalf of our customers. In order to ensure that we don’t overburden the certificate authority resources, we are staging this process. We expect that it will be complete by early next week."

If it was a month ago that was a very long period of time to sit on something so devastating.

Look what I found...

  0390: 00 00 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  ..OLOLOLOLOLOLOL
  03a0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  03b0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  03c0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  03d0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  03e0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  03f0: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  0400: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  0410: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL
  0420: 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C 4F 4C  OLOLOLOLOLOLOLOL

For the command to match the description, shouldn't that be "echo -n"? Otherwise the signed string would include a trailing newline.

I do not expect this will make a material difference to the challenge - presumably you used the quoted commands to generate the answer.

There is no portable way to echo without a newline. Use printf instead.

Meanwhile over in a Makefile...

    # Automatically figure out what echo options to use so that echo
    # '\r\n' actually just outputs the characters CR and LF and nothing
    # else. This is very shell dependent.

    ECHO_OPTIONS := -e -n -en
    ECHO :=
    $(foreach o,$(ECHO_OPTIONS),$(if $(call seq,$(shell echo $o '\r\n' | wc -c),2),$(eval ECHO := echo $o)))
    ifeq ($(ECHO),)
    $(error Failed to set ECHO, unable to determine correct echo command, tried options $(ECHO_OPTIONS))
    endif

Maybe they changed the description in the meantime, it now includes the newline, saying: "Proof I have your key\n"

So out of curiosity, I ran the python heartbleed tester against their server. Surprisingly, the first response (and every response thereafter) returned a memory dump which contained at least part of what looked like a private key. I immediately became skeptical when each of these apparent private keys ended in LOLJK.

  0690: 79 37 70 71 31 76 63 2F 74 70 49 67 68 4C 4F 4C  y7pq1vc/tpIghLOL
  06a0: 4A 4B 3D 3D 2D 2D 2D 2D 2D 45 4E 44 5F 52 53 41  JK==-----END_RSA
  06b0: 5F 50 52 49 56 41 54 45 5F 4B 45 59 2D 2D 2D 2D  _PRIVATE_KEY----
Seems like a clever setup to return hand-crafted responses.

Got that too:

  1ef0: 4E 63 46 39 33 51 75 34 64 2F 48 6E 47 6D 4B 50  NcF93Qu4d/HnGmKP
  1f00: 51 42 72 4A 43 50 53 69 54 67 70 2F 63 79 37 70  QBrJCPSiTgp/cy7p
  1f10: 71 31 76 63 2F 74 70 49 67 68 4C 4F 4C 4A 4B 3D  q1vc/tpIghLOLJK=
  1f20: 3D 2D 2D 2D 2D 2D 45 4E 44 25 32 30 52 53 41 25  =-----END%20RSA%
  1f30: 32 30 50 52 49 56 41 54 45 25 32 30 4B 45 59 2D  20PRIVATE%20KEY-
  1f40: 2D 2D 2D 2D 20 48 54 54 50 2F 31 2E 31 0D 0A 48  ---- HTTP/1.1..H
Curiously enough, this was one of the first responses; after that they became somewhat predictable (with the occasional "herring").

Seems like someone is experimenting (based on the differing offset, and `%20` vs `_`).

I got a private key without the LOLJK, but I'm 99% certain this is someone trolling me.

Still going to double-check though.

Would a multi-process server engine help protect against this? Think what Chrome does with tabs. If the network request is received by a dedicated IO process which then uses IPC to communicate with other parts of the server, then perhaps sensitive information like keys would not be in the same address space so could not be leaked? I guess if the bug was in a sensitive process then it would still happen. Disclaimer: I have no idea how modern servers are architected, perhaps they already do this :) Would be interested to hear from anyone more knowledgeable.

The keys would be in the same process as the one doing the SSL in the first place, and that is the part that is vulnerable.

Even communicating over IPC you would still be vulnerable.

If what CloudFlare is saying is true (and I think it is), that the only possibility of Nginx/Apache leaking the private key is on start up due to how low of a memory address the private key is given, is there anything in Nginx/Apache to fire a bunch of dummy requests at itself before accepting public connections? This should effectively bury the memory address that the private key is stored in. Would this be useful if it doesn't exist or would it only be useful in hindsight?

Anyone know of a quick way to force nginx to restart?

Haha, that's what I've been thinking about since this was posted as well :)

Someone POSTs the "BEGIN SECRET KEY" strings and such to the server. Funny thing is, the key is probably stored in binary, without a file header/footer.

While it might not be possible to get SSL private keys directly via heartbleed, is there not also the possibility of exposing reused credentials, or something that enables a further exploit that could provide root access or similar to a server, allowing the retrieval of these keys?

It may not be possible in this clean minimal install, but in a real production environment, it should still be treated as a threat?

I have got this header:

OpenSSL/0.9.8o zlib/ libidn/1.15 libssh2/1.2.6 Host: www.cloudflarechallenge.com Accept: / PrivateKey: EiS3mdBFanVEaeRkk4otJRRHFTGi6tVZUJKl5v7rGpjJnY0gTn4PWSlOJqA2l32o Content-Length: 1721 Content-Type: application/x-www-form-urlencoded Expect: 100-continue


That is somebody trolling you by modifying their request headers.

hmm, all I'm getting is a bunch of dicks...

  0600: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20   8===========D
  0610: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20   8===========D
  0620: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20   8===========D
  0630: 20 38 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 3D 44 20 20   8===========D

Too bad the link they give on that page doesn't exist (yet): http://blog.cloudflare.com/the-heartbleed-challenge/

Sorry about that. It's meant to redirect to: http://blog.cloudflare.com/answering-the-critical-question-c...

Found this among the returned data: http://cl.ly/image/2o391V1T2R46 Who would request that much lorem ipsum nonsense and for what purpose?

perhaps to push the memory allocation for url out a bit wider, to get a segment of memory in a different place?

so i assume any intermediate values from the RSA computations will end up in the heap and may be accessible by an attacker. is it possible to reconstruct the key from these values?

i could be missing something but it looks like signing does m mod p and m mod q and part of these operations involves doing a left shift on the divisor (p, q) and this is allocated to a temporary buffer. if these buffers are allocated near the heartbeat buffers then they could be leaked.

i'm looking at crypto/rsa/rsa_eay.c and it is possible that this code is not the code that is being used to do the signing in ssl.

    /* signing */
    static int RSA_eay_private_encrypt(int flen, const unsigned char *from,
             unsigned char *to, RSA *rsa, int padding)


            if ( (rsa->flags & RSA_FLAG_EXT_PKEY) ||
                ((rsa->p != NULL) &&
                (rsa->q != NULL) &&
                (rsa->dmp1 != NULL) &&
                (rsa->dmq1 != NULL) &&
                (rsa->iqmp != NULL)) )
                if (!rsa->meth->rsa_mod_exp(ret, f, rsa, ctx)) goto err;

  static int RSA_eay_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, BN_CTX *ctx)


          /* compute I mod q */
        if (!(rsa->flags & RSA_FLAG_NO_CONSTTIME))
                {
                c = &local_c;
                BN_with_flags(c, I, BN_FLG_CONSTTIME);
                if (!BN_mod(r1,c,rsa->q,ctx)) goto err;
                }
        else
                {
                if (!BN_mod(r1,I,rsa->q,ctx)) goto err;
                }

          /* compute I mod p */
        if (!(rsa->flags & RSA_FLAG_NO_CONSTTIME))
                {
                c = &local_c;
                BN_with_flags(c, I, BN_FLG_CONSTTIME);
                if (!BN_mod(r1,c,rsa->p,ctx)) goto err;
                }
        else
                {
                if (!BN_mod(r1,I,rsa->p,ctx)) goto err;
                }

  #define BN_mod(rem,m,d,ctx) BN_div(NULL,(rem),(m),(d),(ctx))

  int BN_div(BIGNUM *dv, BIGNUM *rm, const BIGNUM *num, const BIGNUM *divisor,
           BN_CTX *ctx)

    if (!(BN_lshift(sdiv,divisor,norm_shift))) goto err;

  BN_lshift() shifts a left by n bits and places the result in r (r=a*2^n).
EDIT: obviously if you leak p and q you can trivially reconstruct the private key :) but it is possible other intermediate values might be leaked as well that allow for key reconstruction.

though, considering left-shifted p and q look a lot like normal p and q, i'm surprised no one has found this by just searching through the leaked data. so maybe this is not leaked, or it requires a read at the correct time because these buffers might be trashed by another computation.

> maybe this is not leaked or it requires a read at the correct time because these buffers might be trashed by another computation.

As far as I can tell, it's certainly possible for intermediate data to be leaked, but it'd require pretty spectacular timing.

That said, I'm having a bit of a hard time understanding why this challenge exists. If the possibility (even remote) exists that key material was leaked in any form, it should be assumed that the key material was leaked in the worst possible way, and that everything is compromised. They seem to agree, having rolled their keys.

The only real effect I can see from this challenge existing is that people will see it and make the assumption that Heartbleed is "not a big deal, because Cloudflare's keys weren't leaked".

In other words, I think this challenge is harmful to security overall; Cloudflare seems to be doing the right thing, but the messaging here is going to cause people to not follow suit.

i've locally tested the BN_div function and the sdiv buffer is intact at the end of the function with a p or q value. however, the BN values seem to be allocated in some kind of pool and they are reused so the private keys get clobbered by another BN function soon after. also, at the end of the RSA_eay_private_encrypt function the BN values are zeroed (or written over with crap data) so unless there is another implementation bug it should be impossible to leak the intermediate values in a single threaded implementation.

leaking the intermediate values in nginx to recover the private key looks like a dead end. however, i think this could be quite promising for apache mpm_worker. :)

So it looks like you might be right. Debian uses Apache mpm_worker by default, and after getting a test install running and using ab to provide load I managed to get the private key in under a minute on the first run: http://t.co/nYvIw7q4M8 (I was lucky the first time, usually it seems to take a little longer.)

for nginx it might not be possible for the intermediate data to be leaked. nginx is single threaded, so as long as the intermediate buffers are safely freed or clobbered by the end of the ssl signing method it won't be possible to leak the key that way. i suspect multithreaded apache would pose a real risk.

ssl_certificate /home/nick/ssl-bundle.crt

ssl_certificate_key /home/nick/server.key

Got that one too. Also got this [1] but it looks like public keys maybe ? I'dont have time to check right now.

[1] http://pastebin.com/KiVNV0c6

I got this, but I have too little time to verify whether it actually contains key information or not: http://pastebin.com/0CRw6hSy

Edit: It seems that these are only trusted CA certs, there is no server cert in there.

Don't be surprised when the winner has an @nsa.gov email address.

I would be shocked if Neel was wrong about private key exposure AND it is a nation state that highlights his misunderstanding.

CloudFlare isn't making the same claim; they're just saying it doesn't seem to happen on their (modified) nginx setup.

I do not understand what you are trying to convey in your message. I never mentioned Cloudflare and neither did the comment I was responding to. What part of my comment led you to believe I was presenting/responding-to a claim made by cloudflare?

The article is about CloudFlare saying their keys are safe. We were discussing the implications of someone winning the CloudFlare challenge. You suggested the fall of the challenge would confirm Neel being wrong? I was alluding to Neel's position that any key leaking is "unlikely", a much more tenuous position than CloudFlare's.

But maybe you meant that he's already been proven wrong, and the NSA would be another? Then I did misinterpret that bit.

Too often you see lame jokes like the original comment that require believing the NSA is leaps and bounds ahead of the world in infosec AND that the NSA is composed of bumbling morons. The NSA would be grossly incompetent if they knew how to retrieve the private key AND then informed the world of their offensive capability. This is a basic tenet of intelligence operations: you do not publicize your capabilities to your adversaries.

The renowned clandestine operative B. Smalls eloquently stated rule #2:

  Never let 'em know your next move
  Don't you know bad boys move in silence and violence?

> leaps and bounds ahead of the world in infosec AND yet NSA is composed of bumbling morons

That's the joke. It's a nutty professor situation.

Is this running on an off-the-shelf nginx or your custom version?

The challenge web site is running stock NGINX.

Anyone know what the comodoca certs are all about?

I must be the only one that's pretty much done with having to read about heartbleed.

My guess is that if this gets exploited it is going to be used in combination with another (yet unknown) bug. Now all we can do is sit back and hope the white hats prevail.

perfect attempt to steal 0day spl0its :P

