Akamai releases source of their custom secure_malloc for OpenSSL (gmane.org)
267 points by nikcub on Apr 13, 2014 | 80 comments



For background: in their Heartbleed announcement, Akamai said they didn't think any sensitive information[0] had been disclosed by the bug, because they had wrapped OpenSSL's malloc to create two heaps: a secure heap for sensitive data like keys, and an ordinary heap:

https://blogs.akamai.com/2014/04/heartbleed-update.html

That claim generated a lot of interest in their solution, which prompted them to release the patch.

edit: [0] By "sensitive information" we mean keys; if I understand the implementation correctly, you could still overrun into a web server allocation and get back HTTP headers.
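
To make the two-heap idea concrete, here's a minimal sketch as I understand it from the blog post (my own reconstruction, not Akamai's code; secure_malloc and its bump-allocator arena are hypothetical stand-ins):

    /* Two-heap sketch: sensitive allocations come from a dedicated,
       mlock'd mmap arena; everything else uses the ordinary heap.
       secure_malloc is a hypothetical name, not Akamai's API. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define ARENA_SIZE (1 << 20)        /* 1 MiB secure arena */

    static unsigned char *arena;        /* base of the secure arena */
    static size_t arena_used;           /* trivial bump pointer */

    static void *secure_malloc(size_t n)
    {
        if (!arena) {
            void *m = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (m == MAP_FAILED)
                return NULL;
            mlock(m, ARENA_SIZE);       /* keep key material out of swap */
            arena = m;
        }
        if (arena_used + n > ARENA_SIZE)
            return NULL;
        void *p = arena + arena_used;
        arena_used += n;
        return p;
    }

    int main(void)
    {
        char *key = secure_malloc(32);  /* key material: secure heap */
        char *buf = malloc(4096);       /* request buffer: ordinary heap */

        /* an overread walking off buf can't reach key at a small fixed
           offset, because the two live in unrelated mappings */
        printf("key=%p buf=%p\n", (void *)key, (void *)buf);
        free(buf);
        return 0;
    }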


We definitely leaked session cookies and passwords, until we patched. This only protected SSL private keys.


You could do better with a probabilistically secure allocator instead: http://people.cs.umass.edu/~emery/pubs/ccs03-novark.pdf

Randomized allocation makes it nearly impossible to forge pointers or locate sensitive data in the heap, and makes reuse unpredictable.

This is strictly more powerful than ASLR, which does nothing to prevent Heartbleed: moving the base of the heap doesn't change the relative addresses of heap objects under a deterministic allocator. A randomized allocator does change these offsets, which makes it nearly impossible to exploit a heap buffer overrun (and quite a few other heap errors).
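
A toy sketch of the randomized-placement idea (my illustration, not the paper's actual allocator):

    /* Toy randomized allocator: every object lands in a random slot of
       an over-provisioned heap, so object-to-object offsets (what a
       Heartbleed-style overread walks across) are unpredictable. */
    #include <stdio.h>
    #include <stdlib.h>

    #define SLOT  64                  /* one fixed size class */
    #define SLOTS 4096                /* heap is much larger than needed */

    static char heap[SLOTS * SLOT];
    static char used[SLOTS];

    static void *rand_alloc(void)
    {
        for (;;) {                    /* retry until a free slot is hit */
            int i = rand() % SLOTS;
            if (!used[i]) {
                used[i] = 1;
                return heap + (size_t)i * SLOT;
            }
        }
    }

    int main(void)
    {
        srand(12345);   /* a real allocator would seed from a CSPRNG */
        char *a = rand_alloc();
        char *b = rand_alloc();
        /* with a deterministic allocator, b - a would be a constant an
           attacker can learn once; here it varies with the seed */
        printf("offset b - a = %td bytes\n", b - a);
        return 0;
    }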


That paper only seems to mention heap overflows for purposes of writing to a target object that will later be used for indirection or execution. I don't see how it makes Heartbleed any better to extract a shuffled heap instead of a sorted one. What am I missing?


It's not just a shuffled heap, it's also sparse. Section 4.1 covers heap overflow attacks, with an attacker using overflows from one object to overwrite entries in a nearby object's vtable. Because the objects could be anywhere in the sparse virtual address space, the probability of overwriting the desired object is very low (see section 6.2).

The same reasoning applies to reads. If sensitive objects are distributed throughout the sparse heap, the probability of hitting a specific sensitive object is the same as the probability of overwriting the vtable in the above attack. The probability of reading out any sensitive object depends on the number of sensitive objects and the sparsity of the heap.

There are also guard pages sprinkled throughout the sparse heap. Section 6.3.1 shows the minimum probability of a one byte overflow (read or write) hitting a guard page. This probability increases with larger objects and larger overflows. You can also increase sparsity to increase this probability, at a performance cost.
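
As a back-of-the-envelope version of that argument (my own assumed numbers, not the paper's exact model): if each page an overread can touch is a guard with probability g, a sequential read across k pages gets caught with probability about 1 - (1-g)^k:

    /* Odds that a sequential overread of k pages hits a guard page,
       assuming each page is independently a guard with probability g
       (the model in section 6.3.1 is more precise). Build with -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double g = 1.0 / 16;              /* 1 in 16 pages is a guard */
        for (int k = 1; k <= 64; k *= 4)  /* overread length in pages */
            printf("%2d pages read -> caught with p = %.3f\n",
                   k, 1.0 - pow(1.0 - g, k));
        return 0;
    }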


An attack that reads everything is different from an attack that writes everything; section 4.1 doesn't seem to account for that. The latter will just crash the computer like some kind of Core Wars champ. The former can copy out the whole heap! So a writing attacker has to worry about crashing the server or getting caught. A reading attacker can just loop, then run.

I believe the guard pages help, but random guard pages just mean I won't know quite what's protected and what isn't. This last week I benefited quite a bit from being able to reconstruct year-old server memory layouts precisely.

In this case, I want a marginal chance of compromise no worse than 2^-192, about the strength of RSA-2048.


From reading that blog post it is not clear to me whether their wrapping code existed prior to the discovery of the Heartbleed bug. The post summary says:

> Akamai patched the announced Heartbleed vulnerability prior to its public announcement. We, like all users of OpenSSL, could have exposed passwords or session cookies transiting our network from August 2012 through 4 April 2014. Our custom memory allocator protected against nearly every circumstance by which Heartbleed could have leaked SSL keys.

So: did the custom memory allocator exist already in August 2012? From reading the post this looks to be the case. Could it be that someone at Akamai took a look at the heartbeat (or other OpenSSL) code, decided that it could lead to memory leaks, and wrote their own memory allocator wrapper code to guard against this?

Edit: SSL -> OpenSSL


From the original post:

> This patch is a variant of what we've been using to help protect customer keys for a decade.

So it protects keys against Heartbleed, but not other HTTP-related data (URLs, cookies, headers, etc.).


How I read it is that this code already protected their private key, but the Heartbleed bug still disclosed other private details (such as submitted user data).


Akamai's patch predates the Heartbleed issue in OpenSSL, so it wasn't a response to Heartbleed, and they were still vulnerable for non-key data.


Keys are not passwords and session cookies.


Why can't long-term private keys be protected by, and used solely within, the kernel?

The Linux CryptoAPI seems to have an asymmetric-key interface[0], RSA signatures are implemented[1], and there's even an X.509/ASN.1 parser[1]. Shouldn't this be the default on Linux, rather than furthering NIH syndrome with crude library-local allocators? At the very least, shouldn't there be a secure malloc in glibc? And if not, why doesn't OpenSSL use an existing secure malloc library[2]?

[0] https://www.kernel.org/doc/Documentation/crypto/asymmetric-k...

[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

[2] http://www.jabberwocky.com/software/secmalloc/
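
For what it's worth, the kernel can already hold key material on a process's behalf through the keyring API, though as far as I know it won't do the private-key math for you, so this only keeps the blob off your heap between uses. A minimal sketch (the key name and payload are made up):

    /* Parking a secret in the kernel keyring via the "user" key type
       (link with -lkeyutils). This is not in-kernel RSA signing; it
       just keeps the blob out of the process heap between uses. */
    #include <stdio.h>
    #include <keyutils.h>

    int main(void)
    {
        const char secret[] = "-----BEGIN RSA PRIVATE KEY----- ...";
        key_serial_t id = add_key("user", "tls:example-key",
                                  secret, sizeof(secret),
                                  KEY_SPEC_PROCESS_KEYRING);
        if (id < 0) {
            perror("add_key");
            return 1;
        }
        printf("key %d is now held by the kernel\n", id);
        /* keyctl_read(id, buf, buflen) fetches it back when needed */
        return 0;
    }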


There have been a lot of posts about the custom malloc. From what I understand, parts of OpenSSL actually depend on the buggy implementation (specifically, the fact that the last thing free'd can be malloc'd again).
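
If I understand the reports, the dependence is on LIFO reuse, roughly like this toy freelist (my sketch, not OpenSSL's code):

    /* Toy LIFO freelist: the next malloc returns the last thing freed,
       which is the reuse pattern described above. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_SIZE 256

    struct node { struct node *next; };
    static struct node *freelist;

    static void *fl_malloc(void)
    {
        if (freelist) {               /* pop the most recently freed */
            void *p = freelist;
            freelist = freelist->next;
            return p;
        }
        return malloc(BUF_SIZE);
    }

    static void fl_free(void *p)
    {
        struct node *n = p;           /* push onto the list head */
        n->next = freelist;
        freelist = n;
    }

    int main(void)
    {
        void *a = fl_malloc();
        fl_free(a);
        void *b = fl_malloc();
        assert(a == b);               /* last free'd, first malloc'd */
        printf("reuse confirmed: %p\n", b);
        return 0;
    }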


> If not, why doesn't OpenSSL use an existing secure malloc library?

Any extra library dependency makes OpenSSL less portable. It doubles the work needed to get it running on your new embedded system. *shrug* It may also increase the code size and attack surface (although probably a good trade-off).

I don't disagree that it sounds like a good idea :).


OpenSSL aims to be cross-platform, so their options are to use kernel-specific cryptography where possible, and a generic implementation where one isn't available; or always use the generic implementation. They probably reasoned at one point that the latter is less work.


I would be interested to know if this actually protects against Heartbleed in the real world. I can see how it would keep the private key protected, but I'm not sure it protects intermediate values used during calculation.

Given that it's likely the private key can be derived from those intermediate values, it would be good to know whether this custom allocator is 'enough' (i.e., are the intermediate values stored in memory allocated with the same allocator?).

This week has been full of surprises. Has anyone tested this?


I didn't, but Akamai itself did [1]:

> To test this belief, our engineers have rebuilt every Akamai configuration since August 2012, when we upgraded to openssl-1.0.1. This included every kernel, hardware, and edge server software combination; and then a careful inspection of the ways in which memory was allocated, to see if any non-long-term memory allocations might border on our secure heap. Most of the configurations were proven safe; but we found one configuration that was not - there was an available memory block in range of the secure key store.

> This less safe configuration was active on our network for nine days in March 2013.

[1] https://blogs.akamai.com/2014/04/heartbleed-update.html


That is very impressive. I'm amazed they essentially keep a revision history of their system configuration.


Yes, I'm also very impressed by this. It shows that they have their ops under control and know what they're doing.


Thanks!

As someone who grew up in the Akamai ops environment... How else could you get anything done? I've taken this tool for granted since it was built in 2000. I expect every planetary scale computing company to do this. Isn't it what Heroku and similar git-push-to-deploy systems are supposed to get you?


It apparently did for Akamai, though it certainly wouldn't be the first time someone's wrongly thought they were protected from Heartbleed key extraction. It doesn't seem to work in general though; I finally got the version without guard pages working on Debian (the version with them is totally broken and crashes during init) and it doesn't seem to make much difference. Also, it doesn't support threaded servers.


As Rich said: a prototype that works in our world, for our use cases. It needs lots of work to generalize. For example, we have about ten thousand SSL private keys per machine. This, without guard pages, protects all but the first few hundred perfectly. Something else protects those first few hundred (and that something else is a lucky freak accident).

If you load 1000 keys, can you extract anything past the 256th?


Did you actually test this? Because apparently someone looked at the code and found your assumptions are wrong and you're not protecting the private keys as well as you thought you were: http://lekkertech.net/akamai.txt (Also, your blog post appears to be based on the same assumptions.)


Disclaimer: I'm not too familiar with the code.

"This arena is mmap'd, with guard pages before and after so pointer over- and under-runs won't wander into it."

Doesn't that mean that this will only protect against overreads of a certain maximum length (such as the 16-bit length field in Heartbleed)? Seems like that wouldn't help with a length defined as a bigger integer. I wonder if there are better ways of doing this.


The beginning and end of the allocated section of memory (the guard pages they refer to) are marked as PROT_NONE with mprotect, meaning that any access to them will cause a segfault. It's possible that a misbehaving process could jump straight into the unprotected memory, but it would have to not read from the guard pages at all. Buffer overruns don't have that problem (since they access memory sequentially), and would cause the program to crash before any sensitive data could be read (assuming the overrun starts outside the protected area).
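
If I'm reading the patch right, the layout is roughly this (my reconstruction, not the actual code):

    /* Arena with PROT_NONE guard pages at both ends. A sequential
       overread walking toward the arena faults on the guard page
       before it can reach the protected memory. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t page  = (size_t)sysconf(_SC_PAGESIZE);
        size_t arena = 16 * page;

        /* one contiguous mapping: guard | arena | guard */
        unsigned char *base = mmap(NULL, arena + 2 * page,
                                   PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        mprotect(base, page, PROT_NONE);                /* low guard  */
        mprotect(base + page + arena, page, PROT_NONE); /* high guard */

        unsigned char *secure = base + page;
        memset(secure, 0, arena);       /* key material would live here */
        printf("secure arena at %p\n", (void *)secure);

        /* touching base[0] or secure[arena] now segfaults, so a
           Heartbleed-style sequential scan can't cross into the arena */
        return 0;
    }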


That makes sense, thank you!


The idea is that a read overrun where the base pointer starts in a different arena would have to cross the guard pages to "get to" the secure arena.


Guard pages address lots of issues with really small overhead, nice. But if you know where to look, they're not foolproof.

I wonder what the cost of putting each primitive (e.g., RSA private signing + key) in a separate (child) process would be. Mail servers (qmail and Postfix) seem to use designs like this (rough sketch at the end of this comment).

You could also imagine a page type that would only be readable to certain code segments (would take CPU support to do it.)
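
Here's a rough sketch of that separation with fork and a socketpair (the "signing" below is just a placeholder for a real private-key call):

    /* Privilege-separation sketch: the key exists only in a child
       process; the parent requests operations over a socketpair.
       The "signature" is a placeholder, not real crypto. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        if (fork() == 0) {              /* child: the key holder */
            close(sv[0]);
            const char key[] = "secret-key-material";
            char req[64], sig[128];
            ssize_t n = read(sv[1], req, sizeof req);
            if (n <= 0)
                _exit(1);
            /* placeholder for an RSA private-key operation with key */
            int len = snprintf(sig, sizeof sig, "signed(%.*s) klen=%zu",
                               (int)n, req, strlen(key));
            write(sv[1], sig, (size_t)len);
            _exit(0);
        }

        close(sv[1]);                   /* parent never maps the key */
        write(sv[0], "hello", 5);
        char sig[128];
        ssize_t n = read(sv[0], sig, sizeof sig - 1);
        sig[n > 0 ? n : 0] = '\0';
        /* a Heartbleed-style overread here can't reach the key: it
           lives in a different address space */
        printf("parent got: %s\n", sig);
        wait(NULL);
        return 0;
    }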


> You could also imagine a page type that would only be readable to certain code segments (would take CPU support to do it.)

Isn't this what TPM is supposed to provide? It received an unfavorable reputation because of its association with DRM, but it addresses the same problem.


Intel SGX would allow implementing this efficiently in-process: https://software.intel.com/en-us/blogs/2013/09/26/protecting...


Expensive, and it's not clear that it would help.

Context switching between processes is costly in terms of cache misses and CPU time.

The other problem is that you then have to IPC the sensitive data to the other process, which means putting it in temporary memory, exposing it to stack-smashing or heap attacks. You've also got to worry about synchronizing the different processes.


A CDN writing a custom malloc sounds kind of crazy, but I guess it paid off.


An open source SSL library writing a custom malloc sounds kind of crazy, but I guess, oh, it was crazy.


Here's a good blog article which looks at their patch: http://blog.nullspace.io/akamai-ssl-patch.html.


If Akamai realized there was a major security vulnerability more than a year ago why didn't they share this information with the OpenSSL team?


Because they didn't realize there was a major security vulnerability. Instead, they decided they weren't comfortable with the allocation policies of mainline OpenSSL and rewrote them.

As for why they didn't "share" earlier, and assuming they didn't: the OpenSSL project would probably not have accepted this changeset anyways. It's extremely intrusive and the problem it addresses was, at the time this was written, speculative.


I don't think they knew about Heartbleed. This is a stopgap to protect against leaking private keys via memory bugs.


In other words, this is a general protection against a whole class of vulnerabilities.


I think it's a fair question. If they had what they believed to be an improved (more secure) OpenSSL, why not contribute the patch back to the community? After all, they are standing on the shoulders of giants here; it seems a bit selfish to take an open-source project, improve it, and then not share that back.

Yes I know that many open-source licenses do not obligate one to do this, but it still seems like the right thing to do to me.


I can't say what happened in this case, but after you submit a patch to OpenSSL and wait six months, a year, two, or even close to four, and simply don't hear anything back (or hear that they're doing something their own way instead), you just sort of lose the will. After a while you get pragmatic and do what you need for your own job and customers.


Probably this. Submitting patches back to open source is expensive: you have to dedicate engineers to tidying up and submitting patches, for no benefit (other than being closer to upstream, which is of marginal use, especially if there are clear forks) when they could be developing new functionality instead. In the real world, the priority is often new development.


If OpenSSL had been GPL-licensed, this would have been public a long time ago.


Actually, as long as they don't distribute the code, even under the GPL they don't have to share it.


GNU Affero license changes that: https://www.gnu.org/licenses/why-affero-gpl.html

"It [Affero GPL License] has one added requirement: if you run the program on a server and let other users communicate with it there, your server must also allow them to download the source code corresponding to the program that it's running."


AGPL is not the same as plain GPL. The vast majority of the code with a licence from the GPL family is not AGPL, but GPLv2, GPLv3 or LGPL.

I understand the reasoning behind AGPL and on the surface it seems like a good idea, to stop the parasitic behaviour many have towards FOSS. But there are many situations where you have very legit reasons to avoid it. Even in this case, AGPL would put you in a delicate position where you need to immediately disclose the changes you made without respecting the non-disclosure period for mitigation.

It's a licence that tries to solve the problem of the GPL going obsolete when many companies no longer distribute software, but rather services based on the software (like Google or Facebook) and they can basically get away with not giving anything back at all to the community work they built upon. But this is very hard to regulate and as a result AGPL is often so cumbersome that AGPL-licensed code is strongly avoided, and it's also extremely hard to enforce when the service simply doesn't release any code or software.


It is not true that AGPL is incompatible with responsible disclosure. Responsible disclosure timelines mean that the source code would be published long before anyone had time to complain about the source not being available.

Enforcement is hard, but the AGPL is strictly stronger than the GPL, which doesn't even invite enforcement on sharing server code.

Yes, many companies avoid AGPL software. That isn't a problem for AGPL-leaning authors, that's the point.


Even if (probably) no one has time to complain about it, you'd still technically be violating the license by withholding code, right?


Probably. AGPL says you need to "provid[e] access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software", and GPLv3 also seems to require digital distribution of software to include immediate source access. GPLv2's source requirement, on the other hand, can be satisfied by a "written offer" to provide the source no matter how the binary is distributed; I think this is (along with apathy) how Apple gets away with taking months to publish GPLv2 sources.


That exposes whoever does this to uncharted legal territory.

There are many reasons why AGPL is used in extremely few projects of any relevance, compared to other GPL licences, or BSD, MIT, etc.


And if OpenSSL had been licensed with a GNU Affero license, we'd all be running something else.


I severely question the enforceability of the AGPL. If they're not distributing they don't need to accept the license in the first place.


AGPL is based on modification rather than public performance or use. You don't have permission to modify copyrighted things by default so if you do modify, then you either comply with the AGPL or violate copyright law.


You can modify copyrighted things, you just can't in general copy your modified version.


Because a number of court cases have sided with the interpretation that "copying bits into RAM for execution" is itself governed by copyright law, in practice you wouldn't be able to use your modified software without breaking copyright. This is why software licenses are "needed" -- lacking some kind of limited license to make copies, execution of software isn't permitted.


Usage licenses are enforceable. At least in the US for sure, and the DMCA builds on that.

Now, to prove that someone is running AGPL software and that they have modified it, that seems pretty hard in many scenarios.


So does this mean users don't need to accept EULAs?


EULAs are largely a farce, but in theory you have to accept the EULA to use the software. As far as I know, the GPL/AGPL comes in at a different point entirely.


Please research what the "L" in "GPL" means.

http://www.gnu.org/licenses/gpl-3.0.html

Software is protected by copyright by default, meaning you can't make a copy. A license is a limited waiver of copyright that grants permission to make a copy, if the conditions of the license are met.


You're not the one making a copy when you just accept the software to run it.


Yup. It's pretty common for open source to be used, abused, modified in-house, and not shared. Just look at Sergey Aleynikov and Goldman Sachs. [0] Further, I've seen a shop deploy hundreds of thousands of CentOS nodes to avoid RHEL license fees. And that doesn't even begin to scratch the surface of shops not funding the FOSS they exploit, like OpenSSH. Most of them are stingy, demanding, greedy bastards.

[0] http://blog.garrytan.com/goldman-sachs-sent-a-brilliant-comp...


>Most of them are stingy, demanding, greedy bastards.

I'd go so far as to say "nearly all of them."


There are a few counterexamples, which is a good thing:

iXSystems does a lot of cool community work via FreeNAS, PC-BSD and FreeBSD.

Github (Ruby), thoughtbot (Ruby), Google (Go, Python), Dropbox (Python), AT&T (C++, Ruby), Zurb (Ruby, Foundation), Twitter (Ruby, Scala), ...

I'm missing a bunch, and some platforms too.


Also part of the reason I grabbed the eject handles on enterprise devops consulting. Although I did manage to fight for (and win) open-sourcing changes to net-ldap so that it worked with Active Directory.


> Further, I've seen a shop deploy hundreds of thousands of CentOS nodes to avoid RHEL license fees

Such decisions often happen at the engineer level, and it isn't for money reasons so much as "avoid purchasing and requisition BS, followed by license-compliance BS" reasons.


You mean the application? They need to distribute the code if they provide applications based on the code.


But it's still GPL, and anyone who works at the company is free to share it. Whereas with Apache they can just make the entire thing proprietary and forbid sharing. Granted, we don't know that that's what happened here. Maybe their version was still Apache-licensed and nobody bothered to share it at any point.


No, this is incorrect. Just because the original code is GPL, doesn't automatically make it legal for any modifications to be released to the public as GPL without the copyright owner's consent. (In this case, the company owns the copyright.)


IANAL, but I don't think that's how it works. The GPL allows the licensee to redistribute the code and so on. But if you're working for a company, you are not the licensee, the company is (in the same way the company owns the copyright on your work, not you).

Otherwise, the AGPL would never have seen the light of day.


Modifications made to non-distributed GPL code are not automatically GPL. Only distribution of code derived from GPL-licensed code requires a GPL license for that derivative. But even on publication a modification/derivation of GPL-licensed code is not automatically GPL. It can also just be copyrighted code published in violation of the GPL.

So it's also not true that "any employee of the company can publish the modifications to the GPL-licensed code".


And Akamai would probably not be using OpenSSL if it were GPL (maybe because of a business decision within Akamai, or just because OpenSSL would never have become so popular).


Why not? They're using Linux and it's GPL. Undoubtedly they have made kernel modifications as well.

They're not distributing the code outside of their company, so the source license in this case is entirely irrelevant.

We could make a different argument if they were using AGPL'd code...


Can't distribute, actually. We have internal combinations of GPL'd software and OpenSSL, which carries Eric Young's advertising requirement. The GPL forbids us from distributing that combination with the extra requirement, and OpenSSL's license forbids us from distributing it without that requirement.

So we're stuck: we avidly use and consume free and open source software, but can only give back code by carving off bits here and there.


Some GPL projects have added an OpenSSL exception to their licenses, specifically allowing OpenSSL to be combined with their project.


I think this is a widespread misunderstanding about GPL, and I am afraid it is the reason why businesses typically have a GPL-aversion.


I haven't worked for Akamai for more than a decade, but at that time, I can assure you that not only were there lawyers who thoroughly understood the GPL, but there were many, many engineers who would not have remained silent for a breach of the terms. Assuming that culture to have continued -- and I see no reason why it would not -- Akamai should be up there with Red Hat in terms of license compliance.


I haven't been there for almost two years, but even then, I'm sure most engineers wouldn't have stood silent if an OSS license was being violated... unless their managers made them afraid for their jobs and careers for doing so. Oh, and getting permission to submit a simple patch to an otherwise completely GPL'd piece of software was always an exercise in frustration. Send the necessary info to Legal. Wait 3-6 months for them to say no. Lather, rinse, repeat. It was pretty damn bad while I was there.

Now, my current employer... Email your manager: "I wrote this entirely in-house and would like to open-source it." Manager: "OK, let's talk to Legal." Legal: "OK, get at least one other person to verify that it doesn't contain any trade secrets, and sign this." Upload to GitHub. Done.

And submitting patches upstream - just a matter of code-review and sending it out. Now that's OSS-friendly.


They run FreeBSD, don't they?


Based on job descriptions, it would appear they are mostly a Linux shop with some Solaris. [0]

[0] http://jobs.akamai.com/search/?q=linux&locationsearch=


I think they (businesses) often pick OpenSSL exactly because it's not GPL-licensed, so it's kind of chicken-and-egg.


It would have to be the Affero GPL actually, since they just run it on their servers and don't distribute the software.



