OpenSSL is written by monkeys (2009) (peereboom.us)
212 points by dptee 1146 days ago | 180 comments

I've done quite a bit of programming with the OpenSSL library and this article is only scratching the surface of the awfulness. Documentation is horrible to non-existent, you really do need to go spelunking into the source to figure out how things work, and the code really is that horrible.

The worst thing is that error reporting is not consistent - sometimes -1 means error, other times 0 means error, other times 0 means success, and sometimes it's a combination. This is really, really bad for a crypto library since properly detecting errors is usually critical to security.
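To make that concrete: as far as I can tell, SSL_connect returns 1 on success (0 or negative means "consult SSL_get_error"), RAND_bytes returns 1 on success and 0 on failure, and SSL_read returns a positive byte count on success. A sketch, in Python pseudocode rather than real bindings, of the defensive table callers end up building:

```python
# Hypothetical wrapper (not real bindings): because each OpenSSL function
# has its own notion of "success", callers end up keeping a per-function
# predicate table just to check errors uniformly.
SUCCESS = {
    "SSL_connect":       lambda r: r == 1,  # 1 = ok; 0/-1 = consult SSL_get_error
    "RAND_bytes":        lambda r: r == 1,  # 1 = ok; 0 = failure
    "SSL_read":          lambda r: r > 0,   # >0 = bytes read; <=0 = error/shutdown
    "EVP_EncryptUpdate": lambda r: r == 1,  # 1 = ok; 0 = failure
}

def check(func_name, ret):
    """Turn an unchecked error code into a loud exception."""
    if not SUCCESS[func_name](ret):
        raise RuntimeError(f"{func_name} failed (returned {ret})")
    return ret
```

The point is not that this wrapper is good design; it's that a sane library would make it unnecessary.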

O.K. This is probably a stupid question, but if it was apparent to many that the code of OpenSSL was horrible, why did people keep using it, and why did nobody try to refactor it? How is it possible that such a popular and critical piece of open-source software survived the years without a complete face-lift, and that nobody wrote thorough documentation?

It is really difficult to refactor such things at the public API layer once they are widely adopted by 3rd party projects.

Take the aforementioned error return inconsistencies -- how do you possibly refactor that for consistency when so many other projects are consuming OpenSSL as a library? You're trading one big problem (new clients are likely to get error return handling wrong) for another: if you modify all the returns for consistency, existing clients are now expecting the wrong return codes, or, if you're smart and rename the functions to avoid that, the 3rd-party consumers at the very least now need to do a lot of rewriting and deal with a painful crossover phase supporting both the old and new APIs.

> how do you possibly deal with refactoring that for consistency when so many other projects are consuming OpenSSL as a library?

You split your project into two components:

1. a new, clean, minimal core with a nice, shiny, new API; and

2. an "OpenSSL emulation layer" that loads your core library, and wraps it in OpenSSL-compatible cruft.

It'd be very similar to, say, replacing Direct3D with OpenGL, and then writing a Direct3D emulation layer that calls OpenGL.
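As a sketch of that split (all names here are hypothetical): a core with one consistent convention, raise on failure, plus a shim that translates exceptions back into OpenSSL-style mixed return codes so old callers keep working:

```python
# Hypothetical "clean core + compatibility shim" split.
class TLSError(Exception):
    pass

def core_read(conn):
    """New, consistent API: return data on success, raise on failure."""
    if conn.get("error"):
        raise TLSError(conn["error"])
    return conn["data"]

def SSL_read_compat(conn):
    """Emulation layer: translate exceptions back into the old mixed
    return-code convention so existing callers need no changes."""
    try:
        data = core_read(conn)
        return len(data)   # old convention: >0 = bytes read
    except TLSError:
        return -1          # old convention: -1 = error
```

New code targets the clean core; legacy consumers link against the shim during the crossover phase.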

It sounds so easy when you break it out like that, but it is a considerable investment of time and money. Convincing people to spend either of those on refactoring vs. adding new features is a constant battle. No one ever really understands it, and the typical response is "you want to go back and do what you already did, just differently?"

Projects like this, which need urgent attention but are so horrendously complicated and virtually impossible to implement correctly, are very hard to commit to. You're going to be in for a world of hurt no matter what you do.

Unless there's an actual federation behind the project that can fund it and keep it stocked with talent and oversight, something Mozilla or Apache could do, it will continue to be an unmitigated disaster.

Agreed. In fact, a lot of people are raging at OpenSSL and the OSS community for never fixing this stuff, but no one really raged at Apple over their SSL issue. They came out, showed the code and the missing curly brace, and gave the "accidents happen" excuse. But why wasn't that caught in unit testing?

The only reason it wouldn't have been caught in unit testing is because Apple isn't unit testing their own SSL code. If Apple doesn't commit to doing security well why do people expect OSS to?

It's because open-source software doesn't have a profit motive. It doesn't have an excuse not to.

People did rage over Apple's SSL issue, but Apple handled it swiftly, pushing out updates to all affected devices as quickly as practical. It's also worth noting it was a validation bug, not something that could leech arbitrary bits of data from a device.

I have no idea how some open-source projects get anything done; their development process is insane, but they manage to hobble along. Something as significant as OpenSSL needs to be held to a higher standard.

If the OpenSSL team can't get their act together, it would be worth it if a larger non-profit organization took the initiative and made a tested, rock-solid implementation of same with an optional compatibility layer.

I've had to integrate with OpenSSL before and the code in there is truly mad. Well written code should be boring and obvious, but OpenSSL is the polar opposite.

> It is really difficult to refactor such things at the public API layer once they are widely adopted by 3rd party projects.

But how did it ever get so widely adopted, if it's so hard to use?

And even if it is widely adopted, that still shouldn't stop anyone from forking it and fixing all the problems.

"But how did it ever get so widely adopted, if it's so hard to use?"

Usefulness and lack of alternatives?

So why didn't you?

It's not something I work on. But clearly a lot of other people do. And I have some trouble believing that all those people building stuff on top of OpenSSL don't care about security.

Look, if this was about something fairly inconsequential, like a library for a game or a blog or something, sure, use some badly maintained, badly documented, inconsistent library. But when you care enough about security to use SSL, surely you care enough to use something that's put together in such a way that you can be sure it does what you think it does? Bad documentation and inconsistent error handling in a security package sound like an unbelievably dangerous idea. And if it's really an unmaintainable mess, like everybody says, then how can bugs possibly be fixed? How easy would it be for the NSA to insert some crucial vulnerability into the code base?

Security, moreso than anything else in computer science, needs to be clear, transparent, understandable, well documented and well maintained. If it's not, mistakes will be made and security will be compromised.

Why do you not include yourself in the list of people that work on it? Who are these other people that should be working on it? There's only 2 people on the OpenSSL team apparently.

1. Other libraries do exist -- NSS, GnuTLS, etc.

2. Do you have time for a rewrite? What makes you think anyone else would?

3. The (generally correct) mindset is don't implement your own crypto. This is particularly true of something like TLS, which is complicated and has subtle requirements that are easy to screw up. Unfortunately, this means that even people who have time are discouraged from doing an OpenSSL rewrite.

> The (generally correct) mindset is don't implement your own crypto

That sounds like bad advice. The huge impact of Heartbleed is precisely because of the monoculture in SSL/TLS implementations. Shit happens, but this vulnerability is such a catastrophe because an estimated half a million of the top Internet websites have been vulnerable since 2012, including popular services that everybody uses, and the vulnerability, once discovered, is easy to exploit and far-reaching, allowing one to steal sessions, passwords, the domain's private keys and anything important.

It really can't get any worse than this. At the very least we found out about it, better late than never, and we can discuss the source code, but it does make one wonder ... if SSL/TLS is such an important piece of infrastructure and if we need a monoculture (because apparently it's bad advice to reimplement crypto), why aren't we using a library that's the reference implementation and that's peer-reviewed by a standards committee, instead of relying on a library whose commits don't necessarily raise any alarms?

The reason we tell people not to implement their own crypto is that no matter how skilled someone is, no matter what language they program in, it is nearly certain that self-rolled crypto code will have security flaws. Cryptographic security hinges on key, subtle assumptions that are very easy to screw up. World famous cryptographers do not roll their own crypto (except for research code).

When people do implement cryptography they rely on having large numbers of other people review it. That is not a model that scales to having a large number (more than a couple dozen) of competing implementations.

What I don't understand is why there aren't companies around that have an interest in improving this software and pay ~1-5 people ~1 million to rewrite it in a year.

Because companies are cheapskates nowadays.

Back in the Linux bubble days (1997-early 2000s), lots of companies employed people full time to work on all kinds of stuff, not just profitable company projects but also needed infrastructure. Even front-end projects like Nautilus and Evolution were created by fully paid teams.

I posted the 'public good' link elsewhere, and this goes to the very heart of the problem: why should I pay if someone else might, and then I get it for free?

There might be a bunch of companies willing to chip in a bit, but coordinating that is difficult. Something like kickstarter might be able to help, but that has its own problems: if you spend your time writing a commercial SSL thing, you can go out and sell it right now when it's ready. If some company has to wait around for the results.... it's not as nice for them.

Historically, GnuTLS had even more security trouble than OpenSSL (like the bug, undetected for years, where it trusted a self-signed root certificate even if it was not in the certificate store).

And NSS is pretty light on the server code IIRC...

In my past as a professional code auditor, I saw OpenSSL shipped with products quite a bit, and in 99% of the cases it was openssl the command-line tool that was used to access the functionality, because yes, the API is so bad and confusing that people would rather pipe to an executable than use library calls...

NSS has enough server code that it was possible to implement mod_nss, a replacement for mod_ssl in Apache:


If there are alternatives, then why do so many pieces of software (nginx, Apache, etc.) use OpenSSL as opposed to one of these alternatives?

Today there are alternatives. Once upon a time those alternatives were either not written, not stable, or not free. Now it is just a matter of convincing an enormous number of developers to carefully rewrite large parts of their systems to use a different library instead of new features (or ironically, security fixes).

Historically, the alternatives are incomplete, solve only part of the problem, or are even worse (GnuTLS?).

It was among the first at the time. OpenSSL was probably the dopest shit on the interwebz of crypto (or came close to it) at the time. Furthermore, these days it's too widely used so, everyone is just "fuck it." OpenSSL is the PHP of the cryptography world.

Also, I've seen a ton of code just invoke the OpenSSL tool instead of actually using the OpenSSL library.

I tried to use the library and was down in spirit, much like the author of the blog post, but I was too disappointed to even care, and just gave in to calling the tool from my code.
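For what it's worth, "calling the tool" usually looks something like this: piping data through the openssl binary instead of touching the library API. A sketch; the `openssl dgst` invocation is standard, but the output prefix varies slightly across OpenSSL versions, hence the defensive parse:

```python
import hashlib
import shutil
import subprocess

def sha256_via_cli(data: bytes) -> str:
    """Hash bytes by piping them through the openssl CLI rather than
    calling the library -- the workaround described in the thread."""
    out = subprocess.run(
        ["openssl", "dgst", "-sha256", "-hex"],
        input=data, capture_output=True, check=True,
    ).stdout.decode()
    # Output looks like "(stdin)= <hex>" or "SHA2-256(stdin)= <hex>"
    # depending on the OpenSSL version, so just take the last field.
    return out.strip().rsplit(" ", 1)[-1]

# Sanity-check against the stdlib when the binary is available.
if shutil.which("openssl"):
    assert sha256_via_cli(b"abc") == hashlib.sha256(b"abc").hexdigest()
```

Fork-per-operation is slow and fragile, but at least the CLI's behavior is observable.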

An idea: create a fork, call it OpenSSL2 and clean it up, but maintain both codebases, and end support for OpenSSL1 in 2-5 years?

Apple has deprecated OpenSSL as of Mac OS X 10.7 and they are not including it with iOS. Instead they provide their own APIs.

The semi-official reason though is that OpenSSL doesn’t offer API compatibility between versions.

In case anyone's curious, here's a link to the libsecurity_ssl source (albeit not totally up to date — latest version is pre-"goto fail;" fix):


I guess we see how well that worked out for them.

Compared to what?

You are aware that GnuTLS had a very similar "goto" style verification bypass bug, and that OpenSSL has had several flaws and broken stuff discovered, including the major recent "Heartbleed Bug".

Well... Certainly better right now.

OS X 10.9.2 still ships with OpenSSL. Luckily it's 0.9.8 and immune to the heartbeat bug.
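If you want to check what your own runtime is linked against, Python's ssl module exposes the linked OpenSSL version as a tuple. The affected range is 1.0.1 through 1.0.1f, fixed in 1.0.1g (some 1.0.2 betas were also affected; this sketch ignores those). The 4th tuple field encodes the patch letter, e.g. 1.0.1f is (1, 0, 1, 6, 15):

```python
import ssl

def heartbleed_affected(v=None):
    """True if an OpenSSL version tuple falls in the 1.0.1..1.0.1f
    Heartbleed range (1.0.2 betas not handled in this sketch)."""
    major, minor, fix, patch = (v or ssl.OPENSSL_VERSION_INFO)[:4]
    # patch 0 = "1.0.1" (no letter), patch 6 = "1.0.1f"
    return (major, minor, fix) == (1, 0, 1) and patch <= 6

print(ssl.OPENSSL_VERSION, "affected:", heartbleed_affected())
```

0.9.8 predates the heartbeat extension entirely, which is why it comes out unaffected.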

Because the core idea about the "bazaar" thing that ESR evangelized is BS.

Most people don't have the time or don't care -- even if they know how to do it. The big push for open source was when companies were sponsoring its development with a lot more paid developers than today (companies from IBM and Sun to Red Hat, Corel, Novell, Eazel, VA Linux -- remember them? The bubbliest IPO in history).

That can happen not only for highly specialized code like OpenSSL (which even fewer people can understand), but even for very common and widespread application level code, including projects that one would assume would attract many people.

Gtk, for example, the GUI toolkit used in Gnome, has just one developer really working on it (I know, because he complained). Compare that to the 20+ person GUI teams at companies like Apple and Microsoft, with dedicated technical writers for the documentation et al.

Have you ever noticed that the best code/tools are not always the ones used in projects?

It's a pattern you can watch in many IT projects.

Some "key people" like a project (for whatever reason) and a mass of developers follow them.

And then a thing becomes "standard" and more and more people use it.

It's because the people who know the least are usually the loudest, most "arrogant" and most opinionated people in a set.

It's just humanity; it doesn't just plague the IT community. :-)

Most people don't bother - they just start using a different one: gnutls, nss, etc.

Yes, reports of awful code quality in OpenSSL are not exaggerated.

Because they say "don't roll your own crypto" so nobody does it anymore. There is some truth in it but it's getting to sound like nobody should touch crypto.

well, i appreciate mr dipropyl tryptamine having linked this. anyone who has looked at openssl source knows the score.

to quote one of our devs (davec) on the topic of openssl:

"funding openssl won't stop it from happening again... openssl is largely unmaintainable is the problem"

the ifdefs alone in the source make it nigh unreadable, much less the inconsistent variable naming, inconsistent indentation/nesting and inconsistent overall style.

> you really do need to go spelunking into the source to figure out how things work

What are the chances that this is how a lot of the vulnerabilities were discovered (including heartbleed)? The more people forced to look through the source, the better? (That is, unless they keep that discovery to themselves.)

Edit: maybe they've improved it since the article was written; around 2 years ago I wrote an SSL MITM'ing HTTP(S) proxy, complete with CA certificate generation + signing, in a little over 24h of work.

So I only have a basic/intermediate understanding of SSL and X.509, but I am curious about your opinion: is the awful code design just a flaw in that particular implementation, or are the standards involved really that hard to implement correctly?

I think it's largely a problem with OpenSSL itself.

It's true that TLS and X.509 are quite complicated and I think it would be a challenge for anyone to implement them correctly, but that doesn't explain even half of OpenSSL's awfulness. Having consistent naming and error handling conventions should be possible regardless of how complicated the standards are. Plus, their crypto library is awful too, despite not having to deal with TLS or X.509.

While the core parts of the crypto algorithms are, to some extent, intrinsically messy, that doesn't have much to do with the specific complaints here: inconsistent error return codes, pseudorandom indentation, #ifdef and if(0) stuff which deeply obscures control flow, and so forth. All of that is just bad style.

OpenSSL is a mess on its own. Other libraries are better organized and better documented, so I doubt that it is the fault of TLS or x509.

I understand that a lot of OpenSSL features were implemented by or because a company sponsored a specific feature, but was only interested in getting that specific feature done rather than any wholesale refactoring of the code. So it's had a bunch of different projects glommed onto it with no overall structure.

Seems like GnuTLS suffers from the same inconsistent error reporting:


Have you ever tried axssl or polarssl?

I am just curious; I make no endorsement of either.

I'm getting a certificate warning on Chrome 33.0.1750.152. Is there a security corollary to Muphry's law?[1]

[1] - http://en.wikipedia.org/wiki/Muphry's_law

This might be intended. Supporting SSL is better than not supporting it (well, except in situations where an OpenSSL bug could leak your server's memory, but that's a bit of a stretch <http://filippo.io/Heartbleed/#www.peereboom.us> -- put this aside), because it's always better to encrypt traffic, even in a way that is still vulnerable to active attacks; and some people may reasonably opt out of the SSL CA business because they do not like the way it is structured, doubt the security it offers, or feel that they do not need it.

(I use a free StartSSL signed certificate, but only because it's free and not very hard to get. If there were no free provider with widespread support, I would be very happy to use a self-signed cert and think of it as a protest against the stupidity of browsers which present SSL+self-signed as less secure than plain HTTP).
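The trade-off being debated here (encrypt-but-don't-authenticate vs. CA-verified TLS) can be made concrete with Python's stdlib ssl module; this is an illustration of the two postures, not an endorsement of disabling verification:

```python
import ssl

# "Opportunistic" client context: encrypts the connection but accepts
# any certificate, including self-signed ones. Defeats passive snooping,
# offers no protection against an active MITM.
opportunistic = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
opportunistic.check_hostname = False        # must be disabled before CERT_NONE
opportunistic.verify_mode = ssl.CERT_NONE   # accept self-signed certs

# Browser-style context: requires a CA-signed cert matching the hostname.
strict = ssl.create_default_context()
```

The argument in this subthread is precisely over which failure mode is worse: the first context's silent MITMability, or plain HTTP's total transparency.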

If someone has passive access to snoop, the odds they can perpetrate an active attack are so close to 100% as to make the distinction immaterial. Encryption without authentication accomplishes nothing.

Worse, self-signed certificates train users to freely click through certificate warnings. People running servers with self-signed certificates are actively reducing what security we have available for the web.

If we ever do get anything better than the existing CA structure, it won't do us any good if users have been trained to ignore browser security warnings anyway.

I guess you don't agree with Poul-Henning Kamp in his FOSDEM keynote this year: https://www.youtube.com/watch?v=fwcl17Q0bpk (around 15:00)

His claim is that browsers treating websites with self-signed certificates as less secure than non-encrypted websites by displaying big fat warnings is just about the greatest gift we can give to organisations like the NSA. It's not true that there is no material distinction between an active attack and passive eavesdropping. For the NSA to do active MITM attacks on all self-signed https traffic (in a world where all unencrypted http is replaced with self-signed https) would take massively more resources than passively monitoring unencrypted traffic. Plus the odds of them getting caught would be very high (since some people are going to notice that the certificate that shows up on the client is not what the server sent).

PHK says many things I disagree with.

You can't make browsers simply treat self-signed certificates the same as plain HTTP. You still have to warn the user that their desire for a secure connection (conveyed by their request to retrieve an https URL) cannot be fulfilled, and do so in a way that assures all users will notice before they do something like, say, type in their password.

There is very little self-signed HTTPS traffic out there. The NSA assuredly has the resources to MITM all of it.

Yes, someone at some point would notice if it were all attacked, but the key thing is that, almost all of the time, nobody would, and you yourself have no assurance that your connection has not been MITM'd unless you have done some other out-of-band verification on that specific certificate. That is almost never done, especially by the general public.

You might be willing to gamble your own security, but you should not gamble everyone's security.

There is very little self-signed https traffic out there because browsers have chosen to treat it as somehow less secure than unencrypted http. (I do realise it's rather problematic to mess with the security expectations of https at this point, but you could imagine an http variant that allows encryption without authentication, without giving dire warnings to users.)

If all http traffic instead used self-signed https, then the NSA assuredly would not have the resources to MITM all of it. So because of the reflexive "self-signed certificates are worse than no encryption!" dogma, we have an Internet where most traffic is unencrypted and therefore trivial for governments and criminals to intercept.

Regarding certificate warnings: even if self-signed certificates were accepted, browsers could still put up a warning if the certificate changes between visits, similar to ssh's treatment of host keys (being relatively unsuspicious of unknown hosts, but putting up a big fat warning if the host key has changed).

This is why I disagree with many things PHK says. Like you, he pays a great deal of attention to how he wishes things had turned out, instead of figuring out how to make what we have better.

Get browsers to act in the way you want, then we'll talk. Until then, people using self-signed certificates are causing active harm now.

Counterpoint: if you self-sign, your browser effectively does cert pinning. With CA certs it accepts ANY valid certificate. Certificate pinning should be on by default for all certs. I use CertPatrol to accomplish this: https://addons.mozilla.org/en-us/firefox/addon/certificate-p...
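A trust-on-first-use pin store of the kind CertPatrol provides can be sketched in a few lines (hostnames and return values here are illustrative only): remember a certificate's fingerprint on first contact, and flag any later change instead of trusting whoever signed the replacement.

```python
import hashlib

pins = {}  # hostname -> sha256 fingerprint of the DER-encoded certificate

def check_pin(host, der_cert: bytes) -> str:
    """ssh-style trust-on-first-use: pin on first sight, warn on change."""
    fp = hashlib.sha256(der_cert).hexdigest()
    if host not in pins:
        pins[host] = fp      # first visit: trust and remember
        return "pinned"
    if pins[host] == fp:
        return "ok"          # same cert as last time
    return "CHANGED"         # the big fat warning belongs here
```

The open question from the thread is the "pinned" case: you have no assurance that very first connection wasn't already MITMed.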

> You still have to warn the user that their desire for a secure connection (conveyed by their request to retrieve an https URL) cannot be fulfilled

Normal users do not enter URLs in their browsers, much less https ones, and do not pay attention to the https prefix when clicking a link or pasting a URL, so the client retrieving an https URL usually does not indicate a desire from the user to have a secure connection.

I believe that users have been trained to pay attention to the presence of padlocks and colors in the address bar, not to the presence of the "https" prefix, so that self-signed HTTPS could look the same as plain HTTP from that perspective and I don't believe security harm would ensue.

> There is very little self-signed HTTPS traffic out there. The NSA assuredly has the resources to MITM all of it.

I find this doubtful. I am willing to believe that the NSA might be passively snooping on a large portion of HTTP traffic, but active attacks, even when you have the resources, are a lot harder to carry out stealthily (unlike passive attacks, which are invisible).

If the NSA were routinely MITMing self-signed HTTPS traffic, knowledgeable users would have noticed. They would be investigating under which conditions, and along which routes, does the MITM take place. As nobody seems to have witnessed this, I doubt that is happening now, or that it is likely to happen in an indiscriminate way.

> You might be willing to gamble your own security, but you should not gamble everyone's security.

Even then, how is MITMed HTTPS traffic worse than passively monitored, and potentially MITMable, HTTP traffic?

As for your point about training users to ignore security warnings: blame the browsers, not the websites. If using self-signed HTTPS is no worse than HTTP, those messages are stupid, and I'm not going to give up on the additional security of self-signed HTTPS just because browsers are doing something stupid with it.

Can you MITM public wifi? Real question, but I thought the answer was "no", which would make public wifi a compelling counter-example.

I have no idea where you'd get the idea that the answer is "no". The answer is emphatically yes.

Yes. You arp poison the default gateway and route traffic through your own host instead.

Or spoof DNS responses to point to a machine under your control.

Or spoof a disassociation frame, then impersonate the access point so the victim connects to your computer instead.

I use(d) free StartSSL for my hobby projects too. So I wanted to revoke my certificates today, post heartbleed and stuff.

They charge $25 to do the right thing and revoke your cert. So I will not use them again.

Edit: Btw. That's per fucking subdomain, as you don't get wildcard certs for free.

Going back to self signed I guess...

I put the CVE id in my "reason" and it went through with no charge.

I just tried it. They mailed me asking for credit card / paypal details. Are you sure they didn't just bill a payment method you previously added?

Gotta try that after I've caught some sleep. Got no reply from them on Twitter. I'd expect them to announce this officially, though.

They've been tweeting saying they aren't revoking them for free.

I'd trust self-signed certs ahead of most CAs, frankly.

That doesn't make any sense.... even if you don't trust a CA at all, it is impossible for a self-signed cert to be MORE trustworthy, since it provides absolutely ZERO authentication. It could be created by anyone at all, including on the fly by a MITM.

That's only true if you're not verifying the identity of the presented cert - how do you suppose client/server certificates work? For example, along the lines of what every VPN system in the world uses? VeriSign isn't involved in the transaction between my company laptops and the ASA in my datacenter.

Besides, having a third-party CA signature in this day and age doesn't tell me much other than that the person presenting the cert coughed up whatever protection money the particular face of the PKI protection racket demands, and that the CA ostensibly did some level of verification (could be more, could be less, I honestly do not know or have time to find out) of the "identity" of the person whose info is on the CSR.

Really, for your own uses, you're better off self signing with your own CA, noting down the identifying information of the cert at generation time, and then installing that cert as trusted either ahead of time, or hitting it from the third party and double checking the information matches up.

Maybe he is saying by removing trusted root certificates he will trade the possibility of receiving a fraudulent certificate the first time he uses a site for the possibility of not being warned if a certificate changes because the trusted roots are compromised. If he needs to worry about state actors there is an argument to be made for this trade-off, but really if he was worried about that he would be delivering certificates out of band and not visiting any unknown websites on the machine he needed that level of security on. (Maybe justifiable against mass surveillance where you're a general target rather than a specific one).

There are extensions to warn you about certificate changes even if the new is signed by a CA, so that's a terrible reason.

Ah, yes, certificate patrol[0]. However, your argument, I think, is not valid. Sure, it's technically possible for him to know if any cert changes, but in reality very few people are going to install the extension and those that do might not even notice the message because it notifies the user so frequently (fully desensitizing them I imagine).

I don't think that his choice not to install an extension invalidates his argument.

[0]: https://addons.mozilla.org/en-us/firefox/addon/certificate-p...

> very few people are going to install the extension

My argument is, pinning certs is a bad reason for removing the root certs from the browser, since you can pin them without breaking the CA chains.

I'm not sure how that works as a counter-argument; we're discussing a decision of a particular person, not some broad policy. How is the number of people who install the extension relevant?

> those that do might not even notice the message because it notifies the user so frequently (fully desensitizing them I imagine)

So does the browser, if you remove the root CA certs.

Yes, verifying certs out-of-band is most likely more secure than a CA. I was not counting that in the self-signed cert vs CA signed cert comparison, since that is not what most people mean when they talk about a self-signed cert.

You ignored my first point. You can trust the self-signed certificate to be used by the same entity (A or C) after the first connection; admittedly you may have been MITMed (by C) that first time (and at renewal times), and because of that you may be screwed. Alternatively, if you are trusting other entities (B) to verify the certificate of A while B is not trustworthy, then after that first communication (even if that first communication was legitimate) another entity (C) can pretend to be A, and as long as B verifies C you are in trouble. I am pointing out that these are different risks. I agree that in nearly all use cases it is better to use verified certificates than self-signed ones for general internet use; that doesn't mean there isn't ever a reason to do otherwise, though.

Trusting myself with out-of-band verification rather than trusting a third-party to do out-of-band verification is more trustworthy.

If you do not trust yourself to do it, why do you trust that the third-party would?

Yes, if you use a self signed cert and out-of-band verification, then yes it would probably be more trustworthy than a third-party verified cert (like a CA).

I don't think that is what the parent was saying, however, in saying he would trust a self-signed cert. I don't think he meant "Call up the website host and ask for them to verify their public key"

TOFU: don't most browsers tell you if you've been to the site before, and whether the cert has changed since your last visit? I seem to recall that at one time Firefox could be configured to not even complain when it saw a self-signed cert it recognized?

No. That is a capability provided by an extension called CertPatrol, at least in the Firefox ecosystem.

I have been using the following phrase a lot lately:


"When you look at it, it breaks!"

I joined this company in January and had to peel back all the layers of the onion to find out where its weaknesses were, and as soon as we start looking at how a component of the system is built and how it works - it breaks. We learn a ton real fast and figure out how to make it resilient.

Yesterday in my Ops Stand-up I told my staff: We have come a long way in rebuilding the infra; I am now paranoid that a security vulnerability will be the next thing that causes an outage...

Luckily, our ELBs on AWS were all updated without issue...

The cert is self-signed, that's why Chrome gives the warning.

Firefox gives the same warning.

Yes, well, I expect most browsers would warn on self-signed SSL certs.

Those with glass ceilings shouldn't throw stones

I'm not visiting something criticizing OpenSSL if they can't get basic security right.

And I've had it with the "encryption without checking certificates provides an illusion of security" people, so yes, I won't visit the site.

Calm down, it's a static HTML page without any forms in it. There is no reason why it needs to be encrypted anyway.

Adobe's Flash Player used OpenSSL on Linux for a long time, but eventually switched to NSS because the OpenSSL project would repeatedly break their library ABI without changing version numbers. The OpenSSL developers said that a stable ABI was a non-goal of theirs. (Disclosure: I was an engineer on Adobe's Flash Player team.)

OpenSSL's thread safety policy is undocumented and unintelligible. We gave up and just put a single mutex around every call into OpenSSL within the process.
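The "one big lock" workaround described above can be sketched as a decorator that serializes every call into the library (an illustrative stand-in, not real bindings):

```python
import threading
from functools import wraps

_openssl_lock = threading.Lock()   # single process-wide mutex

def serialized(fn):
    """Allow only one thread at a time into a library whose
    thread-safety rules you don't trust."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        with _openssl_lock:
            return fn(*args, **kwargs)
    return wrapper

@serialized
def do_crypto(x):
    # Stand-in for any call into the wrapped library.
    return x * 2
```

It kills parallelism, but it's the only defensible posture when the library's locking callbacks are effectively undocumented.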

That's actually something OpenSSL does okay. See http://www.openssl.org/docs/crypto/threads.html - it's one of the few things that are properly documented.

We had the same issue with KDE, pretty sure Qt did too.

> OpenSSL is equivalent to monkeys throwing feces at the wall. It is, bar none, the worst library I have ever worked with

Ah, he hasn't worked with the MS Crypto APIs.

I was implementing TLS/SSL in one of the services while working at MS. I couldn't figure out many things from MSDN and the samples - they would not cover error paths and some variant code paths, and there was just no way to figure them out. The guidance would be something like "in the third buffer there will be value x, you have to pass it to another function". And there was a need to do it correctly, for obvious reasons. So finally I got the IIS sources to see how it's done correctly, and discovered a couple thousand lines of code with comments like "xxx told us that buffer X will contain value bla if condition Z is met, and this is why we are doing it here". I had no choice but to cut-n-paste it into my service. I can tell you for sure, nobody outside MS can implement TLS/SSL correctly using the MS Crypto APIs. At least with OpenSSL you can read the sources for both the library and services and figure it out.

The absolute worst API in the history of computers is the Microsoft Text Services Framework (TSF) API.

It's incomprehensible, nearly undocumented, 10x more complicated than necessary, and even Microsoft doesn't understand how it works and hates it.

Don't forget x64 SEH. From http://www.nynaeve.net/?p=105 : "This layering violation is not the most egregious in the x64 exception handling scene, however."

It looks like this was the precursor to assl, 'A library to hide awful OpenSSL API in a sane interface' which hasn't gotten much GitHub love.



For more, and more recently:



If you don't want to add a security exception.

I am a bit reluctant to repeat myself, but I can't see any mention of the sheer volume of code. It's more than 300,000 lines of code in the .c files of the official tarball.

My experience with code bases grown beyond 10⁵ LoC is that they force you to monkey-patch around any bugs (or even features, for that matter). Surely there's a Potterson's Law or something that describes the situation.

In short: it's not the monkeys (quality), it's the volume of code.

(Of course now we're getting in a chicken-egg situation.)

I seriously wonder what else is hidden in the mess called OpenSSL. And especially how much of the bugs are known by the NSA.

Something like Heartbleed would definitely make a live-injection attack feasible!

If the NSA is going to spend our tax dollars analyzing that code, the least they could do is contribute formatting cleanup patches!

i don't think the NSA works like that :)

remember "national security" equates to "we will watch you all the time and steal all your dataz". it would obviously be great if "national security" meant what it was supposed to...

The NSA wrote SELinux.

Has that been audited independently? A quick Google search didn't cough up anything.

You mean, has anyone formally audited SELinux to make sure the NSA didn't load it with backdoors? No idea, but it was accepted into mainline Linux kernel ten years ago. So if you don't trust SELinux, you can't trust Linux.

The following thread is not directly related, but might be interesting to those who would like to see a constructive response to the quality of OpenSSL.


What are the best alternatives?

I've recently started looking, and PolarSSL[0] and Botan[1] are on my trial list. Mozilla NSS looks a little low level, and GnuTLS hasn't fared much better lately, despite having much better code hygiene than OpenSSL.

[0] https://polarssl.org/ [1] http://botan.randombit.net/

I have some experience with NSS, and for simple things it is not terrible despite being a bit low level. With about an afternoon of work I was able to write a very simple TLS server using NSS -- in Lisp, using SBCL's FFI.

How can a library of this size and importance not be run through a linter, with violating patches rejected? And why is a cleanup effort of the simplest stuff not undertaken?

And with it being C, doing a proper operator spacing+indentation of the whole source should be as easy as running it through a tool (yes, auto indent is always better than a broken indent).

What's the next discovery? The core is all php and there are zero tests?

Why is this so hard? I'm honestly curious to know. The theory is extremely well laid out, i.e. it's just number theory. We know to do stuff in a way that avoids side-channel attacks, i.e. try not to leak power usage when you're exponentiating, etc. Why do people keep using the awful mess that's openssl?

It's not just number theory; it's a web of interlocking state machines implementing the TLS protocol, which itself depends on a variety of different marshalling formats for the number theoretic parameters, built on the shifting sands of optimizations for different machines, all evolved over the course of ~15 years, starting from a package that was, literally, Eric Young's "teach myself C" project.

Ok, I can see the interlocking state machines introducing complexity that is unavoidable, but the rest of it just seems like self-inflicted nonsense. I'm more than willing to take a performance hit and have a correct implementation that is easy to reason about in some high level language over one that every other week is shown to be broken because of some dangling pointer in some patch committed 2 years ago. I don't think I'm the only one with this opinion. I'm sure many others would take the performance hit just so they could have the peace of mind that the underlying cryptographic library securing their application's data is actually formally verified. The best part about the high level implementation is that it will only get better over time because of improvements in compiler technology.

I'm not much of a conspiracy theorist, but this one seems too well placed to be an accident, and as long as people can't reason about the mess that is OpenSSL, stuff like this is going to continue to slip in. We have formally verified browser kernels, yet we can't have something as fundamental as SSL/TLS be formally verified. The formally verified browser kernel: http://goto.ucsd.edu/quark/.

Edit: My google foo is very weak these days. Trending story is https://github.com/vincenthz/hs-tls, an implementation of SSL/TLS in Haskell.

That's kind of missing the point though. It would be relatively easy to define an interface that is simple to use and allows implementing 90% of the use cases with rather low danger of shooting yourself in the foot. We know that this is possible because there are libraries out there that do it.

This whole thread is about OpenSSL somehow failing to do that, despite being called out on it for years.

I don't understand. What part of TLS are you calling "simple"?

The interface between an application that wants to communicate using TLS and perform related tasks and a library that implements TLS can and should be simple - at least for the 90% of common use cases.

The whole point of the original article is that the interface of OpenSSL is horrible and undocumented. Consider how the article's author was not originally interested in looking at the internals of OpenSSL. All they wanted was a decent and documented interface for dealing with certificates; if OpenSSL had exposed such an interface, they would never even have looked at the source code of its internals.

It's open source, everybody can decide to invest his own time to do the fixes.

However, the functionality openssl provides is much bigger than most of the commenters can imagine. It's not easy at all.

inertia: it's what people know, and too few people know its internals to care.

if every google search for "https apache" comes back with openssl examples, it will be what's deployed.

personally I use gnutls for my irc servers, but I'm the exception there. Those in the security community openly mock openssl due to the fact that it was a boy's method of learning how SSL worked, which has been adopted but never properly structured...

.. or that the 'openssl' command itself is actually supposed to be for development only.

What is the alternative?

It's GPL, and, according to Wikipedia, doesn't support Secure Renegotiation[0].

[0] https://en.wikipedia.org/wiki/Comparison_of_TLS_Implementati...

The version at pastebin.


Last year I took part in a similar project in which I had to issue both server and client certificates, signed by another certificate, for a company's sales site. I'm not by any means an expert in cryptography (not even close), and after googling around a bit I found "The Legion of The Bouncy Castle"[1] crypto library, which was relatively easy to work with, and enabled me to solve my problem quickly enough. I just wonder what more experienced programmers have to say about this crypto library.

[1] http://en.wikipedia.org/wiki/Bouncy_Castle_(cryptography)

Having worked with OpenSSL on only one occasion, I figured something was up when I saw they have a function named SHA1_Update and a function named SHA_Update that have the exact same signature and produce different outputs. A library designed by incompetents.

SHA_Update is for the SHA-0 hash and SHA1_Update is for the SHA-1 hash, so it's hardly completely illogical. The main issue here is that people who don't know the difference between SHA-0 and SHA-1 likely should not be in charge of computing hashes by themselves. Crypto requires some expertise or it goes awry very fast, but then the educational system is producing plenty of people with good crypto knowledge; it's hardly the niche it was only 20 years ago.

Also, SHA-0 support is left out if OPENSSL_NO_SHA0 is defined, which must at least be the case in the FIPS version of the library, the one you want to use for regulatory reasons in many cases...

The problem was that the names are too close to each other and were one typo away from mysterious bugs. The non-existence of documentation didn't help either.

If you go that way, there are also MD2_Update, MD4_Update and MD5_Update, which are also one typo away from a bug you can't miss if you test anything.

Of the many, many faults of the OpenSSL API, you just didn't choose the right one.

Having been bitten by SHA1_Update, with no expectation that SHA_Update would work as well, I'm sort of bitter about that.

OpenSSL is a monoculture. Refactoring it or trying to prettify it would be a waste of time. The right response is to create a completely separate open source project that implements SSL/TLS. The biggest problem that comes to mind is that there are only so many people expert enough to do it right and lots of those people are already on OpenSSL, and produced the mess there. Then there's the adoption curve -- while there are probably quite a few people who'd jump on a better library, there is lots of inertia in the big and important projects.

Date on the article might belong in the title: 2009-08-24

What alternatives are out there?

It is time for a new open source SSL implementation, maybe minimizing what is supported to reduce attack surface?

Depending on your needs, you can check:


Readability matters (Zen of python).

This kind of thing makes me appreciate saner languages like Python and Pascal.

But this makes C look worse than it is...

a guy setting out to write a CA (with LDAP backend! cue nostalgia for my old days at one well-known BigCo) for what seems to be the first time, and as a first-timer he obviously goes for OpenSSL (has he at least heard about NSS?). Does it all sound like part of the same problem he talks about?

I highly recommend fixing the style before attempting to read: https://www.dropbox.com/s/euuuyrjydo8z5jy/Screenshot%202014-...


To be frank, OpenSSL website itself is vulnerable. (At the time of this writing.)


Is there anyone with a more current read on this issue (heartbleed aside)? The linked article is 5 years old, lots can change in 5 years.

Having maintained a code base that uses OpenSSL since 2004: no, not a lot has changed. OpenSSL has just accumulated more of the same.

Article raises a good point.

Seeing this repeated 50+ times made me cringe though

    if ((req = X509_REQ_new()) == NULL)
        ERROR_OUT(ERR_SSL, done);

also, writing new code and not using Rust, or D, or even C++. WTFs like the above are par for the course when you're writing in a 40-year-old language without any of the features of modern low-level languages.

I don't think writing bad code is a reflection on the language. You can write rubbish code in any language.

Can you elaborate what those features are?

Sounds a bit like Sun's Metro WS-Security implementation

Chrome thinks this site is unsafe, which amuses me at least.

this title is so offensive towards monkeys

either monkeys or NSA programmers

I'm going to assume "monkeys" is a poor translation from another language. Maybe it's supposed to say "OpenSSL is written by fuzzy hackers". It's impossible to tell without accepting a self-signed certificate I have no means of validating.

The irony is delicious, much like a banana.

Within the page you didn't bother to read, presumably due to the mortal terror well known to be inflicted by presentation of a self-signed SSL certificate for a resource for which there's no reason to give a damn about identity validation in the first place, there's an expanded version of the metaphor which involves a reference to flung feces. So, no, I'm pretty sure that when the author says "monkeys", it's monkeys he means.

It's not mortal terror, it's declining to enable someone who uses racially-charged language to attack their fellow human beings while configuring their server in a way that anyone competent to evaluate OpenSSL's merits in the first place would know was no better than not having SSL at all, and in fact much worse, since it trains people to ignore certificate warnings.

You're the only one bringing race into this. All that is meant is that the author views OpenSSL as approximately what you would get if you flung poo around for a while.

The author brought race into it by comparing human beings to monkeys, a common racist trope.

If your only defense for someone calling people sub-humans is "but it might not be racist!", you need to think hard about what you're defending.

You are wrong, racists are wrong, therefore you are a racist.


(The classic racist comparison is "black people are monkeys". This comparison is "the code is awful, as if it were written by non-sentient animals bashing on keyboards". This comparison has nothing to do with race, it's not suggesting any particular race wrote OpenSSL or that any particular race cannot write good code or good cryptography systems. It is not making any distinction by race at all.

If they described it as "OpenSSL was written by crashing ink tankers into alphabeti-spaghetti lorries" would that be racially comparing people with vehicles?)

If your only justification for calling someone a racist is "but sometimes racists use that word!", you need to spend less time on Tumblr.

I really have no idea what this crap about Tumblr is. I don't use it, and I don't really know anyone who does. When I click a link to it, it's almost always a cute animal picture or a piece of art that has no relation to any controversial subject. If there's something more to Tumblr, I wouldn't know, and there is no reason to assume I would.

People raised not to call others monkeys are not somehow the result of the latest social media fad.

It's because you sound like the "check your privilege" crew that tends to congregate on Tumblr. (http://knowyourmeme.com/memes/check-your-privilege)

By the way, the key line from the article you didn't read (yet presume to understand its author's intent): "I have come to the conclusion that OpenSSL is equivalent to monkeys throwing feces at the wall."

I did read it, after it was helpfully posted elsewhere. I wish I hadn't. Not surprisingly, it wasn't worth my time.

"Not surprisingly, it wasn't worth my time," says the guy conducting multiple conversations in the comments section of an article on the internet that he didn't agree with.

Says the person arguing with said guy. Did you have a point other than that you think you're better-qualified to judge how I use my time than I am?

Please stop.

If there's something more to Tumblr, I wouldn't know, and there is no reason to assume I would.

"This guy says something I associate with racism so he's racist. I say something you associate with Tumblr and there's NO REASON TO ASSUME I KNOW ANYTHING ABOUT TUMBLR".

nknighthb, meet double standards. Double standards, nknighthb.

The trope is universal and deeply embedded in American society. It is not surprising that someone apparently from the UK would not realize this. It is surprising that a non-American is so eager to force their own cultural context onto an American issue. Isn't that a very American thing to do?

> The trope is universal and deeply embedded in American society.

No, it's not. Quit trying to pass yourself off as the ambassador of American culture. The majority of my 50 years on this earth have been spent living in the Midwest or U. S. South, neither of which are exactly hotbeds of racial tolerance and likely the origin of any racial slur you'll hear. And yet when I saw the title it never occurred to me to think of race at all. That is, until your comment showed up and I said to myself, "what the hell is he talking about?"

Sure, there are some racial slurs that involve the word "monkey". But they're not even close to what you're talking about, and if one were to use them one would come off sounding about 76 years old.

April 8th, the day we learned that nknighthb doesn't know he's a racist.

Woah - have you never heard the phrase "pay peanuts, get monkeys"? Have you not heard of webmonkey.com which uses the very slightly derogatory term in a self-deprecating way? It's a little sinister to bring race into this at all - I don't think anyone else went remotely near that connotation until you brought it up.

> have you never heard the phrase "pay peanuts, get monkeys"?

No. I'm familiar with the idiom of "peanuts" meaning little money, monkeys have never been mentioned. In the modern American cultural context, monkeys are associated with bananas. Peanuts are associated with elephants.

> Have you not heard of webmonkey.com which uses the very slightly derogatory term in a self-deprecating way?

I don't care what people call themselves, nor do I care what members of a community call one another in jest. I care what people call others as an insult.

> In the modern American cultural context, monkeys are associated with bananas.

Actually, code monkeys are associated with Fritos.

And Tab, and Mountain Dew.

I'm tired of your elephantsplaining.

Why do you think the sky is going to fall if you temporarily accept a certificate?

Apparently we don't have enough eyeballs.

put the text up in a gist, to get around the dodgy cert.


dodgy cert for read only site + magic verisign fairy dust = unicorns and rainbows?

For future reference, you can use archive.org if you are afraid of dodgy certs: http://web.archive.org/web/20140125144231/http://www.peerebo...

Heck, it also gets rid of the white on black, thanks.

Be security conscious folks. Don't click through to a site with a self-signed cert.

Huh? All it means is that the site you are trying to visit may be an imposter. Since you don't even know who the site is in the first place, does it matter to you if someone is impersonating that site? Of course not.

Feel free to accept the self-signed cert; there are no security worries there. If you go to your bank's website - or anything at all with private data - then you need to make sure it isn't self signed.

There is nothing wrong with self-signed certificates. If you need to trust a website that uses self-signed certificates, just make sure to verify out-of-band.

There are two problems in Firefox:

* The excepted site displays the exact same padlock icon as hacker news for example

* "Add exception permanently" is selected by default.

What does that have to do with self-signed certificates in general rather than an issue with your particular browser?

Yes, I trust my own self-signed certs and I might trust the self-signed certs of certain sites. However, telling me that "There is nothing wrong with self-signed certificates." is irresponsible IMO. Who runs "peereboom.us"? #### if I know. I'm going to tell FF "It's okay - I'm good" this time b/c I'm not doing anything that matters but, as a rule, self-signed certs for unknown sites are not okay.

Would you trust visiting a site that's just HTTP?

Self signed certs for unknown sites are fine. There is zero problem visiting them.

@ScottWhigham I'm not being irresponsible. I'm with ghshephard on this one.


Self-signed certs are OK in every scenario that HTTP would be OK, and then some.

But you would probably click through it if it was plain http.

Here, this should be way less scary for you:

