The worst thing is that error reporting is not consistent - sometimes -1 means error, other times 0 means error, other times 0 means success, and sometimes it's a combination. This is really, really bad for a crypto library since properly detecting errors is usually critical to security.
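A rough sketch of the hazard, in Python with hypothetical stand-in functions (not OpenSSL's real API): each stand-in mimics one of the conventions described above, and a thin wrapper shows how a sane API would make the convention explicit instead of leaving every call site to remember it.

```python
# Hypothetical stand-ins mirroring the mixed conventions complained about:
# one function signals failure with -1, one with 0, one returns 0 on success.
def read_bytes(n):        # -1 means error
    return -1 if n < 0 else n

def verify_cert(ok):      # 0 means error, 1 means success
    return 1 if ok else 0

def shutdown():           # 0 means success, non-zero would be an error code
    return 0

class CryptoError(Exception):
    pass

def check(value, *, error_when):
    """Normalize a C-style status code into an exception."""
    if error_when(value):
        raise CryptoError(f"call failed with status {value!r}")
    return value

# Without the wrapper, each call site must remember a different rule;
# getting one wrong silently treats an error as success.
n = check(read_bytes(16), error_when=lambda v: v == -1)
ok = check(verify_cert(True), error_when=lambda v: v == 0)
status = check(shutdown(), error_when=lambda v: v != 0)
print(n, ok, status)  # 16 1 0
```

The point is not the wrapper itself but that three adjacent calls need three different error tests, which is exactly the trap a crypto library should not set.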
Take the aforementioned error-return inconsistencies -- how do you possibly refactor that for consistency when so many other projects consume OpenSSL as a library? You're trading one big problem (new clients are likely to get error-return handling wrong) for another: if you modify all the returns for consistency, existing clients now expect the wrong return codes; or, if you're smart and rename the functions to avoid that, third-party consumers at the very least need to do a lot of rewriting and suffer a painful crossover phase supporting both the old and new APIs.
You split your project into two components:
1. a new, clean, minimal core with a nice, shiny, new API; and
2. an "OpenSSL emulation layer" that loads your core library, and wraps it in OpenSSL-compatible cruft.
It'd be very similar to, say, replacing Direct3D with OpenGL, and then writing a Direct3D emulation layer that calls OpenGL.
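A rough Python sketch of that split, with entirely hypothetical names: a clean core whose errors are exceptions, plus a legacy shim that translates back into the old-style integer return codes existing callers expect.

```python
# --- new, minimal core: clean API, errors are exceptions ---
class TLSError(Exception):
    pass

def core_handshake(hostname):
    """Hypothetical clean-core handshake; raises on failure."""
    if not hostname:
        raise TLSError("empty hostname")
    return {"host": hostname, "cipher": "TLS_AES_128_GCM_SHA256"}

# --- "emulation layer": wraps the core in the old error-code style ---
LEGACY_OK = 1      # the quirky conventions old clients already expect
LEGACY_FAIL = -1

def legacy_handshake(hostname):
    """Old-style entry point: returns 1 on success, -1 on error."""
    try:
        core_handshake(hostname)
        return LEGACY_OK
    except TLSError:
        return LEGACY_FAIL

print(legacy_handshake("example.com"))  # 1: old clients keep working
print(legacy_handshake(""))             # -1
```

New clients target the core directly; the shim exists only so the ecosystem can migrate one project at a time instead of all at once.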
Unless there's an actual federation behind the project that can fund it and keep it stocked with talent and oversight, something Mozilla or Apache could do, it will continue to be an unmitigated disaster.
The only reason it wouldn't have been caught in unit testing is because Apple isn't unit testing their own SSL code. If Apple doesn't commit to doing security well why do people expect OSS to?
People did rage over Apple's SSL issue, but Apple handled it swiftly, pushing out updates to all affected devices as quickly as practical. It's also worth noting it was a validation bug, not something that could leech arbitrary bits of data from a device.
I have no idea how some open-source projects get anything done; their development process is insane, but they manage to hobble along. Something as significant as OpenSSL needs to be held to a higher standard.
If the OpenSSL team can't get their act together, it would be worth it if a larger non-profit organization took the initiative and made a tested, rock-solid implementation with an optional compatibility layer.
I've had to integrate with OpenSSL before and the code in there is truly mad. Well written code should be boring and obvious, but OpenSSL is the polar opposite.
But how did it ever get so widely adopted, if it's so hard to use?
And even if it is widely adopted, that still shouldn't stop anyone from forking it and fixing all the problems.
Usefulness and lack of alternatives?
Look, if this was about something fairly inconsequential, like a library for a game or a blog or something, sure, use some badly maintained, badly documented, inconsistent library. But when you care enough about security to use SSL, surely you care enough to use something that's put together in such a way that you can be sure it does what you think it does? Bad documentation and inconsistent error handling in a security package sound like an unbelievably dangerous idea. And if it's really an unmaintainable mess, like everybody says, then how can bugs possibly be fixed? How easy would it be for the NSA to insert some crucial vulnerability into the code base?
Security, moreso than anything else in computer science, needs to be clear, transparent, understandable, well documented and well maintained. If it's not, mistakes will be made and security will be compromised.
2. Do you have time for a rewrite? What makes you think anyone else would?
3. The (generally correct) mindset is don't implement your own crypto. This is particularly true of something like TLS, which is complicated and has subtle requirements that are easy to screw up. Unfortunately, this means that even people who have time are discouraged from doing an OpenSSL rewrite.
That sounds like bad advice. The huge impact of Heartbleed is precisely because of the monoculture in SSL/TLS implementations. Shit happens, but this vulnerability is such a catastrophe because an estimated half a million of the top Internet websites have been vulnerable since 2012, including popular services that everybody uses, and the vulnerability, once discovered, is easy to exploit and far-reaching, allowing one to steal sessions, passwords, the domain's private keys, and anything else important.
It really can't get any worse than this. At the very least we found out about it, better late than never, and we can discuss the source code, but it does make one wonder... if SSL/TLS is such an important piece of infrastructure and if we need a monoculture (because apparently it's bad advice to reimplement crypto), why aren't we using a library that's the reference implementation and that's peer reviewed by a standards committee, instead of relying on a library whose commits don't necessarily trigger any review?
When people do implement cryptography they rely on having large numbers of other people review it. That is not a model that scales to having a large number (more than a couple dozen) of competing implementations.
Back in the Linux bubble days (1997-early 2000s), lots of companies employed full-time people to work on all kinds of stuff, not just profitable company projects, but also needed infrastructure. Even front-end projects, like Nautilus and Evolution, were created by fully paid teams.
There might be a bunch of companies willing to chip in a bit, but coordinating that is difficult. Something like kickstarter might be able to help, but that has its own problems: if you spend your time writing a commercial SSL thing, you can go out and sell it right now when it's ready. If some company has to wait around for the results.... it's not as nice for them.
And NSS is pretty light on the server code IIRC...
In my past as a professional code auditor, I saw OpenSSL shipped with products quite a bit, and in 99% of the cases it was openssl the command-line tool that was used to access the functionality, because yes, the API is so bad and confusing that people would rather pipe to an executable than use library calls...
Also, I've seen a ton of code just invoke the OpenSSL tool instead of actually using the OpenSSL library.
I tried to use the library and ended up as dispirited as the author of the blog post, but I was too disappointed to even care, and just gave in to calling the tool from my code.
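The pattern described above, shelling out to the `openssl` binary instead of linking the library, looks roughly like this Python sketch. It assumes an `openssl` executable on the PATH and returns None when the tool is absent; `openssl rand -hex N` is a real subcommand that prints N random bytes as hex.

```python
import shutil
import subprocess

def openssl_rand_hex(n_bytes):
    """Get random hex by piping to the openssl CLI instead of using library calls."""
    if shutil.which("openssl") is None:
        return None  # tool not installed; a real program would need a fallback
    result = subprocess.run(
        ["openssl", "rand", "-hex", str(n_bytes)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

token = openssl_rand_hex(16)
print(token)  # 32 hex characters, or None without the tool
```

It works, but every such call forks a process, parses text output, and inherits the CLI's own quirks, which is why reaching for it instead of the library is an indictment of the library.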
An idea: create a fork, call it OpenSSL2 and clean it up, but maintain both codebases, and end support for OpenSSL1 in 2-5 years?
The semi-official reason though is that OpenSSL doesn’t offer API compatibility between versions.
You are aware that GnuTLS had a very similar "goto"-style verification bypass bug, and that OpenSSL has had several other flaws discovered, including the major recent "Heartbleed" bug?
Most people don't have the time or don't care -- even if they know how to do it. The big push for open source was when companies were sponsoring its development with a lot more paid developers than today (companies from IBM and SUN to RedHat, Corel, Novell, Eazel, VA Linux -- remember them? The bubbliest IPO in history).
That can happen not only for highly specialized code like OpenSSL (which even fewer people can understand), but even for very common and widespread application level code, including projects that one would assume would attract many people.
Gtk, for example, the GUI toolkit used in Gnome, has just one developer really working on it (I know, because he complained). Compare that to the 20+ person GUI teams at companies like Apple and Microsoft, with dedicated technical writers for the documentation et al.
It's a pattern you can see in many IT projects.
Some "key people" like a project (for whatever reason) and a mass of developers follows them.
And then the thing becomes "standard" and more and more people use it.
It's just humanity; it doesn't just plague the IT community. :-)
Yes, reports of awful code quality in OpenSSL are not exaggerated.
to quote one of our devs (davec) on the topic of openssl:
"funding openssl won't stop it from happening again... openssl is largely unmaintainable is the problem"
the ifdefs alone in the source make it nigh unreadable, much less the inconsistent variable naming, inconsistent indentation/nesting and inconsistent overall style.
What are the chances that this is how a lot of the vulnerabilities were discovered (including heartbleed)? The more people forced to look through the source, the better? (That is, unless they keep that discovery to themselves.)
Edit: maybe they've improved it since the article was written; around 2 years ago I wrote an SSL MITM'ing HTTP(S) proxy, complete with CA certificate generation + signing, in a little over 24h of work.
It's true that TLS and X.509 are quite complicated and I think it would be a challenge for anyone to implement them correctly, but that doesn't explain even half of OpenSSL's awfulness. Having consistent naming and error handling conventions should be possible regardless of how complicated the standards are. Plus, their crypto library is awful too, despite not having to deal with TLS or X.509.
I am just curious; I make no endorsement of either.
 - http://en.wikipedia.org/wiki/Muphry's_law
(I use a free StartSSL signed certificate, but only because it's free and not very hard to get. If there were no free provider with widespread support, I would be very happy to use a self-signed cert and think of it as a protest against the stupidity of browsers which present SSL+self-signed as less secure than plain HTTP).
Worse, self-signed certificates train users to freely click through certificate warnings. People running servers with self-signed certificates are actively reducing what security we have available for the web.
If we ever do get anything better than the existing CA structure, it won't do us any good if users have been trained to ignore browser security warnings anyway.
His claim is that browsers treating websites with self-signed certificates as less secure than non-encrypted websites by displaying big fat warnings is just about the greatest gift we can give to organisations like the NSA. It's not true that there is no material distinction between an active attack and passive eavesdropping. For the NSA to do active MITM attacks on all self-signed https traffic (in a world where all unencrypted http is replaced with self-signed https) would take massively more resources than passively monitoring unencrypted traffic. Plus the odds of them getting caught would be very high (since some people are going to notice that the certificate that shows up on the client is not what the server sent).
You can't make browsers simply treat self-signed certificates the same as plain HTTP. You still have to warn the user that their desire for a secure connection (conveyed by their request to retrieve an https URL) cannot be fulfilled, and do so in a way that assures all users will notice before they do something like, say, type in their password.
There is very little self-signed HTTPS traffic out there. The NSA assuredly has the resources to MITM all of it.
Yes, someone at some point would notice if it were all attacked, but the key thing is that, almost all of the time, nobody would, and you yourself have no assurance that your connection has not been MITM'd unless you have done some other out-of-band verification on that specific certificate. That is almost never done, especially by the general public.
You might be willing to gamble your own security, but you should not gamble everyone's security.
If all http traffic instead used self-signed https, then the NSA assuredly would not have the resources to MITM all of it. So because of the reflexive "self-signed certificates are worse than no encryption!" dogma, we have an Internet where most traffic is unencrypted and therefore trivial for governments and criminals to intercept.
Regarding certificate warnings: even if self-signed certificates were accepted, browsers could still put up a warning if the certificate changes between visits, similar to ssh's treatment of host keys (being relatively unsuspicious of unknown hosts, but putting up a big fat warning if the host key has changed).
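A minimal Python sketch of that trust-on-first-use idea (hypothetical in-memory storage, and placeholder bytes standing in for real DER certificates): remember the certificate's SHA-256 fingerprint on first visit, stay quiet while it matches, and warn loudly only when it changes.

```python
import hashlib

known_hosts = {}  # host -> pinned SHA-256 fingerprint, like ssh's known_hosts

def check_certificate(host, cert_der):
    """Return 'new', 'ok', or 'CHANGED' for a host's presented certificate."""
    fp = hashlib.sha256(cert_der).hexdigest()
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fp       # trust on first use, like ssh
        return "new"
    if pinned == fp:
        return "ok"
    return "CHANGED"                 # the big fat warning belongs here

# Placeholder certificate bytes, not real DER:
first = check_certificate("example.org", b"cert-v1")
again = check_certificate("example.org", b"cert-v1")
mitm  = check_certificate("example.org", b"cert-v2")
print(first, again, mitm)  # new ok CHANGED
```

This is the ssh model in miniature: it doesn't prove who the server is on first contact, but it makes a later mid-stream MITM visible, which is exactly the property the unencrypted web lacks.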
Get browsers to act in the way you want, then we'll talk. Until then, people using self-signed certificates are causing active harm now.
Normal users do not enter URLs in their browsers, much less https ones, and do not pay attention to the https prefix when clicking a link or pasting a URL, so the client retrieving an https URL usually does not indicate a desire from the user to have a secure connection.
I believe that users have been trained to pay attention to the presence of padlocks and colors in the address bar, not to the presence of the "https" prefix, so that self-signed HTTPS could look the same as plain HTTP from that perspective and I don't believe security harm would ensue.
> There is very little self-signed HTTPS traffic out there. The NSA assuredly has the resources to MITM all of it.
I find this doubtful. I am willing to believe that the NSA might be passively snooping on a large portion of HTTP traffic, but performing active attacks, even when you have the resources, is a lot harder to be doing smartly (unlike passive attacks, which are invisible).
If the NSA were routinely MITMing self-signed HTTPS traffic, knowledgeable users would have noticed. They would be investigating under which conditions, and along which routes, the MITM takes place. As nobody seems to have witnessed this, I doubt it is happening now, or that it is likely to happen in an indiscriminate way.
> You might be willing to gamble your own security, but you should not gamble everyone's security.
Even then, how is MITMed HTTPS traffic worse than passively monitored, and potentially MITMable, HTTP traffic?
As for your point about training users to ignore security warnings: blame the browsers, not the websites. If using self-signed HTTPS is no worse than HTTP, those messages are stupid, and I'm not going to give up on the additional security of self-signed HTTPS just because browsers are doing something stupid with it.
They charge $25 to do the right thing and revoke your cert. So I will not use them again.
Edit: Btw. That's per fucking subdomain, as you don't get wildcard certs for free.
Going back to self signed I guess...
Besides, having a third-party CA signature in this day and age doesn't tell me much other than that the person presenting the cert coughed up whatever protection money the particular face of the PKI protection racket demands, and that ostensibly the CA did some level of verification (could be more, could be less; I honestly don't know or have time to find out) as to the "identity" of the person whose info is on the CSR.
Really, for your own uses, you're better off self signing with your own CA, noting down the identifying information of the cert at generation time, and then installing that cert as trusted either ahead of time, or hitting it from the third party and double checking the information matches up.
I don't think that his choice not to install an extension invalidates his argument.
My argument is, pinning certs is a bad reason for removing the root certs from the browser, since you can pin them without breaking the CA chains.
I'm not sure how that works as a counter-argument; we're discussing a decision of a particular person, not some broad policy. How is the number of people who install the extension relevant?
Those that do might not even notice the message, because it notifies the user so frequently (fully desensitizing them, I imagine).
So does the browser, if you remove the root CA certs.
If you do not trust yourself to do it, why do you trust that the third-party would?
I don't think that is what the parent was saying, however. In saying he would trust a self-signed cert, I don't think he meant "call up the website host and ask them to verify their public key".
"When you look at it, it breaks!"
I joined this company in January and had to peel back all the layers of the onion to find out where its weaknesses were, and as soon as we start looking at how a component of the system is built and how it works - it breaks. We learn a ton real fast and figure out how to make it resilient.
Yesterday in my Ops Stand-up I told my staff: We have come a long way in rebuilding the infra; I am now paranoid that a security vulnerability will be the next thing that causes an outage...
Luckily, our ELBs on AWS were all updated without issue...
I'm not visiting something criticizing OpenSSL if they can't get basic security right.
And I've had it with the "encryption without checking certificates provides an illusion of security" people, so yes, I won't visit the site.
Ah, he hasn't worked with the MS Crypto APIs.
I was implementing TLS/SSL in one of the services while working at MS. I couldn't figure out many things from MSDN and the samples - they would not cover error paths and some variant code paths, and there was just no way to figure it out. And recovery would be something like "in the third buffer there will be value x, which you have to pass to the other function". And there was a need to do it correctly, for obvious reasons. So finally I got the IIS sources to see how it's done correctly, and discovered a couple thousand lines of code with comments like this: "xxx told us that buffer X will contain value bla if condition Z is met, and this is why we are doing it here". I had no choice but to cut-n-paste it into my service. I can tell you for sure, nobody outside MS can implement TLS/SSL correctly using the MS Crypto APIs. At least with OpenSSL you can read the sources for both the library and the services and figure it out.
It's incomprehensible, nearly undocumented, 10x more complicated than necessary, and even Microsoft doesn't understand how it works and hates it.
If you don't want to add a security exception.
My experience with code grown beyond >10⁵ LoC is that it forces you to monkey-patch around any bugs (or even features, for that matter). Surely there's a Potterson's Law or something that describes the situation. In short: it's not the monkeys (quality), it's the volume of code. (Of course now we're getting into a chicken-and-egg situation.)
Something like Heartbleed would definitely make a live-injection attack feasible!
remember "national security" equates to "we will watch you all the time and steal all your dataz". it would obviously be great if "national security" meant what it was supposed to...
I've recently started looking, and PolarSSL and Botan are on my trial list. Mozilla NSS looks a little low level, and GnuTLS hasn't fared much better lately, despite having much better code hygiene than OpenSSL.
And with it being C, fixing operator spacing and indentation across the whole source should be as easy as running it through a tool (yes, auto-indent is always better than broken indentation).
What's the next discovery? The core is all php and there are zero tests?
I'm not much of a conspiracy theorist, but this one seems too well placed to be an accident, and as long as people can't reason about the mess that is OpenSSL, stuff like this is going to continue to slip in. We have formally verified browser kernels, yet we can't have something as fundamental as SSL/TLS be formally verified. The formally verified browser kernel: http://goto.ucsd.edu/quark/.
Edit: My Google-fu is very weak these days. The trending story is https://github.com/vincenthz/hs-tls, an implementation of SSL/TLS in Haskell.
This whole thread is about OpenSSL somehow failing to do that, despite being called out on it for years.
The whole point of the original article is that the interface of OpenSSL is horrible and undocumented. Consider how the article's author was not originally interested in looking at the internals of OpenSSL. All they wanted was a decent and documented interface for dealing with certificates; if OpenSSL had exposed such an interface, they would never even have looked at the source code of its internals.
However, the functionality openssl provides is much bigger than most of the commenters can imagine. It's not easy at all.
if every google search for "https apache" comes back with openssl examples, it will be what's deployed.
personally I use gnutls for my irc servers, but I'm the exception there. Those in the security community openly mock openssl because it was a boy's method of learning how SSL worked, which has been adopted but never properly structured...
.. or that the 'openssl' command itself is actually supposed to be for development only.
Also, SHA-0 support is left out if OPENSSL_NO_SHA0 is defined, which must at least be the case in the FIPS version of the library, the one you want to use for regulatory reasons in many cases...
Of the many, many faults of the OpenSSL API, you just didn't choose the right one.
It is time for a new open source SSL implementation, maybe minimizing what is supported to reduce attack surface?
This kind of thing makes me appreciate saner languages like Python and Pascal.
But this makes C look as bad as it is...
Seeing this repeated 50+ times made me cringe, though:
if ((req = X509_REQ_new()) == NULL)
also, writing new code and not using Rust, or D, or even C++. WTFs like the above are par for the course when you're writing in a 40 year old language without any features of modern low-level languages.
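The repetition being complained about comes from C's lack of exceptions: every allocating call needs its own `if ((x = f()) == NULL)` check. A hedged Python sketch of the contrast, using hypothetical stand-in functions (not real OpenSSL bindings):

```python
class AllocError(Exception):
    pass

def new_request(fail=False):
    """Stand-in for X509_REQ_new(): raises instead of returning NULL."""
    if fail:
        raise AllocError("out of memory")
    return {"subject": None}

def add_name(req, name):
    req["subject"] = name
    return req

def sign(req):
    req["signed"] = True
    return req

# In C, each of these steps would carry its own NULL check and cleanup
# path; with exceptions, the happy path reads straight through and one
# handler covers every step.
def build_request(fail=False):
    try:
        return sign(add_name(new_request(fail), "CN=example"))
    except AllocError:
        return None

print(build_request())           # {'subject': 'CN=example', 'signed': True}
print(build_request(fail=True))  # None
```

Languages like Rust get the same effect differently, with `Result` and the `?` operator, but the point stands: the boilerplate in the quoted snippet is a language limitation, not a law of nature.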
The irony is delicious, much like a banana.
If your only defense for someone calling people sub-humans is "but it might not be racist!", you need to think hard about what you're defending.
(The classic racist comparison is "black people are monkeys". This comparison is "the code is awful, as if it were written by non-sentient animals bashing on keyboards". This comparison has nothing to do with race, it's not suggesting any particular race wrote OpenSSL or that any particular race cannot write good code or good cryptography systems. It is not making any distinction by race at all.
If they described it as "OpenSSL was written by crashing ink tankers into alphabeti-spaghetti lorries" would that be racially comparing people with vehicles?)
People raised not to call others monkeys are not somehow the result of the latest social media fad.
By the way, the key line from the article you didn't read (yet presume to understand its author's intent): "I have come to the conclusion that OpenSSL is equivalent to monkeys throwing feces at the wall."
"This guy says something I associate with racism so he's racist. I say something you associate with Tumblr and there's NO REASON TO ASSUME I KNOW ANYTHING ABOUT TUMBLR".
nknighthb, meet double standards. Double standards, nknighthb.
No, it's not. Quit trying to pass yourself off as the ambassador of American culture. The majority of my 50 years on this earth have been spent living in the Midwest or the U.S. South, neither of which is exactly a hotbed of racial tolerance, and likely the origin of any racial slur you'll hear. And yet when I saw the title it never occurred to me to think of race at all. That is, until your comment showed up and I said to myself, "what the hell is he talking about?"
Sure, there are some racial slurs that involve the word "monkey". But they're not even close to what you're talking about, and if one were to use them one would come off sounding about 76 years old.
No. I'm familiar with the idiom of "peanuts" meaning little money, monkeys have never been mentioned. In the modern American cultural context, monkeys are associated with bananas. Peanuts are associated with elephants.
> Have you not heard of webmonkey.com which uses the very slightly derogatory term in a self-deprecating way?
I don't care what people call themselves, nor do I care what members of a community call one another in jest. I care what people call others as an insult.
Actually, code monkeys are associated with Fritos.
For future reference, you can use archive.org if you are afraid of dodgy certs: http://web.archive.org/web/20140125144231/http://www.peerebo...
Feel free to accept the self-signed cert; there are no security worries there. If you go to your bank's website - or anything at all with private data - then you need to make sure it isn't self-signed.
* The excepted site displays the exact same padlock icon as Hacker News, for example
* "Add exception permanently" is selected by default.
Self signed certs for unknown sites are fine. There is zero problem visiting them.