Chrome: From NSS to OpenSSL (docs.google.com)
128 points by moonboots on April 10, 2014 | 71 comments



Though Ryan disagrees strongly, I think this is unfortunate. I think a case can be straightforwardly made that NSS is the better library. Here's a shotgun blast of reasons:

* NSS has more institutional constraints; random people in Germany can't, as a general rule, add support for new TLS extensions to it.

* NSS has a clearer identity, as the TLS core for Firefox and Chromium. OpenSSL is a grab bag with hundreds of different stakeholders.

* The most important code in both NSS and OpenSSL is the SSL state machine. The code for that state machine is incontrovertibly clearer and better expressed in NSS.

* NSS has had better battle-testing as a clientside browser TLS library than OpenSSL, which, apart from Android Chrome, isn't a big factor in TLS clientsides.

I don't know that anybody loves NSS, but my perception is that more people believe OpenSSL to be hopeless than believe NSS to be.

Talking to Brian and Ryan on Twitter yesterday, I proposed that someone generate a list of questions to ask of a TLS codebase --- "what's the code path from first ClientHello to validating the public key signature in a certificate", or, "what's the code that determines which extensions are enabled" --- and we'd take a shot at answering them for NSS, SecureTransport, and OpenSSL. My bet is that the answers to those questions will be more favorable for NSS, even with the crappy certificate handling.

Maybe a fork of NSS would be a better plan.

However, to head off the drama HN is probably looking for here: one inevitable result of Google transitioning fully to OpenSSL is that OpenSSL will get a lot better, probably quickly.


> NSS has more institutional constraints; random people in Germany can't, as a general rule, add support for new TLS extensions to it.

Why not? NSS is handled by Mozilla. Mozilla prides itself on being open-source. People can submit patches to NSS by filing them in Bugzilla.

> The most important code in both NSS and OpenSSL is the SSL state machine. The code for that state machine is incontrovertibly clearer and better expressed in NSS.

I agree with this to some extent (although I don't think it's quite as much better as you suggest), and cleanups in that code are needed. Other parts of NSS can certainly rival OpenSSL on the "dear god" scale, however. I'll take the low-level parts of OpenSSL over NSS's any day.

> NSS has had better battle-testing as a clientside browser TLS library than OpenSSL, which, apart from Android Chrome, isn't a big factor in TLS clientsides.

This is true. We will have to rack our brains and our bug database for some of the unwritten rules of TLS to port over to OpenSSL. I'm not too worried about this in the long term however.

Ultimately the proof will be in the pudding. Ask again in 12 months :)


>> NSS has more institutional constraints; random people in Germany can't, as a general rule, add support for new TLS extensions to it.

> Why not? NSS is handled by Mozilla. Mozilla prides itself on being open-source. People can submit patches to NSS by filing them in Bugzilla.

Maybe he meant that Mozilla as an organization is much better equipped to handle patching of NSS than the OpenSSL team is. I have seen Firefox bug reports a number of times, and there seems to be a proper process in place to review patches before accepting them. I am not aware of anything that good on the OpenSSL side.


Am I wrong that it's easier to get code into OpenSSL, and thus into pretty much every product that uses OpenSSL, than it is to get code into the branch of NSS that Firefox and Chromium use?


I've submitted code to both. I don't feel that there's a huge difference. (Although NSS has Wan-Teh reviewing, who is a great code reviewer.)


Ok, wait: the code you've submitted to both has also probably been more obviously applicable than, say, the Heartbeat extension.

"Chromium developer has elliptic curve code accepted by NSS: film at 11."


Part of the reason it's so hard to get code into NSS is that no one really knows who owns those bits, and so they linger in obscurity. I think my TLS Channel Bindings patch has been lingering for 3 or 4 years now.

However, I think you're doing a disservice if you only look at how easy it is to add code. As important is how easy it is to remove code. With NSS, it's far, far harder to remove code - and that makes it far, far harder to reason about any new code you might want to add, for who knows what grues are lurking in those shadowy depths.


Not being able to easily add or remove code is good from a security standpoint, though. I would take a bugfix-only, stable-API NSS over anything that is in constant flux. So it's not how fast you can add features that should be the focus for a mature product - it's how safely you can add bug fixes, including security fixes. Unless you are saying you have a security fix lingering for 3-4 years, I think Mozilla is doing it right.

Edit: Not to ignore the valid point - having a designated code owner that responds would of course be good.


> With NSS, it's far, far harder to remove code

I'm not sure how to interpret that statement. It could mean any of the following, or something I'm not even thinking of:

The code is inscrutable and/or the documentation is poor: That's obviously a problem.

Commits removing code are delayed/ignored: Probably also a problem, but maybe there are valid reasons for this, such as more thorough testing?

There's little cruft to remove: This would be a positive. Not having much of a mess to clean is hard to consider a problem.

Am I close with any of those?


It could also be that the public API is too large and exposes unnecessary bits that now make it hard to change even incidental, conceptually irrelevant behavior simply because some user probably relies on it. Even a clean, well documented implementation with active, capable maintainers is hard to refactor when the API is large and intrusive.

(I have no reason to believe this applies to NSS.)


> Am I wrong that it's easier to get code into OpenSSL

I might be wrong, but the idea I have is that currently Google more or less owns OpenSSL, and thus it is easier, and makes more sense, for them to get patches into OpenSSL and to migrate exclusively toward OpenSSL.

Another question I have: when was the Heartbleed flaw discovered by Google's employee? Was it formally reported to the OpenSSL team, or only to OpenSSL members working for Google? And if the other independent group of engineers had not independently discovered the flaw, would it have been disclosed as quickly as it was? Bottom line: it seems to me (and I'd like to be wrong) that the current opacity of OpenSSL disserves everyone but Google.


> Another question I have: when was the Heartbleed flaw discovered by Google's employee? Was it formally reported to the OpenSSL team, or only to OpenSSL members working for Google?

I think it is generally regarded as sensible practice to disclose major problems like heartbleed that have wide ranging impact directly to the people responsible instead of a catchall "security team" address whenever possible.


I know nothing about crypto, but I found comparing both codebases at a high level very instructive from a pure software engineering / project management standpoint:

ftp://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_16_RTM/src/nss-3.16-with-nspr-4.10.4.tar.gz

https://github.com/openssl/openssl

https://www.openssl.org/source/openssl-1.0.1g.tar.gz

In terms of sheer architecture, those two are almost epitomes of a very well maintained project (NSS) and a very poorly maintained one (OpenSSL). The OpenSSL codebase looks like it evolved for years with seldom any cleanups being done, with many people contributing, each in their own unique style, without much unification going on. The directory structure is chaotic, documentation is sparse, commented-out code is lying around without any documented purpose, and browsing the code at random you find things like this:

  /* This should be using (bs-1) and bs instead of 7 and 8, but
   * what the hell. */
  if (bs == 8)
      l=(l+7)/8*8;

It's pretty much what I would expect to happen if you simply gave everyone interested commit access, reviewed and refactored nothing, and only protested if something broke functionally. It's like an orchestra playing without a conductor: just a collection of "stuff" loosely connected together and corrected only as immediate needs surface. It might contain incredible individual pearls of crypto-wisdom, of course, and tons of work has been put into it, but the technical debt is huge and paying it all off at once now would be hard. I looked at the GNU Emacs code a while ago and had a pretty similar overall impression; they, for example, have a test directory where every individual test suite (file) is written in a completely different style, each by a different author.

If you look at NSS on the other hand, you can tell there is someone who actually cares about the health of the codebase on a day to day basis. The directory structure is much more thought out, there are clear separate modules, there are way more tests, you can generate test coverage statistics, most function signatures are documented, there doesn't seem to be much redundant stuff in it, and so forth.


> NSS has more institutional constraints; random people in Germany can't, as a general rule, add support for new TLS extensions to it.

This is horrible for a large-scale open-source project. Glibc and gcc are examples: trying to get them to fix their broken crap or reverse their idiotic decisions takes decades. If OpenSSL is less opposed to change, that's probably a good thing, considering its purpose.

> NSS has a clearer identity, as the TLS core for Firefox and Chromium. OpenSSL is a grab bag with hundreds of different stakeholders.

Do you not remember the myriad bugs that came out of Microsoft's, Netscape's and others' independently-developed SSL implementations when their only relevant benchmark was their own tools? They didn't give a shit if it broke someone else's tool because they didn't make it to support someone else's tool.

> The most important code in both NSS and OpenSSL is the SSL state machine. The code for that state machine is incontrovertibly clearer and better expressed in NSS.

So clean up the OpenSSL code! If your previous claim that it's easy to get code into OpenSSL is true, fix it! Don't throw the baby out with the bathwater!

> NSS has had better battle-testing as a clientside browser TLS library than OpenSSL, which, apart from Android Chrome, isn't a big factor in TLS clientsides.

OpenSSL is used in hundreds of TLS clients in all kinds of environments around the world. It's the de facto client library for 90% of open source tools, as opposed to the far fewer environments that run NSS applications.

It seems like these are arguments for OpenSSL, not against it.


It's the de facto client library for everything but the most important TLS clients.

Ryan's argument that it's too hard to remove NSS code is compelling, but making it too easy to add code is exactly the problem we just had with OpenSSL. Heartbeat wasn't just bad code; it was code that was inappropriate for inclusion by default.


Would a cleanup of the OpenSSL code be accepted? Just from a quick glance at the fragments that have been posted, the first thing to do would be to get rid of all the ifdefs, i.e. decide on a canonical value for each of the flags in question. Would this be accepted, or would the existing stakeholders respond with a list of reasons why they are committed to keeping the status quo?


> Maybe a fork of NSS would be a better plan.

Is forking it actually needed? I was under the impression that Mozilla has many of the same complaints about the current state of NSS as the Chrome team does, and merely doesn't want to dedicate the resources to cleaning it up.


"Don't want to" is a bit uncharitable. We want to, but have limited resources and a lot to do. We actually recently rewrote the certificate validation library and it is currently on Nightly. Check it out!

Of course, it is open source, so patches are always welcome. (Firefox dev here)


> My bet is that the answers to those questions will be more favorable for NSS, even with the crappy certificate handling.

Here's the thing: the SSL libraries are so much more than "just" SSL libraries, and products that use them have to weigh a lot of things when comparing.

- Platform abstraction layers

* NSS builds atop NSPR, the Netscape Portable Runtime from ye olde days

* OpenSSL doesn't have much of an abstraction layer - short of perhaps BIO* hiding the nature of POSIX FDs

- Scope of what they try to do

* OpenSSL attempts to provide basic BER/DER parsing, some mid-level abstractions (eg: X509* and the related structures), a "high-level" protocol (SSL), but without any 'productization'

* NSS attempts to provide a hardware abstraction layer (PKCS#11), a full certificate verification library (for both SSL/TLS and S/MIME/CMS - multiple ones, I should add), abstract notions of 'trust domains' within multi-process and single-process servers, etc

- How much flexibility they give developers

* OpenSSL gives you enough rope to hang yourself, and many developers often do. However, at the same time, its popularity is in part due to its generous supply of rope

* NSS is wildly inconsistent, with public APIs being ad hoc (eg: for any given object, you're lucky if you find all four functions for CRUD, even luckier if they're internally consistent, and you hit the jackpot if they're externally consistent).

As I mentioned on Twitter, there is no doubt that the SSL state machine of NSS is probably cleaner than OpenSSL's, having worked with both. However, there's a ton of hidden magic that NSS tries to handle, by design, that makes it impossible to reason about how well something will behave, what side-effects it might have, and what the security impact might be.

For an application like Chrome, I will highlight that some of the biggest challenges we've had is in ensuring proper privilege separation when validating certificates with different trust scopes. Some connections only wish to trust certain authorities, others may be ambivalent, and yet other connections may have a mixture of 'default' authorities plus administratively configured settings. Actually ensuring the separation of these authorities is a monumental task within NSS.

That's not because NSS doesn't want to deal with that problem - no, there have been multiple, half-completed attempts to do so, which have simply introduced layer upon layer of abstraction.

NSS is an excellent piece of software for some tasks. But for security software, the last thing you want is "magic surprises", which NSS is full of. Even though the SSL state machine is possible to reason about, if you can't reason about whether or not a given certificate will be marked as "trusted", then you're still in the danger zone.


This makes me wonder what will happen with the level of security in NSS, now that Google is quitting it. Can Mozilla keep maintaining it properly on its own?


They don't maintain it properly today. Go look at bugzilla, there are quite a few new feature patches from Google that just never get reviewed, let alone merged.


Many of the review peers work at Google, so eh...


[deleted]


When the ratio of comments to article-upvotes reaches a certain point, threads are punished as 'flamewars'.


That will be a poor flame detection scheme if HN gets too many readers like me. I use up-voting as a way to mark articles/threads that I have NOT yet read but want to come back to. The HN interface actually implicitly suggests this use for up-voting, by naming the list of articles you have up voted your "saved stories" instead of "up-voted stories".


That sounds like a very good reason to move those threads below the fold.

But in this case it would be nice to have transparency. DropBox is arguably the top Y Combinator success story.

Or else people will have suspicions it was flagged by moderators for other reasons. Disclosure and all that.


I flagged it, and I presume many others did as well. I also flagged several other overtly political boycott stories recently, but that didn't seem to have as much effect.


I only regret that I posted but one comment for partisan dickheads to downvote; oh wait here's another one for you.


It may actually be a good thing if Chrome migrates to OpenSSL. With their resources, perhaps they can do some testing to see if there are any more vulnerabilities out there. At the very least, their continued eye on the project should help it be more secure going forward.


Actually, I think the Heartbleed vulnerability was most probably found because of this migration, as it was found by Neel Mehta of Google (and the Codenomicon guys). The date of this draft is 2014-01-26, so it makes sense that the OpenSSL code was evaluated by the security team before the migration.


Considering how widely it's used, if Google could use some of their resources to better it, I'm all for it!


Considering how widely SSL is used, and the resources of Google, I wonder if they could come up with their own encryption toolkit? How hard can it be for a company the size of Google to create a library that lives up to eg SQLite's quality standards?


Fix OpenSSL, and everyone that currently uses OpenSSL benefits. Create a new library that's clearly better than OpenSSL, and ten years later there will still be important things that still haven't switched.


Google has already written and released a complete crypto stack for Go. Were you restricting your comments to C/C++ implementations?


The last time I looked at it, the Go stack was very weak compared to any of the mature C SSL stacks. IIRC it only took me a few minutes to find a security bug (which I reported and is now fixed) that I'd reported against various browsers several years earlier. In short, I highly doubt the Go SSL stack is production-ready.


The above questioner didn't ask whether it was any good, only whether Google could write and release one.


Good point. Though it only claims to "partially implement TLS 1.2". http://golang.org/pkg/crypto/tls/


I would say that Joel Spolsky answers this quite well:

http://www.joelonsoftware.com/articles/fog0000000069.html

What would be interesting to see is a fork of OpenSSL with the intention of cleaning up the code, removing abstractions where they are unnecessary and adding them where they are needed, and adding a comprehensive test suite to ensure correct behaviour wherever possible.


The real question is, how hard can it be to do that while starting with OpenSSL compared to starting from scratch?


Those were my thoughts exactly. I think OpenSSL just needs a bit of care and refactoring (possibly a lot, depending on which blogs you read), for which they need devs and resources, both of which Google has.


The problem with OpenSSL is that it's really problematic to simply fix and refactor stuff given their FIPS certification.


Ooh, OpenSSL's FIPS certification. The fellows who've been fighting for years to take it away http://www.itnews.com.au/News/65016,openssl-in-a-fips-flap.a... must be crowing right now. (Regardless of whether their own stuff is any better or no.) Another decertification incoming?


Hi, can someone help me understand what the significance of FIPS certification is? ( beyond the wikipedia page: http://en.wikipedia.org/wiki/FIPS_140-2 )


AFAIK the US federal government (excluding the military, which obviously has its own hoops to jump through) generally can't use your hardware/software unless it has the appropriate FIPS certification(s). https://en.wikipedia.org/wiki/Federal_Information_Processing...


And here is the obligatory email thread explaining some context:

https://groups.google.com/forum/?_escaped_fragment_=msg/mozi...

> Strange that "PKCS#11 support" is listed as a "con" for NSS

> It was not accidental that it was listed under "Con", nor do I see ENGINE_pkcs11 as a "Pro"


This conversation circles back to Theo de Raadt's argument that developers care too much about performance in security-critical applications.


Which is the typical point on which Theo de Raadt is right, even though he may say so quite singularly.

This is another instance of the lack of 'engineering' in Computer Science/Programming/you name it. Engineers learnt long ago that you need "automatic" standards and compliance before, during, and after building anything, and that the best practice is always to assume things will go wrong.

This, unfortunately, is not the case with programming as we can see time and again.

But I digress.


Smartcard support is pretty important for a lot of people. If Chrome drops that feature, that would be a huge shame.


Every Belgian citizen is issued a PKCS#11 e-ID; not supporting it means they all need another browser to do their taxes and log in to healthcare websites...


I found the following part in the cons to NSS amusing, or maybe rather frightening:

> Certificate path building is C code designed to emulate Java code, through indirection of an average of 7 layers of macros, two platform abstraction layers, and then the remaining NSS abstraction layers (listed above)


IIRC... That is in reference to libpkix, the certificate validation component of NSS. I believe Mozilla developers are working on a replacement for it, called mozilla::pkix, in large part due to libpkix being overly complex. Last time I checked mozilla::pkix was pretty far along.

https://bugzilla.mozilla.org/show_bug.cgi?id=915930


It landed on Nightly a week ago, check it out! You can help us by testing it [0] or reviewing the code [1].

[0] https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing... [1] https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing...


To be even more precise, this new certificate validation library is the current default in Nightly (Firefox 31); testing is welcome.


Note that NSS contains a classic certificate verification engine, which was originally written in C.

The above statement refers to an external contribution that was added to NSS. It added a second validation engine to NSS, libpkix.

It's true that the libpkix portion is very complicated code, but the above statement doesn't apply to NSS in general.


On Debian, Chromium 33.0.1750.152-1 seems to be using GnuTLS; I didn't know that the official Chrome used NSS:

  $ ldd /usr/lib/chromium/chromium|grep -E tls\|ssl
        libgnutls.so.28 => /usr/lib/x86_64-linux-gnu/libgnutls.so.28 (0x00007fbae0878000)

In fact, on Debian most client software uses either NSS or GnuTLS; OpenSSL is more popular server-side, of course.


Due to (mostly ideological) licensing incompatibilities, some package maintainers and/or distributions link software with GnuTLS instead of OpenSSL. GnuTLS provides a basic compatibility layer with OpenSSL.

So Chromium may in fact be programmed against OpenSSL but work when linked to GnuTLS.


The doc mentions this among the cons of NSS: "Certificate path building is C code designed to emulate Java code, through indirection of an average of 7 layers of macros, two platform abstraction layers, and then the remaining NSS abstraction layers (listed above)".


Serious question: why is GnuTLS very often not considered as an alternative to OpenSSL?



Also, it uses `strlen` and `strcat` on binary data: http://www.openldap.org/lists/openldap-devel/200802/msg00072...


GnuTLS is LGPL 2.1+: https://gitorious.org/gnutls/gnutls/source/39aaa63a1a4cb8432... not GPL 3 as that file seems to imply



No. Not only is Sodium (more accurately: NaCl) not a TLS library, but it doesn't even implement the primitives that TLS needs; it can't really even serve as a component of another TLS library.


Not that you're not absolutely right about NaCl & Sodium, but isn't the GP talking explicitly about Sodium and not NaCl?

When you say 'more accurately: NaCl', is there something particular you're getting at, or are you speaking from experience of NaCl rather than Sodium (or are you talking about the fact that the bindings seem to be named/built for the original NaCl?)

Edit: or do you just think it's still better to call it NaCl given that it's a fork of, or I suppose a refined implementation of NaCl?


Any SSL implementation worth its salt needs to implement historical primitives like RC4 and MD5. libsodium will not and should not implement these because they conflict with the goal of providing an idiot-proof crypto library.


Isn't it better to not include stuff that's insecure, even if they have historic value? Isn't this a big part of the problem of some (even OpenSSL) software?


This has nothing to do with what I asked. Also, this I know. Sorry? :P


It's more accurate to discuss NaCl, since that's what libsodium is. NaCl is the combination of crypto primitives and the "box" abstraction; libsodium is just a repackaging and clean-up of the original NaCl implementation. If you're discussing crypto, you're going to be looking at the papers on NaCl.


Mm that's roughly the impression I got, but I hadn't thought of the papers - thanks for the clarification!


Sodium does not implement TLS.


Unless Google is going to clean OpenSSL up along the way, I don't see any immediate benefit.


AFAIK Heartbleed was found during one of Google's code reviews of OpenSSL.



