From an application developer's perspective crypto libraries look like a lightning rod even for the experts. No one wants to get too involved or make recommendations lest they back the wrong horse (admittedly an easy thing to do). So kudos to the OpenBSD team for rolling up their sleeves and attempting to build a solid foundation for the future.
I'd be happy if OpenSSL is simply fixed. I'd be overjoyed if there was a solid crypto system underneath an OpenSSL compatible API that gives us a path towards an open source, reusable crypto platform.
Can any crypto experts comment on whether it is feasible / how much work it is to implement SSL on NaCl? Maybe the issue is that NaCl doesn't support all the ciphers you need.
NaCl's goals are vastly different to those of SSL/TLS. NaCl aims to provide a simple, clean interface with sane defaults for the majority of simple use-cases, whereas SSL/TLS aims to provide an interface with near-infinite flexibility for the case of providing an encrypted, authenticated tunnel.
NaCl also deliberately supports only a handful of ciphers, since a long menu makes it easy for developers to choose poorly: for example, (alleged) RC4, which OpenSSL supports.
> NaCl aims to provide a simple, clean interface ...
The end result is that NASA is still using computers from the 60s, and it took someone like Elon Musk to change the status quo and bring costs down (which is why NASA is currently contracting SpaceX, which can do things more cheaply).
NASA uses the RAD750 as well as a few other still-in-production CPUs. Remember Voyager? The probe is still flying and sending back priceless data after many decades. To launch missions (and critical infrastructure like GPS) you need years, decades even, for usage and reliability data to accumulate and for industry to perfect the manufacturing of parts that can handle environments that are, by any definition, extreme. But, please, show me a mission launched in the last five years with a 50-year-old CPU design.
One of the things the Deep Space Industries people were feeling confident about was that they were signing all their contracts with a one-in-three success criterion: they could launch three spacecraft, but would have fulfilled the contract if only one succeeded. That little bit of leeway, they think, will allow them to try new things and significantly lower costs.
Obviously some of this you can't do with manned missions, but it's still a big cost saver when you change how you balance the equations. One of the things driving the CubeSat movement at the moment is the ability to fail - the low cost is encouraging people to take non-space-rated components, launch them, and see what happens, which then turns them into space-rated components (for a given mission envelope).
He thinks that, save for a few critical infrastructure missions, the diminishing returns get to such an extreme that they add no benefit to most missions. However, because NASA has to play politics for an ever-shrinking pie, it can't afford to take no-brainer risks like launching ten missions instead of five, because four failures look worse than two, even if both represent a 40% failure rate.
And they're also contracting Orbital Sciences and Sierra Nevada and Boeing and probably a bunch of others of whom I have no knowledge.
SpaceX is following in the footsteps of many other private companies that have developed spaceflight hardware on their own initiative. They might be 'younger' and more agile, but it's not a new idea.
Some of that is OpenSSL's fault, but a lot of it is inherent in implementing a 200-page (before extensions!) standard. GnuTLS is 170 KLOC, CyaSSL is 205 KLOC, and even PolarSSL, commonly thought of as the gold standard in trimming the fat, is 55 KLOC.
If you want to reduce the level of paranoia, you would have to pare down the size and scope of TLS by an order of magnitude. For the record, I support that kind of thing, but you would have to cut a lot of things people actually use, not to mention breaking backwards compatibility.
Doing the hard work of getting this right, or accepting the status quo and getting it wrong, will have a very real impact on the future of many, many people. Should the time come that changes have to be made to the standards themselves then we can debate the merits of breaking backwards compatibility. I value pragmatism more than most (I think) but there comes a point where the cost of not changing is higher than the cost of changing. I think we're there.
All the bugs in OpenSSL to date have been reviewed by someone. Unfortunately those people haven't been veteran industry programmers but rather academics and "hero" types who think the kinds of things in OpenSSL are good code. We can only hope that the OpenBSD folks can clean it up a bit, but I'm not hopeful, because OpenBSD is just a different tribe of "hero" programmers.
Please grow out of this religion if you care about robust, secure and well written software.
You can look and see what types of things get fixed in OpenBSD. Tests are not going to tell you that code works around safety measures built into the OS. Tests are not going to tell you that the code uses time as entropy to seed RNGs. Tests are unlikely to tell you about the subtle integer overflows or out-of-bounds accesses that attackers can exploit. Tests are unlikely to tell you that your daemon will fail ungracefully due to fd exhaustion while serving a large number of requests on a loaded system. Tests are not going to tell you there are deceptively familiar-looking functions with surprising return values that will confuse people. Tests are not going to tell you about bad style and code smell that makes life harder for anyone working on or looking at the code.
There are so many things that are hard or impossible to test, but which are vital to handle right if you're doing anything robust and secure. In fact you could construct a test for some of these things, in theory, but in practice you never will. And you certainly never will unless you have actually read the code, because all these things are issues within implementation details inside the black box; you must read the code (and understand the underlying platform) to even become aware of these issues.
Sure, tests are really useful for catching some specific issues. But there is so much more. Tests are somewhat like compiler warnings: often extremely helpful, but even totally insecure crap can compile cleanly. And it is possible to write good code even if you're not using the smartest compiler to compile it.
If you wish to learn how to review code with no tests, I recommend you read the C language chapter from The Art of Software Security Assessment. Actually, read the entire book. Then go through the commit logs in some project -- such as OpenBSD, who've been doing it for almost 20 years -- and see what kinds of issues they find and fix.
There's fuzz testing. There's QuickCheck-like testing, which is tricky in this sort of situation, but powerful if you take the time. There's static analysis, which is basically a form of automated testing. Static analysis can in fact test some of the things you claim can't be tested. Some of the other things are also more testable than you are saying, such as resource exhaustion, which can be simulated via injection or by simply exhausting resources in the test case.
Further, even the relatively weak guarantees that traditional unit testing provides are a foundation for further analysis. What's the point of analyzing a bit of code for whether it uses entropy correctly if you miss the fact that the code is straight-up buggy? Given how demonstrably hard it is to write correct code without solid testing, even before it's security code, this is hardly the time to be claiming that testing isn't necessary.
This strikes me as another iteration of the Real Man argument ("Real Men write in assembler" being one of the originals). The truth is we need all the help we can get because we humans aren't very good at programming in the end... and of all the places I'm not inclined to accept the Real Man argument, it's in security code. This is the sort of code that ought to be getting covered by two or three separate static analysis tools, and getting reviewed by skilled developers, and getting fuzz tested, and getting basic unit tests.
As far as I am concerned, all techniques that help improve code are welcome and even necessary. This includes testing, manual review, static analysis, etc.
By reading it, running it in your mind, and seeing if it runs the way its author expects.
Tests are ideal, but they aren't the only way to specify the correct behavior of code.
Instead of simply fixing the bug and moving on, we've decided to assume the whole thing is tainted and burn it alive like a depressed teenager at the Salem witch trials?
Or is the heartbeat, an IETF RFC, considered suspect, and the heartbeat extension offers us nothing?
There's a quote referenced so often it has become a cliché: "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away" (Antoine de Saint-Exupéry). I think it's sometimes over-applied, but in the case of crypto software it really fits. In a sense, the heartbeat code got unlucky and hit the mega-anti-jackpot, disproportionate to its "actual" risk... but there was risk in adding it, and even before we knew the outcome, somebody really should have been asking hard questions about whether the reward justified the risk. I imagine there are a lot of other bits of the code where the risk/reward tradeoff suggests removal. You can look at much of what has already been trashed from that point of view. For instance, if you have no real need to support a particular defunct architecture, why carry around its code? It has no benefit, but non-zero risk; toast it.
"The most secure code in the world is code which is never written; the least secure code is code which is written in order to comply with a requirement on page 70 of a 100-page standard, but never gets tested because nobody enables that option."
Sure, it's convenient to have it already available for you to use as an application developer, but critical security software needs to be kept as small and simple as possible.
The heartbeat extension is classic protocol bloat. In theory it can be useful for DTLS as a keep-alive for NAT port mappings or to do PMTU discovery, but you can do either one without it fairly easily. And then they went and added it to traditional non-datagram TLS, where it's about as useful as a screen door on a submarine.
The code is a mess from what I hear. As long as they keep the functionality intact, I don't really think anyone can complain.
I was quietly just thinking... Man I've gone to some lengths to force a rewrite in the past but this just takes it to a whole new level.
Basically, that's the version that gets those sorts of patches.
I'd imagine if this reaches the point where people want to put the OpenBSD fork of OpenSSL on other platforms, a similar approach would be used.
It's been pretty successful for OpenSSH:
It already is used in OpenBSD. The changes are being made in the OpenBSD source tree.
There's surely code in this project older than some of the developers working on it.
Sort of a side note: OpenBSD really seems to have picked up speed after the fundraising. It's wonderful to see the money has been so well spent.
If you want a git client that's graphical, there's dozens to choose from, many of which do a lot of what cvsweb does and more.
You can also create your own Gitorious (https://gitorious.org/) server and view changes there before pushing to the upstream. It's like your own personal GitHub.
It is wonderful to see improvements. It's just that a refined CVS workflow is downright awful compared to the default git one.
Being able to git clone, monkey around with your own branches, and never need commit rights is a huge deal for those looking to work with and improve your software.
Me too. It would be really nice though if it gave the option to show diffs for all files touched by a single commit.
I'd also love to get the diffs along with commit messages straight into my mailbox...
I think this will be good news!
Someone at the linked URL humorously commented "I hope the new version is called OpenOpenSSL."
I like how the OpenBSD team is picking up the ball here instead of throwing up their hands in despair, even if it means forking the code.
That is one reason for the popularity of Debian: (at least in the past) they did not jump on any new bandwagon immediately. Other distributions have much newer software, but also the newest bugs.
IMO, this is a symptom of OpenSSL's unreasonable complexity.
OpenSSL is a default system library yet the user cannot compile it with only default system libraries.
(OpenBSD has added perl to their userland but this is not universal across other UNIX-like OS.)
Where are the compile-time options to turn things off?
The compile process has apparently become so "challenging" that the developers could not figure out how to do it easily without using perl.
* Added dozens of tests.
Is this going to be an OpenBSD-only fork? (I guess Linux will get a build of some sort as well.)
A lot of OpenSSL is portable C and IMO they should just let portable C be portable C.
I wouldn't be surprised if a lot of the "Windows" support code is of similar age, and thus was actually only needed to support pre-NT Windows.
If the code can be compiled without this, then they might as well remove it.
But at the same time, one would typically assume that something as large as OpenSSL has been reviewed and tested...
The last thing OpenBSD or OpenSSL needs is a whack of totally-useless js on their website.
Documentation would help, but a good cleanup would make the provided documentation much less necessary. Crypto may be difficult to understand, but clean coding practices and formal verification (even at an audit-workflow level) would be a much better investment, IMHO.
Duplicate IDs and font tags? Come on...
That is pretty much a given. OpenBSD considers deficiencies in documentation the same as deficiencies in the software.
>or "making our website less shitty"
OpenBSD developers have nothing to do with openssl's website.
Edit: I just read parennoob's comment about how OpenBSD isn't the maintainer of OpenSSL. I didn't know this was the case.