You may think this is a lot. It's not. It largely looks like a lot because OpenSSL treats many very low impact issues as vulnerabilities. A lot of other projects would rather argue that an issue is not exploitable, should never get a CVE, etc.
This is a good sign. OpenSSL is taking security seriously by treating even low impact things as vulns. More severe stuff still happens, but I'm pretty positive there is a decline in severe issues.
Is HN still really up for that argument? Yes, we know the downsides of C. But the upside is that _we know the downsides of C_.
Constantly complaining about C isn't going to get anyone anywhere. It's been shown over and over again that secure C coding is possible, doable, feasible, and exists in the real world. Some of the most secure software is written in C, and that's not attributable to enough fingers banging at a keyboard over a C program, but to the fact that C forces the programmer who _wants_ to write a secure program to think about all the issues, the universe, and everything.
So please, pave the road and show us the safer OpenSSL-rust you have. Until then good luck and thank you for your feedback.
(Incidentally, I don't have a quarrel with C specifically. All memory-unsafe languages expand the range of terrible vulnerabilities available to the engineer. This is not defensible these days, in my view.)
> So please, pave the road and show us the safer OpenSSL-rust you have.
My contribution here is https://github.com/ctz/rustls
OpenSSL was all but abandoned as well. At least now there's a lot of attention being focused on making it better and more maintainable.
There are numerous tire fires out there: ImageMagick, OpenSSL, some Linux kernel drivers, and other projects people just take for granted without pitching in to help fix things.
Re-writing in Rust is a form of helping, and maybe in the process we'll find bugs in the originals or wholesale replace them with something better.
Not to mention, I would argue that memory exhaustion bugs like this happen frequently in memory-safe, garbage-collected languages too.
>broken, obsolete, badly designed, underspecified, dangerous and/or insane
Which list does Kerberos fit in?
So .. progress is being made.
This reasoning doesn't make sense. Use a language that makes a bunch of security flaws impossible (barring OS and compiler bugs anyway) so that you can concentrate properly on the security flaws that remain possible. Why deliberately make life hard for yourself if there are other language choices (assuming there are alternatives to C for what you're doing)?
Even the best developers make mistakes. When you're picking your stack for security-sensitive work, you should pick the stack that minimises both the chance of mistakes and the impact of those mistakes. C carries a high risk of mistakes, and those mistakes have a high chance of being exploitable.
The implicit assumption here is that the language isn't at the same time introducing a number of other vulnerabilities. Is there a language you would like to suggest?
That is a preposterous argument.
I know it says "partial" implementation, but it's quite good even so, and has had everything I need. It doesn't do SSLv2, but nowadays that's more virtue than vice.
Erlang has an independent SSL implementation as well; it binds to OpenSSL but as I understand it just uses the heavy-duty math parts, not the protocol parts, which are instead written in Erlang.
Non-C SSL implementations in memory-safe languages exist.
Note that "but are they as well road-tested" or similar such things would be moving the goal posts. Not necessarily wrong or bad objections, but different objections. Probably nothing, not even the other C implementations, is as well "road tested" as OpenSSL, but, then again, OpenSSL hasn't exactly passed that road testing with flying colors now, has it?
I completely agree with everything you point out. The argument I am bringing up is not about implementation, it's about "if they didn't use C, they would have fewer (security) problems": No.
C is the way it is with respect to secure coding because compromises were made to satisfy other dimensions. <Fancy new language that promises to solve all of C's problems> is either making huge sacrifices in those other dimensions or is at best an experiment.
We understand C's weak points so well (due to being widespread, battle tested, really simple, or what have you), that most of the time it's safer (read: more secure) to tread dangerous well-understood territories carefully than uncharted ones only promising to be safe.
I'm only superficially familiar with Rust, but my understanding was that it was basically "C with strong memory guarantees". I naively thought that its entire purpose was to avoid such trade-offs.
It's a bit more upfront work to satisfy the compiler but I've found the payoff in debugging memory issues to be completely worth it.
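As a minimal sketch of what that payoff looks like (names here are illustrative, not from any real codebase): the class of bug behind Heartbleed-style over-reads is an attacker-controlled length that the code trusts. In Rust, slice access is bounds-checked, so the over-read simply can't happen silently:

```rust
// Sketch: an attacker-supplied `claimed_len` cannot read past the buffer.
// `get` with a range returns None instead of reading out of bounds,
// where equivalent C pointer arithmetic would be undefined behavior.
fn read_payload(buf: &[u8], claimed_len: usize) -> Option<&[u8]> {
    buf.get(..claimed_len)
}

fn main() {
    let buf = b"hello";
    assert_eq!(read_payload(buf, 5), Some(&buf[..])); // in-bounds read works
    assert_eq!(read_payload(buf, 64 * 1024), None);   // over-read is refused
    println!("no over-read possible");
}
```

The compiler didn't have to be satisfied with anything clever here; the standard slice API just makes the unchecked version the harder thing to write.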
This is a false choice. Go and Rust are not replacements for each other. They both have qualities which make each better suited for different environments.
Rust is actually capable of replacing all uses of C, whereas Go's runtime will generally be an impediment to using it in certain cases.
Also, I would argue that there are entire classes of bugs still available in Go that are not in Rust, which make it less suited for security. Null pointer exceptions and unchecked errors are the two that come to mind.
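For concreteness, here is a small sketch of those two bug classes in Rust (the function names are made up for illustration). Absence is an explicit `Option`, not a nil pointer, and errors come back as `Result` values the compiler won't let you use without unwrapping:

```rust
// No null: a user that may not exist is an Option, which must be
// matched before the value inside is usable.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

// No silently ignored errors: the Result must be handled (or
// explicitly discarded) before the parsed port can be used.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    match find_user(2) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }
    assert_eq!(parse_port("443"), Ok(443));
    assert!(parse_port("not-a-port").is_err());
}
```

In Go, the analogous `user, err := ...` pattern compiles fine even if `err` is checked nowhere on some path, and a nil pointer dereference only shows up as a runtime panic.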
Compiling C code is easy, but auditing C code is hard. Getting Rust code to compile without being berated about every little thing is hard, but at least then you're confident you've got everything right.
Won't help when developers then go out of their way to write a large amount of unsafe code. OpenSSL went out of its way to reimplement the C standard library wrong, making it impossible to use with analysis tools like Valgrind or a checked malloc, and generally confusing developers. IIRC it even used malloc (pop) / free (push) as a stack to pass data around at some point.
There's a big difference between arguing for safety and arguing for Rust specifically. Everyone likes some languages and dislikes others. Can you design a safe(r) language that you would consider to be beautiful?
And even wrt memory, Rust (with the standard library) is not free of its own warts, see the OOM situation.
See e.g. https://news.ycombinator.com/item?id=10545877 for a discussion on this.
Just to be clear: I generally like the ideas of Rust. I just dislike its presentation by some as a magic solution to all kinds of concerns.
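To make the OOM wart concrete: by default, growing a `Vec` aborts the process if allocation fails, which is awkward for library code like a TLS stack. The fallible alternative, `try_reserve`, is in the standard library (stable since Rust 1.57). A minimal sketch, with an illustrative helper name:

```rust
// Sketch: fallible allocation via try_reserve, instead of the default
// abort-on-OOM behavior of Vec::push. Returns false rather than
// crashing the process when the allocation can't be satisfied.
fn grow_checked(v: &mut Vec<u8>, extra: usize) -> bool {
    v.try_reserve(extra).is_ok()
}

fn main() {
    let mut v: Vec<u8> = Vec::new();
    assert!(grow_checked(&mut v, 16));          // ordinary growth succeeds
    assert!(!grow_checked(&mut v, usize::MAX)); // absurd request fails cleanly
    println!("survived the failed reservation");
}
```

So the situation has improved since the linked discussion, but the infallible APIs remain the path of least resistance.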
I've almost never seen unnecessary unsafe code being used in Rust. I've seen it very few times for performance, but it's carefully done.
Why? We have a requirement for code quality. The code can't just work, it has to make sense. The OpenSSL people don't seem to care about badly formatted and/or non-understandable code.
All commits MUST build cleanly without warnings on multiple operating systems, and under multiple compilers. We have test cases for a large chunk of the code base. We scan all releases through three different static analysis tools.
Security is important. We make it important because we care. I wish other projects would do the same.
I think it is pretty foolhardy to assume that just because security issues haven't been found, they don't exist.
No one can reasonably say that the practices of the OpenSSL programmers result in secure code. Nor can anyone reasonably say that lots of people examining it later for defects makes up for those practices.
We have lots of legacy code in C. The only sane way to maintain it is tests: unit tests, functional tests, and static code analysis.
> a lot of OpenSSL's issues are due to legacy code
i.e. the OpenSSL people don't care to actively maintain / clean up their software.
What a depressing statement to make.
1) Which of these products is more likely to be secure?
a) one which has tests, no build warnings, and is run through 3 different static analysis tools?
b) one which has none of those things?
2) Also, which of these products is more likely to be secure?
a) one which has a lot of third-party analysis?
b) one which has some third-party analysis?
3) Are these two questions the same?
My answer to (3) is "no".
While the best combination of answers would be 1(a) and 2(a), OpenSSL is at 1(b) and 2(a). I'd bet they still have more security issues than FreeRADIUS, which is at 1(a) and 2(b).
That's all I meant.
With all respect to your project, I bet wider adoption of OpenSSL and consequently more interest from "interested" parties plays a bigger part here.
What an utterly ridiculous thing to say.
Long answer: If Unix and C hadn't won the fight in the 1980s like VHS did, we wouldn't deal with such low-level bugs. It took 30 years, but we're finally getting CPUs (like lowRISC) that were inspired by the Burroughs B5000 or i960, and the languages to go with them are getting mainstream. To be fair, had we as an industry taken security more seriously decades ago, we would have written critical pieces of the stack in a high-assurance Ada profile and put microkernel-based capability kernels into production. We can extend 1960s kernel designs with all kinds of features, but without a coherent design it's impossible to provide the same assurance. This is why it's great that we have L4 descendants that incorporate a capability scheme and also support a multikernel scheme (a la Barrelfish) for making better use of a cluster of CPU cores.
You're writing on a computer magnitudes faster than the fastest computers of the 70s, with significant compiler improvement to boot.
Until about 10 years ago, hand-crafted assembler was still faster than C, and the speed improvements were actually needed.
This "let's write a text editor which is pretty much an advanced nano/pico in JS and have it take up 145MB" is only possible when everyone has a supercomputer in their hands anyways
It still is, especially when it comes to vectorization.
Setting aside the fact that this can already be done in numerous different ways on many platforms, how is this a win?
It's hard enough for developers to write correct code, let alone maintaining the correctness of that code while loading and executing unknown code from other parties (see web browsers, and the last 20 years of security bugs related to plugins, extensions, addons, etc).
I wonder if this bug affects them; typically the HIGHs haven't.
It really feels like every other week there is a bug in OpenSSL, and after following along with the LibreSSL blog I understand why: the code is an absolute mess.
Practically every OpenSSL bug posted here gets the standard "Luckily LibreSSL isn't affected by this kind of thing" response. On a couple of occasions I've taken the bait and linked to the LibreSSL source to show that the relevant bits are in fact not changed at all, so they were both vulnerable.
Not saying LibreSSL isn't doing good stuff, I just wish people would actually check if it's affected before using any opportunity to jump on the let's-hate-on-openssl bandwagon.
(I haven't looked into these bugs in the LibreSSL code since I don't have time for it right now, but I'm sure some message is forthcoming).
That's not a bulletproof way to assess bugs in LibreSSL, because its authors also removed vulnerabilities from general helper functions, proactively fixing issues in the code that uses them.
Well it looks like that has just changed: https://marc.info/?l=libressl&m=147454940615412&w=2
You say this like LibreSSL has been out for a long time. It's just barely crossed its 2-year mark... and for most of its life, it's been difficult to call it "production ready".
Coupled with its low adoption rate (really only some select BSDs, and some adventurous Linux folks), it's not surprising more vulnerabilities haven't been discovered (yet).
The OpenBSD folks do good work, but let's not pretend they are infallible.
For now, watch this spot:
Where did you get that impression?
https://github.com/libressl-portable/openbsd/commits/master shows an actively developed project
If we still stay with C, Ring, which is partially/primarily written in Rust, would be a good replacement.
That said, LibreSSL's TLS abstractions look like a welcome improvement if you need a C API.
Google don't recommend you do this.
BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.
Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it.
LibreSSL has taken fixes from BoringSSL, but it has the aim of maintaining API compatibility to the point where most applications using OpenSSL should "just work" for the most part. I've not tried building against BoringSSL, but from reading their documentation it seems like the API is very much a moving target.
Then there's ocaml-tls, which aims to provide a drop-in implementation of the OpenSSL API too.
We're finally seeing much activity, and in that sense, I'm glad that Heartbleed happened, but it should have happened earlier :).
In my time as security officer, it was a rare and surprising occurrence when we didn't need to hold an upcoming release due to a pending OpenSSL advisory. It got to the point of the release engineer saying "I think we're ready to start the release builds tomorrow, any news from OpenSSL?" and me replying "nothing yet, but I'm sure it will come" -- their timing was absolutely impeccable.
I believe some people are working on bringing libressl into the FreeBSD base system, but there are some challenges; for example, FreeBSD supports stable branches for 5 years, while libressl follows OpenBSD's "break everything once a year" model.
Also, most bugs in OpenSSL seem to occur during renegotiation of protocol zzzz, defined by some obscure RFC that nobody really understands how to implement. Is that correct? Why can't they simplify these protocols? Do we really need these fancy renegotiation features?
Actually Daniel Bernstein says that over-complicating the protocols is a clever way to make sure that software infrastructure remains insecure.
Which isn't to say the optional (but usefully fast) 0-RTT mode might not introduce a few more, for those lackadaisical devs who inevitably ignore all dire warnings and abuse it for non-idempotent requests like POSTs (or GETs that do something - bad idea!).
But even if it's not how I would have designed it afresh (the Noise Protocol Framework from Trevor Perrin is how I would have designed a new connection protocol now) it's still a big improvement over previous TLS/SSL iterations and a drastic enough change it probably deserves to be called TLS 2.0.
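The 0-RTT footgun mentioned above has a well-known mitigation on the server side: early data can be replayed by an attacker, so only safe/idempotent HTTP methods should be served from it, and RFC 8470 defines a 425 (Too Early) status for the rest. A minimal sketch, with illustrative function names:

```rust
// Sketch: guard against 0-RTT replay by only serving idempotent
// methods from TLS 1.3 early data. Anything else gets 425 Too Early,
// telling the client to retry after the full handshake completes.
fn allow_over_early_data(method: &str) -> bool {
    matches!(method, "GET" | "HEAD" | "OPTIONS")
}

fn handle(method: &str, is_early_data: bool) -> u16 {
    if is_early_data && !allow_over_early_data(method) {
        425 // Too Early (RFC 8470)
    } else {
        200
    }
}

fn main() {
    assert_eq!(handle("POST", true), 425); // replayable POST is refused
    assert_eq!(handle("GET", true), 200);  // idempotent GET is fine
    assert_eq!(handle("POST", false), 200); // after handshake, all is normal
}
```

Of course this only helps if the GET handlers really are side-effect free, which is exactly the discipline the parent comment worries devs will skip.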
Edit: This is only the commit for the HIGH severity
https://security-tracker.debian.org/tracker/source-package/o... has a good overview of each issue with links to commits etc on each CVE entry.
But yes, that issue is a serious problem, too.
So does this affect TLSv1.2-only servers that do NOT support client renegotiation of any type?
The advisory makes it sound like that would be the case, but it would have been great if that had been explicitly stated.
In case anyone is using older versions, like we do in our production environment.
A Chinese hacker
A very popular Chinese anti-virus company that follows the US freemium model, with advertising on their homepage as the biggest revenue source, and a market cap of $11.42B.
Not a hacker.
I'm trying to track these sites, mainly for my own amusement: https://github.com/KeenRivals/Bugsite-Index