It is surprising that a project this mission critical sits at the very bottom of the scale when it comes to how much the development process is oriented toward reliability. There are no systematic unit tests and no systematic documentation; the best you get is a bunch of disorganized integration tests. It is not even at the level you would expect from a decently maintained business project: https://github.com/openssl/openssl
Perhaps the open source model of development is just not very good for software of this kind. Of course it's good that the source is open for everyone to look at and potentially contribute to, but without funding, a real process, and a full-time team, it seems to me it is hard to get the level of quality required.
I also wonder how much the big institutions care about this stuff in the end. Intel hires a bunch of guys to do formal models of their processors to ensure bugs aren't shipped to millions of customers; why is nobody funding a formally specified version of SSL? For other mission-critical systems, like what goes into spacecraft, or hospitals, or gets developed in the military, there are rigorous processes in use to prevent stupid mistakes, so it's somewhat disappointing that the major infrastructure pieces don't receive this kind of treatment.
However, there are gotchas. At this point the TLS protocol has gotten pretty complex. Even the act of just stating what the desired security properties are is not simple. Furthermore, understanding what their proof means requires you to go through at least two of their papers (and probably both of their tech reports to grok it fully). Then you need to look at the code itself and the different assertions/assumptions spread throughout, and make sure that those conform to the properties you expect. Even if you are familiar with the field, starting from scratch would probably take you a couple of weeks just to gain confidence that what they prove is correct. And it is possible that there are some features of TLS that they don't implement (as you can tell, I haven't taken the couple of weeks to fully understand their stuff).
As I said in a similar post, their proofs show that their implementation is logically correct--their implementation will behave according to their formalization of the RFCs.
This says nothing about how secure it is with respect to side-channel attacks. For example, they admit that they did not model time in their proofs, so they have NOT proved that they are immune to timing attacks. Moreover, F# runs on top of the unverified CLI--bugs in the CLI can be used to break miTLS.
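To make the timing point concrete, here's a minimal sketch in C (my own illustration, not anything taken from miTLS or OpenSSL) of the kind of property a functional-correctness proof doesn't cover: whether a comparison leaks, through its running time, where two secrets first differ. The proofs say the right bytes come out; they say nothing about how long it takes.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper for illustration only. Unlike a memcmp() that
     * returns at the first mismatch, this always touches every byte, so
     * its running time does not reveal where the inputs differ. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];   /* accumulate differences without branching */
        return diff == 0;          /* 1 if equal, 0 otherwise */
    }

Whether compiled code actually stays constant-time is, of course, exactly the sort of thing that lives below the level their formalization models.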
My assumption would be that this formally verified implementation is not optimized, and therefore runs rather slowly compared to the C implementations. Plus it requires the .NET framework or Mono to run it. Am I off base?
I would imagine it is 'optimised for verification' rather than for speed, ie. the code will be written to follow the structure of the specification.
However, the nice thing about having a verified implementation is that it can be used as the specification (reference implementation) for any other optimisations you want to make.
Such a two-step process is usually much more tractable than trying to verify optimised code directly. This is used extensively by BedRock ( http://plv.csail.mit.edu/bedrock/ ) which uses functional programs as specifications and an assembly-like language for the "real"/optimised implementation. The resulting verification problems are straightforward enough to be mostly automated.
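As a toy illustration of that two-step idea (my own sketch in C, nothing to do with BedRock's actual tooling, which works on verified functional specs rather than runtime checks): keep an obviously-correct reference routine around as the specification and check the optimised routine against it.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Byte-at-a-time XOR: the "specification" -- slow but obviously right. */
    static void xor_ref(uint8_t *out, const uint8_t *a, const uint8_t *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = a[i] ^ b[i];
    }

    /* Word-at-a-time XOR: the "optimised" version we actually want to ship. */
    static void xor_fast(uint8_t *out, const uint8_t *a, const uint8_t *b, size_t n)
    {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            uint64_t wa, wb, wo;
            memcpy(&wa, a + i, 8);     /* memcpy keeps the loads alignment-safe */
            memcpy(&wb, b + i, 8);
            wo = wa ^ wb;
            memcpy(out + i, &wo, 8);
        }
        for (; i < n; i++)
            out[i] = a[i] ^ b[i];      /* tail bytes */
    }

    int main(void)
    {
        uint8_t a[37], b[37], r1[37], r2[37];
        for (size_t i = 0; i < sizeof a; i++) {
            a[i] = (uint8_t)(i * 7);
            b[i] = (uint8_t)(i * 13 + 1);
        }
        xor_ref(r1, a, b, sizeof a);
        xor_fast(r2, a, b, sizeof a);
        assert(memcmp(r1, r2, sizeof r1) == 0);  /* optimised must agree with the spec */
        return 0;
    }

A runtime comparison like this is obviously far weaker than the machine-checked equivalence these tools give you, but the workflow has the same shape: the simple version is the spec, the fast version is the artifact.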
No, you're right on target. They are using non-optimized data structures, and rely on (Bouncy Castle's) managed code for encryption. From their paper, the throughput is around 10x slower than OpenSSL (but much closer to JSSE). My guess is that one could probably speed this up significantly, while still retaining the guarantees, but that's probably less interesting to these guys.
As for Mono or .NET, I assume so (AFAIK that's the only way to run F# applications).
There are certainly large commercial entities who have sufficient incentive to keep OpenSSL secure that they really ought to be contributing actively to the project; I wonder whether Amazon might step up given how badly (and publicly - witness Mojang's taking minecraft services offline and pointing the finger at Amazon as the vendor they were waiting on to fix things - https://twitter.com/notch/status/453529143121309696) they got bitten by this vulnerability. Cloud hosting companies clearly have a responsibility to deliver trustworthy services, and that means they need to be deploying software stacks they can rely on not to screw them; either they will need to step up and support OpenSSL or switch to different SSL solutions that have stronger guarantees. Network appliance vendors like F5 and Cisco also have a clear interest in fixing this stuff. Question is, will they?
(disclosure - I work for IBM, though not in cloud infrastructure. What I say for Amazon applies to our cloud services guys too.)
I'm not sure that OpenSSL is the project they ought to be contributing to. It looks to be beyond repair architecturally (as a project as well as codebase).
It's a readily-available collection of complicated things (ciphers, digests, cryptographic protocols, etc) that everyone needs. Implementing 'yourself' (your company, whomever) takes lots of time and thus money. It seems the world made the assumption that this open source project was the end-all be-all of cryptographic implementations.
It was developed outside the US at a time when the US had export restrictions on strong crypto. Now that those restrictions are gone, anybody can just use NSS instead.
Are the alternatives much better? GnuTLS has had its fair share of embarrassing bugs too, and I can't think of a third open source product that's as mature.
NSS is more complicated, for a lot of reasons. As I recall, it handles its own keystore which it doesn't share with other implementations (e.g. the ca-certificates package in Ubuntu/Debian); it requires you to initialize the keystore manually and teardown when you're done, but sometimes you don't know if anything else has done the initialization, so you don't know if it's safe for you to tear it down.
There are other issues as well; not blocking problems necessarily, but reasons why it might not be a great implementation or why it would break the way current SSL works for distro maintainers and users.
It's hardly a problem of open source; it's a problem of open incentives. While nobody disagrees the project is critical for everyone, are you (or I) willing to make it a priority? It's essentially a "who will build the roads" issue.
Yeah, free software isn't a development model and it doesn't mean it's made by amateurs. As another prominent free project with a much better track record for security and stability, consider OpenSSH, the crown jewel of the OpenBSD crowd, a free software distribution itself popularly renowned for its security. Theo's criticism here of OpenSSL carries a lot of weight.
Of course they are disclosing their vulnerabilities; they disclose everything, and they have an established audit process. Quite the leaders in security in the free software world:
Generalization in this case is really unhelpful, since there is almost no diversity in SSL libraries. There are three of them, but almost everyone uses OpenSSL.
One case does not make a trend at all, and two cases are the weakest statistical evidence for a trend there is.
On the positive side, the open source model of development does allow projects to use not only what is common, but also the exciting frontier of security models. My project has used GnuTLS with PGP for six years now, nicely avoiding both of those major vulnerabilities, and that was only possible because someone found themselves wanting to implement the new TLS standard that supported PGP keys.
The same license that some libraries inside StarCraft 2 have.
But Blizzard must be one of those small companies that do not care about profits, or have no lawyers. No "serious company" or serious software would dare to use the LGPL, right?
But instead of arguing license religion, I suggest we move back to the topic of OpenSSL and vulnerabilities.
I would actually be interested in hearing some of the issues related to the LGPL in video games and other projects. Are there stories / write-ups on this that you could link to, or would you mind explaining it a bit?
I'm not aware of any long form text on this, but the issue is relatively straightforward.
A publisher hires a studio to make a game, which then gets published on platforms that actively prevent users from (compiling and) running their own software (consoles, iOS).
Since the user is not provided with the tools to replace a library used in a game, the terms of the LGPL cannot be met even if the studio were to release the source to their version of the library.
In fact, even if they released the source to their whole game (something the publisher would never allow) the LGPLv3 still cannot be met because of the anti-Tivoisation clause, which stipulates that users must be able to run their own compiled version of the software.
Would a clause in LGPLv3 that exempts the publisher/distributor from limitations beyond their control (console/iOS restrictions) help?
Also, I think the user isn't prevented from recompiling and running altogether; a user can get a developer account. Although, I guess that presents an additional cost, something the (L)GPL might be against.
>Would a clause in LGPLv3 that exempts the publisher/distributor from limitations beyond their control (console/iOS restrictions) help?
I doubt very much that RMS would want to add such a clause. Remember, the GPL and LGPL are political tools as much as software licences. They exist to move the world in a certain direction. Adding such clauses would reduce their reason for existing.
A linking exception from the library authors could accomplish that, I believe it's what GCC uses. LGPLv2 may also be ok, but GPL licenses in general are complex enough that no one is entirely certain.
If the library is statically linked (which is the case at least on consoles), the user would also need the game's source code. I think the restriction is enough to trigger the (L)GPL clauses anyway.
> Would a clause in LGPLv3 that exempts the publisher/distributor from limitations beyond their control (console/iOS restrictions) help?
Tivo would just create a complicated enough corporate structure to provide the same shroud for themselves and would then be able to satisfy all the requirements.
Based on some informal research I did ~10 years ago (i.e. I read the LGPL as a high school student), the LGPL requires, among other things, an end-user license allowing reverse engineering. The LGPL isn't just "my code is GPL, the rest of the program is whatever"; it puts forth some requirements for the entire finished product that are either impractical (must include source code) or illegal (would violate NDAs or other license terms for the console libraries) for console games.
Only reverse engineering for debugging modifications of the LGPL code in derivative works.
If you try to reverse engineer the protocol StarCraft 2 uses with Battle.net, you will have a hard time arguing in court that the LGPL license gives you permission to do so. If I recall right, StarCraft 2 has some heavy restrictions on reverse engineering for this specific purpose, and I trust their lawyers to actually understand this difference.
For console games (which StarCraft 2 is not), the console environment is incompatible with the LGPL. If adoption of a security library depends on the console market, then such adoption won't happen.
My attitude towards complaints about open source projects is that if you think the errors are rudimentary, then just submit some patches. If there is not enough test coverage, then add some tests. This applies especially if you believe that the project is critically important.
I think what might be happening here is expert syndrome. People may be told that if they're not experts then they shouldn't be reviewing or changing the code.
Even if you work on a commercial project with a reasonably stable core team, people just committing stuff, even if it's pretty good stuff in itself, won't result in good overall code quality. You need a systematic process: a detailed coding standard, requirements for test and documentation coverage of submitted code, precise guidelines for contributing, and rules of code review. And you need people who understand the whole project, review pretty much all of the code changes, keep an overall development schedule in mind (including refactorings and technical improvements), and put constant work into maintaining the overall integrity of the project:
This is pretty much a must for any project where there is more than one person working, and the more people contributing or the more mission-critical the project the more of this kind of high-level coordination is required. Some open source projects with strong leadership do get to this kind of integrity, but most don't. It's easier when there is a relatively small team of very dedicated individuals, but some large projects have succeeded to some extent in building a real development culture in an open source setting, like Linux.
back in 2010, my business partner, marco peereboom, submitted a patch to openssl to add support for aes-xts. it was coded by joel sing, now a google employee and golang dev. they _didn't even respond to the mailing list email_ and after marco nagged for a reply the response was "we have a different plan for how to implement XTS" (i'm paraphrasing). _2 years later_, they added XTS support.
the openssl dev team is not responsive, doesn't accept contributions and generally speaking suffers from "you're not an expert" syndrome. look how expertly they've managed their project!
A while back, I tried submitting a patch to OpenSSL. It was a 3-line change (IIRC) related to the build process - in a particular esoteric setup, the build failed. Got literally zero replies regarding the issue on the mailing list, their IRC channel and on my/their github pull request.
I'd love to contribute more often to other OSS projects. But this behavior is more common than not.
I've also submitted patches to OpenSSL and been entirely ignored[1]. The latest of which strictly improved testing, documentation and included a detailed study and write-up of actual bugs in OpenSSL and downstream code[2].
My conclusion was that the OpenSSL project is not interested in external contributions.
This has not been my experience. I've contributed to many different projects, large and small (chpasswd, sendmail, apache, git, gerrit, openconnect, homebrew, msmtp, textmate, ...) and never had trouble getting a patch accepted.
Those projects don't suck, though. When projects (continue to) suck, it's usually because their maintainers suck. When maintainers suck, it's hard to get patches in.
The failure mode with this approach is that it disallows coaching.
There may be some bug in a library that's similar to what I do for my day job, and I spot it immediately. Or perhaps I've seen the inevitable results of certain design decisions play out in multiple organizations.
I may not have time to code up a patch. Frankly, I've got about 5 projects on the go besides work and kids.
So, FOSS projects may lose out on that kind of expertise that could easily be crowd-sourced if it didn't ask so much of contributors.
Well, I can code myself out of a hole I have coded myself into. But I guess the world is a better place because I don't try to help crypto projects by contributing.
Stiff, I think you answered your own question. Intel HIRES them, open source projects don't generally hire people. They sit around and wait for someone to contribute. Are you truly surprised that a volunteer created software is not as rigorously tested as software created by Intel?
I had some problems expressing what bothers me clearly and edited the comment heavily; perhaps it makes more sense now. You're right, it is not that surprising, but it's still disappointing that even the most rudimentary best practices are not adopted. Have a look at SQLite for comparison: also an open source project, also in C, certainly less mission critical, and what a difference:
sqlite does not seem less mission critical to me, and definitely relied on funding:
"D. Richard Hipp designed SQLite in the spring of 2000 while working for General Dynamics on contract with the United States Navy.[7] Hipp was designing software used on board guided missile destroyers" -- http://en.wikipedia.org/wiki/Sqlite#History
I work for a very large company that relies on a fork (with contributions back upstream) of SQLite for a majority of its massive enterprise SOA. It is not just unpaid volunteers keeping that project going.
What do you consider when choosing which SQLite issues to assign resources to? I would guess it would start with issues relevant to your roadmap. If that's the case with most enterprise FOSS contributors, they most likely trusted the features of OpenSSL they were using, and thus saw no reason to go poking around that section of the code. It's understandable why a team might choose not to perform an ad-hoc security audit of features that pass specs, even more so when such an audit requires niche expertise. We can hope this bug changes that attitude and more enterprises with the resources and knowledge start performing security and encryption audits. Just as your buildings have security guards, we need proactive and preemptive audits of at least the most common libraries in use, and flagging of software that implements unaudited encryption libraries. A Travis CI-like badge on GitHub for these audit metrics would bring attention to the problem. We could call it EncryptCI. Maybe this already exists?
Being FOSS doesn't have to mean relying on volunteers. Linux is mostly written by paid developers; why isn't OpenSSL, considering its reach in the commercial world?
Both, as they provide the bulk of the code. It would be more illuminating to examine where the unpaid volunteers contribute. My guess would be device drivers, but I don't know.
I agree this is incredibly surprising. Even beyond typical production testing, it seems to me that critical infrastructure crypto should have a formalized testing structure based on logic and information theory (as I had assumed OpenSSL did). Build trust from the bottom up. Test that only known, affirmatively tested primitives are used for memory allocation and other known-sensitive operations. Things like buffer overflows, range checking, executable code in data, etc. can all be easily tested.
There is a lot of work in crypto research about trust in the logical and mathematical sense, why is this work not applied to software testing at least for infrastructure crypto?
P.S. By way of incentivizing this work... seems like a pretty good dissertation topic.
Chrome and Firefox both use NSS. I would assume that the big institutions looked at OpenSSL and decided their money would be better invested in other projects.
Has anyone started a rumor yet that the NSA infiltrated the OpenSSL development team to make OpenSSL ineffective and full of holes?
The convoluted code of OpenSSL alone (from yesterday's Hackernews post) seems like a great way to add all sorts of "bugs" inadvertent or not.
Unfortunately with the Snowden disclosures, there isn't much that I rule out of bounds for the NSA when it comes to things critical to internet security. OpenSSL is so widely used and critical, it would be silly to think that it would escape scrutiny by the NSA.
Remember that the Snowden docs say that the NSA can break SSL/TLS/https/VPNs, etc., but we do not know the full details: http://blog.cryptographyengineering.com/2013/12/how-does-nsa... But the one thing all of these technologies (SSL, TLS, https, VPNs) have in common is usually OpenSSL.
There's a talk that was given in Belgium / Brussels at FOSDEM2014 two months ago or so by Poul-Henning Kamp (FreeBSD) regarding the NSA and how he'd do it if he had to create holes in software:
He's talking specifically about OpenSSL quite a lot (basically saying it's too complex to ever be secure and probably received many "security patches" from NSA employees).
The entire talk is an eye opener. He explains how NSA shills are reading reddit / HN and poisoning communities / standards / protocols / etc. How everything is made, on purpose, needlessly complex to prevent honest developers from working on important things.
He talks about shills submitting a few correct patches over the months / years, slowly gaining reputation among the community and then misusing that trust to submit (not so) subtle patches introducing security holes on purpose.
He mentions a few of the "common mantra" repeated often (including here) by people who have an interest in the status quo.
He explains why SSL/TLS is broken and says that the "SEC" part of "DNSSEC" is not going to be that secure ; )
I think that the problem is much worse than most people think and that Poul-Henning Kamp is closer to the truth than the ones constantly repeating "bug happens" as if nothing malicious was ever going on.
I'm watching this right now, it's brilliant! This is exactly the kind of stuff I love to think about, but PHK has done a wonderful job thinking it out to a lot of detail. I suggest everyone drop what you're doing and watch it asap. It's exciting and really holds your interest.
I particularly appreciated the "QUEEN success example: SSC" (self-signed certificates, starts at slide 23), which also got a good laugh from the audience ...
(if you're just watching the video, this is the text of the second slide in the PDF, not shown at FOSDEM:
"""
What is this ?
This is a ficticious NSA briefing I gave as the closing keynote at FOSDEM 2014
The intent was to make people laugh and think,
but I challenge anybody to prove it untrue.
Playlist spoiler:
Electric Light ORCHESTRA: Confusion
ABBA: Money, Money, Money
Backman Turner OVERDRIVE: Taking Care Of Business
QUEEN: Bicycle Race
Beastie BOYS: Sabotage
PASADENA Roof Orchestra: Pennies From Heaven
"""
)
The NSA isn't the only entity who could stand to benefit from a privately held exploit to most SSL implementations. The NSA is also not the only group who could actually accomplish this.
Who's to say there aren't groups out there that are doing this already to steal and profit from corporate information, credit cards, etc.?
Reality is NSA has to do very very little to make this happen. Tiny nudges here and there. Profit motive encourages people to move towards complexity. Simple software commodifies too quickly.
Yes, but it's not reasonable to assume that our armchair "how I would do it" theorizing is, in fact, how they would do it. We can assume that they have come up with the idea, but maybe they discarded it for reasons that aren't obvious to us.
Sorry to introduce the hyperbole, or not moderate it.
When the government runs a campaign to cause the public to fear and doubt your neighbor that is the beginning of a dark time ahead. This idea that they are one of you, or someone on reddit is just poison.
On an emotional level, more and more Edward Snowden is becoming a mystical figure.
> How everything is made, on purpose, needlessly complex to prevent honest developers from working on important things.
Ah, so that's why OpenSSL is not implemented as a RoR web application with REST-API and Cucumber tests in a TDD way so that Johnny Webmonkey can easily contribute. I always assumed it had to do with portability and performance.
The heartbleed bug looks exactly like an NSA backdoor:
* the protocol extension rfc and implementation are from the same person (who is now working for the largest German IT service contractor; formerly state owned T-Systems).
* the extension provides means for one party to send arbitrary data which needs to be returned - a keepalive mechanism would have worked with an empty packet
* there is no input validation on network originated data, in a crypto library, in 2011
This has huge implications for all of the Internet, while the individual parts remain reasonably deniable.
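For what it's worth, here is a rough sketch in C of the missing validation the third point is about (hypothetical function and names, simplified from RFC 6520's message layout, not the actual OpenSSL code): the peer-supplied payload length has to be checked against the number of bytes actually received before anything is echoed back.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Heartbeat message layout per RFC 6520: 1-byte type, 2-byte payload
     * length, payload, and at least 16 bytes of padding. */
    int build_heartbeat_response(const uint8_t *record, size_t record_len,
                                 uint8_t *out, size_t out_cap, size_t *out_len)
    {
        const size_t padding = 16;
        if (record_len < 1 + 2 + padding)
            return -1;                              /* silently drop malformed message */

        size_t payload = ((size_t)record[1] << 8) | record[2];

        /* The crucial check: the claimed payload must fit inside what we received. */
        if (1 + 2 + payload + padding > record_len)
            return -1;
        if (1 + 2 + payload + padding > out_cap)
            return -1;

        out[0] = 0x02;                              /* heartbeat_response */
        out[1] = (uint8_t)(payload >> 8);
        out[2] = (uint8_t)(payload & 0xff);
        memcpy(out + 3, record + 3, payload);       /* echo only bytes we actually got */
        memset(out + 3 + payload, 0, padding);      /* padding (random in practice) */
        *out_len = 1 + 2 + payload + padding;
        return 0;
    }

Without the length-versus-record check, the memcpy reads whatever happens to sit past the received message in memory, which is exactly the bug being discussed.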
* T-Systems was never state-owned. The German Federal Post used to be state owned but was split into separate post, communications and banking divisions in 1995. Deutsche Telekom IPO'd in 1996. T-Systems was founded in 2000 as a subsidiary of Deutsche Telekom. The German state currently holds around 15% of Deutsche Telekom stock.
* RFC 6520 was published in February 2012. At that time Robin Seggelmann wasn't employed by T-Systems but was instead writing his dissertation at the University of Duisburg-Essen [1].
* His dissertation [2] actually gives an explanation for the payload:
"The payload of the HeartbeatRequest can be chosen by the implementation, for example simple sequence numbers or something more elaborate. The HeartbeatResponse must contain the same payload as the request it answers, which allows the requesting peer to verify it. This is necessary
to distinguish expected responses from delayed ones of previous requests, which can occur because of the unreliable transport."
This is the first that I've had time to actually sit down and look at code, the RFC, etc, and I'm scratching my head.
The rationale for the payload for DTLS (sent over UDP) is reasonable, but I'm scratching my head over why the payload is even there for TLS. I guess there's some logic in making the protocol the same irrespective of underlying transport, but the payload is completely redundant for reliable protocols, which will handle any retransmission or reordering automatically.
Edit: Upon further (amateur and arguably ill-informed) reflection I'd argue that the heartbeat functionality shouldn't apply to TLS at all. It's needless complexity that can be handled by the underlying reliable transport. That doesn't change that the OpenSSL project clearly has code-quality issues, but it would have averted this particular fiasco, at least.
I am dismayed that a security-oriented protocol isn't focused on minimalist design. That seems very backward to me.
If you ever implement connection oriented protocols, you will very quickly find the need for a heartbeat mechanism - otherwise you have no way of telling whether the remote end is dead, or if is just busy creating data for you/doesn't have any data for you right now.
Not having heartbeats, you end up relying only on arbitrary timers to time out your connections, which are far from optimal, or you end up relying only on the underlying protocol which is not enough in most cases.
TCP doesn't provide that for you. (TCP does provide keepalives, intended for other purposes; they might be a poor man's substitute, but they aren't exposed to the TCP user in a particularly useful way - it's also the wrong layer for this problem.)
TCP heartbeats certainly will let you know if the remote end is entirely dead.
They won't let you know if the application process is jammed up --- but TLS heartbeats won't do that for you either! (For example, it's common to have the crypto handled by front-end processes, with the application logic elsewhere.) So if that's what you want, then you need heartbeats at the application layer, not in TLS.
Given all that, the case for heartbeats at the TLS layer for TLS over TCP seems ... weak. Particularly since simply having them there violates the commonly cited best practice that security kernels should be kept small. And this violation has already had disastrous consequences.
Besides, for all the times I see people making this argument, I've yet to see them cite an example of a real problem in a real-world deployment for which TLS heartbeats were used and turned out to be the best solution.
I'm familiar with the paradigm. I've written some code to talk to dodgy embedded devices that don't "talk" unless they have something to say and I have ended up having to use timers. It sucks, to be sure.
I looked back to RFC1122 to get a feel for the nature of TCP keepalives, and I would agree that they're unsuitable for this application.
If we posit that a keepalive mechanism is useful for TLS I'd still argue that a payload is unnecessary because TCP will handle in-order delivery. A NOOP with some random-length padding would have worked just as well.
It's easy to see why, without the benefit of hindsight, one might think that the minimal approach was to have the same extensions apply to both DTLS and TLS.
I don't know where to go look (mailing list archives? meeting minutes?) to find any discussion about this extension (I'm woefully ignorant of Internet standards processes), but it would certainly be nice to review that discussion.
Given the rather dramatic differences that need to occur in a protocol designed to work over unreliable versus reliable transport, though, it seems like a bad idea to try to keep "parity" between DTLS and TLS. I can see how somebody might see it as "elegant" to do so, but I'd argue they're different animals solving different problems.
It seems odd to, in a security context, say "The payload of the HeartbeatRequest can be chosen by the implementation", giving a huge amount of freedom to implementers (apparently it was determined that "something more elaborate" might extend to an implementation needing to send a full 64K of chosen payload data) without further investigating whether any particular choices made by the implementation might have consequences for the security of the protocol.
Why allow variable length payloads at all? Why allow more than about 128 bits of data for distinguishing responses? I guess you might argue a client might want to disguise the nature of these heartbeat packets from an eavesdropper, so they need to be variable in size - that's reasonable, but if it's a requirement, then surely you'd want to comment on the security risk of using a "simple sequence of numbers" rather than "something more elaborate". 64K of data, though? That seems like a lot of leeway to give an implementation - considerably more than enough to disambiguate heartbeat packets; implementations are going to keep 64K of data around and compare it byte for byte to check whether the echo they got back matches the payload they transmitted earlier?
I really don't mean to ascribe malice here; I tend to assume this is just a bug, and that the payload for heartbeat packets gets to be up to 64K because once you've decided to make something a variable length buffer, you're probably choosing between having a one byte or a two byte field for length, and 256 bytes seems like it might be too small so you figure 'why not make the length field two bytes?' and then... oops; someone makes a buffer overrun and suddenly we're faced with 64K at a time of data being ripped out of any old server's heap.
I wonder if it's simply that the DTLS heartbeat is modeled on the ICMP echo-request packet? Pings also contain a blob of up to 64kB of arbitrary data (in the case of ICMP, this is/was handy for detecting pattern-dependent link problems). Unlike DTLS, though, the length of the payload is determined by the enclosing protocol, not redundantly by a length field in the packet itself.
If he's working for a German firm now, it wouldn't be an NSA backdoor. Maybe the US isn't the only government with an eye on developing these exploits.
There is absolutely no basis for such an inference. It could have been a one off job, he could be undercover, etc. In fact if I were a clandestine organization I would definitely try to recruit people from other countries to make matters appear less obvious.
This bug is too general and too wide open to be deliberate. Any college student could write code this shitty. An engineered flaw would have to be more subtle.
Check out phk's FOSDEM talk[0]. I've linked to the part where he talks about OpenSSL, but I'd suggest watching the whole thing. I'm now starting to believe the talk wasn't a joke at all...
I was there at the talk and while he put a humorous spin on it by playing the part of a NSA agent, it's also extremely insightful to see it from that point of view. And yeah, when you really think about it... OpenSSL is the NSA's playtoy.
Of course, there's no way to prove that. But really, does it matter? Whether the NSA is behind OpenSSL sucking or not... we have to assume they know of several backdoors/exploits, and the OpenSSL API still sucks and prevents people from doing productive crypto.
Or maybe it's just a bug. There's really no need for tinfoil hat theories unless you have any evidence for a possible conspiracy.
That being said I agree that OpenSSL could do with a good code cleaning, but that's a massive undertaking, especially for such a popular library. You have to be backwards compatible.
Maybe a big name in software could go and write a modern crypto library without all the cruft of OpenSSL but for now we have to deal with it.
> Unfortunately with the Snowden disclosures, there isn't much that I rule out of bounds for the NSA when it comes to things critical to internet security. OpenSSL is so widely used and critical, it would be silly to think that it would escape scrutiny by the NSA.
There is a difference between infiltrating companies and deliberately introducing a bug that compromises a large chunk of the Internet. I would assume the NSA is interested in gaining access to systems in a way that doesn't allow basically anyone to ride on their coattails.
> I would assume the NSA is interested in gaining access to systems in a way that doesn't allow basically anyone to ride on their coattails.
Don't we actually already have evidence of other cases of the NSA adding backdoors to widely used systems, exploitable by anyone aware of them? Isn't that what the elliptic curve backdoor was? http://www.wired.com/2013/09/nsa-backdoor/all/
Apparently your assumption is not that safe to make.
I guess the NSA either figures nobody else will notice the backdoor, or just doesn't care. I guess only they know their motivations.
The mooted backdoor in Dual_EC_DRBG involves the possibility that the public parameters were generated from some secret values, such that knowing the secret values allows you to break the generator.
It doesn't allow just anyone to break it - merely knowing the backdoor is there isn't enough.
I don't see why people think that the NSA is simultaneously a genius-level force of the Illuminati, architecting events that will culminate years later, and yet would make the rookie-level mistake of introducing a backdoor that is protected only by obscurity.
They know other security services understand how to decompile code (or read the original source).
More importantly, they know that critical government services they can't predict will end up possibly running things like their broken version of OpenSSL. There's a FIPS standard, of course, but note that the Heartbleed page made quite clear that FIPS is vulnerable too.
> the NSA is interested in gaining access to systems in a way that doesn't allow basically anyone to ride on their coattails.
If possible, yes. However, if they were indeed responsible for this, they'd probably have a couple honeypots to detect exploitation by third parties. If they considered the third party harmless, they could decide whether the vulnerability should be "independently discovered" or not.
Planting a complicated exploit would be harder and increase the risk it could be traced back to them.
Interesting. However, your sources are rather obscure, and I'm not sure what you mean by "a weak form."
I'd say a Bayesian analysis is a way to evaluate the veracity of an argument from ignorance, not necessarily a replacement for that argument. An argument from ignorance is a strict subset of a Bayesian analysis in which there is no causal relationship between the missing evidence and the conclusion. This is such a case. So we can conclude, using a Bayesian analysis, that this is in fact an argument from ignorance.
Unless, of course, you have additional statistical samples that show a non-zero relationship between not knowing about the NSA's activity and the fact that they are engaged in that activity.
> Or maybe it's just a bug. There's really no need for tinfoil hat theories unless you have any evidence for a possible conspiracy.
But we do have "evidence for conspiracy". We know for certain that there are organizations with huge budgets that have introducing such vulnerabilities pretty much written into their mission statements. Why would we assume they would not be involved in something they obviously should be (however perverted their goals are)? It would be grossly incompetent of the NSA not to try introducing bugs into OpenSSL.
You have kept up to date with the news, right? Truecrypt, RSA... OpenSSL is low-hanging fruit by comparison. Of course it's back doored - probably more than once.
You're right that it needs cleaning - it needs a full audit. I'm surprised in the light of the Snowden revelations that none of us have suggested this sooner, but hindsight is a bitch and all that.
Lots of people have suggested it. Very few people have dedicated their time to helping do it, though. Welcome to the 'other people should do this' club!
I don't know what the gp's skills are, but assuming he's not a computer security professional, then the 'other people should do this' club is exactly the club he should be in. Would you trust his audit of the code if he wasn't heavily experienced? It's the same case as "don't roll your own crypto".
There's a difference between "trusting his audit of the code" and "trusting his code". Any interested and motivated person can audit code. If nothing is found, that might not prove much. But if a vulnerability is found, the audit was worthwhile from some perspective.
Sure, so long as the bug is a genuine bug and not a misunderstanding. Debian's openssh valgrind warning springs to mind. Crypto implementations can be subtle and non obvious. Maybe it's crap design for that reason, but it seems like it's what we've got to work with currently.
What would evidence of conspiracy look like? Do we need comments in the code, "this is an NSA back door...", seriously?
> You have to be backwards compatible
We have been making this trade-off for years and it won't get any easier in the future. It's about time we started a fresh initiative that doesn't make backwards compatibility a priority, except of course in the sense of maintainability moving forward.
There are very few competent FOSS crypto implementers, and unfortunately, they're spread very thinly. I'm not sure that having yet another crypto library is going to be helpful unless we actively move our users to the new library and kill the old projects.
I think now would be a great time to reconsider the OpenBSD IPsec debacle in this kind of new light. Correct me if I'm wrong, but doesn't SSH use OpenSSL libraries, and weren't the OpenBSD guys heavily involved in OpenSSL's development? A lot of people think the closed audit that found a minor unintentional bug was the last word on this subject, but the original guy who made the claim posted in 2012 further explaining how the FBI had made this attempt.
You are correct, just got done reading Theo's rants on the subject and realized I was mistaken, so apologies. The same style of potential subversion may be at hand, regardless.
I may be biased in that I have not written much C code in years (decades?), but whenever I find a codebase covered in IFDEFs, I start assuming that every single new IFDEF introduces a new condition into the system which has not been properly or recently tested, and that the software is horribly broken.
For me, fixing software is just as often removing IFDEFs along with unmaintained and broken code. It's IMO much better to be honest about "this isn't supported" than to pretend you support something you don't.
And this seems to be another one of those stories, coupled with a (bad?) case of NIH.
At a first approximation, no one should write their own memory allocator. Just as no one should write their own garbage collector, and no one should write their own filesystem.
All these subsystems have one thing in common: they are incredibly complex and very hard to get right, but seem easy on the outside.
These are complex to make robust, but they are not hard concepts. I think everyone should write their own GC, write a filesystem, and handle their own memory layer, at least once.
Treating lower levels of the stack as "too complex, there be dragons, just write some Javascript that hits RESTful API for JSON" is a great way to ensure you never progress as a software dev or an engineer.
I agree that reinventing the wheel poorly - without understanding geometry or physics - is a terrible idea. But I also think that we don't have enough people nowadays making toy wheels to learn about geometry and physics.
For each of the things you mentioned, there are concrete and good reasons why they grow complexity: filesystems need to be robust to media corruption and power faults; memory allocators need to be highly tuned to OS and architecture memory layouts; GCs need to be optimized for the code patterns of the particular language they serve, as well as understand the same OS and hardware memory constraints as allocators.
But much of the complexity of these layers is not intrinsic to the problem, and not related to the above. In fact, most of the complexity and bugs of production code at these layers have to do with legacy compatibility, or code inheritance. (If you look at what Theo says about this specific OpenSSL bug, it arises because the OpenSSL team wrote their own allocator that doesn't use malloc() protection bits because those are not compatible across some platforms.)
As the old saying goes: in science, we advance by standing on each others' shoulders; in software, we stand on each others' toes.
> I think everyone should write their own GC, write a filesystem, and handle their own memory layer, at least once.
Of course. But not in production. Which is a detail missed by the snarky one-liner, maybe because one-liners suck. Still, the point isn't that people shouldn't ever write them, just that they shouldn't actually use the ones they developed while not being a part of a team of experts specializing in the issue at hand.
I think the important part of the previous comment is "At a first approximation" - of course there is nothing wrong in writing code with the primary goal of personal understanding, but re-inventing the wheel when there are potentially very serious consequences for others is hubris (unless, of course, you have a really good reason why, NIH not being a good reason in my book).
[NB Ken Shirriff's blog articles on Bitcoin are a superb example of someone coding in an effort to understand a system - but I don't think he is attempting to write a real Bitcoin library]
> These are complex to make robust, but they are not hard concepts.
Yes, yes, I didn't quantify my statement with "for production use".
Your sentence I quoted above was exactly my point: they aren't necessarily hard (or at least they don't seem hard), but they are very, very difficult to get right (robust). Getting systems like these to work well in practical production use is more than 80% of the effort.
I wasn't saying that no one should learn how a garbage collector works by implementing one. My point was that no one should implement their own garbage collector for a production system unless they already became an expert in the field of garbage collectors by implementing them for the last 10 years or so. Same goes for memory allocators. If someone thinks these are simple systems, it means they don't know what they don't know.
Indeed. The proper thing to do is: write your own malloc, learn an enormous amount from implementing malloc, never ever under any circumstances allow your malloc implementation to enter into production use.
Hopefully the act of writing your own malloc implementation will be enough to convince you not to use it in anger, but it's probably best to be clear on the subject.
The discouraging of everybody who wants to provide an alternative only locks us in to super-shitty software that got there first. Worse is Better reigns.
I mean, there is an extremely small chance that anybody will adopt your library even if it is unquestionably better than the dominant alternative. We don't actually need to discourage people; we need to do everything to encourage them. Sure, 99 out of 100 will be no better than our present crap. What does it matter? Few will adopt them anyway. The tragedy is that the better alternatives will probably meet the same fate.
The idea is to leave each component to the experts. If you're a concurrency/memory expert, then write a malloc library. If you're a cryptography expert, write your crypto code and use someone else's malloc.
"nobody" is not an absolute here. It's a reminder that the vast majority of people who get the idea to write their own memory allocator, garbage collection or filesystem will do an even worse job at it than the people who implemented the ones we typically use did, but it will end up going mostly unnoticed and possibly make it into some critical piece of software.
I don't understand this attitude. If something is incredibly complex and very hard to get right, that can indicate there is a mismatch between the tools you use to implement it and what you should be using.
With the right language it might be much more pleasant to implement:
Consider for example https://github.com/GaloisInc/halfs , a filesystem implemented in Haskell; I think it has ~10,000 LOC.
It can also mean that existing implementations are poorly documented and research papers on the subject place no importance on implementation details.
Of course you should not use your first implementation in production, but that is as obvious as not letting someone do brain surgery who has no prior training in it.
10 kLOC of Haskell for a filesystem seems like an awful lot, especially considering how much more concise and higher-level Haskell is supposed to be compared to languages like C. Maybe filesystem code just doesn't fit too well into the functional paradigm?
For most uses, you don't really have to get them right, though. The problem is knowing when yes, you absolutely do need to get this right.
The cost in effort is highly non-linear. It may be fairly straightforward to get the 80% solution. I agree that it's a problem when devs bring their home-grown 80% solutions to the table when you need a 99% solution.
I make games. No one dies if there's a bug in the AI. No one may even notice.
> All these subsystems have one thing in common: they are incredibly complex and very hard to get right, but seem easy on the outside.
And have teams of operating systems engineers whose entire existence is predicated on them, rather than them just being an afterthought so you can get some other stuff done. If the OS does it, you shouldn't.
Au contraire, crypto developers probably should write their own memory allocators (just, you know, not quite so badly), otherwise they're open to timing attacks.
There is a difference between complex and complicated. Crypto is a complex issue, with some subtleties that elude too many people. Low-level crypto, like this, has many small little details (such as allocating and deallocating buffers) that make it very complicated.
So what are the options then if OpenSSL isn’t fit for purpose? Is it possible to move wholesale to a different project? Are any of them trying to ease migration over from OpenSSL to themselves?
Not sure which one I'd pick; all of the main libraries seem either to have had very bad issues reported at one point or another, or to be maybe not used enough to inspire confidence. Anyway, here is a list: https://en.wikipedia.org/wiki/Comparison_of_TLS_Implementati...
Perhaps this task should be moved away from libraries (which in some cases are even statically linked and hard to update) into a well-audited daemon, written in a safe language. Or even into several separate daemons for better protection (each with the least privileges required and minimal interface), for example one for handling keys / signing requests only.
> And then we'd have potential exploits both in the daemon and in the safe language runtime/compiler.
That doesn't make the idea futile, especially considering the current alternative. A language runtime focused on safety and a daemon that uses it for that purpose are going to present less of an attack surface than ifdef soup with custom malloc implementations and hand-managed buffer transfers.
> And then we'd have potential exploits both in the daemon and in the safe language runtime/compiler.
Now we have exploits for libraries and runtimes that often allow access to arbitrary application data. If this functionality is in a separate process without such privileges, application data will be safe.
> One just needs to keep on looking for exploits and keep on patching them.
This approach hasn't worked in the past and will not work in the future either. Especially when we start with the existing, desolate and probably thoroughly compromised codebases.
> Now we have exploits for libraries and runtimes that often allow access to arbitrary application data. If this functionality is in a separate process without such privileges, application data will be safe.
This will prevent the prior vulnerability. Will it prevent the next one? Do you know that it won't make the next vulnerability worse?
A year or two ago I had to deal with a bunch of people complaining that the embedded device my team was selling was vulnerable to BEAST, because their scanning tool told them it was. I wrote a long essay for the support team about how BEAST is exploited and it's really hard to do it against a limited purpose control GUI with no publicly routeable address, and this worked for about half the people who complained. For the others we had to turn on RC4, which may have been even worse, but now their tool wasn't complaining at them so they were happy, even though they may have been worse off.
Thankfully my old team didn't rush to the new OpenSSL version so they got to avoid the past 48 hours being complete shit for them.
There aren't clean answers. Like with financial regulation, what you do to make sure the last thing never happens again may create the next thing.
> Now we have exploits for libraries and runtimes that often allow access to arbitrary application data. If this functionality is in a separate process without such privileges, application data will be safe.
Even now, if OpenSSL had not made its own memory manager focused on performance, this bug would not have had such major consequences.
Frankly they should just accept performance penalties and make a memory manager that focuses on security.
As an example, all allocations should come from mmap so that any overflow will automatically segfault (with checks that one doesn't get contiguous pages from the OS, unmapping selectively to create disjoint areas). Every free should be a munmap, etc.
I do also agree that moving this to a separate process would be useful.
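To make the mmap idea above concrete, here's a rough sketch (an illustration only; not OpenSSL code, not production-grade, and hardened allocators such as OpenBSD's malloc do far more): back each allocation with its own mapping plus an inaccessible guard page, so an overflow faults instead of quietly reading a neighbouring buffer, and unmap on free so reuse faults too.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Assumes size > 0 and a Linux/BSD-style mmap with MAP_ANONYMOUS. */
    void *guarded_alloc(size_t size)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages = (size + page - 1) / page;
        size_t total = (data_pages + 1) * page;        /* data pages + 1 guard page */

        uint8_t *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;

        /* The last page stays mapped but inaccessible: overflowing into it segfaults. */
        if (mprotect(base + data_pages * page, page, PROT_NONE) != 0) {
            munmap(base, total);
            return NULL;
        }
        return base;
    }

    void guarded_free(void *ptr, size_t size)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages = (size + page - 1) / page;
        /* Unmap data and guard together; any dangling pointer now faults on use. */
        munmap(ptr, (data_pages + 1) * page);
    }

It would be slow (a syscall and at least a page per allocation), which is rather the point: take the performance hit in exchange for overflows and use-after-frees that crash loudly instead of leaking keys.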
Maybe at the OS level? You have to trust the OS anyway, so you don't lose anything. Allowing untrusted apps to do encryption makes auditing their activity much more difficult.
PolarSSL is used by the OpenVPN-NL variant of OpenVPN, which has a Dutch government 'seal of trust' attached to it. They require you to use OpenVPN-NL (instead of regular OpenVPN) for various purposes. They have issued a statement on their mailing list that they (of course) were not vulnerable.
That page has not been changed since May 7th, 2013; I am not sure this is still being actively pursued.
Also, I haven't seen this in the proposals for F21.
Good to know, I missed that obviously, but that might be because it wasn't a new proposal ;) I am following the proposal by email only and couldn't recall that one :-)
GnuTLS had its own problems recently (see: http://www.gnutls.org/security.html), and regarding NSS... I couldn't find a proper public "security" page for it (eg, release notes for 3.16 point to CVE-2014-1492; but the link to the bugzilla issue is not public), so I don't know.
Obviously a good security record doesn't imply things like a good code base, good practices, etc. (although it could).
Quote: "The cert_TestHostName function in lib/certdb/certdb.c in the certificate-checking implementation in Mozilla Network Security Services (NSS) before 3.16 accepts a wildcard character that is embedded in an internationalized domain name's U-label, which might allow man-in-the-middle attackers to spoof SSL servers via a crafted certificate."
I always felt like there unfortunately was no proper alternative to libopenssl (might be unfair but it seemed like a somewhat smelly library) but thanks to this event I've now been made aware of all the alternatives.
How much more fun/comfortable is NSS/GnuTLS to use in a typical C project in comparison to OpenSSL?
Great potential to learn from this Heartbleed, for developers and users alike.
There are quite a few alternatives (you may also want to look at matrixssl, polarssl/tropicssl and cryptlib), but if you use one of them, people will report interop problems as bugs in your code.
Been there, done that. Sad to say, but I now use openssl.
"Anyone who ever told you that swear words have no place in technical discussion is right. They're right, and sadly, they're part of the problem because they miss the point. The sterile word placement that's supposed to support an argument makes any true motivation indistinguishable from all the hired bullshit.
[...]
However, when someone starts swearing in technical discussion, showing emotion, that's a strong indicator that I'm about to receive wisdom. Wisdom is earned the hard way, and it is permanent, not like some statistically shaky performance benchmark that we'll all forget about next week."
I feel the same - I'd rather have people feeling passionate about their work and use swear words, than work with polite non-personalities that don't give a fuck.
Swearing isn't required for passion. Martin Luther King didn't have an (expletive of choice) dream, he just had a dream. There are plenty of people who care about their work and manage to express that without swearing. Similarly, there are plenty of people who swear about stuff who may well be committed, but it's highly questionable what they're committed to and whether it's to the benefit of the project / team.
If someone does swear (and as it goes, I'm fine with swearing for emphasis and swear way more than I feel I should) they need to be aware that some people find it offensive and that if they don't moderate their behaviour it may be taken very personally which can result in barriers between colleagues and / or low morale.
You can choose to say that that's the problem of the person reacting, but if you do so you need to be aware of the price that comes with it (that they're working in an environment with which they're not completely comfortable, which is unlikely to get the best out of them). The alternative is that the swearer moderates themselves (with which they in turn may not be comfortable).
Obviously the better you know your colleagues the more leeway you have. If you've built up trust and togetherness over time you can get away with anything, but even then you need to consider what happens when new people join the team.
But ultimately this is about people and where it's about people there are no absolutes - what works for one group will be kryptonite for another.
Theo has a twenty-year record of being a brash, non-calm[1] asshole. The pesky problem is that he has this very bad habit of being right, especially when he is angry, brash and profane. So yes, he may indeed need to enhance his calm, but at the same time, other software developers need to realize that the best course of action in avoiding his ire is to not write software that sucks.
It's the Steve Jobs conundrum - where someone is an asshole but clearly also very talented.
For me two thoughts come out of this:
1) Yes they've been productive the way they work, but have they been more or less productive than if they'd been just as right but a little less confrontational?
Research generally doesn't show confrontation to be the most productive approach for most people. Is their style productive, or is it unproductive but they get away with it because they're so good that people will put up with it?
You say that the best course of action if you want to avoid Theo's ire is not to write bad code, but how many people have taken another approach? The one that springs to mind is simply not working with him, and how much talent does the project miss out on because people just steer clear? I know plenty of very smart programmers who simply don't go near projects where the culture and personalities are like that. Bad behaviour towards others (even when you're right) limits the available pool of skills.
2) I can live with the arsehole geniuses, but others who aren't as able often use the likes of Jobs or Theo to justify their own poor, destructive behaviour. For me at least, that doesn't work - if you're going to be as big a dick but you're not as good as them, you're basically just a dick, and if the day eventually arrives when the IT industry doesn't have a massive skills shortage, those people will cease to be tolerated.
I absolutely depend on some swearing to draw a clear line between the factual presentation and everything else. I think this is why beer:30 and stepping away from the office with coworkers are important.
People matter. The way software makes them feel matters. Code smell and related concerns are best conveyed in profoundly human ways. Fuck it. Say it like you mean it.
You must be joking. "OpenSSL is not developed by a responsible team" is probably the most benign criticism Theo has ever issued.
And he's quite right. This is a class of bug that should not exist or, at the very least, should be restricted to the OSs where "malloc() performance is bad enough" to justify using their own implementation. Memory management is a non-trivial problem.
A secure library should be defensive in coding style and implementation, not sloppy. It should have defaults that err on the safe side, not the fast side, if you have to choose.
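To make "err on the safe side" concrete, here's a rough sketch (illustrative only - this is not the actual OpenSSL code, and handle_heartbeat is just a made-up name) of the kind of length check whose absence caused Heartbleed: the attacker-supplied payload length gets validated against the size of the record that actually arrived before anything is copied.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Parse a heartbeat-style message: 1 byte type, 2 bytes declared
     * payload length, then the payload itself. Echo back at most what
     * was really received. */
    int handle_heartbeat(const uint8_t *record, size_t record_len,
                         uint8_t *out, size_t out_cap)
    {
        if (record_len < 3)
            return -1;                  /* too short to even hold the header */

        uint16_t payload_len = (uint16_t)((record[1] << 8) | record[2]);

        /* The defensive default: never trust the declared length. */
        if ((size_t)payload_len + 3 > record_len || payload_len > out_cap)
            return -1;                  /* silently drop malformed requests */

        memcpy(out, record + 3, payload_len);   /* copy only what was sent */
        return (int)payload_len;
    }

A sloppy version skips the middle check and memcpy()s payload_len bytes regardless, happily reading past the end of the received record.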
It's a massive open source project that's more than 15 years in the making and supports a massive range of architectures and ever-moving standards. Shit happens; we need to figure out how to make sure it won't happen again at this scale, and there's really no need for name-calling.
We need to stop having such a fucking accepting attitude about sloppy engineering in security-critical code. People who don't want to be held to a high standard should get out of there and go do something that can't break the internet.
This entire thread is eye-opening. It's as if open source gets a pass on almost everything related to quality because it's free and open. The unspoken belief is that a magic person will come along and fix everything in the indeterminate future. Also, the mantra: "all bugs are shallow."
If you refuse to understand the reasons for engineering, conceptual integrity, and discipline in writing code that needs to be readable (read: dull and boring), then I don't know what to say.
I'm not saying we should accept sloppy engineering in security-critical code, I'm saying that it's pointless to insult people who contribute to free open source projects from the outside.
We want to incite more people to audit and contribute to these projects, not the other way around. Insult the engineering, not the engineers. Or, alternatively, don't insult and propose improvements.
> We want to incite more people to audit and contribute to these projects, not the other way around
Prepare for some OSS heresy: in many projects, contributions are overrated.
Why? Presumably, the author is the one who feels the most joy/pain from what they've made. They're the ones who've had to grow and prune the code over time. They're the ones who've had to respond to features breaking their mental model at some point. They're the ones trying to make a cohesive abstraction. On crappy projects, the users shoulder more and more of this burden because the author did not.
I've had good luck with contributions in OSS (both making and accepting), but I realize a majority of them are "this isn't working for me, so I added this" without sitting down and considering its effect on the entire design. I hate rejecting contributions, but if they compromise the project's modeling of the problem, or its code quality, then it's for the better.
OSS lends itself to feature creep, just like commercial software. The marketing side of OSS rewards this, by incentivizing you to make more commits (such traction!) and accept changes from everyone (because, community!). New and shiny is a horrible heuristic to use when evaluating infrastructure (read: lots of OSS).
I didn't see it as an insult. I would react against personal insults as well; that's wholly unnecessary. But saying that a project is not cautious enough is not an insult - if it were, we couldn't talk about quality at all.
Your first statement negates your second. Shit keeps happening because people keep using the excuse that it's a large project. The OpenSSL team needs to take a step back, start proactively auditing their own code, adding unit tests, getting rid of stupidity like having their own caching malloc library. When a project shows a consistent lack of responsibility like the OpenSSL project has, then calling the maintainers irresponsible /is/ the responsible thing to do.
If Heartbleed is the result of supporting a massive range of architectures and ever-moving standards, the answer is to do less. If OpenSSL can't maintain quality code and architecture support and feature support, the answer is NOT to let go of code quality.
Theo's an asshole, but he's quite often right. I don't like his attitude either, but it has to be said that in this particular case the consequences have been - and/or will be - extraordinary.
Well, really the consequences would have been virtually the same either way, since the number of OpenSSL deployments where the system malloc uses guard pages is a rounding error.
Not necessarily. It's probably still more likely that the bug would have been discovered sooner, even if only via a relatively small number of crashes.
Yes, but because of the caching malloc, the OpenBSD implementations are just as affected as the rest of the world's. Irresponsible software design sucks, plain and simple.
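For anyone wondering why the freelist matters: here's a toy sketch (not OpenSSL's actual allocator, just the general shape of an application-level cache) of why such a wrapper defeats a hardened system malloc. Freed buffers are handed straight back to the next caller, old contents intact, and never get returned to the system where guard pages or unmapping could turn an overread into a crash.

    #include <stdlib.h>

    #define BUF_SIZE    4096
    #define CACHE_SLOTS 64

    static void *freelist[CACHE_SLOTS];
    static int   freelist_top;

    void *cache_alloc(void)
    {
        if (freelist_top > 0)
            return freelist[--freelist_top];  /* recycled: old data still there */
        return malloc(BUF_SIZE);              /* only here can the system
                                                 malloc's hardening do anything */
    }

    void cache_free(void *p)
    {
        if (freelist_top < CACHE_SLOTS)
            freelist[freelist_top++] = p;     /* kept inside the process, never
                                                 handed back to the system */
        else
            free(p);
    }

So even on a platform whose malloc would have flagged the bad read, the read usually lands in one of these recycled buffers instead.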
The lower you are on the stack, the more you need quality. And the more you need quality, the more likely you are to run into hackers with 'tough love' personalities, because they simply cannot tolerate poor excuses for code.
How they express it is a reflection of their character, but they will let you know when you screw up. Theo was extremely tactful given the enormous impact of this bug.
I echo what someone said earlier on another OpenSSL thread: security applications should not be handling their own memory at a low level like this. It's just too easy to mess up, and it often leads to the worst vulnerabilities.
I think many eyes + code audits is a good mix. It seems like OpenSSL has many eyes but the code is so incredibly complex and "big" that audits are nearly impossible (or prohibitively expensive).
Perhaps we need to hit the reset button, and have a bit more oversight into "NewSSL" with continuous audits. There are enough big players in need of secure communication that money shouldn't be a problem.
So "many eyes" means "paid researches at Google and a security firm"?
Also, this bug was in place for what, two years? If the many eyes hypothesis has a two year lead time to find bugs this severe, we can stop talking about it because it's fucking worthless.
It worked in the sense that the lifeboats on the Titanic worked - they did, after all, save hundreds of lives.
The combination of time and severity in this case should mean that we can move on from naive 'all bugs are shallow' dogma towards developing a more evidence-based approach to the verification of critical software.
If there are so many problems with OpenSSL, why are there no alternatives that are readily available and anywhere near as functional?
The whole internet runs OpenSSL, but why hasn't anyone tried to do something different? I know it's complicated, but if a few big companies really chose to put some muscle behind it, it could happen, right?
That really doesn't illuminate anything, because you'd also need to explain why open source has been spectacularly successful generating other public goods (linux and others).
The economics of open source are pretty clear at this point. The software industry spends a lot of money supporting open source, because it's in their own interest to do so -- it's cheaper to share the costs than to build your own infrastructure from scratch every time, when the infrastructure is not part of your competitive advantage.
This particular bug was found by people that Google pays to audit open source code all day, in an effort to improve said code.
> open source has been spectacularly successful generating other public goods (linux and others).
No one doubts that some open source software has been very successful. What I'm not sure of is whether levels of open source provisioning are optimal: maybe there should be 10X what there is now. Maybe Linux should dominate the desktop world, but does not due to lack of funding. This is Bastiat's "what is not seen" - what we have now is good, but perhaps it could be better. Maybe a lot better, under different circumstances.
Also, that link mentions Coasian solutions, and privileged goods, which between them explain a lot about open source software, no?
If this is a consequence of the difficulties with the production of open source software, does that mean there are much more secure proprietary implementations of SSL/TLS? Which ones?
So what if someone tried to crowdfund a new implementation (or a thorough rewrite of OpenSSL, if that makes more sense). Could they raise something on the order of $1M? It seems like it would cost that much for, I'm guessing, three absolutely top developers for two years. Unlike a lot of crowdfunded projects, this one would not launch a business -- there's no additional revenue opportunity for the developers once it's completed -- so the amount would have to compensate them not just for their time but also for their opportunity cost.
On the other hand, given the importance of TLS to the Internet, $1M seems less than trivial -- literally pocket change, if the cost were well distributed among the millions of websites using SSL.
Yes, C gives you a machine gun and no guard against your foot. However, the OpenBSD team also has a philosophy, and design measures, to ensure that said machine gun never gets anywhere near your foot. Given their security track record, I think they've done a pretty good job of that.
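One small example of the kind of design measure I mean (a sketch with a made-up set_name helper, not anything from the OpenBSD tree): OpenBSD introduced strlcpy()/strlcat() precisely because strcpy/strncpy make it so easy to walk off the end of a buffer or lose the terminating NUL.

    #include <stdio.h>
    #include <string.h>   /* strlcpy is here on OpenBSD; on Linux it has
                             historically lived in libbsd, not glibc */

    /* Copy an untrusted name into a fixed buffer. strlcpy always
     * NUL-terminates and returns the length it wanted to copy, so
     * truncation is trivial to detect. */
    int set_name(char dst[64], const char *untrusted)
    {
        if (strlcpy(dst, untrusted, 64) >= 64) {
            fprintf(stderr, "name truncated\n");
            return -1;
        }
        return 0;
    }

It's a tiny guard, but the libc and the idioms around it are full of them.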
Can you give an example of a "secure" OS not written in C so I can see what this mythical beast looks like?
I guess we should just let you code it in enterprise Java and pay you dividends. Get a hold of yourself - most open source is actually funded by large companies in the first place. And being drunk has nothing to do with bad code; I, for one, code better when inebriated. Your straw-man banter doesn't need to be on HN.
I know I'm replying to a troll, but... you have been following the news, right? Would you rather have us use the backdoored RSA bSAFE library? That was developed by paid professionals, who by your logic should be the paragons of competence.