Perhaps the open source model of development is just not very good for software of this kind. Of course it's good that the source is open for everyone to inspect and potentially contribute to, but without funding, a real process, and a full-time team, it seems to me it is hard to reach the level of quality required.
I also wonder how much the big institutions care about this stuff in the end. Intel hires a bunch of people to build formal models of its processors to ensure bugs aren't shipped to millions of customers, so why is nobody funding a formally specified version of SSL? Other mission-critical systems, like what goes into spacecraft, or hospitals, or gets developed in the military, have rigorous processes in place to prevent stupid mistakes, so it's somewhat disappointing that the major infrastructure pieces don't receive this kind of treatment.
However, there are gotchas. At this point the TLS protocol has become pretty complex. Even the act of just stating what the desired security properties are is not simple. Furthermore, understanding what their proof means requires you to go through at least two of their papers (and probably both of their tech reports to grok it fully). Then you need to look at the code itself and the various assertions/assumptions spread throughout it, and make sure that those conform to the properties you expect. Even if you are familiar with the field, starting from scratch would probably take you a couple of weeks just to gain confidence that what they prove is correct. And it's quite possible that there are some features of TLS that they don't implement (as you can tell, I haven't taken the couple of weeks to fully understand their stuff).
This says nothing about how secure it is with respect to side-channel attacks. For example, they admit that they did not model time in their proofs, so they have NOT proved that they are immune to timing attacks. Moreover, F# runs on top of the unverified CLI; bugs in the CLI can be used to break miTLS.
However, the nice thing about having a verified implementation is that it can be used as the specification (reference implementation) for any other optimisations you want to make.
Such a two-step process is usually much more tractable than trying to verify optimised code directly. This is used extensively by BedRock ( http://plv.csail.mit.edu/bedrock/ ) which uses functional programs as specifications and an assembly-like language for the "real"/optimised implementation. The resulting verification problems are straightforward enough to be mostly automated.
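As a toy illustration of the two-step idea (my own example, nothing to do with BedRock's actual code): the slow, obviously-correct function serves as the specification, and the optimised one is checked against it. Here the check is exhaustive testing; BedRock would discharge the same obligation with a machine-checked proof.

```c
/* Reference-as-specification sketch: popcount_spec is the "spec",
   popcount_fast is the optimised implementation under test. */
#include <assert.h>
#include <stdint.h>

/* Specification: count set bits the obvious way. */
static int popcount_spec(uint8_t x) {
    int n = 0;
    for (int i = 0; i < 8; i++)
        n += (x >> i) & 1;
    return n;
}

/* Optimised implementation (SWAR bit tricks). */
static int popcount_fast(uint8_t x) {
    x = x - ((x >> 1) & 0x55);
    x = (x & 0x33) + ((x >> 2) & 0x33);
    return (x + (x >> 4)) & 0x0F;
}

int main(void) {
    /* Exhaustive check over the whole 8-bit input space. */
    for (int v = 0; v < 256; v++)
        assert(popcount_fast((uint8_t)v) == popcount_spec((uint8_t)v));
    return 0;
}
```

Verifying the tiny spec itself then becomes the much smaller trusted step.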
As for Mono or .NET, I assume so (AFAIK that's the only way to run F# applications).
(disclosure - I work for IBM, though not in cloud infrastructure. What I say for Amazon applies to our cloud services guys too.)
NSS, which is used by Mozilla and Chrome. For Apache, just google mod_nss or read this: https://stomp.colorado.edu/blog/blog/2010/06/04/on-setting-u...
It looks like NSS isn't supported on nginx. Hope somebody puts that on their todo list!
There are other issues as well; not blocking problems necessarily, but reasons why it might not be a great implementation or why it would break the way current SSL works for distro maintainers and users.
One case does not make a trend at all, and two cases are the weakest statistical evidence of a trend there is.
On the positive side, the open source model of development does allow projects to use not only what is common, but also the exciting frontier of security models. My project has used GnuTLS with PGP for 6 years now, nicely avoiding both of those major vulnerabilities, and that was only possible because someone found themselves wanting to implement the new TLS standard that supported PGP keys.
Sadly, it'll never become popular because of the license.
But Blizzard must be one of those small companies that don't care about profits, or have no lawyers. No "serious company" or serious software would dare to use the LGPL, right?
But instead of arguing license religion, I suggest we move back to the topic of OpenSSL and vulnerabilities.
But yes, let's go back to lamenting the state of crypto libraries, we can at least agree on that.
A publisher hires a studio to make a game, which then gets published on platforms that actively prevent users from (compiling and) running their own software (consoles, iOS).
Since the user is not provided with the tools to replace a library used in a game, the terms of the LGPL cannot be met even if the studio were to release the source to their version of the library.
In fact, even if they released the source to their whole game (something the publisher would never allow) the LGPLv3 still cannot be met because of the anti-Tivoisation clause, which stipulates that users must be able to run their own compiled version of the software.
Also, I think the user isn't prevented from recompiling and running altogether; a user can get a developer account. Although I guess that presents an additional cost, something the (L)GPL might be against.
I doubt very much that RMS would want to add such a clause. Remember, the GPL and LGPL are political tools as much as software licences. They exist to move the world in a certain direction. Adding such clauses would reduce their reason for existing.
If the library is statically linked (which is the case at least on consoles), the user would also need the game's source code. I think the restriction is enough to trigger the (L)GPL clauses anyway.
Tivo would just create a complicated enough corporate structure to provide the same shroud for themselves and will then be able to satisfy all requirements.
If you try to reverse engineer the protocol StarCraft 2 uses with Battle.net, you will have a hard time arguing in court that the LGPL license gives you permission to do so. If I recall correctly, StarCraft 2 has some heavy restrictions on reverse engineering for this specific purpose, and I trust their lawyers to actually understand this difference.
For console games (which StarCraft 2 is not), the console environment is incompatible with the LGPL. If adoption of a security library depends on the console market, then such adoption won't happen.
I think what might be happening here is expert syndrome. People may be told that if they're not experts then they shouldn't be reviewing or changing the code.
This is pretty much a must for any project where there is more than one person working, and the more people contributing or the more mission-critical the project the more of this kind of high-level coordination is required. Some open source projects with strong leadership do get to this kind of integrity, but most don't. It's easier when there is a relatively small team of very dedicated individuals, but some large projects have succeeded to some extent in building a real development culture in an open source setting, like Linux.
back in 2010, my business partner, marco peereboom, submitted a patch to openssl to add support for aes-xts. it was coded by joel sing, now a google employee and golang dev. they _didn't even respond to the mailing list email_ and after marco nagged for a reply the response was "we have a different plan for how to implement XTS" (i'm paraphrasing). _2 years later_, they added XTS support.
the openssl dev team is not responsive, doesn't accept contributions and generally speaking suffers from "you're not an expert" syndrome. look how expertly they've managed their project!
I'd love to contribute more often to other OSS projects. But this behavior is more common than not.
My conclusion was that the OpenSSL project is not interested in external contributions.
This has not been my experience. I've contributed to many different projects, large and small (chpasswd, sendmail, apache, git, gerrit, openconnect, homebrew, msmtp, textmate, ...) and never had trouble getting a patch accepted.
edit: downvotes, really?
There may be some bug in a library that's similar to what I do for my day job, and I spot it immediately. Or perhaps I've seen the inevitable results of certain design decisions play out in multiple organizations.
I may not have time to code up a patch. Frankly, I've got about 5 projects on the go besides work and kids.
So, FOSS projects may lose out on that kind of expertise that could easily be crowd-sourced if it didn't ask so much of contributors.
"D. Richard Hipp designed SQLite in the spring of 2000 while working for General Dynamics on contract with the United States Navy. Hipp was designing software used on board guided missile destroyers" -- http://en.wikipedia.org/wiki/Sqlite#History
There is a lot of work in crypto research about trust in the logical and mathematical sense; why is this work not applied to software testing, at least for infrastructure crypto?
P.S. By way of incentivizing this work... seems like a pretty good dissertation topic.
Well, if they did care more, you'd see engineers from Intel, IBM, et al. contributing to the project in droves, like they do with the Linux kernel.
The convoluted code of OpenSSL alone (from yesterday's Hacker News post) seems like a great way to add all sorts of "bugs", inadvertent or not.
Unfortunately with the Snowden disclosures, there isn't much that I rule out of bounds for the NSA when it comes to things critical to internet security. OpenSSL is so widely used and critical, it would be silly to think that it would escape scrutiny by the NSA.
Remember that the Snowden docs say that the NSA can break SSL/TLS/https/VPNs, etc., but we do not know the full details: http://blog.cryptographyengineering.com/2013/12/how-does-nsa... But the one thing all of these technologies (SSL, TLS, https, VPNs) have in common is usually OpenSSL.
He's talking specifically about OpenSSL quite a lot (basically saying it's too complex to ever be secure and probably received many "security patches" from NSA employees).
The entire talk is an eye opener. He explains how NSA shills are reading reddit / HN and poisoning communities / standards / protocols / etc. How everything is made, on purpose, needlessly complex to prevent honest developers from working on important things.
He talks about shills submitting a few correct patches over the months / years, slowly gaining reputation among the community and then misusing that trust to submit (not so) subtle patches introducing security holes on purpose.
He mentions a few of the "common mantra" repeated often (including here) by people who have an interest in the status quo.
He explains why SSL/TLS is broken and says that the "SEC" part of "DNSSEC" is not going to be that secure ; )
I think that the problem is much worse than most people think and that Poul-Henning Kamp is closer to the truth than the ones constantly repeating "bug happens" as if nothing malicious was ever going on.
Developers are quite willing and able to do this all on their own. Feature/scope creep is pretty horrible these days.
I particularly appreciated the "QUEEN success example: SSC" (self-signed certificates, starts at slide 23), which also got a good laugh from the audience ...
(if you're just watching the video, this is the text of the second slide in the PDF, not shown at FOSDEM:
What is this ?
This is a fictitious NSA briefing I gave as the closing keynote at FOSDEM 2014
The intent was to make people laugh and think,
but I challenge anybody to prove it untrue.
Electric Light ORCHESTRA: Confusion
ABBA: Money, Money, Money
Bachman Turner OVERDRIVE: Taking Care Of Business
QUEEN: Bicycle Race
Beastie BOYS: Sabotage
PASADENA Roof Orchestra: Pennies From Heaven
The NSA isn't the only entity who could stand to benefit from a privately held exploit to most SSL implementations. The NSA is also not the only group who could actually accomplish this.
Who's to say there aren't groups out there that are doing this already to steal and profit from corporate information, credit cards, etc.?
When the government runs a campaign to make the public fear and doubt their neighbors, that is the beginning of a dark time ahead. This idea that they are one of you, or someone on reddit, is just poison.
On an emotional level, Edward Snowden is more and more becoming a mystical figure.
Ah, so that's why OpenSSL is not implemented as a RoR web application with REST-API and Cucumber tests in a TDD way so that Johnny Webmonkey can easily contribute. I always assumed it had to do with portability and performance.
* the protocol extension RFC and implementation are from the same person (who is now working for the largest German IT service contractor, the formerly state-owned T-Systems).
* the extension provides means for one party to send arbitrary data which needs to be returned - a keepalive mechanism would have worked with an empty packet
* there is no input validation on network originated data, in a crypto library, in 2011
This has huge implications for all of the Internet, while the individual parts remain reasonably deniable.
* RFC 6520 was published in February 2012. At that time Robin Seggelmann wasn't employed by T-Systems but was instead writing his dissertation at the University of Duisburg-Essen.
* His dissertation actually gives an explanation for the payload:
"The payload of the HeartbeatRequest can be chosen by the implementation, for example simple sequence numbers or something more elaborate. The HeartbeatResponse must contain the same payload as the request it answers, which allows the requesting peer to verify it. This is necessary
to distinguish expected responses from delayed ones of previous requests, which can occur because of the unreliable transport."
The rationale for the payload for DTLS (sent over UDP) is reasonable, but I'm scratching my head over why the payload is even there for TLS. I guess there's some logic in making the protocol the same irrespective of underlying transport, but the payload is completely redundant for reliable protocols, which will handle any retransmission or reordering automatically.
Edit: Upon further (amateur and arguably ill-informed) reflection I'd argue that the heartbeat functionality shouldn't apply to TLS at all. It's needless complexity that can be handled by the underlying reliable transport. That doesn't change that the OpenSSL project clearly has code-quality issues, but it would have averted this particular fiasco, at least.
I am dismayed that a security-oriented protocol isn't focused on minimalist design. That seems very backward to me.
Without heartbeats, you end up relying on arbitrary timers to time out your connections, which are far from optimal, or on the underlying protocol, which is not enough in most cases.
TCP doesn't provide that for you. (TCP does provide keepalives, intended for other purposes; they might be a poor man's substitute, but they aren't exposed to the TCP user in a particularly useful way. It's also the wrong layer for this problem.)

They won't let you know if the application process is jammed up, but TLS heartbeats won't do that for you either! (For example, it's common to have the crypto handled by front-end processes, with the application logic elsewhere.) So if that's what you want, then you need heartbeats at the application layer, not in TLS.

Given all that, the case for heartbeats at the TLS layer for TLS over TCP seems ... weak. Particularly since simply having them there violates the commonly cited best practice that security kernels should be kept small. And this violation has already had disastrous consequences.

Besides, for all the times I see people making this argument, I've yet to see them cite an example of a real problem in a real-world deployment for which TLS heartbeats were used and turned out to be the best solution.
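For concreteness, this is roughly what the TCP-level keepalive knob looks like on Linux (a sketch; TCP_KEEPIDLE and friends are Linux-specific, and the numbers are arbitrary). It also illustrates the limitation: the tuning is per-socket, OS-level, and invisible to the peer application.

```c
/* Enable and tune TCP keepalives on an existing socket (Linux). */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int fd) {
    int on = 1, idle = 60, interval = 10, count = 5;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) != 0)
        return -1;
    /* Start probing after 60s idle, probe every 10s, give up after 5. */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
    return 0;
}
```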
I looked back to RFC1122 to get a feel for the nature of TCP keepalives, and I would agree that they're unsuitable for this application.
If we posit that a keepalive mechanism is useful for TLS I'd still argue that a payload is unnecessary because TCP will handle in-order delivery. A NOOP with some random-length padding would have worked just as well.
Given the rather dramatic differences that need to occur in a protocol designed to work over unreliable versus reliable transport, though, it seems like a bad idea to try to keep "parity" between DTLS and TLS. I can see how somebody might see it as "elegant" to do so, but I'd argue they're different animals solving different problems.
Why allow variable length payloads at all? Why allow more than about 128 bits of data for distinguishing responses? I guess you might argue a client might want to disguise the nature of these heartbeat packets from an eavesdropper, so they need to be variable in size - that's reasonable, but if it's a requirement, then surely you'd want to comment on the security risk of using a "simple sequence of numbers" rather than "something more elaborate". 64K of data, though? That seems like a lot of leeway to give an implementation - considerably more than enough to disambiguate heartbeat packets; implementations are going to keep 64K of data around and compare it byte for byte to check whether the echo they got back matches the payload they transmitted earlier?
I really don't mean to ascribe malice here; I tend to assume this is just a bug, and that the payload for heartbeat packets gets to be up to 64K because once you've decided to make something a variable length buffer, you're probably choosing between having a one byte or a two byte field for length, and 256 bytes seems like it might be too small so you figure 'why not make the length field two bytes?' and then... oops; someone makes a buffer overrun and suddenly we're faced with 64K at a time of data being ripped out of any old server's heap.
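To make the bug class concrete, here's a simplified sketch in C (hypothetical names; this is not the actual OpenSSL code). The record carries its own payload-length field, and the broken path trusts it instead of the number of bytes that actually arrived off the wire.

```c
/* Simplified sketch of the Heartbleed bug class. */
#include <stdlib.h>
#include <string.h>

struct heartbeat {
    unsigned char  type;
    unsigned short payload_len;  /* attacker-controlled, up to 64K   */
    unsigned char *payload;      /* points into the received record  */
};

/* Broken: echoes payload_len bytes even if the record only carried a
   few, so memcpy reads whatever sits next to it on the heap. */
unsigned char *echo_broken(const struct heartbeat *hb) {
    unsigned char *resp = malloc(hb->payload_len);
    if (resp)
        memcpy(resp, hb->payload, hb->payload_len);
    return resp;
}

/* Fixed: drop any record whose claimed length exceeds what actually
   arrived (record_len = bytes received; 1 type byte + 2 length bytes). */
unsigned char *echo_fixed(const struct heartbeat *hb, size_t record_len) {
    if ((size_t)hb->payload_len + 3 > record_len)
        return NULL;             /* silently discard, per the RFC */
    unsigned char *resp = malloc(hb->payload_len);
    if (resp)
        memcpy(resp, hb->payload, hb->payload_len);
    return resp;
}
```

The actual fix was along these lines: bounds-check the claimed payload length against the received record (plus minimum padding) and silently discard nonconforming messages.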
This bug is too general and too wide open to be deliberate. Any college student could write code this shitty. An engineered flaw would have to be more subtle.
I was there at the talk and while he put a humorous spin on it by playing the part of a NSA agent, it's also extremely insightful to see it from that point of view. And yeah, when you really think about it... OpenSSL is the NSA's playtoy.
Of course, there's no way to prove that. But really, does it matter? Whether the NSA is behind OpenSSL sucking or not... we have to assume they know of several backdoors/exploits, and the OpenSSL API still sucks and prevents people from doing productive crypto.
That being said I agree that OpenSSL could do with a good code cleaning, but that's a massive undertaking, especially for such a popular library. You have to be backwards compatible.
Maybe a big name in software could go and write a modern crypto library without all the cruft of OpenSSL but for now we have to deal with it.
For examples of NSA efforts in related areas (which I figured you would already know):

http://www.cnbc.com/id/101301261

http://nation.time.com/2013/11/04/google-shocked-the-nsa-hac...

http://www.reuters.com/article/2014/03/31/us-usa-security-ns...

https://www.techdirt.com/articles/20130909/11430124454/john-...
Don't we actually already have evidence of other cases of the NSA adding backdoors to widely used systems, exploitable by anyone aware of them? Isn't that what the elliptic curve backdoor was? http://www.wired.com/2013/09/nsa-backdoor/all/
Apparently your assumption is not so safe to make.
I guess the NSA either figures nobody else will notice the backdoor, or just doesn't care. I guess only they know their motivations.
It doesn't allow just anyone to break it - merely knowing the backdoor is there isn't enough.
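Right; for Dual_EC_DRBG specifically, the trapdoor is keyed. Roughly, following Shumow and Ferguson's 2007 analysis (a sketch from memory, simplified):

```latex
\[
  s_{i+1} = x(s_i P), \qquad r_i = \operatorname{trunc}\!\bigl(x(s_i Q)\bigr)
\]
% P and Q are the standardised curve points; s_i is the internal state
% and r_i the output. Only someone who knows the secret scalar d with
% P = dQ can exploit it: recover the full point R = s_i Q from the
% truncated output, then compute dR = d(s_i Q) = s_i P, whose
% x-coordinate is the next state. Merely knowing the backdoor exists
% gives you nothing without d.
```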
I don't see why people think that the NSA is simultaneously a genius-level force of the Illuminati, architecting events that will culminate years later, and yet would make the rookie-level mistake of introducing a backdoor that is protected only by obscurity.
They know other security services understand how to decompile code (or read the original source).
More importantly, they know that critical government services they can't predict will end up possibly running things like their broken version of OpenSSL. There's a FIPS standard, of course, but note that the Heartbleed page made quite clear that FIPS is vulnerable too.
If possible, yes. However, if they were indeed responsible for this, they'd probably have a couple honeypots to detect exploitation by third parties. If they considered the third party harmless, they could decide whether the vulnerability should be "independently discovered" or not.
Planting a complicated exploit would be harder and increase the risk it could be traced back to them.
Can't look right now but was this 'bug' introduced at some point in time?
I'd say a Bayesian analysis is a way to evaluate the veracity of an argument from ignorance, not necessarily a replacement for that argument. An argument from ignorance is the special case of a Bayesian analysis in which there is no causal relationship between the missing evidence and the conclusion. This is such a case. So we can conclude, via a Bayesian analysis, that this is in fact an argument from ignorance.

Unless, of course, you have additional statistical samples that show a non-zero relationship between not knowing about the NSA's activity and the fact that they are engaged in that activity.
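To make that structure concrete, here's the update rule in symbols (a toy formalisation; the labels H and E are mine, not from the thread):

```latex
\[
  P(H \mid \neg E) \;=\; \frac{P(\neg E \mid H)\, P(H)}{P(\neg E)}
\]
% H = "the NSA tampered with the code", ¬E = "no evidence was found".
% If the missing evidence is independent of H, i.e.
% P(¬E | H) = P(¬E), the posterior equals the prior P(H): observing
% nothing tells you nothing. That degenerate case is precisely the
% argument from ignorance; any genuine dependence moves the posterior.
```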
But we do have "evidence for conspiracy". We know for certain that there are organizations with huge budgets that have introducing such vulnerabilities pretty much written into their mission statements. Why would we assume they would not be involved in something they obviously should be (however perverted their goals are)? It would be grossly incompetent of the NSA not to try introducing bugs into OpenSSL.
You're right that it needs cleaning - it needs a full audit. I'm surprised in the light of the Snowden revelations that none of us have suggested this sooner, but hindsight is a bitch and all that.
>You have to be backwords compatible
we have been making this trade off for years and it wont get any easier in the future, its about time we started a fresh initiative that doesn't make backwords compatibility a priority, except of course in the sense of maintainability moving forward.
There are very few competent FOSS crypto implementers, and unfortunately, they're spread very thinly. I'm not sure that having yet another crypto library is going to be helpful unless we actively move our users to the new library and kill the old projects.
Can't wait to find out SSH is compromised too!
For me, fixing software is just as often removing ifdefs together with unmaintained and broken code. It's IMO much better to be honest about "this isn't supported" than to pretend you support something you don't.
And this seems to be another one of those stories, coupled with a (bad?) case of NIH.
All these subsystems have one thing in common: they are incredibly complex and very hard to get right, but seem easy on the outside.
These are complex to make robust, but they are not hard concepts. I think everyone should write their own GC, write a filesystem, and handle their own memory layer, at least once.
I agree that reinventing the wheel poorly - without understanding geometry or physics - is a terrible idea. But I also think that we don't have enough people nowadays making toy wheels to learn about geometry and physics.
For each of the things you mentioned, there are concrete and good reasons why they grow complexity: filesystems need to be robust to media corruption and power faults; memory allocators need to be highly tuned to OS and architecture memory layouts; GCs need to be optimized for the code patterns of the particular language they serve, as well as understand the same OS and hardware memory constraints as allocators.
But much of the complexity of these layers is not intrinsic to the problem, and not related to the above. In fact, most of the complexity and bugs of production code at these layers have to do with legacy compatibility, or code inheritance. (If you look at what Theo says about this specific OpenSSL bug, it arises because the OpenSSL team wrote their own allocator that doesn't use malloc() protection bits because those are not compatible across some platforms.)
As the old saying goes: in science, we advance by standing on each others' shoulders; in software, we stand on each others' toes.
Of course. But not in production. Which is a detail missed by the snarky one-liner, maybe because one-liners suck. Still, the point isn't that people shouldn't ever write them, just that they shouldn't actually use the ones they developed while not being a part of a team of experts specializing in the issue at hand.
[NB Ken Shirriff's blog articles on Bitcoin are a superb example of someone coding in an effort to understand a system - but I don't think he is attempting to write a real Bitcoin library]
Yes, yes, I didn't quantify my statement with "for production use".
Your sentence I quoted above was exactly my point: they aren't necessarily hard (or at least they don't seem hard), but they are very, very difficult to get right (robust). Getting systems like these to work well in practical production use is more than 80% of the effort.
I wasn't saying that no one should learn how a garbage collector works by implementing one. My point was that no one should implement their own garbage collector for a production system unless they already became an expert in the field of garbage collectors by implementing them for the last 10 years or so. Same goes for memory allocators. If someone thinks these are simple systems, it means they don't know what they don't know.
After writing your own allocator, you'll never view malloc() as a cheap and simple operation again!
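If you want the flavour of that exercise, even the most naive allocator (a sketch; my names, not from any real project) already forces alignment and exhaustion decisions on you, and free() is conspicuously absent, because supporting it is where the real complexity starts:

```c
/* A deliberately tiny bump allocator, as a learning toy only. */
#include <stddef.h>
#include <stdint.h>

static uint8_t arena[1 << 16];   /* 64 KiB of backing storage */
static size_t  next_off = 0;

void *bump_alloc(size_t size) {
    /* Align every block for any object type. */
    size_t align = _Alignof(max_align_t);
    size_t start = (next_off + align - 1) & ~(align - 1);
    if (start > sizeof arena || size > sizeof arena - start)
        return NULL;             /* out of memory: nothing is recycled */
    next_off = start + size;
    return &arena[start];
}
```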
Hopefully the act of writing your own malloc implementation will be enough to convince you not to use it in anger, but it's probably best to be clear on the subject.
I mean, there is an extremely small chance that anybody will adopt your library even if it is unquestionably better than the dominant alternative. We don't actually need to discourage people; we need to do everything to encourage them. Sure, 99 out of 100 will be no better than our present crap. What does it matter? Few will adopt them anyway. The tragedy is that the better alternatives will probably meet the same fate.
It can also mean that existing implementations are poorly documented and research papers on the subject place no importance on implementation details.
Of course you should not use your first implementation in production, but that is as obvious as not letting someone do brain surgery who has no prior training in it.
The cost in effort is highly non-linear. It may be fairly straightforward to get the 80% solution. I agree that it's a problem when devs bring their home-grown 80% solutions to the table when you need a 99% solution.
I make games. No one dies if there's a bug in the AI. No one may even notice.
And have teams of operating systems engineers whose entire existence is predicated on them, rather than them just being an afterthought so you can get some other stuff done. If the OS does it, you shouldn't.
You are right: it could be much simpler.
There is no silver bullet for this. One just needs to keep on looking for exploits and keep on patching them.
However there is a lot that can be improved on OpenSSL side. I hope they change their memory allocation strategy after this.
That doesn't make the idea futile, especially considering the current alternative. A language runtime focused on safety and a daemon that uses it for that purpose are going to present less of an attack surface than ifdef soup with custom malloc implementations and hand-managed buffer transfers.
Now we have exploits for libraries and runtimes that often allow access to arbitrary application data. If this functionality is in a separate process without such privileges, application data will be safe.
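A bare-bones sketch of that separation, assuming POSIX fork/socketpair (the names and the trivial echo "protocol" are mine, purely illustrative): the child would hold the keys and do the crypto, so a memory-disclosure bug in the parent can't leak them. Real designs, such as OpenSSH's privilege separation, add sandboxing on top.

```c
/* Two-process split: parent = application logic, child = "crypto". */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 1;

    if (fork() == 0) {           /* child: the isolated crypto process */
        close(sv[0]);
        char buf[256];
        ssize_t n = read(sv[1], buf, sizeof buf);
        if (n > 0)
            write(sv[1], buf, (size_t)n);  /* stand-in for sign/encrypt */
        _exit(0);
    }

    close(sv[1]);                /* parent: never touches key material */
    write(sv[0], "hello", 5);
    char reply[256];
    ssize_t n = read(sv[0], reply, sizeof reply);
    if (n > 0)
        printf("crypto process returned %zd bytes\n", n);
    return 0;
}
```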
> One just needs to keep on looking for exploits and keep on patching them.
This approach hasn't worked in the past and will not work in the future either. Especially when we start with the existing, desolate and probably thoroughly compromised codebases.
This will prevent the prior vulnerability. Will it prevent the next one? Do you know that it won't make the next vulnerability worse?
A year or two ago I had to deal with a bunch of people complaining that the embedded device my team was selling was vulnerable to BEAST, because their scanning tool told them it was. I wrote a long essay for the support team about how BEAST is exploited and how hard it is to pull off against a limited-purpose control GUI with no publicly routable address, and this worked for about half the people who complained. For the others we had to turn on RC4, which may have been even worse, but now their tool wasn't complaining at them, so they were happy, even though they may have been worse off.
Thankfully my old team didn't rush to the new OpenSSL version so they got to avoid the past 48 hours being complete shit for them.
There aren't clean answers. Like with financial regulation, what you do to make sure the last thing never happens again may create the next thing.
Even now, if OpenSSL had not made its own memory manager focused on performance, this bug would not have had such major consequences.

Frankly, they should just accept the performance penalty and build a memory manager that focuses on security.

As an example, all allocations could come from mmap, so that any overflow automatically segfaults (with checks that one doesn't get contiguous pages from the OS, unmapping selectively to create disjoint areas). Every free would be a munmap, etc.
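A minimal sketch of what that could look like, assuming POSIX mmap/mprotect on Linux/BSD (my names; not production code): each allocation gets its own mapping and is right-aligned against an inaccessible guard page, so even a one-byte overrun faults instead of silently reading neighbouring heap data.

```c
/* Guard-page allocator sketch: every overflow hits PROT_NONE memory. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

void *guarded_alloc(size_t size) {
    if (size == 0)
        return NULL;
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t data_pages = (size + page - 1) / page;
    size_t total = (data_pages + 1) * page;        /* +1 guard page */

    uint8_t *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    /* Make the trailing page inaccessible: overruns now segfault. */
    if (mprotect(base + data_pages * page, page, PROT_NONE) != 0) {
        munmap(base, total);
        return NULL;
    }
    /* Right-align the block against the guard page. */
    return base + data_pages * page - size;
}

void guarded_free(void *p, size_t size) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t data_pages = (size + page - 1) / page;
    /* The pointer always lies in the first page of the mapping, so
       masking off the page offset recovers the mmap base. */
    void *base = (void *)((uintptr_t)p & ~((uintptr_t)page - 1));
    munmap(base, (data_pages + 1) * page);
}
```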
I do also agree that moving this to a separate process would be useful.
GnuTLS had its own problems recently (see: http://www.gnutls.org/security.html), and regarding NSS... I couldn't find a proper public "security" page for it (e.g., the release notes for 3.16 point to CVE-2014-1492, but the link to the Bugzilla issue is not public), so I don't know.

Obviously a good security record doesn't imply things like a good code base, good practices, etc. (although it could).
Quote: "The cert_TestHostName function in lib/certdb/certdb.c in the certificate-checking implementation in Mozilla Network Security Services (NSS) before 3.16 accepts a wildcard character that is embedded in an internationalized domain name's U-label, which might allow man-in-the-middle attackers to spoof SSL servers via a crafted certificate."
Edit: Added Quote
My point is that having a proper security page is a good practice and I really appreciate it as user.
How much more fun/comfortable is NSS/GnuTLS to use in a typical C project in comparison to OpenSSL?
This Heartbleed offers great learning potential for developers and users alike.
Been there, done that. Sad to say, but I now use openssl.
"Anyone who ever told you that swear words have no place in technical discussion is right. They're right, and sadly, they're part of the problem because they miss the point. The sterile word placement that's supposed to support an argument makes any true motivation indistinguishable from all the hired bullshit.
However, when someone starts swearing in technical discussion, showing emotion, that's a strong indicator that I'm about to receive wisdom. Wisdom is earned the hard way, and it is permanent, not like some statistically shaky performance benchmark that we'll all forget about next week."
I feel the same - I'd rather have people feeling passionate about their work and use swear words, than work with polite non-personalities that don't give a fuck.
If someone does swear (and as it goes, I'm fine with swearing for emphasis, and swear way more than I feel I should), they need to be aware that some people find it offensive, and that if they don't moderate their behaviour it may be taken very personally, which can result in barriers between colleagues and/or low morale.

You can choose to say that that's the reacting person's problem, but if you do so you need to be aware of the price that comes with it (they're working in an environment in which they're not completely comfortable, which is unlikely to get the best out of them). The alternative is that the swearer moderates themselves (which they in turn may not be comfortable with).

Obviously the better you know your colleagues, the more leeway you have. If you've built up trust and togetherness over time you can get away with anything, but even then you need to consider what happens when new people join the team.
But ultimately this is about people and where it's about people there are no absolutes - what works for one group will be kryptonite for another.
For me two thoughts come out of this:
1) Yes they've been productive the way they work, but have they been more or less productive than if they'd been just as right but a little less confrontational?
Research generally doesn't show confrontation to be the most productive approach for most people. Is their style productive, or is it unproductive but they get away with it because they're so good that people will put up with it?
You say that the best course of action if you want to avoid Theo's ire is not to write bad code, but how many people have taken another approach? The one that springs to mind is not working with him and how much talent does the project miss out on because people just steer clear? I know plenty of very smart programmers who simply don't go near projects where the culture and personalities are like that. Bad behaviour towards others (even when you're right) limits the available pool of skills.
2) I can live with the arsehole geniuses but others who aren't as able often use the likes of Jobs or Theo to justify their own poor, destructive behaviour. For me at least, that doesn't work - if you're going to be as big a dick but you're not as good as them you're basically just a dick and if the day eventually arrives when the IT industry doesn't have a massive skills shortage, those people will cease to be tolerated.
People matter. The way software makes them feel matters. Code smell and related concerns are best conveyed in profoundly human ways. Fuck it. Say it like you mean it.
And he's quite right. This is a class of bug that should not exist or, at least, be restricted to the OSs where "malloc() performance is bad enough" to justify using their own implementation. Memory management is a non-trivial problem.
A secure library should be defensive in coding style and implementation, not sloppy. It should have defaults that err on the safe side, not the fast side, if you have to choose.
If you refuse to understand the reasons for engineering, conceptual integrity, and discipline in writing code that needs to be readable (read: dull and boring), then I don't know what to say.
We want to incite more people to audit and contribute to these projects, not the other way around. Insult the engineering, not the engineers. Or, alternatively, don't insult and propose improvements.
Prepare for some OSS heresy: in many projects, contributions are overrated.
Why? Presumably, the author is the one who feels the most joy/pain from what they've made. They're the ones who've had to grow and prune the code over time. They're the ones who've had to respond when a feature breaks their mental model. They're the ones trying to make a cohesive abstraction. On crappy projects, the users shoulder more and more of this burden because the author did not.
I've had good luck with contributions in OSS (both making and accepting), but I realize a majority of them are "this isn't working for me, so I added this" without sitting down and considering its effect on the entire design. I hate rejecting contributions, but if they compromise the project's modeling of the problem, or its code quality, then it's for the better.
OSS lends itself to feature creep, just like commercial software. The marketing side of OSS rewards this, by incentivizing you to make more commits (such traction!) and accept changes from everyone (because, community!). New and shiny is a horrible heuristic to use when evaluating infrastructure (read: lots of OSS).
How they express it is a reflection of their character, but they will let you know when you screw up. Theo was extremely tactful given the enormous impact of this bug.
Perhaps we need to hit the reset button, and have a bit more oversight into "NewSSL" with continuous audits. There are enough big players in need of secure communication that money shouldn't be a problem.
Also, this bug was in place for what, two years? If the many eyes hypothesis has a two year lead time to find bugs this severe, we can stop talking about it because it's fucking worthless.
If that's how we're measuring things, then closed source isn't going to win either, e.g. .
The combination of time and severity in this case should mean that we can move on from naive 'all bugs are shallow' dogma towards developing a more evidence-based approach to the verification of critical software.
The whole internet runs OpenSSL, but why hasn't anyone tried to do something different? I know it's complicated, but if a few big companies really chose to put some muscle behind it, it could happen, right?
The economics of open source are pretty clear at this point. The software industry spends a lot of money supporting open source, because it's in their own interest to do so -- it's cheaper to share the costs than to build your own infrastructure from scratch every time, when the infrastructure is not part of your competitive advantage.
This particular bug was found by people that Google pays to audit open source code all day, in an effort to improve said code.
No one doubts that some open source software has been very successful. What I'm not sure of is whether levels of open source provisioning are optimal: maybe there should be 10X what there is now. Maybe Linux should dominate the desktop world, but does not due to lack of funding. This is Bastiat's "what is not seen" - what we have now is good, but perhaps it could be better. Maybe a lot better, under different circumstances.
Also, that link mentions Coasian solutions, and privileged goods, which between them explain a lot about open source software, no?
Most people probably think openssl is good enough and are not willing to pay for something else.
On the other hand, given the importance of TLS to the Internet, $1M seems less than trivial; literally pocket change if the cost were well distributed among the millions of websites using SSL (a million sites sharing a $1M bill works out to a dollar apiece).
What do you think? Could it be done?
be a generalist
keep up on current technologies
Can you give an example of a "secure" OS not written in C so I can see what this mythical beast looks like?
Look, kids, SSL is not that hard to implement.
Time to let the cargo cult go have a campfire and sing songs.
This is too important to leave to lazy, paid-for-nothing programmers who want to write lazy-ass C code after too many beers.
All software has bugs.