
This is yet another nail in the coffin of Eric Raymond's irresponsible and fallacious "many eyes make all bugs shallow" theory.

https://en.wikipedia.org/wiki/Linus's_Law

Linus's Law as described by Raymond is a claim about software development, named in honor of Linus Torvalds and formulated by Raymond in his essay and book "The Cathedral and the Bazaar" (1999). The law states that "given enough eyeballs, all bugs are shallow"; or more formally: "Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone." Presenting the code to multiple developers with the purpose of reaching consensus about its acceptance is a simple form of software reviewing. Researchers and practitioners have repeatedly shown the effectiveness of various types of reviewing process in finding bugs and security issues, and also that reviews may be more efficient than testing.

In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate. While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".




Raymond's quip may be somewhat hyperbolic for effect, but I think it's harder to argue with the assertion that "with many eyes, many bugs are shallow". True, some things are esoteric and would only be noticed by a few specially trained people, but in many cases it's not that way. The fact that a random person will show up on the mailing list of an open-source project, even a minor one, with a bugfix patch validates the theory that an open codebase contributes significantly to software quality, and in particular to having fewer bugs.

I think that people who argue over Raymond's quip are just being pedants. It's true that not all bugs will be shallow, no matter how many eyes are on software, but I don't think the expression was meant to be taken literally.


  great_success = sum([i.debug_skill * log(i.audit_time) for i in code_reviewers])


And you can't just go:

    struct code_reviewer *qualified_code_reviewer =
        (struct code_reviewer *)malloc(
            sizeof(struct code_reviewer));
    assert(qualified_code_reviewer != NULL);


To quote Theo de Raadt:

    My favorite part of the "many eyes" argument is how few bugs 
    were found by the two eyes of Eric (the originator of the 
    statement).  All the many eyes are apparently attached to a 
    lot of hands that type lots of words about many eyes, and 
    never actually audit code.


What a pointless and content-less rebuttal.

I mean shit, I don't need to be an aeronautical engineer or work in the FAA to say that several people investigating an airliner crash will be more effective than one single dude sifting through a literal field of debris.

ESR's quote isn't wrong (perhaps hyperbolic, but in spirit it isn't wrong). It's just an uninteresting observation. What do you do when something has you stumped at work? You ask the guy across the hall to look over your shoulder for a second... It's just common sense.

People simultaneously revere and hate the quote only because ESR said it. If Joe Shmoe had said it all those years ago, it would have just been met with "no shit" and never given second thought.


I think there's a point here. The point being that many more people talk about many eyes than actually audit existing code. The fact is, quite a number of bugs have stayed in the code for a long time despite not being that deep. Maybe that says we don't have enough eyeballs, but most people quoting that sentence quietly assume we do, equating the number of participants in FOSS communities with the number of people actually auditing the code. Unfortunately, the former is much bigger than the latter. We have pretty much the same situation as in science publishing, where a lot of results are not reproducible and for many nobody even bothers to check reproducibility, despite reproducibility being a basic tenet of open publication and peer review, and necessary to validate a result. Unless there's a reason to - e.g. an identifiable bug - not many people bother to go back to existing code and check it for correctness.


The quote is hyperbolic for the "all", and should probably use the word "shallower", but otherwise I don't see anything wrong with it.

"Many more people talk about many eyes than actually audit existing code" is true of free software, but it's also true of just about everything. More people think "I feel comfortable crossing this bridge because plenty of engineers have looked at it" than there are engineers actually looking at bridges. I haven't read the quote in its original context for several years, but I don't remember it conflating software users with software developers.


The difference is that engineers actually know how to build safe bridges. I feel (and actually am) safe living in a house designed by a single competent architect, but I am not secure using a TLS implementation written by a single competent programmer. The only way we have found to end up with reasonably secure code is to have it reviewed by a lot of competent people.


There are a dozen points here... The shallow-bugs statement is true when there is a community that is both large and has incredibly deep understanding; Linux and POSIX are a great example - there is staggering depth of knowledge of those interfaces and their behavior. Unfortunately there isn't wide and deep knowledge of TLS architecture and implementation, and there isn't even a lot of commonality across interfaces.

There is a social axiom that you and I don't know crypto and we should leave it to the experts, yet they need help too.

There is conventional C style, and this function, while documented, does the opposite. I had to look at the code a couple of times to see the bug; a lot of reviewers could have missed it.
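A minimal sketch of that kind of documented-opposite pitfall (the names, return values, and conventions below are invented for illustration, not taken from the GnuTLS source): a checker documented as a 0/1 boolean whose error path leaks a negative error code, which a caller following the usual "nonzero means true" C convention reads as success.

```c
#include <stdbool.h>

/* Hypothetical error code leaking out of a "boolean" API. */
#define ERR_PARSE (-1)

/* Documented: returns 1 if the certificate is valid, 0 if not. */
static int check_cert(int parse_ok, int sig_ok)
{
    if (!parse_ok)
        return ERR_PARSE;   /* bug: negative error escapes the boolean contract */
    return sig_ok ? 1 : 0;
}

static bool caller_accepts(int parse_ok, int sig_ok)
{
    /* conventional C reading: nonzero means true -- treats the error as success */
    return check_cert(parse_ok, sig_ok) != 0;
}

static bool strict_caller_accepts(int parse_ok, int sig_ok)
{
    /* safe reading: compare against the documented success value only */
    return check_cert(parse_ok, sig_ok) == 1;
}
```

The failure mode is exactly the one that's easy to miss in review: both callers look reasonable in isolation, and only the one that matches the function's actual (undocumented) behavior is safe.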

There is conventional C style, and then there's the whole failure-chain cleanup mess. Look at that code again: they've got their own data representation and a free function that uses it to detect whether memory is allocated; the free is private, but the initialization is done inline. That stuff happens everywhere in many projects. I'm not saying it's wrong, but it leaves things open for easy bugs: one gaffed variable declaration potentially screws everything, and you need to know the data structure even though you don't directly use it.

And I don't want to pile on GnuTLS - I think their intentions are good and this is a bug - but this looks fucking prime for unit testing...

There are a lot of variables that determine and measure a project's health. The whole community needs to step up in the quality department; there are lots of ways to contribute, and despite the adage, more people should look at crypto.


> I mean shit, I don't need to be an aeronautical engineer or work in the FAA to say that several people investigating an airliner crash will be more effective than one single dude sifting through a literal field of debris.

With zero people looking at airliner failures, nothing will be discovered, no matter how many inspectors the FAA has on its payroll. In the software world, there are almost no people doing code audits. Bugs and security holes are left in live code for years. Basically, Theo is saying "Shut up and audit."

How much open source code have you audited for bugs recently? How many subtle correctness issues have you found in projects you've looked at? For the sake of code quality, I certainly hope it's higher than the amount I've audited.


"Linus's Law" itself says little about whether or not people will actually look at code. It just says that more people looking is better than fewer people looking.

How much code I have audited, you have audited, ESR has audited, or hell, how much code Paul McCartney has audited, really has little to do with the obvious correctness and banality of the 'law'.


In other words, your interpretation of "Linus's Law" has no impact on code quality in the real world. Edit: However, since it's meant to imply that in open source, there are people looking at code, I think that the criticisms stand.


Yes, in the sense that Crito's Law (coined by me, now) "The bigger the ship, the more freight you can fit in it." has no impact on the real-world ship transport industry.

It is obviously true to the point of being banal. It is a pointless statement of uncontroversial fact that provides next to no utility to anybody. It's not even interesting for being a tautology.

If I were a particularly objectionable and self-promoting person, then perhaps people might object to Crito's Law whenever it were quoted on shipping forums, but that wouldn't make it incorrect. Nor would my shameless self-promotion make it profound.

(Is Crito's Law _precisely_ true? Well no, some large ships are not designed for freight after all... but the general principle is true.)


You seem to continuously miss the point he makes.

"Many eyes make bugs shallower" is indeed true and a tautology.

The way ESR meant it, it's merely BS.

He meant it as in: "because open source code is available for everybody to see, many people look at it, and so bugs are found more easily".

In the context it was said in, it was meant as a factual observation about what GOES ON in OSS, not merely as a trite theoretical description of many eyes being better.

So, people are arguing against that, not against the idea that if more people ACTUALLY look, they will find more bugs.

The fact is, very few people look at the code. In some cases, even for software widely used by millions of OSS users, fewer people look at the code than are paid to look at a comparable piece of proprietary software.

Heck, Gtk, the basis of most Linux desktops, has in recent years had something like one developer really working on it (I know, because he complained publicly about the situation).

I don't know what happens in the Windows GUI toolkit or Cocoa, but I doubt there's one person doing all the work -- and only during his free time at that...


It is neither pointless nor content-less. It gets to the exact heart of the problems with ESR's claim. If Joe Shmoe had said it, people would not say "no shit", they would still say "no that is complete nonsense". Because it is in fact complete nonsense, for exactly the reasons Theo pointed out. All the eyes in the world don't amount to anything if they aren't being used to audit code. And to do that, they need to be attached to people capable of auditing code. ESR is simply being used as an example of his hypothesis being wrong since he is the most obvious example. It isn't attacking him to point out the flaw in his hypothesis.


"Eyes" obviously refers to "eyes of people auditing the code", not "eyes in skulls of people who are currently in the building", not "eyes sitting in jars in the Mütter Museum", not "every eyeball belonging to children currently living in Idaho". Furthermore, different eyeballs of course have different worth. Nobody in the world disputes that.

The law itself doesn't say anything about whether or not people will choose to examine the code. It just says that more people examining the code is better than fewer people examining the code. One audit is better than none. Two is better than one. Three is better than two. One shitty auditor is better than none. Two shitty auditors are better than one. One auditor twice as good as a shitty auditor is better than one shitty auditor, if you really want to get into tedious eyeball calculus. Etc.

Seems stupidly obvious; so obvious it isn't even worth stating? That's because it is.

Or to address your point another way:

If I said "You need fuel to make your car go." would you object that this is bullshit because not only do you need fuel, but it needs to be in your car, and the right sort of fuel? I don't think you would say, "But diesel fuel sitting in a puddle on the ground is useless and won't make your petrol car go anywhere!"


The point you seem to be deliberately missing is that the opportunity for something to happen and that thing actually happening are not equivalent. There is a natural assumption that because there is the opportunity for many people (however you decide to define "many") to review/audit/test code in open source projects, somebody must be doing so. And even basing that assumption on the number of contributors to a project is a non sequitur; feature addition/refinement does not imply bug fixing.


The NSA has lots of eyes, and they don't report the bugs they find. (Unless it's via parallel construction...)


Another thought occurs to me. It isn't clear to me how much reading the source actually helps with finding most security vulnerabilities. It does seem like some of these recent SSL bugs were found by detailed code review (after providing bypasses for years), but I mean in general.

It's remarkably hard to find a security problem reading the code unless that security problem is blatantly obvious. Part of the problem is that code also communicates intent, and security is in part counteracting that intent.

Most (more than 90%) of the security problems I found (almost all of which were in open source projects) started off with observing misbehavior or wondering if I could induce misbehavior. This works because I would start off attacking an API, not the underlying implementation. Only after demonstrating the problem would I look at the code. I think in one case I was looking at code and thought "maybe that's a weak point." But in every other case, my testing was fully black-box to start (it later led to a code audit in several cases).
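That black-box style can be sketched in a few lines (the target function and its off-by-one are invented for illustration): probe an API at its documented limit and watch for misbehavior, without reading the implementation first.

```c
/* Hypothetical API under test: copies at most `cap` characters of src into
 * dst and NUL-terminates, so it promises to write at most cap + 1 bytes
 * (indices 0..cap). The bound check here is off by one. */
static void copy_capped(char *dst, const char *src, unsigned cap)
{
    unsigned i;
    for (i = 0; i <= cap && src[i] != '\0'; i++)   /* bug: should be i < cap */
        dst[i] = src[i];
    dst[i] = '\0';
}

/* Black-box probe: feed input longer than the cap and check, via a canary
 * byte, whether the function stays inside its promised bound. */
static int overruns_at_limit(void)
{
    char buf[8];
    buf[4] = 'X';                  /* canary just past the bound for cap = 3 */
    copy_capped(buf, "AAAAAA", 3);
    return buf[4] != 'X';          /* clobbered canary == observable misbehavior */
}
```

The probe never needs to know how `copy_capped` is written; only once it reports an overrun would you go read the loop and find the `<=`.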


> It's remarkably hard to find a security problem reading the code unless that security problem is blatantly obvious.

Then I guess I would consider most (C code) security problems rather obvious. The kind of stuff you see patches linked to in CVEs: good old buffer overflows, off-by-ones, arithmetic screwups (mixing types or not checking for overflow), missing return-value checks, the occasional swapped parameter, typo, simple logic whoopsie, etc. These are incredibly common and rather easy to find once you get used to that sort of grunt work. But obviously you have to know what to look for.

The remarkably hard ones (for me anyway) are much more subtle, things like race conditions...
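One member of that list - the unchecked arithmetic overflow - can be sketched like this (function names and the size computation are invented for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Broken: n * elem_size can wrap around, so the check passes for huge n. */
static int fits_in_buffer_buggy(size_t n, size_t elem_size, size_t buf_len)
{
    return n * elem_size <= buf_len;   /* product wraps for large n */
}

/* Fixed: divide instead of multiply, so no wrap-around is possible. */
static int fits_in_buffer(size_t n, size_t elem_size, size_t buf_len)
{
    if (elem_size == 0)
        return 1;
    return n <= buf_len / elem_size;
}
```

A huge `n` makes the product wrap to a tiny value, so the buggy check approves an allocation or copy that cannot possibly fit - exactly the "arithmetic screwup" class above.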


Using min where I mean max, and max where I mean min, is my favorite classic mistake - one I make all the time and have to remember to check for and fix afterwards.

The same goes for misusing "this" in JavaScript closures: no matter how hard I try to consciously avoid it, it's always the first thing I check for when code doesn't work as intended, because I still make that same mistake all the time.

Another source of confusion in code, one that Ben Shneiderman pointed out, is that the meanings of "and" and "or" are opposite for programmers and normal human beings, so SQL and the logical expressions of most programming languages are fundamentally confusing to many people, and the inconsistency can be an un-obvious blind spot.

Normal humans mean a union when they say "this AND that AND those AND these", while programmers mean an intersection when they say "this AND that AND those AND these". So that's a terrible source of confusion.

In other words, programmers think adding "AND" clauses narrows down the selection like "((user.age >= 18) AND (user.gender = 'F'))", while humans think adding "AND" clauses augments the selection like "adults AND women".

I'm not advocating changing the meaning of "AND" and "OR", just pointing out that it's a source of confusion you should look out for, and be aware of when talking with non-programmers.
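The contrast can be made concrete in a few lines of C (the struct and both predicates are invented for illustration):

```c
struct user { int age; char gender; };

/* Programmer's AND: intersection -- every added clause narrows the match. */
static int matches_and(struct user u)
{
    return u.age >= 18 && u.gender == 'F';
}

/* The everyday reading of "adults AND women" is a union -- OR in code,
 * where every added clause widens the match. */
static int matches_union(struct user u)
{
    return u.age >= 18 || u.gender == 'F';
}
```

A ten-year-old girl matches the everyday "adults AND women" reading but not the programmer's `&&`, which is precisely the blind spot when a non-programmer reads (or dictates) a query.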

I saw Alvy Ray Smith give a talk about 3D graphics programming, and he confessed that he and his colleagues would just write some code and then flip the signs around until it did what they meant it to do. That made me feel a lot less embarrassed about doing that myself.


I try to read the code I need to use, but of course I never have the time to read as much as I should. It's really the best way to learn programming in general and particular APIs, as important as a musician listening to other musicians' music.

I feel much more confident using libraries whose source code I've already at least skimmed through. (Speed reading code and learning where to look for stuff later when you need it is a useful skill to develop.)

But reading static code isn't enough to trust it and be sure the comments and formatting aren't lying to you. Stepping through code in the debugger and looking at its runtime state and control flow is crucial to understanding what's really going on.

But the problem with reading code (especially code that you're not running in the debugger), is that you see what you think it's supposed to do, not what it's actually doing, especially when you're "skimming" over it as I like to do.

Occasionally I have the luxury of enough time to go into "study mode" and carefully read over code line by line (I've been reading the amazing npm packages in http://voxeljs.com recently, which is some amazing and beautiful JavaScript code that I recommend highly). But that is extremely tedious and exhausting, and uses so much energy and attention and blood sugar that I have to close my eyes and take little power naps to let my mind garbage collect.

And then I get these weird dreams where I'm thinking in terms of the new models and api's I've just learned, and sometimes wake up in a cold sweat screaming in the middle of the night. (I have sympathy for Theo and his neighbors he wakes up at night from nightmares about all the terrifying code he reads.) (So far no terrible nightmares about voxeljs, but a few claustrophobic underground minecraft flashbacks.)

Refactoring or rewriting or translating code to another language is a great way to force yourself to really understand some code. I've found some terrible bugs in my own code that way, that I totally overlooked before. And looking back, the reason the bugs were there was that I just saw what I intended to have written, instead of what I actually wrote.

And for those kinds of bugs, comments that describe the programmer's intent are actually very dangerous and misleading if they're not totally up to date and valid. Because the compiler does not check comments for errors!

I try to use lots of intermediate descriptive variable names (instead of complex nested expressions), lots of asserts and debug logs, and do things in small easy to understand and validate steps that you can single step through with the debugger. It's important to examine the runtime state of the program as well as the static source code. But that is hellishly hard to do with networking code in the kernel.
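As a sketch of that style (the packet-length computation below is invented), here is the same expression written densely and then as named, asserted steps you can walk through one at a time in a debugger:

```c
#include <assert.h>

/* Dense version: correct, but opaque when something goes wrong. */
static int payload_len_dense(int total_len, int hdr_len, int trailer_len)
{
    return total_len - hdr_len - trailer_len;
}

/* Stepwise version: each intermediate has a descriptive name and a sanity
 * check, so a debugger (or a failed assert) points at the broken step. */
static int payload_len_stepwise(int total_len, int hdr_len, int trailer_len)
{
    int after_header = total_len - hdr_len;
    assert(after_header >= 0);                 /* header must fit */

    int payload = after_header - trailer_len;
    assert(payload >= 0);                      /* trailer must fit too */

    return payload;
}
```

Both compute the same value; the second just trades a little verbosity for a program state you can actually inspect.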

I also like to get away from the distractions of the keyboard and debugger, and slog my way through every line of the code, by printing it out on paper, going outside, sitting in the sun under a tree, and reading through every page one by one front to back, scribbling notes on the paper with a magic marker. That forces me to make my way all the way through the code before making any false assumptions, jumping around, and getting distracted. (ADHD Management Techniques 101!)


Don't let your mouth write a check your eyeballs can't keep.

And I'd rather be an asshole with eyeballs than a mouth full of bullshit.


Using analogies to avoid dealing with the actual topic is not productive. ESR claimed that open source projects are inherently more secure because "many eyes make all bugs shallow". That is what we are talking about. Not cars, not airliners. His hypothesis relies on the (false) assumption that many eyes are looking at the code simply because it is open source. That is not the case, as obvious security holes like these demonstrate time and again. It takes dedicated, qualified people spending significant time auditing code to make it more secure, not being open source. A closed source project with 1 security expert auditing the code is more secure than an open source project with 0 security experts auditing the code and a million users downloading the source and compiling it.


I am with Theo on this one. The many eyes argument is a poor one for a large number of reasons. Bugs are one issue regarding security but software design is a much bigger one. It is good design that ensures that software is robust even when bugs surface and that's a far larger issue than the question of bugs.

The problem with ESR's view here is that it occurs in a more general essay that comes up with the idea that open source has the advantage because of eyeballs, distributed design, and so forth. But you can only distribute design so far, and almost every successful open source project has a small design and engineering team.

This being said there are plenty of cases where I do in fact rely on many eyes. It's not that it makes the bugs shallower but that repetitive review and discussion helps to shake out both design flaws and software bugs. I tend to push a lot of security stuff to the underlying platform for this reason. But part of it is also trusting the design team.

Code auditing is tough work and I generally assume it doesn't get done. What is good however is that a lot of other people are depending on software that is professionally developed in a relatively transparent manner and so chances are somewhat better that people will audit the code at some point.


This is just a nasty and unjust way to bring down ESR, whose intellectual contributions are valuable.

I don't personally know Theo de Raadt, but if he spends his time bringing down other people, he's probably not a very happy person himself.


The comments you perceive to be nastily and unjustly bringing down ESR are presumably being made by people who do not agree with you about the value of his intellectual contributions. In fact, those comments are directly rebutting his best-known intellectual contribution. So your objection begs the question.


Indeed. A lot of people just see the criticisms and think that he is unfairly targeted, but you don't have to look far to find the obnoxious and arrogant behavior that earned him the derision he attracts today.


Theo de Raadt's argument (as presented here) isn't logically valid. A character attack is not an argument. So he's just hurting his cause. That's my point here.

As an aside, I'm not really begging the question because I defended ESR's "Linus' Law" thing in a different comment.


>Theo de Raadt's argument (as presented here) isn't logically valid

Yes it is.

>A character attack is not an argument

His argument is not a character attack. It points out two problems with Eric's hypothesis: that simply saying "lots of eyes" doesn't actually mean there are lots of eyes, and that the eyes have to be attached to people who actually know what bugs look like, or the bugs won't be found. ESR himself does not bother to look at code, thus providing counter-evidence to his own claim that open source software is seen by many eyes.


>ESR himself does not bother to look at code, thus providing counter evidence to his own claim that open source software is seen by many eyes.

Do you not see how this basic statement is completely illogical?

"One guy doesn't audit code much, therefore nobody audits code." Seriously?


One guy is the example, not the proof.


Then where is the proof? This code was found wanting years ago....


The proof is the fact that we have just gone through two historically tragic, enormous, and terribly stupid security holes, each big enough to land the space shuttle on, either of which could have been easily caught by ESR or anyone else simply shutting their mouth and reading the code.


It was known to be bad several years ago and an article to that effect was published.


That's the point?


Thank you for using "begs the question" correctly. It is far too rare to see that on the web. </tangent>


A+++++ Speedy delivery. Well packed. Top notch. As listed. Great value. Good communication. Would do business again.


Read the quote in context. It makes a lot more sense: http://marc.info/?l=openbsd-tech&m=129261032213320


Not many people would know Theo "personally" but you do know his reputation. You do right?

That's all that is required to give his statements some weight.


Do you know Theo?

Because the Theo I know was kicked out of the NetBSD project for being an asshole. He hasn't changed, really.


Um, I've had Theo in my house. I've known him for a long time; we've had technical conversations about the kernel that 99% of the people here couldn't understand. (Maybe it's me, but I am so sick of "look at me, I've figured this out" when it's stuff you should have learned as an undergrad. Go code more and talk less.)

My opinion is until you have done as much as Theo has done you should maybe not talk so much.


being an asshole and getting shit done are orthogonal characteristics.


Theo has earned my respect and the right to be as much of an asshole as he wants. The important things he's so good at doing often require that of him, so it's not counter-productive or self defeating.

I also have a tremendous amount of respect for RMS, and forgive his personality quirks, although I'd never want to work for him. Unfortunately, a lot of his personality quirks and ways of communicating are self defeating. But more importantly, his beliefs are totally logical and consistent and well thought out, and he sticks by them. It's his priorities and his way of communicating them that people have problems with.

He's also got a brilliant sense of humor, that a lot of people just don't get, and take offense at, when he was just trying to make them think. But at the same time, he's incredibly easy to wind up by mentioning Open Source Software. But I think he's in on the joke and it's just a theatrical performance, like Saint IGNUcius.

My Emacs Hacker Boss from UniPress Software and I ran into him at a scifi con, and my "Evil Software Hoarder" colleague asked him "I heard a terrible rumor about your house burning down. Is it true?" He fired back without missing a beat, "Yes, but where you work, you probably heard about it in advance." We all had a laugh and no offense was taken: he's got a sharp sense of humor and he's quick on his feet!

Here he is being a total dick, by chastising someone for posting a baby announcement (the baby is now 21 years old) to a mailing list about having dinner on the other side of the continent from him. But he's fucking brilliant and hilarious and makes some excellent points that are totally consistent with his beliefs, even though he wound everyone up and was repeatedly told to fuck off in various ways, which he took in stride.

http://www.art.net/~hopkins/Don/text/rms-vs-doctor.html

"You people just have no sense of humor. I thought the original message was pretty funny and made a few good points (if it didn't, nobody would have been offended). I guess it's a shock for smug self-righteous breeders to learn that not everybody in the world thinks babies are cute and special. -Wayne A. Christopher"

"Finally, someone read the message as it was intended to be read. -RMS"

"I'm somewhat surprised by the idea that a mere message from me could torpedo the happiness of parents. I'd think it wouldn't even come close to doing that. Not that I wanted to do that. The most I thought it could do was to discourage the posting birth announcements. -RMS"

RMS is like William Shatner, in that he's in on the joke, and can have a good laugh at himself, and at least he isn't the mean kind of narcissist. To extend that metaphor further than I should: RMS = Captain Kirk, Theo = Spock, ESR = Harvey Mudd, Microsoft = Klingons, and Free Open Source Software = Tribbles.


Since you've been so generous to offer your opinion of Theo, I think it's only fair and balanced for me to offer my opinion of Eric the Flute, in a way that is consistent with Eric's own name-dropping protocol.

I believe that Eric the Flute was disrespectful to Linus by labeling ESR's "Many Eyes" theory "Linus's Law".

I believe that Eric the Flute was disrespectful to RMS by relabeling RMS's "Free Software" movement "Open Source".

I believe that Eric the Flute has made a career out of bogging down the FOSS world in internal doctrinal disputes, and that his "many eyes" argument gives people a false sense of security in open source software, and that kind of pap diverts attention and money away from supporting qualified eyeballs and assholes who do the incredibly difficult and tedious work of meticulously reviewing code and fixing bugs, like Theo de Raadt does.

And I believe that Eric the Flute is being a narcissistic hypocrite when he writes stuff like this recent blog posting, with numbered instructions for where, when and how to drop and not drop his name. Specifically, number two, which gives me the right to drop his name in this context:

Namedropping "ESR" http://esr.ibiblio.org/?p=5266

    2. Do drop my name if by doing so you can achieve some
    mission objective of which I would approve. Examples
    that have come up: encouraging people to design in
    accordance with the Unix philosophy, or settling a
    dispute about hacker slang, or explaining why it's
    important for everyone's freedom for the hacker
    community to hang together and not get bogged down in
    internal doctrinal disputes.

So it's important to "not get bogged down in internal doctrinal disputes", huh?

My mission is to explain why it's important for people in the FOSS community not to base their careers on tearing other people down. Why can't we all just get along, huh?

I'd like to hear Eric the Flute explain how his goal of "not getting bogged down in internal doctrinal disputes" squares with his decades-long ongoing feud with RMS about "free software" -vs- "open source software", on which he's based his career?

And I'd like to ask him to please stop encouraging his followers to act as if there's some kind of war going on between Free Software and Open Source Software.

For example, Eric the Flute's friend and fellow right wing global warming denying libertarian gun nut internet celebrity "Tron Guy" Jay Maynard (who fawningly replied to that blog posting "FWIW, I apply my own fame in much the same way, and follow this set of rules both for myself and for my friendship with Eric. Like him, I didn’t set out to become famous.") has taken a stand on wikipedia and his Hercules emulator project about how there is a war going on, and he ideologically opposes Free Software but supports Open Source Software, and it's insulting to him for anyone to insinuate otherwise:

https://en.wikipedia.org/wiki/Talk:Hercules_(emulator)#So-ca... https://en.wikipedia.org/wiki/Talk:Jay_Maynard#Hercules_and_...

    The Hercules development community generally objects
    to the term "free software", and in several instances
    contributes to Hercules specifically as a reaction to
    the misuse of the term. As long as the portal and the
    categories use this misleading term to apply to
    software that is freely available and redistributable,
    please do not add Hercules to them, since it implies
    support for the "free software" side of the ongoing
    political war that does not, in fact, exist. -- Jay
    Maynard (talk) 08:57, 13 March 2009 (UTC)

    Please do not ascribe to me a viewpoint I do not hold.
    Hercules rejects the term "free software", and many of
    its developers - including me - contribute to the
    project on the explicit basis that it is not part of
    that world. This has been hashed out at the
    Talk:Hercules emulator page.

    Calling it "free software" here ascribes to me a view
    that I not only do not hold, but actively disagree
    with. Please don't count me as a supporter of "free
    software", the FSF, or Richard M. Stallman, and please
    don't enlist me on your side of the "free
    software"/open source war.

    I believe calling it "free software" is argument by
    redefinition, and fundamentally dishonest. It's also a
    naked attempt to glorify a major restriction of
    freedom for programmers by nevertheless calling it
    "free", in the same vein as "War is peace". The
    concept of freedom is far too valuable to demean it in
    that manner.

    As for "but it's free software anyway", the reverse
    argument, that "free software" is all open source, is
    just as valid - yet "free software" zealots reject it
    out of hand and say "don't co-opt our work!" Well,
    that sword cuts both ways.

    I am not a member of the so-called "free software"
    movement and never will be. Please don't insult me and
    misrepresent my views by calling me one. -- Jay
    Maynard (talk) 13:22, 16 August 2010 (UTC)
I wonder where "Tron Guy" got those ideas about this "ongoing political war" about "Free" -vs- "Open Source" software, and why he's getting so bogged down in internal doctrinal disputes?

http://rationalwiki.org/wiki/Talk:Eric_S._Raymond

    Like Lubos Motl, his crankery is a counterpoint to his
    area of brilliance, not a negation of it. And I
    disagree with just about every political opinion ESR
    has. (And have actually argued them with his good
    friend Jay Maynard.) - David Gerard (talk) 20:46, 29
    July 2010 (UTC)

    It saddens me that Jay "Tron Guy" Maynard is one of
    ESR's fans. Turns out the guy who made cosplay
    respectable for grownups is a right-wing asshole --
    wonder if he's a brony? (Anyway, it seems he's given
    up lead maintainership of the Hercules mainframe
    emulator, so, um... yay?) EVDebs (talk) 23:42, 10 July
    2013 (UTC)
For more background on Eric the Flute:

http://rationalwiki.org/wiki/Eric_S._Raymond


OpenSSH.

That one project and the integrity with which he has run it forgives all the problems you might perceive him to have had.

And yes, sometimes you have no option but to be an asshole to get your point across. Linus has been accused of the same.


And here's to the fond yet irritating memory of the late great inspirational asshole, Erik Naggum. http://www.emacswiki.org/emacs-de/ErikNaggum

"But there is still one thing that America has taught the world. You have taught us all that giving second chances is not just generosity, but the wisdom that even the best of us sometimes make stupid mistakes that it would be grossly unfair to believe were one's true nature." -Erik Naggum

"I learned a lot from talking to Erik on matters technical and non-technical. But one thing I learned, not from what he said, but from the meta-discussion which was always there about whether to tolerate him, is that I think we as people are not all the same. We make rules of manners and good ways to be that are for typical people. But the really exceptional people among us are not typical. Often the people who achieve things in fact do so because of some idiosyncrasy of theirs, some failing they have turned to a strength." -Kent Pitman

"The purpose of human existence is to learn and to understand as much as we can of what came before us, so we can further the sum total of human knowledge in our life." —Erik Naggum

http://open.salon.com/blog/kent_pitman/2009/06/24/erik_naggu...


I kinda agree: if with many eyes come as many pairs of hands writing potential bugs, then you don't reduce anything. We need eyes-only 'developers'.


I think your analysis is irresponsible and fallacious.

ESR is making a completely valid point, and the underlying premise of his theory (that having software open to review can help) is only confirmed by this incident, not refuted.

Specifically: If GnuTLS were closed source, this problem would likely never be publicly discovered and disclosed.

So overall, ESR's theory is accurate and useful.

Note the word "almost" in the theory, which serves as a (completely valid) escape hatch (that you are mistakenly neglecting) for incidents like this, which fit the underlying premise but are "corner cases" rather than "common cases."


I think the counter to Eric's claim is this:

If it is open source, as in a Debian release, an end user's recourse is to fix it themselves.

If it is commercial, be it closed source or not, an end user's recourse is to sue the supplier (or something similar).

The commercial supplier has a financial incentive to get it right, the open source developer has an intellectual and street cred incentive to get it right. I'm not sure which one actually works better, I know that the popular opinion is the ESR eyeballs claim but it's not clear to me which gets it more correct. Seems like they both fail at times.


> If it is commercial, be it closed source or not, an end user's recourse is to sue the supplier (or something similar).

Are there examples of doing this successfully? As far as I can tell, software manufacturers have largely been successful at avoiding traditional product liability for damages caused by malfunctioning software, through a mixture of EULAs and courts buying the "but software is different" argument. Here's an article series on that: http://www.newrepublic.com/article/115402/sad-state-software...


The commercial supplier has a financial incentive to get it right

Is this why Microsoft dominated the market for 15 years with the worst security model of all contemporary operating systems?

How many lawsuits were successfully pressed against Microsoft for losses due to their crappy security implementation? Forget about successfully, how many were even brought against them? Of those brought against them, how many were from companies not large enough to have their own legal departments?


> The commercial supplier has a financial incentive to get it right

> Is this why Microsoft dominated the market for 15 years with the worst security model of all contemporary operating systems?

No, that's why Microsoft after XP tightened their security. Because they had an incentive to "get it right".


>I have better things to do than wait 15 years for a vendor to look at fixing a serious issue.

Perhaps, but that's just one aspect.

For most of those 15 years there wasn't a better-supported OS available that was friendlier to the common user, had tons of desktop and business software, and was compatible with almost all hardware.

They had an even bigger incentive to get that right first, and they did.


I have better things to do than wait 15 years for a vendor to look at fixing a serious issue.


> If it is commercial, be it closed source or not, an end user's recourse is to sue the supplier (or something similar).

That is assuming the supplier is still in business, which is probably a dubious proposition for a majority of commercial software that has ever shipped.


Something can be open source and used commercially. Apparently this bug was found via an audit by RedHat, which obviously is a commercial company that uses GnuTLS.

Don't mistake me as a zealot, though. There is a place for open source and there is a place for closed source. AFAIK, that is also ESR's point, and why he broke ranks with Stallman, who claims that closed source is evil.


The terms for almost any software redundantly state "no warranty".


There are also many reasonable ways for closed source software (or more accurately "non-free proprietary software") to make the sources available for review, but not give away all the rights. Like "Microsoft Shared Source Common Language Infrastructure", etc.

Of course it's better for software to be free / open source, but it's nonsense to imply that only open source software has the potential to be seen by "many eyes".

My eyes are still red and sore from staring at the MFC source code before the turn of the century.


ESR's point is false. All one needs to do is read Thompson's Turing Award lecture to understand why.


Why wouldn't it be discovered? As I understand it, it was discovered by an audit by an interested party. That happens to closed-source software too. If GnuTLS were a proprietary product of RedHat, or sold by a proprietary company to RH while allowing RH to audit, but not publish, the source, the result would be the same. Disclosure might not happen, but discovery still would.


What evidence is there that there actually were "many eyes" on this code? If anything this underscores the importance of license compatibility in order to maximize the utility of the "many eyes" resource. Honestly, GnuTLS seems to exist purely for ideological shim reasons, and it's not surprising that ideological shim reasons don't motivate "many eyes" to bother to show up.


That's a No True Scotsman argument. The problem with the "many eyes bugs shallow" theory is that all eyes aren't created equally, and the valuable eyes aren't distributed uniformly across all software.


It would only be a No-True-Scotsman argument if the original statement of Linus' Law were, "In an open-source project, all bugs are shallow" and someone were now trying to claim that GnuTLS wasn't open-sourcey enough.

In reality, the law is about code that has many eyeballs on it, and it's a fair argument to point out that evidence suggests GnuTLS didn't have that many eyeballs on it.


Can you present some of that evidence about the lack of eyeballs on GnuTLS? Because my point is that the right kind of eyeballs were not on GnuTLS; my point isn't compatible with this supposed "law".


So are you saying that GnuTLS had "valuable eyes" and missed this or that they had "many eyes" and missed this? What exactly is your alternate hypothesis?


I'm sure lots of people have tried to find bugs in GnuTLS.


So do you mean that (until now) they just failed to find them, or that (until now) they were only found by people who did not have the incentive to report these bugs?

Because in the top post you call the bug "simple and basic".

But then in that same top post you imply (in your last line) that GnuTLS gets very little attention; now you say that lots of people have tried to find bugs in it.

I'm not trying to criticize you here, just trying to figure out what you're trying to say.


That's part of the fallacy. In the OSS world it's assumed that when code passes through many hands, is depended upon by many projects, and is critical to the functioning of many systems used by many users then naturally the code in use will be seen by many eyes.

But this is anything but true. The reality is that code review is far less common than code use, with many defects impacting many users as the logical consequence.


> That's part of the fallacy. In the OSS world it's assumed that when code passes through many hands, is depended upon by many projects, and is critical to the functioning of many systems used by many users then naturally the code in use will be seen by many eyes.

Right and as you point out that's not true. What gets less attention is why. I know in my own code there are dependencies I know very, very well. Some of them I have helped to author. Some of them I have helped with regarding maintenance programming later. But there are many which I don't.

There are a bunch of reasons for this:

1. Code has varying degrees of readability and comprehensibility. Code which reads like a book gets more of my time than code whose structure I have to figure out first.

2. Code that is well maintained gives me less reason to step in. I'm more likely to read poorly maintained code than well-maintained code; bugs that don't get fixed are a good reason to try to fix them myself....


I think all that means is that "many users" != "many eyes". I think ESR's ultimate point is that with Open Source, if security or bugs deeply matter to you, you can independently add your own eyes (or provide resources for the same). It's a very different matter with proprietary software.


Exactly. How can you trust something you don't have source code to? You can't, full stop. Any thinking otherwise is at best childishly naive ignorance. Given the recent revelations of the past few years, it should be glaringly obvious to anyone with half a clue that companies have been compromised, either willingly or otherwise, so that trusting closed source is a bad idea (as ESR, RMS and Bruce Schneier have been saying for decades!). The finding of this bug in GNUTLS is a good thing! Claiming that this bug would have been found at all, much less fixed, in proprietary software is galling, to put it lightly.


You can audit binaries. So it's definitely not "you can't, full stop". Most people are in no position to audit all the source code they use, just like most technical people are in no position to audit the binaries they use.

(And I'm putting aside the whole issue of backdoors in hardware, compilers, etc.)


Raymond wrote Cathedral and Bazaar almost 20 years ago. In the context of the time and the state of open software "all bugs are shallow" was a pretty damn accurate response to its critics. It still isn't a bad description in many cases.

There are always edge cases where beta testing and multiple eyes fall short of mathematical possibility [never mind sophisticated attacks]. That such a bug as this matters 17 years after Raymond made his remarks is a testimony to the robustness of the mechanism he described.


It was 15 years ago. 1999. But I am sitting here with a scotch.



I'm not an ESR fan, but let's be real here: the bug got found. How many bugs of similar impact are hiding in crufty old Windows code that nobody is looking at?

Unless you're talking about an IBM mainframe, validated at EAL 5, there are security bugs all over the place. With open source, you don't get a platoon of elves scanning the code, but you have a much better chance of someone happening across a defect or identifying the responsible party.


But how would you know that those bugs in closed-source systems haven't been found? The bug-reporting systems are closed too, and the hotfixes and patches that get issued don't link to a list of bug reports. So you are none the wiser.

For all we know, bugs are found in closed sourced systems all of the time and are fixed frequently; the only difference is that they're not publicised.


I agree.

What I'm saying is, without the benefit of open source, you're relying on third-party certification to evaluate the security of products.


No, actually, there IS a platoon of elves with millions of eyes, carefully scanning all of the open source code for bugs, meticulously going over it line by line, building rigorous test harnesses, feeding it every input imaginable to probe its weaknesses, and writing up reports describing every quirk and flaw they detect in it. And then not fixing it.

And your tax dollars are paying for all of that work. And you can be sure those elves have known about those bugs for years.

And those bugs have caused many unfortunate consequences. So it's just not wise to go around giving people a false sense of security in order to promote your brand.


> This is yet another nail in the coffin of Eric Raymond's irresponsible and fallacious "many eyes make all bugs shallow" theory.

What a facetious and unsupported assertion you have made.

You seem purposefully disingenuous.

The simple fact of the matter is, ESR is still right. Perhaps because few people use a piece of s/w, these things slip through. The seriousness of this flaw is limited and thus not subject to the fierce posthumous questioning you give it.

Please define how many eyes saw or used or benefited from this code. I certainly live in the world of this code and don't depend on it.

There is a chance you are talking shit.


> The simple fact of the matter is, ESR is still right.

ESR is not completely mistaken here, but the problem is that he's not entirely right with "Linus's Law" either.

It's more a description of why beta testing with access to source code is good, than a description of why open source is inherently good (and before you flip out, I've contributed to open source longer than I've done my day job).

Some types of code flaws will simply never show up to the kind of beta testing that introducing software to a wide population provides. E.g., proving that an X.509 certificate which is faulty in a certain way is actually caught by a software library; few people run into that in practice.
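
A minimal sketch of that idea (made-up function and fields, not any real library's API): ordinary use only ever exercises the happy path, so the rejection branches have to be reached by deliberately constructed negative tests.

```python
# Hypothetical validity-window check, standing in for any certificate
# acceptance rule. Beta testers present valid certificates, so only a
# deliberate test ever runs the rejection branches.

def validity_period_ok(not_before, not_after, now):
    """Accept a certificate only if `now` falls inside its validity window."""
    return not_before <= now <= not_after

# Normal use exercises only the happy path:
assert validity_period_ok(100, 200, 150)

# The faulty-certificate cases have to be constructed on purpose:
assert not validity_period_ok(100, 200, 250)   # expired
assert not validity_period_ok(100, 200, 50)    # not yet valid
```
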

What these types of bugs require is code auditing (after the fact) or before-the-fact code review that prevents their entry into the source in the first place. But neither auditing nor pre-commit review is inherent to open source, and in fact it could be argued that closed-source software companies are better able to ensure such things happen.

The saving grace for open source is that these companies optimize for market success and not code quality (except to the bare extent needed for market success). Additionally you can pay to audit open source code much more easily than you can closed source (e.g. Google's audit teams that do exactly this).

But "bugs are shallow in the presence of sufficient eyeballs" is not unique to open source, either in theory or in practice.


What supports my assertion are the two recent gigantic but shallow security holes that many eyes (except for the NSA's) didn't see for many years.


It's clear that there aren't "enough (properly skilled, willing and available) eyeballs" for every project out there. Raymond's argument could be correct; still, the irresponsible thing would be to expect, just because of the openness of the code, that someone will ever care for no reason.


Straw man. The phrase "all bugs are shallow" doesn't mean all bugs will be found. It means that, once a bug is found, the correct fix will be obvious to someone.


Certain bugs are more likely to be found if you have a large number of users doing diverse things with your software.

Security vulnerabilities have never worked like that. This "nail" is not new.


It's worth reading what Eric actually wrote:

  Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

  8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

  Or, less formally, ``Given enough eyeballs, all bugs are shallow.'' I dub this: ``Linus's Law''.

  My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.'' That correction is important; we'll see how in the next section, when we examine the practice of debugging in more detail. But the key point is that both parts of the process (finding and fixing) tend to happen rapidly.

  In Linus's Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

  In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

  And that's it. That's enough. If ``Linus's Law'' is false, then any system as complex as the Linux kernel, being hacked over by as many hands as that kernel was, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered ``deep'' bugs. If it's true, on the other hand, it is sufficient to explain Linux's relative lack of bugginess and its continuous uptimes spanning months or even years.

One can only conclude that either ESR is wrong (it wouldn't be the first time), or that ESR's beloved open source "bazaar" has become a cathedral.


Formatted for easier reading:

Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

Or, less formally, ``Given enough eyeballs, all bugs are shallow.'' I dub this: ``Linus's Law''.

My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.'' That correction is important; we'll see how in the next section, when we examine the practice of debugging in more detail. But the key point is that both parts of the process (finding and fixing) tend to happen rapidly.

In Linus's Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

And that's it. That's enough. If ``Linus's Law'' is false, then any system as complex as the Linux kernel, being hacked over by as many hands as that kernel was, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered ``deep'' bugs. If it's true, on the other hand, it is sufficient to explain Linux's relative lack of bugginess and its continuous uptimes spanning months or even years.


I think this largely shows why ESR is wrong here though. Let's start with two basic assumptions:

1. Some bugs are generally easy to resolve with one-line fixes (the Apple SSL bug being a good example).

2. Some bugs are genuinely deep because they are design limitations of the software, or flawed assumptions on the part of the person who designed the software contract.

Now let's also point out that bugs of the second class may have an apparently shallow fix which in fact simply papers over deeper problems. A bug fix needs to resolve the issue, not just provide some cruft to make life immediately easier.
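
A hypothetical sketch (not the actual GnuTLS or Apple source) of the class of shallow bug both recent incidents fall into: a helper returns a negative error code on failure, and the caller treats any nonzero value as "true", i.e. success.

```python
# Made-up names for illustration only. The bug class: mixing
# errno-style return codes with boolean truthiness tests.

def check_issuer(cert):
    """Return 1 if the issuer checks out, 0 if not, or a negative error code."""
    if cert.get("malformed"):
        return -43                      # parse error: should mean "reject"
    return 1 if cert.get("issuer_ok") else 0

def certificate_is_trusted(cert):
    # BUG: truthiness test -- a negative error code is also truthy.
    # The one-line fix is `return check_issuer(cert) == 1`.
    return bool(check_issuer(cert))

assert certificate_is_trusted({"issuer_ok": True})
assert not certificate_is_trusted({"issuer_ok": False})
assert certificate_is_trusted({"malformed": True})  # malformed cert accepted!
```

The fix is genuinely one line, which is what makes the bug "shallow" once found; the deep question is why the error-code convention was mixed with boolean tests in the first place.
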

Certainly when you see Linus push back on patches, it's clear he is pretty heavily aware of that fact.

There are two things I have learned on this topic in my time as a programmer. The first is that the only really deep bugs are those which are design flaws. The second is that review in advance prevents problems, not eyes on the problems in retrospect.

You can't turn a deep bug into a shallow bug after it is already there. By the time you have beta testing going on, it is too late. What you can do is have a few good people who design things well (and review each other's work), and then deep bugs don't happen as frequently.


I don't have a link to it so I'll paraphrase what I remember:

A bunch of programmers were endlessly arguing back and forth about how to do such-and-such to emacs on the gnu-emacs mailing list.

RMS derailed the argument by pointing out that there were just a few people in the world who know the code well enough that they could actually just sit down and solve the problem themselves without discussing it with anyone else, and they were all very busy.

But all the bike shedding and social chatter about what to do, how to do it, what to call it, and what color to paint it, by the people who either can't or won't actually do it themselves, is just distracting and wasting the precious time of the few people who can just solve the problem themselves without any discussion.


I would be surprised to see a security library developed with the bazaar model. It'd be difficult to assemble enough interest and capable reviewers.

In this case it looks like the GnuTLS bug was introduced and fixed by the same person, but I didn't go look to see how many others there were.


Unfortunately, it may take 2^1024 eyes on crypto code before the right set looks at the code.


In other literature there is a notion of diffusion of responsibility; I haven't read the works that you mentioned, but is such a risk mentioned in them?


I have never thought of that as the meaning. In fact, how does this wording apply at all to bug discovery, when it seems to me to be discussing bug resolution, which feels like a very different thing? However, I have never read The Cathedral and the Bazaar, so I just took the meaning that made the most sense.


ESR is doing an "AMA" on /. today, and I posted asking what he thought about this. Specifically, my theory is that everyone is ignoring the amount of code in proportion to the eyes that see it.


This is not a case of GnuTLS not being audited, or audited by the right/wrong eyes. Audits were done; the Debian maintainers chose to ignore the conclusions.


Not sure about this, because someone found the bug.


It's kind of like saying free libraries lead to a utopia without any extra social programs.


THIS


Cryptography is uniquely difficult to audit precisely because its output is encrypted. It requires more effort to tell what an application is doing when all its inputs and outputs are completely incompressible.

Maybe cryptography should be handled exclusively by the OS. Software would communicate through a protocol that can be easily audited (like normal http with special headers). Firewalls could be used to verify what is going on and normal tools could be used to inspect the data. That kind of checking is essential and doesn't require the talent needed for code review.



