
Quality Software Costs Money – Heartbleed Was Free - dblohm7
http://queue.acm.org/detail.cfm?id=2636165
======
kazinator
Simply throwing money at FOSS will not fix any security bugs. The money will
be soaked up by those who are good at soaking up free money and making it
disappear, that is all.

If you want to spend money wisely on FOSS, you do that by hiring someone to
implement some specific change to a specific program: something that can be
estimated, scheduled and tracked to completion.

"Find and fix security bugs in software component X" is not really a specific
development task; it's a crapshoot. You cannot put a concrete amount of time
and dollar figure on it. A team could burn through months of salary and not
come up with anything. On a weekly status report, you could say "All week
long, I looked painstakingly through such such directories and didn't find any
security issues" even though you spent maybe ten minutes on it, and the rest
of the time on HN. Even if someone else finds issues in the same code, nobody
can prove that you didn't spend that time; you just overlooked those things.
(For this reason, it's better to involve tools: if you cannot easily lie that
you applied a certain code verification tool and found nothing, because
someone else can run the same tool.)

This is probably why Heartbleed was not found earlier. Even though the
software is packaged and "supported" by all kinds of vendors (for instance,
vendors of hardware who provide FOSS-based firmware and support it for their
customers), they don't spend resources on auditing all the FOSS packages that
they bundle, because they know it is a big money hole. They implement things
customers ask for, and respond to outside reports of issues (from customers or
elsewhere).

~~~
collingreene
Maybe you just enjoy hyperbole, but while part of what you say is correct
(finding security vulns in software is unavoidably a bit of a crapshoot), your
conclusions are wrong.

Finding deep, serious vulns like this in software can currently only be done
by human beings. Tools are better at being authoritative but can only find
vulns of a given type. For example, static analysis is a great fit for any
vuln that boils down to a dataflow problem: user-controlled source -> ... ->
dangerous sink. XSS, SQL injection, etc. fit this model. Fuzzers are great at
finding bugs in parsers (and there are a surprising number of parsers in the
world, 90% of which should never have been written). Instrumented dynamic
analysis can do awesome work for memory issues. I explain all this to show
that there are areas where tools are fantastic. But there are many areas where
tools cannot help at all, and Heartbleed was one of them.
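
To make the source -> sink idea concrete, here is a minimal C sketch of the
kind of tainted flow those tools are built to catch (hypothetical code, not
taken from any real project):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char name[128];

        /* Source: attacker-controlled input. */
        if (fgets(name, sizeof name, stdin) == NULL)
            return 1;
        name[strcspn(name, "\n")] = '\0';   /* strip trailing newline */

        /* The tainted data flows into a command string... */
        char cmd[160];
        snprintf(cmd, sizeof cmd, "ls /home/%s", name);

        /* ...and reaches a dangerous sink: input like "x; rm -rf ~"
           becomes part of the shell command. */
        system(cmd);
        return 0;
    }

A dataflow analyzer only has to connect the source (fgets) to the sink
(system); command injection, XSS and SQL injection all have this same shape,
which is why static analysis handles them so well.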

The best security tools available were (presumably) run across OpenSSL before
and (certainly) with increased scrutiny after Heartbleed. None of them found
it. Simple limitations in static analysis lead me to believe they would never
have found it on their own (most static analysis tools stop at 5 levels of
indirection). Some background:

1. http://blog.trailofbits.com/2014/04/27/using-static-analysis-and-clang-to-find-heartbleed/

2. http://security.coverity.com/blog/2014/Apr/on-detecting-heartbleed-with-static-analysis.html

3. http://www.grammatech.com/blog/finding-heartbleed-with-codesonar
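
To illustrate the indirection problem, here is a hypothetical sketch (far
simpler than OpenSSL's real record handling, which threads the data through
many more layers): by the time the attacker-chosen length reaches the copy, it
has passed through a struct field and a couple of calls, and depth-limited
interprocedural analysis loses the trail.

    #include <stddef.h>
    #include <string.h>

    struct record { const unsigned char *data; size_t claimed_len; };

    /* Innermost level: the copy looks harmless in isolation. */
    static void copy_payload(unsigned char *dst, const struct record *r) {
        memcpy(dst, r->data, r->claimed_len);  /* tainted length lands here */
    }

    static void build_reply(unsigned char *dst, const struct record *r) {
        copy_payload(dst, r);
    }

    /* Outermost level: taint enters from the wire. */
    void handle_message(unsigned char *dst, const unsigned char *wire,
                        size_t received_len) {
        struct record r;
        r.data = wire + 3;                         /* skip header bytes */
        r.claimed_len = (wire[1] << 8) | wire[2];  /* attacker-chosen   */
        (void)received_len;                        /* never compared!   */
        build_reply(dst, &r);
    }

No single function here is obviously wrong; the bug only exists across the
whole chain, which is exactly where these tools give up.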

If you have immature projects, sure, run tools against them and some bugs will
shake out. But if you want to find the next Heartbleed, a tool won't do it,
which is where your conclusion is mistaken.

The question then becomes how to cultivate and encourage more people to find
vulns like this. Money seems like a good incentive for most, although Neel
Mehta did it of his own volition. I don't know the answer to that question,
but things like Google's Project Zero are exactly what I would try first.

~~~
kazinator
What is my mistaken conclusion? Okay, so all known tools have been exhausted,
and now you're down to people and their talent for finding bugs. Which people
should you pay? How do you know you're getting your money's worth out of those
people? What if there really is nothing left to find: are you prepared to
believe six months' worth of status reports which say "found nothing"?

My point wasn't that only tools should be used; I put that in as an aside
(wrapped in glaring parentheses!). If I hadn't, someone would have pointed it
out for me in a reply: "Hey you fool, of course you can track whether people
are really bug hunting and being honest about their activity, if they are
using tools whose results are reproducible."

Of course tools only find things that they are designed to find. My point was
not at all that tools should be used because they will find the next
Heartbleed, but rather that you have some hope of tracking the progress of a
security team that is applying tools.

The topic of the submission isn't the best way to find security holes, but how
to spend money on it. My view is that spending money wisely requires some
definition of "return on investment" and tracking of concrete goals. This is
hard to do with security research (once tool-based approaches have been
exhausted).

~~~
collingreene
Your specific mistaken conclusion:

> Simply throwing money at FOSS will not fix any security bugs.

I can't think of anything closer to "throwing money at FOSS" than something
like the internet bug bounty. Google/Facebook/etc collected a pile of money
and put it up for a bug bounty for software used by most of us on the
internet. [https://hackerone.com/ibb](https://hackerone.com/ibb) click through
to the projects and look at all the bugs that have been rewarded.
[https://hackerone.com/internet](https://hackerone.com/internet) and
[https://hackerone.com/sandbox](https://hackerone.com/sandbox) are the
coolest.

My interpretation of your general conclusion is: without quantification,
spending money/effort on security is not useful. I disagree with that because
it's the nature of the beast. It's useful to have people look through code,
and some weeks there will not be a lot of findings. It's absolutely okay for a
status report to read "I tried this, thought this might work, investigated the
way X works to ensure it doesn't do Y - 0 total findings".

What people to pay & how to know you are getting your moneys worth are not
unsolvable problems. For example at the company I work with we hold yearly
bake-offs giving different security consultants the same code to see what bugs
they find, we then use the best 2 or 3. Thats an approximation sure, but it
solves your what people to pay problem.

How to know if you are getting your money's worth is harder, and rubs against
the essence of security/QA work. No one knows what lurks in randomCode.tar.gz;
that is the whole point of the exercise. But apparently the world agrees it's
useful to have corporate application security teams do some vetting of the
code looking for vulns - more useful than nothing, at least. More useful than
tools? Well, that's a weird comparison, because you likely need security
people (or engineers with at least a bit of security background) to run the
tools. I think tools vs. people is a different debate, but I would bet on
people even at an equal cost point.

I agree that quantification of security research is hard; I disagree that
because we can't quantify something, it is not useful.

------
prodigal_erik
I wouldn't be allowed to build a skyscraper out of fatigued and rusty scrap
iron. There is no reasonable amount of money and effort that could make that
work, and just wanting to try would mark me as unqualified to do construction.

The Heartbleed bug was a classic buffer overrun of the kind C has been causing
for decades. When someone comes along saying "If you pay me, I'm going to
write security software using unchecked pointer arithmetic," how do we as a
community agree that the only response will be "That's a terrible idea and
nobody will pay you for it"?
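
For reference, the shape of the bug was roughly this (a deliberately
simplified, hypothetical sketch of the pattern, not OpenSSL's actual code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical heartbeat-style echo: the attacker supplies both a
       payload and a claimed payload length. */
    unsigned char *echo(const unsigned char *payload,
                        uint16_t claimed_len, size_t actual_len) {
        /* FIX: the real patch amounted to a bounds check like this:
           if (claimed_len > actual_len) return NULL; */
        unsigned char *reply = malloc(claimed_len);
        if (reply == NULL)
            return NULL;

        /* BUG: claimed_len is trusted. Claim 64 KB but send one byte,
           and the copy reads past the payload into adjacent heap memory
           (keys, passwords, ...), which is then echoed back. */
        memcpy(reply, payload, claimed_len);
        return reply;
    }

Note that it is an over-read rather than an over-write: nothing crashes and no
memory is corrupted, which is part of why it sat unnoticed for roughly two
years.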

~~~
rando289
C security software is being massively funded and very successfully forms the
basis of trillions of dollars of business, e.g. the Linux, Windows, and Mac
kernels. But, according to the #1-voted random pseudonymous internet
commenter, this is obviously a terrible idea.

~~~
xenadu02
All of which have had massive security flaws, including hundreds or even
thousands of zero-day remote exploits. And that's just counting the C memory
corruption bugs, not actual algorithmic bugs.

If C requires me to only code in certain styles and only use certain tools to
only get _some_ increase in safety... Why not just use a language that builds
safety in from the beginning?

That said, we're just going to continue soldiering on because of horrible
programmer attitudes like yours. Everyone wants to believe that they are a
unique little snowflake who wouldn't make those kinds of mistakes (oh wait,
Coverity couldn't catch Heartbleed, oops) _and_ choose the comfortable fiction
of a Just World where people get what they deserve. After all, if people just
drove better we wouldn't have so many accidents, so why bother with air bags
and seat belts? Better hope everyone else drives better too, or you'll be
changing all your passwords and keys with the rest of us schlubs no matter how
fancy your valgrind test suite is.

~~~
jeffdavis
"If C requires me to only code in certain styles and only use certain tools to
only get some increase in safety... Why not just use a language that builds
safety in from the beginning?"

What language actually meets the need? GC'd languages haven't proven
themselves useful for things like OSs, databases, or libraries that are
deployed extremely widely in many environments.

So what safe, non-GC language are you advocating?

~~~
zurn
This is irrelevant (and false).

GC is orthogonal to memory safety.

(And all of those things have successfully been done in GC'd languages.)

~~~
jeffdavis
Not sure what I said was false?

I agree that GC and memory safety are orthogonal -- see Rust. But there aren't
a lot of options if you want safe and non-GC, which was my point.

And GC is just not an option sometimes. GC has been around forever, but it
hasn't proven itself in a lot of domains. I don't think it's for lack of
trying; I think it's because sometimes you want to manage the memory yourself.

------
ffk
This is why non-profit organizations such as the Apache Foundation and the
Linux Foundation are extremely important. They act as a conduit for funds and
resources to these important projects.

Organizations like Team Cymru also play an important role in discovering and
mitigating exploits in both open and closed source software.

Perhaps there should be an open source crypto foundation or a crypto umbrella
at the Apache Foundation to help foster and secure these types of very
important projects.

~~~
humanrebar
It's my understanding that you cannot get tax-exempt non-profit status purely
for developing and distributing software, regardless of license.

Software-oriented tax-exempt organizations, 501(c)(3)s like the Apache
Foundation, categorize themselves as educational foundations in order to get
that status. It's hinted in the article that this makes it a pain to do FOSS
despite altruistic intentions. I'm surprised that this shortcoming of the tax
code wasn't underscored more.

If we are serious about getting more funding for open source, we should be
lobbying to get FOSS (for some definition) categorized as providing scientific
benefit or else add a new category of non-profits that provide free software
(for some definition of free).

[http://en.wikipedia.org/wiki/501(c)_organization](http://en.wikipedia.org/wiki/501\(c\)_organization)

[http://en.wikipedia.org/wiki/Apache_Software_Foundation](http://en.wikipedia.org/wiki/Apache_Software_Foundation)

~~~
ffk
This is definitely true. The Yorba Foundation is a recent relevant example
where the tax code failed.

I think the risk that the IRS is trying to mitigate is companies establishing
a development-only organization and single-licensing its output to a shell
company to sell. We will need to find a nice dividing line that pushes FOSS
causes forward while simultaneously preventing the creation of loopholes
exploitable by for-profit entities.

~~~
AnthonyMouse
> I think the risk that the IRS is trying to mitigate is companies
> establishing a development-only organization and single-licensing its
> output to a shell company to sell.

That seems easy to fix: require that the software be available to the general
public under the same terms. Which is probably how it already is; if I recall
correctly, a non-profit can't exist solely to benefit a private party.

------
grondilu
There's a saying in the FOSS community: « Good, Cheap, Fast. Pick two. » A
quick search on Google points to what seems to be called the Project
Management Triangle:

[https://en.wikipedia.org/wiki/Project_triangle](https://en.wikipedia.org/wiki/Project_triangle)

The author seems to imply that FOSS needs money in order to be good. That's
not exactly true. It can help, but it's not the only way.

He says for instance:

« It would not even be close to morally defensible to ask these people to
forgo time to play with their kids or walk their dogs in order to develop and
maintain the software that drives the profit in other people's companies. The
right way to go—the moral way to go—and by far the most productive way to go
is to pay the developers so they can make a living from the software they
love. »

Sure, you can pay them. But you also can just be patient, and let them invest
as little time as they want.

Free and Open Source Software is free, so it's understandable that people
usually don't work on it full time. But they do work on it, and the result
will turn out to be good. Eventually.

~~~
cwyers
«It would not even be close to morally defensible to ask these people to forgo
time to play with their kids or walk their dogs in order to develop and
maintain the software that drives the profit in other people's companies. The
right way to go—the moral way to go—and by far the most productive way to go
is to pay the developers so they can make a living from the software they
love.»

I'm... not really sure that "giving software away for free and then making
impassioned posts on the Internet that people have a moral obligation to pay
you to continue to work on it" is a strategy that has "success" written all
over it.

------
taeric
Isn't this just the fungibility of money making it hard to place a value on
something?

That is, focusing on the money that OpenSSL gets as the only way it is given
value ignores all of the developers who directly contribute to it. Because
they aren't paid elsewise?

------
jarin
What if there was some kind of public/privately funded foundation that could
hire open source developers to work on their projects on a part-time or full-
time basis?

Something like those physicist think tanks that Feynman refused to join. Along
those lines, maybe universities could have that kind of thing too.

~~~
tedunangst
I believe any of the Mozilla, Apache, FreeBSD, or OpenBSD foundations more or
less fits the bill.

------
wglass
Interesting article - I liked how it was based on a personal experience.

As a side note, there's a pretty serious misunderstanding of the Apache
Software Foundation in the article. Apache provides no money to developers.
Instead it provides community support, infrastructure, and legal help to more
than 150 projects under its umbrella. In particular, the Apache focus on
building communities means that a project has a life beyond the involvement of
a single contributor or company. Fundraising for development is reasonable and
helpful, but must be balanced with contributor diversity; otherwise the
project will fall apart when the funding stops, leaving users in the lurch.
(Note: I'm an Apache Software Foundation member, though I speak for myself in
this comment.)

------
bewo001
Non-free software is made by companies which are subject to national
regulations - the "give us a backdoor to your security product or it will get
very ugly for you" kind of regulation. Because the software is non-free, those
backdoors are much harder to find than in open source software. Even if you
found a backdoor, publishing it would be risky, as the software company would
immediately sue you for violating the EULA.

------
nycticorax
Has anyone ever tried to push seriously to just have governments fund more
open source software development? It just seems that open source software
suffers from all the funding difficulties that go along with being a "public
good" (in the economic sense). And there's a well-established mechanism for
creating public goods: a government.

~~~
pc86
I think you'd be extremely hard pressed to make the argument that tax dollars
should fund the development of Node, or WordPress, or a JavaScript calendar
plugin.

~~~
boomlinde
How about software that much of the IT infrastructure of the country sort of
relies on? The obvious example here is again OpenSSL. I don't think that the
argument is that the government should fund _all_ open source software.

------
robert_tweed
Maybe Universities should do more. As part of every undergrad CS course, get
extra credit for finding and accurately reporting a bug in any well-known FOSS
package. Bonus credit if it is a security-related bug. Even more credit for
actually fixing the bug.

~~~
konstruktor
I would hate to see that happen. In a project with a sufficiently good review
process, the time invested in helping somebody contribute their first
patch(es) will far outweigh the benefit, and amortises only for developers who
stay with the project longer.

Add extrinsic motivation to contribute patches, like extra credit, and what
you will end up doing is abusing experienced open source contributors as
teaching assistants rather than making an actual contribution to open source.

Let's not forget that one of the guys who did open source to further his
academic career, only to drop it later, was Robin Seggelmann, the author of
the Heartbleed bug.

~~~
robert_tweed
There's certainly an argument for research on this topic. The core theory of
open source quality is "many eyes", and in the similar case of Wikipedia it
works very well: much better than Encyclopaedia Britannica with its long-term
expert curation. Whether or not it works for open source largely depends on
the competence of the individuals vs. the complexity of the bugs. However,
most bugs are caused by trivial errors that just aren't easy to spot and
happen to pass the test cases. That's where a high-volume, low-quality process
like crowdsourcing can be very effective: someone, somewhere will spot those
stupid errors, provided you have sufficient eyes on the code.

The idea that people should be discouraged from working on FOSS projects
unless they plan to commit to one project is against the principles of open
source, namely that it is open. In fact, if this were done over the course of
a 3-4 year degree, it's likely that each student would stay with one project
anyway, and very likely they'd continue contributing after graduation. I agree
that a "one-off" exercise would just lead to an influx of inexperienced coders
and "do my homework" questions on mailing lists, but that's not what I had in
mind at all. Ultimately, any university implementing such a programme should
be thinking about the net benefit to the community, and doing things like
penalising students for filing duplicate bug reports to mitigate possible
negative consequences.

