
The Need for Open Research in Software Security - wglb
http://breakingbits.net/2015/02/05/open-research-software-security/
======
dguido
Sheesh people, this is the second time this week that this topic has come up
and no one mentioned the DARPA Cyber Grand Challenge! It was on 60 Minutes
this weekend, for crying out loud. tl;dr: DARPA is challenging companies to
develop and improve upon the core ideas proven in SAGE.

[https://news.ycombinator.com/item?id=9012051](https://news.ycombinator.com/item?id=9012051)

[http://www.cbsnews.com/news/darpa-dan-kaufman-internet-security-60-minutes/](http://www.cbsnews.com/news/darpa-dan-kaufman-internet-security-60-minutes/)

Dylan, seeing that you're in NY, I'd be happy to talk this over with you over
a few beers at the next NYSEC:
[https://twitter.com/nysecsec](https://twitter.com/nysecsec)

~~~
dsacco
Wow, this is fantastic. Thanks for the link. I'll be reading up on this now.

Also, I'll take you up on that offer. I've been meaning to make an appearance
there for a while.

------
w8rbt
Nice article. The reality is that mainstream IT security is driven by audit
and compliance, not by engineers asking "how can I break this?" or "how can
system X be more secure?" Most companies want to be XYZ compliant (and nothing
more than that). So they follow checklists, continually get hacked, and really
don't care. They'll fire the C-level executive responsible and hire another,
but nothing really changes. Welcome to modern IT security.

~~~
dsacco
Author here. Yes, that's a fair summary of what's happening. There's more too
- investing in R&D for security is risky, tedious and sometimes doesn't result
in fruitful tools or findings. The companies that can afford to aren't opening
up their methods, and the rest can't really afford to develop the novel
approaches necessary for real progress.

~~~
dguido
Sorry, but I'm not certain how you make a business model out of that idea. To
win, you need to ensure that people can build and use these tools to secure
the world AND make money off it.

Second, investing in security R&D is not that risky if you know who to partner
with. There are many organizations whose sole purpose for existence is to fund
foundational->applied R&D across a variety of fields. You don't seem to have
mentioned any of these opportunities in your blog post.

Third, if you're a software developer and you really care, the tools already
exist to write secure code. However, there are few to no incentives to use
them! I actually disagree somewhat that more R&D is needed on the security
side. It's an inaccurate characterization of the state of the art in security
and the degree to which people take advantage of it.

For example, Mayhem has found and reported hundreds of bugs in Debian and did
the work to submit bug reports with reproducible crashes and PoCs. I think
fewer than 10 of the bugs they reported were fixed. I can't find the link
right now, but the exact systems you referenced are being used out in the open
to help people and no one is listening.

~~~
dsacco
Hey Dan, thanks for responding.

I can't address your first point because I agree with it - part of the reason
I'm pessimistic is because there isn't a strong business model.

However, I think that it's hard to know who to partner with. For all the
organizations that you can partner with to produce fruitful research, you have
many who are trying to make a quick buck on the trendiness of security. We
don't have a lot of Mayhems. We have a lot of Acunetixes.

I agree that the methodologies and tools required to write secure code already
exist. However, I don't think it's fair to place all the onus on software
developers. Even if they avoid C/C++, they still face modern pitfalls in just
about any language.

Of course I think more research should be done on security software; that's
why I wrote this blog post. What my post doesn't address, however, is that I
also believe more research should be done on educating developers. It doesn't
matter that all the tools to help them exist if few of them bother to look for
them. Research could help in this regard too: education.

Finally, I think you're much more familiar with Mayhem than I am, but I call
into question the bugs Mayhem found. How many were high severity and/or
actually exploitable? (I'm actually asking because I've seen conflicting
reports from both Mayhem and Michal Zalewski on the matter).

When I say being used in the open, I mean in a similar way to Project Zero's
activities. Mayhem was used once on Debian as far as I'm aware, and to my mind
it looks more like publicity than an ongoing effort to find bugs.

~~~
dguido
> Finally, I think you're much more familiar with Mayhem than I am, but I call
> into question the bugs Mayhem found. How many were high severity and/or
> actually exploitable? (I'm actually asking because I've seen conflicting
> reports from both Mayhem and Michal Zalewski on the matter).

Exploitable? Every. Single. One. Each came with a unique PoC demonstrating
control of execution.

> However, I think that it's hard to know who to partner with. For all the
> organizations that you can partner with to produce fruitful research, you
> have many who are trying to make a quick buck on the trendiness of security.
> We don't have a lot of Mayhems. We have a lot of Acunetixes.

This sounds like you simply haven't investigated where or how to get money for
R&D!

------
galapago
We are building an open source symbolic executor based on BARF [1], starting
by reimplementing pathgrind [2], which does not work on modern Linux
distributions because current libc versions use SSE2/3 instructions. Once we
have a beta, we plan to use it to submit bugs to the Fuzzing Project [3].

[1]: [https://github.com/programa-stic/barf-project](https://github.com/programa-stic/barf-project)

[2]: [https://github.com/codelion/pathgrind](https://github.com/codelion/pathgrind)

[3]: [https://fuzzing-project.org/](https://fuzzing-project.org/)
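For readers unfamiliar with the approach, here is a toy sketch (hypothetical, not pathgrind's actual design) of the concolic idea behind dynamic symbolic executors: run the target on a concrete input, record the branch outcomes, then hunt for inputs that drive execution down previously unseen paths. A real engine negates path constraints and hands them to an SMT solver; this sketch brute-forces a candidate pool instead. The `target` function and its "bug" are invented for illustration.

```python
def target(x):
    """Toy function under test; one deep path raises the 'bug'."""
    if x > 100:
        if x % 7 == 3:
            raise RuntimeError("crash")  # the bug we want to reach
        return "deep"
    return "shallow"

def trace(x):
    """Branch outcomes observed during a concrete run of target(x)."""
    path = [x > 100]
    if x > 100:
        path.append(x % 7 == 3)
    return tuple(path)

def explore(seed, candidates):
    """Keep only candidates that exercise new paths; collect crashing inputs.
    (A real engine would solve the negated path condition instead of searching.)"""
    seen = {trace(seed)}
    crashes = []
    for c in candidates:
        p = trace(c)
        if p in seen:
            continue
        seen.add(p)
        try:
            target(c)
        except RuntimeError:
            crashes.append(c)
    return crashes

# Starting from the shallow seed 0, path exploration reaches the crash:
# explore(0, range(200)) finds the input 101 (101 > 100 and 101 % 7 == 3).
```

Each crashing input found this way comes with a concrete reproducer, which is essentially what Mayhem-style bug reports ship as PoCs.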

~~~
wglb
What is your take on
[http://blog.regehr.org/archives/1217](http://blog.regehr.org/archives/1217)?
The author seems less than totally excited about the output of static analysis
tools in terms of bugs found.

------
jenandre
IMO, as long as the research into these tools is being funded by corporate
entities (e.g., Microsoft), there's little hope of any open research.

Fortunately, there's money to be had for open source and research projects
that are willing to organize and look elsewhere for cash. Look at projects
like Bro and Suricata -- commercial-grade security tools which are government
and education funded.

~~~
rmac
The problem with open source security tools (e.g., Bro, Suricata, Brakeman) is
that they require security expertise to operate, continually. In my experience,
many small/medium organizations that actually care about security don't have
such expertise in-house and can't find good security people to hire. This
limits them to buying commercial solutions, which (also in my experience) tend
to blow.

We need more security engineers, but the problem is I don't even know what
that job title requires. The author pokes fun at CISSP, but how else can I
figure out if someone is 'good' at security? They are already so rare and
mostly employed by google (joke).

~~~
dsacco
_> > The author pokes fun at CISSP, but how else can I figure out if someone
is 'good' at security?_

I'm the author.

If you are hiring a security consultant for your firm and you know how to
judge infosec skill, use a work sample and check references.

If you're hiring a security consultant to perform a penetration test or audit
for your (non-infosec) company, hire people who have a healthy mix of the
following:

1. Public, verifiable work (e.g. bug bounties).

2. Solid references and past experience with clients who themselves
understand what to look for in a security consultant. You obviously check
these references. Alternatively, a solid reference that the candidate worked
at NCC Group, Accuvant, Leviathan, etc.

3. Research in the field, such as discovering a new class of vulnerability,
publishing vulnerabilities in ubiquitous software, etc.

Prioritize #2, because not all adept security folks like to conduct research
or participate in publicly verifiable work.

Of the certifications you can have, the Offensive Security[1] certs are pretty
rigorous. For example, the OSCP is a good indicator that a candidate knows
what they're doing to offensively test a client's network. That's about it.
Almost all other certifications are run by people who have, at best, textbook
knowledge of information security. People who get the CISSP can probably
accurately describe a cross-site scripting attack to you in an interview, but
there is no guarantee they can practically find it or defend against it.
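To make the XSS point concrete, here is a hypothetical minimal sketch of the gap between "can describe it" and "can find it": a reflected XSS bug is just unescaped user input landing in markup, and the fix is one escaping call. The `vulnerable_page`/`fixed_page` handlers are invented for illustration.

```python
import html

def vulnerable_page(query):
    # Attacker-controlled `query` is interpolated straight into the HTML,
    # so a <script> payload in the query string executes in the victim's browser.
    return f"<p>Results for {query}</p>"

def fixed_page(query):
    # html.escape turns < > & " into entities, neutralizing the payload.
    return f"<p>Results for {html.escape(query)}</p>"

payload = "<script>alert(1)</script>"
```

A textbook answer recites the definition; a hands-on tester spots the missing `html.escape` (or equivalent templating auto-escape) in a real codebase.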

The other issue is that while some certifications are good, a lot of folks in
infosec just don't care for them. They can find high paying jobs in
prestigious companies without a degree _or_ a certification of any kind, so
they simply don't bother, even though they could pass it. This means that you
can't reliably throw out candidates with no certifications...which circles
back to my original recommendation. Work samples, references and public work
are the best ways to judge a candidate's talent. I'm directly aware that this
system is used at Matasano and Accuvant, and it's likely the norm at the other
"quality" security consultancies.

'tptacek would have a lot of great advice to contribute on this matter as
well.

[1]: [https://www.offensive-security.com/information-security-certifications/](https://www.offensive-security.com/information-security-certifications/)

~~~
sprkyco
+1 for anything from OffSec. I failed their OSCP once, and after some
reassessment I won't be attempting it again for another year or so. However,
many companies list, if not require, a CISSP, and CISSP-holding individuals
are hesitant to degrade the perceived value of the cert.

~~~
tptacek
Generally: a requirement that candidates hold CISSP is a strong negative
signal about the job. This observation would have qualified as "insightful" 10
years ago, but in 2015 it's verging on conventional wisdom.

------
notoriginal
This title is quite broad. There is plenty of open research into software
security; it seems you're referring to open development of fuzzing tools. I am
not really convinced that any kind of after-the-fact testing is the ultimate
mechanism for making security guarantees; it's simply inherently flawed. I'm
sure you can make a lot of good discoveries with it, but when you're talking
about 100% confidence, I highly doubt it's going to happen. And in security,
the difference between 100% soundness and anything less is huge.

------
sudioStudio64
I agree that commercialization has perverse incentives... but it's a little
ironic that we are talking about security research on commercial products.
There is an academic side to security research in computer science, but most
security research is basically poking commercial products to see where they
fail, and whether that failure can be controlled.

That being said, Microsoft should open source SAGE. It sounds cool.

