First, it's actually not true at all that Schneier has "long argued that the process of finding vulnerabilities in software systems increases overall security". For a good long chunk of time in the not-so-distant past, he had exactly the opposite perspective. For instance, when the Code Red worm was released, Schneier blamed the team that found the vulnerability:
> We shouldn't lose sight of who is really to blame for this problem. It's not the system administrators who didn't install the patch in time, or the firewall and IDS vendors whose products didn't catch the problem. It's the authors of the worm and its variants, eEye for publicizing the vulnerability, and especially Microsoft for selling a product with this security problem.
Note the language: finding the vulnerability.
There's an insidious pattern among security pundits regarding disclosure. Disclosure tends to be A-O.K. when it's done by friends, or friends of friends. So, for instance, no vulnerability blessed by Elias Levy (Bugtraq) was ever going to be criticized, because Elias is the nicest guy and lots of people had met him and liked him. But when the scruffy-haired kids at eEye published advisories, look out! Schneier is far from the only person to turn the disclosure debate into a clique defense mechanism, but it's good to recognize that his ethics in this case are situational.
Second: while I (a) agree with Schneier's take on vulnerability markets and (b) have written about it in many places including here, I'm not sure what the point of harping on it is. There is nothing you can do about vulnerability markets except to compete with them. It is after all very easy for Schneier, who does not do vulnerability research, to suggest that there's something sleazy about taking money in exchange for vulnerabilities. But researchers rightly think there's something sleazy about the demand that they work for free. The researchers will win this argument.
"The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software". No. That's what we want the point to be. But as adults, and, clearly, businesspeople, we should be able to evaluate the world for what it actually is. Vulnerability markets are a problem, but they're a problem created by insecure software, not by researchers. We're not going to solve the problem by moralizing.
You're probably referring to http://www.schneier.com/crypto-gram-0108.html#1 where Schneier criticizes eEye for publishing (not finding) a vulnerability with potential for a large-scale worm outbreak while selling a product that protects against that vulnerability. The blame is for leaving everyone but their own customers out in the cold, not for finding the vulnerability. The blame for the flaw itself gets directed at MS, where it belongs.
This quote from The Register sheds some light on this:
> eEye makes several good security products for Windows and IIS, and has been responsible for finding and aggressively publicizing a number of holes in Microsoft products, especially IIS.
> But the business of searching for and publicizing security holes while at the same time selling the solutions is a tricky and controversial business, not unlike the model pursued by anti-virus companies. We note, for example, that eEye has yet to publicize an IIS hole that its SecureIIS product won't defeat. Their discoveries inevitably support the claim that SecureIIS is a very wise investment.
eEye discovered MS01-033 (the flaw later used by Code Red) and was thanked by Microsoft in their advisory for it. Schneier, in the very post we're responding to here, claims to stick up for publishing vulnerabilities; it was "the only way" to handle them.
But of course, Schneier didn't come to his current opinion easily; no, he detoured through a prolonged period of using his status in the industry to slander companies who handled the vulnerabilities they found with their own labor in ways he disapproved of.
The Register piece you cited doesn't even make logical sense. "Oh ho!", it trumpets. "I see eEye has yet to release a bug their own product doesn't defend against! Surely, if they were ethical, they would cripple their own product to give competing products a fighting chance!"
People who don't practice vulnerability research have a very bad habit of incorrectly attributing vulnerabilities. Vulnerability researchers don't create them.
Historically true, but will it remain so?
Schneier's point is that the existence of vulnerability markets means that there are incentives for there to be "vulnerability researchers" who are selling the bugs that they themselves created.
It is the security version of, "I'm gonna code me up a minivan!"
The Venn diagram of vulnerability researchers and product developers has only a sliver of overlap. Most researchers are not professional developers.
No vulnerability that anyone knows about has ever been attributed to product sabotage.
If there exists an incentive to sabotage products on behalf of governments or organized crime, that incentive exists with or without formal vulnerability markets.
Markets allow vendors to participate and discover vulnerabilities. They are in that sense a uniquely bad setting for internal saboteurs to sell access to code; there is a non-negligible chance that the organization who ends up with your work will be able to "git blame" you.
I'm not a supporter of vulnerability markets, but this strikes me as a particularly dumb argument against them.
Historically, economic incentives did not encourage selling vulnerabilities that you yourself created, and we have no demonstrated history of vulnerabilities being created for sale. That's not surprising given the lack of incentives, and it's not evidence that it won't happen in the future.
But disgruntled IT people do all sorts of remarkable stuff. Take the case of Terry Childs, who locked everyone else out of San Francisco's network. Or a friend of a friend of mine who took out his frustration at a previous company by translating a chunk of their code to Latin and then implementing his project in Latin. (Yes, there is a Perl module that lets you program in Latin. No, this is not recommended in production code...)
There are a lot of IT people. And some do stupid things. Or interesting things. For instance there is a persistent rumor that http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html was not hypothetical, and that Thompson was actually observed using it in the wild. Really cool security hole. (And good luck doing a "git blame" for a bug that appears nowhere in your source code.)
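For anyone who hasn't read the lecture: the trick is roughly the following. This is a toy sketch in C with hypothetical names, not Thompson's actual code; a trojaned "compiler" recognizes two inputs by their content.

    /* Toy sketch of the "trusting trust" attack from the linked lecture.
     * Hypothetical simplification, not Thompson's actual code. */
    #include <stdio.h>
    #include <string.h>

    static void compile(const char *src, FILE *out)
    {
        /* Stage 1: compiling login? Inject acceptance of a master
         * password ahead of the legitimate check. */
        if (strstr(src, "check_password"))
            fputs("if (!strcmp(pw, \"backdoor\")) return 1;\n", out);

        /* Stage 2: compiling the compiler itself? Re-emit stages 1 and 2
         * into the new binary, so the attack survives even after the
         * compiler's own source has been cleaned. */
        if (strstr(src, "static void compile"))
            fputs("/* ...re-emit stage 1 and stage 2 here... */\n", out);

        fputs(src, out); /* then emit the honest translation of the source */
    }

    int main(void)
    {
        /* "Compile" a fragment of login's source; note the injected line. */
        compile("int check_password(const char *pw) { /* ... */ }\n", stdout);
        return 0;
    }

Once the trojaned binary exists, every trace can be removed from the compiler's source and the backdoor still propagates through each rebuild, which is exactly why source-level attribution finds nothing.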
But you're right that it is not the best argument against vulnerability markets. Given the quality of most companies' code bases, there is no need to add security holes; it is easy and much safer to find the ones that are already there. However, when source code is leaked for private analysis and the bugs found go unreported, the result is more 0-days and bugs that never get fixed. This makes us all less secure.
Of course, with the increasing interest in compromising systems, this is happening regardless of whether or not there is a formal market. The present is insecure, and the future is guaranteed to be even less secure.
Vulnerability researchers in the 00's have not been particularly hindered by lack of access to source code. It turns out that when you actually start tooling up, having access only to compiled C code isn't much of an impediment.
I hate it when Schneier talks about things he knows very little about, because he tends to be listened to and treated as an expert when he might not really know much. This is one of those cases. A lot of what he describes is clearly assumption, and hard facts are difficult to come by, but because he hasn't labelled it as such, many people will treat his post as a primary source because of who he is. I don't blame Schneier for this, but it's important to remember he's just a guy with a blog writing an article about his thoughts. He doesn't sell vulnerabilities, and he's not active in the marketplace.
I'd really love to hear your thoughts on what's wrong in his view. I tend to agree that he's wrong, but I can't put my finger on why, honestly. Personally, the zero-day market is beginning to look very interesting to me; I went from reverse-engineering things in public, to working as a security consultant, to now working as a software dev. After everything I've seen of the security world, it seems to me that the vulnerability market is the most interesting, lucrative part of the industry.
I just wish that there was more information available; behind-closed-doors meetings introduced by friends-of-a-friend seem to me to not only be dangerous (to me and everyone else) but inefficient from a market perspective.
Are you disagreeing with his assertion that the 0day market creates an incentive for internal code sabotage?
I agree that that part is speculative, but I think his major point (as I read it) is that the monetization of exploits by third parties (bad guys, govts, etc) is predicated on the value of keeping the software exploitable, to the detriment of all but those on the high side of the informational disparity curve.
I agree with him that that is unfortunate, but it seems like business as usual to me, in every industry everywhere.
> Are you disagreeing with his assertion that the 0day market creates an incentive for internal code sabotage?
No. I'm disagreeing with his speculation being stated as fact with nothing to back it up. This is something he does a lot. I realise he's not the most technical author, but a link would go a long way.
The monetization of exploits has been around for a long time, a lot longer than OWASP, and longer than people have had mature software security programmes. Whatever incentive exists hasn't, so far, resulted in any public incident I can clearly think of. So I would say that while the incentive may exist, the relationship damage if a government were caught buying an exploit for a deliberately introduced bug makes it too high-risk a purchase for the government in question. I admit there may be more luck in finding a private-sector customer, or even some intermediary, but again, we're not aware of this ever happening. This market has existed for a long time, and if the incentive were real I would have expected something high-profile by now.
I am happy to be proven wrong on this one, so if anyone has any examples please do call me out on this.
Exactly which vulnerability market participants would buy and hide code-sabotage flaws? Oh, right, the same ones who were buying vulnerabilities before ZDI existed.
> "security for the 1%." And it makes the rest of us less safe.
Actually it seems like the opposite. The chances that the government or other entity with a purchased exploit is going to hack you is very small. The targets there are the 1%.
If, instead of being sold on the black market, the exploit is disclosed to the public, worms can be written, your grandparents' unpatched computer could be taken down, and the overall impact on the average Joe will be much higher.
If he's right about the incentive for internal code sabotage, I wonder if this will strengthen the security perceptions of open source software. Particularly strongly curated open source software.
He somewhat alludes to this with his comment:
"No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage."
Have a look at the Underhanded C contest[1] and see which of those entries (suitably scaled up, possibly spread over several commits by different people) would be likely to be spotted by OSS maintainers or contributors.
I'm not sure "given enough eyeballs, all bugs are shallow" applies to actively malicious contributions; consider the sketch below.
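To make the genre concrete, here's a made-up example in the underhanded style, invented for illustration (not an actual contest entry): a bounds check that reads as correct in review but is defeated by a signed length.

    /* Hypothetical underhanded-style bug, invented for illustration. */
    #include <string.h>

    #define BUFSZ 64

    void copy_record(char *dst, const char *src, int len)
    {
        char buf[BUFSZ];

        /* Looks safe: reject anything longer than the buffer. But `len`
         * is a signed int; a negative value (say, from a truncated read)
         * passes this check, and memcpy's size_t parameter then converts
         * it into an enormous unsigned length -- a stack smash hiding
         * behind a one-word "typo" (int where size_t belongs). */
        if (len > BUFSZ)
            return;

        memcpy(buf, src, len);
        memcpy(dst, buf, len);
    }

Spread a couple of changes like that across unrelated commits by different authors, and "many eyeballs" has very little to grab onto.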
In some sense, proprietary software might be more secure, exactly because it doesn't accept contributions from anyone, and can much more easily background-check the people who do work on it. I'm not sure how common OSS projects that don't accept contributions are, but I'm sure they exist.
Why not sell zero-day exploits to security vendors? Working in network/information security, I can't count how often we're hit by a virus days before McAfee et al. have definitions for it. While I'm all for disclosure and making the developers patch their code, it's really quite irritating how long it takes before AVs catch up.
Well then, maybe they should. Vendors are complaining that they aren't getting disclosures for free anymore. Maybe they need to start being the highest bidder, so that exploit hunters get paid.
> ...a disclosed vulnerability is one that -- at least in most cases -- is patched. And a patched vulnerability makes us all more secure.
> This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it's even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they're working on -- and then secretly sell them to some government agency.
When market forces are working against your interests, it's the worst kind of news. For a business, it means bankruptcy. For a nation or empire, it means decline and dissolution.
I thought I'd do one comment per subject. So, to take this apart (and bear in mind I'm referring to personal experience or the experience of people I know):
> Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities.
It's not a new market. Exploits have been bought and sold (as far as I'm aware) going back into the 90s. My visibility of the market ends there, but it may well go back further. Companies like ZDI, Immunity and Rapid7 have been in this market for a long time (doing different things), but it shows that there has been a market for a while.
> This market is larger than most people realize, and it's becoming even larger.
I suspect that the impression of the market becoming larger is due to more of the market being publicly visible, which in turn may generate more interest and growth, or may not.
> In fact, it took years for our industry to move from a norm of full-disclosure -- announcing the vulnerability publicly and damn the consequences -- to something called "responsible disclosure": giving the software vendor a head start in fixing the vulnerability.
This is quite a shocking statement for me to see. The industry (if you want to call it that) didn't push for "responsible disclosure", the vendors did. "Responsible disclosure" was nothing more than a marketing statement to shift the blame from the origin of a software defect to the person disclosing it if they didn't toe the vendor line on waiting months or years for a fix and keeping shtum at the time.
> This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched.
This assumes that software security is a zero sum game, which it isn't. Just because a bug exists and a zero day exploit exists, doesn't mean that you'll be exploited. A zero day exploit in VLC doesn't automatically result in compromising your system if your usage pattern for VLC doesn't intersect with the exploit pattern.
> And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they're working on -- and then secretly sell them to some government agency.
> No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.
This is utter FUD. Vendors like Microsoft routinely work with governments directly to provide secure products, while the governments in question have their own teams to find vulnerabilities and develop their own exploits. They don't need a vendor programmer putting bugs in for them; it would massively damage the relationship if they were caught.
> With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive on software vendors to ensure the security of their products.
Except for the fact that we have legislation and regulation at the end-user side that makes software security a requirement for a vendor, something we didn't have 20 years ago. Many larger vendors already have a fully integrated process for security management. Companies like Microsoft have even gone public with the Security Development Lifecycle.
I would make the counterpoint that public knowledge that the market exists makes responsible vendors even more determined to build more secure code. A vendor without a software security capability is no different whether or not there's a public market, as the bugs will still exist and going from bug to exploit isn't necessarily hard when the vendor hasn't considered security. A vendor with a mature software security capability will build in layered defences to ensure that the cost of exploit development grows higher.
Schneier writes "This is very different than in 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits; and a 2010 survey implied that there wasn't much money in selling zero days. The market has matured substantially in the past few years."
I believe Schneier's point is that it's now an active market where it's much easier for buyers and sellers to meet, not that this is the first time exploits were for sale.
To think that Schneier believes that there was no market would be to think that Schneier did not accept the reported prices listed in the linked-to PDF by Miller.
He uses the words "new market" to describe it four times in the article. It's very clear that to him it's a new market. Having said that, his definition of new may be different to yours or mine; 5 years may still be new if you've been in an industry for several decades.
Immunity Canvas has been on sale since the early 2000s, same with Core Impact. A lot of their marketing went into releasing 0day exploits. Exploit trading is as old as the exploits themselves. The value of an exploit is based on its utility and effectiveness. A remote code execution in Windows is worth a fortune because Microsoft's strategy has been specifically to increase the cost and effort required to develop a reliable exploit.
And of course, 0-day exploit trading isn't news to Schneier; in the early-to-mid '90s, many serious Unix vulnerabilities were traded among vendors and admins on secret mailing lists.
Of course, those lists were the laughingstock of the hack scene, and posted to tfile BBS's right next to cDc zine issues. They made the list participants feel a little more special, though, and they did keep vulnerabilities from being disseminated... to the majority of system operators.
What is kinda hilariously terrifying about the legitimization and anonymous nature of selling 0days is that it provides a venue for creators of software that people rarely pay for and that has few contributors (free tweet or VoIP clients, say) to purposely build in mistakes, gain traction, and then sell the exploits.
It would be a lot easier to agree that security researchers shouldn't be selling vulnerabilities to the highest bidder if, say, they were offered $250k to $1mm each by the vendors involved. Barring that, it's hard not to think this is mostly very wealthy companies bitching that researchers are declining to work for free to fix the companies' inability or lack of desire to write secure code.
It feels like the companies first tried to hold a bulwark and claim that vulnerabilities are worth nothing. Now that fb, google, and mozilla offer bug bounties, we're starting to establish a market price. Like the old joke apocryphally attributed to Churchill:
"Churchill: "Madam, would you sleep with me for five million pounds?"
Socialite: "My goodness, Mr. Churchill... Well, I suppose... we would
have to discuss terms, of course... "
Churchill: "Would you sleep with me for five pounds?"
Socialite: "Mr. Churchill, what kind of woman do you think I am?!"
Churchill: "Madam, we've already established that. Now we are haggling
about the price”