
In reality, exploit sellers and exploit buyers are engaged in discovering the value of security exploits. That the value of those exploits might be pinned to unethical, immoral, unlawful, or belligerent conduct is irrelevant; markets have to operate in the real world, and we cannot stipulate that the bad actors absent themselves from the real world.

So while I personally find the sale of exploits distasteful†, I think Soghoian is in the weeds with this argument about exploit developers being "modern merchants of death". Exploits are nothing like conventional munitions. They're extremely scarce and their extraction from software imposes no intrinsic costs on the rest of the world.

In other words: vendors can simply outbid intelligence agencies for their bugs, or, better yet, invest more heavily in countermeasures that moot those bugs. Unlike guns, which can be manufactured so cheaply and at such scale that no one organization could hope to stem the tide through markets, exploits are scarce enough that vendors can stop immoral abuses of their own software simply by participating more actively in the market.

$200,000 sounds like a lot of money, but it's under the cost of one senior headcount at a major software vendor, and vendor cash flows are expressed in high multiples of their total headcount cost. The higher the prices go, the more incented vendors are to stop vulnerabilities at the source.
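As a rough back-of-envelope sketch (the salary and overhead numbers below are my own assumptions, purely for illustration):

    # Toy comparison: exploit price vs. one fully loaded senior headcount.
    # All dollar figures here are illustrative assumptions, not vendor data.
    exploit_price = 200_000

    base_salary = 250_000        # assumed senior engineer salary at a major vendor
    overhead_multiplier = 1.8    # assumed loading for benefits, equity, office, etc.
    fully_loaded = base_salary * overhead_multiplier

    print(f"Fully loaded headcount: ${fully_loaded:,.0f}/yr")
    print(f"Exploit price vs. one headcount: {exploit_price / fully_loaded:.0%}")
    # => roughly 44% of a single senior hire under these assumptions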

Even today, the whole technology industry is captivated by the misconception that vulnerabilities somehow cost some fraction -- maybe 1/3, maybe 1/4 -- of a senior full-time dev salary. After all, they're generated by people who would otherwise be occupying that kind of headcount. And for the most part, that misconception has been bankable, because the best exploit developers almost as a rule suck at marketing themselves.

Every other price in the application security field follows from this misconception, from headcounts and org charts at vendors to assessment budgets to shipping schedules for products to the salaries of full-time application security people.

It's all built on a misconception; that misconception creates a market inefficiency; people like (allegedly) the Grugq are arbitraging on that inefficiency. But the solution to a market inefficiency is to eliminate it, not, as Soghoian implies, to install umpires around it and erect bleachers and a jumbotron so we can watch it more carefully.

I see this story as evidence of chickens coming home to roost, not as some dangerous new ethical lapse on the part of the security industry.

† This is an easy moral stance for me to take because I don't invest any serious time into developing exploits for the targets on this price list.




I disagree: the more demand there is for exploits, the more exploits there will be. If there is enough demand, we will even start to see employees on the inside of these companies purposely creating them.

Companies do not directly lose money if their products are exploited. How many thousands of exploits have been developed for Windows? They're still doing just fine.

Software is buggy and exploitable by its very nature. The cost to secure a large software project is orders of magnitude higher than the cost to find a flaw and exploit it.

By participating in an open market for exploits and greatly raising demand for them, the government is making us all less secure. "This is why we can't have nice things".


Exploits do not come out of nowhere. They can't be scaled with demand.

The fundamental moral problem with the market isn't the value being imputed to exploits; it's the lack of value imputed to resilient software.


> Exploits do not come out of nowhere. They can't be scaled with demand.

Actually, they can be and are[1]. Not so much the exploit-dev part, but the bug hunting is getting more automated.

[1] - http://www.scribd.com/doc/55229891/Bug-Shop
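For a concrete sense of what "more automated" means at its most basic, here is a toy mutation fuzzer in Python; parse_record is a made-up stand-in target, not anything from the linked deck:

    import random

    def parse_record(data: bytes) -> None:
        # Deliberately buggy toy parser: it trusts a length field in the input.
        if len(data) > 2 and data[0] == 0xFF and data[1] > len(data):
            raise IndexError("length field exceeds buffer")

    seed = bytes([0xFF, 0x02, 0x41, 0x42])
    for i in range(100_000):
        sample = bytearray(seed)
        # Flip one random byte and see whether the parser falls over.
        sample[random.randrange(len(sample))] = random.randrange(256)
        try:
            parse_record(bytes(sample))
        except Exception as e:
            print(f"crash after {i} mutations: {bytes(sample).hex()} ({e})")
            break

Real pipelines add coverage feedback, crash triage, and exploitability analysis on top of a loop like this, but the point stands: the searching scales with machines, not people.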


The point I'm making is that people have to create the defects in the first place. Contrary to some claims on these threads, most code has a finite number of exploitable defects.


Ah right, got you now. I was referring to the scalability issue.

Of course, the "great" thing about code defects is that updates are just as good at introducing new bugs if the developers don't have proper security processes in the first place.


The large strategic moves major vendors like Microsoft, Adobe, Google, and especially Apple (with the iOS platform) are making seem to be doing a good job of killing whole subclasses of vulnerabilities, and of driving up the cost of exploitation (above and beyond flaw discovery).

Your point about software maintenance introducing a continuous stream of new flaws is well taken, but ultimately I think vendors who take this problem seriously are in a very good position to do something about it.


You're right. The bigger boys are in various stages of getting it together; it's the ones without immediate column-inch impact (Oracle, SAP, etc.) that aren't quite there yet, and then you've got everyone else, who lack the resources or interest to pull it off.


And again, economics is firmly in our corner here, since the effort to build exploits for exotic targets isn't that much less than the effort to target, e.g., Android... but the incentive to build those exploits is far lower.


> Exploits do not come out of nowhere. They can't be scaled with demand.

Why not? All large software projects have flaws. Doesn't more demand for exploits mean more people are going to look for and find them?

> The fundamental moral problem with the market isn't the value being imputed to exploits; it's the lack of value imputed to resilient software.

I think it's both. People shouldn't be selling exploits to entities that will use them offensively. And vendors largely don't care about security as much as they should.


More demand does cause more people to look for exploits. But since there's a finite number of vulnerabilities to be extracted from code, I'm not sure how that's relevant.


I do not understand this. You seem to be saying that the more efficient and open the market is, the greater the demand -- even to the point of having employees create holes on purpose (one supposes, to be able to sell them?).

But couldn't employees already create holes on purpose to sell them? With an open market, perhaps I know that there is a bug in IE that involves flash and allows easy access to root. It just sold at 4 million bucks. With a closed market, I may suspect the same thing, but I don't really know for sure. The thing is, the vulnerability, the market, the sale, and the exploit still exist regardless of whether I know about it or not. The only question here is whether other people are in on what's going on in the marketplace.

"The cost to secure a large software project is orders of magnitude higher than the cost to find a flaw and exploit it."

Yes, that is the current state of affairs. But the current state of affairs is also that there are all sorts of vulnerabilities the average person doesn't see. It's not the cost to secure a large project that matters; it's the relative cost of the exploit to the customer base versus the current margin to the software provider. That's the way it should have been working all along. If you sell me a product for a buck and it steals my bank account -- or even if there is a one-in-a-million chance of it stealing my bank account -- I'm not buying it.

Right now vendors create walled gardens and put everything in there. What probably should be happening is that separate physical devices handle different types/values of things. My iPad should probably never both run Angry Birds in Space and control my brokerage account. That's simply too many eggs in one basket. Vendors get away with this because they are trying to hide all of these risks. It's my belief that this practice has to stop. Immediately.

Because as technologists we love to generalize, we are always trying to create multi-purpose walled gardens. But that's not the way anything else works in the world. My wallet does not also function as a gaming device, something I wave around to exercise with, and a device for meeting girls. I don't take all the physical cash I own to Starbucks and build little towers out of it. We keep things in physically separate areas for a very good reason -- it decreases risk. (And we accept various kinds of risk for various kinds of things.) Opening up this market will only cause an evolution that has needed to occur for a decade or more: the end of the general-purpose computer.


> But couldn't employees already create holes on purpose to sell them?

Most corporate programmers would have no idea who would buy such a thing or what the right price is. Making that market clear and making transactions easy should increase production. That's what every commodities market does. As an example, consider the Chicago Mercantile Exchange, the early history of which is well described here: http://www.amazon.com/The-Merc-Emergence-Financial-Powerhous...


Thanks for the link. Remarkable Amazon overview: "Nor does the author offer any particularly illuminating perspectives...his frequently fawning account of the CME's origins and first 50 years as an arena for commercial hedgers and venturesome speculators amounts to little more than a family album in which forgotten knaves are as fondly and foolishly remembered as hitherto unheralded princes and their lightweight aides. Remarkable mainly for its consistently graceless style"

Ouch. Remind me never to have that guy review any of my books.

I understand what you are saying, and I understand why it's feasible to keep it small and concealed and to restrict participants to certain customers, at least early in the game.

What I'm saying is that this state of affairs is temporary at best. Forbes is out with it. There will be many more articles. The prices are already in the 6-figure range. Soon they'll be at seven figures. No matter what we'd like the market to be like, any programmer with Google access should easily be able to determine he could make himself a millionaire just by releasing a vulnerability into the wild. Whether that information is easy to find right now or not is moot. It'll get easier. We're all connected. Supply meets demand. No amount of wishing it weren't so is going to change any of that. Works this way for illegal drugs, will work this way for security vulnerabilities.

I think the question here is whether to shun, outlaw, shame, and hide this kind of stuff or to embrace it. In my opinion, we have enough examples that the first choice doesn't work so well, while the second choice benefits the rest of us even if we find the entire affair distasteful.

But I believe the greater point is that there are so many people affected by this hidden market that keeping information from them should be a crime. Yes, I wish we could live in a world where we could slap a big old Google, Microsoft, Amazon, or Apple logo on something and know that it is safe. But that world doesn't exist and it's never going to exist. Might as well start living in the world we find ourselves in.


Yeah, the book is definitely an in-house history, but it does a fine job of showing how a commodities market emerges and the way it shapes commerce as long as you skim a little.

Illegal drugs are a poor analogy; there are a lot of participants in the market, a lot of small transactions, and it can be a victimless crime. If you are looking to buy a little weed, your friends probably don't care.

A better one is high-end weapons: missiles, for example. That's a market that's relatively small and obscure, and the prices are high. State actors can get away with trafficking, but individuals run a substantial risk of running into sting operations and other law enforcement activities. Further, as long as the market is widely reviled, random citizens are likely to report suspicious activity.


So long as there's a significant penalty for those actually writing the software who participate as sellers in this market, I don't see why it can't be public. However, I don't think it would be a good idea to let someone who writes the software cash in, without penalty, on million-plus-dollar bonuses for back doors he writes himself. That seems analogous to insider trading.

Forfeiture of all money so gained, plus a stiff jail sentence, should be enough to discourage all but those who are already doing this without public knowledge of the market.


The more recent book _The Futures_ covers the same subject and has most of the same flaws, excepting perhaps that it takes a more measured tone about the schemers who rigged the early commodity markets.


Nonsense. The demand isn't created by governments seeking to purchase exploits; it's created by criminal organizations willing to use exploits for botnets, identity theft, and other monetization models. That ZDI, VUPEN, etc. are purchasing exploits only improves security.

View it like this: the exploits are out there, discovered and used by black hats. The idea that a white-hat security researcher is in any way decreasing the level of security by selling an exploit to Google, or Apple, or any third party is silly.

The more security researchers there are evaluating software, the more secure we'll be. Zero-days will always be around, simply because the incentives still favor the black hats. Until that's reversed, we'll be playing catch-up.

The government buying exploits has nothing to do with why we can't have "nice things." We can't have "nice things" because this is a hard world, with people who won't blink at taking advantage of poorly designed and poorly coded software.


I am not an expert economist, but I do think that suggesting Google simply pay $200k for exploits would fix things is far too simple a solution. It would only make exploits even more desirable and make selling on the black market even more lucrative.


Exploits are already desirable. Google paying a respectable sum for them won't make them less desirable.

An exploit for a widely used application, especially a client-side application with a good reputation like Chrome, is extremely valuable to someone wanting to create a botnet, etc. There's no way to avoid bugs in software, but rewarding security researchers can help mitigate the risk. And security is all about mitigating risk in a cost-effective manner.
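To put "cost-effective" in crude expected-value terms (all probabilities and dollar figures below are invented for illustration):

    # Toy expected-value framing of a bug bounty decision.
    # Probability and cost figures are invented for illustration.
    p_weaponized = 0.10        # assumed chance an unbought bug gets used in the wild
    breach_cost = 5_000_000    # assumed cost of the resulting incident
    bounty = 200_000           # price to buy the bug from the researcher instead

    expected_loss = p_weaponized * breach_cost
    print(f"Expected loss if unbought: ${expected_loss:,.0f}")
    print(f"Bounty is cost-effective: {bounty < expected_loss}")
    # => $500,000 expected loss vs. a $200,000 bounty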


Exactly this.

If Google started paying $200k to match the spot price on the exploit market, the market would react by pushing the price up. Soon Google would be paying $300k, then $400k.

But at some point that price appreciation has to stop, because eventually there will be no counterparties left who see $500k or $700k as a rational price to pay for an exploit.

When that happens, one of the legs of the vulnerability market will get knocked out; the market will have discovered some approximation of the true value of an exploitable security vulnerability (again: that value is based on immoral behavior, but reality doesn't care about that). Google will pay it, because it can, because the final price of exploitable vulnerabilities is certainly a tiny tiny fraction of their total overhead, and because Google has an advantaged long-term position that will enable it to control the supply of exploits and eventually bring its costs back down.
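A toy model of that dynamic (every valuation below is invented for illustration):

    # Toy ascending auction: the price climbs only while competing buyers remain.
    # All valuations are invented to illustrate the argument.
    vendor_ceiling = 1_000_000                    # the vendor can rationally outbid everyone
    buyer_ceilings = [300_000, 450_000, 700_000]  # assumed max rational prices for other buyers

    price, step = 200_000, 50_000
    while any(c >= price for c in buyer_ceilings):
        price += step                             # competition bids the price up...

    assert price <= vendor_ceiling                # ...but the vendor is still comfortably in
    print(f"Competing buyers drop out around ${price:,}")

Once the competing legs drop out, the vendor faces no bidding pressure, which is the advantaged long-term position described above.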



