The Mercenaries: Ex-NSA hackers are shaping the future of cyberwar (slate.com)
109 points by dthal on Nov 13, 2014 | 69 comments

Also: the closing grafs in the article, about Cisco's acquisition of Sourcefire, are particularly dumb.

Sourcefire is the commercial backer of Snort, the open source network intrusion detection system (and also the owners of ClamAV). The author of this article and his sources express surprise that Cisco would pay big money for an open-source product that anyone can use.
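For anyone who hasn't used it: Snort's core artifact is a plain-text rule language, and the engine plus rules are freely available. A minimal, purely illustrative rule (the message, content string, and sid below are made up) looks like:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Hypothetical: suspicious User-Agent"; flow:to_server,established; content:"User-Agent|3a| badclient"; nocase; sid:1000001; rev:1;)
```

Anyone can run the engine and write rules like this for free; what Sourcefire sold on top was appliances, a management console, and a commercially maintained rule feed.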

Cisco paid just about 10x trailing revenue for Sourcefire, a public company that had managed to dominate enterprise network security and which competed directly with products that had been cash cows for Cisco for over a decade. Cisco has for as long as I've been in the industry --- in fact, for as long as there's been that industry --- been the single most important acquirer of network security companies. They acquired security companies with the same fervor in 1998 as they do today.

Cisco's acquisition of Sourcefire might qualify as the single least interesting story in information security in the last 5 years.

Want to make a couple hundred million dollars? You too can do what Sourcefire did: start an open source project that appeals to enterprise teams who spend monopoly money to buy products (that is, start any enterprise-relevant open source project). Get thousands of people to use it. Then start a company and hire an inside sales team. Have them call company after company and ask, "Do you use our open-source project?" Sell extra stuff to the people who say "yes".

FWIW, I think that the idea of software bugs (vulnerabilities) as a product is a scary concept and a bad precedent for overall security.

Once you have legitimate corporations whose goal is to find software vulnerabilities, combine them with delivery systems, and sell them to either specific entities (e.g. the US government) or the highest bidder, I think that the incentives for people involved in software development and testing get odd, and not in a good way.

For example, will we see these companies hiring ex-developers and testers from software product companies, as they might have inside knowledge of where products are weak?

Another example: are there now incentives for people who work in development or testing, and who perhaps aren't happy in their jobs, to sell knowledge of bugs or flaws to these companies? Given the prices paid, which could be several multiples of people's annual salaries, and the anonymity afforded to people who report the flaws, it could be a low-risk way to make a lot of money.

And then you have open source software which is heavily used in a lot of commercial products that might get attacked. With this kind of thing there's a big incentive not to report bugs to the project but to sell them to a company who has no incentive to see them fixed...

I'm uncomfortable with vulnerability markets for other reasons. But, anyways, you write:

For example, will we see these companies hiring ex-developers and testers from software product companies, as they might have inside knowledge of where products are weak?

Two things.

First, you're not clear on why this would be a bad thing. The flaws are there whether insiders out them or not. The implication in your comment is that we'd be better off with those flaws kept secret. Obviously, we'd all be happier if the vendor outed their own flaws, or if a non-"mercenary" researcher outed them for public consumption. But even private vulnerability sales have the effect of eventually burning the bug.

Second, it's a little naive to think that most flaws are known only to insiders. In fact, the advantage insiders have in getting full access to repositories is probably dwarfed by the advantage attackers have in committing entire careers to studying exploitable bugs. For most competent researchers, lack of source code is just a speed bump.

Realistically, in my experience, every large company has "skeletons" that are primarily known to insiders and of which outsiders have little knowledge.

Be that a product known not to have as rigorous a security regime as others, or a service considered "legacy" and no longer actively developed.

When offensive companies start hiring people to get access to that information to use against their prior employers, I think that's not great for overall security.

I didn't think I was implying that keeping flaws secret in the long term was desirable; I don't think it is.

Also, whilst I agree it's naive to suggest that only insiders know most flaws, I feel it's also reasonable to suggest that insiders have information which would be useful to attackers, and which could be tapped by hiring them.

As I said originally that was just one example of where I think potential problems could arise from "vulnerabilities as a product", but I'd be interested to hear what you think are the downsides to vulnerability markets.

You seem to be describing companies that are institutionally concealing serious product flaws from their customers, and suggesting that overall security as a public policy goal is improved by a strategy of just hoping that reverse engineers won't notice.

That's not what I was intending to describe. I was suggesting that insiders have inside information, and sometimes that is relevant to attacking companies, such that hiring those insiders could be useful to them.

And I'm definitely not hoping that reverse engineers won't notice; I've been in security long enough to see all my pronouncements of "you know, someone could do x", and more, come true...

I'm not sure that you've clarified anything with this comment. "Insider knowledge of information relevant to attacking software" is "insider knowledge of product flaws". Flaws need to be fixed, not concealed.

Institutional knowledge is about process as well as the software. Knowing the magic words and people can make social engineering or avoiding countermeasures much easier, even in the absence of an explicit software flaw.

Now we're playing Six Degrees of Kevin Bacon. We start out with moles inserting vulnerabilities. Then it's insiders who know about flaws. Then insiders who know about weak spots to look for flaws in. Now it's magic words to help with social engineering. At some point, these stop being important considerations for public policy.

On your second point...

Yes, but having a paid insider gives you a major advantage: you can now seed vulnerabilities and derive a predictable income stream from them.

What is scarier to me is that big money is involved here. What if you can't get a developer to be tempted by money into inserting vulnerabilities for you, and you start using a more heavy-handed approach (death threats, etc.)?

Also, by paying developers to insert vulnerabilities, you no longer need experts looking for them. Those experts are in short supply, so it might become a more viable path.

This is why I am uncomfortable with vulnerability markets...

First, I thought you were simply talking about insiders who had knowledge of targeted software. Here it seems like you're talking about moles being paid to insert new vulnerabilities.

But even then, I don't find this threat particularly credible. After all, what we're talking about here are W2 employees with social security numbers or immigration tracking committing galactically expensive torts against their employers and in all likelihood most of the Fortune 500, in addition to (in all probability) multiple felonies. How much money do you think Endgame can afford to pay these people to shoulder that risk? There's a reason this doesn't actually happen all the time.

>> There's a reason this doesn't actually happen all the time.

How can we be sure this doesn't happen all the time though? I suppose because it hasn't leaked into the media, but still...

For interest's sake: you mentioned you are against vulnerability markets. Why so?

I think the point is, it's extremely likely you'd end up in a situation where someone intentionally introduces a hard-to-detect flaw. If the flaw is then sold to a government agency: 1. The person who introduced the flaw will not suffer any consequences. 2. You've now set the precedent that it's fair game for people to intentionally leave holes in software and get paid to do so.

This seems very unlikely, since intentionally introducing product flaws is at the very least an incredibly damaging tort, not just against your employer but against everyone who ends up using the software.

I think that the idea of software bugs (vulnerabilities) as a product is a scary concept and a bad precedent for overall security

That was my first impression as well; however, after giving it some thought, I wish there were more vulnerability-trading companies. The point is that currently this business is reserved for government-sponsored customers:

According to three sources familiar with Endgame’s business, nearly all of its customers are U.S. government agencies.

Expanding this business to the private sector might have positive effects. Corporations could buy the "zero days" in their own products in order to fix them. To some extent such outsourcing already exists in the form of pen-testing companies, but it seems like a different league.

> Expanding this business to the private sector might have positive effects. Corporations could buy the "zero days" in their own products in order to fix them.

I should point out that some companies are already getting vulnerabilities reported to them for free (in the best case, for the price of a symbolic bounty program) and fail to fix them. See http://reverse.put.as/2014/10/31/patching-what-apple-doesnt-... for one recent example that comes to mind. Giving them the opportunity to pay market prices for the vulnerabilities is not going to improve the situation.

Everything else remaining identical, the only thing the emergence of a free market for vulnerabilities will make efficient is the exploitation of victims who aren't informed of the vulnerabilities and don't have the capacity to fix them.

The only real fix to the current situation starts with legislation that aligns the interests of the companies that rely on vulnerable code and those of their users. Forbidding the trading of vulnerabilities won't help, and encouraging it won't help either. Recent legislation in the EU forcing companies to disclose security breaches they are aware of is a first, baby step in the right direction. There are many essays by Bruce Schneier on this topic, here is one picked at random: https://www.schneier.com/blog/archives/2004/11/computer_secu...

The fundamental problem with trying to retard vulnerability markets is that vulnerability research is expensive. There aren't that many people who can do it(†).

You could conceivably ban overt vulnerability sales. But that doesn't actually fix anything, because the firms that want to buy bugs now will simply retain researchers directly instead of licensing their work product.

You could even conceivably ban commercial vulnerability research. ACLU's vulnerability markets guy has come close to suggesting that in the past. But that also won't fix anything. Firms that are using vulnerabilities illegally would continue to (now illegally) fund vulnerability research --- an activity that is far harder to monitor than actual hacking, which is already hard to monitor. The net effect would be to clear the industry of everyone doing "benevolent" research --- no more Metasploit, but lots more VUPEN(††)-type stuff.

† Where "it" means the kind of vulnerability research that commands 5-6 figure premiums, virtually none of which is web app security work.

†† Full disclosure: I know fuck-all about VUPEN and am just using the name as a shorthand for "the vulnerability boogeyman".

As another, slightly less draconian approach than actually trying to ban vulnerability research: what about mandating a market where researchers have to sell to the developers and developers are obliged to buy from the researcher, with a middle party setting the prices?

That might serve to provide incentives to the vendors who aren't already working actively on their security to do so, and also reduce (although obv. it wouldn't eliminate) the number of vulns going straight to offensive products.

So I have two options: I can sell work product in a market controlled by "the FCC of vulnerabilities" with capped upside, or I can work directly as a 1099 contractor at an enormous daily rate for firms that will pay to get vulnerabilities before vendors do.

Why would I take the market route?

In this case what are the firms who pay these rates doing with the information?

Defensive work (e.g. IPS vendors): well, once they've got their early-day protection, they can just sell the vuln on to the vendor.

Offensive (e.g. "cyberweapons", ugh, I hate that term): well, that's the point I'm making. That whole industry is bad for defensive security, as it involves keeping vulns secret for as long as possible so that they can be used.

Governments have to make a choice about whether that's an industry they want to encourage, be neutral towards, or discourage.

But given this line of thinking is one you'd disagree with, what option for addressing the problem do you prefer?

What difference does it make what they do with it? Stipulate for now that they use them to hack Russian and Chinese computers. That is, stipulate that there is a good public policy reason to regulate that kind of work. How would you accomplish that regulation? What, exactly, would you ban?

If you can't articulate a reasonable and effective regulation that would control vulnerability research, regulation will do more harm than good: it will wipe out beneficial research and drive talent towards malicious research.

It's not on me to come up with a way to "address" the "problem". Doing nothing seems like a more credible response than trying to outlaw specific kinds of computer programming.

Indeed; however, in the same way that it's not up to you to "address" the "problem", nor is it up to me :)

Doing nothing would seem like a losing response given the current swing of events but I'll defer to your greater experience.

BTW I'd respond to the other threads but HN seems to object to deep threads.

On "flaws fixed rather than concealed": sure, in a perfect world they are, but limited resources == prioritization, and some systems and packages inevitably get left behind. Insiders know which those are.

What do I, as a user and customer of software, care why companies harbor known security flaws (that, after all, is what you're talking about)? Hope is not a strategy. Those flaws are going to be discovered whether or not insiders leak them.

I'm not sure where I said hope was a strategy; I said insiders might have information that's of value to attackers (unless I got my threads mixed up here).

The solution is obvious: all software development needs to be classified as a utility and regulated under Title II.

It's a systemic problem. One option is to decriminalize cracking, and then to associate a fine with creating vulnerabilities. It's straightforward to find the original author of some code. $500 maximum fine, or something on that level.
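The "find the original author" step, at least, is mechanical in any version-controlled codebase. A minimal sketch with git (the throwaway repo, file name, and author here are all made up for illustration):

```shell
# Demo: attributing code to its author with git, in a throwaway repo.
repo=$(mktemp -d) && cd "$repo" && git init -q

printf 'int main(void) { return 0; }\n' > parser.c
git add parser.c
git -c user.name=Alice -c user.email=alice@example.com commit -qm "add parser"

# Who last touched each line of the file:
git blame parser.c

# "Pickaxe" search: every commit that added or removed a given string,
# useful for tracing where a suspect construct entered the tree.
git log -S 'main' --oneline -- parser.c
```

Of course, blame only shows who committed a line, not whether the flaw was deliberate, which is where the enforcement part gets hard.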

Changing the system (obviously) has a bunch of other obvious (and a bunch of unforeseeable) consequences. However, code would get a whole lot tighter.

It would also help if we got OSes that do not give applications blanket rights to read and write everywhere and to use as much processing power as they like. Abandoning the shared security model would help a lot.

I'm having a hard time coming up with an example of a critically important vulnerability that relied on permissions models. Arbitrary code execution is usually game-over no matter what privilege level you have.

The exception to this is sandboxing, which is effective (but unreliable) in limited, specific scenarios but not at all effective for the general problem of controlling real, full-featured user programs. Compare the Chrome content sandboxes to the Apple application sandbox.

> Corporations could buy the "zero days" in their own products in order to fix them.

The implicit threat that if you don't pay enough to secure exclusive access to the zero-day, it will be sold to others who may use it against you - that seems a lot like blackmail. It's different with bug bounty programs, which are not explicitly set up to compete with the price a vulnerability would get on the black market, but rather to incentivize civic-minded responsible disclosure.

Assuming the exploitation of a vulnerability is considered illegal, selling the vulnerability to someone who you have reason to suspect will exploit it should also be illegal, surely?

Interesting idea; to an extent this already exists with the bug bounty programmes that a number of companies have in place.

The risk is where either the company doesn't have a programme of that kind, or the value of the bug is substantially higher to the offensive industry than to the product company...

> And then you have open source software which is heavily used in a lot of commercial products that might get attacked. With this kind of thing there's a big incentive not to report bugs to the project but to sell them to a company who has no incentive to see them fixed...

Then a lot of people need to revisit their own thoughts on exactly why they use open source, and the consequences of what happens if it goes away.

> For example, will we see these companies hiring ex-developers and testers from software product companies, as they might have inside knowledge of where products are weak?

That might happen, but it depends on intent. Products may be realized to be weak afterwards, due to a changing software landscape, so the company hiring them is a good thing - the software gets fixed in some way, and a need was fulfilled.

Then the question becomes: how likely do you think it is that someone purposefully creates a flawed product in order to make a gain later, after the project has been completed, released; after they have left the job; after they have been hired by these testing companies? It sounds as likely to me as people purposefully putting back-doors into software for future self interest. There exists another outlet for unethical intent, but in my opinion, the problem begins with ethics to begin with.

Software is too complex to perfect. There will always be bugs. Regulation of industry and regulation of the regulatory bodies is additionally, very complex. The idea of regulating where knowledge is allowed to flow on top of that horrifies me: i.e., whether a developer is allowed to work at one of these companies eventually, in judgment of prior work.

I wish people had greater incentive to maintain a standardization of ethics, but this is all theoretical to begin with, at least from my direct observations. There is nothing to judge unless it happens, and then the best one can do is act.

It's not a giant leap from "not reporting a bug" to "deliberately inserting a bug" and then selling details of it.

This has already been going on, publicly, for many years, and is the value proposition of several companies.

Take a look at TippingPoint ZDI and VUPEN. All they do is find/buy vulnerabilities, privately weaponize (or provide mitigations for) them, and sell the new product to companies and governments.

"A survey of 181 attendees at the 2012 Black Hat USA conference in Las Vegas found that 36 percent of “information security professionals” said they’d engaged in retaliatory hack-backs."

What? Black Hat attendance is in the high thousands. A plurality of those attending are IT professionals --- people who wouldn't have the technical capability to take over a botnet even if they wanted to. Even if you broadened the definition of "hacking back", as some people do, to include recon activities like port scans, the numbers still wouldn't add up. No part of this anecdote makes sense.

For my part (I'm a security researcher by background, though that's not what I'm doing now, and I've presented at Black Hat numerous times): not only have I never met a professional who claimed to have "hacked back" anything, but I've never even met one who didn't think that was a crazy idea.

There is a difference between major organized efforts to bring down botnets and "hackback" the way the term gets associated with Endgame.

> I never met a professional who claimed to have "hacked back" anything, but I've never even met one who didn't think that was a crazy idea.

Coming from the amateur side of things, my observations mirror your observations. The only people who think it's a good idea, from what I've seen, are script kiddies-- not professionals.

I interviewed with Endgame recently. Their arrogance was striking.

More topically, there's a basic problem in security - vulnerabilities have value. They have more value to people who want to use them than to people who want to close them. Unless this shifts, the current situation is only going to get worse.

Making it illegal isn't going to work. There is already a functional black market. Removing the white market will just drive more groups to the black market.

There's no easy answer here. Yesteryear's EFNet junkies have been turned into today's mercenaries and weapon designers. Cyberspace is valuable, and controlling it even more so. It's a dangerous time to have interesting information.

Worth adding: even basic software security engineering services are, compared to other services, spectacularly expensive. In ten years of software security consulting for big companies, I met with very few who didn't get sticker shock from the cost of even a basic web app assessment.

Supply/demand is a motherfucker. The solution is probably going to have to focus on the supply side.

A lot of basic stuff can be automated, but that only goes so far. Security engineering is becoming its own distinct and highly specialized discipline, and the supply is probably always going to be limited.

I think a better answer is for companies to take security more seriously from the beginning. This means being willing to invest in developer training and in-house infosec. The expense of outside expertise should be ample reason to bring that inside.

The company profiled in the article ("ex-NSA") isn't exactly the first player in its space - e.g. VUPEN is a pretty established company (http://www.vupen.com/english/services/lea-index.php), and there have been earlier articles on this market (for instance http://www.forbes.com/sites/andygreenberg/2012/03/23/shoppin... is pretty readable).

This may be a very good book, worth reading, but it's not really news.

We need to allow corporations to fight back? Why stop at cyberspace, I think we should let multinational corporations field their own private armies as well. What could possibly go wrong? It's not like they'd ever abuse that power!

I just want to make sure I have this right.

The government hires these guys and then keeps the vulnerabilities in our software and our businesses' software secret?

They then use this to launch attacks and record our communications and actions?

I, of course, would have an opinion on this, I just want to make sure I've got this correct.

Edit: punctuation.

That is not a very good summary of Endgame or VUPEN.

Cool, what's a better summary?

Edit: Thank you for the feedback.

So asking questions is also grounds for being downvoted? I am confused ...

That doesn't matter; people will upvote or downvote. It was, indeed, a question loaded with my preexisting bias, but also a genuine question.

VUPEN and Endgame are companies that employ people to do vulnerability research and develop exploits.

They sell a subscription service that provides access to a catalogue of their exploits to Government groups (law enforcement and intelligence agencies mostly). Depending on the company the list of acceptable clients will vary, some of these firms sell only to the federal agencies of 5-Eyes nations, others will sell more broadly than that, some may only sell to ${Local SIGINT agency}.

Government groups might do any number of things with these exploits but typically law enforcement will use them to execute warrants to help in their surveillance of suspects. Intelligence agencies may use them in the same way (pursuant to their authorizations). Other customers might somehow try and defend friendly networks with the information but this doesn't work.

I'm not sure what in particular tptacek objected to but my guess is characterizing them as part of the Government. The Government isn't keeping any secrets here (except for the ones they're presumably contractually obligated to keep by Endgame / VUPEN / etc) and the vulnerabilities have been discovered before the Government has contracted with the supplier.

Sounds like part of my summary's issue was grammar/word choice. I definitely understand the problem now and will be more careful.

New summary: The governments (plural) hire these companies (as opposed to "guys") and may, but don't always, keep these software vulnerabilities secret in order to collect information on people/targets. This is sometimes done with a warrant, and at other times without one.

I don't see how the governments and these companies aren't keeping the vulnerabilities in the software we rely on secret, I need more convincing.

I really appreciate you taking the time to flesh it out with me, even though it's unlikely we'd end up agreeing (just from hints in the tone we're using), I'm glad it won't just be over poor writing on my part. Thanks!

Yes, the vulnerabilities are kept secret. The value of an exploit decreases significantly after the vulnerability is patched, and they are in the business of selling high value exploits. If they couldn't sell the exploits, they wouldn't be finding the vulnerabilities either. Banning exploit sales won't suddenly result in VUPEN turning into a vuln finding charity.

For whatever it's worth, zero-day exploits are rare in practice. The vast majority of exploited systems are taken down with public vulns because they weren't patched in time. Very few organizations are interested in specific targets; carpet bombing the internet and searching for unpatched shellshock/drupal/etc installations will collect enough low-hanging fruit.
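Shellshock is a good illustration of why carpet bombing scales: detecting the bug is trivial. For reference, this is the widely published local check for CVE-2014-6271 (nothing here is specific to the firms discussed in the article):

```shell
# On a vulnerable bash, the crafted environment variable is parsed as a
# function definition and the trailing payload runs, printing "vulnerable".
# A patched bash ignores the payload and prints only "patched".
env x='() { :;}; echo vulnerable' bash -c 'echo patched'
```

Remote scanners did essentially the same thing against CGI endpoints, which is why unpatched hosts were swept up en masse within days of disclosure.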

If the NSA buys an exploit in Windows, does the NSA's contract preclude VUPEN from selling that exploit to Microsoft?

Presumably. If you're paying for an unpatched vuln, you don't want to get a patched vuln.

Yes: the VUPENs of the world aren't exfiltrating vulnerabilities from NSA. These aren't government secrets being leaked to private sector companies. There's probably more wrong with the summary than that, but I stopped there.

Guys was referring to the companies. "Ours" meaning private businesses and individuals.

I'm sorry if my comment implied "vulnerabilities from the NSA"; these are vulnerabilities in everything else, being sold to the NSA, then kept secret by the NSA from the people they are supposed to be protecting.

If I accidentally left a door open on my house and a police officer that knew who I was saw my house left wide open, I'd hope a) he'd let me know or b) not do anything. I hope he wouldn't just go inside and take pictures of my private affairs for using against me later when it's convenient.

This is as opposed to him seeing the door with indications of a break-in; that'd be different, but again, I'd hope he doesn't go in.

This isn't out of fear of illegal activities being discovered (I have none, I'm boring); instead it's from the fear of someone using their position of power to take advantage of me for personal or professional gain, or for the protection/enforcement of political ideologies.

Allowing this is like allowing the development of engineered bioweapons and an open market selling them to the highest bidder.

A couple of ways that vulnerability research is nothing like bioweapon engineering:

1. Vulnerabilities exist in software, whether a researcher uncovers them or not. America's adversaries will continue to identify and exploit these vulnerabilities. So legislation that keeps this information out of the hands of our defense/intelligence community would really only serve to weaken us relative to our enemies, rather than making us safer.

2. Die Hard movies aside, software vulnerabilities are far less likely to lead to apocalyptic outcomes than nuclear or biological weapons. Maybe a better analogy would be the open market manufacturing of surveillance technologies, like cameras and radios. Where's the outrage about telephoto lenses that camera companies make that can be used to monitor our enemies or to take pictures of your daughter in your backyard swimming pool from a half-mile away?

> America's adversaries will continue to identify and exploit these vulnerabilities. So legislation that keeps this information out of the hands of our defense/intelligence community would really only serve to weaken us relative to our enemies, rather than making us safer.

Encouraging freelancers to find vulns and sell them to the highest bidder also makes us less safe. It also encourages vulns to be created in projects for the purpose of later selling them.

Secondly, the American point of view on this is colored by the fact that the US has never been the target of a cyberweapon with the power of Stuxnet, designed to cripple a large and critical military or industrial system.

And you would stop allowing it how? Serious question. I commented upthread about that problem.

How do we not have freelance bioweapons developers? Would it be any more difficult to prohibit the development and auctioning of cyberweapons?

I don't mean to be evasive by answering your question with a question, but freelance development of weapons that can cause widespread damage seems to be controllable. Why not quash the development of sold-to-the-highest-bidder cyberweapons by the same means?

> How do we not have freelance bioweapons developers? Would it be any more difficult to prohibit the development and auctioning of cyberweapons?

First of all, we do, in the Middle East. But bioweaponry development is orders of magnitude more difficult: when you accidentally release your bioweapon into your own shed you die and it may be detected from abroad, when you accidentally release your cyberweapon into your own LAN you reimage machines and no one from outside noticed unless you're incredibly stupid.

Likewise, bioweaponry distribution is orders of magnitude more difficult: if you don't maintain the bioweapon properly the weapon doesn't work, if you don't climate-control it properly the weapon doesn't work, if you don't weaponize and launch it properly the weapon doesn't work. Cyberweapons can be copied around on cards no bigger than my thumbnail and maintained practically forever.

These problems go on and on. You're literally talking about controlling information and communications. NSA has enough trouble trying to read communications (and we're all working as hard as we can to close that ability), let alone to have government control that flow of communications.

Your take on this is colored by the assumption that the US has a permanent lead in all these types of weapons. Aspects of handling bio-weapons can be more capital equipment intensive than making cyber-weapons, but I bet it cost tens of millions in hardware to test Stuxnet, where testing a bioweapon could be very cheap. Virus particles and bacteria can be very durable in plain glass vials and can be "shelf stable." At first-tier player level, I doubt the costs differ by orders of magnitude.

Yes, "cyberweapons" (I hate that term but know why people use it) are much harder to control than bioweapons.

How would you control them? What would that policy look like? Want to take a stab at one? I'm interested in what people think should be done about this problem.

The answer is obviously complex and difficult, but I think it's fair to say the current approach isn't working well.

So you need something new to address the problem. I think that the answer is government legislation.

To be clear, I think that it's a terrible answer, in fact everything I'm about to write has glaring problems, I just can't come up with a better alternative.

So software suppliers have to be held responsible for the security of the products they supply, it would obviously be a long and arduous process, and would likely have serious repercussions on the industry as it stands today.

The flip side is that you try to regulate the market for vulns to reduce their use in malware etc. You could do this by requiring companies to buy discovered vulns and requiring researchers to sell to the developers.

One of the many major issues with this approach: what do you do about open-source software? There's no money, so liability has no meaning. Here, if it's constituted into a commercial product (think of all the lovely "appliances" out there), the liability passes to the guy making the money. Where an end-user company directly uses open source, they get the liability to go with it.

Alongside this, you start slowly ramping up the security compliance requirements for companies and organisations processing transactions or personal data on the Internet.

I'd compare this to security becoming like "health & safety". I don't think most companies really want to spend money on it, and I think a huge amount of money is wasted on unneeded process, but compared to the alternative it has been seen as the best approach.

What does the policy that requires me to sell to vendors actually look like? I keep asking because nobody is really answering.

O.K., so if I find an RCE in Windows 10, I have to sell it to Microsoft. But if I don't like the price I'm going to get from Microsoft, why wouldn't I just not independently find RCEs in Windows 10, and instead just sell engineering services to non-Microsoft companies who want to find and exploit those vulnerabilities themselves?

Obviously, you'd want to ban that kind of service. But how would you do that? What does that ban look like? A ban on reverse engineering? How do you differentiate the kind of reversing that Tridge did to build Samba from the kind of reversing HD Moore's team does to build Metasploit? Also: you want to ban Metasploit?

Such a ban would presumably put Veracode out of business as well, since such tools could easily be weaponized to find vulns in other people's software.

So it wouldn't be ALL bad. :P


Well, I'm not a legislator, and if you're looking to pick holes, trust me, you'd be able to :)

The question is: is that the best approach and, if not, what is? Once an approach is agreed upon, the inevitable x years of wrangling over details could occur.

You are crazy if you think that legislators are going to do a better job of designing a policy to regulate vulnerability researchers than technologists will. Whatever the policy would end up being, it will be asymptotically as effective and reasonable as whatever we talk about here.

If a bunch of technologists (including vuln researchers) can't define a reasonable policy, I think it's a pretty safe bet that there's no reasonable policy to be had.

Well I did say at the top of this thread that the proposal had glaring problems :) I was interested to hear if other people had alternate suggestions which would address the issue.

Sure. "It's not an issue." Problem solved.

