I'm uncomfortable with vulnerability markets for other reasons. But anyway, you write:
For example, will we see these companies hiring ex-developers and testers from software product companies, as they might have inside knowledge of where products are weak?
Two things.
First, you're not clear on why this would be a bad thing. The flaws are there whether insiders out them or not. The implication in your comment is that we'd be better off with those flaws kept secret. Obviously, we'd all be happier if the vendor outed their own flaws, or if a non-"mercenary" researcher outed them for public consumption. But even private vulnerability sales have the effect of eventually burning the bug.
Second, it's a little naive to think that most flaws are known only to insiders. In fact, the advantage insiders have in getting full access to repositories is probably dwarfed by the advantage attackers have in committing entire careers to studying exploitable bugs. For most competent researchers, lack of source code is just a speed bump.
Realistically, in my experience every large company has "skeletons" that are primarily known to insiders, and of which outsiders have far less knowledge.

That might be a product known to have a less rigorous security regime than others, or perhaps a service that is considered "legacy" and no longer actively developed.
When offensive companies start hiring people to gain access to that information and use it against their prior employers, I think that's not great for overall security.
I didn't think I was implying that keeping flaws secret in the long term is desirable; I don't think it is.
Also, whilst I agree it's naive to suggest that only insiders know most flaws, I think it's also reasonable to suggest that insiders have information that would be useful to attackers, and that it could be tapped by hiring them.
As I said originally, that was just one example of where I think potential problems could arise from "vulnerabilities as a product", but I'd be interested to hear what you think the downsides to vulnerability markets are.
You seem to be describing companies that are institutionally concealing serious product flaws from their customers, and suggesting that overall security as a public policy goal is improved by a strategy of just hoping that reverse engineers won't notice.
That's not what I was intending to describe. I was suggesting that insiders have inside information that is sometimes relevant to attacking companies, such that hiring those insiders could be useful to attackers.

And I'm definitely not hoping that reverse engineers won't notice; I've been in security long enough to see all my pronouncements of "you know, someone could do x" (and more) come true...
I'm not sure that you've clarified anything with this comment. "Insider knowledge of information relevant to attacking software" is "insider knowledge of product flaws". Flaws need to be fixed, not concealed.
Institutional knowledge is about process as well as the software itself. Knowing the magic words and the right people can make social engineering, or evading countermeasures, much easier, even in the absence of an explicit software flaw.
Now we're playing Six Degrees of Kevin Bacon. We start out with moles inserting vulnerabilities. Then it's insiders who know about flaws. Then insiders who know about weak spots to look for flaws in. Now it's magic words to help with social engineering. At some point, these stop being important considerations for public policy.
Yes, but having a paid insider gives you a major advantage: you can now seed vulnerabilities and derive a predictable income stream from them.

What's scarier to me is that big money is involved here. What if you can't tempt a developer with money to insert vulnerabilities for you, and you start using a more heavy-handed approach (death threats, etc.)?

Also, by paying developers to insert vulnerabilities, you no longer need experts looking for vulnerabilities. Those experts are in short supply, so it might become a more viable path.
This is why I am uncomfortable with vulnerability markets...
First, I thought you were simply talking about insiders who had knowledge of targeted software. Here it seems like you're talking about moles being paid to insert new vulnerabilities.
But even then, I don't find this threat particularly credible. After all, what we're talking about here are W2 employees with social security numbers or immigration tracking committing galactically expensive torts against their employers and in all likelihood most of the Fortune 500, in addition to (in all probability) multiple felonies. How much money do you think Endgame can afford to pay these people to shoulder that risk? There's a reason this doesn't actually happen all the time.
I think the point is that it's extremely likely you'd end up in a situation where someone intentionally introduces a hard-to-detect flaw (there's a sketch of what that can look like after this list). If the flaw is then sold to a government agency:
1. The person who introduced the flaw will not suffer any consequences.
2. You've now set the precedent that it's fair game for people to intentionally leave holes in software and get paid to do so.
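For a concrete sense of what "hard-to-detect" means, consider the 2003 attempt to backdoor the Linux kernel via its CVS mirror: a single "=" disguised as a comparison inside an error check in sys_wait4(), caught only because the change didn't match the upstream BitKeeper history. Here's a minimal standalone sketch of the same trick; the function and variable names are invented for illustration:

    /* One-character seeded flaw, modeled on the 2003 Linux kernel
       backdoor attempt. Compiles and runs as-is. */
    #include <stdio.h>

    static int uid = 1000;  /* unprivileged caller */

    static int check_request(int options)
    {
        /* Reads like input validation, but "uid = 0" is an
           assignment, not a comparison. When the attacker passes
           the magic options value, uid is silently set to 0
           ("root"); the assignment evaluates false, so the
           "error" branch never fires. The extra parentheses also
           suppress the compiler's assignment-in-condition warning. */
        if ((options == 0x81) && (uid = 0))
            return -1;
        return 0;
    }

    int main(void)
    {
        check_request(0x81);             /* attacker-chosen input */
        printf("uid is now %d\n", uid);  /* prints 0 */
        return 0;
    }

A reviewer skimming that diff sees a new sanity check, not a privilege escalation.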
This seems very unlikely, since intentionally introducing product flaws is at the very least an incredibly damaging tort, not just against your employer but against everyone who ends up using the software.