
It's not at all true that researchers on the whole do what they do to prevent exploits. Many of the best researchers do the opposite!


Imagine if someone in infectious disease research said "It's not at all true that researchers on the whole do what they do to prevent disease. Many of the best researchers do the opposite!"

It would be interesting if monetizing the next flu bug worked the way that the market for vulns works.


Huh? Infectious disease researchers are the ones creating the annual flu bugs? That's a theory I hadn't heard before.


Infectious disease researchers are finding microbes, just like security researchers are finding vulns.

Now let's try putting words in your mouth: You would be happy with disease microbes being sold to the highest bidder and weaponized, and turned against the population, just as vulns are when security researchers sell them to spy agencies and law enforcement. Is that what you are saying? Are those acceptable professional ethics for... biologists? Anyone?


If it was up to me, we'd come pretty close to banning the manufacture of firearms and ammunition, so I'm not the right person to ask about this. But, once again:

* Vulnerability researchers do not as a rule disclose to vendors. Some do, some don't.

* Sponsoring the discovery of a vulnerability so you can write an exploit for it doesn't prevent others from finding that vulnerability and patching it. If anything, sponsoring vulnerability discovery for exploit development increases the likelihood that the bug will be patched.

* When I ran a security consultancy, we had a "no selling vulnerabilities" rule. Published, on our website. I was comfortable with that, because "my company, my rules". I am a lot less comfortable imposing my own morals on other people who don't have a contractual agreement with me.

* It is difficult to come up with an argument that vendors should get disclosure of vulnerabilities that doesn't involve vendors entitling themselves to the (often very expensive) work of vulnerability researchers. It's especially galling to see companies that don't spend any real money on software security expressing that sentiment.

And, of course: software vulnerabilities aren't infectious disease agents. The revulsion we have for weaponizing infectious diseases comes from the concern that they will spread unchecked. But that's not how software vulnerabilities work.


The question is whether selling vulns, weaponizing them, or stockpiling weaponized vulns is acceptable professional ethics. Some people think that the government having a stockpile of zero-days is a good thing. Some even think that vulnerable endpoints are a good compromise outcome, so that encryption doesn't turn into intellectual contraband.

But it would be better, for everyone, for it to be considered unethical and unprofessional to add to the stockpile and actively keep endpoint devices vulnerable. I think stockpiles of vulns should be disclosed, even through hacks or leaks, like the Hacking Team leaks. Hence the analogy to biologists secretly auctioning off their discoveries to be weaponized. It's analogous enough: the practice of stockpiling vulns for the purpose of spying leaves everyone with less privacy and security, at the mercy of the unaccountable and the outright evil. It creates perverse incentives for deeply unethical behavior. It poisons the software and hardware industries globally. If vulnerability stockpiles were unilaterally disclosed, it would be a large net benefit to the common technology user.

Also, rewarding researchers for disclosure is fine. There are open, transparent, and ethical ways to do that, like published bug bounties followed by timely public disclosure.

You might have good intentions and high ethics, but industry norms have to be designed for people like Hacking Team.


Your worldview is that because there are bad actors like Hacking Team, anyone who does vulnerability research is obligated to disclose their findings to vendors?

No. Vulnerabilities exist because vendors ship bad code, not because researchers read that bad code. I refuse to sign on to an "ethic" that entitles negligent vendors to the work product of researchers.

You do the work, you choose what to do with the vulnerabilities. There are packages --- Cryptocat is a great example --- where I've found grave vulnerabilities, disclosed that I found them, but refused to divulge details. I would personally never sell a vulnerability; I think vulnerability markets are immoral. But I don't get to impose that morality on others. Would that I could! I think Cryptocat is immoral, too! But I have to live and work in a world where not everyone agrees with me.

The one common denominator we can all share is "nobody is entitled to appropriate my work from me without my consent".


I don't know Cryptocat or its authors, so I have no idea why you consider them immoral. What's the story?

Obligations are a two-way street, and good ethics deserve support. If you have the means to reward disclosure of a vuln, you should announce a bug bounty.

Professions have ethical standards. Some are stronger than others. They are meant to impose a basic level of morality. In the real world, that never happens perfectly. But some of them definitely imply disclosing one's work without extracting every last penny from it, such as the expectation that abandoned clinical trials be disclosed.


I feel about Cryptocat the way you would probably feel about someone who set up an inner-city neurosurgery clinic after reading a bunch of Usenet HOWTO posts.

I think there are two separable arguments here. We may disagree on both of them. But:

* The first argument is whether it's OK for researchers to stockpile vulnerabilities --- to learn things about software and then not share them. This might seem like an artificial distinction, but there are lots of good researchers who back-pocket great, important vulnerabilities. They don't exploit them, they don't sell them, they just find them, make some notes, and move on.

* The second argument is whether it's OK for anyone to weaponize vulnerabilities. If you believe that the USG has an obligation to disclose vulnerabilities, you're almost (but not quite) required to believe they can't do exploit development work --- for any reason. Disclosing vulnerabilities to vendors kills exploits.

I'm OK with researchers stockpiling. I'm OK with the USG weaponizing. I'm OK with the latter in the same sense as I'm OK with them carrying firearms or breaking down doors to serve warrants or freezing bank accounts. Obviously, I'm not OK when the USG abuses those powers.


In the abstract it seems OK for researchers to simply sit on vulns they have found, but is that what really happens? Why do that? Do they get sold eventually? Are there a lot of cases where the developer is hostile to fixing them? How OK this is depends on the eventual disposition.

The other one seems clearer: "Disclosing vulnerabilities to vendors kills exploits." Well, yes. The problem is that, in the present situation, endpoint security is terrible. It seems unlikely that our government has made endpoint security breakable only by itself, and not by the Chinese or any other nation, organized crime group, or other non-state actor with some software smarts. It may take some catastrophic infrastructure penetration or super-Snowden leak to show why this is unwise.


Yes. Vendors are usually hostile to researchers, and vendors generally do feel entitled to researcher work-product. Their feeling is, it's their code, so they're entitled to know about problems with it.



