>Finally, note that Zoom effectively does not pay for bug bounties, so researchers should think twice about donating their expertise to a selfish for-profit corporation

I've read this a few times and am curious if this has really become the prevailing view about what security researchers are doing (i.e., uncompensated labor) when they notify vendors about security vulnerabilities.

The traditional view (which I think was widespread in the 90s or whatever) was that engineers who find vulnerabilities in products have a special responsibility to the public, and owe a duty to the people at risk: the users of the product (or whoever would be harmed if the vulnerability were exploited to malicious ends). Just as if you used your training as an engineer to discover that the Bay Bridge had a structural flaw and that drivers were at risk (or, in the case of Diane Hartley, that the new Citicorp Center had a design flaw and office workers were at risk).

This duty can be discharged a few ways, but often the most efficient way to help the people at risk is to educate the vendor and keep on their ass until they fix the problem in a free update. If the vendor pays you, fantastic, but you shouldn't accept payment that would prevent you from discharging your duty to the people actually harmed by the vulnerability's existence (e.g., if you take the vendor's money and it comes with an indefinite NDA, and they never fix the problem and the users remain at risk from bad actors forever, you have not behaved acceptably as an engineer).

This view probably emerged at a time when bug-finders mostly had salaried jobs and were privileged enough not to depend on payments from the same vendors they were annoying with information on their products' flaws.

A newer view (probably informed by bug bounties, etc., and also a broader community of people doing this stuff) seems to be "no more free bugs for software vendors" -- that researchers who find vulnerabilities in commercial products are producing knowledge that's of value to the vendor, and the vendor ought to compensate them for it. If the vendor doesn't want to do that, the researcher would basically just be doing uncompensated labor by handing it over, and is free to go sell the fruits of their discovery to somebody who does value their labor instead -- even if that means selling the bug to unknown counterparties at auction and signing a forever NDA not to tell anybody else.

The first view is mostly what we teach students in Stanford's undergrad computer-ethics course and what I think is consistent with the rest of the literature on engineering ethics (and celebrated examples like Diane Hartley and William LeMessurier, etc.), but I do think it seems to be out-of-step with the prevailing view among contemporary vuln-finders. I'd love to find some reading where this is carefully discussed that we could assign students.

I can't imagine selling bugs to the highest bidder ever becoming ethically acceptable. You can't pretend not to know that the high bidder is probably a cybercriminal. If you do this, your hat is clearly black.

Once upon a time, vulnerabilities were just nuisances and people could justify some gray-hat casuistry when the damage was just some sysadmin overtime to clean up. But now there are serious organized crime rings and rogue nation-states using vulnerabilities to steal and extort billions and ruin people's lives.

It's OK to choose not to work on products with no bug bounties, but if you do find a bug in one you must disclose it responsibly.

>you must disclose it responsibly.

While most people agree selling a vulnerability is immoral, there is much debate on whether "full disclosure" is ok, and whether "responsible disclosure" is a term anyone should ever say (some argue the correct term is "coordinated disclosure").


The first view meets some sort of ideal (I guess) but causes all sorts of free riding problems. In larger society these sorts of problems are solved through regulations. For example if someone identifies a structural vulnerability in a bridge, the agency in charge of the bridge has a legal obligation to take steps to fix it. That sort of regulation doesn't exist in software land.

The second view as you describe it (selling to the highest bidder) is clearly black hat, but it is completely ethical for a researcher to disclose a vulnerability to the public if the vendor doesn't fix it in a reasonable amount of time. So Project Zero and this disclosure are both fine. Yes, ordinary users may be harmed in the crossfire, but the vendor should be liable for damages.

Beyond just a prevailing "view", this duty to public safety is explicitly codified in the codes of ethics and licensing regulations of most professional engineering organizations. To act otherwise would be a) unethical and consequently b) grounds for loss of one's license to practice.

If only software development were actually an engineering profession...

I would say the 'first view' you've described is what the bulk of professionals in the information security industry would still espouse as the ideal.

In my opinion this second view you are observing is carried by a vocal minority of participants in bug bounty programs and would be good fodder for a computer-ethics course.

They’re donating their expertise because, yes, this research is extremely valuable and important, but the vendor should obviously be paying for it.

I feel like selling bugs to the highest bidder is ethically questionable at best, no matter how "new" your viewpoint is.

