
> vulnerability researchers justifying bad behaviour because they find a vulnerability in code

This is an extremely bad faith take that makes me irrationally angry to read.

He's not using bad code as a reason to engage in bad behavior; he's reacting to bad responses to responsible disclosure. Read the section under "Personal Considerations". It only took him two days to find the problem, but 22 days to get the developers to admit there's a vulnerability, even when shown PoCs.

Imagine finding a vulnerability, responsibly disclosing it, being told "meh, not an issue", responding with a PoC showing full code execution, and still being told "meh, not an issue".




> Imagine finding a vulnerability, responsibly disclosing it, being told "meh, not an issue", responding with a PoC showing full code execution, and still being told "meh, not an issue".

I would still want to be responsible. Having a bad experience doesn't entitle me to be irresponsible. And when the time is up and disclosure happens according to the timetable, I'd be the side that comes out looking better. So behaving as he did, and justifying it that way, is illogical.

I speculate that the reasons he gave may not be the whole story, since he would have looked better disclosing responsibly. But it's important to note that he doesn't blame poor code; thank you for the correction. I am only speculating about his reasons, and maybe in the future I shouldn't.


Disagreeing with someone's decisions is not a valid justification for misrepresenting their motives.

I agree with you, it's kinda shitty, but I get where he's coming from. It's incredibly frustrating to want to improve the security of the world, but when developers have too much ego and push back against claims of vulnerabilities in the face of proof, well...every hero either dies or lives long enough to become the villain.

I've experienced it first-hand at a previous job. I found a buffer overflow in some firmware, and engineering just said "Meh, at worst you'll just segfault the device, and the user can just reboot". The fix would have literally been a two-line buffer length test that returns a 400 Bad Request (it was an embedded web server written in C, with the vuln in an XML parsing library), but I had to take that bug, learn ARM assembly and return-oriented programming, and build a PoC before engineering decided to fix it.
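For context, here's a minimal sketch of the kind of length check meant above, in C. The handler, helpers, and buffer size are hypothetical and illustrative, not from the actual firmware:

    /* Hypothetical sketch of the two-line guard described above.
       handle_xml_post, send_status, parse_xml and MAX_XML_BODY are
       illustrative names, not from the real code base. */
    #include <stddef.h>
    #include <string.h>

    #define MAX_XML_BODY 4096  /* capacity of the fixed parse buffer */

    extern void send_status(int code, const char *reason);  /* assumed HTTP helper */
    extern void parse_xml(const char *body, size_t len);    /* assumed parser entry */

    void handle_xml_post(const char *body, size_t content_length)
    {
        /* The guard: reject anything that would overflow the parse buffer. */
        if (content_length > MAX_XML_BODY) {
            send_status(400, "Bad Request");
            return;
        }

        char buf[MAX_XML_BODY + 1];
        memcpy(buf, body, content_length);   /* now provably within bounds */
        buf[content_length] = '\0';
        parse_xml(buf, content_length);
    }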

I suppose I should be happy, though, as that learning experience was the cannon that shot me from just being a test engineer into getting into AppSec.


Yes, I can totally empathise with him too. I've behaved in emotional ways out of frustration with code (though thankfully not in a public way, held to certain standards of behaviour). Let's hope he can learn from it. It's hard to act professionally when you're acting alone, outside of and against the so-called "real professionals".

Ultimately it's about trust. Perhaps these organisations have become too large and uncaring, or maybe we have become too impatient and frustrated. I don't think anyone wants to see researchers skipping responsible disclosure, or companies interacting irresponsibly with external researchers who just want to help. It's easy to see this as a path from white hat to black hat.


I genuinely liked your opening statement (disagreeing...)

I am sorry to hear you had such a raw experience. Maybe you were dealing with pretty clueless engineers, since most do realize a buffer overflow should be treated as exploitable unless proven otherwise. I've had better experience arguing the cost of the fix -- it being low was incentive enough for engineering to fix it.

That said, I am worried evilsocket may not be taken seriously the next time he finds a vulnerability with a CVSS of 9.9. To some extent I am surprised by his argument about not knowing the CVSS scoring rubric. There may have been a language barrier at play as well, leading to some of his sentences coming across as more abrasive than intended.


Responsible disclosure has two forms: coordinated disclosure, where the vuln is disclosed to the vendor with a time limit for public disclosure, and full disclosure, where the vuln is disclosed to the public so they can take mitigation steps.

Irresponsible disclosure is selling the vuln to criminal groups or intelligence agencies.



