Even innocent-looking "low impact" issues can be used as stepping stones to something much worse.
Creative exploitation can make two or more low impact vulnerabilities into a very serious compromise. Some of those vulnerabilities might be in a completely different system.
In my experience, people submitting vulnerabilities to bug bounties tend to err in the other direction and overstate severity, since they have a financial incentive to do so.
You replied to a comment indicating that even minor bugs can be serious -- which they can -- but many researchers will find actually minor bugs (nginx version disclosure, existing account enumeration on signup page, etc) and claim they're critical.
There's a balance there, and it can be tough to find. In any event, my stance is that if there's a security bug, you should damn well fix it :)
A good point. There's definitely motivation to exaggerate the impact.
However, could there be a flip side to this as well?
People not reporting vulnerabilities they consider minor, thinking they'll get no financial gain. No money or fame, so why bother?
Some of those unreported issues could be very critical when connected with something else minor or perhaps even something that's not a bug at all.
Perhaps some bug bounty funds should be retained to award reported issues that later turn out to be (part of) a critical vulnerability?
Retroactively rewarding issues that became more critical later on should encourage reporting the maximum number of bugs, even those the discoverer considered insignificant.
As a result, everyone involved would be better off: more bugs would get reported, and their discoverers would be properly compensated.
> ... minor bugs (nginx version disclosure, existing account enumeration on signup page, etc) and claim they're critical.
Wouldn't you agree it's preferable they're reported regardless, no matter how minor they may be? For example, unintended version disclosure could very well make a targeted attack successful. Account enumeration could conceivably be used to attack the account holder in some other way, including social engineering attacks.
This applies to all parties: the ones who are vulnerable and the ones doing the discovering.
I totally agree with your statement about reporters inflating the impact of minor bugs. In fact, it's the arrogance they exhibit that drew me to OP's comment!
On e-commerce sites, they might add a credit card - thereby allowing the attacker to use it.
Fortunately, HN doesn't have these risks, but it's a good illustration of how CSRF works in the real world.
Believe it or not, there have been apps with serious flaws stemming from logout fixation --- but those flaws were notable because they weaponized logout CSRF. :)
E.g. you can have a SQL injection that's not a vulnerability because the input passes through a substr($_GET["id"], 1, 1).
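A rough Python analog of that PHP (the query and function names here are made up for illustration): the string interpolation is still a textbook injection pattern, but a one-character input can't carry a payload.

```python
def build_query(user_id: str) -> str:
    # Equivalent of substr($_GET["id"], 1, 1): keep a single character
    # starting at index 1.
    truncated = user_id[1:2]
    # Still string interpolation -- a latent injection bug -- but a
    # one-character value can't express anything like ' OR '1'='1.
    return f"SELECT * FROM users WHERE id = '{truncated}'"

print(build_query("42"))            # a benign input
print(build_query("x' OR '1'='1"))  # the payload collapses to one character
```

The latent bug is still worth fixing (parameterized queries), because a later refactor that drops the truncation would silently make it exploitable.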
Depending on the application, it could have significant implications. Imagine, for instance, if this were possible in an application, and after being switched to the attacker's account the victim was tricked into completing an OAuth flow with some third-party service.
In the years I've used uMatrix I can't remember a single case where sending a cookie with a 3rd party request was needed. Why do browsers do this still?
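For what it's worth, browsers have since grown an opt-out: the SameSite cookie attribute, and Chromium-based browsers now treat cookies without the attribute as SameSite=Lax, so ordinary cross-site POSTs no longer carry them. A server can also say so explicitly:

```http
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
```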
Your server must require the POST to include a valid CSRF token. Otherwise, the request results in an error and doesn't go through. Attacker pages have no way to read the CSRF token, so they can't forge the form.
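A minimal sketch of that synchronizer-token check in Python (the function names and session dict are illustrative, not any particular framework's API):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generated server-side, stored in the user's session, and embedded
    # in a hidden field of every form the server renders.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def check_csrf(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token")
    # Reject if the session has no token; compare in constant time.
    return expected is not None and hmac.compare_digest(expected, submitted)

session = {}
form_token = issue_csrf_token(session)
print(check_csrf(session, form_token))      # legitimate same-site POST
print(check_csrf(session, "forged-value"))  # cross-site forgery is rejected
```

The key property is that the attacker's page can trigger the POST but can never read the token out of the victim's form, so the check fails.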
A random CSRF token on a login form is easily defeated - the attacker just requests a valid one and uses it when submitting the CSRF form.
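As a sketch of why that fails, imagine a hypothetical server that accepts any token it has ever issued, instead of binding each token to the session that received it:

```python
import secrets

# Hypothetical broken scheme: one global pool of valid tokens, with no
# tie between a token and the session (cookie) it was issued to.
issued_tokens: set = set()

def issue_token() -> str:
    token = secrets.token_urlsafe(32)
    issued_tokens.add(token)
    return token

def check_token(submitted: str) -> bool:
    return submitted in issued_tokens

# The attacker requests the login page themselves, reads the token out of
# the HTML, and embeds it in their cross-site login form:
attackers_token = issue_token()
print(check_token(attackers_token))  # the forged login POST passes the check
```

The usual fix is to bind the token to the visitor's (pre-login) session cookie, so a token the attacker fetched is rejected when it arrives alongside the victim's cookie.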
I have to agree with the GP, blogging about an issue like this just seems tacky.
The reactions would undoubtedly be different if it were a practically exploitable CSRF, like something that allows you to change users' email addresses.
Whereas here, when it's HN with a CSRF issue, "eh, it would break some third-party clients if we patched this".
We fixed the reported vulnerability and have a fix for the remaining issue ready if it's needed. There's no "eh" here; it's a question of what the right tradeoff is.
And I don't see people calling out the "breaks third-party clients" justification for not rolling out the full fix.
A phrase like "calling out" assumes that it's obvious what we should do. It's not obvious; the parts that were obvious are done. Our goal is to do what's best for the community, not to avoid getting criticized on the internet.
If I were going to respond to you the way I feel HN would generally respond to a CSRF hole in a major non-HN site/service, I'd say something like "Well, that line shows you're as good at formal logic as you are at preventing/patching CSRF holes".
You know as well as I do that HN's getting light treatment from its users in this thread, compared to how security issues in other things typically get received. It's OK to admit that.