Hacker News

There is no universe in which the CVE database is going to be reliable as a source of analytical data about vulnerabilities. It exists to provide a common vocabulary in discussions about specific vulnerabilities, and that is literally all it can do (& even then...). It is gamed six ways from Sunday by practitioners, and nothing is going to stop that from happening, because the project isn't staffed adequately to meaningfully adjudicate, let alone analyze, all the vulnerabilities that get filed, and nobody in the industry is remotely interested in funding such a body.

So, however valid these complaints may be, they fundamentally misconstrue the role of NVD. They're taking NVD artifacts far too seriously. I'm sure that's a reaction to incompetents in the security industry also misconstruing NVD, but the correct response to that is to dunk on those incompetents, not to attempt to hold NVD to an impossible standard.

Meanwhile, the CVSS is bad? You don't say. On this point, Stenberg's right to make noise: there is a broad (if quite shallow) belief that CVSS scores have some meaning, and they do not: they are a Ouija board that reflects the interests of whoever calculated the score, and it's easy to show pairs of sev:lo/sev:hi vulnerabilities that illustrate exactly how ridiculous the resulting scores are.

It would be better if the NVD CVE database didn't include CVSS scores; they don't work, they're unscientific, and including them holds NVD to yet another standard it can't possibly meet, which makes a lot of this mess NVD's own fault.




The problem is that the CVE score does work in most cases. A lot of organizations still prioritize updates based on their CVE score, and don't bother updating unless it meets a certain threshold. If it doesn't meet that threshold, they wait until their monthly patching cycle, or never update at all.

Until that culture is fixed/adjusted, having a scoring mechanism simple enough for manager/executive types to understand risk without any technical knowledge is important. It's way easier to argue for an emergency patch/downtime that could cost money when there is a big scary 9 associated with it. So if the scoring is off and not accurately representing risk, let's work to improve those scores rather than getting rid of them.

Plus, there is a reason environmental scores exist in the CVSS mechanism: they allow folks to adjust the score to better fit their environment and specifications. I'd personally rather see more CVEs appear and be tracked quickly for easier referencing and discussion, with a slightly adjusted formula to better reflect severity.
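For context on how that adjustment works: the environmental score is mostly a re-weighting of the base formula. Here's a simplified sketch of the CVSS v3.1 arithmetic (scope-unchanged case only, with temporal metrics and modified exploitability metrics left at "Not Defined", i.e. 1.0). Constants are the weights from the v3.1 spec.

```python
# CVSS v3.1 metric weights (scope-unchanged case). Values from the FIRST spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality/Integrity/Availability
REQ = {"H": 1.5, "M": 1.0, "L": 0.5}               # CR/IR/AR environmental requirements

def roundup(x):
    # Spec-defined rounding: smallest 1-decimal value >= x, avoiding float error.
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

def environmental_score(av, ac, pr, ui, c, i, a, cr, ir, ar):
    # Same shape as the base formula, but each impact term is scaled by a
    # requirement weight and the combined sub-score is capped at 0.915.
    miss = min(1 - (1 - REQ[cr] * CIA[c])
                 * (1 - REQ[ir] * CIA[i])
                 * (1 - REQ[ar] * CIA[a]), 0.915)
    impact = 6.42 * miss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -- the classic 9.8
print(base_score("N", "L", "N", "N", "H", "H", "H"))                          # 9.8
# Same vector, but confidentiality rated a Low requirement for this deployment
print(environmental_score("N", "L", "N", "N", "H", "H", "H", "L", "M", "M"))  # 9.5
```

So a deployment where confidentiality barely matters knocks the canonical 9.8 down to 9.5: the knob exists, but it moves the number far less than the judgment calls that went into the base vector in the first place.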


> The problem is that the CVE score does work in most cases. A lot of organizations still prioritize updates based on their CVE score, and don't bother updating unless it meets a certain threshold. If it doesn't meet that threshold, they wait until their monthly patching cycle, or never update at all.

That doesn't mean it works; that could just mean organizations either a) don't understand the actual severity of their own vulnerabilities, and prioritize fixes based on an incorrect metric; or b) recognize that the CVE score is garbage, but don't want to appear to their users as ignoring or de-prioritizing supposedly-severe issues.

Neither one of these options is good!


They do not work in most cases. They provide psychological comfort to enterprise IT teams, but so would a literal Ouija board, as long as you hid it from the people relying on it. Obviously, the "environmental score" component of a CVSS is an admission that the whole system is intellectually bankrupt.

I'd be interested if you could find a serious vulnerability researcher (say, anyone who has given a Black Hat talk or submitted a Usenix WOOT paper) who'd be willing to defend CVSS. My perception as an (erstwhile) practitioner is that CVSS is sort of an industrywide joke.



